Nikos Tzagkarakis

We built the closest to an A.I. Bayesian Brain, in Healthcare.

Updated: Feb 28, 2023




We are living in an incredible era in which we can witness the first steps of artificial intelligence in a variety of applications, from chatbots and self-driving cars to image recognition and much more. The reason this explosion happened is, essentially, that hardware technology was finally able to catch up with the needs of Machine Learning applications. Machine Learning (which I like to call “statistics on steroids”) is essentially a revolutionary way of running statistical equations on a machine, allowing it to make predictions for very specific questions.


Increasingly powerful hardware and greater access to data led to Deep Learning Neural Networks: essentially a mathematical/computational way to correlate large amounts of data through statistical networks which, if designed correctly, can achieve powerful correlations for very specific problems.


The hope for Deep Learning is that one day scientists can make it powerful enough to solve the correlational problem for any question in general. The downside of this technology is that it is focused on producing powerful correlations in an input/output architecture, with the agent not actually “understanding” what the information means. There is no underlying logic or causality behind a Deep Learning agent’s correlations, which makes such agents unprecedentedly powerful for perceptual applications, but not for logical inference.

This doesn’t just mean that Deep Learning agents do not share a human’s logic, but that their architecture will most likely never allow them to actually “experience” their knowledge (a huge debate, and not the point here), let alone that a Deep Learning architecture will always be a black box.


Even though in Deep Learning applications we have created incredibly powerful data centres gathering billions of data points, these agents are nowhere near the abilities of a human’s logical understanding of the world, and I personally bet that this has to do with the completely different ways a Deep Learning agent and a human model the world.


This is where Tzager steps in.


At Intoolab (the company of which I am founder and CEO) we chose very early on to bet on a completely different approach to creating an A.I. agent in healthcare, in order to help pharmaceutical companies, universities and CROs solve a wide range of problems, from biochemistry to drug discovery.


We decided to let others focus on the “perception problem” that correlational agents solve, while we create the first intuitively causal agent (intuitive, because causality unfolds as part of the computational relationships between concepts) that focuses on the “logic problem”.


Bayesian Inference


We designed Tzager, which is essentially a Bayesian Inference Python library that has successfully created its own general Bayesian Network for the field of Healthcare. What that means is that Tzager’s model of the healthcare world is a multidimensional network of concepts connected as conditions and beliefs in a fractal/pyramid manner. This allows Tzager to, for example, “naturally” understand the meaning of diseases in the same way we humans do: not just by predicting statistical correlations, but by experiencing the actual cause/effect connections between the concepts, allowing Tzager to understand not just what is being discussed or how it works… but also WHY.
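
To make the condition/belief idea a bit more concrete, here is a minimal sketch in plain Python of concepts wired together as conditional probabilities and queried by enumeration. It is purely illustrative: the variables, numbers and function names are mine, not Tzager’s internals.

```python
# A minimal, hypothetical sketch of the "conditions and beliefs" idea using
# plain Python -- none of these names are Tzager's actual API.
from itertools import product

# Each concept is a binary variable; each table maps a tuple of parent
# values (the "conditions") to P(concept is true | conditions).
network = {
    "infection":    {(): 0.05},                       # root belief
    "inflammation": {(True,): 0.9, (False,): 0.1},    # parent: infection
    "fever":        {(True,): 0.8, (False,): 0.05},   # parent: inflammation
}
parents = {"infection": [], "inflammation": ["infection"], "fever": ["inflammation"]}

def joint(assignment):
    """P(full assignment) as a product of the local condition/belief tables."""
    p = 1.0
    for var, cpt in network.items():
        cond = tuple(assignment[pa] for pa in parents[var])
        p_true = cpt[cond]
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

def query(target, evidence):
    """P(target = True | evidence) by brute-force enumeration."""
    hidden = [v for v in network if v not in evidence and v != target]
    num = den = 0.0
    for values in product([False, True], repeat=len(hidden) + 1):
        assignment = dict(evidence)
        assignment[target] = values[0]
        assignment.update(zip(hidden, values[1:]))
        p = joint(assignment)
        den += p
        if assignment[target]:
            num += p
    return num / den

# e.g. "how likely is inflammation, given that we observe fever?"
print(query("inflammation", {"fever": True}))
```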


So the agent experiences how things come to be in a chemical phenomenon by recognising the different Bayesian mechanisms (pre-existing condition/belief groups) that need to happen in order for the phenomenon to take place. These mechanisms allow the agent, for example, to understand the meaning of inflammation as a mechanism happening in specific parts of the body, a meaning that can then be transferred to any other part of the body.
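
As a hedged illustration of what such a reusable mechanism could look like, the toy snippet below instantiates the same condition/belief group at different body sites; the probabilities and names are invented for the example and are not Tzager’s representation.

```python
# A hypothetical illustration of a reusable "Bayesian mechanism": the same
# condition/belief group (here, inflammation) instantiated at different sites.

INFLAMMATION_MECHANISM = {
    # belief: (P(belief | condition present), P(belief | condition absent))
    "tissue_damage": (0.85, 0.10),
    "swelling":      (0.75, 0.05),
    "pain":          (0.70, 0.15),
}

def instantiate(mechanism, site, p_condition):
    """Transfer the abstract mechanism to a concrete body site."""
    beliefs = {}
    for belief, (p_if_present, p_if_absent) in mechanism.items():
        # Marginalise over whether the triggering condition is present.
        beliefs[f"{site}:{belief}"] = (
            p_condition * p_if_present + (1 - p_condition) * p_if_absent
        )
    return beliefs

# The same mechanism "transferred" to two different parts of the body.
print(instantiate(INFLAMMATION_MECHANISM, "knee_joint", p_condition=0.6))
print(instantiate(INFLAMMATION_MECHANISM, "gut_lining", p_condition=0.3))
```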

And this is where the whole power of such a framework lies: Tzager can simulate what will happen in the body without needing any previously existing data (other than Tzager’s existing Bayesian Networks).


This is because Tzager does not understand the model of the world as just labelled data, but as cause-and-effect relationships between things that exist in the world at a given moment. This allows the agent to recognise a disease, a therapy or a symptom that it has never seen before, based purely on the relationships of the data. So you can, potentially, create new scenarios for therapies and simulate them through Tzager’s existing intelligence.
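
A rough sketch of that idea: if an unseen condition is described only by its cause/effect relations, it can be matched against known mechanisms by relational overlap. The mechanisms and scores below are made up for illustration.

```python
# A hedged sketch of recognising an unseen disease from the *relationships*
# it participates in, rather than from a label it was trained on.

known_mechanisms = {
    "inflammatory": {"causes": {"swelling", "pain"}, "caused_by": {"infection", "autoimmunity"}},
    "neoplastic":   {"causes": {"mass", "weight_loss"}, "caused_by": {"mutation"}},
    "degenerative": {"causes": {"stiffness", "loss_of_function"}, "caused_by": {"aging", "wear"}},
}

def classify_by_relations(observed_causes, observed_effects):
    """Score each known mechanism by the overlap of its cause/effect relations."""
    scores = {}
    for name, rel in known_mechanisms.items():
        overlap = len(rel["causes"] & observed_effects) + len(rel["caused_by"] & observed_causes)
        total = len(rel["causes"]) + len(rel["caused_by"])
        scores[name] = overlap / total
    return max(scores, key=scores.get), scores

# A "never seen before" condition, described only by its relations:
best, scores = classify_by_relations(
    observed_causes={"infection"}, observed_effects={"swelling", "pain", "fever"}
)
print(best, scores)
```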


Intelligent Topology


One huge pain in the Bayesian Inference world is how you assign the topology (simply put, the cause-effect relationships between the different types of data), especially in a field as incredibly large as Healthcare and Drug Discovery. Well, precisely because Tzager has created its own understanding based on its existing Healthcare Bayesian Network, the agent automatically decides the topology of the different data, no matter what the input is: papers, samples, plain text and more.
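
As a sketch of what automatic topology assignment could look like (not Tzager’s actual algorithm), the snippet below orients candidate edges extracted from a new paper by checking which causal direction the existing network already finds plausible. The prior scores are invented.

```python
# Hypothetical: new variables extracted from a paper get wired into an
# existing network by scoring candidate cause -> effect directions.

existing_priors = {
    # P(A causes B) implied by the existing healthcare network (made up)
    ("smoking", "lung_inflammation"): 0.9,
    ("lung_inflammation", "smoking"): 0.05,
    ("lung_inflammation", "cough"): 0.85,
    ("cough", "lung_inflammation"): 0.1,
}

def orient_edges(new_pairs, priors, threshold=0.5):
    """Keep only the causal directions the existing network finds plausible."""
    topology = []
    for a, b in new_pairs:
        forward = priors.get((a, b), 0.0)
        backward = priors.get((b, a), 0.0)
        if max(forward, backward) < threshold:
            continue  # neither direction supported; leave unconnected
        topology.append((a, b) if forward >= backward else (b, a))
    return topology

# Pairs co-mentioned in a newly ingested paper (direction unknown):
print(orient_edges([("lung_inflammation", "smoking"), ("cough", "lung_inflammation")],
                   existing_priors))
```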


NLP to Causal Inference


What we very quickly realised was that we could also try to connect Tzager’s understanding of How the Body Works, How Diseases Work and How Therapies Work to the way we humans communicate these connections. Basically, we wanted Tzager not just to create its own Bayesian understanding, but to connect that understanding to how we humans process it.

The way this works is that Tzager does not just connect the different conditions/beliefs, but also connects them to the way we humans link them through phrases and verbs.
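
A minimal, hypothetical example of that mapping: a small verb lexicon that turns simple “X causes Y”-style sentences into weighted causal links. The lexicon, weights and regular expression are purely illustrative.

```python
# Hypothetical sketch of mapping the verbs/phrases humans use onto
# condition/belief links.
import re

VERB_TO_RELATION = {
    "causes":   ("cause", 0.9),
    "leads to": ("cause", 0.8),
    "triggers": ("cause", 0.85),
    "inhibits": ("prevent", 0.8),
    "reduces":  ("prevent", 0.6),
}

def extract_causal_links(sentence):
    """Turn a simple 'X <verb> Y' sentence into weighted causal edges."""
    links = []
    for phrase, (relation, weight) in VERB_TO_RELATION.items():
        pattern = rf"(.+?)\s+{re.escape(phrase)}\s+(.+)"
        match = re.match(pattern, sentence.strip().rstrip("."), flags=re.IGNORECASE)
        if match:
            cause, effect = match.group(1).strip(), match.group(2).strip()
            links.append((cause, relation, effect, weight))
    return links

print(extract_causal_links("Chronic gastritis leads to mucosal damage."))
print(extract_causal_links("Aspirin inhibits prostaglandin synthesis."))
```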


The Bayesian Brain


Please read this carefully: this means that Tzager’s understanding of this specific world is the same as the human understanding of this specific world, which means that just by talking to Tzager (in healthcare terms and objectives), it probabilistically understands what we say to it in the same way we do.


There you have your first Artificial Bayesian Brain, in the way Judea Pearl envisioned it, using the same Free Energy principles that Karl Friston describes.
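
For readers who want the flavour of Friston’s quantity, here is a toy computation of variational free energy, F = sum over s of q(s) · (ln q(s) − ln p(o, s)), for a single disease/symptom pair. It only illustrates that the exact posterior minimises F; it is not Tzager’s implementation, and the numbers are invented.

```python
# Toy illustration of variational free energy for one hidden state
# ("disease present?") and one observation ("symptom observed").
import math

p_s = {True: 0.1, False: 0.9}              # prior belief over the hidden state
p_o_given_s = {True: 0.8, False: 0.05}     # likelihood of observing the symptom

def free_energy(q, observed=True):
    """F = sum_s q(s) * (ln q(s) - ln p(o, s)) for approximate posterior q."""
    F = 0.0
    for s, q_s in q.items():
        if q_s == 0.0:
            continue
        log_joint = math.log(p_s[s]) + math.log(
            p_o_given_s[s] if observed else 1 - p_o_given_s[s]
        )
        F += q_s * (math.log(q_s) - log_joint)
    return F

# The exact posterior minimises F; a mismatched belief has higher F.
posterior_true = 0.1 * 0.8 / (0.1 * 0.8 + 0.9 * 0.05)
print(free_energy({True: posterior_true, False: 1 - posterior_true}))  # lower
print(free_energy({True: 0.5, False: 0.5}))                            # higher
```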


And by the way… it works!


With our existing customers and partners, and the different experiments we are currently running or have already completed, Tzager has been able to successfully simulate what bacteria could do in parts of the body we wouldn’t be able to test, what some of the more distant causalities of gastrointestinal diseases would be, and so much more that we will gradually start publishing.


We are going to start writing our first papers on Tzager’s results and framework (it just took a lot of time to build this thing!).

This is what I wanted to share for now, with more news and updates to come on our very special agent (which is available for licensing, by the way :-)).


One last thing


We are not here to say that Deep Learning is the wrong approach; we are just saying that it is not enough if we want to take the next step in artificial intelligence. Our own next step would be to connect our Bayesian Inference algorithm with Deep Learning agents such as AlphaFold. It would be very interesting to see how AlphaFold’s predictions could be connected with Tzager’s understanding of how proteins relate to the whole body as parts of Bayesian Mechanisms.
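
To give a flavour of what such a connection might look like, here is a purely speculative sketch that treats an AlphaFold-style per-residue confidence (pLDDT, reported on a 0–100 scale) as soft evidence for a downstream binding belief. None of the function names, thresholds or probabilities come from AlphaFold or Tzager.

```python
# Speculative glue code: convert a mean pLDDT over a hypothetical binding-site
# region into a probability, then use it as soft (Jeffrey-style) evidence.

def plddt_to_soft_evidence(plddt_scores, residue_range):
    """Average pLDDT over a binding-site region -> P(region is well folded)."""
    start, end = residue_range
    region = plddt_scores[start:end]
    return min(sum(region) / len(region) / 100.0, 1.0)

def update_binding_belief(prior_binds, p_folded,
                          p_folded_if_binds=0.9, p_folded_if_not=0.4):
    """P(drug binds target | folding evidence), via soft conditioning."""
    post_if_folded = (prior_binds * p_folded_if_binds) / (
        prior_binds * p_folded_if_binds + (1 - prior_binds) * p_folded_if_not)
    post_if_unfolded = (prior_binds * (1 - p_folded_if_binds)) / (
        prior_binds * (1 - p_folded_if_binds) + (1 - prior_binds) * (1 - p_folded_if_not))
    # Weight the two hard-evidence posteriors by how confident the fold is.
    return p_folded * post_if_folded + (1 - p_folded) * post_if_unfolded

plddt = [92.0, 88.5, 95.1, 60.2, 71.3, 90.0]   # made-up scores
p_folded = plddt_to_soft_evidence(plddt, (0, 3))
print(update_binding_belief(prior_binds=0.2, p_folded=p_folded))
```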


Imagine an AlphaFold that understands its predictions in the way we humans do!


Stay safe everyone!
