Nikos Tzagkarakis

The illusion of Intelligence as the product of time.




I have been working professionally on A.I. projects for the past seven years, on both off-the-shelf deep learning algorithms and neurosymbolic algorithms (four years ago I designed Tzager, the first Bayesian inference engine for healthcare, and now I work on Thunderstruck, the first neurosymbolic data platform for game intelligence/economies and Web3). From day one, I have wanted to understand why we call humans intelligent.


Most of the time, data scientists who learn machine learning through courses fail to understand that they first need to explore the nature of intelligence if they want to make any significant contribution to the field, or even to stay relevant in five years. This means that one has to first break down, philosophically, what it means to be intelligent.

Let’s break down a major issue with intelligence.


A fairly accepted and generic description of intelligence is this: Intelligence is the ability to create models of your environment.


This means that an agent can be called an intelligent complexity if it is able to model different complexities of an environment through repetitive patterns, in a way that allows it to manipulate that environment in order to achieve a goal (most often the goal is to survive, especially if the agent is the product of evolution). So, basically, an intelligent agent is one that pushes back on, or changes, its environment in order to survive.
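
To make that definition concrete, here is a minimal sketch in Python. Everything in it (the Environment, the Agent, the toy states and energy numbers) is a hypothetical illustration of the definition above, not any particular algorithm: the agent compresses repeated observations into a model, and acts so as to keep itself alive.

```python
import random
from collections import Counter

class Environment:
    """A toy world that emits one of a few states each step."""
    STATES = ["food", "threat", "nothing"]

    def step(self) -> str:
        return random.choices(self.STATES, weights=[0.3, 0.2, 0.5])[0]

class Agent:
    """An 'intelligent complexity': models repetition, acts to survive."""

    def __init__(self) -> None:
        self.model = Counter()  # repetitive patterns -> observed frequencies
        self.energy = 10

    def observe(self, state: str) -> None:
        self.model[state] += 1  # modelling = compressing repetition

    def act(self, state: str) -> None:
        # "Manipulating the environment" is reduced here to gaining or losing energy.
        if state == "food":
            self.energy += 1
        elif state == "threat":
            self.energy -= 2

    @property
    def alive(self) -> bool:
        return self.energy > 0

env, agent = Environment(), Agent()
steps = 0
while agent.alive and steps < 1000:
    state = env.step()
    agent.observe(state)
    agent.act(state)
    steps += 1

print(f"survived {steps} steps; model of the world: {dict(agent.model)}")
```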


We humans call ourselves intelligent, and we use our own species as the basic blueprint.

What is not apparent in this logic is that we are using the most consistent bias as a hidden condition of intelligence, and that is… time.


Let me elaborate.


We, as cognitive agents, have observed other complexities with similar conditions change their environment sufficiently to stay fairly consistent for a large amount of time (that is, to stay alive). We tend to believe that the longer a complexity is able to change its environment and survive, the more likely it is to be intelligent. The problem with time is that it allows more complex models to emerge, so it correlates directly with the apparent level of intelligence. We can argue, for example, that we do not call flies sufficiently intelligent because they do not live as long.


Of course, it is clear that there are species that can process more complex information much faster, and also species that never seem to exceed their cognitive capacity. But let's assume that if the scale of time is infinite, the difference in capacity may become insignificant, since there could be processing thresholds due to energy limitations. In other words, energy cannot be infinite, but time can.
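
A toy way to see both points at once, the time bias and the capacity ceiling (the numbers and the pattern-counting measure are my own hypothetical stand-ins, not a claim about real organisms): if we naively read "model complexity" as the number of distinct patterns an agent has had time to observe, then complexity grows with lifespan, but it also saturates at a ceiling set by the agent's representational capacity.

```python
import random

def learned_patterns(lifespan_steps: int, pattern_len: int = 3) -> int:
    """Count distinct length-k patterns seen over a lifetime of observations."""
    random.seed(0)  # same world for every agent; only the lifespan differs
    stream = [random.choice("abcd") for _ in range(lifespan_steps)]
    return len({tuple(stream[i:i + pattern_len])
                for i in range(len(stream) - pattern_len + 1)})

# Longer-lived "species" accumulate richer models... until they hit the
# ceiling of 4**3 = 64 possible patterns -- a crude stand-in for the
# processing/energy threshold mentioned above.
for lifespan in (10, 100, 1_000, 10_000):
    print(f"lifespan {lifespan:>6}: {learned_patterns(lifespan)} patterns")
```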


So why is time key?

We humans live on a particular timeframe, and the rate of change we manifest in our environment is a rate we can actually understand and quantify, since the source is us. This of course means that sequoias may be immensely intelligent on their own timeframe (a sequoia "thought" may take a week to manifest, but they can afford it, since they live for thousands of years). BUT it also means that to a species that lives hundreds of thousands, or millions, of years, we would never be classified as intelligent: our contribution to the environment would look quick and insignificant, and we would seem a fairly trivial complexity that cannot push back on its environment sufficiently to survive, since to them our lifespan would be equivalent to a fly's.
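
Here is a small sketch of that relativity (the timescales and the world are invented for illustration): an observer can only register changes that persist on the scale at which it samples its environment, so a change that is long-lived for the agent producing it can be literally invisible to a slower observer.

```python
def observe(world_history: list[int], sampling_period: int) -> list[int]:
    """What an observer sees: one sample of the world every `sampling_period`."""
    return world_history[::sampling_period]

# The environment is flat (0) except where a short-lived "civilisation"
# pushes it to 1 between t=300 and t=380 -- a long era on its own timescale.
world = [1 if 300 <= t < 380 else 0 for t in range(10_000)]

print(any(observe(world, 7)))    # True:  a fast observer registers the change
print(any(observe(world, 500)))  # False: a slow observer sees nothing at all
```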


What can we do with this?


This could mean that intelligence is not a universal threshold of anything more than a biased label that we give to complexities we can observe on our own terms and with our own capacities. Which means that we should see A.I. not as a way to solve intelligence, but as a way to allow non-biological complexities to observe their environment on their own terms, opening our culture to the possibility that we are nothing more than several states of computation.

On the deep learning side, you cannot expect an agent (if it even is one, depending on the case) to model our world in the same way, and with the same hierarchies, as we do, since it does not have access to our world. So it is not a matter of “intelligence”, but a matter of what environment you are modelling.


The neurosymbolic side, today, is trying to create agents that learn by reconstructing hierarchies of their environment based on the way we humans understand it. Which means that unless the non-biological agents are exactly like us, they will have a different definition of intelligence.


It is obvious that intelligence is NOT just a matter of time. But treating the scale of time as one of the conditions allows us to see the true purpose of A.I.: to create complexities that can model their environment on their own terms.

Suddenly A.I. becomes the colourful manifestation of complexities, not with the purpose of cross-communication, but with the purpose of exploring new dimensions of existence that our ridiculously small capacity will never reach.

Stop arguing about intelligence and start making your agents describe to you how they see their world. Let there be light.
