Nikos Tzagkarakis

The self is a product of our brain’s energy limitations






When I first started thinking about consciousness as an emergent quality of agents, the first thing I tried to understand was the notion of the self as a separate model derived from an agent’s environment. That is, when an agent creates models of its environment, it is somehow able to separate the models that describe itself (e.g. the different parts of its body) from the rest of the complexities in the environment.


I realise that this is an open philosophical question with many different schools of thought, but the point I am trying to make is a rather practical one that can be made no matter which theory you prefer.


When you try to explore how the self emerges, it shouldn’t matter which philosophical theory of consciousness you prefer, as long as you accept that there is a cognitive architecture that, through computation, allows agents to create models of their environment, one of them being the self.


What do I mean by the “model of self”?


As an agent learns, it associates its different model predictions with its costs and goals, which allows it to attend to specific complexities of the environment depending on the state of its needs. This means that it eventually comes to treat certain complexities as very important, because they consistently end up being predicted as part of its attention. These aspects of the environment therefore get connected to more and more goals and costs (e.g. the agent will always predict that it needs its hands when it has to eat, or that it always hurts when its legs are injured).
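To make this concrete, here is a minimal sketch in Python of the kind of bookkeeping I have in mind. The episodes, feature names and counting rule are invented purely for illustration; the only point is that features which appear in almost every goal- or cost-related prediction (the hands, the legs) accumulate the strongest associations.

```python
# Toy illustration (not a real cognitive architecture): count how often each
# attended feature co-occurs with a goal or a cost across episodes.
from collections import Counter

# Each episode lists the environment features the agent attended to while
# pursuing a goal or suffering a cost.
episodes = [
    {"goal": "eat",   "attended": ["hands", "food", "table"]},
    {"goal": "eat",   "attended": ["hands", "food", "plate"]},
    {"goal": "reach", "attended": ["hands", "cup"]},
    {"goal": "walk",  "attended": ["legs", "ground"]},
    {"goal": "run",   "attended": ["legs", "ground"]},
    {"cost": "pain",  "attended": ["legs", "sharp_rock"]},
]

association = Counter()
for episode in episodes:
    for feature in episode["attended"]:
        association[feature] += 1  # one more goal/cost tied to this feature

# Features tied to the most goals and costs float to the top: the body parts.
for feature, count in association.most_common():
    print(f"{feature}: linked to {count} goal/cost episodes")
```

Running this puts the hands and the legs at the top of the list, which is all the “model of self” means here: a cluster of features that ends up linked to nearly every prediction the agent cares about.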


So it is reasonable to hypothesize that the self could emerge because an agent has to attend to specific aspects of its environment in order to survive, separating certain models as more important and predictable, models that are later, through culture, called the “self”.

My question is this: What if attention were unlimited?


Attention is needed simply because we cannot possibly attend to, or analyse, every piece of information that surrounds us. Agents need to focus on specific aspects of their environment in order to interact with the ones that, according to their experience, are closest to their needs and goals.


This means that the more computational capacity an agent has, the more aspects of its environment it can associate as very important and highly predictable. It also means that a Laplace’s Demon would either have no model of the self at all, or would “feel” that it itself is the whole Universe.
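The same toy setup can illustrate this. Below is a sketch, again with invented scores standing in for the goal/cost counts from the snippet above, of how the size of an attention budget decides how much of the environment gets bound into the self-model; in the limit of an unbounded budget, the boundary between self and world disappears.

```python
# Toy illustration: which features can an agent afford to bind into its
# self-model, given a limited attention budget? The scores are invented and
# stand for goal/cost association counts like those in the previous sketch.
association = {
    "hands": 9, "legs": 8, "mouth": 7,    # body
    "food": 5, "ground": 3, "table": 2,   # nearby environment
    "weather": 1, "distant_star": 1,      # the rest of the universe
}

def self_model(association, attention_budget):
    """Return the features the agent can afford to treat as 'self'."""
    ranked = sorted(association, key=association.get, reverse=True)
    return ranked[:attention_budget]

print(self_model(association, attention_budget=3))
# -> ['hands', 'legs', 'mouth']: roughly the body

print(self_model(association, attention_budget=len(association)))
# -> every feature: a Laplace's Demon with no self/world boundary
```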


Our energy limitations make us self-ish


Our limited processing capacity makes us separate our bodies from the rest of the universe: our specific energy limitations prevent us from associating our needs with a larger part of the environment.


That is why, when we combine our needs with those of other agents and share our inferences with them, we are able to create larger selves such as “family”, “nation”, or “the human species”.

I suspect that if we want to create A.I. agents with human-like models of the self, we need to limit their attention capacity; we need to introduce similar energy limitations. I expect different types of agency to emerge in different types of A.I. agents: in some cases we will have agents that identify themselves as just their body, and in other cases agents that think of themselves as a whole planet, simply because they have different energy limitations and, therefore, different processing capacities.


After all, there would be no real reason to separate your body from the rest of the environment if you could predict the exact outcome of every computation happening around you.

Eventually, every computation “affects you”, if the YOU could be the universe.


