Understanding how we build AI systems has provided significant insight into how we perceive the world. The drive to endow machines with understanding has not only deepened our knowledge of the human mind but also given rise to various theories on consciousness and how we map our surroundings.
We have long known that we navigate reality through a web of symbols and associations. How we record these symbols and link spaces with feelings and ideas is already reasonably well understood; in this regard, architectural theory often aligns closely with psychological research.
However, when discussing AI, there is one limiting factor we tend to overlook in the industry’s hype: how much computers actually “understand,” and what they are capable of replicating given their current capacities. Bernstein highlights that while machine-learning models can identify patterns and tendencies in vast amounts of data, they cannot reason counterfactually or distinguish causation from mere correlation.
He draws on an argument established by the computer scientist Judea Pearl: the so-called “Ladder of Causation.” This model divides the way our mind interprets the world into three hierarchical “degrees of understanding.” Each level builds upon the previous one, much as a ladder’s rungs give access to higher levels.
The three levels of the ladder are as follows:
- Association – Seeing/Observing
- Intervention – Doing/Intervening
- Counterfactuals – Imagining/Retrospecting/Understanding
Our current technology reaches only the level of observation: it cannot explain why events happen, nor evaluate what the outcome would have been had an event not occurred.
A true intelligence, in the human sense of the word, would additionally be able to intervene and to judge whether intervening is worthwhile at all. Observing an apple on a tree, for instance, it could either climb the tree or throw something to knock the apple down, and it might choose the latter because it is a poor climber and values its physical integrity.
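To make Pearl’s three rungs concrete, here is a minimal sketch in Python of the apple scenario as a toy structural causal model. Everything in it is illustrative: the variables, probabilities, and the climbing/throwing mechanics are invented for this post and appear nowhere in Pearl’s or Bernstein’s work.

```python
import random

# Toy structural causal model of the apple scenario.
# All variable names (skill_noise, method, success) are illustrative
# inventions, not taken from Pearl's formalism.

def model(skill_noise, outcome_noise, do_method=None):
    """One 'world': exogenous noise in, choice and outcome out."""
    good_climber = skill_noise > 0.5              # hidden aptitude
    # Agent's policy: poor climbers prefer throwing something.
    method = do_method or ("climb" if good_climber else "throw")
    # Outcome mechanism: climbing works only for good climbers;
    # throwing works 60% of the time regardless of skill.
    if method == "climb":
        success = good_climber
    else:
        success = outcome_noise < 0.6
    return method, success

random.seed(0)
worlds = [(random.random(), random.random()) for _ in range(10_000)]

# Rung 1 - Association: P(success | method == "climb"), read off
# passively observed data.
obs = [model(s, o) for s, o in worlds]
climbers = [succ for m, succ in obs if m == "climb"]
print("seeing:    P(success | climbed)  =", sum(climbers) / len(climbers))

# Rung 2 - Intervention: P(success | do(method = "climb")),
# forcing the action, which breaks the skill -> method link.
doing = [model(s, o, do_method="climb") for s, o in worlds]
print("doing:     P(success | do(climb)) =",
      sum(succ for _, succ in doing) / len(doing))

# Rung 3 - Counterfactual: for one individual who threw and succeeded,
# replay the *same* noise terms under the alternative action.
s, o = 0.2, 0.4                                   # a specific poor climber
factual = model(s, o)                             # ("throw", True)
counterfactual = model(s, o, do_method="climb")   # same world, other choice
print("imagining:", factual, "->", counterfactual)
```

The gap between the first two printed numbers is exactly what a pattern-matching model misses: in the passive data only good climbers ever climb, so climbing looks like a sure thing, but forcing everyone to climb reveals otherwise. Only the third rung lets us ask what would have happened to one particular agent had it chosen differently.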
Speculating further, we encounter two possibilities:
- Since we don’t fully understand how intelligence is generated, we might inadvertently be on the path to developing it. Consider Asimov’s classic scenario in “Robot Dreams,” in which a machine gains sentience. An intelligence with a completely different physical experience might not read reality the way we do, because its interactions with the world are by nature different (a machine doesn’t need to climb a tree to reach an apple); or,
- We might never actually experience true AGI (Artificial General Intelligence) within our lifetime, as our technical capacity is currently insufficient, and our best models are incapable of replicating the complexity of nature’s creations.
At the moment, however, it seems unlikely that we will climb beyond the lowest rung of the ladder anytime soon.