Sorry AI – The Brain Is Still The Best Inference Machine Out There

Despite continued progress in the state of the art of machine learning and artificial intelligence (AI), one thing that still sets the human brain apart – along with those of some other animals – is its ability to connect the dots and infer information that supports problem solving in inherently uncertain situations. It does this remarkably well despite sparse, incomplete, and almost always imperfect data. Machines, in contrast, have a very difficult time inferring new insights and generalizing beyond what they have been explicitly trained on or exposed to.

How the brain represents and navigates through space

Back in 1948, scientists suggested that the brain forms a ‘cognitive map’: a flexible, adaptable internal model of the outside spatial world that can be dynamically updated as new external information comes in. Although the biological and physiological mechanisms responsible for creating and maintaining such cognitive maps were unknown, it was hypothesized that, however the brain was doing it, this map allows it to infer and navigate routes through its physical environment – including shortcuts and paths it has never physically taken – based on what it has learned from partial information.

More recent work is uncovering the cells and brain regions involved in creating such cognitive maps. In particular, ‘place cells’ in the hippocampus – an important structure involved in learning and memory – have been suggested to encode an individual’s current position within their physical environment, and also to predict future positions in the context of navigation and path-selection decisions.

‘Grid cells’ in another part of the brain, the medial entorhinal cortex in the temporal lobes, are thought to generalize how the brain encodes spatial information so that it can make navigation and route decisions when it finds itself in new situations. The entorhinal cortex is known to act as a relay, or interface, between the hippocampus (which lies deeper, in an evolutionarily older part of the brain) and the rest of the cortex.

These ideas are further supported by theoretical (mathematical) and computational models of the brain that go one step further and propose how neurons in the entorhinal cortex create a two-dimensional grid code – one capable of inferring new paths and shortcuts through spatial environments, including situations the brain hasn’t encountered before. What’s more, analogous grid-based models derived from these neural codes have been tested in artificial neural networks.
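To make the shortcut-inference idea concrete, here is a toy sketch in Python. It is purely illustrative – the points and numbers are invented, and it is not a model of actual grid-cell circuitry – but it captures the core intuition: in a vector-based spatial code, the displacement of a never-traveled shortcut can be inferred by adding the displacements of legs that were traveled.

```python
import numpy as np

# Toy sketch (illustrative only): an agent has traveled two legs,
# A -> B and B -> C, and learned their displacement vectors. A
# vector-based spatial code lets it infer the never-traveled
# shortcut A -> C by simple vector addition.

a_to_b = np.array([3.0, 1.0])   # learned displacement for leg A -> B
b_to_c = np.array([-1.0, 2.0])  # learned displacement for leg B -> C

# Inferred direct displacement A -> C, never experienced directly.
a_to_c = a_to_b + b_to_c
print(a_to_c)  # [2. 3.]

# Distance saved by taking the inferred shortcut instead of the detour via B.
detour = np.linalg.norm(a_to_b) + np.linalg.norm(b_to_c)
shortcut = np.linalg.norm(a_to_c)
print(round(detour - shortcut, 2))  # 1.79
```

The design point is that nothing about the route A → C needs to be stored explicitly; a consistent coordinate-like code makes novel routes computable from partial experience, which is the property the grid-code models aim to explain.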

Beyond space: Representing and inferring abstract concepts

Other recent work has extended the concept of cognitive maps beyond spatial representations of the physical environment. Cognitive maps of continuous representations of sound frequencies and odor concentrations have also been shown experimentally. And a paper published just over a month ago showed that, in addition to encoding abstract cognitive-map representations of non-spatial information, the brain is able to use such maps to infer relationships – abstract ‘paths’ through the maps – that it was never directly exposed to. This in turn supports decisions that require information beyond what it has directly experienced and learned.

The researchers used functional magnetic resonance imaging (fMRI) to derive brain activity from blood-flow measurements while test subjects performed different cognitive tasks. Specifically, they were able to show that the brain learns an abstract (neither spatial nor physical) two-dimensional grid-like navigation system for traversing and prioritizing social hierarchies – learned relationships of social rank between different people. Importantly, the full social hierarchy was never directly presented to the subjects; they were never shown all the relationships linking the different individuals in the data set. Rather, the hierarchy was reconstructed – organized – by the brain from pieces of data consisting of pairwise social comparisons between individuals.

Then, by designing carefully constructed and controlled decision-task experiments, they were also able to show that the brain uses these social-hierarchy cognitive maps to infer ‘connections’ between individuals in support of decision problems. The brain can solve the decision problems it faces by inferring socially important connections and paths within its abstract internal model of the social hierarchy. It is the equivalent of the brain inferring and navigating a route through physical space from point A to point B, even though it has never been exposed to a direct route from A to B.
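The kind of inference described above – filling in relations that were never shown – can be sketched in a few lines of Python. This is an illustrative toy, not the study’s actual analysis: the names are invented, the ‘hierarchy’ is a single trait (the study used two, one per map dimension; a second trait would be handled the same way), and real pairwise learning in the brain is far messier.

```python
# Toy sketch (illustrative only, not the study's method): from sparse
# pairwise comparisons, reconstruct each person's rank on a trait,
# then answer queries about pairs that were never shown together.

def ranks_from_pairs(pairs):
    """Order people from (winner, loser) pairs, assuming the
    comparisons are consistent (no cycles)."""
    people = {p for pair in pairs for p in pair}
    beaten_by = {p: set() for p in people}
    for winner, loser in pairs:
        beaten_by[loser].add(winner)
    # Transitive closure: propagate "is outranked by" until stable.
    changed = True
    while changed:
        changed = False
        for p in people:
            for q in list(beaten_by[p]):
                new = beaten_by[q] - beaten_by[p]
                if new:
                    beaten_by[p] |= new
                    changed = True
    # Rank = number of people who outrank you (0 = top).
    return {p: len(beaten_by[p]) for p in people}

# Observed pairwise data; (X, Y) means X outranks Y.
# Note that Ann vs Cal is never shown directly.
observed = [("Ann", "Bob"), ("Bob", "Cal"), ("Cal", "Dee")]
rank = ranks_from_pairs(observed)

# Inferred relation for the never-compared pair:
print(rank["Ann"] < rank["Cal"])  # True: Ann outranks Cal
```

The point of the sketch is the study’s central claim in miniature: once pairwise experience is organized into an internal map, relations that were never experienced directly fall out of the map’s structure.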

Because the researchers were using fMRI, they were also able to map out what regions of the brain participate in constructing such abstract cognitive maps. It’s the same regions of the brain observed to play a role in constructing spatial cognitive maps. This is allowing neuroscientists to make sense of the underlying physiology that enables these capabilities.

As with any work, there are of course limitations to the experimental methodology, and consequently to how far one can push the interpretation of the results. In this particular case, the scenarios, the amount of data presented, and the decision tasks asked of the test subjects were highly controlled and rather simple. What made the resulting cognitive maps two-dimensional was that subjects were asked to take only two classes of features into consideration. This simplified both the cognitive load of learning and inferring relationships and the difficulty of the decision problems they were asked to solve. It is not obvious how this scales internally in the brain to more complex, higher-dimensional features and data, or to substantially larger numbers of learned instances. To be fair, what the researchers were able to show in this paper was an experimental tour de force in itself. But it does raise questions that future research will need to address.

Nonetheless, it seems likely that this ability is not limited to social hierarchies and spatial navigation, but reflects a generalized capacity of the brain to take all kinds of different, abstract information and build internal associative and relational models – often in the face of sparse and almost always incomplete or imperfect information. It then uses these internal cognitive maps to infer paths that support decision-making tasks.

How the brain evolved these abilities, and what underlying ‘algorithms’ enable them, remain poorly understood. Mathematical models that could lead to a deep understanding of what the brain is doing, and how, are still immature and remain a very active area of research.

But however the brain is doing it, it’s an ability machines and AI have not (yet) been able to seriously replicate. Spatial navigation by machine-learning systems has come a long way and can be impressive, but it still requires a great deal of training and can fail miserably when unpredicted scenarios are encountered. Where machines really can’t compete with the brain, though, is in making good-enough educated guesses and decisions in the face of incomplete, sparse, and noisy abstract situations that demand problem solving through inference. From a machine-learning and AI perspective, there remains much to be learned from the brain.

