In this research paper, Numenta proposes a novel theoretical framework for understanding what the neocortex does and how it does it. The framework is based on grid cells and has significant implications for neuroscience and machine intelligence.
Underlying our AI technology is neuroscience research that demonstrates how the brain creates intelligence. Our two decades of neuroscience research have uncovered a number of core principles that are not reflected in today’s machine learning systems. We have developed a new theory of intelligence, called the Thousand Brains Theory, that contains key principles we believe must be incorporated into AI and ML systems to build intelligent machines.
The Existing View of the Neocortex
Consistent with anatomical and physiological evidence, the Thousand Brains Theory offers a new perspective of how our brains work.
Today, the most common way of thinking about the neocortex is as a flowchart: information from the senses is processed step by step, from simple features to complex features to complete objects, as it passes from one region of the neocortex to the next. Although this view is widely accepted, numerous observations suggest it needs modification.
A New Understanding of How the Brain Works
In 1978, Vernon Mountcastle proposed that the way we see, feel, hear, move, and even do high-level planning all runs on the same cortical circuitry. A cortical column is the basic functional unit that is replicated many times across the cortical sheet. Therefore, if we can understand a cortical column, we will understand the neocortex.
Based on Mountcastle’s proposal, the Thousand Brains Theory states that all cortical columns, even in low-level sensory regions, are capable of learning and recognizing complete objects through sensory inputs and movement. Together, these columns build tens of thousands of models of everything we know.
It’s as if your brain is actually thousands of brains working simultaneously.
Animation: How the Brain Works: The Thousand Brains Theory of Intelligence
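The idea that many columns each hold their own model, with recognition emerging by consensus, can be illustrated with a toy sketch. This is not Numenta's implementation; the `Column` class, feature sets, and voting rule here are illustrative assumptions.

```python
# Toy sketch (not Numenta's actual model): each "column" stores its own
# models of known objects and casts votes; recognition is the consensus.
from collections import Counter

class Column:
    def __init__(self, models):
        # models: object name -> set of features this column has learned
        self.models = models

    def vote(self, observed):
        # Vote for every object whose learned features include the observation
        return [name for name, feats in self.models.items() if observed <= feats]

def recognize(columns, observed):
    votes = Counter()
    for col in columns:
        votes.update(col.vote(observed))
    return votes.most_common(1)[0][0] if votes else None

# Three columns with slightly different, independently learned models
cols = [
    Column({"cup": {"rim", "handle"}, "ball": {"curve"}}),
    Column({"cup": {"rim", "handle", "curve"}, "ball": {"curve"}}),
    Column({"cup": {"handle"}, "ball": {"curve", "seam"}}),
]
print(recognize(cols, {"handle"}))  # prints "cup"
```

No single column needs a complete or correct model; the voting step lets partial, redundant models agree on one answer, which is the sense in which the brain acts like thousands of brains working simultaneously.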
This paper proposes a network model composed of columns and layers that performs robust object learning and recognition. The model attributes a new capability to cortical columns: a location signal, represented relative to the object being sensed.
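A minimal sketch can show why pairing features with object-relative locations enables recognition through movement. The representation below (objects as location-to-feature maps, candidate elimination as the sensor moves) is an illustrative simplification, not the paper's network model.

```python
# Minimal sketch: an object is learned as a map from object-relative
# locations to the features sensed there. Recognition narrows the set of
# candidate objects as a moving sensor gathers (location, feature) pairs.
def recognize(models, observations):
    # observations: sequence of (location, feature) pairs from a moving sensor
    candidates = set(models)
    for loc, feat in observations:
        candidates = {o for o in candidates if models[o].get(loc) == feat}
    return candidates

models = {
    "mug":  {(0, 0): "rim", (0, -1): "handle"},
    "bowl": {(0, 0): "rim", (0, -1): "curve"},
}

# A single touch at (0, 0) is ambiguous; both objects have a rim there.
print(recognize(models, [((0, 0), "rim")]))
# Moving to (0, -1) and sensing a handle disambiguates.
print(recognize(models, [((0, 0), "rim"), ((0, -1), "handle")]))
```

The key point carried over from the paper is that the same feature ("rim") means different things at different locations on different objects, so location information is what makes single-column object recognition possible.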
This foundational paper describes core HTM theory for sequence memory and its relationship to the neocortex. Written from a neuroscience perspective, the paper explains why neurons have so many synapses and how networks of neurons can form a powerful sequence learning mechanism.
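The central idea of sequence memory, that prediction depends on the preceding context rather than on the current input alone, can be sketched with a toy high-order predictor. This is an illustrative stand-in, not the HTM neuron model from the paper.

```python
# Toy high-order sequence predictor (illustrative only, not HTM itself):
# the same input predicts different successors depending on its context.
from collections import defaultdict

class SequenceMemory:
    def __init__(self, order=3):
        self.order = order
        self.transitions = defaultdict(set)

    def learn(self, seq):
        # Record which element follows each context window of length `order`
        for i in range(len(seq) - self.order):
            context = tuple(seq[i:i + self.order])
            self.transitions[context].add(seq[i + self.order])

    def predict(self, context):
        return self.transitions.get(tuple(context), set())

sm = SequenceMemory(order=3)
sm.learn(["A", "B", "C", "D"])
sm.learn(["X", "B", "C", "Y"])

# "C" is the current input in both cases, but the prediction differs:
print(sm.predict(["A", "B", "C"]))  # context from the first sequence
print(sm.predict(["X", "B", "C"]))  # context from the second sequence
```

In the paper, this context sensitivity is achieved biologically: the many synapses on a neuron's dendrites act as pattern detectors that put the cell into a predictive state, so the network as a whole learns high-order sequences without an explicit context window.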