Navigating Numenta’s Brain Theory through a Progression of Papers

For the nearly thirteen years that Numenta has existed, we have had two missions:

  1. Reverse-engineer the neocortex to understand how we learn and behave
  2. Enable technology based on brain theory

While we sometimes refer to them as dual missions, the order matters. We’ve gone through different business models over the years: from focusing on a single product, to building example applications, to now focusing solely on neuroscience research and theory. Yet our primary mission has always been a scientific one.

The Balancing Act: Scientific Research vs. Scientific Publishing

As Jeff explained in a blog post this summer, the past two years have put us on an accelerated scientific course. In early 2016, we had a major insight related to brain theory. That insight has unlocked additional discoveries and set the stage for tackling new challenges that were previously unsolvable. Just as important as advancing the research, however, is documenting it. While that demands a balance between focusing on the future and parsing the past, we’ve made it a goal to document all of our discoveries in scientific journals.

Coincidentally, early 2016 brought milestones on the publishing front as well as the research front: we published a seminal peer-reviewed paper in March 2016. Since then, we have published four additional peer-reviewed papers, with more in the works, as well as supplemental white papers and research manuscripts. As we continue to build out a library articulating our brain theory, questions may arise, such as: How do the papers relate to each other? What’s the significance of each one? How do they contribute to the overall theory?

Mapping our Progress through Papers

If you had to summarize our research in a single high-level question, it would be: “How does the brain learn predictive models of the world?” Our progress to date can be summarized by two important discoveries, each one marked by a fundamental paper:

  1. How the brain learns predictive models of extrinsic sequences
    Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex (Frontiers in Neural Circuits, 2016)

  2. How the brain learns predictive models of sensorimotor sequences
    A Theory of How Columns in the Neocortex Learn the Structure of the World (Frontiers in Neural Circuits, 2017)

Extrinsic sequences are those where sensory inputs change due to external factors. For example, when you hear a song, the melody changes regardless of where you are or what you’re doing as you listen. Sensorimotor sequences are those where inputs change due to your own behavior. When you turn your head, for example, you see an entirely different view of the world, but not because the world is moving. Your movement changes the input to your retina.

As you can see in the diagram below, most of our existing papers relate to the first discovery. In addition to the keystone “Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex,” we’ve produced papers that focus on one particular aspect of the theory or related applications. Going forward, we plan to do the same for the sensorimotor work. “A Theory of How Columns in the Neocortex Learn the Structure of the World” is the first of many we hope to publish on the new research.

Numenta Research Papers


Discovery 1: How the brain learns predictive models of extrinsic sequences
Key Paper:

Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex

  • New pyramidal neuron model – how most of a neuron’s activity is dedicated to predicting
  • Sequence memory model – how a layer of neurons learns sequences of patterns
  • Sparse distributed representations (SDRs) – how the brain represents uncertainty (sketched below)
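
To make the idea concrete, here is a minimal sketch of an SDR as a sparse set of active bits, with similarity measured by overlap. This is purely illustrative; the sizes and the set-based representation are assumptions for the example, not Numenta’s implementation.

```python
import random

N = 2048  # total number of bits (sizes in the thousands are typical in the papers)
W = 40    # number of active bits, i.e. roughly 2% sparsity

def random_sdr(n=N, w=W):
    """Model an SDR as the set of its active bit indices."""
    return set(random.sample(range(n), w))

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return len(a & b)

a, b = random_sdr(), random_sdr()
print(overlap(a, a))  # 40: an SDR matches itself perfectly
print(overlap(a, b))  # almost always 0-3: random SDRs barely overlap by chance
```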

Supporting Papers:

Sequence memory model:
Continuous Online Sequence Learning with an Unsupervised Neural Network Model

  • Analysis of HTM sequence memory applied to various sequence learning and prediction problems
  • Compares HTM to statistical and Deep Learning techniques

SDRs:
The HTM Spatial Pooler: A Neocortical Algorithm for Online Sparse Distributed Coding

  • Introduces the Spatial Pooler and explains how it models the way neurons learn feedforward connections
  • Shows how the Spatial Pooler creates SDRs and supports essential neural computations such as sequence learning and memory
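
As a rough illustration of that idea (a simplified sketch with made-up sizes, not the learning algorithm described in the paper), spatial pooling can be pictured as each column scoring how much of the current input falls on its connections, with only the top-scoring columns becoming active:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, n_active = 1024, 2048, 40

# Each column connects to a random subset of the input space.
# (In the real Spatial Pooler these connections carry permanence values
# that are learned online; here they are fixed for simplicity.)
connections = (rng.random((n_columns, n_inputs)) < 0.5).astype(np.int32)

def spatial_pool(input_bits):
    """Return the indices of the winning columns for a binary input vector."""
    overlaps = connections @ input_bits       # how much of the input each column sees
    return np.argsort(overlaps)[-n_active:]   # k-winners-take-all -> a sparse output

x = (rng.random(n_inputs) < 0.05).astype(np.int32)  # a sparse binary input
print(len(spatial_pool(x)))                          # 40 active columns, i.e. an SDR
```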

How Do Neurons Operate on Sparse Distributed Representations? A Mathematical Theory of Sparsity, Neurons and Active Dendrites

  • Proposes a formal mathematical model for sparse representations and active dendrites in the cortex
  • Quantifies the benefits and limitations of sparse representations in neurons and cortical networks
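
To give a flavor of the sort of quantity analyzed in that paper, here is a back-of-the-envelope calculation (a simplified version with example sizes, not the paper’s full derivation) of how unlikely it is for two random SDRs to share many active bits purely by chance:

```python
from math import comb

def false_match_probability(n, w, theta):
    """Probability that a random SDR with w of n bits active shares at least
    theta active bits with a fixed SDR of the same size."""
    matches = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1))
    return matches / comb(n, w)

# With 2048-bit SDRs and 40 active bits, requiring only 10 shared bits to
# declare a match still makes accidental matches vanishingly rare.
print(false_match_probability(2048, 40, 10))
```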

Properties of Sparse Distributed Representations and their Application To Hierarchical Temporal Memory

  • Applies sparse representations to practical HTM systems
  • Earlier version of the above paper

Applications of Discovery 1:

Machine Learning Applications

Unsupervised Real-Time Anomaly Detection for Streaming Data

  • Demonstrates how HTM meets the requirements necessary for real-time anomaly detection in streaming data
  • Presents results using the Numenta Anomaly Benchmark (NAB), the first open-source benchmark designed for testing anomaly detection algorithms on streaming data
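
The core of the approach, stripped of the details in the paper, is to score each new input by how well it was predicted. Here is a minimal sketch of that raw anomaly score, with SDRs again modeled as sets of active bits (the paper additionally builds an anomaly-likelihood model on top of this score):

```python
def anomaly_score(predicted, actual):
    """Fraction of the current input's active bits that were not predicted.

    0.0 means the input was fully anticipated; 1.0 means a complete surprise.
    """
    if not actual:
        return 0.0
    return 1.0 - len(predicted & actual) / len(actual)

# Toy example: SDRs as sets of active bit indices
predicted = {3, 17, 42, 99, 204}
actual    = {3, 17, 42, 99, 512}
print(anomaly_score(predicted, actual))  # 0.2 -> one of five active bits was unexpected
```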

Evaluating Real-time Anomaly Detection Algorithms – the Numenta Anomaly Benchmark

  • Discusses how we should think about anomaly detection for streaming applications
  • Introduces a new open-source benchmark for detecting anomalies in real-time, time-series data

Encoding Data for HTM Systems

  • Describes how to encode data as Sparse Distributed Representations (SDRs) for use in HTM systems
  • Explains several existing encoders and discusses requirements for creating encoders for new types of data
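
As a simple illustration of what an encoder does (a toy scalar encoder with made-up parameters, not the exact scheme specified in the paper), a number can be mapped to a contiguous block of active bits so that nearby values share bits:

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n_bits=400, w=21):
    """Encode a scalar as a set of w contiguous active bits out of n_bits.

    Similar values produce overlapping encodings, which is the property
    downstream HTM layers rely on. (Illustrative only; real encoders handle
    resolution, edge cases, and other data types.)
    """
    value = max(min_val, min(max_val, value))  # clip to the encoder's range
    span = n_bits - w
    start = int(round((value - min_val) / (max_val - min_val) * span))
    return set(range(start, start + w))

a, b, c = encode_scalar(50), encode_scalar(52), encode_scalar(90)
print(len(a & b))  # nearby values share many active bits
print(len(a & c))  # 0 -- distant values share none
```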

Neuromorphic Applications

Porting HTM Models to the Heidelberg Neuromorphic Computing Platform

  • Provides an example of how to port HTM algorithms to analog hardware platforms


Discovery 2: How the brain learns predictive models of sensorimotor sequences
Key Paper:

A Theory of How Columns in the Neocortex Learn the Structure of the World

  • Extension of sequence memory model – how multiple layers of neurons learn to recognize objects through movement
  • Location signal – key feature of cortical function that every column computes for all input
  • Every column can learn complete objects – through movement
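
One way to picture that claim (a toy sketch, not the paper’s neural network model) is that a column stores an object as a set of (location, feature) pairs and narrows down the candidate objects as each movement brings in a new pair:

```python
# Hypothetical objects described as sets of (location, feature) pairs
objects = {
    "cup":    {("rim", "curved_edge"), ("side", "smooth"), ("handle", "loop")},
    "bottle": {("rim", "curved_edge"), ("side", "smooth"), ("base", "flat")},
}

def recognize(sensations):
    """Keep only the objects consistent with every (location, feature) sensed so far."""
    candidates = set(objects)
    for sensation in sensations:
        candidates = {name for name in candidates if sensation in objects[name]}
    return candidates

print(recognize([("rim", "curved_edge")]))                      # both objects remain: ambiguous
print(recognize([("rim", "curved_edge"), ("handle", "loop")]))  # only 'cup' remains: movement resolves it
```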

Supporting Paper:

Untangling Sequences: Behavior vs. External Causes

  • Describes a cortical model for untangling sensorimotor from external sequences
  • Shows how a single neural mechanism can learn and recognize these two types of sequences

Though the publishing process can take more than a year for a single paper, we’ll share our work as we go, along with any pre-print manuscripts, until eventually our cortical theory and its associated papers are complete. Until then, we invite you to catch up on what’s available so far, [including this video presentation at MIT on December 15](https://cbmm.mit.edu/video/have-we-missed-half-what-neocortex-does-allocentric-location-basis-perception) where Jeff discussed the content from our two fundamental papers, as well as new material that we have yet to document.

Author
Christy Maver • VP of Marketing