Catching up on Numenta

A selection of Numenta resources for neuroscience novices, seasoned scientists, and everyone in between

Over the past four weeks, Numenta has been working remotely, joined by much of the world as “shelter in place” and “social distancing” have become part of our collective daily vocabulary. We’re fortunate that we can continue our work with minimal adjustments. The backdrops may have changed, but our research meetings continue. And though we miss the hallway chats and conference room gatherings, we’ve replaced them with Slack chatter and Zoom lunches.

One common theme over virtual lunch is asking each other for recommendations on what to watch, read, or listen to. Sheltering in place seems to be a good time to catch up on material that has long been on your list. If you haven’t had a chance to catch up on our papers and videos, I’ve put together a list that highlights where to start. Whether you’re a neuroscience novice or a machine learning expert, there’s something for everyone.

For the science reader: The “Frameworks” paper

This paper, “A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex,” formally introduced our theory of how the brain works: the Thousand Brains Theory of Intelligence. The framework suggests mechanisms for how the cortex represents object compositionality, object behaviors, and even high-level concepts.

For the non-scientist: “Frameworks” Companion paper 

Because not everyone wants to read a peer-reviewed scientific paper, we created a version of the “Frameworks” paper specifically for non-neuroscientists. In it, we’ve simplified the explanations to focus on the big ideas in the theory. You can read this companion piece on its own or use it as a primer for the scientific paper.

For the machine learner: How Can We Be So Dense? The Benefits of Using Highly Sparse Representations (paper)

Those who want a look into how we’re applying our neuroscience research to machine learning systems can start here. Most artificial networks today use dense representations, as opposed to biological networks, which rely on sparse representations. This paper demonstrates how sparse representations can be more robust to noise and interference.
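The intuition behind that robustness can be sketched with a toy example (the dimensions and sparsity level here are illustrative choices, not the paper’s exact parameters): in a high-dimensional binary vector with only a few active bits, corrupting some bits barely changes how well it matches its original pattern, while two unrelated sparse patterns almost never overlap by chance.

```python
import random

def sparse_vector(n=2000, k=40, seed=None):
    """A random binary vector with exactly k of n bits active, as a set of indices."""
    rng = random.Random(seed)
    return set(rng.sample(range(n), k))

def overlap(a, b):
    """Number of active bits two sparse vectors share."""
    return len(a & b)

rng = random.Random(0)
v = sparse_vector(seed=1)

# Corrupt the vector: drop 10 of its 40 active bits and turn on 10 random ones.
dropped = set(rng.sample(sorted(v), 10))
noisy = (v - dropped) | set(rng.sample(range(2000), 10))

other = sparse_vector(seed=2)  # an unrelated sparse representation

print(overlap(v, noisy))  # stays high: the noisy copy still clearly matches v
print(overlap(v, other))  # near zero: unrelated sparse codes rarely collide
```

Even with a quarter of its active bits corrupted, the noisy vector shares most of its bits with the original, while a random unrelated vector shares almost none, which is the basic property the paper builds on.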

For the AI enthusiast: Lex Fridman’s AI Podcast interview with Jeff Hawkins

Also available as a podcast, this two-hour conversation between Lex Fridman and our co-founder Jeff Hawkins is part of Lex’s popular Artificial Intelligence podcast series. The two discuss the Thousand Brains Theory of Intelligence, deep learning, super-human intelligence, and the existential threats of AI.

For the intellectually curious: HTM School (YouTube Tutorials) 

If you’ve been wanting to learn more about the concepts in our theory before diving into the technical components of the algorithms, HTM School is a great place to start. Each video, hosted by our Community Manager Matt Taylor, features visualizations and breakdowns of the biological algorithms involved. Best of all, the series is designed for a general audience. No neuroscience or computer science background required.

Authors
Christy Maver • VP of Marketing