Yearly Recap: Taking a Look Back at Our Top Research Meetings

Numenta’s commitment to open science has been clear since we launched our open source project in 2013: we release our day-to-day research code to GitHub and publish our research meetings on YouTube.

Our research meetings are the cornerstone of everything we do. They’re where we brainstorm and solve problems, share and discuss research updates, and inform each other of exciting new research. In addition to posting on YouTube, we post our meetings on our open-source discussion group, HTM Forum. If you have any questions or ideas to share, I encourage you to join the forum and chime in. We often answer questions as part of our research meetings and always appreciate your comments.

Let’s hit the highlights: here are our most popular research meetings from the previous 12 months – just in case you missed them!

Staying Current on Industry Trends & Research

The majority of our research meetings took place virtually as a result of the COVID-19 pandemic. This shift allowed us to invite guest speakers and visiting scientists from across the globe to talk about their work and how it can potentially extend or relate to our Thousand Brains Theory.

> OpenAI’s GPT-3 Language Model (Steve Omohundro)

Computer Scientist Steve Omohundro gave a fascinating talk on GPT-3, OpenAI’s natural language processing model. He reviewed the network architecture, training process, and results in the context of past work. We then extensively discussed the implications for NLP and for machine intelligence / AGI.

> An Attempt to Model the Neocortical Microcircuit in Sensory Cortex (Max Bennett)

We had the pleasure of hearing from Max Bennett, Co-founder & Chief Product Officer of Bluecore, about his paper and work on modeling cortical columns, sequences with precise timescales, and working memory. His work builds on and extends our past work in several interesting directions.

> Sparse and Meaningful Representations Through Embodiment (Viviane Clay)

How do we learn representations in an embodied setting? Would results be different in a curiosity-driven learning setting? Visiting Scientist Viviane Clay from the University of Osnabrück talked about her research on learning sparse and meaningful representations through embodiment.

Diving into Scientific Literature

Machine learning and neuroscience are exciting fields in which new techniques and ideas are constantly being published. We read a great deal of peer-reviewed journals and preprints to stay informed and up to date with current research.

> GLOM Paper Review (Marcus Lewis)

Through the lens of our Thousand Brains Theory, Senior Researcher Marcus Lewis reviewed Geoffrey Hinton’s GLOM model. He highlighted the similarities and differences between the two models’ voting mechanisms, structure, and use of neural representations. He also explored the idea of GLOM handling movement. We followed up this research meeting with a blog post that explores the high-level commonalities and differences.

> A Review of Learning Rules in Machine Learning (Alex Cuozzo)

Our Researcher Alex Cuozzo looked at a few notable papers and explained high-level concepts related to learning rules in machine learning. Moving away from backpropagation with gradient descent, he talked about different attempts at biologically plausible learning regimes. He also covered work that used machine learning to create novel optimizers and local learning rules.

> Book Review: Sparse Distributed Memory by Pentti Kanerva (Alex Cuozzo)

Alex Cuozzo discussed the book Sparse Distributed Memory by Pentti Kanerva. He first explored a few concepts related to high-dimensional vectors mentioned in the book, such as rotational symmetry and the distribution of distances. He then talked about the key properties of the Sparse Distributed Memory model and how it relates to biological memory.
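
For readers curious about the distance-distribution property Alex mentioned, here is a minimal sketch (our illustration, not code from the meeting) of how Hamming distances between random high-dimensional binary vectors concentrate around n/2, one of the properties Kanerva builds on.

```python
# Minimal illustrative sketch (not from the meeting): in a high-dimensional binary
# space, Hamming distances between random vectors cluster tightly around n/2,
# one of the distance-distribution properties discussed in Kanerva's book.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                     # dimensionality of the binary vectors
vectors = rng.integers(0, 2, size=(200, n))  # 200 random dense binary vectors

# Pairwise Hamming distances between distinct vectors
distances = [np.count_nonzero(vectors[i] != vectors[j])
             for i in range(len(vectors))
             for j in range(i + 1, len(vectors))]

print(f"mean distance = {np.mean(distances):.1f}  (expected n/2 = {n / 2:.0f})")
print(f"std deviation = {np.std(distances):.1f}  (expected sqrt(n)/2 = {np.sqrt(n) / 2:.1f})")
```

Nearly every pair of random vectors ends up roughly 500 bits apart, which is why a new random vector in such a space is almost certainly far from everything already stored.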

Developing a Hypothesis

Scientific theories are the foundation for furthering scientific knowledge and putting facts and observations to practical use. We introduce ideas, develop theories, and present hypotheses to the team, then use the feedback to further refine our research.

> Reference Frames Transformation in the Thalamus (Jeff Hawkins)

Our Co-founder Jeff Hawkins explored the relationship between the thalamus and the neocortex, and whether reference frame transformation can occur in the thalamus. Jeff proposed that the anatomy and physiology of the thalamus suggest that thalamocortical relay cells might be playing a role in information processing.

> Scale and Orientation in Cortical Columns (Jeff Hawkins)

Jeff explained how introspection can be a helpful tool in neuroscience research and gave an overview of what cortical columns need to represent objects. The team then extensively discussed how scaling works in the columns. Lastly, Jeff gave four possible explanations for how the neocortex can represent objects with various orientations.

> ‘Eigen-view’ on Grid Cells (Marcus Lewis)

Marcus presented his ‘Eigen-view’ on grid cells and connected the ideas in three underlying papers to Numenta’s research. He talked about describing grid cell maps in terms of eigenvectors, and discussed how those eigenvectors can be understood through the Fourier transform in continuous space and through its graph-based counterpart, spectral graph theory, on 2D graphs.
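
As a small, self-contained illustration of the Fourier / spectral-graph connection (a sketch of the general math, not Marcus’s code or the models in the papers): on a ring graph, the eigenvectors of the graph Laplacian are spanned by the discrete Fourier modes, and spectral graph theory extends the same picture to arbitrary graphs such as 2D environments.

```python
# Minimal sketch (not from the meeting): the graph Laplacian of a ring graph is
# diagonalized by the discrete Fourier basis, with eigenvalues 2 - 2*cos(2*pi*k/n).
# Spectral graph theory generalizes this Fourier picture to arbitrary graphs.
import numpy as np

n = 64
identity = np.eye(n)
# Adjacency of a cycle: each node connects to its two neighbors (with wraparound)
adjacency = np.roll(identity, 1, axis=0) + np.roll(identity, -1, axis=0)
laplacian = 2 * identity - adjacency

eigenvalues, eigenvectors = np.linalg.eigh(laplacian)

# Compare with the analytic Fourier spectrum of the ring
k = np.arange(n)
fourier_spectrum = np.sort(2 - 2 * np.cos(2 * np.pi * k / n))
print(np.allclose(np.sort(eigenvalues), fourier_spectrum))  # True
```

On a general graph the same Laplacian eigenvectors take the place of the Fourier basis, which is roughly the sense in which spectral graph theory substitutes for the Fourier transform.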

Subscribe to our YouTube channel and follow us on Twitter to stay up to date with our research.

Author
Charmaine Lai • Marketing Manager