Brains@Bay – The Role of Active Dendrites in Learning

In this meetup, we focus on the role of active dendrites in learning, from both a neuroscience and a computational perspective. We invited Matthew Larkum (Larkum Lab), Ilenna Jones (Kording Lab), and Blake Richards (Linc Lab) to present their views on what active dendrites can contribute to machine learning.

Meetup link: https://www.meetup.com/BraIns-Bay/events/262647238/

Brains@Bay Meetups focus on how neuroscience can inspire us to create improved artificial intelligence and machine learning algorithms. Join the discussion here.

Video

Follow-up Q&A with Blake Richards

We received an overwhelming number of questions and couldn’t finish answering them all. Hence, we set aside some time with Blake Richards to go over the remaining questions.

Q: What are your thoughts on the need for teaching signals regarding new learning approaches such as contrastive learning? Could it be the contrastive mechanism that frees us from teaching signals?

Blake: No. Contrastive learning, and other self-supervised or unsupervised approaches, could free us from having to use teaching signals for representation/model learning. But, how can you use a contrastive signal to, say, learn how to perform a figure skating routine? At the end of the day, we clearly have motor targets that we can learn to match.
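To make the distinction concrete, here is a minimal sketch (PyTorch, with random toy tensors; all shapes and names are illustrative) of the two kinds of signals in play: an InfoNCE-style contrastive loss, which needs no labels, next to a supervised target-matching loss of the sort motor learning seems to require.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: pulls together two views of the
    same input, pushes apart views of different inputs. No labels are
    required -- the 'teaching signal' is the pairing itself."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise similarities
    targets = torch.arange(z1.size(0))   # positive pair = same row index
    return F.cross_entropy(logits, targets)

def target_matching_loss(action, motor_target):
    """Supervised target matching: learning a figure-skating routine
    still needs an explicit target to match."""
    return F.mse_loss(action, motor_target)

# Toy usage with random embeddings and actions (hypothetical shapes).
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
print(info_nce_loss(z1, z2))
print(target_matching_loss(torch.randn(8, 4), torch.randn(8, 4)))
```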

Q: If you have multiple mechanisms without a unifying factor, won’t they fail to work in sync and cause the brain’s function to break down?

Blake: Not necessarily. Many neural networks trained with multiple cost functions show interesting and useful properties, even when those cost functions are adversarial with each other (as with GANs).
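As a concrete illustration of adversarial cost functions coexisting in one system, here is a minimal GAN training sketch (PyTorch, toy 1-D data; the architectures and hyperparameters are arbitrary) in which two networks optimize directly opposing objectives, yet the combined system still learns something useful.

```python
import torch
import torch.nn as nn

# Two tiny networks with opposing objectives (a toy GAN on 1-D data).
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, 1) * 0.5 + 2.0  # the "data" distribution
    fake = G(torch.randn(32, 4))

    # Discriminator cost: tell real from fake.
    d_loss = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator cost: the exact opposite -- fool the discriminator.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```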

Q: Is there any research relating the morphological structure of the dendritic tree to the types of pattern recognition problems it is best suited for?

Blake: Yes, see Michael Häusser’s work on direction selectivity in dendrites.

Q: Is it realistic to assume the large training sets of deep learning when talking about similar computations in neural systems? Are we trying to put jet engines on a bird?

Blake: Yes and no. Does the brain have access to huge *labelled* training sets? No. Does it have access to huge unlabelled datasets? You bet! But, the need for end-to-end credit assignment remains, in my opinion, regardless of whether you have a large labelled dataset or not.

Q: Most of machine learning/deep learning is primarily three things: tensor/matrix math, optimization/back-propagation, and a little bit of non-linearity. Are neurons/brains as mathematical and, may I say, as simple?

Blake: Nothing in reality is as simple as any model in science. Let’s be clear on this: all models are abstractions that only apply in certain contexts. Newtonian physics is not actually how the world works, but it’s a damn useful model when working at the human scale. The question we have to ask ourselves is whether the model captures the key components of the system we’re interested in, in the context we’re interested in. Newtonian physics does at the human scale, as I noted, but it sucks at really big or small scales.

So, do ANNs capture the brain in a useful way, even though they use relatively simple math? If our context is the behavioural level, then arguably yes: ANNs capture a big part of what we’re after, since we have had more success getting ANNs to perform complex behaviours than we have had with any of the more complicated biological models out there. For example, BlueBrain is a cool model that’s really useful for studying physiology, but it can’t *do* anything at the behavioural level. If your goal is AI, modelling things like spike trains, cell types, protein phosphorylation, receptor localization, etc., is probably not going to be useful. The take-home message is this: when you notice the ways in which a model fails to capture reality, that does not mean it is a useless model. It depends on what your goals are.
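For readers who want the question’s “three things” spelled out, here is a minimal sketch (NumPy, toy data; everything here is illustrative) of exactly that trio: matrix math, a little non-linearity, and optimization via back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # toy inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float).reshape(-1, 1)

W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))

for _ in range(500):
    # 1) tensor/matrix math, 2) a little non-linearity
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output

    # 3) optimization via back-propagation (hand-derived gradients)
    dp = (p - y) / len(X)       # grad of sigmoid + cross-entropy
    dW2 = h.T @ dp
    dh = dp @ W2.T * (1 - h**2)  # tanh derivative
    dW1 = X.T @ dh
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2
```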

Authors

Subutai Ahmad and Lucas Souza • Brains@Bay
