Dendritic Learning Group (DLG)

From brain architecture to algorithm

About

Neuromorphic computing systems aim to follow the design principles of the biological brain to achieve performant and energy-efficient computing by emulating its architecture and algorithm. Broadly speaking, the architecture of the brain can be thought of as its layout, optimised through millions of years of evolution but mostly static throughout the life of an individual. The algorithm, on the other hand, encompasses how dynamic changes during development lead to a functional brain, capable of intelligent behaviours such as perceiving and interacting with dynamic environments.


Much is known about the architecture of the brain: at the microscale, neurons have extensive tree-like branches that process specific inputs in distinct ways; at the mesoscale, local brain circuits are highly organised, with specific recurrent connectivity motifs; and at the macroscale, brain areas connect to each other in a manner that is reproduced across species.


However, much less is known about the algorithm of the brain: tracking and mechanistically understanding plasticity processes over developmental time scales is still infeasible, and will likely remain so for decades to come.


The DLG starts from the hypothesis that the algorithm of the brain is closely adapted to its architecture (algorithm follows architecture). Consequently, we take cues from the architecture of the brain to conceive plausible and performant learning algorithms. In particular, we study how the biophysical layout of individual neurons — specifically the input-receiving dendritic trees — can be exploited by learning algorithms in a natural manner. The DLG then aims to use these insights into the algorithm of the brain to improve neuromorphic chip design.

Research Topics

  • Contextual adaptation: one of our hypotheses is that dendritic processes modulate feedforward processing to adapt neural networks to new tasks and contexts (see the first sketch after this list). We study the suitability of this architecture for multitask and continual learning.
  • Sequence learning: dendritic processes endow neurons with the capability to maintain a memory that is not influenced by output spikes (see the second sketch below), thereby providing a powerful substrate for learning relationships encoded in long input sequences.
  • Local learning rules: although it is widely assumed that learning in the brain proceeds through quantities locally available at synapses, such local rules have so far failed to match the performance of the backpropagation-trained networks used in AI. We investigate how the richness of dendritic dynamics may enhance local learning rules.
  • Network dynamics: while the dynamics of networks of simple point neurons are well understood, how networks behave when neurons have realistic dendrites is much less studied. We study how dendrites enrich the dynamics of recurrently connected neural networks.
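
To make the contextual-adaptation idea concrete, here is a minimal toy sketch (our own illustration, not the group's published model) of a two-compartment rate neuron in Python/NumPy: a dendritic compartment receives a context signal and multiplicatively gates the feedforward drive arriving at the soma. All names (W_ff, W_ctx) and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-compartment rate neuron: the dendrite receives a
# context signal and multiplicatively gates the somatic feedforward drive.
n_in, n_ctx, n_neurons = 20, 5, 10

W_ff = rng.normal(scale=0.3, size=(n_neurons, n_in))    # feedforward weights (soma)
W_ctx = rng.normal(scale=0.3, size=(n_neurons, n_ctx))  # contextual weights (dendrite)

def rate(x, context):
    """Firing rate with dendritic gain modulation of the somatic drive."""
    somatic_drive = W_ff @ x                                    # feedforward pathway
    dendritic_gain = 1.0 / (1.0 + np.exp(-(W_ctx @ context)))   # sigmoid gate in [0, 1]
    return np.maximum(0.0, dendritic_gain * somatic_drive)      # rectified, gated rate

x = rng.normal(size=n_in)
for c in (np.eye(n_ctx)[0], np.eye(n_ctx)[1]):   # two different "contexts"
    print(rate(x, c)[:3])                        # same input, context-dependent output
```

The same feedforward input thus yields different network responses depending on the context, without any change to the feedforward weights themselves.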
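
Similarly, the sequence-learning bullet can be illustrated with a toy discrete-time neuron (again our own sketch with illustrative constants): a dendritic state variable integrates inputs with a slow time constant and, unlike the somatic voltage, is never reset by output spikes, so it carries a memory of past inputs across spiking events.

```python
import numpy as np

# Toy neuron with a dendritic memory trace: the dendritic state d keeps
# integrating past inputs, while the somatic voltage v spikes and resets.
dt, tau_d, tau_v, v_thresh = 1.0, 50.0, 10.0, 1.0
alpha_d = np.exp(-dt / tau_d)   # slow dendritic leak
alpha_v = np.exp(-dt / tau_v)   # fast somatic leak

d, v = 0.0, 0.0
inputs = np.r_[np.ones(20), np.zeros(80)]   # brief input burst, then silence

for t, x in enumerate(inputs):
    d = alpha_d * d + (1 - alpha_d) * x     # dendritic trace keeps integrating
    v = alpha_v * v + d                     # soma is driven by the dendrite
    if v >= v_thresh:
        v = 0.0                             # somatic spike and reset: d is untouched
    if t % 20 == 0:
        print(f"t={t:3d}  dendrite={d:.3f}  soma={v:.3f}")
```

Long after the input burst ends, the dendritic trace still reflects it, even though the somatic voltage has been reset repeatedly in the meantime.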

Members

News

Last Modified: 26.01.2024