A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel, led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster deep-learning algorithms.
A
biological schema of an output neuron, comprising a neuron’s soma
(body, shown as gray circle, top) with two roots of dendritic trees
(light-blue arrows), splitting into many dendritic branches (light-blue
lines). The signals arriving from the connecting input neurons (gray
circles, bottom) travel via their axons (red lines) and their many
branches until terminating with the synapses (green stars). There, the
signals connect with dendrites (some synapse branches travel to other
neurons), which then connect to the soma. (credit: Shira Sardi et al./Scientific Reports)
The brain is a highly complex network containing billions of neurons.
Each of these neurons communicates simultaneously with thousands of
others via their synapses. A neuron collects its many synaptic incoming
signals through dendritic trees.
In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses.
Hebb’s theory has remained a deeply rooted assumption in neuroscience.
Synaptic vs. dendritic learning
In vitro
experimental setup. A micro-electrode array comprising 60 extracellular
electrodes separated by 200 micrometers, indicating a neuron patched
(connected) by an intracellular electrode (orange) and a nearby
extracellular electrode (green line). (Inset) Reconstruction of a
fluorescence image, showing a patched cortical pyramidal neuron (red)
and its dendrites growing in different directions and in proximity to
extracellular electrodes. (credit: Shira Sardi et al./Scientific Reports
adapted by KurzweilAI)
Hebb was wrong, says Kanter. “A new type of experiments strongly
indicates that a faster and enhanced learning process occurs in the
neuronal dendrites, similarly to what is currently attributed to the
synapse,” Kanter and his team suggest in an
open-access paper in Nature’s
Scientific Reports, published Mar. 23, 2018.
“In this new [faster] dendritic learning process, there are [only] a
few adaptive parameters per neuron, in comparison to thousands of tiny
and sensitive ones in the synaptic learning scenario,” says Kanter.
“Does it make sense to measure the quality of the air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks.
“Similarly, it is more efficient for the neuron to estimate its incoming
signals close to its computational unit, the neuron.”
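To make the contrast in parameter counts concrete, here is a minimal, illustrative sketch in Python. It is not the paper's actual model: the input count, the number of dendritic trees, the fixed synapse-to-dendrite assignment, and the per-dendrite gains below are all hypothetical, chosen only to show the difference between adapting one weight per synapse and adapting a few gains per neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 5000           # incoming synapses onto one neuron (assumed)
n_dendrites = 2           # dendritic trees, as in the figure above
x = rng.random(n_inputs)  # incoming signals (arbitrary units)

# --- Synaptic scenario: one tiny adjustable weight per synapse --------
w = rng.random(n_inputs)            # thousands of learning parameters
synaptic_drive = w @ x

# --- Dendritic scenario: a few adjustable gains per neuron ------------
# Each synapse is assigned (fixed) to one dendritic tree; only the
# per-dendrite gains g would be adapted during learning.
assignment = rng.integers(0, n_dendrites, size=n_inputs)
g = np.ones(n_dendrites)            # only a few learning parameters
dendritic_drive = sum(
    g[d] * x[assignment == d].sum() for d in range(n_dendrites)
)

print("adjustable parameters, synaptic :", w.size)   # 5000
print("adjustable parameters, dendritic:", g.size)   # 2
```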
Image representing the current synaptic (pink) vs. the new dendritic (green) learning scenarios of the brain. In the current scenario, a neuron (black) with a small number of dendritic trees (two in this example, center) collects incoming signals via synapses (represented by red valves), with many thousands of tiny adjustable learning parameters. In the new dendritic learning scenario (green), a few adjustable controls (two in this example, red valves) are located in close proximity to the computational element, the neuron. The scale is such that, if a neuron collecting its incoming signals were represented by a person’s faraway fingers, the arms would be as long as a skyscraper is tall (left). (credit: Prof. Ido Kanter)
The researchers also found that weak synapses, which make up the majority of the synapses in our brain and were previously assumed to be insignificant, actually play an important role in the dynamics of our brain.
According to the researchers, the new learning theory may lead to advanced, faster deep-learning algorithms and other artificial-intelligence-based applications. It also suggests that we need to reevaluate our current treatments for disordered brain function.
This research is supported in part by the TELEM grant of the Israel Council for Higher Education.
Abstract of Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links
Physical models typically assume time-independent interactions,
whereas neural networks and machine learning incorporate interactions
that function as adjustable parameters. Here we demonstrate a new type
of abundant cooperative nonlinear dynamics where learning is attributed
solely to the nodes, instead of the network links, whose number is significantly larger. The nodal (neuronal) fast adaptation follows the node's relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links (synapses). It represents a non-local learning
rule, where effectively many incoming links to a node concurrently
undergo the same adaptation. The network dynamics is now
counterintuitively governed by the weak links, which previously were
assumed to be insignificant. This cooperative nonlinear dynamic
adaptation presents a self-controlled mechanism to prevent divergence or
vanishing of the learning parameters, as opposed to learning by links,
and also supports self-oscillations of the effective learning
parameters. It hints at a hierarchical computational complexity of nodes, following their number of anisotropic inputs, and opens new horizons for advanced deep-learning algorithms and artificial-intelligence-based applications, as well as a new mechanism for enhanced and fast learning by neural networks.
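The "non-local" aspect of the rule described in the abstract — many incoming links to a node concurrently undergoing the same adaptation — can be sketched as follows. This is a simplified, hypothetical stand-in written for illustration only: the shared per-node gain, the tanh-based update, and the learning rate eta are assumptions, not the adaptation dynamics reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_nodes, n_links_per_node = 3, 1000
W = rng.random((n_nodes, n_links_per_node))  # fixed link strengths
gain = np.ones(n_nodes)                      # one adaptive parameter per node
eta = 0.05                                   # assumed learning rate

for step in range(10):
    x = rng.random(n_links_per_node)         # incoming signals
    drive = gain * (W @ x)                   # effective input per node
    # Nodal adaptation: every link into node i is rescaled by the same
    # shared factor, so the rule is non-local across that node's links.
    gain *= 1.0 + eta * np.tanh(1.0 - drive / drive.mean())

print("per-node gains after adaptation:", np.round(gain, 3))
```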