In 1978, Michael J. D. White wrote about an extension of the modern synthesis based on new research on speciation. In the 1980s, entomologist Ryuichi Matsuda coined the term "pan-environmentalism" for an extended evolutionary synthesis which he saw as a fusion of Darwinism with neo-Lamarckism. He held that heterochrony is a main mechanism of evolutionary change and that novelty in evolution can be generated by genetic assimilation. An extended synthesis was also proposed by the Austrian zoologist Rupert Riedl through the study of evolvability.
Gordon Rattray Taylor in his 1983 book The Great Evolution Mystery
called for an extended synthesis, noting that the modern synthesis is
only a subsection of a more comprehensive explanation for biological
evolution still to be formulated. In 1985, biologist Robert G. B. Reid authored Evolutionary Theory: The Unfinished Synthesis, which argued that the modern synthesis, with its emphasis on natural selection, presents an incomplete picture of evolution and that emergent evolution can explain the origin of genetic variation.
In 1988, ethologist John Endler wrote about developing a newer synthesis, discussing processes of evolution that he felt had been neglected.
In 2000, Robert L. Carroll
called for an "expanded evolutionary synthesis" due to new research
from molecular developmental biology, systematics, geology and the
fossil record.
Punctuated equilibrium
In the 1980s, the American palaeontologists Stephen Jay Gould and Niles Eldredge argued for an extended synthesis based on their idea of punctuated equilibrium, the role of species selection in shaping large-scale evolutionary patterns, and natural selection operating at multiple levels, from genes to species.
Contributions from evolutionary developmental biology
Some researchers in the field of evolutionary developmental biology proposed another synthesis. They argue that the modern and extended syntheses should mostly center on genes and suggest an integration of embryology with molecular genetics and evolution, aiming to understand how natural selection operates on gene regulation and on deep homologies between organisms at the level of highly conserved genes, transcription factors and signalling pathways. By contrast, a different strand of evo-devo, following an organismal approach, contributes to the extended synthesis by emphasizing, among other factors, developmental bias (through both facilitation and constraint), evolvability, and the inherency of form as primary factors in the evolution of complex structures and phenotypic novelties.
Recent history
Massimo Pigliucci, a leading proponent of the extended evolutionary synthesis in its 2007 form
The idea of an extended synthesis was relaunched in 2007 by Massimo Pigliucci and Gerd B. Müller, with a book in 2010 titled Evolution: The Extended Synthesis, which has served as a launching point for work on the extended synthesis. This includes:
The role of prior configurations, genomic structures, and other traits in the organism in generating evolutionary variations.
How increasing dimensionality of fitness landscapes affects our view of speciation.
How organisms modify the environments they belong to through niche construction.
Other processes such as evolvability, phenotypic plasticity, reticulate evolution, horizontal gene transfer and symbiogenesis are said by proponents to have been excluded from or missed by the modern synthesis. The goal of Pigliucci's and Müller's extended synthesis is to take evolution beyond the gene-centered approach
of population genetics to consider more organism- and ecology-centered
approaches. Many of these causes are currently considered secondary in
evolutionary causation, and proponents of the extended synthesis want
them to be considered first-class evolutionary causes.
Michael R. Rose
and Todd Oakley have called for a postmodern synthesis, commenting that "it is now abundantly clear that living things often attain a degree of genomic complexity far beyond simple models like the 'gene library' genome of the Modern Synthesis". Biologist Eugene Koonin has suggested that the gradualism of the modern synthesis is unsustainable, as gene duplication, horizontal gene transfer and endosymbiosis play a pivotal role in evolution. Koonin commented that "the new developments in evolutionary biology by
no account should be viewed as refutation of Darwin. On the contrary,
they are widening the trails that Darwin blazed 150 years ago and reveal
the extraordinary fertility of his thinking."
Arlin Stoltzfus and colleagues advocate mutational and developmental bias in the introduction of variation as an important source of orientation or direction in evolutionary change. They argue that bias in the introduction of variation was not formally recognized throughout the 20th century, owing to the influence of neo-Darwinism on thinking about causation.
Organism-centered evolution
The early biologists of the organicist
movement have influenced the modern extended evolutionary synthesis.
Recent research has called for expanding the population-genetic framework of evolutionary biology with a more organism-centered perspective. This has been described as "organism-centered evolution", which looks beyond the genome to the ways that individual organisms are participants
in their own evolution. Philip Ball has written a research review on organism-centered evolution.
In her 2003 book Developmental Plasticity and Evolution, Mary Jane West-Eberhard argued that biological theory cannot make sense of evolution if it lacks a theory of the phenotype. Only when we have understood the phenotype and its responsiveness to the environment (or phenotypic plasticity),
she says, can we understand its development, selection, and
evolution—and the place of genes therein: 'Why start with the phenotype
and its development? Because that is where evolution starts.' Likewise,
development itself depends at every step 'on the pre-existent structure
of the phenotype.' Crucially for West-Eberhard, like Darwin before her,
phenotypic plasticity includes the agency of organisms (as well as
ontogenetic and epigenetic flexibility) because it is as agents that
organisms participate in the struggle for existence from which natural selection results: "genes are usually followers, not leaders, in evolutionary change."
Rui Diogo has proposed a revision of evolutionary theory, which he has termed ONCE: Organic Nonoptimal Constrained Evolution. According to ONCE, evolution is mainly driven by the behavioural
choices and persistence of organisms themselves, whilst natural
selection plays a secondary role. ONCE cites examples such as reciprocal causation between organism and environment, the Baldwin effect, organic selection, developmental bias and niche construction.
Predictions
The extended synthesis is characterized by its additional set of
predictions that differ from the standard modern synthesis theory:
Rapid evolution can result from simultaneous induction, natural selection and developmental dynamics
Biodiversity can be affected by features of developmental systems such as differences in evolvability
Heritable variation is directed towards variants that are adaptive and integrated with the phenotype
Niche construction
is biased towards environmental changes that suit the constructor's
phenotype, or that of its descendants, and enhance their fitness
Publications from the project include over 200 papers, a special issue, and an anthology on Evolutionary Causation. In 2019 a final report of the 2016–2019 consortium was published, Putting the Extended Evolutionary Synthesis to the Test.
The project was headed by Kevin N. Laland at the University of St Andrews and Tobias Uller at Lund University.
According to Laland what the extended synthesis "really boils down to
is recognition that, in addition to selection, drift, mutation and other
established evolutionary processes, other factors, particularly
developmental influences, shape the evolutionary process in important
ways."
Status
Biologists disagree on the need for an extended synthesis. Opponents
contend that the modern synthesis is able to fully account for the newer
observations, whereas others criticize the extended synthesis for not
being radical enough. Proponents think that the conceptions of evolution at the core of the modern synthesis are too narrow and that even when the modern synthesis allows for the ideas in the
extended synthesis, using the modern synthesis affects the way that
biologists think about evolution. For example, Denis Noble
says that using terms and categories of the modern synthesis distorts
the picture of biology that modern experimentation has discovered. Proponents therefore claim that the extended synthesis is necessary to
help expand the conceptions and framework of how evolution is considered
throughout the biological disciplines. In 2022, the John Templeton Foundation published a review of recent literature.
The Human Brain Project (HBP) was an EU scientific research project that ran for ten years from 2013 to 2023, with a total budget of €1 billion. Using high-performance exascale supercomputers, it built infrastructure that allowed researchers to advance knowledge in the fields of neuroscience, computing and brain-related medicine. Its successor was the European Brain Research Infrastructures (EBRAINS) project.
Peer-reviewed research found that the project's public discussion forum (the Human Brain Project forum) was actively used and remained resilient during the COVID-19 pandemic. The forum proved most useful for resolving questions about programming issues and topics close to the HBP's core areas.
Strategic goals and organisation
The 2013 HBP Summit, the inauguration of the Project, took place at the EPFL Learning Centre in October 2013. It brought together scientists from over 100 Partner Institutions.
Fundamental to the HBP approach is investigating the brain on different spatial and temporal scales (from the molecular level to the large networks underlying higher cognitive processes, and from milliseconds to years). To achieve this goal, the HBP relies on the
collaboration of scientists from diverse disciplines, including
neuroscience, philosophy and computer science, to take advantage of the
loop of experimental data, modelling theories and simulations. The idea
is that empirical results are used to develop theories, which then
foster modelling and simulations which result in predictions that are in
turn verified by empirical results.
The primary objective of the HBP is to create an ICT-based
research infrastructure for brain research, cognitive neuroscience and brain-inspired computing, which can be used by researchers world-wide.
The Project is divided into 12 Subprojects. Six of these develop
ICT-based platforms (Subprojects 5-10), which consist of prototype
hardware, software, databases, and programming interfaces. These tools
are available to researchers worldwide via the HBP Collaboratory.
Four Subprojects gather data on empirical neuroscience and establish
theoretical foundations (Subprojects 1–4) and one is responsible for
ethics and society (Subproject 12). Subproject 11 coordinates the
project.
SP1 Mouse Brain Organisation: Understanding the structure of the mouse brain, and its electrical and chemical functions
SP2 Human Brain Organisation: Understanding the structure of the human brain, and its electrical and chemical functions
SP3 Systems and Cognitive Neuroscience: Understanding how the brain
performs its systems-level and cognitive functional activities
SP4 Theoretical Neuroscience: Deriving high-level mathematical models to synthesize conclusions from research data
SP5 Neuroinformatics Platform: Gathering, organising and making available brain data
SP6 Brain Simulation Platform: Developing data-driven reconstructions of brain tissue and simulation capabilities to explore these reconstructions
SP7 High-performance Analytics and Computing
Platform: Providing the ICT capability to map the brain in
unprecedented detail, construct complex models, run large simulations,
and analyse large volumes of data
SP8 Medical Informatics
Platform: Developing the infrastructure to share hospital and medical
research data for the purpose of understanding disease clusters and
their respective disease signatures
SP9 Neuromorphic Computing Platform: Developing and applying brain-inspired computing technology
SP10 Neurorobotics Platform: Developing virtual and real robots and environments for testing brain simulations
SP11 Management and Coordination: General coordination of the project
SP12 Ethics and Society: Exploring the ethical and societal impact of HBP's work
The scientific direction is provided by representatives from each
of the HBP's Subprojects, which form the Science and Infrastructure
Board (SIB). Katrin Amunts from Forschungszentrum Jülich is the Chair of the SIB. Alois Knoll from TU Munich
is Vice Chair of the SIB for software. The Directorate, led by Andreas Mortensen from EPFL, steers the daily work of the HBP.
Funding
The HBP is funded by the European Commission Directorate General for
Communications Networks, Content and Technology (DG CONNECT) under the FP7 framework,
an EU Research and Innovation funding programme. It was one of the two
initial Future and Emerging Technologies (FET) Flagship projects.
The project is split into five phases, each supplied with
separate funding. The call for funding for the Project's initial
two-and-a-half-year 'Ramp-Up Phase' of EUR 54 million closed in
November 2013 and the results were announced in March 2014. Twenty-two
projects from thirty-two organisations were selected for the initial
funding of EUR 8.3 million. The Ramp-Up Phase ended on 31 March 2016. Funding is reassessed every two years through Specific Grant Agreements (SGAs): the first began in April 2016 (ending in April 2018), and the second, with total EU funding of EUR 88 million, started in April 2018 (ending in March 2020). The HBP's total costs are estimated at EUR 1.019 billion, of
which EUR 500 million will be provided by the European Commission, EUR
500 million by national, public and private organisations, and EUR 19
million by the Core Project Ramp-Up Phase Partners.
Obstacles
One of the Project's primary hurdles is the unsystematic nature of
the information collected from previous brain research. Neurological
research data varies by biological organisation schemes, species
studied, and by developmental stages, making it difficult to collectively use the data to replicate the brain in a model that acts as a single system.
Other obstacles include engineering problems involving power consumption, memory, and storage. For example, detailed neuron representations are very computationally expensive, and whole brain simulation is at the leading edge of our computational capability.
Implications
The Human Brain Project moved to Campus Biotech in 2014.
Technologies generated by the HBP and other similar projects offer
several possibilities to other fields of research. For instance, a brain
model can be used to investigate signatures of disease in the brain and the impact of certain drugs,
enabling the development of better diagnosis and treatment methods.
Ultimately, these technologies will likely lead to more advanced medical
options available to patients at a lower cost.
In addition, detailed brain simulation requires significant computing power, leading to developments in supercomputing and energy-efficient, brain-inspired computing techniques. Computational developments can be extended into areas such as data mining, telecommunications, appliances, and other industrial uses.
The long-term ethical consequences of the Project are also
considered. The Project follows a policy of Responsible Research and
Innovation, and its Ethics Advisory Board is responsible for monitoring
the use of human volunteers, animal subjects, and the data collected.
Implications on European society, industry, and economy are investigated
by the HBP Ethics and Society Programme's Foresight Lab.
Criticism
An open letter was sent on 7 July 2014 to the European Commission by 154 European researchers (750 signatures as of 3 September 2014), complaining of the HBP's overly narrow approach and threatening to boycott the project. Central to this controversy was an internal dispute about funding for
cognitive scientists who study high level brain functions, such as
thought and behaviour. However, the HBP stated that there is "no
question that cognition and behaviour are vital to the HBP", explaining
that cognitive neuroscience research was repositioned in the project to
allow the core project to focus on building the platforms. In addition,
the open letter called on the EC to "reallocate the funding currently
allocated to the HBP core and partnering projects to broad
neuroscience-directed funding to meet the original goals of the
HBP—understanding brain function and its effect on society". In its
response, the HBP said that "while neuroscience research generates a
vast amount of valuable data, there is currently no technology for
sharing, organising, analysing or integrating this information, beyond
papers and even databases. The HBP will provide the critical missing
layer to move towards a multi-level reconstruction and simulation of the
brain". It added that "cognitive and behavioural neuroscience will
become the most significant component of neuroscience in HBP over the
course of the project. However, for this to happen the platforms have to
be in place first".
Peter Dayan, director of computational neuroscience at University College London, argued that the goal of a large-scale simulation of the brain is radically premature, and Geoffrey Hinton said that "the real problem with that project is they have no clue how to get a large system like that to learn". Similar concerns as to the project's methodology were raised by Robert Epstein.
The HBP has said that its members share the uncertainty
surrounding large-scale simulation, but that "reconstructing and
simulating the human brain is a vision, a target; the benefits will come
from the technology needed to get there. That technology, developed by
the HBP, will benefit all of neuroscience as well as related fields".
In 2015 the project underwent a review process and the three-member executive committee, led by Henry Markram, was dissolved and replaced by a 22-member governing board.
According to a 2019 article, "In 2013, the European Commission
awarded his initiative—the Human Brain Project (HBP)—a staggering 1
billion euro grant (worth about $1.42 billion at the time)...the people I
contacted struggled to name a major contribution that the HBP has made
in the past decade." Another article concluded that "Ultimately, the mega-project did create communities of scientists focused on some common goals, he says. 'That's an enduring legacy.'"
Legacy
The European Brain Research Infrastructures (EBRAINS)
is a research infrastructure created through HBP "that gathers an
extensive range of data and tools for brain related research". EBRAINS consists of a set of infrastructure initiatives (such as brain atlases), tools and services (such as the sPyNNaker software suite for SpiNNaker hardware), and community projects. It is an international non-profit association headquartered in Brussels, Belgium, and a member of the European Open Science Cloud association.
An artificial neural network is an interconnected group of nodes, inspired by a simplification of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one artificial neuron to the input of another.
A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons
in the brain. Artificial neuron models that mimic biological neurons
more closely have also been recently investigated and shown to
significantly improve performance. These are connected by edges, which model the synapses
in the brain. Each artificial neuron receives signals from connected
neurons, then processes them and sends a signal to other connected
neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the totality of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers
may perform different transformations on their inputs. Signals travel
from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.
Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
Training
Neural networks are typically trained through empirical risk minimization,
which is based on the idea of optimizing the network's parameters to
minimize the difference, or empirical risk, between the predicted output
and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data.
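As an illustration of this training loop, the following hedged sketch (in NumPy, with an invented dataset and a single linear parameter vector, none of which come from the text above) minimizes the empirical risk, here the mean squared error between predictions and targets, by plain gradient descent; backpropagation generalizes this gradient computation to multi-layer networks.

import numpy as np

# Toy data (assumed for illustration): 100 samples, 3 features, linear targets with noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)                                # parameters to be learned
learning_rate = 0.1

for step in range(200):
    residuals = X @ w - y                      # difference between predicted and target values
    gradient = 2 * X.T @ residuals / len(y)    # gradient of the empirical risk w.r.t. w
    w -= learning_rate * gradient              # gradient-based parameter update

print("final empirical risk:", np.mean((X @ w - y) ** 2))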
Simplified
example of training a neural network in object detection: The network
is trained by multiple images that are known to depict starfish and sea urchins, which are correlated with "nodes" that represent visual features.
The starfish match with a ringed texture and a star outline, whereas
most sea urchins match with a striped texture and oval shape. However,
the instance of a ring textured sea urchin creates a weakly weighted
association between them.
Subsequent run of the network on an input image: The network correctly detects the starfish. However, the weakly
weighted association between ringed texture and sea urchin also confers a
weak signal to the latter from one of two intermediate nodes. In
addition, a shell that was not included in the training gives a weak
signal for the oval shape, also resulting in a weak signal for the sea
urchin output. These weak signals may result in a false positive result for sea urchin. In
reality, textures and outlines would not be represented by single
nodes, but rather by associated weight patterns of multiple nodes.
Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network
(FNN) is a linear network, which consists of a single layer of output
nodes with linear activation functions; the inputs are fed directly to
the outputs via a series of weights. The sum of the products of the
weights and the inputs is calculated at each node. The mean squared errors
between these calculated outputs and the given target values are
minimized by creating an adjustment to the weights. This technique has
been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.
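As a concrete illustration of such a linear, single-layer network, the hedged NumPy sketch below (the data are invented for the example) obtains the weights by the closed-form least-squares solution; a linear network trained to minimize the mean squared error converges to the same weights.

import numpy as np

rng = np.random.default_rng(1)
inputs = rng.normal(size=(50, 2))                    # 50 samples feeding 2 input nodes
targets = inputs @ np.array([0.7, -1.3]) + 0.05 * rng.normal(size=50)

# Closed-form least squares: the weights minimizing the mean squared error
# between the linear outputs (sums of weight-input products) and the targets.
weights, *_ = np.linalg.lstsq(inputs, targets, rcond=None)

outputs = inputs @ weights
print("weights:", weights, "mse:", np.mean((outputs - targets) ** 2))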
Historically, digital computers such as the von Neumann model
operate via the execution of explicit instructions with access to
memory by a number of processors. Some neural networks, on the other
hand, originated from efforts to model information processing in
biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One
approach focused on biological processes while the other focused on the
application of neural networks to artificial intelligence.
In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956).
In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research. R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded
Rosenblatt in the development of a perceptron-like device." However,
"they dropped the subject."
The perceptron raised public excitement for research in Artificial
Neural Networks, causing the US government to drastically increase
funding. This contributed to "the Golden Age of AI" fueled by the
optimistic claims made by computer scientists regarding the ability of
perceptrons to emulate human intelligence.
The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962)
cited and adopted these ideas, also crediting work by H. D. Block and
B. W. Knight. Unfortunately, these early efforts did not lead to a
working learning algorithm for hidden units, i.e., deep learning.
Deep learning breakthroughs in the 1960s and 1970s
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis.
Superfluous hidden units are pruned using a separate validation set.
Since the activation functions of the nodes are Kolmogorov-Gabor
polynomials, these were also the first deep networks with multiplicative
units or "gates."
The first deep learning multilayer perceptron (MLP) trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have
made end-to-end stochastic gradient descent the currently dominant
training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the
exclusive-or circuit. This insight was irrelevant for the deep networks
of Ivakhnenko (1965) and Amari (1967).
In 1976, transfer learning was introduced in neural network learning.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
Backpropagation
Backpropagation is an efficient application of the chain rule, derived by Gottfried Wilhelm Leibniz in 1673, to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement it, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master's thesis. G. M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition. In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5
(1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was
applied by several banks to recognize hand-written numbers on checks
digitized in 32×32 pixel images.
From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments.
Recurrent neural networks
One origin of RNNs was statistical mechanics. In 1972, Shun'ichi Amari proposed modifying the weights of an Ising model by a Hebbian learning rule as a model of associative memory, adding the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNNs was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered the "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that
contain cycles, and noted that the current activity of such networks can
be affected by activity indefinitely far in the past.
In 1982 a recurrent neural network with an array architecture
(rather than a multilayer perceptron architecture), namely a Crossbar
Adaptive Array, used direct recurrent connections from the output to the supervisor
(teaching) inputs. In addition to computing actions (decisions), it
computed internal state evaluations (emotions) of the consequence
situations. Eliminating the external supervisor, it introduced the
self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist carried a debate in the early 1980s on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent of cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982, the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation. It was an example of an AI system, a recurrent neural network, contributing to an issue simultaneously being addressed by cognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor", which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM, which required the forget gate introduced in 1999. LSTM became the default choice for RNN architecture.
Between 2009 and 2012, ANNs began winning prizes in image recognition
contests, approaching human level performance on various tasks,
initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella,
and Jürgen Schmidhuber achieved for the first time superhuman
performance in a visual pattern recognition contest, outperforming
traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing
allowed the use of larger networks, particularly in image and visual
recognition problems, which became known as "deep learning".
Radial basis function
and wavelet networks were introduced in 2013. These can be shown to
offer best approximation properties and have been applied in nonlinear system identification and classification applications.
Generative adversarial networks (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during the 2014–2018 period.
The GAN principle was originally published in 1991 by Jürgen
Schmidhuber who called it "artificial curiosity": two neural networks
contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need. It requires computation time that is quadratic in the size of the
context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer. Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture.
Neuron and myelinated axon, with signal flow from inputs at dendrites to outputs at axon terminals
ANNs began as an attempt to exploit the architecture of the human
brain to perform tasks that conventional algorithms had little success
with. They soon reoriented towards improving empirical results,
abandoning attempts to remain true to their biological precursors. ANNs
have the ability to learn and model non-linearities and complex
relationships. This is achieved by neurons being connected in various
patterns, allowing the output of some neurons to become the input of
others. The network forms a directed, weighted graph.
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links
like a biological axon-synapse-dendrite connection. All the nodes
connected by links take in some data and use it to perform specific
operations and tasks on the data. Each link has a weight, which determines the strength of one node's influence on another and thereby modulates the signal passed between neurons.
ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons.
The inputs can be the feature values of a sample of external data, such
as images or documents, or they can be the outputs of other neurons.
The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. This weighted sum is sometimes called the activation.
This weighted sum is then passed through a (usually nonlinear)
activation function to produce the output. The initial inputs are
external data, such as images and documents. The ultimate outputs
accomplish the task, such as recognizing an object in an image.
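A minimal sketch of this computation for a single artificial neuron; the logistic sigmoid used as the activation function and the example numbers are assumptions made only for illustration.

import numpy as np

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term (the "activation").
    weighted_sum = np.dot(weights, inputs) + bias
    # Nonlinear activation function applied to the weighted sum (here: logistic sigmoid).
    return 1.0 / (1.0 + np.exp(-weighted_sum))

x = np.array([0.5, -1.2, 3.0])    # e.g. feature values of one external data sample
w = np.array([0.4, 0.1, -0.6])    # connection weights
b = 0.2                           # bias term
print(neuron_output(x, w, b))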
Organization
The neurons are typically organized into multiple layers, especially
in deep learning. Neurons of one layer connect only to neurons of the
immediately preceding and immediately following layers. The layer that
receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers.
Single layer and unlayered networks are also used. Between two layers,
multiple connection patterns are possible. They can be 'fully
connected', with every neuron in one layer connecting to every neuron in
the next layer. They can be pooling, where a group of neurons in
one layer connects to a single neuron in the next layer, thereby
reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
A hyperparameter is a constant parameter defining any configurable part of the learning process, whose value is set prior to training. Examples of hyperparameters include learning rate, batch size and regularization parameters. The performance of a neural network is strongly influenced by the
choice of hyperparameter values, and thus the hyperparameters are often
optimized as part of the training process, a process called
hyperparameter tuning or hyperparameter optimization.
Learning is the adaptation of the network to better handle a task by
considering sample observations. Learning involves adjusting the weights
(and optional thresholds) of the network to improve the accuracy of the
result. This is done by minimizing the observed errors. Learning is
complete when examining additional observations does not usefully reduce
the error rate. Even after learning, the error rate typically does not
reach 0. If after learning, the error rate is too high, the network
typically must be redesigned. Practically this is done by defining a cost function
that is evaluated periodically during learning. As long as its output
continues to decline, learning continues. The cost is frequently defined
as a statistic
whose value can only be approximated. The outputs are actually numbers,
so when the error is low, the difference between the output (almost
certainly a cat) and the correct answer (cat) is small. Learning
attempts to reduce the total of the differences across the observations.
Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower
ultimate accuracy, while a lower learning rate takes longer, but with
the potential for greater accuracy. Optimizations such as Quickprop
are primarily aimed at speeding up error minimization, while other
improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the
previous change to be weighted such that the weight adjustment depends
to some degree on the previous change. A momentum close to 0 emphasizes
the gradient, while a value close to 1 emphasizes the last change.
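The momentum rule described above can be sketched as follows; the quadratic toy cost, its gradient, and the particular coefficient values are illustrative assumptions, not values taken from the text.

import numpy as np

learning_rate = 0.1
momentum = 0.9                      # closer to 1: emphasize the previous change
weights = np.array([0.5, -0.2, 1.0])
previous_update = np.zeros_like(weights)

def gradient(w):
    # Gradient of an assumed quadratic cost with minimum at [1, 0, -1].
    return 2.0 * (w - np.array([1.0, 0.0, -1.0]))

for _ in range(100):
    # Blend the current gradient with the previous weight change.
    update = momentum * previous_update - learning_rate * gradient(weights)
    weights += update
    previous_update = update

print(weights)   # approaches the minimizer [1.0, 0.0, -1.0]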
Cost function
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).
Backpropagation is a method used to adjust the connection weights to
compensate for each error found during learning. The error amount is
effectively divided among the connections. Technically, backpropagation
calculates the gradient (the derivative) of the cost function
associated with a given state with respect to the weights. The weight
updates can be done via stochastic gradient descent or other methods,
such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
Supervised learning
uses a set of paired inputs and desired outputs. The learning task is
to produce the desired output for each input. In this case, the cost
function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error,
which tries to minimize the average squared error between the network's
output and the desired output. Tasks suited for supervised learning are
pattern recognition (also known as classification) and regression
(also known as function approximation). Supervised learning is also
applicable to sequential data (e.g., for handwriting, speech and gesture recognition).
This can be thought of as learning with a "teacher", in the form of a
function that provides continuous feedback on the quality of solutions
obtained thus far.
Unsupervised learning
In unsupervised learning, input data is given along with the cost function, some function of the data and the network's output. The cost function is dependent on the task (the model domain) and any a priori
assumptions (the implicit properties of the model, its parameters and
the observed variables). As a trivial example, consider the model f(x) = a, where a is a constant and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall
within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
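The trivial constant-model example above can be checked numerically. In this hedged sketch (the data values are invented), the constant that minimizes the mean squared cost coincides with the mean of the data.

import numpy as np

data = np.array([2.0, 4.0, 9.0, 1.0, 4.0])

def cost(a):
    # Cost of modelling every observation by the single constant a.
    return np.mean((data - a) ** 2)

candidates = np.linspace(data.min(), data.max(), 1001)
best = candidates[np.argmin([cost(a) for a in candidates])]
print(best, data.mean())   # both print 4.0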
In applications such as playing video games, an actor takes a string
of actions, receiving a generally unpredictable response from the
environment after each one. The goal is to win the game, i.e., generate
the most positive (lowest cost) responses. In reinforcement learning,
the aim is to weight the network (devise a policy) to perform actions
that minimize long-term (expected cumulative) cost. At each point in
time the agent performs an action and the environment generates an
observation and an instantaneous
cost, according to some (usually unknown) rules. The rules and the
long-term cost usually only can be estimated. At any juncture, the agent
decides whether to explore new actions to uncover their costs or to
exploit prior learning to proceed more quickly.
Formally, the environment is modeled as a Markov decision process (MDP) with states s ∈ S and actions a ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization
grid density for numerically approximating the solution of control
problems. Tasks that fall within the paradigm of reinforcement learning
are control problems, games and other sequential decision making tasks.
Self-learning
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output,
action (or behavior) a. It has neither external advice input nor
external reinforcement input from the environment. The CAA computes, in a
crossbar fashion, both decisions about actions and emotions (feelings)
about encountered situations. The system is driven by the interaction
between cognition and emotion. Given the memory matrix W = ||w(a,s)||, the crossbar self-learning algorithm performs the following computation in each iteration (a minimal sketch in code follows the list):
In situation s perform action a;
Receive consequence situation s';
Compute emotion of being in consequence situation v(s');
Update crossbar memory w'(a,s) = w(a,s) + v(s').
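A minimal sketch of this crossbar update loop; the environment dynamics, the genome vector of initial emotions, and the greedy action choice are illustrative assumptions rather than details specified in the text.

import numpy as np

n_actions, n_situations = 2, 3
W = np.zeros((n_actions, n_situations))     # crossbar memory w(a, s)
genome = np.array([0.0, 1.0, -1.0])         # assumed initial emotions v(s) per situation

def environment(s, a):
    # Assumed deterministic rule giving the consequence situation s' of acting in s.
    return (s + a + 1) % n_situations

s = 0
for _ in range(20):
    a = int(np.argmax(W[:, s]))             # in situation s perform action a
    s_next = environment(s, a)              # receive consequence situation s'
    v = genome[s_next]                      # emotion of being in the consequence situation
    W[a, s] += v                            # update crossbar memory w(a, s)
    s = s_next

print(W)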
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: the behavioral environment, in which it behaves, and the genetic environment, from which it receives initial emotions (only once) about situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns goal-seeking behavior in a behavioral environment that contains both desirable and undesirable situations.
Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
Stochastic neural network
Stochastic neural networks originating from Sherrington–Kirkpatrick models
are a type of artificial neural network built by introducing random
variations into the network, either by giving the network's artificial
neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
Topological deep learning
Topological deep learning, first introduced in 2017, is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics.
Two modes of learning are available: stochastic and batch. In
stochastic learning, each input creates a weight adjustment. In batch
learning, weights are adjusted based on a batch of inputs, accumulating
errors over the batch. Stochastic learning introduces "noise" into the
process, using the local gradient calculated from one data point; this
reduces the chance of the network getting stuck in local minima.
However, batch learning typically yields a faster, more stable descent
to a local minimum, since each update is performed in the direction of
the batch's average error. A common compromise is to use "mini-batches",
small batches with samples in each batch selected stochastically from
the entire data set.
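A hedged sketch of the mini-batch compromise; the dataset, the linear model, and its gradient are invented placeholders that stand in for a real network and its backpropagated gradients.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = X @ np.array([1.0, -1.0, 0.5, 2.0])

w = np.zeros(4)
learning_rate = 0.05
batch_size = 32

for epoch in range(20):
    order = rng.permutation(len(X))                  # samples selected stochastically
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]      # one small mini-batch
        residuals = X[batch] @ w - y[batch]
        # Update in the direction of the batch's average error.
        gradient = 2 * X[batch].T @ residuals / len(batch)
        w -= learning_rate * gradient

print(w)   # approaches [1.0, -1.0, 0.5, 2.0]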
ANNs have evolved into a broad family of techniques that have
advanced the state of the art across multiple domains. The simplest
types have one or more static components, including number of units,
number of layers, unit weights and topology.
Dynamic types allow one or more of these to evolve via learning. The
latter is much more complicated but can shorten learning periods and
produce better results. Some types allow/require learning to be
"supervised" by the operator, while others operate independently. Some
types operate purely in hardware, while others are purely software and
run on general purpose computers.
Some of the main breakthroughs include:
Convolutional neural networks (CNNs), which have proven particularly successful in processing visual and other two-dimensional data; they have also been applied to fraud detection.
Long short-term memory (LSTM) networks, which avoid the vanishing gradient problem and can handle signals that mix low- and high-frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-realistic talking heads.
Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input.
Network design
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the
application. Model parameters include the number, type, and
connectedness of network layers, as well as the size of each and the
connection type (full, pooling, etc.). Overly complex models learn
slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an
algorithm for training on unseen data requires significant
experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
Neural architecture search
(NAS) uses machine learning to automate ANN design. Various approaches
to NAS have designed networks that compare well with hand-designed
systems. The basic search algorithm is to propose a candidate model,
evaluate it against a dataset, and use the results as feedback to teach
the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides helper functions for building a deep network from scratch, and deep networks can also be implemented with TensorFlow or Keras.
Hyperparameters must also be defined as part of the design (they
are not learned), governing matters such as how many neurons are in each
layer, learning rate, step, stride, depth, receptive field and padding
(for CNNs), etc. The Python code snippet below provides an overview of a training function, which uses the training dataset, the number of hidden layer units, the learning rate, and the number of iterations as parameters:
import numpy as np

# Helper activation functions assumed by the training routine below.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def train(X, y, n_hidden, learning_rate, n_iter):
    """Training function.

    Args:
        X: Input samples, shape (m, n_input).
        y: Target values, shape (m, 1).
        n_hidden: The number of hidden layer units.
        learning_rate: The learning rate.
        n_iter: The number of iterations.

    Returns:
        dict: A dictionary of trained weights and biases.
    """
    m, n_input = X.shape
    # 1. randomly initialize weights and biases
    w1 = np.random.randn(n_input, n_hidden)
    b1 = np.zeros((1, n_hidden))
    w2 = np.random.randn(n_hidden, 1)
    b2 = np.zeros((1, 1))
    # 2. in each iteration, feed all layers with the latest weights and biases
    for i in range(n_iter + 1):
        z2 = np.dot(X, w1) + b1
        a2 = sigmoid(z2)
        z3 = np.dot(a2, w2) + b2
        a3 = z3
        dz3 = a3 - y
        dw2 = np.dot(a2.T, dz3)
        db2 = np.sum(dz3, axis=0, keepdims=True)
        dz2 = np.dot(dz3, w2.T) * sigmoid_derivative(z2)
        dw1 = np.dot(X.T, dz2)
        db1 = np.sum(dz2, axis=0)
        # 3. update weights and biases with gradients
        w1 -= learning_rate * dw1 / m
        w2 -= learning_rate * dw2 / m
        b1 -= learning_rate * db1 / m
        b2 -= learning_rate * db2 / m
        if i % 1000 == 0:
            print("Epoch", i, "loss: ", np.mean(np.square(dz3)))
    model = {"w1": w1, "b1": b1, "w2": w2, "b2": b2}
    return model
Monitoring and concept drift detection of ANNs
When neural networks are deployed in real-world applications, the
statistical properties of the input data may change over time, a
phenomenon known as concept drift or non-stationarity.
Drift can reduce predictive accuracy and lead to unreliable or biased
decisions if it is not detected and corrected. In practice, this means
that the model's accuracy in deployment may differ substantially from
the levels observed during training or cross-validation.
Several strategies have been developed to monitor neural networks for drift and degradation:
Error-based monitoring: comparing current predictions
against ground-truth labels when they become available. This approach
directly quantifies predictive performance but may be impractical when
labels are delayed or costly to obtain.
Data distribution monitoring: detecting changes in the input data distribution using statistical tests, divergence measures, or density-ratio estimation (a minimal sketch follows this list).
Representation monitoring: tracking the distribution of internal embeddings
or hidden-layer features. Shifts in the latent representation can
indicate nonstationarity even when labels are unavailable. Statistical
methods such as statistical process control charts have been adapted for this purpose.
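As a minimal sketch of the data distribution monitoring strategy mentioned above, the snippet below compares each input feature in a reference window against a live window with a two-sample Kolmogorov-Smirnov test from SciPy; the simulated shift and the significance threshold are assumptions chosen only for illustration.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
reference = rng.normal(size=(5000, 3))     # inputs representative of the training distribution
live = rng.normal(size=(1000, 3))          # inputs observed in deployment
live[:, 1] += 0.5                          # simulated drift in feature 1

for feature in range(reference.shape[1]):
    result = ks_2samp(reference[:, feature], live[:, feature])
    drifted = result.pvalue < 0.01         # assumed significance threshold
    print(f"feature {feature}: KS={result.statistic:.3f}, p={result.pvalue:.3g}, drift={drifted}")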
Applications
Because of their ability to model and reproduce nonlinear processes,
artificial neural networks have found applications in many disciplines.
These include:
ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. They can also help mitigate flooding through rainfall-runoff modelling. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate and malicious activities. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing and for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, how the dynamics of neural circuitry arise from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered the long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
It is possible to create a profile of a user's interests from
pictures, using artificial neural networks trained for object
recognition.
Beyond their traditional applications, artificial neural networks
are increasingly being utilized in interdisciplinary research, such as
materials science. For instance, graph neural networks (GNNs) have
demonstrated their capability in scaling deep learning for the discovery
of new stable materials by efficiently predicting the total energy of
crystals. This application underscores the adaptability and potential of
ANNs in tackling complex problems beyond the realms of predictive
modeling and artificial intelligence, opening new pathways for
scientific discovery and innovation.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
Capacity
A model's "capacity" property corresponds to its ability to model any
given function. It is related to the amount of information that can be
stored in the network and to the notion of complexity.
Two notions of capacity are commonly used: the information capacity and the VC dimension. The information capacity of a perceptron is discussed at length in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory to find the maximum capacity under the best possible circumstances, that is, with input data in a specific form. The VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as the memory capacity.
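The information capacity of a single perceptron can be illustrated with Cover's counting argument: among all dichotomies of n points in general position in d dimensions, exactly half remain linearly separable when n equals twice the number of inputs. The following is a minimal sketch of that calculation (the choice of d and the sample sizes are illustrative, not from the article):

```python
from math import comb

def cover_count(n, d):
    """Number of dichotomies of n points in general position in R^d
    realizable by a homogeneous linear threshold unit (Cover, 1965)."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

d = 25                           # number of perceptron inputs (illustrative)
for n in (d, 2 * d, 4 * d):
    frac = cover_count(n, d) / 2 ** n
    print(f"n = {n:3d} points: {frac:.3f} of all dichotomies are linearly separable")

# At n = 2d (the information capacity) exactly half of the dichotomies are
# separable; well beyond that point almost none are.
```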
Convergence
Models may not consistently converge on a single solution, firstly
because local minima may exist, depending on the cost function and the
model. Secondly, the optimization method used might not be guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may pass through saddle points, which can stall progress or steer convergence in an unhelpful direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example is that when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
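The spectral bias can be observed directly in a small experiment. The following is a minimal sketch (the network size, target function and training schedule are assumed for illustration, not taken from the article): a small MLP is trained to fit a target containing a low- and a high-frequency sine, and the residual along each component is tracked during training.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 256).unsqueeze(1)
low = torch.sin(2 * torch.pi * x)              # low-frequency component
high = torch.sin(10 * torch.pi * x)            # high-frequency component
y = low + 0.5 * high                           # target with both components

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5001):
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            resid = (y - model(x)).squeeze()
            # Project the residual onto each frequency component; the
            # low-frequency coefficient typically shrinks first.
            c_low = (resid * low.squeeze()).mean().item()
            c_high = (resid * high.squeeze()).mean().item()
        print(f"step {step:5d}  low-freq residual {c_low:+.3f}  high-freq residual {c_high:+.3f}")
```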
Generalization and statistics
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters.
Two approaches address over-training. The first is to use cross-validation
and similar techniques to check for the presence of over-training and
to select hyperparameters to minimize the generalization error. The
second is to use some form of regularization.
This concept emerges in a probabilistic (Bayesian) framework, where
regularization can be performed by selecting a larger prior probability
over simpler models; but also in statistical learning theory, where the
goal is to minimize over two quantities: the 'empirical risk' and the
'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting, respectively.
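Both remedies can be combined in practice. The following is a minimal sketch (with synthetic data and illustrative parameter values, not from the article) using scikit-learn: cross-validation selects the strength of an L2 penalty, which is the `alpha` regularization parameter of its MLP implementation.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Synthetic classification data standing in for a real training set.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = GridSearchCV(
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    param_grid={"alpha": [1e-4, 1e-3, 1e-2, 1e-1]},  # candidate regularization strengths
    cv=5,                                            # 5-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)
print("best alpha:", search.best_params_["alpha"],
      "cross-validated accuracy:", round(search.best_score_, 3))
```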
Confidence analysis of a neural network
Supervised neural networks that use a mean squared error
(MSE) cost function can use formal statistical methods to determine the
confidence of the trained model. The MSE on a validation set can be
used as an estimate for variance. This value can then be used to
calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
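A minimal sketch of this confidence analysis, with assumed values rather than figures from the article, treats the validation MSE as an estimate of the output variance and applies a normal approximation around each prediction:

```python
import numpy as np

validation_mse = 0.04          # assumed MSE measured on a validation set
prediction = 3.2               # assumed network output for a new input

sigma = np.sqrt(validation_mse)   # standard deviation estimate
z = 1.96                          # 95% two-sided normal quantile
lower, upper = prediction - z * sigma, prediction + z * sigma
print(f"95% confidence interval: [{lower:.2f}, {upper:.2f}]")
```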
By assigning a softmax activation function, a generalization of the logistic function,
on the output layer of the neural network (or a softmax component in a
component-based network) for categorical target variables, the outputs
can be interpreted as posterior probabilities. This is useful in
classification as it gives a certainty measure on classifications.
The softmax activation function is:
$$y_i = \frac{e^{x_i}}{\sum_{j=1}^{c} e^{x_j}}$$
where $x_i$ is the input to output unit $i$ and $c$ is the number of output categories.
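A minimal sketch of this computation (not from the article) applies the usual max-subtraction trick for numerical stability before exponentiating:

```python
import numpy as np

def softmax(logits):
    """Map raw output-layer scores to a categorical probability vector."""
    shifted = logits - np.max(logits)   # avoids overflow in exp
    exp = np.exp(shifted)
    return exp / exp.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())   # probabilities sum to 1 and can be read as posteriors
```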
Criticism
Training
A common criticism of neural networks, particularly in robotics, is
that they require too many training samples for real-world operation. Any learning machine needs sufficient representative examples in order
to capture the underlying structure that allows it to generalize to new
cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least-squares algorithm for CMAC; a sketch of shuffled mini-batch training follows this paragraph. Dean Pomerleau used a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research was devoted to extrapolating multiple training scenarios from a single training experience and to preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right).
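As a minimal sketch (with hypothetical data, not from the article), two of the remedies above, reshuffling the training examples each epoch and grouping them into mini-batches, can be expressed as follows:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(1000, 8)                      # hypothetical inputs
y = torch.randint(0, 2, (1000,)).float()      # hypothetical binary labels

# shuffle=True draws a fresh random ordering of the examples every epoch.
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for xb, yb in loader:                     # one small step per mini-batch
        loss = loss_fn(model(xb).squeeze(1), yb)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last mini-batch loss {loss.item():.3f}")
```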
Theory
A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined.
This allows simple statistical association (the basic function of
artificial neural networks) to be described as learning or recognition.
In 1997, Alexander Dewdney, a former Scientific American columnist, commented that, as a result, artificial neural networks have "a something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything."
One response to Dewdney is that neural networks have been successfully
used to handle many complex and diverse tasks, ranging from autonomously
flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Neural networks, for instance, are
in the dock not only because they have been hyped to high heaven, (what
hasn't?) but also because you could create a successful net without
understanding how it worked: the bunch of numbers that captures its
behaviour would in all probability be "an opaque, unreadable
table...valueless as a scientific resource".
In spite of his emphatic declaration that science is not
technology, Dewdney seems here to pillory neural nets as bad science
when most of those devising them are just trying to be good engineers.
An unreadable table that a useful machine could read would still be well
worth having.
Although it is true that analyzing what has been learned by an
artificial neural network is difficult, it is much easier to do so than
to analyze what has been learned by a biological neural network.
Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention
mechanisms, for visualizing and explaining learned neural networks.
Furthermore, researchers involved in exploring learning algorithms for
neural networks are gradually uncovering generic principles that allow a
learning machine to be successful. For example, Bengio and LeCun (2007)
wrote an article regarding local vs non-local learning, as well as
shallow vs deep architecture.
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics
and therefore, a serial cascade cannot catch all major statistical
dependencies.
Hardware
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory
and storage. Furthermore, the designer often needs to transmit signals
through many of these connections and their associated neurons – which
require enormous CPU power and time.
Some argue that the resurgence of neural networks in the
twenty-first century is largely attributable to advances in hardware:
from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs),
has increased around a million-fold, making the standard
backpropagation algorithm feasible for training networks that are
several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering or a physical neural network
addresses the hardware difficulty directly, by constructing
non-von-Neumann chips to directly implement neural networks in
circuitry. Another type of chip optimized for neural network processing
is called a Tensor Processing Unit, or TPU.
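As a minimal sketch (not from the article) of how such accelerators are used in practice, the same PyTorch training step can be dispatched to a CUDA-capable GPU when one is available, which is where the speed-ups described above come from:

```python
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 128, device=device)            # hypothetical mini-batch
target = torch.randint(0, 10, (64,), device=device)

# One training step; the forward/backward passes run on the chosen device.
loss = loss_fn(model(x), target)
opt.zero_grad()
loss.backward()
opt.step()
print(f"one training step on {device}: loss {loss.item():.3f}")
```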
Practical counterexamples
Analyzing what has been learned by an ANN is much easier than
analyzing what has been learned by a biological neural network.
Furthermore, researchers involved in exploring learning algorithms for
neural networks are gradually uncovering general principles that allow a
learning machine to be successful. For example, local vs. non-local
learning and shallow vs. deep architecture.
Hybrid approaches
Advocates of hybrid
models (combining neural networks and symbolic approaches) say that
such a mixture can better capture the mechanisms of the human mind.
Dataset bias
Neural networks are dependent on the quality of the data they are trained on, so low-quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are
integrated into real-world scenarios where the training data may be
imbalanced due to the scarcity of data for a specific race, gender or
other attribute. This imbalance can result in the model having inadequate representation
and understanding of underrepresented groups, leading to discriminatory
outcomes that exacerbate societal inequalities, especially in
applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon
had to scrap a recruiting tool because the model favored men over women
for jobs in software engineering due to the higher number of male
workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.
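One common mitigation, sketched below with hypothetical labels rather than any dataset from the article, is to re-weight the classes so that the under-represented group contributes proportionally more to the training loss; resampling or augmenting the minority group with synthetic examples serves a similar purpose.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0] * 950 + [1] * 50)   # hypothetical, heavily imbalanced labels

weights = compute_class_weight(class_weight="balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], weights)))    # the minority class receives a larger weight

# These weights can be passed to a weighted loss function (or used to guide
# resampling) before training the network.
```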
Gallery
A single-layer feedforward artificial neural network. Some arrows originating from the inputs are omitted for clarity. There are p inputs to this network and q outputs. In this system, the value of the qth output, $y_q$, is calculated as a weighted sum of the inputs passed through an activation function, $y_q = f\!\left(\sum_{i=1}^{p} w_{iq}\, x_i\right)$.
A two-layer feedforward artificial neural network
An artificial neural network
An ANN dependency graph
A single-layer feedforward artificial neural
network with 4 inputs, 6 hidden nodes and 2 outputs. Given position
state and direction, it outputs wheel based control values.
A two-layer feedforward artificial neural
network with 8 inputs, 2x8 hidden nodes and 2 outputs. Given position
state, direction and other environment values, it outputs thruster based
control values.
Parallel pipeline structure of CMAC neural network. This learning algorithm can converge in one step.
Recent advancements and future directions
Artificial neural networks (ANNs) have undergone significant
advancements, particularly in their ability to model complex systems,
handle large data sets, and adapt to various types of applications.
Their evolution over the past few decades has been marked by a broad
range of applications in fields such as image processing, speech
recognition, natural language processing, finance, and medicine.
Image processing
In the realm of image processing, ANNs are employed in tasks such as
image classification, object recognition, and image segmentation. For
instance, deep convolutional neural networks (CNNs) have been important
in handwritten digit recognition, achieving state-of-the-art
performance. This demonstrates the ability of ANNs to effectively process and
interpret complex visual information, leading to advancements in fields
ranging from automated surveillance to medical imaging.
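As a minimal sketch (not from the article) of the kind of convolutional network used for handwritten digit recognition, the following assumes 28x28 grayscale inputs and ten digit classes; layer sizes are illustrative choices:

```python
import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),          # 28x28 input halved twice -> 7x7 feature maps
)

images = torch.randn(8, 1, 28, 28)      # a hypothetical batch of digit images
logits = cnn(images)
print(logits.shape)                     # torch.Size([8, 10]), one score per digit class
```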
Speech recognition
By modeling speech signals, ANNs are used for tasks like speaker
identification and speech-to-text conversion. Deep neural network
architectures have introduced significant improvements in large
vocabulary continuous speech recognition, outperforming traditional
techniques. These advancements have enabled the development of more accurate and
efficient voice-activated systems, enhancing user interfaces in
technology products.
Natural language processing
In natural language processing, ANNs are used for tasks such as text
classification, sentiment analysis, and machine translation. They have
enabled the development of models that can accurately translate between
languages, understand the context and sentiment in textual data, and
categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.
Control systems
In the domain of control systems, ANNs are used to model dynamic
systems for tasks such as system identification, control design, and
optimization. For instance, deep feedforward neural networks are
important in system identification and control applications.
Finance
In investing, ANNs can process vast amounts of financial data,
recognize complex patterns, and forecast stock market trends, aiding
investors and risk managers in making informed decisions.
In credit scoring, ANNs offer data-driven, personalized assessments
of creditworthiness, improving the accuracy of default predictions and
automating the lending process.
ANNs require high-quality data and careful tuning, and their
"black-box" nature can pose challenges in interpretation. Nevertheless,
ongoing advancements suggest that ANNs continue to play a role in
finance, offering valuable insights and enhancing risk management strategies.
Medicine
ANNs are able to process and analyze vast medical datasets. They
enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery,
ANNs speed up the identification of potential drug candidates and
predict their efficacy and safety, significantly reducing development
time and costs. Additionally, their application in personalized medicine and healthcare
data analysis allows tailored therapies and efficient patient care
management. Ongoing research is aimed at addressing remaining challenges such as
data privacy and model interpretability, as well as expanding the scope
of ANN applications in medicine.
Content creation
ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an
artist or musician from huge datasets and generate completely new
artworks and music compositions. For instance, DALL-E
is a deep neural network trained on 650 million pairs of images and
texts across the internet that can create artworks based on text entered
by the user. In the field of music, transformers are used to create original music
for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry, generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology
companies to analyze the financial success of a film, such as the
partnership between Warner Bros and technology company Cinelytic
established in 2020. Furthermore, neural networks have found uses in video game creation,
where non-player characters (NPCs) can make decisions based on all the
characters currently in the game.