Neuromorphic computing is a computing approach inspired by the human brain's structure and function. It uses artificial neurons to perform computations, mimicking neural systems for tasks such as perception, motor control, and multisensory integration. These systems, implemented in analog, digital, or mixed-mode VLSI,
prioritize robustness, adaptability, and learning by emulating the
brain’s distributed processing across small computing elements. This interdisciplinary field integrates biology, physics, mathematics, computer science, and electronic engineering to develop systems that emulate the brain’s morphology and computational strategies. Neuromorphic systems aim to enhance energy efficiency and computational
power for applications including artificial intelligence, pattern
recognition, and sensory processing.
History
Carver Mead proposed one of the first applications for neuromorphic engineering in the late 1980s. In 2006, researchers at Georgia Tech developed a field programmable neural array, a silicon-based chip modeling neuron channel-ion characteristics. In 2011, MIT researchers created a chip mimicking synaptic communication using 400 transistors and standard CMOS techniques.
In 2012 HP Labs researchers reported that Mott memristors exhibit volatile behavior at low temperatures, enabling the creation of neuristors that mimic neuron behavior and support Turing machine components. Also in 2012, Purdue University researchers presented a neuromorphic chip design using lateral spin valves and memristors, noted for energy efficiency.
In 2013, the Blue Brain Project created detailed digital models of rodent brains.
Neurogrid, developed by Brains in Silicon at Stanford University, used 16 NeuroCore chips, each emulating 65,536 neurons, with high energy efficiency in 2014. The 2014 BRAIN Initiative and IBM’s TrueNorth chip contributed to neuromorphic advancements.
The 2016 BrainScaleS project, a hybrid neuromorphic supercomputer at the University of Heidelberg, operated 864 times faster than biological real time.
In 2017, Intel unveiled its Loihi chip, using an asynchronous artificial neural network for efficient learning and inference. Also in 2017 IMEC’s self-learning chip, based on OxRAM, demonstrated music composition by learning from minuets.
In 2019, the European Union funded neuromorphic quantum computing to explore quantum operations using neuromorphic systems. In 2022, MIT researchers developed artificial synapses using protons for analog deep learning. Also in 2022, researchers at the Max Planck Institute for Polymer Research developed an organic artificial spiking neuron for in-situ neuromorphic sensing and biointerfacing.
Researchers reported in 2024 that chemical systems in liquid
solutions can detect sound at various wavelengths, offering potential
for neuromorphic applications.
Neurological inspiration
Neuromorphic
engineering emulates the brain’s structure and operations, focusing on
the analog nature of biological computation and the role of neurons in
cognition. The brain processes information via neurons using chemical
signals, abstracted into mathematical functions. Neuromorphic systems
distribute computation across small elements, similar to neurons, using
methods guided by anatomical and functional neural maps from electron microscopy and neural connection studies.
Implementation
Neuromorphic systems employ hardware such as oxide-based memristors, spintronic memories, threshold switches, and transistors. Software implementations train spiking neural networks using error backpropagation.
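The leaky integrate-and-fire (LIF) neuron is the basic unit that most such spiking neural networks build on. The following minimal sketch is illustrative only; the time constant, threshold, and input values are assumptions for demonstration, not parameters of any particular chip or framework.

import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values (dt, tau, v_th, v_reset, input) are illustrative.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    v = 0.0
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)   # leaky integration of the input
        if v >= v_th:                   # threshold crossing emits a spike
            spike_times.append(step * dt)
            v = v_reset                 # reset the membrane potential
    return spike_times

current = np.full(200, 1.5)             # 200 ms of constant drive
print(simulate_lif(current))            # spike times in seconds

When such networks are trained with error backpropagation, the non-differentiable spike threshold is usually replaced by a smooth surrogate gradient during the backward pass.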
Neuromemristive systems
Neuromemristive
systems use memristors to implement neuroplasticity, focusing on
abstract neural network models rather than detailed biological mimicry. These systems enable applications in speech recognition, face recognition, and object recognition, and can replace conventional digital logic gates. The Caravelli-Traversa-Di Ventra equation describes the evolution of the internal memory in memristive circuits, revealing tunneling phenomena and the existence of a Lyapunov function.
Neuromorphic sensors
Neuromorphic principles extend to sensors, such as the retinomorphic sensor or event camera, which mimic aspects of human vision by registering changes in brightness at individual pixels rather than capturing full frames, reducing power consumption.
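As a rough illustration of the event-generation principle, not the circuit-level behaviour of any specific sensor, the sketch below compares successive log-intensity frames pixel by pixel and emits an event only where the change exceeds an assumed contrast threshold.

import numpy as np

# Illustrative event-camera-style change detector: it emits (x, y, polarity)
# events only where the log-brightness change exceeds a threshold.
def generate_events(prev_frame, new_frame, threshold=0.2):
    log_prev = np.log(prev_frame.astype(float) + 1e-6)
    log_new = np.log(new_frame.astype(float) + 1e-6)
    diff = log_new - log_prev
    ys, xs = np.where(np.abs(diff) > threshold)
    # Polarity is +1 for brightening pixels and -1 for darkening pixels.
    return [(int(x), int(y), int(np.sign(diff[y, x]))) for y, x in zip(ys, xs)]

prev = np.full((4, 4), 100.0)
new = prev.copy()
new[1, 2] = 160.0                      # only one pixel changes
print(generate_events(prev, new))      # -> [(2, 1, 1)]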
Ethical considerations
Neuromorphic systems raise many of the same ethical questions as other approaches to artificial intelligence.
Daniel Lim argued that advanced neuromorphic systems could lead to
machine consciousness, raising concerns about whether civil rights and
other protocols should be extended to them. Legal debates, such as in Acohs Pty Ltd v. Ucorp Pty Ltd, question ownership of work produced by neuromorphic systems, as non-human-generated outputs may not be copyrightable.
Neurorobotics
Neurorobotics is the combined study of neuroscience, robotics, and artificial intelligence.
It is the science and technology of embodied autonomous neural systems.
Neural systems include brain-inspired algorithms (e.g. connectionist
networks), computational models of biological neural networks (e.g.
artificial spiking neural networks, large-scale simulations of neural microcircuits) and actual biological systems (e.g. in vivo and in vitro
neural nets). Such neural systems can be embodied in machines with mechanical or other forms of physical actuation. This includes robots, prosthetic or wearable systems, but also, at smaller scales, micro-machines and, at larger scales, furniture and infrastructure.
Neurorobotics lies at the intersection of neuroscience and robotics and deals with the study and application of the science and technology of embodied autonomous neural systems such as brain-inspired algorithms. It is based on the idea that the brain is embodied and the body is embedded in the environment. Therefore, most neurorobots are required to function in the real world, as opposed to a simulated environment.
Beyond brain-inspired algorithms for robots, neurorobotics may also involve the design of brain-controlled robot systems.
Major classes of models
Neurorobots
can be divided into various major classes based on the robot's purpose.
Each class is designed to implement a specific mechanism of interest
for study. Common types of neurorobots are those used to study motor
control, memory, action selection, and perception.
Locomotion and motor control
Neurorobots are often used to study motor feedback and control systems, and have proved their merit in developing controllers for robots. Locomotion
is modeled by a number of neurologically inspired theories on the
action of motor systems. Locomotion control has been mimicked using
models of central pattern generators, clumps of neurons capable of driving repetitive behavior, to make four-legged walking robots. Other groups have expanded the idea of combining rudimentary control
systems into a hierarchical set of simple autonomous systems. These
systems can formulate complex movements from a combination of these
rudimentary subsets. This theory of motor action is based on the organization of cortical columns, which progressively integrate simple sensory inputs into complex afferent signals, or decompose complex motor programs into simple controls for each muscle fiber in efferent signals, forming a similar hierarchical structure.
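A common abstraction of a central pattern generator is a set of coupled phase oscillators whose mutual coupling locks the legs into a fixed gait. The sketch below is a toy model; the frequency, coupling strength, and trot-like phase offsets are assumptions rather than values from any published robot controller.

import numpy as np

# Toy central pattern generator: four coupled phase oscillators, one per leg,
# with phase offsets chosen to produce a trot-like rhythm.
def run_cpg(steps=2000, dt=0.01, freq=1.0, coupling=2.0):
    target_offsets = np.array([0.0, np.pi, np.pi, 0.0])   # desired leg phases
    phases = np.random.uniform(0.0, 2.0 * np.pi, 4)
    outputs = []
    for _ in range(steps):
        for i in range(4):
            # Pull each oscillator toward the desired relative phase pattern.
            correction = sum(
                np.sin((phases[j] - target_offsets[j]) - (phases[i] - target_offsets[i]))
                for j in range(4) if j != i
            )
            phases[i] += dt * (2.0 * np.pi * freq + coupling * correction)
        outputs.append(np.sin(phases))                     # drive signal per leg
    return np.array(outputs)

drive = run_cpg()
print(drive[-1])        # steady-state rhythmic drive for the four legs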
Another method for motor control uses learned error correction and predictive controls to form a sort of simulated muscle memory.
In this model, awkward, random, and error-prone movements are corrected
for using error feedback to produce smooth and accurate movements over
time. The controller learns to create the correct control signal by
predicting the error. Using these ideas, robots have been designed
which can learn to produce adaptive arm movements or to avoid obstacles in a course.
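A minimal sketch of this idea follows, under the simplifying assumptions of a one-dimensional joint and an unknown constant load: the feedforward command is adapted from the residual feedback error, so over repeated trials the feedback correction shrinks and the movement becomes accurate. All gains and values are illustrative.

# Toy feedback-error learning for a one-dimensional joint with an unknown
# constant load. The feedforward command is adapted from the residual error.
def feedback_error_learning(trials=15, lr=0.5, load=2.0, target=1.0, kp=1.0):
    feedforward = 0.0
    for trial in range(trials):
        position = 0.0
        for _ in range(200):                            # one reaching trial
            error = target - position                   # feedback error
            command = feedforward + kp * error - load   # the load opposes motion
            position += 0.1 * command                   # simple plant dynamics
        feedforward += lr * (target - position)         # learn to predict the load
        print(f"trial {trial:2d}: final position {position:.3f}, "
              f"feedforward {feedforward:.3f}")

feedback_error_learning()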
Learning and memory systems
Some robots are designed to test theories of animal memory systems. Many studies examine the memory system of rats, particularly the rat hippocampus, dealing with place cells, which fire for a specific location that has been learned. Systems modeled after the rat hippocampus are generally able to learn mental maps of the environment, including recognizing landmarks and associating behaviors with them, allowing them to predict upcoming obstacles and landmarks.
Another study produced a robot based on the proposed learning paradigm of barn owls for orientation and localization using primarily auditory, but also visual, stimuli. The hypothesized method involves synaptic plasticity and neuromodulation, a mostly chemical effect in which reward neurotransmitters such as dopamine or serotonin sharpen the firing sensitivity of neurons. The robot used in the study adequately matched the behavior of barn owls. Furthermore, the close interaction between motor output and auditory
feedback proved to be vital in the learning process, supporting active
sensing theories that are involved in many of the learning models.
Neurorobots in these studies are presented with simple mazes or
patterns to learn. Some of the problems presented to the neurorobot include recognizing symbols, colors, or other patterns and executing simple actions based on the pattern. In the case of the barn owl
simulation, the robot had to determine its location and direction to
navigate in its environment.
Action selection and value systems
Action selection studies deal with assigning negative or positive weight to an action and its outcome. Neurorobots can and have been used to study simple
ethical interactions, such as the classical thought experiment where
there are more people than a life raft can hold, and someone must leave
the boat to save the rest. However, most neurorobots used in the study of action selection contend with much simpler motivations, such as
self-preservation or perpetuation of the population of robots in the
study. These neurorobots are modeled after the neuromodulation of
synapses to encourage circuits with positive results.
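A simple computational abstraction of such value systems is reward-modulated Hebbian learning, in which a synapse is strengthened only when presynaptic and postsynaptic activity coincide with a positive outcome. The toy sketch below is purely illustrative; the two cues, the reward rule, and the learning rate are assumptions, loosely echoing the striped-versus-circular block example described below.

import numpy as np

# Toy reward-modulated Hebbian rule: a weight changes only when presynaptic
# activity, postsynaptic activity, and a reward signal coincide.
rng = np.random.default_rng(0)
weights = np.zeros(2)            # weights from a "striped" and a "circular" cue
learning_rate = 0.1

for step in range(500):
    cue = rng.integers(2)                        # which block is encountered
    pre = np.eye(2)[cue]                         # one-hot presynaptic activity
    drive = weights @ pre + rng.normal(0, 0.1)   # noisy postsynaptic drive
    post = 1.0 if drive > 0 else 0.0             # the unit fires ("approach")
    reward = 1.0 if cue == 0 else -1.0           # striped blocks are rewarding
    weights += learning_rate * reward * pre * post

print(weights)   # the striped-cue weight grows; the circular-cue weight is suppressed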
In biological systems, neurotransmitters such as dopamine or
acetylcholine positively reinforce neural signals that are beneficial.
One study of such interaction involved the robot Darwin VII, which used
visual, auditory, and simulated taste inputs to "eat" conductive metal
blocks. The arbitrarily chosen good blocks had a striped pattern on them
while the bad blocks had a circular shape on them. The taste sense was
simulated by the conductivity of the blocks. The robot had positive and negative responses to the taste based on its level of conductivity. The
researchers observed the robot to see how it learned its action
selection behaviors based on the inputs it received. Other studies have used herds of small robots which feed on batteries strewn about the room and communicate their findings to other robots.
Sensory perception
Neurorobots
have also been used to study sensory perception, particularly vision.
These are primarily systems that result from embedding neural models of sensory pathways in automata. This approach gives exposure to the
sensory signals that occur during behavior and also enables a more
realistic assessment of the degree of robustness of the neural model. It
is well known that changes in the sensory
signals produced by motor activity provide useful perceptual cues
that are used extensively by organisms. For example, researchers have
used the depth information that emerges during replication of human head
and eye movements to establish robust representations of the visual
scene.
Biological robots
Biological robots
are not officially neurorobots in that they are not neurologically
inspired AI systems, but actual neuron tissue wired to a robot. This
uses cultured neural networks to study brain development or neural interactions. These typically consist of a neural culture raised on a multielectrode array
(MEA), which is capable of both recording the neural activity and
stimulating the tissue. In some cases, the MEA is connected to a
computer which presents a simulated environment to the brain tissue and
translates brain activity into actions in the simulation, as well as
providing sensory feedback. The ability to record neural activity gives researchers a window into a
brain, which they can use to learn about a number of the same issues
neurorobots are used for.
An area of concern with biological robots is ethics. Many questions are raised about how to treat such experiments. The central question concerns consciousness and whether or not the rat brain tissue experiences it. There are many theories about how to define consciousness.
Implications for neuroscience
Neuroscientists
benefit from neurorobotics because it provides a blank slate to test
various possible methods of brain function in a controlled and testable
environment. While robots are more simplified versions of the systems
they emulate, they are more specific, allowing more direct testing of
the issue at hand. They also have the benefit of being accessible at all times, while it
is more difficult to monitor large portions of a brain while the human
or animal is active, especially individual neurons.
Advances in neuroscience have produced neural treatments, including pharmaceuticals and neural rehabilitation. Progress is dependent on an intricate understanding of the brain and
how exactly it functions. It is difficult to study the brain, especially
in humans, due to the danger associated with cranial surgeries.
Neurorobots can improve the range of tests and experiments that can be
performed in the study of neural processes.
Galaxy formation and evolution
In cosmology, the study of galaxy formation and evolution is concerned with the processes that formed a heterogeneous universe from a homogeneous beginning, the formation of the first galaxies,
the way galaxies change over time, and the processes that have
generated the variety of structures observed in nearby galaxies. Galaxy
formation is hypothesized to occur from structure formation theories, as a result of tiny quantum fluctuations in the aftermath of the Big Bang. The simplest model in general agreement with observed phenomena is the Lambda-CDM model—that
is, clustering and merging allows galaxies to accumulate mass,
determining both their shape and structure. Hydrodynamics simulation,
which simulates both baryons and dark matter, is widely used to study galaxy formation and evolution.
Because
of the inability to conduct experiments in outer space, the only way to
“test” theories and models of galaxy evolution is to compare them with
observations. Explanations for how galaxies formed and evolved must be
able to predict the observed properties and types of galaxies.
Edwin Hubble created an early galaxy classification scheme, now known as the Hubble tuning-fork diagram. It partitioned galaxies into ellipticals, normal spirals, barred spirals (such as the Milky Way), and irregulars. These galaxy types exhibit the following properties which can be explained by current galaxy evolution theories:
Many of the properties of galaxies (including the galaxy color–magnitude diagram)
indicate that there are fundamentally two types of galaxies. These
groups divide into blue star-forming galaxies that are more like spiral
types, and red non-star forming galaxies that are more like elliptical
galaxies.
Spiral galaxies are quite thin, dense, and rotate relatively fast,
while the stars in elliptical galaxies have randomly oriented orbits.
The majority of giant galaxies contain a supermassive black hole in their centers, ranging in mass from millions to billions of times the mass of the Sun. The black hole mass is tied to the host galaxy bulge or spheroid mass.
Metallicity has a positive correlation with the luminosity of a galaxy and an even stronger correlation with galaxy mass.
Astronomers now believe that disk galaxies likely formed first, then evolved into elliptical galaxies through galaxy mergers.
Current models also predict that the majority of mass in galaxies is made up of dark matter,
a substance which is not directly observable, and might not interact
through any means except gravity. This conclusion arises because
galaxies could not have formed as they have, or rotate as they are seen
to, unless they contain far more mass than can be directly observed.
Formation of disk galaxies
The
earliest stage in the evolution of galaxies is their formation. When a
galaxy forms, it has a disk shape and is called a spiral galaxy due to
spiral-like "arm" structures located on the disk. There are different
theories on how these disk-like distributions of stars develop from a
cloud of matter; however, at present, none of them exactly predicts the results of observation.
Top-down theories
In 1962, Olin J. Eggen, Donald Lynden-Bell, and Allan Sandage proposed a theory that disk galaxies form through a monolithic
collapse of a large gas cloud. The distribution of matter in the early
universe was in clumps that consisted mostly of dark matter. These
clumps interacted gravitationally, putting tidal torques on each other
that acted to give them some angular momentum. As the baryonic matter
cooled, it dissipated some energy and contracted toward the center.
With angular momentum conserved, the matter near the center speeds up
its rotation. Then, like a spinning ball of pizza dough, the matter
forms into a tight disk. Once the disk cools, the gas is not gravitationally stable, so it cannot remain a single homogeneous cloud. It breaks up, and these smaller clouds of gas form stars. Since the
dark matter does not dissipate as it only interacts gravitationally, it
remains distributed outside the disk in what is known as the dark halo.
Observations show that there are stars located outside the disk, which
does not quite fit the "pizza dough" model. Leonard Searle and Robert Zinn later proposed instead that galaxies form by the coalescence of smaller progenitors. The monolithic-collapse picture, known as a top-down formation scenario, is quite simple yet no longer widely accepted.
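The spin-up step follows directly from conservation of angular momentum: for a parcel of gas of mass m, the product of rotation speed and radius stays fixed as it contracts, for example:

L = m v r = \text{const} \quad\Rightarrow\quad v_2 = v_1 \frac{r_1}{r_2},
\qquad r_2 = \tfrac{1}{2} r_1 \;\Rightarrow\; v_2 = 2 v_1 .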
Bottom-up theory
More
recent theories include the clustering of dark matter halos in the
bottom-up process. Instead of large gas clouds collapsing to form a
galaxy in which the gas breaks up into smaller clouds, it is proposed
that matter started out in these “smaller” clumps (mass on the order of globular clusters), and then many of these clumps merged to form galaxies, which then were drawn by gravitation to form galaxy clusters.
This still results in disk-like distributions of baryonic matter with
dark matter forming the halo for all the same reasons as in the top-down
theory. Models using this sort of process predict more small galaxies
than large ones, which matches observations.
Astronomers do not currently know what process stops the
contraction. In fact, theories of disk galaxy formation are not
successful at producing the rotation speed and size of disk galaxies. It
has been suggested that the radiation from bright newly formed stars,
or from an active galactic nucleus can slow the contraction of a forming disk. It has also been suggested that the dark matter halo can pull the galaxy, thus stopping disk contraction.
The Lambda-CDM model is a cosmological model that explains the formation of the universe after the Big Bang.
It is a relatively simple model that predicts many properties observed
in the universe, including the relative frequency of different galaxy
types; however, it underestimates the number of thin disk galaxies in
the universe. The reason is that these galaxy formation models predict a large number
of mergers. If a disk galaxy merges with another galaxy of comparable mass (at least 15 percent of its mass), the merger will likely destroy, or at a minimum greatly disrupt, the disk, and the resulting galaxy is not expected to be a disk galaxy (see next section). While this remains
an unsolved problem for astronomers, it does not necessarily mean that
the Lambda-CDM model is completely wrong, but rather that it requires
further refinement to accurately reproduce the population of galaxies in
the universe.
Galaxy mergers and the formation of elliptical galaxies
NGC 4676 (the Mice Galaxies) is an example of an ongoing merger. The Antennae Galaxies are a pair of colliding galaxies; the bright, blue knots are young stars that have recently ignited as a result of the merger. ESO 325-G004 is a typical elliptical galaxy.
Elliptical galaxies (most notably supergiant ellipticals, such as ESO 306-17) are among the largest galaxies known thus far.
Their stars are on orbits that are randomly oriented within the galaxy
(i.e. they are not rotating like disk galaxies). A distinguishing
feature of elliptical galaxies is that the velocity of the stars does
not necessarily contribute to flattening of the galaxy, as it does in spiral galaxies. Elliptical galaxies have central supermassive black holes, and the masses of these black holes correlate with the galaxy's mass.
Elliptical galaxies have two main stages of evolution. The first
is due to the supermassive black hole growing by accreting cooling gas.
The second stage is marked by the black hole stabilizing by suppressing
gas cooling, thus leaving the elliptical galaxy in a stable state. The mass of the black hole is also correlated with a property called sigma, the dispersion of the velocities of the stars in their orbits. This relationship, known as the M-sigma relation, was discovered in 2000. Elliptical galaxies mostly lack disks, although some bulges
of disk galaxies resemble elliptical galaxies. Elliptical galaxies are
more likely found in crowded regions of the universe (such as galaxy clusters).
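The M-sigma relation is usually quoted as a power law; the exact slope and normalization vary between studies, but it is roughly of the form:

M_{\mathrm{BH}} \propto \sigma^{\alpha}, \qquad \alpha \approx 4\text{--}5,

so that a bulge with a velocity dispersion of about 200 km/s typically hosts a black hole of order 10^8 solar masses.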
Astronomers now see elliptical galaxies as some of the most
evolved systems in the universe. It is widely accepted that the main
driving force for the evolution of elliptical galaxies is mergers
of smaller galaxies. Many galaxies in the universe are gravitationally
bound to other galaxies, which means that they will never escape their
mutual pull. If those colliding galaxies are of similar size, the
resultant galaxy will appear similar to neither of the progenitors, but will instead be elliptical. There are many types of galaxy
mergers, which do not necessarily result in elliptical galaxies, but
result in a structural change. For example, a minor merger event is
thought to be occurring between the Milky Way and the Magellanic Clouds.
Mergers between such large galaxies are regarded as violent, and
the frictional interaction of the gas between the two galaxies can cause
gravitational shock waves, which are capable of forming new stars in the new elliptical galaxy. By sequencing several images of different galactic collisions, one can
observe the timeline of two spiral galaxies merging into a single
elliptical galaxy.
In the Local Group, the Milky Way and the Andromeda Galaxy
are gravitationally bound, and currently approaching each other at high
speed. Simulations show that the Milky Way and Andromeda are on a
collision course, and are expected to collide in less than five billion
years. During this collision, it is expected that the Sun and the rest of the Solar System will be ejected from their current path around the Milky Way. The remnant could be a giant elliptical galaxy.
Galaxy quenching
Star formation in what are now "dead" galaxies sputtered out billions of years ago
One observation that must be explained by a successful theory of
galaxy evolution is the existence of two different populations of
galaxies on the galaxy color-magnitude diagram. Most galaxies tend to
fall into two separate locations on this diagram: a "red sequence" and a
"blue cloud". Red sequence galaxies are generally non-star-forming
elliptical galaxies with little gas and dust, while blue cloud galaxies
tend to be dusty star-forming spiral galaxies.
As described in previous sections, galaxies tend to evolve from
spiral to elliptical structure via mergers. However, the current rate of
galaxy mergers does not explain how all galaxies move from the "blue
cloud" to the "red sequence". It also does not explain how star
formation ceases in galaxies. Theories of galaxy evolution must
therefore be able to explain how star formation turns off in galaxies.
This phenomenon is called galaxy "quenching".
Stars form out of cold gas (see also the Kennicutt–Schmidt law),
so a galaxy is quenched when it has no more cold gas. However, it is
thought that quenching occurs relatively quickly (within 1 billion
years), which is much shorter than the time it would take for a galaxy
to simply use up its reservoir of cold gas. Galaxy evolution models explain this by hypothesizing other physical
mechanisms that remove or shut off the supply of cold gas in a galaxy.
These mechanisms can be broadly classified into two categories: (1)
preventive feedback mechanisms that stop cold gas from entering a galaxy
or stop it from producing stars, and (2) ejective feedback mechanisms
that remove gas so that it cannot form stars.
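For reference, the Kennicutt-Schmidt law cited above relates the surface density of star formation to the surface density of gas as an approximate power law:

\Sigma_{\mathrm{SFR}} \propto \Sigma_{\mathrm{gas}}^{\,n}, \qquad n \approx 1.4,

which is why removing or heating the cold gas supply translates directly into a drop in star formation.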
One theorized preventive mechanism called “strangulation” keeps
cold gas from entering the galaxy. Strangulation is likely the main
mechanism for quenching star formation in nearby low-mass galaxies. The exact physical explanation for strangulation is still unknown, but
it may have to do with a galaxy's interactions with other galaxies. As a
galaxy falls into a galaxy cluster, gravitational interactions with
other galaxies can strangle it by preventing it from accreting more gas. For galaxies with massive dark matter halos, another preventive mechanism called “virial shock heating” may also prevent gas from becoming cool enough to form stars.
Ejective processes, which expel cold gas from galaxies, may explain how more massive galaxies are quenched. One ejective mechanism is caused by supermassive black holes found in
the centers of galaxies. Simulations have shown that gas accreting onto
supermassive black holes in galactic centers produces high-energy jets; the released energy can expel enough cold gas to quench star formation.
Our own Milky Way and the nearby Andromeda Galaxy currently
appear to be undergoing the quenching transition from star-forming blue
galaxies to passive red galaxies.
Hydrodynamics simulation
Dark
energy and dark matter account for most of the Universe's energy, so it
is valid to ignore baryons when simulating large-scale structure
formation (using methods such as N-body simulation).
However, since the visible components of galaxies consist of baryons,
it is crucial to include baryons in the simulation to study the detailed
structures of galaxies. At first, the baryon component consists of
mostly hydrogen and helium gas, which later transforms into stars during
the formation of structures. From observations, models used in
simulations can be tested and the understanding of different stages of
galaxy formation can be improved.
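As a rough sketch of the N-body approach mentioned above, the toy code below advances a handful of self-gravitating particles with a direct-summation force calculation and a leapfrog integrator; the units, softening length, particle count, and time step are illustrative assumptions, not the tree or particle-mesh methods used for real cosmological volumes.

import numpy as np

# Toy direct-summation N-body integration with a kick-drift-kick leapfrog.
# G, masses, softening, and time step are illustrative, not physical choices.
def accelerations(pos, mass, G=1.0, soft=0.05):
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                        # vectors to all other bodies
        r2 = (d ** 2).sum(axis=1) + soft ** 2   # softened squared distances
        r2[i] = np.inf                          # exclude the self-force term
        acc[i] = (G * mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(1)
n = 64
pos = rng.normal(size=(n, 3))                   # initial particle cloud
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
dt = 0.01

for _ in range(1000):                           # leapfrog: kick, drift, kick
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)

print(pos.std(axis=0))                          # spread of the relaxed cloud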
Euler equations
In cosmological simulations, astrophysical gases are typically modeled as inviscid ideal gases that follow the Euler equations,
which can be expressed mainly in three different ways: Lagrangian,
Eulerian, or arbitrary Lagrange-Eulerian methods. Different methods give
specific forms of hydrodynamical equations. When using the Lagrangian approach to specify the field, it is assumed
that the observer tracks a specific fluid parcel with its unique
characteristics during its movement through space and time. In contrast,
the Eulerian approach emphasizes particular locations in space that the
fluid passes through as time progresses.
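In Eulerian conservative form, and omitting source terms such as gravity, cooling, and feedback, the Euler equations for an inviscid ideal gas can be written as:

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,
\qquad
\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot \left(\rho \mathbf{v} \otimes \mathbf{v} + P\,\mathbb{I}\right) = 0,
\qquad
\frac{\partial E}{\partial t} + \nabla \cdot \left[(E + P)\,\mathbf{v}\right] = 0,

where rho is the gas density, v its velocity, P its pressure, E = rho v^2/2 + rho u the total energy density, and the system is closed by the ideal-gas relation P = (gamma - 1) rho u.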
Baryonic physics
To
shape the population of galaxies, the hydrodynamical equations must be
supplemented by a variety of astrophysical processes mainly governed by
baryonic physics.
Gas cooling
Processes such as collisional excitation, ionization, and inverse Compton scattering can cause the internal energy of the gas to be dissipated. In simulations, cooling processes are realized by coupling cooling functions to the energy equations. Besides the primordial cooling, cooling by heavy elements (metals) dominates at high temperatures, while at lower temperatures fine-structure and molecular cooling also need to be considered to simulate the cold phase of the interstellar medium.
Interstellar medium
The complex multi-phase structure of the interstellar medium, including relativistic particles and magnetic fields, makes its simulation difficult. In particular,
modeling the cold phase of the interstellar medium poses technical
difficulties due to the short timescales associated with the dense gas.
In early simulations, the dense gas phase was frequently not modeled directly but rather characterized by an effective polytropic equation of state. More recent simulations use a multimodal distribution to describe
the gas density and temperature distributions, which directly model the
multi-phase structure. However, more detailed physical processes need to be considered in future simulations, since the structure of the interstellar medium directly affects star formation.
Star formation
As
cold and dense gas accumulates, it undergoes gravitational collapse and
eventually forms stars. To simulate this process, a portion of the gas
is transformed into collisionless star particles, which represent
coeval, single-metallicity stellar populations and are described by an underlying initial mass function. Observations suggest that star
formation efficiency in molecular gas is almost universal, with around
1% of the gas being converted into stars per free fall time. In simulations, the gas is typically converted into star particles
using a probabilistic sampling scheme based on the calculated star
formation rate. Some simulations seek an alternative to the
probabilistic sampling scheme and aim to better capture the clustered
nature of star formation by treating star clusters as the fundamental
unit of star formation. This approach permits the growth of star
particles by accreting material from the surrounding medium. In addition to this, modern models of galaxy formation track the
evolution of these stars and the mass they return to the gas component,
leading to an enrichment of the gas with metals.
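A minimal sketch of the probabilistic sampling idea described above is given below; the gas densities, the density threshold, and the assumed efficiency of about 1% per free-fall time are illustrative values, not those of any specific simulation code.

import numpy as np

# Toy probabilistic star-formation sampling: each sufficiently dense gas cell
# spawns a star particle with a probability set by an assumed ~1% efficiency
# per free-fall time. Units are cgs; all values are illustrative.
G = 6.674e-8                                        # gravitational constant

def free_fall_time(density):
    return np.sqrt(3.0 * np.pi / (32.0 * G * density))

def form_stars(densities, dt, efficiency=0.01, threshold=1e-22):
    rng = np.random.default_rng(42)
    new_star_cells = []
    for i, rho in enumerate(densities):
        if rho < threshold:                         # only dense gas forms stars
            continue
        # Expected converted fraction over dt, sampled as a Bernoulli trial.
        p = 1.0 - np.exp(-efficiency * dt / free_fall_time(rho))
        if rng.random() < p:
            new_star_cells.append(i)                # cell i spawns a star particle
    return new_star_cells

densities = np.array([1e-24, 5e-22, 2e-21, 1e-20])  # g/cm^3
dt = 3.15e15                                        # ~100 Myr in seconds
print(form_stars(densities, dt))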
Stellar feedback
Stars
have an influence on their surrounding gas by injecting energy and
momentum. This creates a feedback loop that regulates the process of
star formation. To effectively control star formation, stellar feedback
must generate galactic-scale outflows that expel gas from galaxies.
Various methods are utilized to couple energy and momentum, particularly
through supernova explosions, to the surrounding gas. These methods
differ in how the energy is deposited, either thermally or kinetically.
However, excessive radiative gas cooling must be avoided in the former
case. Cooling is expected in dense and cold gas, but it cannot be
reliably modeled in cosmological simulations due to low resolution. This
leads to artificial and excessive cooling of the gas, causing the
supernova feedback energy to be lost via radiation and significantly
reducing its effectiveness. In the latter case, kinetic energy cannot
be radiated away until it thermalizes. However, using hydrodynamically
decoupled wind particles to inject momentum non-locally into the gas
surrounding active star-forming regions may still be necessary to
achieve large-scale galactic outflows. Recent models explicitly model stellar feedback. These models not only incorporate supernova feedback but also consider
other feedback channels such as energy and momentum injection from
stellar winds, photoionization, and radiation pressure resulting from radiation emitted by young, massive stars. During the Cosmic Dawn, galaxy formation occurred in short bursts of 5 to 30 Myr due to stellar feedback.
Supermassive black holes
Supermassive black holes are also included in simulations, numerically seeded in dark matter haloes, because they are observed in many galaxies and their mass affects the mass density distribution. Their mass accretion rate is frequently modeled by the Bondi-Hoyle model.
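In its commonly used form, the Bondi-Hoyle accretion rate depends on the black hole mass, the local gas density, the sound speed, and the relative gas velocity:

\dot{M}_{\mathrm{BH}} = \frac{4\pi G^{2} M_{\mathrm{BH}}^{2}\,\rho}{\left(c_{s}^{2} + v^{2}\right)^{3/2}},

and in many simulations this rate is further multiplied by a boost factor and capped at the Eddington limit.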
Active galactic nuclei
Active galactic nuclei (AGN) affect the observational phenomena associated with supermassive black holes and, further, regulate black hole growth and star formation. In simulations, AGN feedback is usually classified into two
modes, namely quasar and radio mode. Quasar mode feedback is linked to
the radiatively efficient mode of black hole growth and is frequently
incorporated through energy or momentum injection. The regulation of star formation in massive galaxies is believed to be
significantly influenced by radio mode feedback, which occurs due to the
presence of highly collimated jets of relativistic particles. These
jets are typically linked to X-ray bubbles that possess enough energy to
counterbalance cooling losses.
Magnetic fields
The ideal magnetohydrodynamics
approach is commonly utilized in cosmological simulations since it
provides a good approximation for cosmological magnetic fields. The
effect of magnetic fields on the dynamics of gas is generally negligible
on large cosmological scales. Nevertheless, magnetic fields are a
critical component of the interstellar medium since they provide
pressure support against gravity and affect the propagation of cosmic rays.
Cosmic rays
Cosmic rays play a significant role in the interstellar medium by contributing to its pressure, serving as a crucial heating channel, and potentially driving galactic gas outflows. The propagation of cosmic rays is highly affected by magnetic fields.
Therefore, in simulations, equations describing the cosmic ray energy and flux are coupled to the magnetohydrodynamics equations.
Radiation hydrodynamics
Radiation
hydrodynamics simulations are computational methods used to study the
interaction of radiation with matter. In astrophysical contexts,
radiation hydrodynamics is used to study the epoch of reionization, when the Universe was at high redshift. There are several numerical methods used
for radiation hydrodynamics simulations, including ray-tracing, Monte Carlo,
and moment-based methods. Ray-tracing involves tracing the paths of
individual photons through the simulation and computing their
interactions with matter at each step. This method is computationally
expensive but can produce very accurate results.
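A stripped-down illustration of the ray-tracing idea, not a full radiative-transfer solver, is sketched below: a single ray marches through a one-dimensional grid, accumulating optical depth and attenuating the photon flux cell by cell. The opacity, cell size, and densities are made-up values.

import numpy as np

# Toy ray-tracing step: march one ray through a 1-D grid of cells,
# accumulating optical depth and attenuating the photon flux.
def trace_ray(densities, opacity=1.0, cell_size=1.0, initial_flux=1.0):
    flux = initial_flux
    total_tau = 0.0
    absorbed = []
    for rho in densities:
        dtau = opacity * rho * cell_size     # optical depth of this cell
        new_flux = flux * np.exp(-dtau)      # exponential attenuation
        absorbed.append(flux - new_flux)     # energy deposited in the cell
        flux = new_flux
        total_tau += dtau
    return flux, total_tau, absorbed

densities = np.array([0.1, 0.5, 2.0, 0.05])
flux_out, tau, absorbed = trace_ray(densities)
print(flux_out, tau)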
Molecular biology
The central dogma of molecular biology describes the flow of genetic information within a biological system.
Though cells and other microscopic structures had been observed in organisms
as early as the 18th century, a detailed understanding of the
mechanisms and interactions governing their behavior did not emerge
until the 20th century, when technologies used in physics and chemistry
had advanced sufficiently to permit their application in the biological
sciences. The term 'molecular biology' was first used in 1945 by the English physicist William Astbury,
who described it as an approach focused on discerning the underpinnings
of biological phenomena—i.e. uncovering the physical and chemical
structures and properties of biological molecules, as well as their
interactions with other molecules and how these interactions explain
observations of so-called classical biology, which instead studies
biological processes at larger scales and higher levels of organization. In 1953, Francis Crick, James Watson, Rosalind Franklin, and their colleagues at the Medical Research Council Unit, Cavendish Laboratory, were the first to describe the double helix model for the chemical structure of deoxyribonucleic acid
(DNA), which is often considered a landmark event for the nascent field
because it provided a physico-chemical basis by which to understand the
previously nebulous idea of nucleic acids as the primary substance of
biological inheritance. They proposed this structure based on previous
research done by Franklin, which was conveyed to them by Maurice Wilkins and Max Perutz. Their work led to the discovery of DNA in other microorganisms, plants, and animals.
The field of molecular biology includes techniques which enable scientists to learn about molecular processes. These techniques are used to efficiently target new drugs, diagnose disease, and better understand cell physiology. Some clinical research and medical therapies arising from molecular biology are covered under gene therapy, whereas the use of molecular biology or molecular cell biology in medicine is now referred to as molecular medicine.
Molecular biology sits at the intersection of biochemistry and genetics;
as these scientific disciplines emerged and evolved in the 20th
century, it became clear that they both sought to determine the
molecular mechanisms which underlie vital cellular functions. Advances in molecular biology have been closely related to the development of new technologies and their optimization.
The field of genetics arose from attempts to understand the set of rules underlying reproduction and heredity, and the nature of the hypothetical units of heredity known as genes. Gregor Mendel
pioneered this work in 1866, when he first described the laws of
inheritance he observed in his studies of mating crosses in pea plants. One such law of genetic inheritance is the law of segregation, which states that diploid individuals with two alleles for a particular gene will pass one of these alleles to their offspring. Because of his critical work, the study of genetic inheritance is commonly referred to as Mendelian genetics.
A major milestone in molecular biology was the discovery of the structure of DNA. This work began in 1869 with Friedrich Miescher, a Swiss biochemist who first identified a substance he called nuclein, which we now know to be deoxyribonucleic acid, or DNA. He discovered this unique substance by studying the components of
pus-filled bandages, and noting the unique properties of the
"phosphorus-containing substances". Another notable contributor to the DNA model was Phoebus Levene, who proposed the "polynucleotide model" of DNA in 1919 as a result of his biochemical experiments on yeast. In 1950, Erwin Chargaff
expanded on the work of Levene and elucidated a few critical properties
of nucleic acids: first, the sequence of nucleic acids varies across
species. Second, the total concentration of purines (adenine and guanine) is
always equal to the total concentration of pyrimidines (cytosine and
thymine). This is now known as Chargaff's rule. In 1953, James Watson and Francis Crick published the double helical structure of DNA, based on the X-ray crystallography work done by Rosalind Franklin which was conveyed to them by Maurice Wilkins and Max Perutz. Watson and Crick described the structure of DNA and conjectured about
the implications of this unique structure for possible mechanisms of DNA
replication. Watson and Crick were awarded the Nobel Prize in Physiology or Medicine in 1962, along with Wilkins, for proposing a model of the structure of DNA.
In 1961, it was demonstrated that when a gene encodes a protein, three sequential bases of a gene's DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon)
specifies a particular amino acid. Furthermore, it was shown that the
codons do not overlap with each other in the DNA sequence encoding a
protein, and that each sequence is read from a fixed starting point.
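The non-overlapping triplet reading described above can be illustrated with a short script that walks a coding sequence three bases at a time and looks each codon up in a codon table; the table here is deliberately partial, covering only a few codons for brevity.

# Deliberately partial codon table (DNA coding-strand codons).
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe", "AAA": "Lys",
    "GGC": "Gly", "GAT": "Asp", "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(coding_sequence):
    """Read non-overlapping triplets from a fixed start and translate them."""
    protein = []
    for i in range(0, len(coding_sequence) - 2, 3):
        codon = coding_sequence[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "???")   # flag codons not in the table
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("ATGTTTAAAGGCTGA"))   # -> Met-Phe-Lys-Gly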
During 1962–1964, through the use of conditional lethal mutants of a
bacterial virus, fundamental advances were made in our understanding of the functions
and interactions of the proteins employed in the machinery of DNA replication, DNA repair, DNA recombination, and in the assembly of molecular structures.
In 1928, Frederick Griffith encountered a virulence property in pneumococcus bacteria, which was killing the laboratory mice he infected. According to Mendelian genetics, prevalent at that time, gene transfer could occur only from parent to daughter cells. Griffith advanced another theory, stating that gene transfer can also occur among members of the same generation, a process known as horizontal gene transfer (HGT). This phenomenon is now referred to as genetic transformation.
Griffith's experiment addressed the pneumococcus bacteria, which
had two different strains, one virulent and smooth and one avirulent and
rough. The smooth strain had a glistening appearance owing to the presence of a specific polysaccharide capsule, a polymer of glucose and glucuronic acid. Because of this polysaccharide layer, the host's immune system cannot recognize the bacteria, and they kill the
host. The rough strain lacks this polysaccharide capsule, resulting in a
dull, rough colony appearance and making it avirulent because it is
more readily recognized and destroyed by the host immune system.
The presence or absence of the capsule in a strain is known to be genetically determined. Smooth and rough strains occur in several different types, such as S-I, S-II, S-III, etc. and R-I, R-II, R-III, etc., respectively. All these subtypes of S and R bacteria differ from each other in the antigen type they produce.
The Avery–MacLeod–McCarty experiment was an experimental demonstration, reported in 1944 by Oswald Avery, Colin MacLeod, and Maclyn McCarty, that DNA is the substance that causes bacterial transformation, in an era when it had been widely believed that it was proteins that served the function of carrying genetic information (with the very word protein itself coined to indicate a belief that its function was primary). It was the culmination of research in the 1930s and early 1940s at the Rockefeller Institute for Medical Research to purify and characterize the "transforming principle" responsible for the transformation phenomenon first described in Griffith's experiment of 1928: killed Streptococcus pneumoniae of the virulent
strain type III-S, when injected along with living but non-virulent
type II-R pneumococci, resulted in a deadly infection of type III-S
pneumococci. In their paper "Studies on the Chemical Nature of the
Substance Inducing Transformation of Pneumococcal Types: Induction of
Transformation by a Desoxyribonucleic Acid Fraction Isolated from
Pneumococcus Type III", published in the February 1944 issue of the Journal of Experimental Medicine,
Avery and his colleagues suggested that DNA, rather than protein as
widely believed at the time, may be the hereditary material of bacteria,
and could be analogous to genes and/or viruses in higher organisms.
Confirmation that DNA is the genetic material that causes infection came from the Hershey–Chase experiment. They used E. coli and bacteriophage for the experiment. This experiment is also known as the blender experiment, as a kitchen blender was used as a major piece of
apparatus. Alfred Hershey and Martha Chase
demonstrated that the DNA injected by a phage particle into a bacterium
contains all information required to synthesize progeny phage
particles. They used radioactivity to tag the bacteriophage's protein coat with radioactive sulfur and its DNA with radioactive phosphorus, in two different test tubes respectively. After mixing bacteriophage and E. coli in a test tube, an incubation period follows in which the phage transfers its genetic material into the E. coli cells. The mixture is then blended or agitated, which separates the phage coats from the E. coli cells. The whole mixture is centrifuged; the pellet, which contains the E. coli cells, was examined and the supernatant was discarded. The E. coli cells showed radioactive phosphorus, which indicated that the transferred material was DNA, not the protein coat.
The transferred DNA becomes attached to the DNA of E. coli, and radioactivity is seen only on the bacteriophage's DNA. This DNA can be passed to the next generation, which gave rise to the theory of transduction. Transduction is a process in which bacterial DNA carries a fragment of bacteriophage DNA and passes it on to the next generation. This is also a type of horizontal gene transfer.
The Meselson–Stahl experiment is an experiment by Matthew Meselson and Franklin Stahl in 1958 which supported Watson and Crick's hypothesis that DNA replication was semiconservative. In semiconservative replication, when the double-stranded DNA
helix is replicated, each of the two new double-stranded DNA helices consists of one strand from the original helix and one newly synthesized strand. It has been called "the most beautiful experiment in biology". Meselson and Stahl decided the best way to trace the parent DNA would be to tag it by changing one of its atoms. Since nitrogen is present in all of the DNA bases, they generated parent DNA containing a heavier isotope
of nitrogen than would be present naturally. This altered mass allowed
them to determine how much of the parent DNA was present in the DNA
after successive cycles of replication.
Modern molecular biology
In
the early 2020s, molecular biology entered a golden age defined by both
vertical and horizontal technical development. Vertically, novel
technologies are allowing for real-time monitoring of biological
processes at the atomic level. Molecular biologists today have access to increasingly affordable
sequencing data at increasingly higher depths, facilitating the
development of novel genetic manipulation methods in new non-model
organisms. Likewise, synthetic molecular biologists will drive the industrial production of small molecules and macromolecules through the introduction of exogenous metabolic pathways in various prokaryotic and eukaryotic cell lines.
Horizontally, sequencing data is becoming more affordable and
used in many different scientific fields. This will drive the
development of industries in developing nations and increase
accessibility to individual researchers. Likewise, CRISPR-Cas9 gene editing
experiments can now be conceived and implemented by individuals for
under $10,000 in novel organisms, which will drive the development of
industrial and medical applications.
The following describes a viewpoint on the interdisciplinary relationships between molecular biology and other related fields.
Molecular biology is the study of the molecular underpinnings of biological phenomena, focusing on molecular synthesis, modification, mechanisms, and interactions.
While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry.
Much of molecular biology is quantitative, and recently a significant
amount of work has been done using computer science techniques such as bioinformatics and computational biology. Molecular genetics,
the study of gene structure and function, has been among the most
prominent sub-fields of molecular biology since the early 2000s. Other
branches of biology are informed by molecular biology, by either
directly studying the interactions of molecules in their own right such
as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields in evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics.
Molecular cloning is used to isolate and then transfer a DNA sequence of interest into a plasmid vector. This recombinant DNA technology was first developed in the 1960s. In this technique, a DNA sequence coding for a protein of interest is cloned using the polymerase chain reaction (PCR) and/or restriction enzymes into a plasmid (expression vector). The plasmid vector usually has at least three distinctive features: an origin of replication, a multiple cloning site (MCS), and a selective marker (usually antibiotic resistance). Additionally, upstream of the MCS are the promoter regions and the transcription start site, which regulate the expression of the cloned gene.
This plasmid can be inserted into either bacterial or animal cells. Introducing DNA into bacterial cells can be done by transformation via uptake of naked DNA, conjugation via cell-cell contact or by transduction via viral vector. Introducing DNA into eukaryotic cells, such as animal cells, by physical or chemical means is called transfection. Several different transfection techniques are available, such as calcium phosphate transfection, electroporation, microinjection and liposome transfection. The plasmid may be integrated into the genome,
resulting in a stable transfection, or may remain independent of the
genome and expressed temporarily, called a transient transfection.
DNA coding for a protein of interest is now inside a cell, and the protein
can now be expressed. A variety of systems, such as inducible promoters
and specific cell-signaling factors, are available to help express the
protein of interest at high levels. Large quantities of a protein can
then be extracted from the bacterial or eukaryotic cell. The protein can
be tested for enzymatic activity under a variety of situations, the
protein may be crystallized so its tertiary structure can be studied, or, in the pharmaceutical industry, the activity of new drugs against the protein can be studied.
Polymerase chain reaction (PCR) is an extremely versatile technique for copying DNA. In brief, PCR allows a specific DNA sequence
to be copied or modified in predetermined ways. The reaction is
extremely powerful and under perfect conditions could amplify one DNA
molecule to become 1.07 billion molecules in less than two hours. PCR
has many applications, including the study of gene expression, the
detection of pathogenic microorganisms, the detection of genetic
mutations, and the introduction of mutations to DNA. The PCR technique can be used to introduce restriction enzyme sites to the ends of DNA molecules, or to mutate particular bases of DNA; the latter is a method referred to as site-directed mutagenesis. PCR can also be used to determine whether a particular DNA fragment is found in a cDNA library. PCR has many variations, like reverse transcription PCR (RT-PCR) for amplification of RNA and, more recently, quantitative PCR, which allows for quantitative measurement of DNA or RNA molecules.
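The figure of roughly a billion molecules follows from the doubling of product in each thermal cycle: after n ideal cycles a single template yields 2^n copies, so

N(n) = N_0 \cdot 2^{n}, \qquad N(30) = 2^{30} = 1{,}073{,}741{,}824 \approx 1.07 \times 10^{9},

which is why about 30 cycles, commonly completed in under two hours, are enough to reach the quoted 1.07 billion molecules.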
Gel electrophoresis is a technique which separates molecules by their size using an agarose or polyacrylamide gel. This technique is one of the principal tools of molecular biology. The
basic principle is that DNA fragments can be separated by applying an
electric current across the gel; because the DNA backbone contains negatively charged phosphate groups, the DNA will migrate through the agarose gel towards the positive electrode. Proteins can also be separated on the basis of size using an SDS-PAGE gel, or on the basis of size and electric charge using what is known as 2D gel electrophoresis.
Proteins stained on a PAGE gel using Coomassie blue dye
The Bradford assay
is a molecular biology technique which enables the fast, accurate
quantitation of protein molecules utilizing the unique properties of a
dye called Coomassie Brilliant Blue G-250. Coomassie Blue undergoes a visible color shift from reddish-brown to bright blue upon binding to protein. In its unstable, cationic state, Coomassie Blue absorbs maximally at 465 nm and appears reddish-brown. When Coomassie Blue binds to protein in an acidic solution, the absorbance maximum shifts to 595 nm and the dye appears bright blue. Proteins in the assay bind Coomassie Blue in about 2 minutes, and the
protein-dye complex is stable for about an hour, although it is
recommended that absorbance readings are taken within 5 to 20 minutes of
reaction initiation. The concentration of protein in the Bradford assay can then be measured using a visible light spectrophotometer, and therefore does not require extensive equipment.
This method was developed in 1975 by Marion M. Bradford,
and has enabled significantly faster, more accurate protein
quantitation compared to previous methods: the Lowry procedure and the
biuret assay. Unlike the previous methods, the Bradford assay is not susceptible to
interference by several non-protein molecules, including ethanol, sodium
chloride, and magnesium chloride. However, it is susceptible to influence by strong alkaline buffering agents, such as sodium dodecyl sulfate (SDS).
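In practice, concentrations are read off a standard curve: absorbance at 595 nm is measured for a series of known protein standards, a line is fitted, and unknown samples are interpolated. The sketch below uses made-up absorbance values purely to show the calculation.

import numpy as np

# Bradford-style quantitation sketch: fit a standard curve of absorbance at
# 595 nm against known protein concentrations, then interpolate an unknown.
# All absorbance values below are made up for illustration.
standards_ug_ml = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])
absorbance_595 = np.array([0.00, 0.12, 0.25, 0.36, 0.49])

slope, intercept = np.polyfit(standards_ug_ml, absorbance_595, 1)

def protein_concentration(sample_absorbance):
    """Invert the linear standard curve to estimate concentration in ug/mL."""
    return (sample_absorbance - intercept) / slope

print(round(protein_concentration(0.30), 1))   # unknown sample, roughly 615 ug/mL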
Macromolecule blotting and probing
The terms northern, western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting, after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot, actually did not use the term.
Named after its inventor, biologist Edwin Southern,
the Southern blot is a method for probing for the presence of a
specific DNA sequence within a DNA sample. DNA samples before or after restriction enzyme (restriction endonuclease) digestion are separated by gel electrophoresis and then transferred to a membrane by blotting via capillary action.
The membrane is then exposed to a labeled DNA probe that has a base sequence complementary to the sequence of the DNA of interest. Southern blotting is less commonly used in laboratory science due to the capacity of other techniques, such as PCR,
to detect specific DNA sequences from DNA samples. These blots are
still used for some applications, however, such as measuring transgene copy number in transgenic mice or in the engineering of gene knockout embryonic stem cell lines.
The northern blot is used to study the presence of specific RNA
molecules as relative comparison among a set of different samples of
RNA. It is essentially a combination of denaturing RNA gel electrophoresis, and a blot. In this process RNA is separated based on size and is then transferred to a membrane that is then probed with a labeled complement
of a sequence of interest. The results may be visualized through a
variety of ways depending on the label used; however, most result in the revelation of bands representing the sizes of the RNA detected in the sample. The intensity of these bands is related to the amount of the
target RNA in the samples analyzed. The procedure is commonly used to
study when and how much gene expression is occurring by measuring how
much of that RNA is present in different samples, assuming that no
post-transcriptional regulation occurs and that the levels of mRNA
reflect proportional levels of the corresponding protein being produced.
It is one of the most basic tools for determining at what time, and
under what conditions, certain genes are expressed in living tissues.
A western blot is a technique by which specific proteins can be detected from a mixture of proteins. Western blots can be used to determine the size of isolated proteins, as well as to quantify their expression. In western blotting, proteins are first separated by size, in a thin gel sandwiched between two glass plates in a technique known as SDS-PAGE. The proteins in the gel are then transferred to a polyvinylidene fluoride (PVDF), nitrocellulose, nylon, or other support membrane. This membrane can then be probed with solutions of antibodies.
Antibodies that specifically bind to the protein of interest can then
be visualized by a variety of techniques, including colored products, chemiluminescence, or autoradiography. Often, the antibodies are labeled with enzymes. When a chemiluminescent substrate is exposed to the enzyme, it allows detection. Using western blotting techniques allows not only
detection but also quantitative analysis. Analogous methods to western
blotting can be used to directly stain specific proteins in live cells or tissue sections.
The eastern blotting technique is used to detect post-translational modification of proteins. Proteins blotted on to the PVDF or nitrocellulose membrane are probed for modifications using specific substrates.
A DNA microarray is a collection of spots attached to a solid support such as a microscope slide where each spot contains one or more single-stranded DNA oligonucleotide
fragments. Arrays make it possible to put down large quantities of very
small (100 micrometre diameter) spots on a single slide. Each spot has a
DNA fragment molecule that is complementary to a single DNA sequence. A variation of this technique allows the gene expression of an organism at a particular stage in development to be profiled (expression profiling). In this technique the RNA in a tissue is isolated and converted to labeled complementary DNA
(cDNA). This cDNA is then hybridized to the fragments on the array and
visualization of the hybridization can be done. Since multiple arrays
can be made with exactly the same position of fragments, they are
particularly useful for comparing the gene expression of two different
tissues, such as a healthy and cancerous tissue. Also, one can measure
what genes are expressed and how that expression changes with time or
with other factors.
There are many different ways to fabricate microarrays; the most common
are silicon chips, microscope slides with spots of ~100 micrometre
diameter, custom arrays, and arrays with larger spots on porous
membranes (macroarrays). There can be anywhere from 100 spots to more
than 10,000 on a given array. Arrays can also be made with molecules
other than DNA.
Allele-specific oligonucleotide (ASO) is a technique that allows
detection of single base mutations without the need for PCR or gel
electrophoresis. Short (20–25 nucleotides in length), labeled probes are exposed to the non-fragmented target DNA; hybridization occurs with high specificity due to the short length of the probes, and even a single base change will hinder hybridization. The target DNA is then washed
and the unhybridized probes are removed. The target DNA is then analyzed
for the presence of the probe via radioactivity or fluorescence. In
this experiment, as in most molecular biology techniques, a control must
be used to ensure successful experimentation.
In molecular biology, procedures and technologies are continually
being developed and older technologies abandoned. For example, before
the advent of DNA gel electrophoresis (agarose or polyacrylamide), the size of DNA molecules was typically determined by rate sedimentation in sucrose gradients, a slow and labor-intensive technique requiring expensive instrumentation; prior to sucrose gradients, viscometry
was used. Aside from their historical interest, it is often worth knowing about older technologies, as they are occasionally useful for solving new problems for which newer techniques are inappropriate.