
Tuesday, December 16, 2025

Wetware computer

From Wikipedia, the free encyclopedia
Diversity of neuronal morphologies in the auditory cortex

A wetware computer is an organic computer (also known as an artificial organic brain or a neurocomputer) composed of organic material, or "wetware", such as living neurons. Wetware computers composed of neurons differ from conventional computers because they use biological materials and offer the possibility of substantially more energy-efficient computing. While the wetware computer is still largely conceptual, there has been limited success in construction and prototyping, which has served as a proof of the concept's potential application to computing in the future. The most notable prototypes stem from research by biological engineer William Ditto during his time at the Georgia Institute of Technology. His construction in 1999 of a simple neurocomputer capable of basic addition using leech neurons was a significant step for the concept, and this research has been a primary driver of interest in creating such artificially constructed, yet still organic, brains.

Cultured brain cells forming a neural network, with the connections between neurons highlighted. Red stain marks neurites (axons and dendrites) and blue stain marks cell nuclei. Structures like this are what process information in wetware and organic computing.

Organic computers, or wetware, are a prospective technology intended to replace the central processing unit at the heart of a desktop or personal computer. They use the organic matter of living tissue cells, which act like the transistors of a computer hardware system by acquiring, storing, and analyzing data. Wetware is the name given to the computational properties of living systems, particularly human neural tissue, which allow parallel and self-organizing information processing via biochemical and electrical interactions. Wetware is distinct from hardware systems in that it is based on dynamic mechanisms such as synaptic plasticity and neurotransmitter diffusion, which provide unique benefits in terms of adaptability and robustness.

Origins and theoretical foundations

The term wetware came from cyberpunk fiction, notably through Gibson's Neuromancer, but was quickly taken up in scientific literature to describe computation by biological material. Early theories of biological computation borrowed from Alan Turing's morphogenesis model, which showed that chemical interactions could produce complex patterns without centralized control. Hopfield's associative memory networks also provided a foundation for biological information systems with fault tolerance and self-organization.

Major characteristics and processes

Biological wetware systems demonstrate dynamic reconfigurability underpinned by neuroplasticity and enable continuous learning and adaptation. Reaction-diffusion-based computing and molecular logic gates allow spatially parallel information processing unachievable in conventional systems. These systems also show fault tolerance and self-repair at the cellular and network level. The development of cerebral organoids—miniature lab-grown brains—demonstrates spontaneous learning behavior and suggests biological tissue as a viable computational substrate.

Overview

The concept of wetware is of specific interest to the field of computer manufacturing. Moore's law, which states that the number of transistors that can be placed on a silicon chip doubles roughly every two years, has acted as a goal for the industry for decades, but as components continue to shrink, meeting this goal has become more difficult and progress threatens to plateau. Because further miniaturization is limited by the physics of transistors and integrated circuits, wetware provides an unconventional alternative. A wetware computer composed of neurons is an appealing concept because, unlike conventional materials, which operate in binary (on/off), a neuron can shift between thousands of states, constantly altering its chemical conformation and redirecting electrical pulses through over 200,000 channels in any of its many synaptic connections. Because any one neuron has so many more possible settings than the binary states of conventional components, the space limitations are far fewer.

Background

The concept of wetware is distinct and unconventional, bearing only a loose resemblance to the hardware and software of conventional computers. While hardware is understood as the physical architecture of traditional computational devices, comprising integrated circuits and supporting infrastructure, software represents the encoded instructions and stored information. Wetware is a separate concept that uses organic molecules, mostly complex cellular structures (such as neurons), to create a computational device. In wetware, the ideas of hardware and software are intertwined and interdependent. The molecular and chemical composition of the organic or biological structure represents not only the physical structure of the wetware but also its software, continually reprogrammed by discrete shifts in electrical pulses and chemical concentration gradients as the molecules change their structures to communicate signals. The responsiveness of cells, proteins, and molecules to changing conformations, both within their structures and around them, ties internal programming and external structure together in a way that is alien to the current model of conventional computer architecture.

The structure of wetware represents a model in which external structure and internal programming are interdependent and unified, meaning that changes to the programming or internal communication between molecules of the device also represent a physical change in its structure. The dynamic nature of wetware borrows from the function of complex cellular structures in biological organisms. This combination of "hardware" and "software" into one dynamic, interdependent system that uses organic molecules and complexes to create an unconventional model for computational devices is a specific example of applied biorobotics.

The cell as a model of wetware

Cells can in many ways be seen as a naturally occurring form of wetware, much as the human brain is the preexisting model system for complex wetware. In his book Wetware: A Computer in Every Living Cell (2009), Dennis Bray argues that cells, the most basic form of life, are themselves highly complex computational structures. To simplify one of his arguments: a cell can be seen as a type of computer by virtue of its structured architecture, in which, much as in a traditional computer, many smaller components operate in tandem to receive input, process the information, and compute an output. In a simplified, non-technical analysis, cellular function can be broken into the following components: information and instructions for execution are stored as DNA in the cell; transcription machinery reads that DNA to produce RNA, which acts as distinctly encoded input; and ribosomes process the RNA to output a protein. Bray's argument in favor of viewing cells and cellular structures as naturally occurring computational devices is important when considering the more applied theories of wetware and biorobotics.
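
As a playful rendering of this input-process-output analogy, the sketch below treats a DNA string as the stored program, transcription as the input-encoding step, and translation as the processing step that outputs a protein. It is an illustration only; the codon table is truncated to a handful of entries.

```python
# Toy version of Bray's "cell as computer" analogy: DNA = stored instructions,
# RNA = encoded input, translation = processing that outputs a protein.
# Only a few codons are included, purely for illustration.

CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(coding_strand_dna):
    """Coding-strand DNA -> messenger RNA (T is read out as U)."""
    return coding_strand_dna.replace("T", "U")

def translate(mrna):
    """Read the mRNA three bases at a time and output a chain of amino acids."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate(transcribe("ATGTTTGGCTAA")))   # ['Met', 'Phe', 'Gly']
```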

Biorobotics

Wetware and biorobotics are closely related concepts that borrow from similar overall principles. A biorobotic structure can be defined as a system modeled on a preexisting organic complex, such as cells (neurons) or more complex structures like organs (the brain) or whole organisms. Unlike wetware, a biorobotic system is not necessarily composed of organic molecules; it may instead be built from conventional materials that are designed and assembled in a structure similar to, or derived from, a biological model. Biorobotics has many applications and is used to address the challenges of conventional computer architecture. Conceptually, designing a program, robot, or computational device after a preexisting biological model, such as a cell or even a whole organism, gives the engineer or programmer the benefit of incorporating the evolutionary advantages of that model into the structure.

Effects on users

Wetware technologies such as BCIs and neuromorphic chips offer new possibilities for user autonomy. For those with disabilities, such systems could restore motor or sensory functions and enhance quality of life. However, these technologies raise ethical questions about cognitive privacy, consent over biological data, and the risk of exploitation.

Without proper oversight, wetware technologies may also widen inequality, favoring those with the resources to access cognitive or physical enhancements and exacerbating existing social disparities. Strong ethical frameworks, inclusive development practices, and open systems of governance grounded in neuroethics will be essential to reduce these risks and to ensure that advances in wetware benefit all segments of society.

Applications and goals

Basic neurocomputer composed of leech neurons

In 1999, William Ditto and his team of researchers at the Georgia Institute of Technology and Emory University created a basic form of a wetware computer capable of simple addition by harnessing leech neurons. Leeches were used as a model organism because of the large size of their neurons and the ease with which they can be collected and manipulated. However, these results were never published in a peer-reviewed journal, prompting questions about the validity of the claims. The computer completed basic addition through electrical probes inserted into the neurons. Manipulating electrical currents through neurons was not a trivial accomplishment. Unlike conventional computer architecture, which is based on binary on/off states, neurons can exist in thousands of states and communicate with each other through synaptic connections, each containing over 200,000 channels, which can be dynamically shifted in a process called self-organization to constantly form and re-form new connections. A conventional computer program called the dynamic clamp, capable of reading the electrical pulses from the neurons in real time and interpreting them, was written by Eve Marder, a neurobiologist at Brandeis University. This program was used to manipulate the electrical signals input into the neurons so that they represented numbers and could communicate with each other to return the sum. While this computer is a very basic example of a wetware structure, it is a small-scale demonstration with far fewer neurons than are found in a more complex organ. Ditto has suggested that, by increasing the number of neurons present, the chaotic signals sent between them will self-organize into a more structured pattern, much as heart neurons in humans and other living organisms regulate themselves into a constant heartbeat.
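
To make the decoding idea concrete, the toy loop below shows the general shape of such a scheme in software: simulated firing rates stand in for the two input neurons, are decoded into integers, and their sum is re-encoded as a target rate. Everything here (the rate-to-number mapping, the values) is an assumption for illustration, not Ditto's or Marder's actual setup.

```python
# Illustrative only: interpret two simulated neurons' firing rates as numbers,
# add them, and re-encode the result as a target firing rate for an output
# neuron. The mapping of 10 Hz per unit is an arbitrary assumption.

SPIKES_PER_UNIT = 10.0   # assume 10 spikes/second represents the integer 1

def decode(firing_rate_hz):
    """Interpret a neuron's firing rate as a small integer."""
    return round(firing_rate_hz / SPIKES_PER_UNIT)

def encode(number):
    """Target firing rate that would represent a given integer."""
    return number * SPIKES_PER_UNIT

neuron_a_rate = 32.0   # decodes to 3 under this toy scheme
neuron_b_rate = 48.0   # decodes to 5
total = decode(neuron_a_rate) + decode(neuron_b_rate)
print(total, "-> drive the output neuron toward", encode(total), "Hz")
```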

Biological models for conventional computing

After his work creating a basic computer from leech neurons, Ditto continued to work not only with organic molecules and wetware but also on applying the chaotic behavior of biological systems and organic molecules to conventional materials and logic gates. Chaotic systems have advantages for generating patterns and computing higher-order functions such as memory, arithmetic logic, and input/output operations. In his article "Construction of a Chaotic Computer Chip", Ditto discusses the programming advantages of chaotic systems, whose greater sensitivity allows the logic gates of his conceptual chaotic chip to respond and be reconfigured quickly. The main difference between a chaotic computer chip and a conventional computer chip is the reconfigurability of the chaotic system. Unlike a traditional chip, where a programmable gate array element must be reconfigured by switching many single-purpose logic gates, a chaotic chip can reconfigure all of its logic gates through control of the pattern generated by its non-linear chaotic element.
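
This reconfigurability can be illustrated with a minimal "chaogate" sketch: a single chaotic element (here a logistic map) whose logic function is changed only by adjusting its encoding and threshold parameters. The parameter values below are chosen for demonstration and are not taken from Ditto's published designs.

```python
# Toy chaotic logic gate ("chaogate"): the same chaotic element realizes
# different Boolean functions depending only on encoding/threshold parameters.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def chaogate(a, b, x_star=0.25, delta=0.25, threshold=0.75):
    """Encode the two logic inputs as shifts of the initial state, iterate the
    chaotic map once, and threshold the result to obtain a binary output."""
    x = x_star + delta * (a + b)
    x = logistic(x)
    return 1 if x > threshold else 0

# With the defaults above the element behaves as XOR; other parameter choices
# (e.g. delta=0.125 with threshold=0.95) turn the same element into AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", chaogate(a, b))
```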

Impact of wetware in cognitive biology

Cognitive biology evaluates cognition as a basic biological function. W. Tecumseh Fitch, a professor of cognitive biology at the University of Vienna, is a leading theorist on ideas of cellular intentionality. The idea is that not only do whole organisms have a sense of "aboutness", or intentionality, but that single cells also carry a sense of intentionality through their ability to adapt and reorganize in response to certain stimuli. Fitch discusses the idea of nano-intentionality, specifically in regard to neurons and their ability to rearrange themselves to create neural networks. He regards the ability of cells such as neurons to respond independently to stimuli such as damage as a form of "intrinsic intentionality", explaining that "while at a vastly simpler level than intentionality at the human cognitive level, I propose that this basic capacity of living things [response to stimuli] provides the necessary building blocks for cognition and higher-order intentionality." Fitch describes the value of his research to specific areas of computer science, such as artificial intelligence and computer architecture. He states, "If a researcher aims to make a conscious machine, doing it with rigid switches (whether vacuum tubes or static silicon chips) is barking up the wrong tree." Fitch believes that an important ingredient in the development of areas such as artificial intelligence is wetware with nano-intentionality and an autonomous ability to adapt and restructure itself.

In a review of the above-mentioned research conducted by Fitch, Daniel Dennett, a professor at Tufts University, discusses the importance of the distinction between the concept of hardware and software when evaluating the idea of wetware and organic material such as neurons. Dennett discusses the value of observing the human brain as a preexisting example of wetware. He sees the brain as having "the competence of a silicon computer to take on an unlimited variety of temporary cognitive roles." Dennett disagrees with Fitch on certain areas, such as the relationship of software/hardware versus wetware, and what a machine with wetware might be capable of. Dennett highlights the importance of additional research into human cognition to better understand the intrinsic mechanisms by which the human brain can operate, to better create an organic computer.

Medical applications

Wetware computers should not be confused with brain-on-a-chip devices, which are mostly aimed at replacing animal models in preclinical drug screening. Modern wetware computers use similar technology derived from the brain-on-a-chip field, but medical applications of wetware computing specifically have not been established.

Ethical and philosophical implications

Wetware computers may have substantial ethical implications, for instance relating to their possible potential for sentience and suffering, and to dual-use technology.

Moreover, in some cases the human brain itself may be connected as a kind of "wetware" to other information technology systems, which may also have large social and ethical implications, including issues related to intimate access to people's brains. For example, in 2021 Chile became the first country to approve a neurorights law that establishes rights to personal identity, free will, and mental privacy.

The concept of artificial insects may raise substantial ethical questions, including questions related to the decline in insect populations.

It is an open question whether human cerebral organoids could develop a degree or form of consciousness. Whether and how they could acquire moral status, with related rights and limits, may also become a question in the future. There is research on how consciousness could be detected. As cerebral organoids may acquire human brain-like neural function, subjective experience and consciousness may be feasible. Moreover, it may be possible that they acquire such capacities upon transplantation into animals. A study notes that it may, in various cases, be morally permissible "to create self-conscious animals by engrafting human cerebral organoids, but in the case, the moral status of such animals should be carefully considered".

Applications

Wetware has driven innovations in brain-computer interfaces (BCIs), allowing neural activity to control external devices and enabling people with disabilities to regain communication and movement. Neuromorphic engineering, which mimics neural architectures using silicon, has resulted in low-power, highly adaptive artificial systems.

Synthetic biology has enabled the development of programmable biological processors for diagnostics and smart therapeutics. Brain organoids are also being used for computational pattern recognition and memory emulation. Large-scale international efforts like the Human Brain Project aim to simulate the entire human brain using insights from wetware.

Evaluating potential and limitations

The core advantage of wetware is its potential to overcome the rigidity and energy inefficiency of binary transistor-based systems. Digital systems operate through fixed binary pathways and consume increasing energy as computational loads increase. Wetware, in contrast, uses decentralized and adaptive data flow that mimics biology. Notwithstanding the encouraging advances, several challenges hinder the effective utilization of wetware computing systems. Scalability is problematic because of the inherent variability of biological systems and their sensitivity to environmental factors, which makes large-scale implementation difficult. Additionally, the absence of standards for combining silicon and biological systems hampers reproducibility and cooperation between research groups. Biological systems must also be carefully stabilized against genetic drift and contamination to maintain reliable computational functionality.

Good parts – Replacing binary systems with organic cell structures opens the door to decentralized adaptive systems. Cells naturally form clusters and connections, much like neurons transmitting electrical and biochemical signals. Such a shift could increase scalability and efficiency, enabling users to interact with information in an intuitive and organic manner. Still, biological systems are sensitive to environmental changes, which presents challenges for standardization and reproducibility. Additionally, ethical concerns remain, especially regarding the use of living neural tissue and lab-grown brain constructs.

Bad parts – Despite its promise, organic computing currently suffers from major limitations. Transistors still dominate computer architecture with a binary "on/off" model that restricts long-term energy efficiency and adaptability. As a result, personal computers in everyday use, whether for work, games, or research, often contribute to higher energy consumption and environmental impact.

Future applications

While there have been few major developments in the creation of an organic computer since the neuron-based calculator developed by Ditto in the 1990s, research continues to push the field forward; in 2023, researchers at the University of Illinois Urbana-Champaign constructed a functioning computer using 80,000 mouse neurons as a processor that can detect light and electrical signals. Projects such as Ditto's modeling of chaotic pathways in silicon chips have yielded discoveries in how to organize traditional silicon chips and structure computer architecture more efficiently. Ideas emerging from the field of cognitive biology also continue to drive discoveries in how to structure systems for artificial intelligence so that they better imitate the preexisting systems in humans.

In a proposed fungal computer using basidiomycetes, information is represented by spikes of electrical activity, a computation is implemented in a mycelium network, and an interface is realized via fruit bodies.

Connecting cerebral organoids (including computer-like wetware) with other nerve tissues may become feasible in the future, as may the connection of physical artificial neurons (not necessarily organic) and the control of muscle tissue. External modules of biological tissue could trigger parallel trains of stimulation back into the brain. All-organic devices could be advantageous because they could be biocompatible, which may allow them to be implanted into the human body. This may enable treatments of certain diseases and injuries of the nervous system.

Prototypes

  • In late 2021, scientists, including two from Cortical Labs, demonstrated that brain cells grown in culture and integrated into digital systems can carry out goal-directed tasks with measurable performance scores. In particular, the human brain cells learned to play a version of Pong simulated via electrophysiological stimulation, which they learned faster than known machine intelligence systems, albeit to a lower skill level than either AI or human players. Moreover, the study suggests it provides "first empirical evidence" of differences in information-processing capacity between neurons from different species, as the human brain cells performed better than mouse cells.
  • Also in December 2021, researchers from Max Planck Institute for Polymer Research reported the development of organic low-power neuromorphic electronics which they built into a robot, enabling it to learn sensorimotorically within the real world, rather than via simulations. For the chip, polymers were used and coated with an ion-rich gel to enable the material to carry an electric charge like real neurons.
  • In 2022, researchers from the Max Planck Institute for Polymer Research demonstrated an artificial spiking neuron based on polymers that operates in biological wetware, enabling synergistic operation between the artificial and biological components.

Companies active in wetware computing

Three companies are focusing on wetware computing using living neurons:

Convergence of AI and wetware

One technology developing today is the fusion of artificial intelligence (AI) with wetware. Early research suggests that hybrid systems combining living neural networks with AI could enable self-repair, real-time adaptation, and even forms of emotional intelligence. These systems are more flexible than conventional AI and can integrate learning and memory in real time. Such integration lays a foundation for AI that more closely mirrors human cognition and behavior, potentially creating intelligent systems grounded in neuroscience.

Neural networks embodied in AI systems could facilitate continuous learning, emotional processing, and fault tolerance better than existing silicon-based implementations. Additionally, AI systems based on neuroethical principles could uphold transparency, fairness, and autonomy. While research is at an early stage, the integration of wetware and artificial intelligence seeks to redefine both fields, with the possibility of creating more human-like, moral, and resilient intelligent systems.

Cognitive computer

From Wikipedia, the free encyclopedia

A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.

In 2023, IBM's proof-of-concept NorthPole chip (optimized for 2-, 4- and 8-bit precision) achieved remarkable performance in image recognition.

In 2013, IBM developed Watson, a cognitive computer that uses neural networks and deep learning techniques. The following year, it developed the TrueNorth microchip architecture, which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers. In 2017, Intel also announced its version of a cognitive chip, Loihi, which it intended to make available to university and research labs in 2018. Intel (most notably with its Pohoiki Beach and Pohoiki Springs systems), Qualcomm, and others are steadily improving neuromorphic processors.

IBM TrueNorth chip

DARPA SyNAPSE board with 16 TrueNorth chips

TrueNorth was a neuromorphic CMOS integrated circuit produced by IBM in 2014. It is a manycore network-on-a-chip design with 4,096 cores, each having 256 programmable simulated neurons, for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its basic transistor count is 5.4 billion.
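
A quick arithmetic check of how the counts quoted above fit together:

```python
# TrueNorth counts as quoted above.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core            # 1,048,576 (just over a million)
synapses = neurons * synapses_per_neuron      # 268,435,456
print(neurons, synapses, synapses == 2**28)   # True: "268 million" is 2^28
```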

In 2023, Zhejiang University and Alibaba developed Darwin3, a neuromorphic chip. Having been designed around 2023, it is fairly modern compared with IBM's TrueNorth or Intel's Loihi.

Details

Because memory, computation, and communication are handled in each of the 4,096 neurosynaptic cores, TrueNorth circumvents the von Neumann architecture bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th that of conventional microprocessors. The SyNAPSE chip operates at lower temperatures and power because it draws only the power necessary for computation. Skyrmions have been proposed as models of the synapse on a chip.

The neurons are emulated using a Linear-Leak Integrate-and-Fire (LLIF) model, a simplification of the leaky integrate-and-fire model.
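
As a rough illustration of the model family involved, the sketch below implements a generic linear-leak integrate-and-fire update; the parameter values are arbitrary and are not TrueNorth's actual configuration.

```python
# Generic linear-leak integrate-and-fire update: integrate weighted input,
# subtract a constant leak, and spike when the potential crosses a threshold.
# Parameters are illustrative, not TrueNorth's real values.

def llif_step(v, synaptic_input, leak=0.05, threshold=1.0, v_reset=0.0):
    v = v + synaptic_input - leak
    if v >= threshold:
        return v_reset, True           # fire and reset
    return max(v, 0.0), False          # stay sub-threshold (clamped at zero)

v = 0.0
for t, drive in enumerate([0.3, 0.3, 0.3, 0.3, 0.0, 0.0]):
    v, fired = llif_step(v, drive)
    print(t, round(v, 2), fired)       # spikes on the fourth step
```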

According to IBM, it does not have a clock, operates on unary numbers, and computes by counting to a maximum of 19 bits. The cores are event-driven by using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network on chip (NOC).

IBM developed a new software ecosystem to program and use TrueNorth. It included a simulator, a new programming language, an integrated programming environment, and libraries. This lack of backward compatibility with any previous technology (e.g., C++ compilers) poses serious vendor lock-in risks and other adverse consequences that may prevent its commercialization in the future.

Research

In 2018, a cluster of TrueNorth chips linked over a network to a master computer was used in stereo vision research that attempted to extract the depth of rapidly moving objects in a scene.

IBM NorthPole chip

In 2023, IBM released its NorthPole chip, a proof-of-concept for dramatically improving performance by intertwining compute with memory on-chip, thus eliminating the von Neumann bottleneck. It blends approaches from IBM's 2014 TrueNorth system with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. It can run ResNet-50 or YOLOv4 image recognition tasks about 22 times faster, with 25 times less energy and 5 times less space, than GPUs fabricated on the same 12 nm node process. It includes 224 MB of RAM and 256 processor cores and can perform 2,048 operations per core per cycle at 8-bit precision, and 8,192 operations at 2-bit precision. It runs at between 25 and 425 MHz. It is an inferencing chip, but it cannot yet handle GPT-4 because of memory and accuracy limitations.

Intel Loihi chip

Pohoiki Springs

Pohoiki Springs is a system that incorporates Intel's self-learning neuromorphic chip, named Loihi, introduced in 2017, perhaps named after the Hawaiian seamount Lōʻihi. Intel claims Loihi is about 1000 times more energy efficient than general-purpose computing systems used to train neural networks. In theory, Loihi supports both machine learning training and inference on the same silicon independently of a cloud connection, and more efficiently than convolutional neural networks or deep learning neural networks. Intel points to a system for monitoring a person's heartbeat, taking readings after events such as exercise or eating, and using the chip to normalize the data and work out the ‘normal’ heartbeat. It can then spot abnormalities and deal with new events or conditions.

The first iteration of the chip was made using Intel's 14 nm fabrication process and houses 128 clusters of 1,024 artificial neurons each, for a total of 131,072 simulated neurons. This offers around 130 million synapses, far fewer than the human brain's 800 trillion synapses, and behind IBM's TrueNorth. Loihi is available for research purposes to more than 40 academic research groups as a USB form-factor device.

In October 2019, researchers from Rutgers University published a research paper to demonstrate the energy efficiency of Intel's Loihi in solving simultaneous localization and mapping.

In March 2020, Intel and Cornell University published a research paper to demonstrate the ability of Intel's Loihi to recognize different hazardous materials, which could eventually help to "diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide".

Pohoiki Beach

Intel's Loihi 2, named Pohoiki Beach, was released in September 2021 with 64 cores. It boasts faster speeds, higher-bandwidth inter-chip communications for enhanced scalability, increased capacity per chip, a more compact size due to process scaling, and improved programmability.

Hala Point

Hala Point packages 1,152 Loihi 2 processors produced on Intel 3 process node in a six-rack-unit chassis. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming 2,600 watts of power. It includes over 2,300 embedded x86 processors for ancillary computations.
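
For a sense of scale, simple division of the figures quoted above gives the following per-chip and per-watt averages (derived numbers, not separately published specifications):

```python
# Averages implied by the quoted Hala Point figures.
neurons = 1.15e9
synapses = 128e9
chips = 1152
cores = 140_544
watts = 2600

print(round(neurons / chips))     # ~1.0 million neurons per Loihi 2 chip
print(round(cores / chips))       # 122 neuromorphic cores per chip
print(round(synapses / neurons))  # ~111 synapses per neuron on average
print(round(neurons / watts))     # ~440,000 neurons per watt
```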

Intel claimed in 2024 that Hala Point was the world's largest neuromorphic system. It uses Loihi 2 chips and is claimed to offer 10x more neuron capacity and up to 12x higher performance than its predecessor, Pohoiki Springs. Systems based on the Darwin3 chip have since been claimed to exceed these specifications.

Hala Point provides up to 20 quadrillion operations per second (20 petaops), with efficiency exceeding 15 trillion 8-bit operations per second per watt on conventional deep neural networks.

Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing 16 PB/s of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 TB/s of inter-chip bandwidth.

The system can process its 1.15 billion neurons 20 times faster than a human brain. Its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.

Loihi-based systems can perform inference and optimization using 100 times less energy at speeds as much as 50 times faster than CPU/GPU architectures.

Intel has claimed that Hala Point could eventually be applied to large language models (LLMs), though much further research is needed.

SpiNNaker

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group at the Department of Computer Science, University of Manchester.

Criticism

Critics argue that a room-sized computer – as in the case of IBM's Watson – is not a viable alternative to a three-pound human brain. Some also cite the difficulty for a single system to bring so many elements together, such as the disparate sources of information as well as computing resources.

In 2021, The New York Times published Steve Lohr's article "What Ever Happened to IBM's Watson?", which described some of IBM Watson's costly failures. One of them, a cancer-related project called the Oncology Expert Advisor, was abandoned in 2016; during the collaboration, Watson could not make effective use of patient data and struggled to decipher doctors' notes and patient histories.

The development of LLMs has placed a new emphasis on cognitive computers, because the Transformer technology that underpins LLMs demands huge amounts of energy from GPUs and PCs. Cognitive computers use far less energy, but the details of spike-timing-dependent plasticity (STDP) and neuron models cannot yet match the accuracy of backpropagation, so ANN-to-SNN weight translations, using techniques such as quantization-aware training (QAT), post-training quantization (PTQ), and progressive quantization, are becoming popular, with their own limitations.
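
As a rough sketch of the kind of low-precision weight mapping referred to above, the toy post-training quantization below rounds a floating-point weight matrix to 8-bit integers with a single scale factor. Real conversion toolchains use calibration data, per-channel scales, and activation handling not shown here.

```python
import numpy as np

# Toy post-training quantization (PTQ): map float weights to int8 and back.
def quantize_int8(weights):
    scale = np.abs(weights).max() / 127.0      # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max reconstruction error:", float(np.abs(w - dequantize(q, s)).max()))
```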

Fusion rocket

From Wikipedia, the free encyclopedia
A schematic of a fusion-driven rocket by NASA

A fusion rocket is a theoretical design for a rocket driven by fusion propulsion that could provide efficient and sustained acceleration in space without the need to carry a large fuel supply. The design requires fusion power technology beyond current capabilities, and much larger and more complex rockets.

Fusion nuclear pulse propulsion is one approach to using nuclear fusion energy to provide propulsion.

Fusion's main advantage is its very high specific impulse, while its main disadvantage is the (likely) large mass of the reactor. A fusion rocket may produce less radiation than a fission rocket, reducing the shielding mass needed. The simplest way of building a fusion rocket is to use hydrogen bombs, as proposed in Project Orion, but such a spacecraft would be massive and the Partial Nuclear Test Ban Treaty prohibits the use of such bombs. For that reason, bomb-based rockets would likely be limited to operating only in space. An alternate approach uses electrical (e.g. ion) propulsion with electric power generated by fusion instead of direct thrust.

Electricity generation vs. direct thrust

Spacecraft propulsion methods such as ion thrusters require electric power to run, but are highly efficient. In some cases their thrust is limited by the amount of power that can be generated (for example, a mass driver). An electric generator running on fusion power could drive such a ship. One disadvantage is that conventional electricity production requires a low-temperature energy sink, which is difficult (i.e. heavy) in a spacecraft. Direct conversion of the kinetic energy of fusion products into electricity mitigates this problem.

One attractive possibility is to direct the fusion exhaust out the back of the rocket to provide thrust without the intermediate production of electricity. This would be easier with some confinement schemes (e.g. magnetic mirrors) than with others (e.g. tokamaks). It is also more attractive for "advanced fuels" (see aneutronic fusion). Helium-3 propulsion would use the fusion of helium-3 atoms as a power source. Helium-3, an isotope of helium with two protons and one neutron, could be fused with deuterium in a reactor, and the resulting energy release could expel propellant out the back of the spacecraft. Helium-3 is proposed as a power source for spacecraft mainly because of its lunar abundance: scientists estimate that 1 million tons of accessible helium-3 are present on the Moon. Only about 20% of the power produced by the D-T reaction could be used this way; the other 80% is released as neutrons which, because they cannot be directed by magnetic fields or solid walls, would be difficult to direct toward thrust and may in turn require shielding. Helium-3 is produced via beta decay of tritium, which can be produced from deuterium, lithium, or boron.
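
The 80% figure follows from the standard energy split of the two reactions (well-established values):

```latex
% D-T: most of the yield leaves in the neutron; D-He3: charged products only.
\begin{align*}
  \mathrm{D} + \mathrm{T} &\rightarrow {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + \mathrm{n}\,(14.1\ \mathrm{MeV}),
    & \tfrac{14.1}{17.6} \approx 80\% \text{ of the energy in the neutron} \\
  \mathrm{D} + {}^{3}\mathrm{He} &\rightarrow {}^{4}\mathrm{He}\,(3.6\ \mathrm{MeV}) + \mathrm{p}\,(14.7\ \mathrm{MeV}),
    & \text{all of the energy in charged particles}
\end{align*}
```

Because the D-He3 products are charged, they can in principle be directed by magnetic fields straight into thrust.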

Even if a self-sustaining fusion reaction cannot be produced, it might be possible to use fusion to boost the efficiency of another propulsion system, such as a VASIMR engine.

Confinement alternatives

Magnetic

To sustain a fusion reaction, the plasma must be confined. The most widely studied configuration for terrestrial fusion is the tokamak, a form of magnetic confinement fusion. Currently, tokamaks weigh a great deal, so the thrust-to-weight ratio would seem unacceptable. In 2001, NASA's Glenn Research Center proposed a small-aspect-ratio spherical torus reactor for its "Discovery II" conceptual vehicle design. "Discovery II" could deliver a crewed 172-metric-ton payload to Jupiter in 118 days (or to Saturn in 212 days) using 861 metric tons of hydrogen propellant plus 11 metric tons of helium-3-deuterium (D-He3) fusion fuel. The hydrogen is heated by the fusion plasma debris to increase thrust, at the cost of reduced exhaust velocity (348–463 km/s) and hence increased propellant mass.
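
For intuition, the Tsiolkovsky rocket equation can be applied to the Discovery II figures above. The vehicle's structural (dry) mass is not given in the text, so the value below is a placeholder assumption purely to make the arithmetic concrete.

```python
from math import log

# Rocket-equation sketch using the Discovery II numbers quoted above.
v_e = 400e3                 # m/s, near the middle of the quoted 348-463 km/s
payload = 172e3             # kg (quoted)
propellant = 861e3 + 11e3   # kg, hydrogen plus D-He3 fuel (quoted)
dry_structure = 600e3       # kg -- hypothetical, NOT from the article

m_final = dry_structure + payload
m_initial = m_final + propellant
delta_v = v_e * log(m_initial / m_final)
print(f"delta-v = {delta_v / 1e3:.0f} km/s")   # ~300 km/s with these assumptions
```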

Inertial

The main alternative to magnetic confinement is inertial confinement fusion (ICF), such as that proposed by Project Daedalus. A small pellet of fusion fuel (with a diameter of a couple of millimeters) would be ignited by an electron beam or a laser. To produce direct thrust, a magnetic field forms the pusher plate. In principle, the Helium-3-Deuterium reaction or an aneutronic fusion reaction could be used to maximize the energy in charged particles and to minimize radiation, but it is highly questionable whether using these reactions is technically feasible. Both the detailed design studies in the 1970s, the Orion drive and Project Daedalus, used inertial confinement. In the 1980s, Lawrence Livermore National Laboratory and NASA studied an ICF-powered "Vehicle for Interplanetary Transport Applications" (VISTA). The conical VISTA spacecraft could deliver a 100-tonne payload to Mars orbit and return to Earth in 130 days, or to Jupiter orbit and back in 403 days. 41 tonnes of deuterium/tritium (D-T) fusion fuel would be required, plus 4,124 tonnes of hydrogen expellant. The exhaust velocity would be 157 km/s.

The very large necessary mass and the challenge of managing the heat produced in space may make an ICF reactor unworkable in space travel.

Magnetized target

Magnetized target fusion (MTF) is a relatively new approach that combines the best features of the more widely studied magnetic confinement fusion (i.e. good energy confinement) and inertial confinement fusion (i.e. efficient compression heating and wall-free containment of the fusing plasma) approaches. Like the magnetic approach, the fusion fuel is confined at low density by magnetic fields while it is heated into a plasma, but like the inertial confinement approach, fusion is initiated by rapidly squeezing the target to dramatically increase fuel density, and thus temperature. MTF uses "plasma guns" (i.e. electromagnetic acceleration techniques) instead of powerful lasers, leading to low-cost, low-weight compact reactors. The NASA/MSFC Human Outer Planets Exploration (HOPE) group has investigated a crewed MTF propulsion spacecraft capable of delivering a 164-tonne payload to Jupiter's moon Callisto using 106–165 metric tons of propellant (hydrogen plus either D-T or D-He3 fusion fuel) in 249–330 days. This design would thus be considerably smaller and more fuel-efficient, due to its higher exhaust velocity (700 km/s), than the previously mentioned "Discovery II" and "VISTA" concepts.

Inertial electrostatic

Another popular confinement concept for fusion rockets is inertial electrostatic confinement (IEC), such as in the Farnsworth-Hirsch Fusor or the Polywell variation under development by Energy-Matter Conversion Corporation (EMC2). The University of Illinois has defined a 500-tonne "Fusion Ship II" concept capable of delivering a 100,000 kg crewed payload to Jupiter's moon Europa in 210 days. Fusion Ship II utilizes ion rocket thrusters (343 km/s exhaust velocity) powered by ten D-He3 IEC fusion reactors. The concept would need 300 tonnes of argon propellant for a 1-year round trip to the Jupiter system. Robert Bussard published a series of technical articles discussing its application to spaceflight throughout the 1990s. His work was popularised by an article in the Analog Science Fiction and Fact publication, where Tom Ligon described how the fusor would make for a highly effective fusion rocket.

Antimatter

A still more speculative concept is antimatter-catalyzed nuclear pulse propulsion, which would use antimatter to catalyze a fission and fusion reaction, allowing much smaller fusion explosions to be created. During the 1990s an abortive design effort was conducted at Penn State University under the name AIMStar. The project would require more antimatter than can currently be produced. In addition, some technical hurdles need to be surpassed before it would be feasible.

Development projects

Neuromorphic computing

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neuromorphic_computing

Neuromorphic computing is a computing approach inspired by the human brain's structure and function. It uses artificial neurons to perform computations, mimicking neural systems for tasks such as perception, motor control, and multisensory integration. These systems, implemented in analog, digital, or mixed-mode VLSI, prioritize robustness, adaptability, and learning by emulating the brain’s distributed processing across small computing elements. This interdisciplinary field integrates biology, physics, mathematics, computer science, and electronic engineering to develop systems that emulate the brain’s morphology and computational strategies. Neuromorphic systems aim to enhance energy efficiency and computational power for applications including artificial intelligence, pattern recognition, and sensory processing.

History

Carver Mead proposed one of the first applications for neuromorphic engineering in the late 1980s. In 2006, researchers at Georgia Tech developed a field programmable neural array, a silicon-based chip modeling neuron channel-ion characteristics. In 2011, MIT researchers created a chip mimicking synaptic communication using 400 transistors and standard CMOS techniques.

In 2012 HP Labs researchers reported that Mott memristors exhibit volatile behavior at low temperatures, enabling the creation of neuristors that mimic neuron behavior and support Turing machine components. Also in 2012, Purdue University researchers presented a neuromorphic chip design using lateral spin valves and memristors, noted for energy efficiency.

In 2013, the Blue Brain Project was creating detailed digital models of rodent brains.

Neurogrid, developed by Brains in Silicon at Stanford University, used 16 NeuroCore chips to emulate 65,536 neurons with high energy efficiency in 2014. The 2014 BRAIN Initiative and IBM’s TrueNorth chip contributed to neuromorphic advancements.

The 2016 BrainScaleS project, a hybrid neuromorphic supercomputer at the University of Heidelberg, operated 864 times faster than biological neurons.

In 2017, Intel unveiled its Loihi chip, using an asynchronous artificial neural network for efficient learning and inference. Also in 2017 IMEC’s self-learning chip, based on OxRAM, demonstrated music composition by learning from minuets.

In 2019, the European Union funded neuromorphic quantum computing to explore quantum operations using neuromorphic systems. In 2022, MIT researchers developed artificial synapses using protons for analog deep learning, and researchers at the Max Planck Institute for Polymer Research developed an organic artificial spiking neuron for in-situ neuromorphic sensing and biointerfacing.

Researchers reported in 2024 that chemical systems in liquid solutions can detect sound at various wavelengths, offering potential for neuromorphic applications.

Neurological inspiration

Neuromorphic engineering emulates the brain’s structure and operations, focusing on the analog nature of biological computation and the role of neurons in cognition. The brain processes information via neurons using chemical signals, abstracted into mathematical functions. Neuromorphic systems distribute computation across small elements, similar to neurons, using methods guided by anatomical and functional neural maps from electron microscopy and neural connection studies.

Implementation

Neuromorphic systems employ hardware such as oxide-based memristors, spintronic memories, threshold switches, and transistors. Software implementations train spiking neural networks using error backpropagation.

Neuromemristive systems

Neuromemristive systems use memristors to implement neuroplasticity, focusing on abstract neural network models rather than detailed biological mimicry. These systems enable applications in speech recognition, face recognition, and object recognition, and can replace conventional digital logic gates. The Caravelli-Traversa-Di Ventra equation describes memristive memory evolution, revealing tunneling phenomena and Lyapunov functions.

Neuromorphic sensors

Neuromorphic principles extend to sensors, which register changes in a stimulus at each sensing element individually rather than sampling complete frames, optimizing power consumption.

An example of this applied to detecting light is the retinomorphic sensor or, when employed in an array, an event camera.
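
A minimal software sketch of the event-generation principle (an event is emitted only where the log-brightness at a pixel changes by more than a threshold); the threshold and tiny test frame are illustrative, not any particular sensor's parameters.

```python
import numpy as np

# Event-camera-style processing: report per-pixel brightness changes as events
# instead of transmitting full frames.

def events_from_frames(prev_frame, new_frame, threshold=0.2):
    """Return (row, col, polarity) for pixels whose log-brightness changed."""
    delta = np.log(new_frame + 1e-6) - np.log(prev_frame + 1e-6)
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    return [(int(r), int(c), 1 if delta[r, c] > 0 else -1)
            for r, c in zip(rows, cols)]

prev = np.full((4, 4), 0.5)
new = prev.copy()
new[1, 2] = 0.9                        # one pixel gets brighter
print(events_from_frames(prev, new))   # -> [(1, 2, 1)]
```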

Ethical considerations

Neuromorphic systems raise the same ethical questions as those for other approaches to artificial intelligence. Daniel Lim argued that advanced neuromorphic systems could lead to machine consciousness, raising concerns about whether civil rights and other protocols should be extended to them. Legal debates, such as in Acohs Pty Ltd v. Ucorp Pty Ltd, question ownership of work produced by neuromorphic systems, as non-human-generated outputs may not be copyrightable.

Neurorobotics

From Wikipedia, the free encyclopedia

Neurorobotics is the combined study of neuroscience, robotics, and artificial intelligence. It is the science and technology of embodied autonomous neural systems. Neural systems include brain-inspired algorithms (e.g. connectionist networks), computational models of biological neural networks (e.g. artificial spiking neural networks, large-scale simulations of neural microcircuits) and actual biological systems (e.g. in vivo and in vitro neural nets). Such neural systems can be embodied in machines with mechanic or any other forms of physical actuation. This includes robots, prosthetic or wearable systems but also, at smaller scale, micro-machines and, at the larger scales, furniture and infrastructures.

Neurorobotics is the branch of science combining neuroscience and robotics that deals with the study and application of the science and technology of embodied autonomous neural systems, such as brain-inspired algorithms. It is based on the idea that the brain is embodied and the body is embedded in the environment. Therefore, most neurorobots are required to function in the real world, as opposed to a simulated environment.

Beyond brain-inspired algorithms for robots, neurorobotics may also involve the design of brain-controlled robot systems.

Major classes of models

Neurorobots can be divided into various major classes based on the robot's purpose. Each class is designed to implement a specific mechanism of interest for study. Common types of neurorobots are those used to study motor control, memory, action selection, and perception.

Locomotion and motor control

Neurorobots are often used to study motor feedback and control systems, and have proved their merit in developing controllers for robots. Locomotion is modeled by a number of neurologically inspired theories on the action of motor systems. Locomotion control has been mimicked using models of central pattern generators, clumps of neurons capable of driving repetitive behavior, to make four-legged walking robots. Other groups have expanded the idea of combining rudimentary control systems into a hierarchical set of simple autonomous systems. These systems can formulate complex movements from a combination of these rudimentary subsets. This theory of motor action is based on the organization of cortical columns, which progressively integrate simple sensory input into complex afferent signals, or break complex motor programs down into simple controls for each muscle fiber in efferent signals, forming a similar hierarchical structure.
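
As an illustration of the central-pattern-generator idea, the sketch below couples four phase oscillators so that diagonal legs move together, a trot-like arrangement; the frequency, coupling gain, and gait offsets are assumptions for demonstration and are not taken from any specific robot discussed here.

```python
import math

# Minimal CPG: four coupled phase oscillators whose relative phases settle into
# a trot-like pattern (legs 0 & 3 in phase, legs 1 & 2 half a cycle later).

def cpg_step(phases, dt=0.01, freq=1.5, coupling=2.0):
    offsets = [0.0, math.pi, math.pi, 0.0]          # desired gait phases
    new_phases = []
    for i, phi in enumerate(phases):
        dphi = 2 * math.pi * freq                   # intrinsic oscillation
        for j, phj in enumerate(phases):            # pull toward desired offsets
            dphi += coupling * math.sin(phj - phi - (offsets[j] - offsets[i]))
        new_phases.append(phi + dphi * dt)
    return new_phases

phases = [0.0, 0.1, 0.2, 0.3]
for _ in range(1000):                               # let the oscillators lock
    phases = cpg_step(phases)
# a joint command per leg could then be derived, e.g. hip angle ~ sin(phase)
print([round(math.sin(p), 2) for p in phases])
```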

Another method for motor control uses learned error correction and predictive controls to form a sort of simulated muscle memory. In this model, awkward, random, and error-prone movements are corrected for using error feedback to produce smooth and accurate movements over time. The controller learns to create the correct control signal by predicting the error. Using these ideas, robots have been designed which can learn to produce adaptive arm movements or to avoid obstacles in a course.

Learning and memory systems

Some neurorobots are designed to test theories of animal memory systems. Many studies examine the memory system of rats, particularly the rat hippocampus, dealing with place cells, which fire for a specific location that has been learned. Systems modeled after the rat hippocampus are generally able to learn mental maps of the environment, including recognizing landmarks and associating behaviors with them, allowing them to predict upcoming obstacles and landmarks.

Another study produced a robot based on the proposed learning paradigm of barn owls for orientation and localization, based primarily on auditory but also on visual stimuli. The hypothesized method involves synaptic plasticity and neuromodulation, a mostly chemical effect in which reward neurotransmitters such as dopamine or serotonin sharpen the firing sensitivity of a neuron. The robot used in the study adequately matched the behavior of barn owls. Furthermore, the close interaction between motor output and auditory feedback proved to be vital in the learning process, supporting the active sensing theories involved in many of the learning models.

Neurorobots in these studies are presented with simple mazes or patterns to learn. Some of the problems presented to the neurorobot include recognizing symbols, colors, or other patterns and executing simple actions based on the pattern. In the case of the barn owl simulation, the robot had to determine its location and direction to navigate its environment.

Action selection and value systems

Action selection studies deal with assigning negative or positive weight to an action and its outcome. Neurorobots can be, and have been, used to study simple ethical interactions, such as the classical thought experiment in which there are more people than a life raft can hold and someone must leave the boat to save the rest. However, most neurorobots used in the study of action selection contend with much simpler motivations, such as self-preservation or perpetuation of the population of robots in the study. These neurorobots are modeled after the neuromodulation of synapses to encourage circuits with positive results.

In biological systems, neurotransmitters such as dopamine or acetylcholine positively reinforce neural signals that are beneficial. One study of such interaction involved the robot Darwin VII, which used visual, auditory, and simulated taste input to "eat" conductive metal blocks. The arbitrarily chosen good blocks had a striped pattern on them, while the bad blocks had a circular shape. The taste sense was simulated by the conductivity of the blocks, and the robot received positive or negative feedback to the taste based on its level of conductivity. The researchers observed the robot to see how it learned its action-selection behaviors based on the inputs it had. Other studies have used herds of small robots that feed on batteries strewn about the room and communicate their findings to other robots.

Sensory perception

Neurorobots have also been used to study sensory perception, particularly vision. These are primarily systems that result from embedding neural models of sensory pathways in automata. This approach gives exposure to the sensory signals that occur during behavior and also enables a more realistic assessment of the degree of robustness of the neural model. It is well known that changes in the sensory signals produced by motor activity provide useful perceptual cues that are used extensively by organisms. For example, researchers have used the depth information that emerges during replication of human head and eye movements to establish robust representations of the visual scene.

Biological robots

Biological robots are not, strictly speaking, neurorobots, in that they are not neurologically inspired AI systems but actual neural tissue wired to a robot. This employs the use of cultured neural networks to study brain development or neural interactions. These typically consist of a neural culture raised on a multielectrode array (MEA), which is capable of both recording the neural activity and stimulating the tissue. In some cases, the MEA is connected to a computer that presents a simulated environment to the brain tissue, translates brain activity into actions in the simulation, and provides sensory feedback. The ability to record neural activity gives researchers a window into a brain, which they can use to learn about a number of the same issues neurorobots are used for.
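
Schematically, such a closed loop can be pictured as in the sketch below; read_spikes and stimulate stand in for vendor-specific MEA interfaces (no real hardware API is shown), and the "environment" is a trivial one-dimensional simulation.

```python
import random

# Placeholder for reading per-electrode spike counts from the cultured network.
def read_spikes(n_electrodes=8):
    return [random.randint(0, 5) for _ in range(n_electrodes)]

# Placeholder for delivering a stimulation pattern back to the electrodes.
def stimulate(pattern):
    pass

position = 0.0                      # state of a toy 1-D simulated environment
for step in range(100):
    spikes = read_spikes()
    # decode activity into an action: compare the two halves of the array
    action = 0.1 * (sum(spikes[:4]) - sum(spikes[4:]))
    position += action
    # encode sensory feedback (distance from the target at 0) as stimulation
    error = abs(position)
    stimulate([error] * 8)
```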

An area of concern with biological robots is ethics. Many questions are raised about how such experiments should be treated. The central question concerns consciousness and whether or not the cultured rat brain tissue experiences it. There are many theories about how to define consciousness.

Implications for neuroscience

Neuroscientists benefit from neurorobotics because it provides a blank slate for testing various possible methods of brain function in a controlled and testable environment. While robots are simplified versions of the systems they emulate, they are more specific, allowing more direct testing of the issue at hand. They also have the benefit of being accessible at all times, whereas it is much more difficult to monitor large portions of a brain, let alone individual neurons, while the human or animal is active.

The development of neuroscience has produced neural treatments, including pharmaceuticals and neural rehabilitation. Progress depends on an intricate understanding of the brain and how exactly it functions. The brain is difficult to study, especially in humans, because of the danger associated with cranial surgery. Neurorobots can therefore improve the range of tests and experiments that can be performed in the study of neural processes.

Galaxy formation and evolution

In cosmology, the study of galaxy formation and evolution is concerned with the processes that formed a heterogeneous universe from a homogeneous beginning, the formation of the first galaxies, the way galaxies change over time, and the processes that have generated the variety of structures observed in nearby galaxies. Galaxy formation is hypothesized to occur from structure formation theories, as a result of tiny quantum fluctuations in the aftermath of the Big Bang. The simplest model in general agreement with observed phenomena is the Lambda-CDM model—that is, clustering and merging allows galaxies to accumulate mass, determining both their shape and structure. Hydrodynamics simulation, which simulates both baryons and dark matter, is widely used to study galaxy formation and evolution.

Commonly observed properties of galaxies

Hubble tuning fork diagram of galaxy morphology

Because of the inability to conduct experiments in outer space, the only way to “test” theories and models of galaxy evolution is to compare them with observations. Explanations for how galaxies formed and evolved must be able to predict the observed properties and types of galaxies.

Edwin Hubble created an early galaxy classification scheme, now known as the Hubble tuning-fork diagram. It partitioned galaxies into ellipticals, normal spirals, barred spirals (such as the Milky Way), and irregulars. These galaxy types exhibit the following properties which can be explained by current galaxy evolution theories:

  • Many of the properties of galaxies (including the galaxy color–magnitude diagram) indicate that there are fundamentally two types of galaxies. These groups divide into blue star-forming galaxies that are more like spiral types, and red non-star forming galaxies that are more like elliptical galaxies.
  • Spiral galaxies are quite thin, dense, and rotate relatively fast, while the stars in elliptical galaxies have randomly oriented orbits.
  • The majority of giant galaxies contain a supermassive black hole in their centers, ranging in mass from millions to billions of times the mass of the Sun. The black hole mass is tied to the host galaxy bulge or spheroid mass.
  • Metallicity has a positive correlation with the luminosity of a galaxy and an even stronger correlation with galaxy mass.

Astronomers now believe that disk galaxies likely formed first, then evolved into elliptical galaxies through galaxy mergers.

Current models also predict that the majority of mass in galaxies is made up of dark matter, a substance which is not directly observable, and might not interact through any means except gravity. This observation arises because galaxies could not have formed as they have, or rotate as they are seen to, unless they contain far more mass than can be directly observed.
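
The underlying argument can be stated in one line: for a star on a circular orbit, gravity supplies the centripetal force, so

```latex
% Circular-velocity relation behind the rotation-curve argument above.
\begin{equation*}
  \frac{v^{2}}{r} = \frac{G\,M(r)}{r^{2}}
  \qquad\Longrightarrow\qquad
  v(r) = \sqrt{\frac{G\,M(r)}{r}} .
\end{equation*}
```

Observed rotation curves stay roughly flat far beyond the visible disk, which requires the enclosed mass M(r) to keep growing roughly in proportion to r, far more mass than the luminous matter accounts for.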

Formation of disk galaxies

The earliest stage in the evolution of galaxies is their formation. When a galaxy forms, it has a disk shape and is called a spiral galaxy due to spiral-like "arm" structures located on the disk. There are different theories on how these disk-like distributions of stars develop from a cloud of matter; however, at present, none of them exactly reproduces the results of observation.

Top-down theories

Olin J. Eggen, Donald Lynden-Bell, and Allan Sandage proposed a theory in 1962 that disk galaxies form through a monolithic collapse of a large gas cloud. The distribution of matter in the early universe was in clumps that consisted mostly of dark matter. These clumps interacted gravitationally, putting tidal torques on each other that acted to give them some angular momentum. As the baryonic matter cooled, it dissipated some energy and contracted toward the center. With angular momentum conserved, the matter near the center speeds up its rotation. Then, like a spinning ball of pizza dough, the matter forms into a tight disk. Once the disk cools, the gas is not gravitationally stable, so it cannot remain a singular homogeneous cloud; it breaks up, and these smaller clouds of gas form stars. Since the dark matter does not dissipate, as it only interacts gravitationally, it remains distributed outside the disk in what is known as the dark halo. Observations show that there are stars located outside the disk, which does not quite fit the "pizza dough" model; Leonard Searle and Robert Zinn instead proposed that galaxies form by the coalescence of smaller progenitors. The monolithic-collapse picture, known as a top-down formation scenario, is quite simple yet no longer widely accepted.
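The spin-up invoked above follows from conservation of angular momentum. As a minimal illustrative sketch (a circular-motion approximation, not a result from the 1962 paper), for a parcel of gas of mass m at radius r rotating with speed v:

```latex
% Spin-up of a contracting, rotating cloud (illustrative circular approximation)
L = m\,v\,r = \text{const}
\quad\Rightarrow\quad
v_{\mathrm{final}} = v_{\mathrm{initial}}\,\frac{r_{\mathrm{initial}}}{r_{\mathrm{final}}}
```

A cloud that contracts by a factor of ten in radius therefore rotates roughly ten times faster, which is why dissipative collapse with conserved angular momentum naturally flattens the gas into a rapidly rotating disk.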

Bottom-up theory

More recent theories include the clustering of dark matter halos in the bottom-up process. Instead of large gas clouds collapsing to form a galaxy in which the gas breaks up into smaller clouds, it is proposed that matter started out in these “smaller” clumps (mass on the order of globular clusters), and then many of these clumps merged to form galaxies, which then were drawn by gravitation to form galaxy clusters. This still results in disk-like distributions of baryonic matter with dark matter forming the halo for all the same reasons as in the top-down theory. Models using this sort of process predict more small galaxies than large ones, which matches observations.

Astronomers do not currently know what process stops the contraction. In fact, theories of disk galaxy formation are not successful at producing the rotation speed and size of disk galaxies. It has been suggested that the radiation from bright newly formed stars, or from an active galactic nucleus can slow the contraction of a forming disk. It has also been suggested that the dark matter halo can pull the galaxy, thus stopping disk contraction.

The Lambda-CDM model is a cosmological model that explains the formation of the universe after the Big Bang. It is a relatively simple model that predicts many properties observed in the universe, including the relative frequency of different galaxy types; however, it underestimates the number of thin disk galaxies in the universe. The reason is that these galaxy formation models predict a large number of mergers. If a disk galaxy merges with another galaxy of comparable mass (at least 15 percent of its mass), the merger will likely destroy, or at a minimum greatly disrupt, the disk, and the resulting galaxy is not expected to be a disk galaxy (see next section). While this remains an unsolved problem for astronomers, it does not necessarily mean that the Lambda-CDM model is completely wrong, but rather that it requires further refinement to accurately reproduce the population of galaxies in the universe.
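A minimal sketch of the mass-ratio criterion described above; the 15 percent threshold is taken from the text, while the function name and the choice of which masses to compare are illustrative assumptions rather than the convention of any particular simulation code.

```python
def classify_merger(m_primary, m_secondary, threshold=0.15):
    """Label a merger 'major' if the smaller galaxy has at least
    `threshold` times the mass of the larger one, else 'minor'.

    Illustrative only: real classifications may use stellar, baryonic,
    or halo mass, and thresholds vary between studies.
    """
    ratio = min(m_primary, m_secondary) / max(m_primary, m_secondary)
    return "major" if ratio >= threshold else "minor"

# Example: a 5e10 and a 1e10 solar-mass pair has a ratio of 0.2 -> 'major',
# so the resulting galaxy would not be expected to keep a thin disk.
print(classify_merger(5e10, 1e10))
```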

Galaxy mergers and the formation of elliptical galaxies

Artist's image of a firestorm of star birth deep inside the core of a young, growing elliptical galaxy
NGC 4676 (Mice Galaxies) is an example of a present merger.
The Antennae Galaxies are a pair of colliding galaxies – the bright, blue knots are young stars that have recently ignited as a result of the merger.
ESO 325-G004, a typical elliptical galaxy

Elliptical galaxies (most notably supergiant ellipticals, such as ESO 306-17) are among the largest galaxies known thus far. Their stars are on orbits that are randomly oriented within the galaxy (i.e. they are not rotating like disk galaxies). A distinguishing feature of elliptical galaxies is that, unlike in spiral galaxies, the velocities of the stars do not necessarily contribute to a flattening of the galaxy. Elliptical galaxies have central supermassive black holes, and the masses of these black holes correlate with the galaxy's mass.

Elliptical galaxies have two main stages of evolution. The first is driven by the supermassive black hole growing by accreting cooling gas. The second stage is marked by the black hole stabilizing by suppressing gas cooling, thus leaving the elliptical galaxy in a stable state. The mass of the black hole is also correlated with a property called sigma, the dispersion of the velocities of the stars in their orbits. This relationship, known as the M-sigma relation, was discovered in 2000. Elliptical galaxies mostly lack disks, although some bulges of disk galaxies resemble elliptical galaxies. Elliptical galaxies are more likely to be found in crowded regions of the universe (such as galaxy clusters).
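The M-sigma relation mentioned above is usually written as a power law in the stellar velocity dispersion; the normalization and exponent below are commonly quoted approximate values, included only for orientation and not taken from this article.

```latex
% Approximate form of the M-sigma relation (indicative values only)
M_{\mathrm{BH}} \;\sim\; 10^{8}\,M_{\odot}
\left(\frac{\sigma}{200\ \mathrm{km\,s^{-1}}}\right)^{\alpha},
\qquad \alpha \approx 4\ \text{to}\ 5
```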

Astronomers now see elliptical galaxies as some of the most evolved systems in the universe. It is widely accepted that the main driving force for the evolution of elliptical galaxies is mergers of smaller galaxies. Many galaxies in the universe are gravitationally bound to other galaxies, which means that they will never escape their mutual pull. If those colliding galaxies are of similar size, the resultant galaxy will appear similar to neither of the progenitors, but will instead be elliptical. There are many types of galaxy mergers, which do not necessarily result in elliptical galaxies, but result in a structural change. For example, a minor merger event is thought to be occurring between the Milky Way and the Magellanic Clouds.

Mergers between such large galaxies are regarded as violent, and the frictional interaction of the gas between the two galaxies can cause gravitational shock waves, which are capable of forming new stars in the new elliptical galaxy. By sequencing several images of different galactic collisions, one can observe the timeline of two spiral galaxies merging into a single elliptical galaxy.

In the Local Group, the Milky Way and the Andromeda Galaxy are gravitationally bound, and currently approaching each other at high speed. Simulations show that the Milky Way and Andromeda are on a collision course, and are expected to collide in less than five billion years. During this collision, it is expected that the Sun and the rest of the Solar System will be ejected from their current path around the Milky Way. The remnant could be a giant elliptical galaxy.
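A rough order-of-magnitude check on the quoted timescale. The separation and approach speed used below are round approximate values assumed purely for illustration; they are not figures from this article.

```python
# Naive constant-velocity estimate of the Milky Way-Andromeda approach time.
# Assumed round numbers: separation ~2.5 million light-years, closing speed ~110 km/s.
LY_KM = 9.461e12            # kilometres per light-year
SECONDS_PER_GYR = 3.156e16  # seconds per billion years

separation_km = 2.5e6 * LY_KM
closing_speed_km_s = 110.0

t_naive_gyr = separation_km / closing_speed_km_s / SECONDS_PER_GYR
print(f"Naive crossing time: {t_naive_gyr:.1f} Gyr")  # roughly 7 Gyr
```

Because gravity accelerates the approach as the two galaxies close in, full simulations predict a collision well before this naive figure, consistent with the less-than-five-billion-year estimate quoted above.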

Galaxy quenching

Star formation in what are now "dead" galaxies sputtered out billions of years ago

One observation that must be explained by a successful theory of galaxy evolution is the existence of two different populations of galaxies on the galaxy color-magnitude diagram. Most galaxies tend to fall into two separate locations on this diagram: a "red sequence" and a "blue cloud". Red sequence galaxies are generally non-star-forming elliptical galaxies with little gas and dust, while blue cloud galaxies tend to be dusty star-forming spiral galaxies.

As described in previous sections, galaxies tend to evolve from spiral to elliptical structure via mergers. However, the current rate of galaxy mergers does not explain how all galaxies move from the "blue cloud" to the "red sequence". It also does not explain how star formation ceases in galaxies. Theories of galaxy evolution must therefore be able to explain how star formation turns off in galaxies. This phenomenon is called galaxy "quenching".

Stars form out of cold gas (see also the Kennicutt–Schmidt law), so a galaxy is quenched when it has no more cold gas. However, it is thought that quenching occurs relatively quickly (within 1 billion years), which is much shorter than the time it would take for a galaxy to simply use up its reservoir of cold gas. Galaxy evolution models explain this by hypothesizing other physical mechanisms that remove or shut off the supply of cold gas in a galaxy. These mechanisms can be broadly classified into two categories: (1) preventive feedback mechanisms that stop cold gas from entering a galaxy or stop it from producing stars, and (2) ejective feedback mechanisms that remove gas so that it cannot form stars.
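For context, the Kennicutt-Schmidt law referred to above relates the surface density of star formation to that of the gas, and dividing a galaxy's cold-gas mass by its star formation rate gives a depletion time; the exponent shown is the commonly quoted approximate value, not a figure from this article.

```latex
% Kennicutt-Schmidt law (approximate exponent) and the gas depletion time
\Sigma_{\mathrm{SFR}} \;\propto\; \Sigma_{\mathrm{gas}}^{\,n},
\qquad n \approx 1.4,
\qquad\qquad
t_{\mathrm{dep}} \;=\; \frac{M_{\mathrm{gas}}}{\dot{M}_{\star}}
```

Quenching is puzzling precisely because the observed transition (within roughly a billion years) is much shorter than the depletion time estimated this way, which is why the preventive and ejective mechanisms discussed below are invoked.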

One theorized preventive mechanism called “strangulation” keeps cold gas from entering the galaxy. Strangulation is likely the main mechanism for quenching star formation in nearby low-mass galaxies. The exact physical explanation for strangulation is still unknown, but it may have to do with a galaxy's interactions with other galaxies. As a galaxy falls into a galaxy cluster, gravitational interactions with other galaxies can strangle it by preventing it from accreting more gas. For galaxies with massive dark matter halos, another preventive mechanism called “virial shock heating” may also prevent gas from becoming cool enough to form stars.

Ejective processes, which expel cold gas from galaxies, may explain how more massive galaxies are quenched. One ejective mechanism is caused by supermassive black holes found in the centers of galaxies. Simulations have shown that gas accreting onto supermassive black holes in galactic centers produces high-energy jets; the released energy can expel enough cold gas to quench star formation.

Our own Milky Way and the nearby Andromeda Galaxy currently appear to be undergoing the quenching transition from star-forming blue galaxies to passive red galaxies.

Hydrodynamics simulation

Dark energy and dark matter account for most of the Universe's energy, so it is valid to ignore baryons when simulating large-scale structure formation (using methods such as N-body simulation). However, since the visible components of galaxies consist of baryons, it is crucial to include baryons in the simulation to study the detailed structures of galaxies. At first, the baryon component consists mostly of hydrogen and helium gas, which later transforms into stars during the formation of structures. By comparing simulations with observations, the models they use can be tested and the understanding of different stages of galaxy formation can be improved.
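As a toy illustration of the N-body approach mentioned above (collisionless particles interacting only through gravity), the sketch below advances a handful of particles with a direct-summation force calculation and a leapfrog integrator. It is a minimal pedagogical example under simple assumptions, not a cosmological code: it omits cosmic expansion, periodic boundaries, and the tree or particle-mesh force solvers used in practice.

```python
import numpy as np

G = 1.0          # gravitational constant in code units
SOFTENING = 0.05 # force softening to avoid singular close encounters

def accelerations(pos, mass):
    """Direct-summation gravitational acceleration on every particle."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                        # vectors to all other particles
        r2 = (d**2).sum(axis=1) + SOFTENING**2  # softened squared distances
        r2[i] = np.inf                          # exclude self-force
        acc[i] = G * (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog integration of the N-body system."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc   # half kick
        pos += dt * vel         # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc   # half kick
    return pos, vel

# Example: 100 equal-mass particles drawn from a Gaussian blob
rng = np.random.default_rng(42)
pos = rng.normal(size=(100, 3))
vel = np.zeros((100, 3))
mass = np.full(100, 1.0 / 100)
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=10)
```

Production codes replace the O(N^2) force loop with tree or particle-mesh methods and add the baryonic physics described in the following subsections.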

Euler equations

In cosmological simulations, astrophysical gases are typically modeled as inviscid ideal gases that follow the Euler equations, which can be expressed mainly in three different ways: Lagrangian, Eulerian, or arbitrary Lagrange-Eulerian methods. Different methods give specific forms of hydrodynamical equations. When using the Lagrangian approach to specify the field, it is assumed that the observer tracks a specific fluid parcel with its unique characteristics during its movement through space and time. In contrast, the Eulerian approach emphasizes particular locations in space that the fluid passes through as time progresses.
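For reference, one common Eulerian (conservative) form of these equations, written here for an inviscid ideal gas in an external gravitational potential Φ and with cooling and other source terms omitted, is:

```latex
% Euler equations in conservative form for an inviscid ideal gas
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0
\qquad
\frac{\partial (\rho\,\mathbf{v})}{\partial t}
  + \nabla\cdot\!\left(\rho\,\mathbf{v}\otimes\mathbf{v} + P\,\mathbb{I}\right)
  = -\rho\,\nabla\Phi
\qquad
\frac{\partial E}{\partial t} + \nabla\cdot\!\big[(E + P)\,\mathbf{v}\big]
  = -\rho\,\mathbf{v}\cdot\nabla\Phi
```

Here E = ρu + ρ|v|²/2 is the total energy density, and the system is closed by the ideal-gas equation of state P = (γ - 1)ρu. Lagrangian codes (for example, smoothed-particle hydrodynamics) and arbitrary Lagrange-Eulerian codes solve the same physics in different coordinate frames.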

Baryonic physics

To shape the population of galaxies, the hydrodynamical equations must be supplemented by a variety of astrophysical processes mainly governed by baryonic physics.

Gas cooling

Processes such as collisional excitation, ionization, and inverse Compton scattering can cause the internal energy of the gas to be dissipated. In simulations, cooling processes are realized by coupling cooling functions to energy equations. Besides the primordial cooling, cooling by heavy elements (metals) dominates at high temperatures, while at lower temperatures fine-structure and molecular cooling also need to be considered to simulate the cold phase of the interstellar medium.
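A minimal sketch of how a cooling function can be coupled to the gas energy equation. The power-law form of Λ(T) below is made up purely for illustration; real codes tabulate cooling rates as functions of temperature, density, metallicity, and the radiation field.

```python
import numpy as np

K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def lambda_cool(T):
    """Toy power-law cooling function [erg cm^3 s^-1]; NOT a physical fit."""
    return 1e-23 * (T / 1e6) ** 0.5

def cool_gas(T, n, dt, substeps=1000):
    """Integrate dT/dt = -(2/3) n Lambda(T) / k_B over a timestep dt [s].

    T is the temperature [K] and n the number density [cm^-3]; the relation
    follows from the thermal energy density u = (3/2) n k_B T and the
    volumetric cooling rate du/dt = -n^2 Lambda(T). Explicit sub-cycling
    keeps this toy integration stable.
    """
    h = dt / substeps
    for _ in range(substeps):
        T = max(T - h * (2.0 / 3.0) * n * lambda_cool(T) / K_B, 1e4)  # 1e4 K floor
    return T

# Example: hot, dilute halo gas (T = 1e6 K, n = 1e-3 cm^-3) cooled for ~30 Myr
print(cool_gas(1.0e6, 1.0e-3, dt=1e15))
```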

Interstellar medium

The complex multi-phase structure of the interstellar medium, including relativistic particles and magnetic fields, makes it difficult to simulate. In particular, modeling the cold phase poses technical difficulties due to the short timescales associated with the dense gas. In early simulations, the dense gas phase was frequently not modeled directly but rather characterized by an effective polytropic equation of state. More recent simulations use a multimodal distribution to describe the gas density and temperature distributions, which directly models the multi-phase structure. However, more detailed physical processes will need to be considered in future simulations, since the structure of the interstellar medium directly affects star formation.
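The effective equation-of-state treatment mentioned above replaces the unresolved cold, dense phase with a single pressure-density relation of roughly polytropic form; the symbols below are generic, not the parametrization of any specific code.

```latex
% Effective polytropic equation of state for unresolved dense gas
P_{\mathrm{eff}} = K\,\rho^{\gamma_{\mathrm{eff}}},
\qquad \gamma_{\mathrm{eff}} > 1
```

The exponent is typically chosen stiff enough that the unresolved dense gas does not fragment artificially at the resolution limit.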

Star formation

As cold and dense gas accumulates, it undergoes gravitational collapse and eventually forms stars. To simulate this process, a portion of the gas is transformed into collisionless star particles, which represent coeval, single-metallicity stellar populations described by an underlying initial mass function. Observations suggest that the star formation efficiency in molecular gas is almost universal, with around 1% of the gas being converted into stars per free-fall time. In simulations, the gas is typically converted into star particles using a probabilistic sampling scheme based on the calculated star formation rate. Some simulations seek an alternative to the probabilistic sampling scheme and aim to better capture the clustered nature of star formation by treating star clusters as the fundamental unit of star formation, permitting star particles to grow by accreting material from the surrounding medium. In addition, modern models of galaxy formation track the evolution of these stars and the mass they return to the gas component, leading to an enrichment of the gas with metals.
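A minimal sketch of the probabilistic sampling described above, assuming the roughly one percent efficiency per free-fall time quoted in the text; the density value, the conversion of a whole gas cell at once, and the function names are illustrative simplifications rather than the scheme of any particular code.

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def free_fall_time(rho):
    """Free-fall time of gas with mass density rho [g/cm^3]."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def maybe_form_star(m_gas, rho, dt, eps_ff=0.01, rng=np.random.default_rng()):
    """Stochastically convert a gas cell into a star particle.

    The expected mass converted in a timestep dt is eps_ff * m_gas * dt / t_ff;
    here the whole cell is turned into a star particle with the matching
    probability, a common (simplified) sampling choice.
    """
    t_ff = free_fall_time(rho)
    p = 1.0 - np.exp(-eps_ff * dt / t_ff)   # probability of spawning a star particle
    return ("star", m_gas) if rng.random() < p else ("gas", m_gas)

# Example: a dense gas cell (rho ~ 1e-22 g/cm^3) over a ~1 Myr timestep
print(maybe_form_star(m_gas=1e5, rho=1e-22, dt=3.15e13))
```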

Stellar feedback

Stars influence their surrounding gas by injecting energy and momentum, creating a feedback loop that regulates the process of star formation. To effectively control star formation, stellar feedback must generate galactic-scale outflows that expel gas from galaxies. Various methods are utilized to couple energy and momentum, particularly through supernova explosions, to the surrounding gas. These methods differ in how the energy is deposited, either thermally or kinetically. In the former case, excessive radiative gas cooling must be avoided: cooling is expected in dense and cold gas, but it cannot be reliably modeled in cosmological simulations due to low resolution, which leads to artificial and excessive cooling of the gas, causing the supernova feedback energy to be lost via radiation and significantly reducing its effectiveness. In the latter case, kinetic energy cannot be radiated away until it thermalizes, but hydrodynamically decoupled wind particles that inject momentum non-locally into the gas surrounding active star-forming regions may still be necessary to achieve large-scale galactic outflows. Recent models explicitly model stellar feedback: they not only incorporate supernova feedback but also consider other feedback channels such as energy and momentum injection from stellar winds, photoionization, and radiation pressure resulting from radiation emitted by young, massive stars. During the Cosmic Dawn, galaxy formation occurred in short bursts of 5 to 30 Myr due to stellar feedback.
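A schematic of the thermal versus kinetic deposition choice discussed above, assuming a canonical 10^51 erg per supernova shared equally among neighbouring gas cells; the data layout, neighbour selection, and kick direction are illustrative placeholders rather than the scheme of any particular simulation.

```python
import numpy as np

E_SN = 1.0e51  # canonical supernova energy [erg]

def deposit_feedback(neighbors, n_sn, mode="thermal"):
    """Distribute supernova energy among neighbouring gas cells.

    neighbors: list of dicts with 'mass' [g], 'u' (specific internal
    energy, erg/g) and 'vel' (3-vector, cm/s). Purely schematic.
    """
    e_per_cell = n_sn * E_SN / len(neighbors)
    for cell in neighbors:
        if mode == "thermal":
            # Dump the energy as heat; prone to being radiated away
            # ("overcooling") if the gas is dense and poorly resolved.
            cell["u"] += e_per_cell / cell["mass"]
        else:
            # Kinetic coupling: give the cell a velocity kick carrying the
            # same energy (the direction here is a placeholder unit vector).
            dv = np.sqrt(2.0 * e_per_cell / cell["mass"])
            cell["vel"] = cell["vel"] + dv * np.array([1.0, 0.0, 0.0])
    return neighbors

# Example: three identical cells receiving the energy of 10 supernovae
cells = [{"mass": 2e35, "u": 1e12, "vel": np.zeros(3)} for _ in range(3)]
deposit_feedback(cells, n_sn=10, mode="kinetic")
```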

Supermassive black holes

Supermassive black holes are also included in simulations, numerically seeded in dark matter haloes, because they are observed in many galaxies and their mass affects the mass density distribution. Their mass accretion rate is frequently modeled by the Bondi-Hoyle model.
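The Bondi-Hoyle accretion rate referred to above is commonly written as follows, with ρ and c_s the density and sound speed of the gas surrounding the black hole and v the black hole's velocity relative to that gas:

```latex
% Bondi-Hoyle(-Lyttleton) accretion rate
\dot{M}_{\mathrm{BH}} \;=\;
\frac{4\pi\,G^{2}\,M_{\mathrm{BH}}^{2}\,\rho}
     {\left(c_{s}^{2} + v^{2}\right)^{3/2}}
```

In practice, simulation codes often rescale this estimate and cap the accretion at the Eddington limit.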

Active galactic nuclei

Active galactic nuclei (AGN) affect the observational signatures of supermassive black holes and also regulate black hole growth and star formation. In simulations, AGN feedback is usually classified into two modes, namely quasar and radio mode. Quasar-mode feedback is linked to the radiatively efficient mode of black hole growth and is frequently incorporated through energy or momentum injection. The regulation of star formation in massive galaxies is believed to be significantly influenced by radio-mode feedback, which occurs due to the presence of highly collimated jets of relativistic particles. These jets are typically linked to X-ray bubbles that possess enough energy to counterbalance cooling losses.

Magnetic fields

The ideal magnetohydrodynamics approach is commonly utilized in cosmological simulations since it provides a good approximation for cosmological magnetic fields. The effect of magnetic fields on the dynamics of gas is generally negligible on large cosmological scales. Nevertheless, magnetic fields are a critical component of the interstellar medium since they provide pressure support against gravity and affect the propagation of cosmic rays.
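In the ideal-MHD limit used in these simulations, the magnetic field is evolved with the induction equation, subject to the divergence-free constraint:

```latex
% Ideal-MHD induction equation and solenoidal constraint
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left(\mathbf{v} \times \mathbf{B}\right),
\qquad
\nabla\cdot\mathbf{B} = 0
```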

Cosmic rays

Cosmic rays play a significant role in the interstellar medium by contributing to its pressure, serving as a crucial heating channel, and potentially driving galactic gas outflows. Because the propagation of cosmic rays is strongly affected by magnetic fields, equations describing the cosmic-ray energy and flux are coupled to the magnetohydrodynamics equations in simulations.

Radiation hydrodynamics

Radiation hydrodynamics simulations are computational methods used to study the interaction of radiation with matter. In astrophysical contexts, radiation hydrodynamics is used to study the epoch of reionization, when the Universe was at high redshift. There are several numerical methods used for radiation hydrodynamics simulations, including ray-tracing, Monte Carlo, and moment-based methods. Ray-tracing involves tracing the paths of individual photons through the simulation and computing their interactions with matter at each step. This method is computationally expensive but can produce very accurate results.
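A toy sketch of the ray-tracing idea described above: march a ray through a one-dimensional column of cells, accumulate optical depth cell by cell, and attenuate the photon flux accordingly. The grid, densities, and single-frequency treatment are illustrative assumptions; the default cross-section is roughly the hydrogen photoionization cross-section at threshold.

```python
import numpy as np

def trace_ray(n_hi, dx, sigma=6.3e-18):
    """Attenuate a ray through cells of neutral-hydrogen density n_hi [cm^-3].

    dx    : cell size [cm]
    sigma : absorption cross-section [cm^2]; the default is roughly the
            hydrogen photoionization cross-section at 13.6 eV.
    Returns the transmitted fraction of the flux after each cell.
    """
    dtau = sigma * n_hi * dx   # optical depth contributed by each cell
    tau = np.cumsum(dtau)      # accumulated optical depth along the ray
    return np.exp(-tau)        # transmitted fraction exp(-tau)

# Example: a mostly ionized column with one dense neutral clump
n_hi = np.array([1e-6, 1e-6, 1e-3, 1e-6, 1e-6])   # cm^-3
print(trace_ray(n_hi, dx=3.086e21))                # dx = 1 kpc in cm
```

Full radiation-hydrodynamics codes trace many such rays (or use Monte Carlo photon packets or moment methods) and couple the absorbed energy back into the ionization and energy equations of the gas.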

Relationship between science and religion

From Wikipedia, the free encyclopedia