
Wednesday, December 17, 2025

DNA computing

From Wikipedia, the free encyclopedia
The biocompatible computing device: Deoxyribonucleic acid (DNA)

DNA computing is an emerging branch of unconventional computing which uses DNA, biochemistry, and molecular biology hardware instead of traditional electronic computing. Research and development in this area concerns theory, experiments, and applications of DNA computing. Although the field originally started with the demonstration of a computing application by Len Adleman in 1994, it has since expanded into several other avenues, such as the development of storage technologies, nanoscale imaging modalities, and synthetic controllers and reaction networks.

History

Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation by solving a seven-node instance of the Hamiltonian path problem. Since the initial Adleman experiments, advances have been made, and various Turing machines have been proven to be constructible.

Since then the field has expanded into several avenues. In 1995, the idea of DNA-based memory was proposed by Eric Baum, who conjectured that a vast amount of data can be stored in a tiny amount of DNA because of its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology, although the in vitro demonstrations came almost a decade later.

The field of DNA computing can be categorized as a sub-field of the broader field of DNA nanoscience, started by Ned Seeman about a decade before Len Adleman's demonstration. Seeman's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. It instead morphed into the field of structural DNA self-assembly, which as of 2020 is extremely sophisticated. Self-assembled structures ranging from a few nanometers up to several tens of micrometers in size had been demonstrated by 2018.

In 1994, Prof. Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. While Adleman's demonstration showed the possibility of DNA-based computers, the DNA design was trivial and did not scale: as the number of nodes in a graph grows, the number of DNA components required in Adleman's implementation grows exponentially. Computer scientists and biochemists therefore began exploring tile assembly, where the goal is to use a small set of DNA strands as tiles that perform arbitrary computations as the assembly grows. Other avenues explored theoretically in the late 1990s include DNA-based security and cryptography, the computational capacity of DNA systems, DNA memories and disks, and DNA-based robotics.

Before 2002, Lila Kari showed that the DNA operations performed by genetic recombination in some organisms are Turing complete.

In 2003, John Reif's group first demonstrated the idea of a DNA-based walker that traversed along a track similar to a line follower robot. They used molecular biology as a source of energy for the walker. Since this first demonstration, a wide variety of DNA-based walkers have been demonstrated.

Applications, examples, and recent developments

In 1994 Leonard Adleman presented the first prototype of a DNA computer. The TT-100 was a test tube filled with 100 microliters of a DNA solution. With it, he solved an instance of the directed Hamiltonian path problem, framed notationally as a small "travelling salesman problem". For this purpose, different DNA fragments were created, each representing a city that had to be visited, and each capable of linking with the other fragments. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments formed larger ones representing the different travel routes. Through a chemical reaction, the DNA fragments representing the longer routes were eliminated. What remained was the solution to the problem, although the experiment as a whole lasted a week. However, current technical limitations prevent the evaluation of the results, so the experiment is not suitable for practical application; it is nevertheless a proof of concept.
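
The selection logic behind Adleman's procedure can be sketched in a few lines of Python. The graph below is a made-up example (not Adleman's actual seven-city instance), and the brute-force enumeration stands in for the massively parallel random ligation that happens in the test tube.

# Minimal sketch (illustrative graph, not Adleman's instance) of the
# generate-and-filter logic behind the DNA experiment.
import itertools

edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}
cities = ["A", "B", "C", "D"]
start, end = "A", "D"

def is_hamiltonian_path(path):
    # The same conditions Adleman selected for chemically: correct endpoints,
    # every city visited exactly once, consecutive cities joined by an edge.
    return (path[0] == start and path[-1] == end
            and len(set(path)) == len(cities)
            and all((a, b) in edges for a, b in zip(path, path[1:])))

# In the test tube all orderings form at once; here we simply enumerate them.
solutions = [p for p in itertools.permutations(cities) if is_hamiltonian_path(p)]
print(solutions)   # -> [('A', 'B', 'C', 'D')]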

Combinatorial problems

The first results for these problems were obtained by Leonard Adleman.

Tic-tac-toe game

In 2002, J. Macdonald, D. Stefanović and M. Stojanović created a DNA computer able to play tic-tac-toe against a human player. The computer consists of nine bins corresponding to the nine squares of the game. Each bin contains a substrate and various combinations of DNA enzymes. The substrate itself is composed of a DNA strand onto which a fluorescent chemical group was grafted at one end and a quencher group at the other, so fluorescence is only active if the molecules of the substrate are cut in half. The DNA enzymes simulate logical functions. For example, such an enzyme will unfold only if two specific types of DNA strand are introduced, reproducing the logic function AND.

By default, the computer is considered to have played first in the central square. The human player starts with eight different types of DNA strands corresponding to the eight remaining squares that may be played. To play square number i, the human player pours into all bins the strands corresponding to input #i. These strands bind to certain DNA enzymes present in the bins, and in one of the bins this activates an enzyme, which binds to the substrate and cuts it. The corresponding bin becomes fluorescent, indicating which square is being played by the DNA computer. The DNA enzymes are divided among the bins in such a way as to ensure that the best the human player can achieve is a draw, as in real tic-tac-toe.
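
A rough way to picture the bin design is to treat each bin as a set of logic gates over the inputs poured in so far. The gate table below is purely hypothetical (the real MAYA gate assignments are more involved); it only illustrates how "pour input i into every bin, see which bin lights up" acts as a computation.

# Toy sketch of the bin/gate idea; gate assignments are made up, not MAYA's.
# A gate "cleaves its substrate" (the bin fluoresces) when all of its required
# inputs have been poured in and none of its forbidden inputs have.
bins = {
    1: [({8}, set())],    # hypothetical: bin 1 responds once input 8 is present
    3: [({2}, {8})],      # hypothetical: bin 3 responds to input 2 unless 8 was seen
    # a real design assigns gates to all eight outer bins
}
poured = set()

def play_human(i):
    poured.add(i)         # the human pours input strand i into every bin
    lit = [b for b, gates in bins.items()
           if any(req <= poured and not (forb & poured) for req, forb in gates)]
    return lit            # fluorescent bins = the computer's reply

print(play_human(8))      # -> [1] under the toy gate table above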

Neural network based computing

Kevin Cherry and Lulu Qian at Caltech developed a DNA-based artificial neural network that can recognize 100-bit hand-written digits. They achieved this by training the network in advance on a conventional computer, with the resulting weights represented by varying concentrations of weight molecules, which are later added to the test tube that holds the input DNA strands.
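
The idea of "weights as concentrations" can be sketched as an ordinary winner-take-all classifier. The patterns and numbers below are illustrative assumptions, not Cherry and Qian's trained weights; the point is only that the class whose weight molecules release the most signal wins.

# Minimal winner-take-all sketch; stored patterns and noise level are made up.
import random

N_BITS = 100
random.seed(0)
# Hypothetical stored patterns for two classes ("6" and "7"), as 100-bit vectors.
patterns = {"6": [random.randint(0, 1) for _ in range(N_BITS)],
            "7": [random.randint(0, 1) for _ in range(N_BITS)]}

def classify(input_bits):
    # Weighted sum ~ total concentration of released signal strands per class.
    scores = {label: sum(w * x for w, x in zip(weights, input_bits))
              for label, weights in patterns.items()}
    return max(scores, key=scores.get)    # winner-take-all step

noisy_six = [b if random.random() > 0.1 else 1 - b for b in patterns["6"]]
print(classify(noisy_six))                # usually "6" despite the flipped bits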

Improved speed with localized (cache-like) computing

One of the challenges of DNA computing is its slow speed. While DNA is a biologically compatible substrate, i.e., it can be used at places where silicon technology cannot, its computational speed is still very slow. For example, the square-root circuit used as a benchmark in the field takes over 100 hours to complete. While newer approaches with external enzyme sources have reported faster and more compact circuits, Chatterjee et al. demonstrated a way to speed up computation through localized DNA circuits, a concept being further explored by other groups. This idea, originally proposed in the field of computer architecture, has been adopted in this field as well. In computer architecture, it is well known that keeping instructions that execute in sequence loaded in the cache leads to fast performance, a consequence of the principle of locality: with instructions in fast cache memory, there is no need to swap them in and out of main memory, which is slow. Similarly, in localized DNA computing, the DNA strands responsible for computation are fixed on a breadboard-like substrate, ensuring physical proximity of the computing gates. Such localized DNA computing techniques have been shown to potentially reduce the computation time by orders of magnitude.

Renewable (or reversible) DNA computing

Subsequent research on DNA computing has produced reversible DNA computing, bringing the technology one step closer to the silicon-based computing used in (for example) PCs. In particular, John Reif and his group at Duke University have proposed two different techniques to reuse the computing DNA complexes. The first design uses dsDNA gates, while the second design uses DNA hairpin complexes. While both designs face some issues (such as reaction leaks), this appears to represent a significant breakthrough in the field of DNA computing. Some other groups have also attempted to address the gate reusability problem.

Using strand displacement reactions (SDRs), the paper "Synthesis Strategy of Reversible Circuits on DNA Computers" presents proposals for implementing reversible gates and circuits on DNA computers by combining DNA computing and reversible computing techniques. The paper also proposes a universal reversible gate library (URGL) for synthesizing n-bit reversible circuits on DNA computers, with an average length and cost of the constructed circuits better than in previous methods.
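
At the logic level, reversibility simply means that a gate's input-output map is a bijection, so the computation can be run backwards. The sketch below checks this for a Toffoli (CCNOT) gate, a standard member of reversible gate libraries; it says nothing about the DNA-level implementation in the cited paper.

# Minimal check that the Toffoli (CCNOT) gate is reversible.
from itertools import product

def toffoli(a, b, c):
    # Target bit c flips only when both control bits are 1.
    return a, b, c ^ (a & b)

outputs = [toffoli(*bits) for bits in product((0, 1), repeat=3)]
assert len(set(outputs)) == 8   # bijective: no two inputs share an output
# The gate is its own inverse: applying it twice restores the input.
assert all(toffoli(*toffoli(*bits)) == bits for bits in product((0, 1), repeat=3))
print("Toffoli is reversible")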

Methods

There are multiple methods for building a computing device based on DNA, each with its own advantages and disadvantages. Most of these build the basic logic gates (AND, OR, NOT) associated with digital logic from a DNA basis. Some of the different bases include DNAzymes, deoxyoligonucleotides, enzymes, and toehold exchange.

Strand displacement mechanisms

The most fundamental operation in DNA computing and molecular programming is the strand displacement mechanism. Currently, there are two ways to perform strand displacement: toehold-mediated strand displacement (TMSD) and toehold exchange (TE).

Toehold exchange

Besides simple strand displacement schemes, DNA computers have also been constructed using the concept of toehold exchange. In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA.
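
The behaviour of such gates is often reasoned about at the "domain level", abstracting away the actual base sequences. The toy cascade below (with made-up domain names t1, a, t2, b, t3, c) only illustrates the core idea: an input strand that matches a gate's exposed toehold and branch domain releases that gate's output strand, which can then trigger a downstream gate.

# Domain-level abstraction of a strand displacement cascade (made-up domains;
# not a physical simulation of hybridization kinetics).
gates = [
    {"toehold": "t1", "branch": "a", "output": ("t2", "b")},   # hypothetical gate 1
    {"toehold": "t2", "branch": "b", "output": ("t3", "c")},   # hypothetical gate 2
]

def react(free_strands):
    # Repeatedly fire any gate whose toehold and branch match a free strand.
    free = set(free_strands)
    fired = True
    while fired:
        fired = False
        for g in gates:
            if (g["toehold"], g["branch"]) in free and g["output"] not in free:
                free.add(g["output"])   # displaced output strand is released
                fired = True
    return free

print(react({("t1", "a")}))   # the input cascades through both gates, releasing ("t3", "c")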

Chemical reaction networks (CRNs)

The full stack for DNA computing looks very similar to a traditional computer architecture. At the highest level, a C-like general-purpose programming language is expressed using a set of chemical reaction networks (CRNs). This intermediate representation is translated to a domain-level DNA design and then implemented using a set of DNA strands. In 2010, Erik Winfree's group showed that DNA can be used as a substrate to implement arbitrary chemical reactions. This opened the way to the design and synthesis of biochemical controllers, since the expressive power of CRNs is equivalent to that of a Turing machine. Such controllers can potentially be used in vivo for applications such as preventing hormonal imbalance.
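
A tiny example of "CRNs as programs": run to completion on discrete molecule counts, the single reaction A + B → C computes the minimum of the two input counts. The sketch below ignores reaction rates and the DNA-level translation; it tracks stoichiometry only.

# Toy illustration of a CRN computing min(A, B) via the reaction A + B -> C.
def run_crn(a, b):
    c = 0
    while a > 0 and b > 0:        # the reaction can fire only while both reactants remain
        a, b, c = a - 1, b - 1, c + 1
    return c

print(run_crn(7, 4))   # -> 4, i.e. min(7, 4)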

DNAzymes

Catalytic DNA (deoxyribozyme or DNAzyme) catalyzes a reaction when interacting with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to one-, two-, and three-input gates, with no current implementation for evaluating statements in series.

The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single molecule limit. The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then "used", and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added.

Two commonly used DNAzymes are named E6 and 8-17. These are popular because they allow cleaving of a substrate at any arbitrary location. Stojanovic and MacDonald have used E6 DNAzymes to build the MAYA I and MAYA II machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme. While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need for a metal cofactor to function, such as Zn2+ or Mn2+, and are thus not useful in vivo.

A design called a stem loop, consisting of a single strand of DNA which has a loop at one end, is a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates. These logic gates have been used to create the computers MAYA I and MAYA II, which can play tic-tac-toe to some extent.

Enzymes

Enzyme-based DNA computers are usually of the form of a simple Turing machine; there is analogous hardware, in the form of an enzyme, and software, in the form of DNA.

Benenson, Shapiro and colleagues have demonstrated a DNA computer using the FokI enzyme and expanded on their work by going on to show automata that diagnose and react to prostate cancer: underexpression of the genes PPAP2B and GSTP1 and overexpression of PIM1 and HPN. Their automata evaluated the expression of each gene, one gene at a time, and on a positive diagnosis released a single-stranded DNA molecule (ssDNA) that is an antisense for MDM2. MDM2 is a repressor of the tumor suppressor protein p53. On a negative diagnosis it was decided to release a suppressor of the positive-diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present. The requirement for the FokI enzyme limits application in vivo, at least for use in "cells of higher organisms". It should also be pointed out that the 'software' molecules can be reused in this case.
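
The diagnostic rule itself is easy to state in code. The sketch below models only the logic, with made-up expression thresholds; it does not model the FokI-driven molecular automaton that actually evaluates it.

# Logic-only sketch of the diagnosis rule (thresholds are illustrative assumptions;
# the molecular machinery of FokI, transition molecules and antisense release is not modeled).
LOW, HIGH = 0.5, 2.0    # hypothetical under-/over-expression thresholds (fold change)

def diagnose(expression):
    # Evaluate one gene at a time, as the automaton does; any failed check aborts.
    checks = [expression["PPAP2B"] < LOW,    # under-expressed
              expression["GSTP1"] < LOW,     # under-expressed
              expression["PIM1"] > HIGH,     # over-expressed
              expression["HPN"] > HIGH]      # over-expressed
    return "release MDM2 antisense ssDNA" if all(checks) else "release suppressor"

sample = {"PPAP2B": 0.2, "GSTP1": 0.3, "PIM1": 3.1, "HPN": 2.8}
print(diagnose(sample))   # -> release MDM2 antisense ssDNA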

Algorithmic self-assembly

DNA arrays that display a representation of the Sierpinski gasket on their surfaces (image from Rothemund et al., 2004).

DNA nanotechnology has been applied to the related field of DNA computing. DNA tiles can be designed to contain multiple sticky ends with sequences chosen so that they act as Wang tiles. A DX array has been demonstrated whose assembly encodes an XOR operation; this allows the DNA array to implement a cellular automaton which generates a fractal called the Sierpinski gasket. This shows that computation can be incorporated into the assembly of DNA arrays, increasing its scope beyond simple periodic arrays.
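
The computation encoded in the tile array can be sketched as a one-dimensional XOR cellular automaton: each new cell is the XOR of its two neighbours in the previous row, which paints a Sierpinski gasket. The width, number of rows, and seed position below are arbitrary choices.

# XOR cellular automaton producing a Sierpinski gasket, the same pattern the
# DX tile assembly encodes (parameters are arbitrary).
WIDTH, ROWS = 33, 16
row = [0] * WIDTH
row[WIDTH // 2] = 1                     # single seed "tile" in the first row
for _ in range(ROWS):
    print("".join("#" if c else "." for c in row))
    row = [(row[i - 1] ^ row[i + 1]) if 0 < i < WIDTH - 1 else 0
           for i in range(WIDTH)]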

Capabilities

DNA computing is a form of parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once. For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer.

DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation. For example, if the space required for the solution of a problem grows exponentially with the size of the problem (EXPSPACE problems) on von Neumann machines, it still grows exponentially with the size of the problem on DNA machines. For very large EXPSPACE problems, the amount of DNA required is too large to be practical.
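
A back-of-the-envelope calculation shows why. Assuming, purely for illustration, one strand per candidate ordering of n cities, 20 bases per city, and roughly 330 g/mol per nucleotide, the mass of DNA needed for a brute-force search explodes with n:

# Illustrative estimate of DNA mass for brute-force path enumeration
# (one strand per ordering, 20 bases per city, ~330 g/mol per nucleotide).
from math import factorial

AVOGADRO = 6.022e23
G_PER_NT = 330.0    # approximate molar mass of one DNA nucleotide, g/mol

def dna_mass_grams(n_cities, bases_per_city=20):
    candidates = factorial(n_cities)                              # one strand per ordering
    grams_per_strand = bases_per_city * n_cities * G_PER_NT / AVOGADRO
    return candidates * grams_per_strand

print(f"{dna_mass_grams(15):.1e} g")   # ~2e-07 g: a trivial amount of DNA
print(f"{dna_mass_grams(40):.1e} g")   # ~4e+29 g, tens of Earth masses: infeasible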

Alternative technologies

A partnership between IBM and Caltech was established in 2009, aimed at the production of "DNA chips". A Caltech group is working on the manufacturing of these nucleic-acid-based integrated circuits. One of these chips can compute whole square roots. A compiler has been written in Perl.

Pros and cons

The slow processing speed of a DNA computer (the response time is measured in minutes, hours or days, rather than milliseconds) is compensated for by its potential to carry out a huge number of parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one, because millions or billions of molecules interact with each other simultaneously. However, it is much harder to analyze the answers given by a DNA computer than those given by a digital one.

Tuesday, December 16, 2025

Prebiotic atmosphere

From Wikipedia, the free encyclopedia
The pale orange dot, an artist's impression of the early Earth, which is believed to have appeared orange through its hazy, methane-rich prebiotic second atmosphere, somewhat comparable to Titan's atmosphere

The prebiotic atmosphere is the second atmosphere present on Earth before today's biotic, oxygen-rich third atmosphere, and after the first atmosphere (which was mainly water vapor and simple hydrides) of Earth's formation. The formation of the Earth, roughly 4.5 billion years ago, involved multiple collisions and coalescence of planetary embryos. This was followed by a period of over 100 million years during which a magma ocean was present on Earth, the atmosphere was mainly steam, and surface temperatures reached up to 8,000 K (14,000 °F). Earth's surface then cooled and the atmosphere stabilized, establishing the prebiotic atmosphere. The environmental conditions during this time period were quite different from today: the Sun was about 30% dimmer overall yet brighter at ultraviolet and x-ray wavelengths; there was a liquid ocean; it is unknown if there were continents, but oceanic islands were likely; Earth's interior chemistry (and thus, volcanic activity) was different; and there was a larger flux of impactors (e.g. comets and asteroids) hitting Earth's surface.

Studies have attempted to constrain the composition and nature of the prebiotic atmosphere by analyzing geochemical data and using theoretical models that include our knowledge of the early Earth environment. These studies indicate that the prebiotic atmosphere likely contained more CO2 than the modern Earth, had N2 within a factor of 2 of the modern levels, and had vanishingly low amounts of O2. The atmospheric chemistry is believed to have been "weakly reducing", where reduced gases like CH4, NH3, and H2 were present in small quantities. The composition of the prebiotic atmosphere was likely periodically altered by impactors, which may have temporarily caused the atmosphere to have been "strongly reduced".

Constraining the composition of the prebiotic atmosphere is key to understanding the origin of life, as it may facilitate or inhibit certain chemical reactions on Earth's surface believed to be important for the formation of the first living organism. Life on Earth originated and began modifying the atmosphere at least 3.5 billion years ago and possibly much earlier, which marks the end of the prebiotic atmosphere.

Environmental context

Establishment of the prebiotic atmosphere

Earth is believed to have formed over 4.5 billion years ago by accreting material from the solar nebula.[2] Earth's Moon formed in a collision, the Moon-forming impact, believed to have occurred 30-50 million years after the Earth formed. In this collision, a Mars-sized object named Theia collided with the primitive Earth and the remnants of the collision formed the Moon. The collision likely supplied enough energy to melt most of Earth's mantle and vaporize roughly 20% of it, heating Earth's surface to as high as 8,000 K (~14,000 °F). Earth's surface in the aftermath of the Moon-forming impact was characterized by high temperatures (~2,500 K), an atmosphere made of rock vapor and steam, and a magma ocean. As the Earth cooled by radiating away the excess energy from the impact, the magma ocean solidified and volatiles were partitioned between the mantle and atmosphere until a stable state was reached. It is estimated that Earth transitioned from the hot, post-impact environment into a potentially habitable environment with crustal recycling, albeit different from modern plate tectonics, roughly 10-20 million years after the Moon-forming impact, around 4.4 billion years ago. The atmosphere present from this point in Earth's history until the origin of life is referred to as the prebiotic atmosphere.

It is unknown when exactly life originated. The oldest direct evidence for life on Earth is around 3.5 billion years old, such as fossil stromatolites from North Pole, Western Australia. Putative evidence of life on Earth from older times (e.g. 3.8 and 4.1 billion years ago) lacks additional context necessary to claim it is truly of biotic origin, so it is still debated. Thus, the prebiotic atmosphere concluded 3.5 billion years ago or earlier, placing it in the early Archean Eon or mid-to-late Hadean Eon.

Environmental factors

Knowledge of the environmental factors at play on early Earth is required to investigate the prebiotic atmosphere. Much of what we know about the prebiotic environment comes from zircons - crystals of zirconium silicate (ZrSiO4). Zircons are useful because they record the physical and chemical processes occurring on the prebiotic Earth during their formation and they are especially durable. Most zircons that are dated to the prebiotic time period are found at the Jack Hills formation of Western Australia, but they also occur elsewhere. Geochemical data from several prebiotic zircons show isotopic evidence for chemical change induced by liquid water, indicating that the prebiotic environment had a liquid ocean and a surface temperature that did not cause it to freeze or boil. It is unknown when exactly the continents emerged above this liquid ocean. This adds uncertainty to the interaction between Earth's prebiotic surface and atmosphere, as the presence of exposed land determines the rate of weathering processes and provides local environments that may be necessary for life to form. However, oceanic islands were likely. Additionally, the oxidation state of Earth's mantle was likely different at early times, which changes the fluxes of chemical species delivered to the atmosphere from volcanic outgassing.

Environmental factors from elsewhere in the Solar System also affected prebiotic Earth. The Sun was ~30% dimmer overall around the time the Earth formed. This means greenhouse gases may have been required in higher levels than present day to keep Earth from freezing over. Despite the overall reduction in energy coming from the Sun, the early Sun emitted more radiation in the ultraviolet and x-ray regimes than it currently does. This indicates that different photochemical reactions may have dominated early Earth's atmosphere, which has implications for global atmospheric chemistry and the formation of important compounds that could lead to the origin of life. Finally, there was a significantly higher flux of objects that impacted Earth - such as comets and asteroids - in the early Solar System. These impactors may have been important in the prebiotic atmosphere because they can deliver material to the atmosphere, eject material from the atmosphere, and change the chemical nature of the atmosphere after their arrival.
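
The effect of a 30% dimmer Sun can be illustrated with the standard equilibrium-temperature formula T_eq = [S(1 − A)/(4σ)]^(1/4). The solar constant and albedo below are assumed modern-style round numbers, and greenhouse warming is deliberately left out; the gap between the result and the freezing point is what greenhouse gases would have had to make up.

# Hedged back-of-the-envelope equilibrium temperature for a 30% dimmer Sun.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_MODERN = 1361.0     # modern solar constant, W m^-2 (assumed)
ALBEDO = 0.3          # assumed Bond albedo

def t_equilibrium(solar_constant, albedo=ALBEDO):
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(f"modern Sun: {t_equilibrium(S_MODERN):.0f} K")        # ~255 K (no greenhouse effect)
print(f"faint Sun:  {t_equilibrium(0.7 * S_MODERN):.0f} K")  # ~233 K, over 20 K colder still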

Atmospheric composition

The exact composition of the prebiotic atmosphere is unknown due to the lack of geochemical data from the time period. Current studies generally indicate that the prebiotic atmosphere was "weakly reduced", with elevated levels of CO2, N2 within a factor of 2 of the modern level, negligible amounts of O2, and more hydrogen-bearing gases than the modern Earth (see below). Noble gases and photochemical products of the dominant species were also present in small quantities.

Carbon dioxide

Carbon dioxide (CO2) is an important component of the prebiotic atmosphere because, as a greenhouse gas, it strongly affects the surface temperature; also, it dissolves in water and can change the ocean pH. The abundance of carbon dioxide in the prebiotic atmosphere is not directly constrained by geochemical data and must be inferred.

Evidence suggests that the carbonate-silicate cycle regulates Earth's atmospheric carbon dioxide abundance on timescales of about 1 million years. The carbonate-silicate cycle is a negative feedback loop that modulates Earth's surface temperature by partitioning carbon between the atmosphere and the mantle via several surface processes. It has been proposed that the processes of the carbonate-silicate cycle would result in high CO2 levels in the prebiotic atmosphere to offset the lower energy input from the faint young Sun. This mechanism can be used to estimate the prebiotic CO2 abundance, but it is debated and uncertain. Uncertainty is primarily driven by a lack of knowledge about the area of exposed land, early Earth's interior chemistry and structure, the rate of reverse weathering and seafloor weathering, and the increased impactor flux. One extensive modeling study suggests that CO2 was roughly 20 times higher in the prebiotic atmosphere than the preindustrial modern value (280 ppm), which would result in a global average surface temperature around 259 K (6.5 °F) and an ocean pH around 7.9. This is in agreement with other studies, which generally conclude that the prebiotic atmospheric CO2 abundance was higher than the modern one, although the global surface temperature may still be significantly colder due to the faint young Sun.

Nitrogen

Nitrogen in the form of N2 is 78% of Earth's modern atmosphere by volume, making it the most abundant gas. N2 is generally considered a background gas in the Earth's atmosphere because it is relatively unreactive due to the strength of its triple bond. Despite this, atmospheric N2 was at least moderately important to the prebiotic environment because it impacts the climate via Rayleigh scattering and it may have been more photochemically active under the enhanced x-ray and ultraviolet radiation from the young Sun. N2 was also likely important for the synthesis of compounds believed to be critical for the origin of life, such as hydrogen cyanide (HCN) and amino acids derived from HCN. Studies have attempted to constrain the prebiotic atmosphere N2 abundance with theoretical estimates, models, and geologic data. These studies have resulted in a range of possible constraints on the prebiotic N2 abundance. For example, a recent modeling study that incorporates atmospheric escape, magma ocean chemistry, and the evolution of Earth's interior chemistry suggests that the atmospheric N2 abundance was probably less than half of the present day value. However, this study fits into a larger body of work that generally constrains the prebiotic N2 abundance to be between half and double the present level.

Oxygen

Oxygen in the form of O2 makes up 21% of Earth's modern atmosphere by volume. Earth's modern atmospheric O2 is due almost entirely to biology (e.g. it is produced during oxygenic photosynthesis), so it was not nearly as abundant in the prebiotic atmosphere. This is favorable for the origin of life, as O2 would oxidize organic compounds needed in the origin of life. The prebiotic atmosphere O2 abundance can be theoretically calculated with models of atmospheric chemistry. The primary source of O2 in these models is the breakdown and subsequent chemical reactions of other oxygen containing compounds. Incoming solar photons or lightning can break up CO2 and H2O molecules, freeing oxygen atoms and other radicals (i.e. highly reactive gases in the atmosphere). The free oxygen can then combine into O2 molecules via several chemical pathways. The rate at which O2 is created in this process is determined by the incoming solar flux, the rate of lightning, and the abundances of the other atmospheric gases that take part in the chemical reactions (e.g. CO2, H2O, OH), as well as their vertical distributions. O2 is removed from the atmosphere via photochemical reactions that mainly involve H2 and CO near the surface. The most important of these reactions starts when H2 is split into two H atoms by incoming solar photons. The free H then reacts with O2 and eventually forms H2O, resulting in a net removal of O2 and a net increase in H2O. Models that simulate all of these chemical reactions in a potential prebiotic atmosphere show that an extremely small atmospheric O2 abundance is likely. In one such model that assumed values for CO2 and H2 abundances and sources, the O2 volume mixing ratio is calculated to be between 10−18 and 10−11 near the surface and up to 10−4 in the upper atmosphere.

Hydrogen and reduced gases

The hydrogen abundance in the prebiotic atmosphere can be viewed from the perspective of reduction-oxidation (redox) chemistry. The modern atmosphere is oxidizing, due to the large volume of atmospheric O2. In an oxidizing atmosphere, the majority of atoms that form atmospheric compounds (e.g. C) will be in an oxidized form (e.g. CO2) instead of a reduced form (e.g. CH4). In a reducing atmosphere, more species will be in their reduced, generally hydrogen-bearing forms. Because there was very little O2 in the prebiotic atmosphere, it is generally believed that the prebiotic atmosphere was "weakly reduced" - although some argue that the atmosphere was "strongly reduced". In a weakly reduced atmosphere, reduced gases (e.g. CH4 and NH3) and oxidized gases (e.g. CO2) are both present. The actual H2 abundance in the prebiotic atmosphere has been estimated by doing a calculation that takes into account the rate at which H2 is volcanically outgassed to the surface and the rate at which it escapes to space. One of these recent calculations indicates that the prebiotic atmosphere H2 abundance was around 400 parts per million, but could have been significantly higher if the source from volcanic outgassing was enhanced or atmospheric escape was less efficient than expected. The abundances of other reduced species in the atmosphere can then be calculated with models of atmospheric chemistry.
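
The flavour of such a balance calculation is a steady state between outgassing and escape. The escape coefficient and outgassing flux below are assumed, textbook-style round numbers (not the values used in the study mentioned above), chosen only to show how a mixing ratio of a few hundred ppm falls out.

# Hedged steady-state estimate: volcanic outgassing in, diffusion-limited escape out.
ESCAPE_COEFF = 2.5e13     # assumed: escape flux ~ coeff * f(total H), molecules cm^-2 s^-1
OUTGASSING_H2 = 2.0e10    # assumed volcanic H2 flux, molecules cm^-2 s^-1

# At steady state: outgassing = escape = ESCAPE_COEFF * 2 * f(H2),
# since each H2 molecule carries two hydrogen atoms.
f_h2 = OUTGASSING_H2 / (2 * ESCAPE_COEFF)
print(f"H2 mixing ratio ~ {f_h2:.0e} ({f_h2 * 1e6:.0f} ppm)")   # ~4e-4, i.e. ~400 ppm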

Post-impact atmospheres

It has been proposed that the large flux of impactors in the early solar system may have significantly changed the nature of the prebiotic atmosphere. During the time period of the prebiotic atmosphere, it is expected that a few asteroid impacts large enough to vaporize the oceans and melt Earth's surface could have occurred, with smaller impacts expected in even larger numbers. These impacts would have significantly changed the chemistry of the prebiotic atmosphere by heating it up, ejecting some of it to space, and delivering new chemical material. Studies of post-impact atmospheres indicate that they would have caused the prebiotic atmosphere to be strongly reduced for a period of time after a large impact. On average, impactors in the early solar system contained highly reduced minerals (e.g. metallic iron) and were enriched with reduced compounds that readily enter the atmosphere as a gas. In these strongly reduced post-impact atmospheres, there would be significantly higher abundances of reduced gases like CH4, HCN, and perhaps NH3. Reduced, post-impact atmospheres after the ocean condensed are predicted to last up to tens of millions of years before returning to the background state.

Model studies have refined this picture by dividing post-impact evolution into three phases: initial H2 production from iron-steam reactions, cooling with CH4 and NH3 formation (catalyzed by nickel surfaces), and long-term photochemical production of nitriles. When the CH4 to CO2 ratio exceeds 0.1, hazy atmospheres form, with HCN/HCCCN rainout of up to 10^9 molecules per cm2 per second; smaller CH4 to CO2 ratios yield negligible HCCCN. Such production of nitriles would continue until the H2 escapes to space, on the order of a few million years. Minimum impactor masses for effective reduction are 4×10^20 to 5×10^21 kg, depending on iron efficiency and melt equilibration. In addition to the nitrile bombardment hypothesis, other studies find that serpentinization from deep mantle processes may on its own have been sufficient to produce HCN, at a rate an order of magnitude lower than the bombardment mechanism, though without HCCCN.

Relationship to the origin of life

The prebiotic atmosphere can supply chemical ingredients and facilitate environmental conditions that contribute to the synthesis of organic compounds involved in the origin of life. For example, compounds potentially involved in the origin of life were synthesized in the Miller-Urey experiment. In this experiment, assumptions must be made about what gases were present in the prebiotic atmosphere. Proposed important ingredients for the origin of life include (but are not limited to) methane (CH4), ammonia (NH3), phosphate (PO4^3-), hydrogen cyanide (HCN), cyanoacetylene (HCCCN), various organics, and various photochemical byproducts. The atmospheric composition will impact the stability and production of these compounds at Earth's surface. For example, the "weakly reduced" prebiotic atmosphere may produce some, but not all, of these ingredients via reactions with lightning. On the other hand, the production and stability of origin of life ingredients in a strongly reduced atmosphere are greatly enhanced, making post-impact atmospheres particularly relevant. It is also proposed that the conditions required for the origin of life could have emerged locally, in a system that is isolated from the atmosphere (e.g. a hydrothermal vent). Arguments against this hypothesis have emphasized that compounds such as cyanides used to make nucleobases of RNA would be too dilute in the ocean, unlike lakes on land which might readily store them as ferrocyanide salts. This may be overcome by imposing a boundary condition such as shallow water vents that experienced localized evaporative cycles. The vent mechanism might also produce HCCCN, but would require extremely high pressure and temperature for efficient stockpiling. Methods that readily produce HCCCN are important as it is a required constituent in the current best understanding of pyrimidine synthesis.

Once life originated and began interacting with the atmosphere, the atmosphere was, by definition, no longer prebiotic.

Big Bang nucleosynthesis

From Wikipedia, the free encyclopedia

In physical cosmology, Big Bang nucleosynthesis (also known as primordial nucleosynthesis, and abbreviated as BBN) is a model for the production of the light nuclei 2H, 3He, 4He, and 7Li between 0.01s and 200s in the lifetime of the universe. The model uses a combination of thermodynamic arguments and results from equations for the expansion of the universe to define a changing temperature and density, then analyzes the rates of nuclear reactions at these temperatures and densities to predict the nuclear abundance ratios. Refined models agree very well with observations with the exception of the abundance of 7Li. The model is one of the key concepts in standard cosmology.

Elements heavier than lithium are thought to have been created later in the life of the universe by stellar nucleosynthesis, through the formation, evolution and death of stars.

Characteristics

The Big Bang nucleosynthesis (BBN) model assumes a homogeneous plasma, at a temperature corresponding to 1 MeV, consisting of electrons annihilating with positrons to produce photons; in turn, the photons pair to produce electrons and positrons: e− + e+ ⇌ γ + γ. These particles are in equilibrium. A similar number of neutrinos, also at 1 MeV, have just dropped out of equilibrium at this density. Finally, there is a very low density of baryons (neutrons and protons). The BBN model follows the nuclear reactions of these baryons as the temperature and pressure drop due to the expansion of the universe.

The basic model makes two simplifying assumptions:

  1. until the temperature drops below 0.1 MeV only neutrons and protons are stable and
  2. only isotopes of hydrogen and of helium will be produced at the end.

These assumptions are based on the intense flux of high energy photons in the plasma. Above 0.1 MeV every nucleus created is blasted apart by a photon. Thus the model first determines the ratio of neutrons to protons and uses this as an input to calculate the hydrogen, deuterium, tritium, and 3He.

The model follows nuclear reaction rates as the temperature and density drop. The evolving density and temperature follow from the Friedmann-Robertson-Walker model. As the temperature falls to around 1 MeV, the density of neutrinos drops, and reactions such as n + νe ⇌ p + e− and n + e+ ⇌ p + ν̄e, which maintained neutron and proton equilibrium, slow down. The neutron-to-proton ratio decreases to around 1/7.

As the temperature and density continue to fall, reactions involving combinations of protons and neutrons shift towards heavier nuclei. These include p + n → 2H + γ, 2H + p → 3He + γ, 2H + 2H → 3He + n, and 3He + 2H → 4He + p. Due to the higher binding energy of 4He, the free neutrons and the deuterium nuclei are largely consumed, leaving mostly protons and helium.

The fusion of nuclei occurred between roughly 10 seconds to 20 minutes after the Big Bang; this corresponds to the temperature range when the universe was cool enough for deuterium to survive, but hot and dense enough for fusion reactions to occur at a significant rate.

The key parameter which allows one to calculate the effects of Big Bang nucleosynthesis is the baryon/photon number ratio, which is a small number of order 6 × 10−10. This parameter corresponds to the baryon density and controls the rate at which nucleons collide and react; from this it is possible to calculate element abundances after nucleosynthesis ends. Although the baryon per photon ratio is important in determining element abundances, the precise value makes little difference to the overall picture. Without major changes to the Big Bang theory itself, BBN will result in mass abundances of about 75% of hydrogen-1, about 25% helium-4, about 0.01% of deuterium and helium-3, trace amounts (on the order of 10−10) of lithium, and negligible heavier elements. That the observed abundances in the universe are generally consistent with these abundance numbers is considered strong evidence for the Big Bang theory.

History

The history of Big Bang nucleosynthesis research began with a proposal in the 1940s by George Gamow that nuclear reactions during a hot initial phase of the universe produced the observed hydrogen and helium. Calculations by his student Ralph Alpher, published in the famous Alpher–Bethe–Gamow paper, outlined a theory of light-element production in the early universe. The first detailed calculations of the primordial isotopic abundances came in 1966 and have been refined over the years using updated estimates of the input nuclear reaction rates. The first systematic Monte Carlo study of how nuclear reaction rate uncertainties impact isotope predictions, over the relevant temperature range, was carried out in 1993.

Important parameters

The creation of light elements during BBN was dependent on a number of parameters; among these were the neutron–proton ratio (calculable from Standard Model physics) and the baryon–photon ratio.

Neutron–proton ratio

The neutron–proton ratio was set by Standard Model physics before the nucleosynthesis era, essentially within the first second after the Big Bang. Neutrons can react with positrons or electron neutrinos to create protons and other products in one of the following reactions: n + e+ ⇌ p + ν̄e and n + νe ⇌ p + e−.

At times much earlier than 1 sec, these reactions were fast and maintained the n/p ratio close to 1:1. As the temperature dropped, the equilibrium shifted in favour of protons due to their slightly lower mass, and the n/p ratio smoothly decreased. These reactions continued until the decreasing temperature and density caused the reactions to become too slow, which occurred at about T = 0.7 MeV (time around 1 second) and is called the freeze out temperature. At freeze out, the neutron–proton ratio was about 1:6. However, free neutrons are unstable with a mean life of 880 sec; some neutrons decayed in the next few minutes before fusing into any nucleus, so the ratio of total neutrons to protons after nucleosynthesis ends is about 1:7. Almost all neutrons that fused instead of decaying ended up combined into helium-4, due to the fact that helium-4 has the highest binding energy per nucleon among light elements. This predicts that about 8% of all atoms should be helium-4, leading to a mass fraction of helium-4 of about 25%, which is in line with observations. Small traces of deuterium and helium-3 remained as there was insufficient time and density for them to react and form helium-4.
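
The closing arithmetic can be made explicit: with a surviving neutron-to-proton ratio r, essentially all neutrons end up bound in 4He (each accompanied by one proton), so the helium-4 mass fraction is Y = 2r/(1 + r), and r ≈ 1/7 gives about 25%. A quick check:

# Worked version of the arithmetic above: 4He mass fraction implied by the
# final neutron-to-proton ratio.
def helium_mass_fraction(r):
    return 2 * r / (1 + r)    # all neutrons bound in 4He, one proton paired with each

print(f"{helium_mass_fraction(1/7):.3f}")   # -> 0.250, i.e. about 25% by mass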

Baryon–photon ratio

The baryon–photon ratio, η, is the key parameter determining the abundances of light elements after nucleosynthesis ends. Baryons and light elements can fuse in the following main reactions: p + n → 2H + γ, 2H + p → 3He + γ, 2H + 2H → 3He + n, 2H + 2H → 3H + p, 3H + 2H → 4He + n, and 3He + 2H → 4He + p,

along with some other low-probability reactions leading to 7Li or 7Be. (An important feature is that there are no stable nuclei with mass 5 or 8, which implies that reactions adding one baryon to 4He, or fusing two 4He, do not occur). Most fusion chains during BBN ultimately terminate in 4He (helium-4), while "incomplete" reaction chains lead to small amounts of left-over 2H or 3He; the amount of these decreases with increasing baryon-photon ratio. That is, the larger the baryon-photon ratio the more reactions there will be and the more efficiently deuterium will be eventually transformed into helium-4. This result makes deuterium a very useful tool in measuring the baryon-to-photon ratio.

Sequence

Main nuclear reaction chains for Big Bang nucleosynthesis

Big Bang nucleosynthesis began roughly 20 seconds after the big bang, when the universe had cooled sufficiently to allow deuterium nuclei to survive disruption by high-energy photons. (Note that the neutron–proton freeze-out time was earlier). This time is essentially independent of dark matter content, since the universe was highly radiation dominated until much later, and this dominant component controls the temperature/time relation. At this time there were about six protons for every neutron, but a small fraction of the neutrons decay before fusing in the next few hundred seconds, so at the end of nucleosynthesis there are about seven protons to every neutron, and almost all the neutrons are in Helium-4 nuclei.

One feature of BBN is that the physical laws and constants that govern the behavior of matter at these energies are very well understood, and hence BBN lacks some of the speculative uncertainties that characterize earlier periods in the life of the universe. Another feature is that the process of nucleosynthesis is determined by conditions at the start of this phase of the life of the universe, and proceeds independently of what happened before.

As the universe expands, it cools. Free neutrons are less stable than helium nuclei, and the protons and neutrons have a strong tendency to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Before nucleosynthesis began, the temperature was high enough for many photons to have energy greater than the binding energy of deuterium; therefore any deuterium that was formed was immediately destroyed (a situation known as the "deuterium bottleneck"). Hence, the formation of helium-4 was delayed until the universe became cool enough for deuterium to survive (at about T = 0.1 MeV); after which there was a sudden burst of element formation. However, very shortly thereafter, around twenty minutes after the Big Bang, the temperature and density became too low for any significant fusion to occur. At this point, the elemental abundances were nearly fixed, and the only changes were the result of the radioactive decay of the two major unstable products of BBN, tritium and beryllium-7.

Heavy elements

A version of the periodic table indicating the origins – including big bang nucleosynthesis – of the elements. All elements above 103 (lawrencium) are also man-made and are not included.

Big Bang nucleosynthesis produced very few nuclei of elements heavier than lithium due to a bottleneck: the absence of a stable nucleus with 8 or 5 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang.

The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10−15 that of H, making them essentially undetectable and negligible. Indeed, none of these primordial isotopes of the elements from beryllium to oxygen have yet been detected, although those of beryllium and boron may be able to be detected in the future. So far, the only stable nuclides known experimentally to have been made during Big Bang nucleosynthesis are protium, deuterium, helium-3, helium-4, and lithium-7.

Helium-4

Big Bang nucleosynthesis predicts a primordial abundance of about 25% helium-4 by mass, irrespective of the initial conditions of the universe. As long as the universe was hot enough for protons and neutrons to transform into each other easily, their ratio, determined solely by their relative masses, was about 1 neutron to 7 protons (allowing for some decay of neutrons into protons). Once it was cool enough, the neutrons quickly bound with an equal number of protons to form first deuterium, then helium-4. Helium-4 is very stable and is nearly the end of this chain if it runs for only a short time, since helium neither decays nor combines easily to form heavier nuclei (since there are no stable nuclei with mass numbers of 5 or 8, helium does not combine easily with either protons, or with itself). Once temperatures are lowered, out of every 16 nucleons (2 neutrons and 14 protons), 4 of these (25% of the total particles and total mass) combine quickly into one helium-4 nucleus. This produces one helium for every 12 hydrogens, resulting in a universe that is a little over 8% helium by number of atoms, and 25% helium by mass.

"One analogy is to think of helium-4 as ash, and the amount of ash that one forms when one completely burns a piece of wood is insensitive to how one burns it." The resort to the BBN theory of the helium-4 abundance is necessary as there is far more helium-4 in the universe than can be explained by stellar nucleosynthesis. In addition, it provides an important test for the Big Bang theory. If the observed helium abundance is significantly different from 25%, then this would pose a serious challenge to the theory. This would particularly be the case if the early helium-4 abundance was much smaller than 25% because it is hard to destroy helium-4. For a few years during the mid-1990s, observations suggested that this might be the case, causing astrophysicists to talk about a Big Bang nucleosynthetic crisis, but further observations were consistent with the Big Bang theory.

Deuterium

Deuterium is in some ways the opposite of helium-4, in that while helium-4 is very stable and difficult to destroy, deuterium is only marginally stable and easy to destroy. The temperatures, time, and densities were sufficient to combine a substantial fraction of the deuterium nuclei to form helium-4 but insufficient to carry the process further using helium-4 in the next fusion step. BBN did not convert all of the deuterium in the universe to helium-4 due to the expansion that cooled the universe and reduced the density, and so cut that conversion short before it could proceed any further. One consequence of this is that, unlike helium-4, the amount of deuterium is very sensitive to initial conditions. The denser the initial universe was, the more deuterium would be converted to helium-4 before time ran out, and the less deuterium would remain.

There are no known post-Big Bang processes which can produce significant amounts of deuterium. Hence observations about deuterium abundance suggest that the universe is not infinitely old, which is in accordance with the Big Bang theory.

During the 1970s, there were major efforts to find processes that could produce deuterium, but those revealed ways of producing isotopes other than deuterium. The problem was that while the concentration of deuterium in the universe is consistent with the Big Bang model as a whole, it is too high to be consistent with a model that presumes that most of the universe is composed of protons and neutrons. If one assumes that all of the universe consists of protons and neutrons, the density of the universe is such that much of the currently observed deuterium would have been burned into helium-4. The standard explanation now used for the abundance of deuterium is that the universe does not consist mostly of baryons, but that non-baryonic matter (also known as dark matter) makes up most of the mass of the universe. This explanation is also consistent with calculations that show that a universe made mostly of protons and neutrons would be far more clumpy than is observed.

It is very hard to come up with another process that would produce deuterium other than by nuclear fusion. Such a process would require that the temperature be hot enough to produce deuterium, but not hot enough to produce helium-4, and that this process should immediately cool to non-nuclear temperatures after no more than a few minutes. It would also be necessary for the deuterium to be swept away before it reoccurs.

Producing deuterium by fission is also difficult. The problem here again is that deuterium is very unlikely due to nuclear processes, and that collisions between atomic nuclei are likely to result either in the fusion of the nuclei, or in the release of free neutrons or alpha particles. During the 1970s, cosmic ray spallation was proposed as a source of deuterium. That theory failed to account for the abundance of deuterium, but led to explanations of the source of other light elements.

Lithium

The amounts of lithium-7 and lithium-6 produced in the Big Bang are on the order of 10−9 of all primordial nuclides for lithium-7, and around 10−13 for lithium-6.

Measurements and status of theory

The theory of BBN gives a detailed mathematical description of the production of the light "elements" deuterium, helium-3, helium-4, and lithium-7. Specifically, the theory yields precise quantitative predictions for the mixture of these elements, that is, the primordial abundances at the end of the big-bang.

In order to test these predictions, it is necessary to reconstruct the primordial abundances as faithfully as possible, for instance by observing astronomical objects in which very little stellar nucleosynthesis has taken place (such as certain dwarf galaxies) or by observing objects that are very far away, and thus can be seen in a very early stage of their evolution (such as distant quasars).

As noted above, in the standard picture of BBN, all of the light element abundances depend on the amount of ordinary matter (baryons) relative to radiation (photons). Since the universe is presumed to be homogeneous, it has one unique value of the baryon-to-photon ratio. For a long time, this meant that to test BBN theory against observations one had to ask: can all of the light element observations be explained with a single value of the baryon-to-photon ratio? Or more precisely, allowing for the finite precision of both the predictions and the observations, one asks: is there some range of baryon-to-photon values which can account for all of the observations?

More recently, the question has changed: precision observations of the cosmic microwave background radiation with the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck give an independent value for the baryon-to-photon ratio. The present measurement of helium-4 indicates good agreement, and yet better agreement for helium-3. But for lithium-7, there is a significant discrepancy between BBN and WMAP/Planck, and the abundance derived from Population II stars. The discrepancy, called the "cosmological lithium problem", is a factor of 2.4–4.3 below the theoretically predicted value. It has resulted in revised calculations of the standard BBN based on new nuclear data, and in various reevaluation proposals for primordial proton–proton nuclear reactions, especially the abundances of 7Be + n → 7Li + p versus 7Be + 2H → 8Be + p.

Non-standard scenarios

In addition to the standard BBN scenario there are numerous non-standard BBN scenarios. These should not be confused with non-standard cosmology: a non-standard BBN scenario assumes that the Big Bang occurred, but inserts additional physics in order to see how this affects elemental abundances. These pieces of additional physics include relaxing or removing the assumption of homogeneity, or inserting new particles such as massive neutrinos.

There have been, and continue to be, various reasons for researching non-standard BBN. The first, which is largely of historical interest, is to resolve inconsistencies between BBN predictions and observations. This has proved to be of limited usefulness in that the inconsistencies were resolved by better observations, and in most cases trying to change BBN resulted in abundances that were more inconsistent with observations rather than less. The second reason for researching non-standard BBN, and largely the focus of non-standard BBN in the early 21st century, is to use BBN to place limits on unknown or speculative physics. For example, standard BBN assumes that no exotic hypothetical particles were involved in BBN. One can insert a hypothetical particle (such as a massive neutrino) and see what has to happen before BBN predicts abundances that are very different from observations. This has been done to put limits on the mass of a stable tau neutrino.

Biological computing

From Wikipedia, the free encyclopedia

Biological computers use biologically derived molecules — such as DNA and/or proteins — to perform digital or real computations.

The development of biocomputers has been made possible by the expanding new science of nanobiotechnology. The term nanobiotechnology can be defined in multiple ways; in a more general sense, nanobiotechnology can be defined as any type of technology that uses both nano-scale materials (i.e. materials having characteristic dimensions of 1-100 nanometers) and biologically based materials. A more restrictive definition views nanobiotechnology more specifically as the design and engineering of proteins that can then be assembled into larger, functional structures. The implementation of nanobiotechnology, as defined in this narrower sense, provides scientists with the ability to engineer biomolecular systems specifically so that they interact in a fashion that can ultimately result in the computational functionality of a computer.

Scientific background

Biocomputers use biologically derived materials to perform computational functions. A biocomputer consists of a pathway or series of metabolic pathways involving biological materials that are engineered to behave in a certain manner based upon the conditions (input) of the system. The resulting pathway of reactions that takes place constitutes an output, which is based on the engineering design of the biocomputer and can be interpreted as a form of computational analysis. Three distinguishable types of biocomputers include biochemical computers, biomechanical computers, and bioelectronic computers.

Biochemical computers

Biochemical computers use the immense variety of feedback loops that are characteristic of biological chemical reactions in order to achieve computational functionality. Feedback loops in biological systems take many forms, and many different factors can provide both positive and negative feedback to a particular biochemical process, causing either an increase in chemical output or a decrease in chemical output, respectively. Such factors may include the quantity of catalytic enzymes present, the amount of reactants present, the amount of products present, and the presence of molecules that bind to and thus alter the chemical reactivity of any of the aforementioned factors. Given the nature of these biochemical systems to be regulated through many different mechanisms, one can engineer a chemical pathway comprising a set of molecular components that react to produce one particular product under one set of specific chemical conditions and another particular product under another set of conditions. The presence of the particular product that results from the pathway can serve as a signal, which can be interpreted—along with other chemical signals—as a computational output based upon the starting chemical conditions of the system (the input).

Biomechanical computers

Biomechanical computers are similar to biochemical computers in that they both perform a specific operation that can be interpreted as a functional computation based upon specific initial conditions which serve as input. They differ, however, in what exactly serves as the output signal. In biochemical computers, the presence or concentration of certain chemicals serves as the output signal. In biomechanical computers, however, the mechanical shape of a specific molecule or set of molecules under a set of initial conditions serves as the output. Biomechanical computers rely on the nature of specific molecules to adopt certain physical configurations under certain chemical conditions. The mechanical, three-dimensional structure of the product of the biomechanical computer is detected and interpreted appropriately as a calculated output.

Bioelectronic computers

Biocomputers can also be constructed to perform electronic computing. Again, like both biomechanical and biochemical computers, computations are performed by interpreting a specific output based upon an initial set of conditions that serve as input. In bioelectronic computers, the measured output is the electrical conductivity observed in the system. This conductivity arises from specifically designed biomolecules that conduct electricity in highly specific ways depending upon the initial conditions that serve as the input of the bioelectronic system.

Network-based biocomputers

In network-based biocomputation, self-propelled biological agents, such as molecular motor proteins or bacteria, explore a microscopic network that encodes a mathematical problem of interest. The paths of the agents through the network and/or their final positions represent potential solutions to the problem. For instance, in the system described by Nicolau et al., mobile molecular motor filaments are detected at the "exits" of a network encoding the NP-complete problem SUBSET SUM. All exits visited by filaments correspond to correct solutions of the problem; exits not visited correspond to non-solutions. The motility proteins are either actin and myosin or kinesin and microtubules. The myosin and kinesin, respectively, are attached to the bottom of the network channels. When adenosine triphosphate (ATP) is added, the actin filaments or microtubules are propelled through the channels, thus exploring the network. The conversion of chemical energy (ATP) into mechanical energy (motility) is highly efficient compared with, for example, electronic computing, so the computer, in addition to being massively parallel, also uses orders of magnitude less energy per computational step.
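
How such a network encodes a SUBSET SUM instance can be sketched in software. The following Python sketch is a conceptual illustration rather than a model of the Nicolau et al. device: the set of integers is made up, and the agent paths are enumerated sequentially, whereas the physical computer explores them in parallel with many filaments. At each split junction an agent either includes or skips one number, so the exit it reaches equals the sum of the numbers collected along its path, and the set of visited exits is exactly the set of achievable subset sums.

    from itertools import product

    # Hypothetical SUBSET SUM instance: which sums are achievable?
    numbers = [2, 5, 9]
    target = 11

    # Each path through the network is a sequence of include/skip choices
    # made at the split junctions; the exit reached equals the sum of the
    # included numbers. The device explores all paths in parallel; here
    # they are simply enumerated.
    visited_exits = set()
    for choices in product((0, 1), repeat=len(numbers)):
        exit_position = sum(n for n, take in zip(numbers, choices) if take)
        visited_exits.add(exit_position)

    print("exits visited (achievable sums):", sorted(visited_exits))
    print("target", target, "reachable:", target in visited_exits)

With three numbers there are 2^3 = 8 candidate paths, the same scale as the 8-candidate-solution instance mentioned in the 2016 demonstration below.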

Engineering biocomputers

A ribosome is a biological machine that uses protein dynamics on nanoscales to translate RNA into proteins.

The behavior of biologically derived computational systems such as these relies on the particular molecules that make up the system, which are primarily proteins but may also include DNA molecules. Nanobiotechnology provides the means to synthesize the multiple chemical components necessary to create such a system. The chemical nature of a protein is dictated by its sequence of amino acids—the chemical building blocks of proteins. This sequence is in turn dictated by a specific sequence of DNA nucleotides—the building blocks of DNA molecules. Proteins are manufactured in biological systems through the translation of nucleotide sequences by biological molecules called ribosomes, which assemble individual amino acids into polypeptides that form functional proteins based on the nucleotide sequence that the ribosome interprets. What this ultimately means is that one can engineer the chemical components necessary to create a biological system capable of performing computations by engineering DNA nucleotide sequences to encode for the necessary protein components. Also, the synthetically designed DNA molecules themselves may function in a particular biocomputer system. Thus, implementing nanobiotechnology to design and produce synthetically designed proteins—as well as the design and synthesis of artificial DNA molecules—can allow the construction of functional biocomputers (e.g. Computational Genes).
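
The central mapping described here, from a DNA nucleotide sequence to an amino acid sequence, can be shown concretely. The Python sketch below uses a deliberately partial codon table containing only the entries needed for a made-up example sequence; a real translation would use the full standard genetic code and handle reading frames, start codons, and stop codons properly.

    # Partial standard genetic code (DNA codons -> one-letter amino acids).
    # Only the codons needed for the example sequence are listed.
    CODON_TABLE = {
        "ATG": "M",  # methionine (also the usual start codon)
        "GCT": "A",  # alanine
        "AAA": "K",  # lysine
        "GAA": "E",  # glutamate
        "TAA": "*",  # stop codon
    }

    def translate(coding_dna: str) -> str:
        """Translate a DNA coding sequence into a one-letter protein
        string, stopping at the first stop codon."""
        protein = []
        for i in range(0, len(coding_dna) - 2, 3):
            amino_acid = CODON_TABLE[coding_dna[i:i + 3]]
            if amino_acid == "*":
                break
            protein.append(amino_acid)
        return "".join(protein)

    print(translate("ATGGCTAAAGAATAA"))  # prints "MAKE"

Designing a biocomputer component at this level amounts to choosing the DNA sequence so that the translated protein has the desired chemical behavior.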

Biocomputers can also be designed with cells as their basic components. Chemically induced dimerization systems can be used to make logic gates from individual cells. These logic gates are activated by chemical agents that induce interactions between previously non-interacting proteins and trigger some observable change in the cell.
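
A minimal sketch of such a cell-based gate follows; the inducer and reporter names are hypothetical, since the text does not specify a particular system. The cell is treated as an AND gate whose reporter turns on only when both chemical inducers are present to bring the corresponding protein pairs together.

    # Toy model of a cell acting as an AND gate via chemically induced
    # dimerization (hypothetical inducers and reporter, for illustration).
    def cell_reports(inducer_a_present: bool, inducer_b_present: bool) -> bool:
        dimer_1 = inducer_a_present   # inducer A bridges protein pair 1
        dimer_2 = inducer_b_present   # inducer B bridges protein pair 2
        return dimer_1 and dimer_2    # reporter is expressed only if both form

    # Truth table for the engineered cell.
    for a in (False, True):
        for b in (False, True):
            print(f"inducer A={a}  inducer B={b}  reporter on: {cell_reports(a, b)}")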

Network-based biocomputers are engineered by nanofabricating the hardware on wafers, with the channels etched by electron-beam lithography or nano-imprint lithography. The channels are designed with a high-aspect-ratio cross section so that the protein filaments are guided along them. Split and pass junctions are engineered so that filaments propagate through the network and explore all allowed paths. Surface silanization ensures that the motility proteins can be affixed to the channel surfaces and remain functional. The molecules that perform the logic operations are derived from biological tissue.

Economics

All biological organisms have the ability to self-replicate and self-assemble into functional components. The economic benefit of biocomputers lies in this potential of all biologically derived systems to self-replicate and self-assemble given appropriate conditions. For instance, all of the necessary proteins for a certain biochemical pathway, which could be modified to serve as a biocomputer, could be synthesized many times over inside a biological cell from a single DNA molecule. This DNA molecule could then be replicated many times over. This characteristic of biological molecules could make their production highly efficient and relatively inexpensive. Whereas electronic computers must be manufactured and assembled unit by unit, biocomputers could be produced in large quantities from cell cultures without any additional machinery needed to assemble them.

Notable advancements in biocomputer technology

Currently, biocomputers exist with various functional capabilities that include operations of Boolean logic and mathematical calculations. Tom Knight of the MIT Artificial Intelligence Laboratory first suggested a biochemical computing scheme in which protein concentrations are used as binary signals that ultimately serve to perform logical operations. A concentration of a particular biochemical product at or above a certain level in a biocomputer chemical pathway indicates one binary signal (a 1 or a 0), while a concentration below that level indicates the other. Using this method of computational analysis, biochemical computers can perform logical operations in which the appropriate binary output occurs only under specific logical constraints on the initial conditions. In other words, the binary output serves as a logically derived conclusion from a set of initial conditions that act as premises. In addition to these logical operations, biocomputers have also been shown to perform other functions, such as mathematical computations. One such example was provided by W.L. Ditto, who in 1999 created a biocomputer at Georgia Tech composed of leech neurons that was capable of performing simple addition. These are just a few of the notable operations that biocomputers have already been engineered to perform, and their capabilities are becoming increasingly sophisticated. Because of the availability and potential economic efficiency of producing biomolecules and biocomputers, as noted above, the advancement of biocomputer technology is a popular, rapidly growing subject of research that is likely to see much progress in the future.
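
The threshold readout in Knight's scheme can be written down directly. The sketch below is a minimal illustration: the threshold value and the convention that concentrations at or above it mean 1 are assumptions, since the scheme only requires that the two cases be distinguishable. Each protein concentration is converted to a bit, and the bits are then combined with an ordinary logical operation.

    # Minimal sketch of concentration-thresholded binary signals
    # (threshold and 1/0 convention chosen arbitrarily for illustration).
    THRESHOLD = 1.0  # arbitrary concentration units

    def to_bit(concentration: float) -> int:
        """At or above the threshold -> 1, below it -> 0."""
        return 1 if concentration >= THRESHOLD else 0

    def nand(bit_x: int, bit_y: int) -> int:
        """NAND is functionally complete, so any logic can be built from it."""
        return 0 if (bit_x and bit_y) else 1

    # Two measured protein concentrations (made-up values).
    signal_a = to_bit(1.8)  # -> 1
    signal_b = to_bit(0.4)  # -> 0
    print("A NAND B =", nand(signal_a, signal_b))  # prints 1

In a chemical implementation, the NAND itself would be realized by a pathway whose output product drops below the threshold only when both input products are above it.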

In March 2013, a team of bioengineers from Stanford University, led by Drew Endy, announced that they had created the biological equivalent of a transistor, which they dubbed a "transcriptor". The invention was the last of the three components necessary to build a fully functional computer: data storage, information transmission, and a basic system of logic.

Parallel biological computing with networks, where bio-agent movement corresponds to arithmetical addition, was demonstrated in 2016 on a SUBSET SUM instance with 8 candidate solutions.

In July 2017, separate experiments with E. coli published in Nature showed the potential of using living cells for computing tasks and storing information. A team of collaborators from the Biodesign Institute at Arizona State University and Harvard's Wyss Institute for Biologically Inspired Engineering developed a biological computer inside E. coli that responded to a dozen inputs. The team called the computer a "ribocomputer", as it was composed of ribonucleic acid. Separately, Harvard researchers showed that it is possible to store information in bacteria after successfully archiving images and movies in the DNA of living E. coli cells.

In 2021, a team led by biophysicist Sangram Bagh conducted a study with E. coli to solve 2 x 2 maze problems in order to probe the principles of distributed computing among cells.

In 2024, FinalSpark, a Swiss biocomputing startup, launched an online platform enabling global researchers to conduct experiments remotely on biological neurons in vitro.

In March 2025, Cortical Labs unveiled CL1, described as the world's first commercially available biological computer integrating lab-grown human neurons with silicon hardware. Building on earlier work with DishBrain, CL1 uses hundreds of thousands of neurons sustained by an internal life-support system for up to six months, enabling real-time learning and adaptive computation within a closed-loop environment. The system operates via the Biological Intelligence Operating System (biOS), allowing direct code deployment to living neurons. CL1 is aimed at applications in drug discovery, disease modeling, and neuromorphic research, and is presented by its developers as an ethically preferable alternative to animal testing that consumes significantly less energy than traditional artificial intelligence systems.

Future potential of biocomputers

Many examples of simple biocomputers have been designed, but the capabilities of these biocomputers are very limited in comparison to commercially available inorganic computers.

The potential to solve complex mathematical problems using far less energy than standard electronic supercomputers, as well as to perform more reliable calculations simultaneously rather than sequentially, motivates the further development of "scalable" biological computers, and several funding agencies are supporting these efforts.
