
Tuesday, December 16, 2025

Prebiotic atmosphere

From Wikipedia, the free encyclopedia
The pale orange dot, an artist's impression of the early Earth, which is believed to have appeared orange through its hazy, methane-rich prebiotic second atmosphere, somewhat comparable to Titan's atmosphere

The prebiotic atmosphere is the second atmosphere present on Earth before today's biotic, oxygen-rich third atmosphere, and after the first atmosphere (which was mainly water vapor and simple hydrides) of Earth's formation. The formation of the Earth, roughly 4.5 billion years ago, involved multiple collisions and coalescence of planetary embryos. This was followed by a period of over 100 million years during which a magma ocean was present, the atmosphere was mainly steam, and surface temperatures reached up to 8,000 K (14,000 °F). Earth's surface then cooled and the atmosphere stabilized, establishing the prebiotic atmosphere. The environmental conditions during this time period were quite different from today: the Sun was about 30% dimmer overall yet brighter at ultraviolet and x-ray wavelengths; there was a liquid ocean; it is unknown if there were continents, but oceanic islands were likely; Earth's interior chemistry (and thus, volcanic activity) was different; and there was a larger flux of impactors (e.g. comets and asteroids) hitting Earth's surface.

Studies have attempted to constrain the composition and nature of the prebiotic atmosphere by analyzing geochemical data and using theoretical models that incorporate our knowledge of the early Earth environment. These studies indicate that the prebiotic atmosphere likely contained more CO2 than the modern Earth, had N2 within a factor of 2 of modern levels, and had vanishingly low amounts of O2. The atmospheric chemistry is believed to have been "weakly reducing", with reduced gases like CH4, NH3, and H2 present in small quantities. The composition of the prebiotic atmosphere was likely periodically altered by impactors, which may have temporarily caused the atmosphere to become "strongly reduced".

Constraining the composition of the prebiotic atmosphere is key to understanding the origin of life, as it may facilitate or inhibit certain chemical reactions on Earth's surface believed to be important for the formation of the first living organism. Life on Earth originated and began modifying the atmosphere at least 3.5 billion years ago and possibly much earlier, which marks the end of the prebiotic atmosphere.

Environmental context

Establishment of the prebiotic atmosphere

Earth is believed to have formed over 4.5 billion years ago by accreting material from the solar nebula.[2] Earth's Moon formed in a collision, the Moon-forming impact, believed to have occurred 30-50 million years after the Earth formed. In this collision, a Mars-sized object named Theia collided with the primitive Earth and the remnants of the collision formed the Moon. The collision likely supplied enough energy to melt most of Earth's mantle and vaporize roughly 20% of it, heating Earth's surface to as high as 8,000 K (~14,000 °F). Earth's surface in the aftermath of the Moon-forming impact was characterized by high temperatures (~2,500 K), an atmosphere made of rock vapor and steam, and a magma ocean. As the Earth cooled by radiating away the excess energy from the impact, the magma ocean solidified and volatiles were partitioned between the mantle and atmosphere until a stable state was reached. It is estimated that Earth transitioned from the hot, post-impact environment into a potentially habitable environment with crustal recycling, albeit different from modern plate tectonics, roughly 10-20 million years after the Moon-forming impact, around 4.4 billion years ago. The atmosphere present from this point in Earth's history until the origin of life is referred to as the prebiotic atmosphere.

It is unknown when exactly life originated. The oldest direct evidence for life on Earth is around 3.5 billion years old, such as fossil stromatolites from North Pole, Western Australia. Putative evidence of life on Earth from older times (e.g. 3.8 and 4.1 billion years ago) lacks additional context necessary to claim it is truly of biotic origin, so it is still debated. Thus, the prebiotic atmosphere concluded 3.5 billion years ago or earlier, placing it in the early Archean Eon or mid-to-late Hadean Eon.

Environmental factors

Knowledge of the environmental factors at play on early Earth is required to investigate the prebiotic atmosphere. Much of what we know about the prebiotic environment comes from zircons - crystals of zirconium silicate (ZrSiO4). Zircons are useful because they record the physical and chemical processes occurring on the prebiotic Earth during their formation and they are especially durable. Most zircons that are dated to the prebiotic time period are found at the Jack Hills formation of Western Australia, but they also occur elsewhere. Geochemical data from several prebiotic zircons show isotopic evidence for chemical change induced by liquid water, indicating that the prebiotic environment had a liquid ocean and a surface temperature that did not cause it to freeze or boil. It is unknown when exactly the continents emerged above this liquid ocean. This adds uncertainty to the interaction between Earth's prebiotic surface and atmosphere, as the presence of exposed land determines the rate of weathering processes and provides local environments that may be necessary for life to form. However, oceanic islands were likely. Additionally, the oxidation state of Earth's mantle was likely different at early times, which changes the fluxes of chemical species delivered to the atmosphere from volcanic outgassing.

Environmental factors from elsewhere in the Solar System also affected prebiotic Earth. The Sun was ~30% dimmer overall around the time the Earth formed. This means greenhouse gases may have been required at higher levels than today to keep Earth from freezing over. Despite the overall reduction in energy coming from the Sun, the early Sun emitted more radiation in the ultraviolet and x-ray regimes than it currently does. This indicates that different photochemical reactions may have dominated early Earth's atmosphere, which has implications for global atmospheric chemistry and the formation of important compounds that could lead to the origin of life. Finally, there was a significantly higher flux of objects that impacted Earth - such as comets and asteroids - in the early Solar System. These impactors may have been important in the prebiotic atmosphere because they can deliver material to the atmosphere, eject material from the atmosphere, and change the chemical nature of the atmosphere after their arrival.
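The climatic side of the faint young Sun can be illustrated with a simple radiative-balance estimate. The sketch below assumes a modern solar constant of about 1361 W/m², a Bond albedo of 0.3, and no greenhouse warming at all; these are illustrative assumptions, not values taken from the studies discussed here.

```python
# Minimal energy-balance sketch: planetary equilibrium temperature under the
# modern Sun versus a Sun ~30% dimmer. All input values are assumptions
# chosen for illustration.
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S_MODERN = 1361.0      # assumed modern solar constant, W m^-2
ALBEDO = 0.3           # assumed Bond albedo, held fixed

def equilibrium_temperature(solar_constant, albedo=ALBEDO):
    """Equilibrium temperature ignoring any greenhouse effect."""
    return ((solar_constant * (1.0 - albedo)) / (4.0 * SIGMA)) ** 0.25

t_modern = equilibrium_temperature(S_MODERN)         # ~255 K
t_early = equilibrium_temperature(0.7 * S_MODERN)    # ~233 K with a 30% dimmer Sun

print(f"Equilibrium T, modern Sun: {t_modern:.0f} K")
print(f"Equilibrium T, 70% Sun   : {t_early:.0f} K")
```

The roughly 20 K drop, before any greenhouse contribution is considered, is the quantitative core of the argument that higher greenhouse gas levels were needed to keep the early ocean liquid.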

Atmospheric composition

The exact composition of the prebiotic atmosphere is unknown due to the lack of geochemical data from the time period. Current studies generally indicate that the prebiotic atmosphere was "weakly reduced", with elevated levels of CO2, N2 within a factor of 2 of the modern level, negligible amounts of O2, and more hydrogen-bearing gases than the modern Earth (see below). Noble gases and photochemical products of the dominant species were also present in small quantities.

Carbon dioxide

Carbon dioxide (CO2) is an important component of the prebiotic atmosphere because, as a greenhouse gas, it strongly affects the surface temperature; also, it dissolves in water and can change the ocean pH. The abundance of carbon dioxide in the prebiotic atmosphere is not directly constrained by geochemical data and must be inferred.

Evidence suggests that the carbonate-silicate cycle regulates Earth's atmospheric carbon dioxide abundance on timescales of about 1 million years. The carbonate-silicate cycle is a negative feedback loop that modulates Earth's surface temperature by partitioning carbon between the atmosphere and the mantle via several surface processes. It has been proposed that the processes of the carbonate-silicate cycle would result in high CO2 levels in the prebiotic atmosphere to offset the lower energy input from the faint young Sun. This mechanism can be used to estimate the prebiotic CO2 abundance, but it is debated and uncertain. Uncertainty is primarily driven by a lack of knowledge about the area of exposed land, early Earth's interior chemistry and structure, the rate of reverse weathering and seafloor weathering, and the increased impactor flux. One extensive modeling study suggests that CO2 was roughly 20 times higher in the prebiotic atmosphere than the preindustrial modern value (280 ppm), which would result in a global average surface temperature around 259 K (6.5 °F) and an ocean pH around 7.9. This is in agreement with other studies, which generally conclude that the prebiotic atmospheric CO2 abundance was higher than the modern one, although the global surface temperature may still be significantly colder due to the faint young Sun.
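For orientation, the figures in that estimate convert as follows; this is only unit arithmetic on the numbers quoted above, with no additional modeling assumptions.

```python
# Unit conversions of the figures quoted above (no new modeling assumptions).
preindustrial_co2_ppm = 280.0
enhancement = 20.0                       # "roughly 20 times higher"

prebiotic_co2_ppm = enhancement * preindustrial_co2_ppm
print(f"Implied prebiotic CO2 ~ {prebiotic_co2_ppm:.0f} ppm "
      f"(~{prebiotic_co2_ppm / 1e4:.2f}% of the atmosphere by volume)")

t_kelvin = 259.0
t_celsius = t_kelvin - 273.15
t_fahrenheit = t_celsius * 9.0 / 5.0 + 32.0
print(f"Global mean surface temperature: {t_kelvin:.0f} K = "
      f"{t_celsius:.1f} °C = {t_fahrenheit:.1f} °F")
```

Even with roughly twenty times more CO2, the estimated global mean remains below freezing, which is why the result is still described as significantly colder than the modern Earth.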

Nitrogen

Nitrogen in the form of N2 is 78% of Earth's modern atmosphere by volume, making it the most abundant gas. N2 is generally considered a background gas in the Earth's atmosphere because it is relatively unreactive due to the strength of its triple bond. Despite this, atmospheric N2 was at least moderately important to the prebiotic environment because it impacts the climate via Rayleigh scattering and it may have been more photochemically active under the enhanced x-ray and ultraviolet radiation from the young Sun. N2 was also likely important for the synthesis of compounds believed to be critical for the origin of life, such as hydrogen cyanide (HCN) and amino acids derived from HCN. Studies have attempted to constrain the prebiotic atmosphere N2 abundance with theoretical estimates, models, and geologic data. These studies have resulted in a range of possible constraints on the prebiotic N2 abundance. For example, a recent modeling study that incorporates atmospheric escape, magma ocean chemistry, and the evolution of Earth's interior chemistry suggests that the atmospheric N2 abundance was probably less than half of the present day value. However, this study fits into a larger body of work that generally constrains the prebiotic N2 abundance to be between half and double the present level.

Oxygen

Oxygen in the form of O2 makes up 21% of Earth's modern atmosphere by volume. Earth's modern atmospheric O2 is due almost entirely to biology (e.g. it is produced during oxygenic photosynthesis), so it was not nearly as abundant in the prebiotic atmosphere. This is favorable for the origin of life, as O2 would oxidize organic compounds needed in the origin of life. The prebiotic atmosphere O2 abundance can be theoretically calculated with models of atmospheric chemistry. The primary source of O2 in these models is the breakdown and subsequent chemical reactions of other oxygen-containing compounds. Incoming solar photons or lightning can break up CO2 and H2O molecules, freeing oxygen atoms and other radicals (i.e. highly reactive gases in the atmosphere). The free oxygen can then combine into O2 molecules via several chemical pathways. The rate at which O2 is created in this process is determined by the incoming solar flux, the rate of lightning, and the abundances of the other atmospheric gases that take part in the chemical reactions (e.g. CO2, H2O, OH), as well as their vertical distributions. O2 is removed from the atmosphere via photochemical reactions that mainly involve H2 and CO near the surface. The most important of these reactions starts when H2 is split into two H atoms by incoming solar photons. The free H then reacts with O2 and eventually forms H2O, resulting in a net removal of O2 and a net increase in H2O. Models that simulate all of these chemical reactions in a potential prebiotic atmosphere show that an extremely small atmospheric O2 abundance is likely. In one such model that assumed values for CO2 and H2 abundances and sources, the O2 volume mixing ratio is calculated to be between 10⁻¹⁸ and 10⁻¹¹ near the surface and up to 10⁻⁴ in the upper atmosphere.
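The source-sink balance described above can be caricatured with a zeroth-order, single-box estimate: a column-integrated photochemical O2 source is set against an effective first-order loss to reactions with H and CO, and the steady state follows by dividing the two. The production rates, loss frequencies, and one-box treatment below are illustrative assumptions, not values from the models cited in this section.

```python
# Zeroth-order box-model sketch of prebiotic O2: steady state where a
# column-integrated photochemical source balances an effective first-order sink.
# All numerical values are illustrative placeholders.

TOTAL_COLUMN = 2.1e25   # approx. molecules/cm^2 in a ~1 bar atmosphere

def steady_state_o2_mixing_ratio(production, loss_frequency):
    """
    production     : O2 source from CO2/H2O photolysis, molecules cm^-2 s^-1 (assumed)
    loss_frequency : effective first-order O2 loss via H and CO chemistry, s^-1 (assumed)
    """
    o2_column = production / loss_frequency     # steady-state O2 column
    return o2_column / TOTAL_COLUMN             # convert to a mixing ratio

for production, loss in [(1e7, 1e-5), (1e8, 1e-6)]:
    f_o2 = steady_state_o2_mixing_ratio(production, loss)
    print(f"source {production:.0e}, sink {loss:.0e}  ->  f(O2) ~ {f_o2:.1e}")
```

With any plausible combination of a weak photochemical source and strong H2/CO sinks, the steady-state mixing ratio sits many orders of magnitude below the modern 0.21, in line with the 10⁻¹⁸ to 10⁻¹¹ surface values quoted above.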

Hydrogen and reduced gases

The hydrogen abundance in the prebiotic atmosphere can be viewed from the perspective of reduction-oxidation (redox) chemistry. The modern atmosphere is oxidizing, due to the large volume of atmospheric O2. In an oxidizing atmosphere, the majority of atoms that form atmospheric compounds (e.g. C) will be in an oxidized form (e.g. CO2) instead of a reduced form (e.g. CH4). In a reducing atmosphere, more species will be in their reduced, generally hydrogen-bearing forms. Because there was very little O2 in the prebiotic atmosphere, it is generally believed that the prebiotic atmosphere was "weakly reduced" - although some argue that the atmosphere was "strongly reduced". In a weakly reduced atmosphere, reduced gases (e.g. CH4 and NH3) and oxidized gases (e.g. CO2) are both present. The actual H2 abundance in the prebiotic atmosphere has been estimated by balancing the rate at which H2 is volcanically outgassed to the surface against the rate at which it escapes to space. One such recent calculation indicates that the prebiotic atmosphere H2 abundance was around 400 parts per million, but could have been significantly higher if the source from volcanic outgassing was enhanced or atmospheric escape was less efficient than expected. The abundances of other reduced species in the atmosphere can then be calculated with models of atmospheric chemistry.
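The balance calculation described in this paragraph can be sketched by equating a volcanic H2 source to the diffusion-limited escape flux, which is approximately proportional to the total hydrogen mixing ratio. The escape coefficient and outgassing flux below are round, commonly quoted magnitudes used here as assumptions rather than values from the specific study mentioned above.

```python
# Sketch of the H2 budget: volcanic outgassing in, diffusion-limited escape out.
# The coefficient and flux are assumed, order-of-magnitude values.

ESCAPE_COEFF = 2.5e13    # molecules cm^-2 s^-1 per unit H2 mixing ratio (approximate)

def steady_state_h2_mixing_ratio(outgassing_flux):
    """Mixing ratio at which diffusion-limited escape balances outgassing."""
    return outgassing_flux / ESCAPE_COEFF

outgassing = 1.0e10      # assumed volcanic H2 flux, molecules cm^-2 s^-1
f_h2 = steady_state_h2_mixing_ratio(outgassing)
print(f"Steady-state H2 mixing ratio ~ {f_h2:.1e} (~{f_h2 * 1e6:.0f} ppm)")
```

An assumed flux of about 1e10 molecules cm-2 s-1 gives roughly 400 ppm; a larger outgassing flux or less efficient escape raises the answer, which is the sensitivity noted in the text.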

Post-impact atmospheres

It has been proposed that the large flux of impactors in the early solar system may have significantly changed the nature of the prebiotic atmosphere. During the time period of the prebiotic atmosphere, it is expected that a few asteroid impacts large enough to vaporize the oceans and melt Earth's surface could have occurred, with smaller impacts expected in even larger numbers. These impacts would have significantly changed the chemistry of the prebiotic atmosphere by heating it up, ejecting some of it to space, and delivering new chemical material. Studies of post-impact atmospheres indicate that they would have caused the prebiotic atmosphere to be strongly reduced for a period of time after a large impact. On average, impactors in the early solar system contained highly reduced minerals (e.g. metallic iron) and were enriched with reduced compounds that readily enter the atmosphere as a gas. In these strongly reduced post-impact atmospheres, there would be significantly higher abundances of reduced gases like CH4, HCN, and perhaps NH3. Reduced, post-impact atmospheres after the ocean condensed are predicted to last up to tens of millions of years before returning to the background state.

Model studies have refined this by dividing post-impact evolution into three phases: initial H2 production from iron-steam reactions, cooling with CH4 and NH3 formation (catalyzed by nickel surfaces), and long-term photochemical production of nitriles. When the CH4-to-CO2 ratio exceeds 0.1, hazy atmospheres form with HCN and HCCCN rainout rates up to 10⁹ molecules per cm2 per second; smaller CH4-to-CO2 ratios yield negligible HCCCN. Such production of nitriles would continue until the H2 escapes to space, on the order of a few million years. Minimum impactor masses for effective reduction are 4×10²⁰–5×10²¹ kg, depending on iron efficiency and melt equilibration. In addition to this nitrile bombardment hypothesis, other studies find that serpentinization driven by deep mantle processes may have been sufficient on its own to produce HCN, though at a rate an order of magnitude lower than the bombardment mechanism and without HCCCN.

Relationship to the origin of life

The prebiotic atmosphere can supply chemical ingredients and facilitate environmental conditions that contribute to the synthesis of organic compounds involved in the origin of life. For example, compounds potentially involved in the origin of life were synthesized in the Miller-Urey experiment. In this experiment, assumptions must be made about what gases were present in the prebiotic atmosphere. Proposed important ingredients for the origin of life include (but are not limited to) methane (CH4), ammonia (NH3), phosphate (PO4³⁻), hydrogen cyanide (HCN), cyanoacetylene (HCCCN), various organics, and various photochemical byproducts. The atmospheric composition will impact the stability and production of these compounds at Earth's surface. For example, the "weakly reduced" prebiotic atmosphere may produce some, but not all, of these ingredients via reactions with lightning. On the other hand, the production and stability of origin of life ingredients in a strongly reduced atmosphere are greatly enhanced, making post-impact atmospheres particularly relevant. It is also proposed that the conditions required for the origin of life could have emerged locally, in a system that is isolated from the atmosphere (e.g. a hydrothermal vent). Arguments against this hypothesis have emphasized that compounds such as cyanides used to make nucleobases of RNA would be too dilute in the ocean, unlike lakes on land which might readily store them as ferrocyanide salts. This may be overcome by imposing a boundary condition such as shallow water vents that experienced localized evaporative cycles. The vent mechanism might also produce HCCCN, but would require extremely high pressure and temperature for efficient stockpiling. Methods that readily produce HCCCN are important as it is a required constituent in the current best understanding of pyrimidine synthesis.

Once life originated and began interacting with the atmosphere, the prebiotic atmosphere transitioned, by definition, into the biotic atmosphere.

Big Bang nucleosynthesis

From Wikipedia, the free encyclopedia

In physical cosmology, Big Bang nucleosynthesis (also known as primordial nucleosynthesis, and abbreviated as BBN) is a model for the production of the light nuclei 2H, 3He, 4He, and 7Li between 0.01s and 200s in the lifetime of the universe. The model uses a combination of thermodynamic arguments and results from equations for the expansion of the universe to define a changing temperature and density, then analyzes the rates of nuclear reactions at these temperatures and densities to predict the nuclear abundance ratios. Refined models agree very well with observations with the exception of the abundance of 7Li. The model is one of the key concepts in standard cosmology.

Elements heavier than lithium are thought to have been created later in the life of the universe by stellar nucleosynthesis, through the formation, evolution and death of stars.

Characteristics

The Big Bang nucleosynthesis (BBN) model assumes a homogeneous plasma, at a temperature corresponding to 1 MeV, consisting of electrons and positrons annihilating to photons, while the photons in turn pair-produce electrons and positrons: e− + e+ ↔ γ + γ. These particles are in equilibrium. A similar number of neutrinos, also at 1 MeV, have just dropped out of equilibrium at this density. Finally, there is a very low density of baryons (neutrons and protons). The BBN model follows the nuclear reactions of these baryons as the temperature and pressure drop due to the expansion of the universe.

The basic model makes two simplifying assumptions:

  1. until the temperature drops below 0.1 MeV only neutrons and protons are stable and
  2. only isotopes of hydrogen and of helium will be produced at the end.

These assumptions are based on the intense flux of high energy photons in the plasma. Above 0.1 MeV every nucleus created is blasted apart by a photon. Thus the model first determines the ratio of neutrons to protons and uses this as an input to calculate the abundances of the hydrogen and helium isotopes.

The model follows nuclear reaction rates as the temperature and density drop. The evolving density and temperature follow from the Friedmann-Robertson-Walker model. Around 1 MeV, the density of neutrinos drops, and reactions like n + νe ↔ p + e− and n + e+ ↔ p + ν̄e, which maintained neutron and proton equilibrium, slow down. The neutron-to-proton ratio decreases to around 1/7.

As the temperature and density continue to fall, reactions involving combinations of protons and neutrons shift towards heavier nuclei. These include the production of deuterium (p + n → 2H + γ), followed by reactions that build 3H, 3He, and 4He (e.g. 2H + 2H → 3He + n, 2H + 2H → 3H + p, and 3H + 2H → 4He + n). Due to the higher binding energy of 4He, the free neutrons and the deuterium nuclei are largely consumed, leaving mostly protons and helium.

The fusion of nuclei occurred between roughly 10 seconds to 20 minutes after the Big Bang; this corresponds to the temperature range when the universe was cool enough for deuterium to survive, but hot and dense enough for fusion reactions to occur at a significant rate.

The key parameter which allows one to calculate the effects of Big Bang nucleosynthesis is the baryon/photon number ratio, which is a small number of order 6 × 10⁻¹⁰. This parameter corresponds to the baryon density and controls the rate at which nucleons collide and react; from this it is possible to calculate element abundances after nucleosynthesis ends. Although the baryon per photon ratio is important in determining element abundances, the precise value makes little difference to the overall picture. Without major changes to the Big Bang theory itself, BBN will result in mass abundances of about 75% of hydrogen-1, about 25% helium-4, about 0.01% of deuterium and helium-3, trace amounts (on the order of 10⁻¹⁰) of lithium, and negligible heavier elements. That the observed abundances in the universe are generally consistent with these abundance numbers is considered strong evidence for the Big Bang theory.

History

The history of Big Bang nucleosynthesis research began with a proposal in the 1940s by George Gamow that nuclear reactions during a hot initial phase of the universe produced the observed hydrogen and helium. Calculations by his student Ralph Alpher, published in the famous Alpher–Bethe–Gamow paper, outlined a theory of light-element production in the early universe. The first detailed calculations of the primordial isotopic abundances came in 1966 and have been refined over the years using updated estimates of the input nuclear reaction rates. The first systematic Monte Carlo study of how nuclear reaction rate uncertainties impact isotope predictions, over the relevant temperature range, was carried out in 1993.

Important parameters

The creation of light elements during BBN was dependent on a number of parameters; among them were the neutron–proton ratio (calculable from Standard Model physics) and the baryon–photon ratio.

Neutron–proton ratio

The neutron–proton ratio was set by Standard Model physics before the nucleosynthesis era, essentially within the first second after the Big Bang. Neutrons can react with positrons or electron neutrinos to create protons and other products in one of the following reactions: n + e+ ↔ p + ν̄e and n + νe ↔ p + e−.

At times much earlier than 1 sec, these reactions were fast and maintained the n/p ratio close to 1:1. As the temperature dropped, the equilibrium shifted in favour of protons due to their slightly lower mass, and the n/p ratio smoothly decreased. These reactions continued until the decreasing temperature and density caused the reactions to become too slow, which occurred at about T = 0.7 MeV (time around 1 second) and is called the freeze out temperature. At freeze out, the neutron–proton ratio was about 1:6. However, free neutrons are unstable with a mean life of 880 sec; some neutrons decayed in the next few minutes before fusing into any nucleus, so the ratio of total neutrons to protons after nucleosynthesis ends is about 1:7. Almost all neutrons that fused instead of decaying ended up combined into helium-4, due to the fact that helium-4 has the highest binding energy per nucleon among light elements. This predicts that about 8% of all atoms should be helium-4, leading to a mass fraction of helium-4 of about 25%, which is in line with observations. Small traces of deuterium and helium-3 remained as there was insufficient time and density for them to react and form helium-4.
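A back-of-the-envelope version of this argument treats freeze-out as instantaneous at about 0.7 MeV and applies a single Boltzmann factor, followed by free-neutron decay over an assumed delay before the neutrons are locked into nuclei. The decay interval below is an assumption; the precise 1:7 value quoted above requires integrating the full reaction network.

```python
import math

DELTA_M = 1.293    # neutron-proton mass-energy difference, MeV
T_FREEZE = 0.7     # approximate freeze-out temperature, MeV (from the text)
TAU_N = 880.0      # neutron mean lifetime, s (from the text)
T_DECAY = 200.0    # assumed delay before neutrons are bound into nuclei, s

# Equilibrium neutron-to-proton ratio at freeze-out: a Boltzmann factor.
np_freeze = math.exp(-DELTA_M / T_FREEZE)
print(f"n/p at freeze-out ~ {np_freeze:.2f} (about 1:{1 / np_freeze:.0f})")

# Free-neutron decay between freeze-out and nucleosynthesis lowers the ratio.
np_final = np_freeze * math.exp(-T_DECAY / TAU_N)
print(f"n/p after decay   ~ {np_final:.2f} (about 1:{1 / np_final:.0f})")
```

The instantaneous-freeze-out estimate lands in the neighborhood of the 1:6 and 1:7 figures in the text; the residual difference reflects the crudeness of the single-temperature approximation.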

Baryon–photon ratio

The baryon–photon ratio, η, is the key parameter determining the abundances of light elements after nucleosynthesis ends. Baryons and light elements can fuse in the following main reactions: p + n → 2H + γ, 2H + p → 3He + γ, 2H + 2H → 3He + n, 2H + 2H → 3H + p, 3H + 2H → 4He + n, and 3He + 2H → 4He + p, along with some other low-probability reactions leading to 7Li or 7Be. (An important feature is that there are no stable nuclei with mass 5 or 8, which implies that reactions adding one baryon to 4He, or fusing two 4He, do not occur.) Most fusion chains during BBN ultimately terminate in 4He (helium-4), while "incomplete" reaction chains lead to small amounts of left-over 2H or 3He; the amount of these decreases with increasing baryon-photon ratio. That is, the larger the baryon-photon ratio the more reactions there will be and the more efficiently deuterium will be eventually transformed into helium-4. This result makes deuterium a very useful tool in measuring the baryon-to-photon ratio.
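The steep dependence that makes deuterium such a good baryometer is often summarized with an approximate power law, with relic D/H falling roughly as η to the −1.6 power. The normalization and exponent below are approximate literature values used purely for illustration, not exact BBN output.

```python
# Approximate scaling of relic deuterium with the baryon-to-photon ratio eta.
# The fit parameters are approximate values used only for illustration.

ETA_REF = 6.1e-10     # reference eta (roughly the observed value)
DH_REF = 2.6e-5       # approximate D/H at the reference eta
SLOPE = -1.6          # approximate power-law index

def deuterium_to_hydrogen(eta):
    """Approximate primordial D/H as a function of eta."""
    return DH_REF * (eta / ETA_REF) ** SLOPE

for eta in (3e-10, 6e-10, 9e-10):
    print(f"eta = {eta:.0e}  ->  D/H ~ {deuterium_to_hydrogen(eta):.1e}")
```

Because doubling η cuts D/H by roughly a factor of three, a measured primordial D/H pins down the baryon density tightly.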

Sequence

Main nuclear reaction chains for Big Bang nucleosynthesis

Big Bang nucleosynthesis began roughly 20 seconds after the Big Bang, when the universe had cooled sufficiently to allow deuterium nuclei to survive disruption by high-energy photons. (Note that the neutron–proton freeze-out time was earlier.) This time is essentially independent of dark matter content, since the universe was highly radiation dominated until much later, and this dominant component controls the temperature/time relation. At this time there were about six protons for every neutron, but a small fraction of the neutrons decay before fusing in the next few hundred seconds, so at the end of nucleosynthesis there are about seven protons to every neutron, and almost all the neutrons are in helium-4 nuclei.

One feature of BBN is that the physical laws and constants that govern the behavior of matter at these energies are very well understood, and hence BBN lacks some of the speculative uncertainties that characterize earlier periods in the life of the universe. Another feature is that the process of nucleosynthesis is determined by conditions at the start of this phase of the life of the universe, and proceeds independently of what happened before.

As the universe expands, it cools. Free neutrons are less stable than helium nuclei, and the protons and neutrons have a strong tendency to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Before nucleosynthesis began, the temperature was high enough for many photons to have energy greater than the binding energy of deuterium; therefore any deuterium that was formed was immediately destroyed (a situation known as the "deuterium bottleneck"). Hence, the formation of helium-4 was delayed until the universe became cool enough for deuterium to survive (at about T = 0.1 MeV); after which there was a sudden burst of element formation. However, very shortly thereafter, around twenty minutes after the Big Bang, the temperature and density became too low for any significant fusion to occur. At this point, the elemental abundances were nearly fixed, and the only changes were the result of the radioactive decay of the two major unstable products of BBN, tritium and beryllium-7.

Heavy elements

A version of the periodic table indicating the origins – including big bang nucleosynthesis – of the elements. All elements above 103 (lawrencium) are also man-made and are not included.

Big Bang nucleosynthesis produced very few nuclei of elements heavier than lithium due to a bottleneck: the absence of a stable nucleus with 8 or 5 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang.

The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10⁻¹⁵ that of H, making them essentially undetectable and negligible. Indeed, none of these primordial isotopes of the elements from beryllium to oxygen have yet been detected, although those of beryllium and boron may be detectable in the future. So far, the only stable nuclides known experimentally to have been made during Big Bang nucleosynthesis are protium, deuterium, helium-3, helium-4, and lithium-7.

Helium-4

Big Bang nucleosynthesis predicts a primordial abundance of about 25% helium-4 by mass, irrespective of the initial conditions of the universe. As long as the universe was hot enough for protons and neutrons to transform into each other easily, their ratio, determined solely by their relative masses, was about 1 neutron to 7 protons (allowing for some decay of neutrons into protons). Once it was cool enough, the neutrons quickly bound with an equal number of protons to form first deuterium, then helium-4. Helium-4 is very stable and is nearly the end of this chain if it runs for only a short time, since helium neither decays nor combines easily to form heavier nuclei (since there are no stable nuclei with mass numbers of 5 or 8, helium does not combine easily with either protons, or with itself). Once temperatures are lowered, out of every 16 nucleons (2 neutrons and 14 protons), 4 of these (25% of the total particles and total mass) combine quickly into one helium-4 nucleus. This produces one helium for every 12 hydrogens, resulting in a universe that is a little over 8% helium by number of atoms, and 25% helium by mass.
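The nucleon bookkeeping in this paragraph can be written out directly; the snippet below simply repeats the counting argument from the text (1 neutron per 7 protons, with every neutron ending up in helium-4).

```python
# Counting argument from the text: 1 neutron for every 7 protons, and
# essentially every neutron ends up inside a helium-4 nucleus.

neutrons, protons = 2, 14               # a 1:7 ratio, 16 nucleons in total

helium4_nuclei = neutrons // 2          # each 4He takes 2 neutrons + 2 protons
hydrogen_nuclei = protons - 2 * helium4_nuclei

mass_fraction_he4 = 4 * helium4_nuclei / (neutrons + protons)
helium_per_hydrogen = helium4_nuclei / hydrogen_nuclei

print(f"Helium-4 mass fraction : {mass_fraction_he4:.0%}")    # 25%
print(f"Helium-4 per hydrogen  : {helium_per_hydrogen:.1%}")  # ~8.3%, one 4He per 12 H
```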

"One analogy is to think of helium-4 as ash, and the amount of ash that one forms when one completely burns a piece of wood is insensitive to how one burns it." The resort to the BBN theory of the helium-4 abundance is necessary as there is far more helium-4 in the universe than can be explained by stellar nucleosynthesis. In addition, it provides an important test for the Big Bang theory. If the observed helium abundance is significantly different from 25%, then this would pose a serious challenge to the theory. This would particularly be the case if the early helium-4 abundance was much smaller than 25% because it is hard to destroy helium-4. For a few years during the mid-1990s, observations suggested that this might be the case, causing astrophysicists to talk about a Big Bang nucleosynthetic crisis, but further observations were consistent with the Big Bang theory.

Deuterium

Deuterium is in some ways the opposite of helium-4, in that while helium-4 is very stable and difficult to destroy, deuterium is only marginally stable and easy to destroy. The temperatures, time, and densities were sufficient to combine a substantial fraction of the deuterium nuclei to form helium-4 but insufficient to carry the process further using helium-4 in the next fusion step. BBN did not convert all of the deuterium in the universe to helium-4 due to the expansion that cooled the universe and reduced the density, and so cut that conversion short before it could proceed any further. One consequence of this is that, unlike helium-4, the amount of deuterium is very sensitive to initial conditions. The denser the initial universe was, the more deuterium would be converted to helium-4 before time ran out, and the less deuterium would remain.

There are no known post-Big Bang processes which can produce significant amounts of deuterium. Hence observations about deuterium abundance suggest that the universe is not infinitely old, which is in accordance with the Big Bang theory.

During the 1970s, there were major efforts to find processes that could produce deuterium, but those revealed ways of producing isotopes other than deuterium. The problem was that while the concentration of deuterium in the universe is consistent with the Big Bang model as a whole, it is too high to be consistent with a model that presumes that most of the universe is composed of protons and neutrons. If one assumes that all of the universe consists of protons and neutrons, the density of the universe is such that much of the currently observed deuterium would have been burned into helium-4. The standard explanation now used for the abundance of deuterium is that the universe does not consist mostly of baryons, but that non-baryonic matter (also known as dark matter) makes up most of the mass of the universe. This explanation is also consistent with calculations that show that a universe made mostly of protons and neutrons would be far more clumpy than is observed.

It is very hard to come up with another process that would produce deuterium other than by nuclear fusion. Such a process would require that the temperature be hot enough to produce deuterium, but not hot enough to produce helium-4, and that this process should immediately cool to non-nuclear temperatures after no more than a few minutes. It would also be necessary for the deuterium to be swept away before it reoccurs.

Producing deuterium by fission is also difficult. The problem here again is that deuterium is very unlikely due to nuclear processes, and that collisions between atomic nuclei are likely to result either in the fusion of the nuclei, or in the release of free neutrons or alpha particles. During the 1970s, cosmic ray spallation was proposed as a source of deuterium. That theory failed to account for the abundance of deuterium, but led to explanations of the source of other light elements.

Lithium

The amounts of lithium-7 and lithium-6 produced in the Big Bang are small: lithium-7 makes up on the order of 10⁻⁹ of all primordial nuclides, and lithium-6 around 10⁻¹³.

Measurements and status of theory

The theory of BBN gives a detailed mathematical description of the production of the light "elements" deuterium, helium-3, helium-4, and lithium-7. Specifically, the theory yields precise quantitative predictions for the mixture of these elements, that is, the primordial abundances at the end of the big-bang.

In order to test these predictions, it is necessary to reconstruct the primordial abundances as faithfully as possible, for instance by observing astronomical objects in which very little stellar nucleosynthesis has taken place (such as certain dwarf galaxies) or by observing objects that are very far away, and thus can be seen in a very early stage of their evolution (such as distant quasars).

As noted above, in the standard picture of BBN, all of the light element abundances depend on the amount of ordinary matter (baryons) relative to radiation (photons). Since the universe is presumed to be homogeneous, it has one unique value of the baryon-to-photon ratio. For a long time, this meant that to test BBN theory against observations one had to ask: can all of the light element observations be explained with a single value of the baryon-to-photon ratio? Or more precisely, allowing for the finite precision of both the predictions and the observations, one asks: is there some range of baryon-to-photon values which can account for all of the observations?

More recently, the question has changed: precision observations of the cosmic microwave background radiation with the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck give an independent value for the baryon-to-photon ratio. The present measurement of helium-4 indicates good agreement, and yet better agreement for helium-3. But for lithium-7, there is a significant discrepancy between BBN and WMAP/Planck and the abundance derived from Population II stars: the observed abundance is a factor of 2.4–4.3 below the theoretically predicted value. This discrepancy, called the "cosmological lithium problem", has resulted in revised calculations of standard BBN based on new nuclear data, and in various re-evaluation proposals for the relevant primordial nuclear reactions, especially the rates of 7Be + n → 7Li + p versus 7Be + 2H → 8Be + p.

Non-standard scenarios

In addition to the standard BBN scenario there are numerous non-standard BBN scenarios. These should not be confused with non-standard cosmology: a non-standard BBN scenario assumes that the Big Bang occurred, but inserts additional physics in order to see how this affects elemental abundances. These pieces of additional physics include relaxing or removing the assumption of homogeneity, or inserting new particles such as massive neutrinos.

There have been, and continue to be, various reasons for researching non-standard BBN. The first, which is largely of historical interest, is to resolve inconsistencies between BBN predictions and observations. This has proved to be of limited usefulness in that the inconsistencies were resolved by better observations, and in most cases trying to change BBN resulted in abundances that were more inconsistent with observations rather than less. The second reason for researching non-standard BBN, and largely the focus of non-standard BBN in the early 21st century, is to use BBN to place limits on unknown or speculative physics. For example, standard BBN assumes that no exotic hypothetical particles were involved in BBN. One can insert a hypothetical particle (such as a massive neutrino) and see what has to happen before BBN predicts abundances that are very different from observations. This has been done to put limits on the mass of a stable tau neutrino.

Biological computing

From Wikipedia, the free encyclopedia

Biological computers use biologically derived molecules — such as DNA and/or proteins — to perform digital or real computations.

The development of biocomputers has been made possible by the expanding new science of nanobiotechnology. The term nanobiotechnology can be defined in multiple ways; in a more general sense, nanobiotechnology can be defined as any type of technology that uses both nano-scale materials (i.e. materials having characteristic dimensions of 1-100 nanometers) and biologically based materials. A more restrictive definition views nanobiotechnology more specifically as the design and engineering of proteins that can then be assembled into larger, functional structures. The implementation of nanobiotechnology, as defined in this narrower sense, provides scientists with the ability to engineer biomolecular systems specifically so that they interact in a fashion that can ultimately result in the computational functionality of a computer.

Scientific background

Biocomputers use biologically derived materials to perform computational functions. A biocomputer consists of a pathway or series of metabolic pathways involving biological materials that are engineered to behave in a certain manner based upon the conditions (input) of the system. The resulting pathway of reactions that takes place constitutes an output, which is based on the engineering design of the biocomputer and can be interpreted as a form of computational analysis. Three distinguishable types of biocomputers include biochemical computers, biomechanical computers, and bioelectronic computers.

Biochemical computers

Biochemical computers use the immense variety of feedback loops that are characteristic of biological chemical reactions in order to achieve computational functionality. Feedback loops in biological systems take many forms, and many different factors can provide both positive and negative feedback to a particular biochemical process, causing either an increase in chemical output or a decrease in chemical output, respectively. Such factors may include the quantity of catalytic enzymes present, the amount of reactants present, the amount of products present, and the presence of molecules that bind to and thus alter the chemical reactivity of any of the aforementioned factors. Given the nature of these biochemical systems to be regulated through many different mechanisms, one can engineer a chemical pathway comprising a set of molecular components that react to produce one particular product under one set of specific chemical conditions and another particular product under another set of conditions. The presence of the particular product that results from the pathway can serve as a signal, which can be interpreted—along with other chemical signals—as a computational output based upon the starting chemical conditions of the system (the input).

Biomechanical computers

Biomechanical computers are similar to biochemical computers in that they both perform a specific operation that can be interpreted as a functional computation based upon specific initial conditions which serve as input. They differ, however, in what exactly serves as the output signal. In biochemical computers, the presence or concentration of certain chemicals serves as the output signal. In biomechanical computers, however, the mechanical shape of a specific molecule or set of molecules under a set of initial conditions serves as the output. Biomechanical computers rely on the nature of specific molecules to adopt certain physical configurations under certain chemical conditions. The mechanical, three-dimensional structure of the product of the biomechanical computer is detected and interpreted appropriately as a calculated output.

Bioelectronic computers

Biocomputers can also be constructed in order to perform electronic computing. Again, like both biomechanical and biochemical computers, computations are performed by interpreting a specific output that is based upon an initial set of conditions that serve as input. In bioelectronic computers, the measured output is the nature of the electrical conductivity that is observed in the bioelectronic computer. This output comprises specifically designed biomolecules that conduct electricity in highly specific manners based upon the initial conditions that serve as the input of the bioelectronic system.

Network-based biocomputers

In network-based biocomputation, self-propelled biological agents, such as molecular motor proteins or bacteria, explore a microscopic network that encodes a mathematical problem of interest. The paths of the agents through the network and/or their final positions represent potential solutions to the problem. For instance, in the system described by Nicolau et al., mobile molecular motor filaments are detected at the "exits" of a network encoding the NP-complete problem SUBSET SUM. All exits visited by filaments represent correct solutions to the problem. Exits not visited are non-solutions. The motility proteins are either actin and myosin or kinesin and microtubules. The myosin and kinesin, respectively, are attached to the bottom of the network channels. When adenosine triphosphate (ATP) is added, the actin filaments or microtubules are propelled through the channels, thus exploring the network. The energy conversion from chemical energy (ATP) to mechanical energy (motility) is highly efficient when compared with e.g. electronic computing, so the computer, in addition to being massively parallel, also uses orders of magnitude less energy per computational step.
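The combinatorics that the filaments explore in parallel can be reproduced sequentially in a few lines. The three-element instance below is a hypothetical example chosen for its 2³ = 8 candidate subsets, matching the scale of the 2016 demonstration mentioned later; it is not necessarily the instance used by Nicolau et al.

```python
from itertools import combinations

def subset_sum_exits(values):
    """
    Sequentially emulate what the network computes in parallel: every path
    (i.e. every subset of `values`) leads to an "exit" labeled by its sum.
    Exits reached by at least one path are the achievable sums.
    """
    exits = {}
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            exits.setdefault(sum(subset), []).append(subset)
    return exits

values = [2, 5, 9]                     # hypothetical problem instance
for total, paths in sorted(subset_sum_exits(values).items()):
    print(f"exit {total:2d}  <-  paths {paths}")
```

In the physical device, the sums printed here correspond to the network exits where filaments are actually detected; unvisited exits are the unreachable sums.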

Engineering biocomputers

A ribosome is a biological machine that uses protein dynamics on nanoscales to translate RNA into proteins.

The behavior of biologically derived computational systems such as these relies on the particular molecules that make up the system, which are primarily proteins but may also include DNA molecules. Nanobiotechnology provides the means to synthesize the multiple chemical components necessary to create such a system. The chemical nature of a protein is dictated by its sequence of amino acids—the chemical building blocks of proteins. This sequence is in turn dictated by a specific sequence of DNA nucleotides—the building blocks of DNA molecules. Proteins are manufactured in biological systems through the translation of nucleotide sequences by biological molecules called ribosomes, which assemble individual amino acids into polypeptides that form functional proteins based on the nucleotide sequence that the ribosome interprets. What this ultimately means is that one can engineer the chemical components necessary to create a biological system capable of performing computations by engineering DNA nucleotide sequences to encode for the necessary protein components. Also, the synthetically designed DNA molecules themselves may function in a particular biocomputer system. Thus, implementing nanobiotechnology to design and produce synthetically designed proteins—as well as the design and synthesis of artificial DNA molecules—can allow the construction of functional biocomputers (e.g. Computational Genes).

Biocomputers can also be designed with cells as their basic components. Chemically induced dimerization systems can be used to make logic gates from individual cells. These logic gates are activated by chemical agents that induce interactions between previously non-interacting proteins and trigger some observable change in the cell.

Network-based biocomputers are engineered by nanofabrication of the hardware from wafers where the channels are etched by electron-beam lithography or nano-imprint lithography. The channels are designed to have a high aspect ratio of cross section so the protein filaments will be guided. Also, split and pass junctions are engineered so filaments will propagate in the network and explore the allowed paths. Surface silanization ensures that the motility proteins can be affixed to the surface and remain functional. The molecules that perform the logic operations are derived from biological tissue.

Economics

All biological organisms have the ability to self-replicate and self-assemble into functional components. The economical benefit of biocomputers lies in this potential of all biologically derived systems to self-replicate and self-assemble given appropriate conditions. For instance, all of the necessary proteins for a certain biochemical pathway, which could be modified to serve as a biocomputer, could be synthesized many times over inside a biological cell from a single DNA molecule. This DNA molecule could then be replicated many times over. This characteristic of biological molecules could make their production highly efficient and relatively inexpensive. Whereas electronic computers require manual production, biocomputers could be produced in large quantities from cultures without any additional machinery needed to assemble them.

Notable advancements in biocomputer technology

Currently, biocomputers exist with various functional capabilities that include operations of Boolean logic and mathematical calculations. Tom Knight of the MIT Artificial Intelligence Laboratory first suggested a biochemical computing scheme in which protein concentrations are used as binary signals that ultimately serve to perform logical operations. A concentration of a particular biochemical product at or above a certain level in a biocomputer chemical pathway indicates a signal of either 1 or 0; a concentration below this level indicates the other, remaining signal. Using this method of computational analysis, biochemical computers can perform logical operations in which the appropriate binary output will occur only under specific logical constraints on the initial conditions. In other words, the appropriate binary output serves as a logically derived conclusion from a set of initial conditions that serve as premises from which the logical conclusion can be made. In addition to these types of logical operations, biocomputers have also been shown to demonstrate other functional capabilities, such as mathematical computations. One such example was provided by W.L. Ditto, who in 1999 created a biocomputer composed of leech neurons at Georgia Tech that was capable of performing simple addition. These are just a few of the notable uses that biocomputers have already been engineered to perform, and the capabilities of biocomputers are becoming increasingly sophisticated. Because of the availability and potential economic efficiency associated with producing biomolecules and biocomputers—as noted above—the advancement of the technology of biocomputers is a popular, rapidly growing subject of research that is likely to see much progress in the future.
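The threshold scheme attributed to Knight can be sketched abstractly: a concentration at or above a threshold reads as 1, anything below reads as 0, and engineered pathways combine such signals like logic gates. The threshold value and the AND gate below are hypothetical illustrations of the idea, not Knight's actual design.

```python
# Abstract sketch of concentration-threshold logic. The threshold and the
# AND-gate wiring are hypothetical illustrations of the scheme described above.

THRESHOLD = 1.0   # arbitrary concentration units

def to_bit(concentration, threshold=THRESHOLD):
    """Read a concentration as a binary signal."""
    return 1 if concentration >= threshold else 0

def and_gate(conc_a, conc_b):
    """A pathway engineered so its product accumulates only when both inputs are high."""
    return to_bit(conc_a) & to_bit(conc_b)

for a, b in [(0.2, 0.3), (1.5, 0.3), (1.5, 2.0)]:
    print(f"[A] = {a}, [B] = {b}  ->  output bit {and_gate(a, b)}")
```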

In March 2013, a team of bioengineers from Stanford University, led by Drew Endy, announced that they had created the biological equivalent of a transistor, which they dubbed a "transcriptor". The invention was the last of the three components necessary to build a fully functional computer: data storage, information transmission, and a basic system of logic.

Parallel biological computing with networks, where bio-agent movement corresponds to arithmetical addition, was demonstrated in 2016 on a SUBSET SUM instance with 8 candidate solutions.

In July 2017, separate experiments with E. coli published in Nature showed the potential of using living cells for computing tasks and storing information. A team formed with collaborators of the Biodesign Institute at Arizona State University and Harvard's Wyss Institute for Biologically Inspired Engineering developed a biological computer inside E. coli that responded to a dozen inputs. The team called the computer a "ribocomputer", as it was composed of ribonucleic acid. Harvard researchers proved that it is possible to store information in bacteria after successfully archiving images and movies in the DNA of living E. coli cells.

In 2021, a team led by biophysicist Sangram Bagh carried out a study with E. coli to solve 2 x 2 maze problems, probing the principles of distributed computing among cells.

In 2024, FinalSpark, a Swiss biocomputing startup, launched an online platform enabling global researchers to conduct experiments remotely on biological neurons in vitro.

In March 2025, Cortical Labs unveiled CL1, the world's first commercially available biological computer integrating lab-grown human neurons with silicon hardware. Building on earlier work with DishBrain, CL1 uses hundreds of thousands of neurons sustained by an internal life-support system for up to six months, enabling real-time learning and adaptive computation within a closed-loop environment. The system operates via the Biological Intelligence Operating System (biOS), allowing direct code deployment to living neurons. CL1 is designed for applications in drug discovery, disease modeling, and neuromorphic research, offering an ethically preferable alternative to animal testing and consuming significantly less energy than traditional artificial intelligence systems.

Future potential of biocomputers

Many examples of simple biocomputers have been designed, but the capabilities of these biocomputers are very limited in comparison to commercially available inorganic computers.

The potential to solve complex mathematical problems using far less energy than standard electronic supercomputers, as well as to perform more reliable calculations simultaneously rather than sequentially, motivates the further development of "scalable" biological computers, and several funding agencies are supporting these efforts.

Helium-3

From Wikipedia, the free encyclopedia
Helium-3
General
Symbol: 3He
Names: helium-3, tralphium (obsolete)
Protons (Z): 2
Neutrons (N): 1
Nuclide data
Natural abundance: 0.000137% (atmosphere); 0.01% (Solar System)
Half-life (t1/2): stable
Isotope mass: 3.016029322 Da
Spin: 1/2 ħ
Parent isotopes: 3H (beta decay of tritium)

Helium-3 (3He; see also helion) is a light, stable isotope of helium with two protons and one neutron. (In contrast, the most common isotope, helium-4, has two protons and two neutrons.) Helium-3 and hydrogen-1 are the only stable nuclides with more protons than neutrons. It was discovered in 1939. Helium-3 atoms are fermionic and become a superfluid at a temperature of 2.491 mK.

Helium-3 occurs as a primordial nuclide, escaping from Earth's crust into its atmosphere and into outer space over millions of years. It is also thought to be a natural nucleogenic and cosmogenic nuclide, one produced when lithium is bombarded by natural neutrons, which can be released by spontaneous fission and by nuclear reactions with cosmic rays. Some found in the terrestrial atmosphere is a remnant of atmospheric and underwater nuclear weapons testing.

Nuclear fusion using helium-3 has long been viewed as a desirable future energy source. The fusion of two of its nuclei would be aneutronic: it would not release the dangerous neutron radiation of traditional fusion, although it would require considerably higher temperatures. The process may unavoidably create other reactions that themselves would cause the surrounding material to become radioactive.

Helium-3 is thought to be more abundant on the Moon than on Earth, having been deposited in the upper layer of regolith by the solar wind over billions of years, though still lower in abundance than in the Solar System's gas giants.

History

The existence of helium-3 was first proposed in 1934 by the Australian nuclear physicist Mark Oliphant while he was working at the University of Cambridge Cavendish Laboratory. Oliphant had performed experiments in which fast deuterons collided with deuteron targets (incidentally, the first demonstration of nuclear fusion). Isolation of helium-3 was first accomplished by Luis Alvarez and Robert Cornog in 1939. Helium-3 was thought to be a radioactive isotope until it was also found in samples of natural helium, which is mostly helium-4, taken both from the terrestrial atmosphere and from natural gas wells.

Physical properties

Due to its low atomic mass of 3.016 Da, helium-3 has some physical properties different from those of helium-4, which has a mass of 4.0026 Da. On account of the weak, induced dipole–dipole interaction between the helium atoms, their microscopic physical properties are mainly determined by their zero-point energy. Also, helium-3's lower mass gives it a higher zero-point energy than helium-4. This implies that helium-3 can overcome dipole–dipole interactions with less thermal energy than helium-4 can.

The quantum mechanical effects on helium-3 and helium-4 are significantly different because with two protons, two neutrons, and two electrons, helium-4 has an overall spin of zero, making it a boson, but with one fewer neutron, helium-3 has an overall spin of one half, making it a fermion.

Pure helium-3 gas boils at 3.19 K compared with helium-4 at 4.23 K, and its critical point is also lower at 3.35 K, compared with helium-4 at 5.2 K. Helium-3 has less than half the density of helium-4 when it is at its boiling point: 59 g/L compared to 125 g/L of helium-4 at a pressure of one atmosphere. Its latent heat of vaporization is also considerably lower at 0.026 kJ/mol compared with the 0.0829 kJ/mol of helium-4.

Superfluidity

Phase diagram for helium-3 ("bcc" indicates a body-centered cubic crystal lattice.)

An important property of helium-3 atoms, which distinguishes them from the more common helium-4, is that they contain an odd number of spin-1/2 particles, and therefore are composite fermions. This is a direct result of the addition rules for quantized angular momentum. In contrast, helium-4 atoms are bosons, containing an even number of spin-1/2 particles. At low temperatures (about 2.17 K), helium-4 undergoes a phase transition: A fraction of it enters a superfluid phase that can be roughly understood as a type of Bose–Einstein condensate. Such a mechanism is not available for fermionic helium-3 atoms. Many speculated that helium-3 could also become a superfluid at much lower temperatures, if the atoms formed into pairs analogous to Cooper pairs in the BCS theory of superconductivity. Each Cooper pair, having integer spin, can be thought of as a boson. During the 1970s, David Lee, Douglas Osheroff and Robert Coleman Richardson discovered two phase transitions along the melting curve, which were soon realized to be the two superfluid phases of helium-3. The transition to a superfluid occurs at 2.491 millikelvins on the melting curve. They were awarded the 1996 Nobel Prize in Physics for their discovery. Alexei Abrikosov, Vitaly Ginzburg, and Tony Leggett won the 2003 Nobel Prize in Physics for their work on refining understanding of the superfluid phase of helium-3.

In a zero magnetic field, there are two distinct superfluid phases of 3He, the A-phase and the B-phase. The B-phase is the low-temperature, low-pressure phase which has an isotropic energy gap. The A-phase is the higher temperature, higher pressure phase that is further stabilized by a magnetic field and has two point nodes in its gap. The presence of two phases is a clear indication that 3He is an unconventional superfluid (superconductor), since the presence of two phases requires an additional symmetry, other than gauge symmetry, to be broken. In fact, it is a p-wave superfluid, with spin one, S = 1 ħ, and angular momentum one, L = 1 ħ. The ground state corresponds to total angular momentum zero, J = S + L = 0 (vector addition). Excited states are possible with non-zero total angular momentum, J > 0, which are excited pair collective modes. These collective modes have been studied with much greater precision than in any other unconventional pairing system, because of the extreme purity of superfluid 3He. This purity is due to all 4He phase separating entirely and all other materials solidifying and sinking to the bottom of the liquid, making the A- and B-phases of 3He the most pure condensed matter state possible.

Natural abundance

Terrestrial abundance

3He is a primordial substance in the Earth's mantle, thought to have been trapped during the planet's initial formation. The ratio of 3He to 4He within the Earth's crust and mantle is less than that in the solar disk (as estimated using meteorite and lunar samples), with terrestrial materials generally containing lower 3He/4He ratios due to production of 4He from radioactive decay.

3He has a cosmological ratio of 300 atoms per million atoms of 4He, leading to the assumption that the original ratio of these primordial gases in the mantle was around 200–300 ppm when Earth was formed. Over the course of Earth's history, a significant amount of 4He has been generated by the alpha decay of uranium, thorium and other radioactive isotopes, to the point that only around 7% of the helium now in the mantle is primordial helium, thus lowering the total 3He:4He ratio to around 20 ppm. Ratios of 3He:4He in excess of the atmospheric ratio are indicative of a contribution of 3He from the mantle. Crustal sources are dominated by the 4He produced by radioactive decay.
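The "around 20 ppm" figure follows directly from that dilution argument. A minimal sketch of the arithmetic, assuming an initial ratio of 250 ppm (the middle of the 200–300 ppm range quoted above) and that 7% of today's mantle helium is primordial:

```python
# Sketch: dilution of the primordial 3He/4He ratio by radiogenic 4He.
# Assumed illustrative values taken from the text above.
primordial_ratio = 250e-6   # 3He/4He when Earth formed (middle of 200-300 ppm)
primordial_fraction = 0.07  # fraction of present mantle helium that is primordial

# Radiogenic 4He from alpha decay carries essentially no 3He, so the 3He
# inventory is fixed while the 4He inventory grows by 1 / primordial_fraction.
present_ratio = primordial_ratio * primordial_fraction
print(f"present-day mantle 3He/4He ~ {present_ratio * 1e6:.0f} ppm")  # ~18 ppm, i.e. "around 20 ppm"
```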

The ratio of helium-3 to helium-4 in natural Earth-bound sources varies greatly. Samples of the lithium ore spodumene from Edison Mine, South Dakota were found to contain 12 parts of helium-3 to a million parts of helium-4. Samples from other mines showed 2 parts per million.

Helium itself is present as up to 7% of some natural gas sources, and large sources have over 0.5% (above 0.2% makes extraction viable). The fraction of 3He in helium separated from natural gas in the U.S. was found to range from 70 to 242 parts per billion. Hence the US 2002 stockpile of 1 billion normal m3 would have contained about 12 to 43 kilograms (26 to 95 lb) of helium-3. According to American physicist Richard Garwin, about 26 cubic metres (920 cu ft), or almost 5 kilograms (11 lb), of 3He is available annually for separation from the US natural gas stream. If the process of separating out the 3He could employ as feedstock the liquefied helium typically used to transport and store bulk quantities, estimates for the incremental energy cost range from $34 to $300 per litre NTP, excluding the cost of infrastructure and equipment. Algeria's annual gas production is assumed to contain 100 million normal cubic metres of helium, and this would contain between 7 and 24 m3 of helium-3 (about 1 to 4 kg) assuming a similar 3He fraction.
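The 12–43 kg stockpile figure is simple proportion. A minimal sketch, assuming the quoted parts-per-billion values behave as mass fractions and a helium gas density of roughly 0.179 kg per normal cubic metre (both assumptions for illustration):

```python
# Sketch: helium-3 mass in the US 2002 helium stockpile, under the
# assumptions stated above (ppb treated as mass fraction, density assumed).
he_density = 0.1786           # kg per normal cubic metre of helium gas (assumed)
stockpile_volume = 1.0e9      # normal m3, US 2002 stockpile from the text
total_he_mass = he_density * stockpile_volume   # kg of helium

for ppb in (70, 242):         # 3He content range found in US natural-gas helium
    print(f"{ppb} ppb -> {total_he_mass * ppb * 1e-9:.0f} kg of 3He")
# prints roughly 12 and 43 kg, matching the range quoted above
```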

3He is also present in the Earth's atmosphere. The natural abundance of 3He in atmospheric helium is 1.37×10⁻⁶ (1.37 parts per million). The partial pressure of helium in the Earth's atmosphere is about 0.52 Pa, and thus helium accounts for 5.2 parts per million of the total pressure (101,325 Pa) in the Earth's atmosphere, and 3He thus accounts for 7.2 parts per trillion of the atmosphere. Since the atmosphere of the Earth has a mass of about 5.14×10¹⁸ kg, the mass of 3He in the Earth's atmosphere is the product of these numbers and the molecular weight ratio of helium-3 to air (3.016 to 28.95), giving a mass of 3,815 tonnes of helium-3 in the Earth's atmosphere.
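That chain of ratios can be checked directly; a minimal sketch of the same arithmetic, using only the figures quoted in the paragraph above:

```python
# Sketch: estimating the mass of 3He in the atmosphere from the quoted ratios.
p_total = 101325.0       # Pa, total atmospheric pressure
p_he = 0.52              # Pa, partial pressure of helium
x_he3 = 1.37e-6          # 3He fraction of atmospheric helium (by mole)
m_atm = 5.14e18          # kg, mass of the atmosphere

mole_fraction = (p_he / p_total) * x_he3          # ~7e-12 of all air molecules
mass_fraction = mole_fraction * (3.016 / 28.95)   # convert using molecular weights
print(f"3He in the atmosphere ~ {m_atm * mass_fraction / 1000:.0f} tonnes")
# ~3,800 tonnes, consistent with the figure above to within rounding
```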

3He is produced on Earth from three sources: lithium spallation, cosmic rays, and beta decay of tritium (3H). The contribution from cosmic rays is negligible within all except the oldest regolith materials, and lithium spallation reactions are a lesser contributor than the production of 4He by alpha particle emissions.

The total amount of helium-3 in the mantle may be in the range of 0.1–1 megatonnes. Some helium-3 finds its way up through deep-sourced hotspot volcanoes such as those of the Hawaiian Islands, but only 300 g per year is emitted to the atmosphere. Mid-ocean ridges emit another 3 kg per year. Around subduction zones, various sources produce helium-3 in natural gas deposits which possibly contain a thousand tonnes of helium-3 (although there may be 25 thousand tonnes if all ancient subduction zones have such deposits). Wittenberg estimated that United States crustal natural gas sources may have only half a tonne total. Wittenberg also cited Anderson's estimate of another 1,200 tonnes in interplanetary dust particles on the ocean floors. According to the 1994 study, extracting helium-3 from these sources would consume more energy than its fusion would release.

Moon

Materials on the Moon's surface contain helium-3 at concentrations between 1.4 and 15 ppb in sunlit areas, and may contain concentrations as much as 50 ppb in permanently shadowed regions. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith and use the helium-3 for fusion. Because of the low concentrations of helium-3, any mining equipment would need to process extremely large amounts of regolith (over 150 tonnes of regolith to obtain one gram of helium-3).

The primary objective of the Indian Space Research Organisation's first lunar probe, Chandrayaan-1, launched on October 22, 2008, was reported in some sources to be mapping the Moon's surface for helium-3-containing minerals. No such objective is mentioned in the project's official list of goals, though many of its scientific payloads have applications related to helium-3.

Cosmochemist and geochemist Ouyang Ziyuan from the Chinese Academy of Sciences who is now in charge of the Chinese Lunar Exploration Program has already stated on many occasions that one of the main goals of the program would be the mining of helium-3, from which operation "each year, three space shuttle missions could bring enough fuel for all human beings across the world".

In January 2006, the Russian space company RKK Energiya announced that it considers lunar helium-3 a potential economic resource to be mined by 2020, if funding can be found.

Not all writers feel the extraction of lunar helium-3 is feasible, or even that there will be a demand for it for fusion. Dwayne Day, writing in The Space Review in 2015, characterises helium-3 extraction from the Moon for use in fusion as magical thinking about an unproven technology, and questions the feasibility of lunar extraction, as compared to production on Earth.

Gas giants

Mining gas giants for helium-3 has also been proposed. The British Interplanetary Society's hypothetical Project Daedalus interstellar probe design, for example, was to be fueled by helium-3 mined from the atmosphere of Jupiter.

Solar nebula (primordial) abundance

One early estimate of the primordial ratio of 3He to 4He in the solar nebula comes from the measurement of their ratio in the atmosphere of Jupiter by the mass spectrometer of the Galileo atmospheric entry probe. This ratio is about 1:10,000, or 100 parts of 3He per million parts of 4He. This is roughly the same ratio of the isotopes as in lunar regolith, which contains 28 ppm helium-4 and 2.8 ppb helium-3 (which is at the lower end of actual sample measurements, which vary from about 1.4 to 15 ppb). Terrestrial ratios of the isotopes are lower by a factor of 100, mainly due to enrichment of helium-4 stocks in the mantle by billions of years of alpha decay from uranium, thorium, their decay products, and extinct radionuclides.

Human production

Tritium decay

Virtually all helium-3 used in industry today is produced from the radioactive decay of tritium, given its very low natural abundance and its very high cost.

Production, sales and distribution of helium-3 in the United States are managed by the US Department of Energy (DOE) Isotope Program.

While tritium has several different experimentally determined values of its half-life, NIST lists 4500±8 d (12.32±0.02 years). It decays into helium-3 by beta decay as in this nuclear equation:

3H → 3He+ + e− + ν̄e
Among the total released energy of 18.6 keV, the part taken by the electron's kinetic energy varies, with an average of 5.7 keV, while the remaining energy is carried off by the nearly undetectable electron antineutrino. Beta particles from tritium can penetrate only about 6.0 millimetres (0.24 in) of air, and they are incapable of passing through the dead outermost layer of human skin. The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) appropriate for absolute neutrino mass measurements in the laboratory (the most recent experiment being KATRIN).

The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting.

Tritium is a radioactive isotope of hydrogen and is typically produced by bombarding lithium-6 with neutrons in a nuclear reactor. The lithium nucleus absorbs a neutron and splits into helium-4 and tritium. Tritium decays into helium-3 with a half-life of 12.3 years, so helium-3 can be produced by simply storing the tritium until it undergoes radioactive decay. Because tritium forms a stable compound with oxygen (tritiated water) while helium-3 does not, the storage and collection process can continuously collect the helium-3 that outgasses from the stored tritium.
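A minimal sketch of how quickly helium-3 accumulates in such a store, assuming simple exponential decay with the 12.3-year half-life (tritium and helium-3 have nearly identical atomic masses, so grams of decayed tritium translate almost one-to-one into grams of helium-3):

```python
import math

# Sketch: helium-3 accumulating in a sealed tritium store.
half_life = 12.3                        # years, tritium half-life from the text
decay_const = math.log(2) / half_life   # per year

for years in (1, 5, 12.3, 25):
    decayed_fraction = 1 - math.exp(-decay_const * years)
    # 3H and 3He atomic masses are both ~3.016 Da, so mass converts ~1:1
    print(f"after {years:5.1f} y: {decayed_fraction * 1000:.0f} g of 3He per kg of stored tritium")
```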

Tritium is a critical component of nuclear weapons and historically it was produced and stockpiled primarily for this application. The decay of tritium into helium-3 reduces the explosive power of the fusion warhead, so periodically the accumulated helium-3 must be removed from warhead reservoirs and tritium in storage. Helium-3 removed during this process is marketed for other applications.

For decades this has been, and remains, the principal source of the world's helium-3. Since the signing of the START I Treaty in 1991 the number of nuclear warheads that are kept ready for use has decreased. This has reduced the quantity of helium-3 available from this source. Helium-3 stockpiles have been further diminished by increased demand, primarily for use in neutron radiation detectors and medical diagnostic procedures. US industrial demand for helium-3 reached a peak of 70,000 litres (approximately 8 kg) per year in 2008. The price at auction, historically about $100 per litre, reached as high as $2,000 per litre. Since then, demand for helium-3 has declined to about 6,000 litres per year due to the high cost and efforts by the DOE to recycle it and find substitutes. Assuming a gas density of 114 g/m3, at $100/L helium-3 would be about a thirtieth as expensive as tritium (roughly $880/g vs. roughly $30,000 per gram), while at $2,000 per litre it would be about half as expensive as tritium ($17,540/g vs. $30,000/g).
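The per-gram comparison above is just a unit conversion; a minimal sketch using the quoted gas density:

```python
# Sketch: converting helium-3 prices from $/litre to $/gram.
density_g_per_litre = 0.114            # 114 g/m3 quoted above
tritium_price = 30000                  # US$ per gram, from the text
for price_per_litre in (100, 2000):    # historical and peak auction prices, US$/L
    price_per_gram = price_per_litre / density_g_per_litre
    print(f"${price_per_litre}/L -> ${price_per_gram:,.0f}/g "
          f"({price_per_gram / tritium_price:.2f} x tritium)")
# ~$880/g (about a thirtieth of tritium) and ~$17,500/g (about half of tritium)
```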

The DOE recognized the developing shortage of both tritium and helium-3, and began producing tritium by lithium irradiation at the Tennessee Valley Authority's Watts Bar Nuclear Generating Station in 2010. In this process tritium-producing burnable absorber rods (TPBARs) containing lithium in a ceramic form are inserted into the reactor in place of the normal boron control rods. Periodically the TPBARs are replaced and the tritium extracted.

Currently only two commercial nuclear reactors (Watts Bar Nuclear Plant Units 1 and 2) are being used for tritium production, but the process could, if necessary, be vastly scaled up to meet any conceivable demand simply by utilizing more of the nation's power reactors. Substantial quantities of tritium and helium-3 could also be extracted from the heavy water moderator in CANDU nuclear reactors. India and Canada, the two countries with the largest heavy water reactor fleets, are both known to extract tritium from moderator/coolant heavy water, but those amounts are not nearly enough to satisfy global demand for either tritium or helium-3.

As tritium is also produced inadvertently in various processes in light water reactors (see Tritium for details), extraction from those sources could be another source of helium-3. If the annual discharge of tritium (per 2018 figures) at La Hague reprocessing facility is taken as a basis, the amounts discharged (31.2 g at La Hague) are not nearly enough to satisfy demand, even if 100% recovery is achieved.

Annual discharge of tritium from nuclear facilities
Location | Nuclear facility | Closest waters | Liquid (TBq) | Steam (TBq) | Total (TBq) | Total (mg) | Year
United Kingdom | Heysham nuclear power station B | Irish Sea | 396 | 2.1 | 398 | 1,115 | 2019
United Kingdom | Sellafield reprocessing facility | Irish Sea | 423 | 56 | 479 | 1,342 | 2019
Romania | Cernavodă Nuclear Power Plant Unit 1 | Black Sea | 140 | 152 | 292 | 872 | 2018
France | La Hague reprocessing plant | English Channel | 11,400 | 60 | 11,460 | 32,100 | 2018
South Korea | Wolseong Nuclear Power Plant | East Sea | 107 | 80.9 | 188 | 671 | 2020
Taiwan | Maanshan Nuclear Power Plant | Luzon Strait | 35 | 9.4 | 44 | 123 | 2015
China | Fuqing Nuclear Power Plant | Taiwan Strait | 52 | 0.8 | 52 | 146 | 2020
China | Sanmen Nuclear Power Station | East China Sea | 20 | 0.4 | 20 | 56 | 2020
Canada | Bruce Nuclear Generating Station A, B | Great Lakes | 756 | 994 | 1,750 | 4,901 | 2018
Canada | Darlington Nuclear Generating Station | Great Lakes | 220 | 210 | 430 | 1,204 | 2018
Canada | Pickering Nuclear Generating Station Units 1-4 | Great Lakes | 140 | 300 | 440 | 1,232 | 2015
United States | Diablo Canyon Power Plant Units 1, 2 | Pacific Ocean | 82 | 2.7 | 84 | 235 | 2019

Uses

Helium-3 spin echo

Helium-3 can be used in spin-echo experiments probing surface dynamics; such experiments are underway at the Surface Physics Group at the Cavendish Laboratory in Cambridge and in the Chemistry Department at Swansea University.

Neutron detection

Helium-3 is an important isotope in instrumentation for neutron detection. It has a high absorption cross section for thermal neutron beams and is used as a converter gas in neutron detectors. The neutron is converted through the nuclear reaction

n + 3He → 3H + 1H + 0.764 MeV

into charged particles, a tritium ion (T, 3H) and a hydrogen ion or proton (p, 1H), which are then detected by the charge cloud they create in the stopping gas of a proportional counter or a Geiger–Müller tube.
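Because a thermal neutron brings in essentially no momentum, the 0.764 MeV is split between the two charged products in a fixed ratio set by their masses. A minimal kinematics sketch (approximate masses, for illustration only):

```python
# Sketch: energy sharing in n + 3He -> 3H + 1H for a thermal (slow) neutron.
q_mev = 0.764                 # reaction energy from the text
m_p, m_t = 1.008, 3.016       # approximate proton and triton masses, Da

# The products recoil with equal and opposite momenta; E = p^2 / 2m,
# so the lighter proton carries the larger share of the energy.
e_proton = q_mev * m_t / (m_p + m_t)
e_triton = q_mev * m_p / (m_p + m_t)
print(f"proton: {e_proton:.3f} MeV, triton: {e_triton:.3f} MeV")  # ~0.57 and ~0.19 MeV
```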

Furthermore, the absorption process is strongly spin-dependent, which allows a spin-polarized helium-3 volume to transmit neutrons with one spin component while absorbing the other. This effect is employed in neutron polarization analysis, a technique which probes for magnetic properties of matter.

The United States Department of Homeland Security had hoped to deploy detectors to spot smuggled plutonium in shipping containers by their neutron emissions, but the worldwide shortage of helium-3 following the drawdown in nuclear weapons production since the Cold War has to some extent prevented this. As of 2012, DHS determined the commercial supply of boron-10 would support converting its neutron detection infrastructure to that technology.

Cryogenics

Helium-3 refrigerators are devices used in experimental physics for obtaining temperatures down to about 0.2 kelvin. By evaporative cooling of helium-4, a 1-K pot liquefies a small amount of helium-3 in a small vessel called a helium-3 pot. Evaporative cooling of the liquid helium-3 at low pressure, usually driven by adsorption pumps, then cools the helium-3 pot to a fraction of a kelvin; because of its high price, the helium-3 is usually contained in a closed system to avoid losses.

A dilution refrigerator uses a mixture of helium-3 and helium-4 to reach cryogenic temperatures as low as a few thousandths of a kelvin.

Nuclear magnetic resonance

Helium-3 nuclei have an intrinsic nuclear spin of 1/2 ħ and a relatively high gyromagnetic ratio. Because of this, it is possible to use nuclear magnetic resonance (NMR) to observe helium-3. This analytical technique, usually called 3He-NMR, can be used to identify helium-containing compounds. It is, however, limited by the low abundance of helium-3 in comparison to helium-4, which is itself not NMR-active.
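As an illustration of why 3He-NMR is practical, the resonance frequency at typical magnet strengths is easy to estimate. A minimal sketch, assuming a helium-3 gyromagnetic ratio of about 32.43 MHz/T in magnitude (an assumed value; the true ratio is negative in sign):

```python
# Sketch: Larmor frequency of 3He at common MRI field strengths.
gamma_he3 = 32.43     # MHz per tesla (magnitude; assumed value)
gamma_proton = 42.58  # MHz per tesla, shown for comparison
for b in (1.5, 3.0):  # tesla
    print(f"B = {b} T: 3He ~ {gamma_he3 * b:.1f} MHz, 1H ~ {gamma_proton * b:.1f} MHz")
```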

Helium-3 can be hyperpolarized using non-equilibrium means such as spin-exchange optical pumping. During this process, circularly polarized infrared laser light, tuned to the appropriate wavelength, is used to excite electrons in an alkali metal, such as caesium or rubidium, inside a sealed glass vessel. The angular momentum is transferred from the alkali metal electrons to the noble gas nuclei through collisions. This process effectively aligns the nuclear spins with the magnetic field, enhancing the NMR signal.

The hyperpolarized gas may then be stored at pressures of 10 atm, for up to 100 hours. Following inhalation, gas mixtures containing the hyperpolarized helium-3 gas can be imaged with an MRI scanner to produce anatomical and functional images of lung ventilation. This technique is also able to produce images of the airway tree, locate unventilated defects, measure the alveolar oxygen partial pressure, and measure the ventilation/perfusion ratio. This technique may be critical for the diagnosis and treatment management of chronic respiratory diseases such as chronic obstructive pulmonary disease (COPD), emphysema, cystic fibrosis, and asthma.

Because a helium atom, or even two helium atoms, can be encased in fullerene-like cages, the NMR spectroscopy of this element can be a sensitive probe for changes of the carbon framework around it. Using carbon-13 NMR to analyze fullerenes themselves is complicated by the many subtle differences among the carbons in anything but the simplest, highly symmetric structures.

Radio energy absorber for tokamak plasma experiments

Both MIT's Alcator C-Mod tokamak and the Joint European Torus (JET) have experimented with adding a little helium-3 to a H–D plasma to increase the absorption of radio-frequency (RF) energy to heat the hydrogen and deuterium ions, a "three-ion" effect.

Nuclear fuel

Comparison of neutronicity for different reactions

Fuel generation | Reactants | Products | Q | n/MeV
First-generation fusion fuels | 2D + 2D | 3He + n | 3.268 MeV | 0.306
First-generation fusion fuels | 2D + 2D | 3T + 1p | 4.032 MeV | 0
First-generation fusion fuels | 2D + 3T | 4He + n | 17.571 MeV | 0.057
Second-generation fusion fuel | 2D + 3He | 4He + 1p | 18.354 MeV | 0
Net result of 2D burning (sum of first 4 rows) | 6 2D | 2(4He + n + p) | 43.225 MeV | 0.046
Third-generation fusion fuels | 3He + 3He | 4He + 2 1p | 12.86 MeV | 0
Third-generation fusion fuels | 11B + 1p | 3 4He | 8.68 MeV | 0
Current nuclear fuel | 235U + n | 2 FP + 2.5 n | ~200 MeV | 0.0075
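The n/MeV column above (the "neutronicity") is simply the number of neutrons released per MeV of reaction energy; a minimal sketch recomputing it for a few rows of the table:

```python
# Sketch: recomputing neutronicity (neutrons per MeV) from the table above.
reactions = {
    "D + D -> 3He + n":      (3.268, 1),   # (Q in MeV, neutrons per reaction)
    "D + T -> 4He + n":      (17.571, 1),
    "D + 3He -> 4He + p":    (18.354, 0),
    "3He + 3He -> 4He + 2p": (12.86, 0),
}
for name, (q, neutrons) in reactions.items():
    print(f"{name:24s} neutronicity = {neutrons / q:.3f} n/MeV")
# 0.306 and 0.057 for the neutron-producing branches, 0 for the aneutronic ones
```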

3He can be produced by the low-temperature fusion of deuterium with a proton (D–p): 2H + 1p → 3He + γ + 4.98 MeV. If the fusion temperature is below that at which helium nuclei themselves fuse, the reaction produces a high-energy helium-3 nucleus, which quickly acquires an electron to form a stable light helium ion that can be utilized directly as a source of electricity without producing dangerous neutrons.

The fusion reaction rate increases rapidly with temperature until it maximizes and then gradually drops off. The DT rate peaks at a lower temperature (about 70 keV, or 800 million kelvins) and at a higher value than other reactions commonly considered for fusion energy.
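The keV-to-kelvin conversion used above is just the Boltzmann relation, with 1 eV corresponding to roughly 11,605 K; a one-line check:

```python
# Sketch: 70 keV expressed as a temperature (1 eV ~ 11,605 K).
print(f"{70e3 * 11605:.2e} K")  # ~8.1e8 K, i.e. about 800 million kelvins
```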

3He can be used in fusion reactions by either of the reactions 2H + 3He → 4He + 1p + 18.3 MeV, or 3He + 3He → 4He + 2 1p + 12.86 MeV.

The conventional deuterium + tritium ("D–T") fusion process produces energetic neutrons which render reactor components radioactive with activation products. The appeal of helium-3 fusion stems from the aneutronic nature of its reaction products. Helium-3 itself is non-radioactive. The lone high-energy by-product, the proton, can be contained by means of electric and magnetic fields. The momentum energy of this proton (created in the fusion process) will interact with the containing electromagnetic field, resulting in direct net electricity generation.

Because of the higher Coulomb barrier, the temperatures required for 2H + 3He fusion are much higher than those of conventional D–T fusion. Moreover, since both reactants need to be mixed together to fuse, reactions between nuclei of the same reactant will occur, and the D–D reaction (2H + 2H) does produce a neutron. Reaction rates vary with temperature, but the D–3He reaction rate is never greater than 3.56 times the D–D reaction rate. Therefore, fusion using D–3He fuel at the right temperature and a D-lean fuel mixture can produce a much lower neutron flux than D–T fusion, but it is not clean, negating some of its main attraction.

The second possibility, fusing 3He with itself (3He + 3He), requires even higher temperatures (since now both reactants have a +2 charge), and thus is even more difficult than the D-3He reaction. It offers a theoretical reaction that produces no neutrons; the charged protons produced can be contained in electric and magnetic fields, which in turn directly generates electricity. 3He + 3He fusion is feasible as demonstrated in the laboratory and has immense advantages, but commercial viability is many years in the future.

The amounts of helium-3 needed as a replacement for conventional fuels are substantial by comparison to amounts currently available. The total amount of energy produced in the 2D + 3He reaction is 18.4 MeV, which corresponds to some 493 megawatt-hours (4.93×10⁸ W·h) per three grams (one mole) of 3He. If the total amount of energy could be converted to electrical power with 100% efficiency (a physical impossibility), it would correspond to about 30 minutes of output of a gigawatt electrical plant per mole of 3He. Thus, a year's production (at 6 grams for each operation hour) would require 52.5 kilograms of helium-3. The amount of fuel needed for large-scale applications can also be put in terms of total consumption: electricity consumption by 107 million U.S. households in 2001 totaled 1,140 billion kW·h (1.14×10¹⁵ W·h). Again assuming 100% conversion efficiency, 6.7 tonnes per year of helium-3 would be required for that segment of the energy demand of the United States, or 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency.
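The figures in this paragraph follow from straightforward energy bookkeeping; a minimal sketch, again assuming the (unrealistic) 100% conversion efficiency:

```python
# Sketch: energy per mole of 3He in D + 3He fusion and the implied fuel demand.
MEV_TO_J = 1.602e-13
AVOGADRO = 6.022e23

q_joule = 18.4 * MEV_TO_J                 # J per D + 3He reaction
energy_per_mole = q_joule * AVOGADRO      # J per 3 g (one mole) of 3He
mwh_per_mole = energy_per_mole / 3.6e9    # 1 MWh = 3.6e9 J
print(f"energy per mole of 3He: {mwh_per_mole:.0f} MWh")        # ~493 MWh

household_demand_wh = 1.14e15             # Wh per year, from the text
grams_needed = 3 * household_demand_wh / (mwh_per_mole * 1e6)   # Wh per mole -> grams
print(f"3He needed: {grams_needed / 1e6:.1f} tonnes per year")  # ~7 t at 100% efficiency
```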

A second-generation approach to controlled fusion power involves combining helium-3 and deuterium, 2D. This reaction produces an alpha particle and a high-energy proton. The most important potential advantage of this fusion reaction for power production as well as other applications lies in its compatibility with the use of electrostatic fields to control fuel ions and the fusion protons. High speed protons, as positively charged particles, can have their kinetic energy converted directly into electricity, through use of solid-state conversion materials as well as other techniques. Potential conversion efficiencies of 70% may be possible, as there is no need to convert proton energy to heat in order to drive a turbine-powered electrical generator.

He-3 power plants

There have been many claims about the capabilities of helium-3 power plants. According to proponents, fusion power plants operating on deuterium and helium-3 would offer lower capital and operating costs than their competitors due to less technical complexity, higher conversion efficiency, smaller size, the absence of radioactive fuel, no air or water pollution, and only low-level radioactive waste disposal requirements. Recent estimates suggest that about $6 billion in investment capital will be required to develop and construct the first helium-3 fusion power plant. Financial break even at today's wholesale electricity prices (5 US cents per kilowatt-hour) would occur after five 1-gigawatt plants were on line, replacing old conventional plants or meeting new demand.

The reality is not so clear-cut. The most advanced fusion programs in the world are inertial confinement fusion (such as the National Ignition Facility) and magnetic confinement fusion (such as ITER and Wendelstein 7-X). In the case of the former, there is no solid roadmap to power generation. In the case of the latter, commercial power generation is not expected until around 2050. In both cases, the type of fusion discussed is the simplest: D–T fusion. The reason for this is the very low Coulomb barrier for this reaction; for D + 3He, the barrier is much higher, and it is higher still for 3He–3He. The immense cost of reactors like ITER and the National Ignition Facility is largely due to their sheer size, yet scaling up to higher plasma temperatures would require reactors far larger still. The 14.7 MeV proton and 3.6 MeV alpha particle from D–3He fusion, plus the higher conversion efficiency, mean that more electricity is obtained per kilogram than with D–T fusion (17.6 MeV), but not that much more. As a further downside, the rates of reaction for helium-3 fusion are not particularly high, requiring a reactor that is larger still, or more reactors, to produce the same amount of electricity.

In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle".

Alternatives to He-3

To attempt to work around the problem of massively large power plants that may not even be economical with D–T fusion, let alone the far more challenging D–3He fusion, a number of other reactors have been proposed, such as the Fusor, Polywell, and Focus fusion. Many of these concepts have fundamental problems with achieving a net energy gain, and they generally attempt to achieve fusion in thermal disequilibrium, something that could potentially prove impossible; consequently, these long-shot programs tend to have trouble garnering funding despite their low budgets. Unlike the "big" and "hot" fusion systems, however, if such systems worked, they could scale to the higher-barrier aneutronic fuels, and so their proponents tend to promote p-B fusion, which requires no exotic fuel such as helium-3.
