Paleoclimatology (British spelling, palaeoclimatology) is the scientific study of climates predating the invention of meteorological instruments, when no direct measurement data were available. As instrumental records only span a tiny part of Earth's history, the reconstruction of ancient climate is important to understand natural variation and the evolution of the current climate.
The scientific field of paleoclimatology came to maturity in the
20th century. Notable periods studied by paleoclimatologists include the
frequent glaciations that Earth has undergone, rapid cooling events like the Younger Dryas, and the rapid warming during the Paleocene–Eocene Thermal Maximum.
Studies of past changes in the environment and biodiversity often shed light on the current situation, specifically the impact of climate on mass extinctions and biotic recovery, and on current global warming. Paleoclimatology is therefore also important for anticipating how Earth's climate may evolve in the future.
Notions of a changing climate most likely evolved in ancient Egypt, Mesopotamia, the Indus Valley and China, where prolonged periods of droughts and floods were experienced. In the seventeenth century, Robert Hooke postulated that fossils of giant turtles found in Dorset could only be explained by a once warmer climate, which he thought could be explained by a shift in Earth's axis. Fossils were, at that time, often explained as a consequence of a biblical flood. Systematic observations of sunspots were begun by amateur astronomer Heinrich Schwabe in the early 19th century, starting a discussion of the Sun's influence on Earth's climate.
The scientific study of paleoclimatology began to take shape in
the early 19th century, when discoveries about glaciations and natural
changes in Earth's past climate helped to understand the greenhouse effect.
It was only in the 20th century that paleoclimatology became a unified
scientific field. Before then, different aspects of Earth's climate history
were studied by a variety of disciplines. At the end of the 20th century, the empirical research into Earth's
ancient climates started to be combined with computer models of
increasing complexity. A new objective also developed in this period:
finding ancient analog climates that could provide information about
current climate change.
Reconstructing ancient climates
Figures: preliminary results from a Smithsonian Institution project, showing Earth's average surface temperature over the past 500 million years; palaeotemperature graphs placed together; and the oxygen content in the atmosphere over the last billion years.
Paleoclimatologists employ a wide variety of techniques to deduce
ancient climates. The techniques used depend on which variable has to be
reconstructed (this could be temperature, precipitation,
or something else) and how long ago the climate of interest occurred.
For instance, the deep marine record, the source of most isotopic data,
exists only on oceanic plates, which are eventually subducted; the oldest remaining material is 200 million years old. Older sediments are also more prone to corruption by diagenesis.
This is due to the millions of years of disruption experienced by the rock formations, such as pressure, tectonic activity, and fluid flow. These factors often result in a loss of data quality or quantity, which causes the resolution of and confidence in the data to decrease with age.
Specific techniques used to make inferences about ancient climate conditions include the analysis of lake sediment cores and speleothems. These rely on the analysis of sediment layers and of mineral growth layers, respectively, alongside isotope-dating methods based on oxygen, carbon, and uranium.
Proxies for climate
Direct quantitative measurements
Direct quantitative measurement is the most direct approach to understanding changes in climate. Comparing recent data with older data allows a researcher to gain a basic understanding of weather and climate changes within an area. The method has a significant limitation, however: instrumental climate records only began in the mid-1800s, so researchers have roughly 150 years of data to work with. That is of little help when trying to map the climate of an area 10,000 years ago, which is where more indirect methods must be used.
Ice
Mountain glaciers and the polar ice caps/ice sheets provide much data in paleoclimatology. Ice-coring projects in the ice caps of Greenland and Antarctica have yielded data going back several hundred thousand years, over 800,000 years in the case of the EPICA project.
Air trapped within fallen snow
becomes encased in tiny bubbles as the snow is compressed into ice in
the glacier under the weight of later years' snow. The trapped air has
proven a tremendously valuable source for direct measurement of the
composition of air from the time the ice was formed.
Layering can be observed because of seasonal pauses in ice
accumulation and can be used to establish chronology, associating
specific depths of the core with ranges of time.
Changes in the layering thickness can be used to determine changes in precipitation or temperature.
Oxygen-18 quantity changes (δ18O)
in ice layers represent changes in average ocean surface temperature.
Water molecules containing the heavier O-18 require more energy to evaporate than those containing the more common oxygen-16 isotope. The ratio of O-18 to O-16 will be higher as temperature
increases but it also depends on factors such as water salinity and the
volume of water locked up in ice sheets. Various cycles in isotope
ratios have been detected.
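As a minimal illustration, the δ18O value reported for an ice or water sample is the per-mil deviation of its 18O/16O ratio from a reference standard; the sketch below assumes the Vienna Standard Mean Ocean Water (VSMOW) ratio and an invented sample ratio rather than values from any particular core.

```python
# Minimal sketch of the delta notation used for oxygen isotopes.
# The VSMOW standard ratio below (~0.0020052) and the sample ratio are
# illustrative values, not measurements from any particular core.

R_VSMOW = 0.0020052          # 18O/16O of Vienna Standard Mean Ocean Water

def delta_18O(r_sample, r_standard=R_VSMOW):
    """Return delta-18O in per mil (parts per thousand)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Glacial-age polar ice is depleted in 18O, so its ratio sits below the standard.
print(f"{delta_18O(0.0019330):.1f} per mil")   # about -36, typical of glacial-age polar ice
```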
Pollen
has been observed in the ice cores and can be used to understand which
plants were present as the layer formed. Pollen is produced in abundance
and its distribution is typically well understood. A pollen count for a
specific layer can be produced by observing the total amount of pollen
categorized by type (shape) in a controlled sample of that layer.
Changes in plant frequency over time can be plotted through statistical
analysis of pollen counts in the core. Knowing which plants were present
leads to an understanding of precipitation and temperature, and types
of fauna present. Palynology includes the study of pollen for these purposes.
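A simple sketch of the counting step: once grains in a controlled sample of a layer have been tallied by type, relative abundances follow directly. The taxa and counts below are invented purely for illustration.

```python
# Hypothetical pollen counts for one layer; taxa and numbers are invented
# solely to illustrate how relative abundances are derived from a raw count.
counts = {"pine": 240, "oak": 95, "grass": 410, "sedge": 55}

total = sum(counts.values())
percentages = {taxon: 100.0 * n / total for taxon, n in counts.items()}

for taxon, pct in sorted(percentages.items(), key=lambda kv: -kv[1]):
    print(f"{taxon:>6}: {pct:5.1f} %")
```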
Volcanic ash
is contained in some layers and can be used to establish the time of
the layer's formation. Volcanic events distribute ash with a unique set
of properties (shape and color of particles, chemical signature).
Establishing the ash's source will give a time period to associate with
the layer of ice.
A multinational consortium, the European Project for Ice Coring in Antarctica (EPICA), has drilled an ice core in Dome C on the East Antarctic ice sheet and retrieved ice from roughly 800,000 years ago. The international ice core community has, under the auspices of
International Partnerships in Ice Core Sciences (IPICS), defined a
priority project to obtain the oldest possible ice core record from
Antarctica, an ice core record reaching back to or towards 1.5 million
years ago.
Climatic information can be obtained through an understanding of
changes in tree growth. Generally, trees respond to changes in climatic
variables by speeding up or slowing down growth, which in turn is
generally reflected by a greater or lesser thickness in growth rings.
Different species, however, respond to changes in climatic variables in
different ways. A tree-ring record is established by compiling information from many living trees in a specific area. This is done by comparing the number and thickness of growth rings, their ring boundaries, and by matching ring-width patterns between trees.
The differences in thickness displayed in the growth rings in
trees can often indicate the quality of conditions in the environment,
and the fitness of the tree species evaluated. Different species of
trees will display different growth responses to the changes in the
climate. Evaluating multiple trees of the same species, along with trees of different species, allows for a more accurate analysis of the changing climate variables and how they affected the surrounding species.
Older intact wood that has escaped decay can extend the time covered by the record by matching its ring-width changes to contemporary specimens. Using that method, some areas have tree-ring records
dating back a few thousand years. Older wood not connected to a
contemporary record can be dated generally with radiocarbon techniques. A
tree-ring record can be used to produce information regarding
precipitation, temperature, hydrology, and fire corresponding to a
particular area.
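The pattern-matching step can be sketched as sliding an undated ring-width series along a dated master chronology and keeping the offset with the strongest correlation; the series below are synthetic, and real cross-dating uses standardized indices from many trees.

```python
# Toy illustration of cross-dating: slide an undated ring-width series along a
# dated master chronology and keep the offset with the highest correlation.
# Both series are synthetic; real work uses standardized indices and many trees.
from statistics import correlation  # Python 3.10+

master = [1.2, 0.8, 1.5, 0.6, 0.9, 1.4, 1.1, 0.5, 1.3, 0.7, 1.0, 1.6]
sample = [0.7, 1.0, 1.5, 1.2, 0.6]      # undated series, e.g. from an old beam

best_offset, best_r = None, -2.0
for offset in range(len(master) - len(sample) + 1):
    window = master[offset:offset + len(sample)]
    r = correlation(window, sample)      # Pearson correlation with this alignment
    if r > best_r:
        best_offset, best_r = offset, r

print(f"best match at offset {best_offset} (r = {best_r:.2f})")
```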
Sedimentary content
On a longer time scale, geologists must refer to the sedimentary record for data.
Sediments, sometimes lithified to form rock, may contain remnants of preserved vegetation, animals, plankton, or pollen, which may be characteristic of certain climatic zones.
Biomarker molecules such as the alkenones may yield information about their temperature of formation.
Chemical signatures, particularly Mg/Ca ratio of calcite in Foraminifera tests, can be used to reconstruct past temperature.
Isotopic ratios can provide further information. Specifically, the δ18O record responds to changes in temperature and ice volume, and the δ13C record reflects a range of factors, which are often difficult to disentangle.
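As an illustrative sketch, foraminiferal Mg/Ca thermometry is often expressed with an exponential calibration of the form Mg/Ca = B·exp(A·T); the constants and the measured ratio below are placeholder values rather than any particular published, species-specific calibration.

```python
# Sketch of a foraminiferal Mg/Ca palaeotemperature calculation, assuming the
# commonly used exponential calibration  Mg/Ca = B * exp(A * T).
# A and B below are illustrative placeholders; real studies use species-specific
# calibrations with their own constants and uncertainties.
import math

A = 0.09    # 1/degC, illustrative exponential constant
B = 0.38    # mmol/mol, illustrative pre-exponential constant

def temperature_from_mg_ca(mg_ca_mmol_mol):
    """Invert the calibration to get a calcification temperature in degC."""
    return math.log(mg_ca_mmol_mol / B) / A

print(f"{temperature_from_mg_ca(3.2):.1f} degC")   # ~23.7 degC for Mg/Ca = 3.2 mmol/mol
```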
Sea
floor core sample labelled to identify the exact spot on the sea floor
where the sample was taken. Sediments from nearby locations can show
significant differences in chemical and biological composition.
On a longer time scale, the rock record may show signs of sea level rise and fall, and features such as "fossilised" sand dunes can be identified. Scientists can get a grasp of long-term climate by studying sedimentary rock
going back billions of years. The division of Earth history into
separate periods is largely based on visible changes in sedimentary rock
layers that demarcate major changes in conditions. Often, they include
major shifts in climate.
Coral "rings'' share similar evidence of growth to that of trees and
thus can be dated in similar ways. A primary difference is their
environments and the conditions within those that they respond to.
Examples of these conditions for coral include water temperature,
freshwater influx, changes in pH, and wave disturbances. From there,
specialized equipment, such as the Advanced Very High-Resolution
Radiometer (AVHRR) instrument, can be used to derive the sea surface temperature and water salinity from the past few centuries. The δ18O of coralline
red algae provides a useful proxy of the combined sea surface
temperature and sea surface salinity at high latitudes and the tropics,
where many traditional techniques are limited.
Landscapes and landforms
Within climatic geomorphology, one approach is to study relict landforms to infer ancient climates. Because it is often concerned with past climates, climatic geomorphology is sometimes considered a theme of historical geology. Evidence of these past climates can be found in the landforms they leave behind, such as glacial landforms (moraines, striations), desert features (dunes, desert pavements), and coastal landforms (marine terraces, beach ridges). Climatic geomorphology is of limited use for studying recent (Quaternary, Holocene) large climate changes, since these are seldom discernible in the geomorphological record.
Timing of proxies
The field of geochronology
has scientists working on determining how old certain proxies are. For
recent proxy archives of tree rings and corals the individual year rings
can be counted, and an exact year can be determined. Radiometric dating
uses the properties of radioactive elements in proxies. In older
material, more of the radioactive material will have decayed and the
proportion of different elements will be different from newer proxies.
One example of radiometric dating is radiocarbon dating. In the air, cosmic rays constantly convert nitrogen into a specific radioactive carbon isotope, 14C.
Plants take up this carbon while they grow; once they die, the isotope is no longer replenished and decays away. The proportion of 'normal' carbon to carbon-14 indicates how long the plant material has been out of contact with the atmosphere.
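A minimal sketch of the age calculation, assuming simple exponential decay with the 5,730-year half-life of 14C; the measured fraction is hypothetical, and real dates additionally require calibration for past variations in atmospheric 14C.

```python
# Minimal sketch of radiocarbon age estimation: from the fraction of 14C
# remaining (relative to the atmospheric level when the plant died), the time
# since death follows from the exponential decay law. The measured fraction
# below is hypothetical; real dates also need calibration against tree rings.
import math

HALF_LIFE_14C = 5730.0                      # years

def radiocarbon_age(fraction_remaining):
    """Years since the sample stopped exchanging carbon with the atmosphere."""
    return HALF_LIFE_14C / math.log(2) * math.log(1.0 / fraction_remaining)

print(f"{radiocarbon_age(0.25):.0f} years")   # two half-lives -> ~11460 years
```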
First atmosphere
The first atmosphere would have consisted of gases in the solar nebula, primarily hydrogen. In addition, there would probably have been simple hydrides such as those now found in gas giants like Jupiter and Saturn, notably water vapor, methane, and ammonia. As the solar nebula dissipated, the gases would have escaped, partly driven off by the solar wind.
Second atmosphere
The next atmosphere, consisting largely of nitrogen, carbon dioxide, and inert gases, was produced by outgassing from volcanism, supplemented by gases produced during the late heavy bombardment of Earth by huge asteroids. A major part of the carbon dioxide emitted soon dissolved in water and built up carbonate sediments.
Water-related sediments have been found dating from as early as 3.8 billion years ago. About 3.4 billion years ago, nitrogen was the major part of the then
stable "second atmosphere". An influence of life has to be taken into
account rather soon in the history of the atmosphere because hints of
early life forms have been dated to as early as 3.5 to 4.3 billion years
ago. The fact that it is not perfectly in line with the 30% lower solar
radiance (compared to today) of the early Sun has been described as the "faint young Sun paradox".
The geological record, however, shows a continually relatively warm surface during the complete early temperature record of Earth with the exception of one cold glacial phase about 2.4 billion years ago. In the late Archaean eon, an oxygen-containing atmosphere began to develop, apparently from photosynthesizing cyanobacteria (see Great Oxygenation Event) which have been found as stromatolite fossils from 2.7 billion years ago. The early basic carbon isotopy (isotope ratio proportions) was very much in line with what is found today, suggesting that the fundamental features of the carbon cycle were established as early as 4 billion years ago.
Third atmosphere
The constant rearrangement of continents by plate tectonics
influences the long-term evolution of the atmosphere by transferring
carbon dioxide to and from large continental carbonate stores. Free
oxygen did not exist in the atmosphere until about 2.4 billion years
ago, during the Great Oxygenation Event, and its appearance is indicated by the end of the banded iron formations.
Until then, any oxygen produced by photosynthesis was consumed by
oxidation of reduced materials, notably iron. Molecules of free oxygen
did not start to accumulate in the atmosphere until the rate of
production of oxygen began to exceed the availability of reducing
materials. That point was a shift from a reducing atmosphere to an oxidizing atmosphere. O2 showed major variations until reaching a steady state of more than 15% by the end of the Precambrian. The following time span was the Phanerozoic eon, during which oxygen-breathing metazoan life forms began to appear.
The amount of oxygen in the atmosphere has fluctuated over the last 600 million years, reaching a peak of 35% during the Carboniferous period, significantly higher than today's 21%. Two main processes govern changes in atmospheric oxygen: plants use carbon dioxide from the atmosphere and release oxygen, while the breakdown of pyrite and volcanic eruptions release sulfur into the atmosphere, which oxidizes and hence reduces the amount of oxygen in the atmosphere. However, volcanic eruptions also release
carbon dioxide, which plants can convert to oxygen. The exact cause of
the variation of the amount of oxygen in the atmosphere is not known.
Periods with much oxygen in the atmosphere are associated with rapid
development of animals. Today's atmosphere contains 21% oxygen, which is
high enough for rapid development of animals.
The Quaternary glaciation is the current glaciation period and began 2.58 million years ago.
In 2020 scientists published a continuous, high-fidelity record of variations in Earth's climate during the past 66 million years and identified four climate states,
separated by transitions that include changing greenhouse gas levels
and polar ice sheet volumes. They integrated data from various sources.
The warmest climate state since the time of the dinosaur extinction,
"Hothouse", endured from 56 Mya to 47 Mya and was ~14 °C warmer than
average modern temperatures.
The Precambrian took place between the time when Earth first formed 4.6 billion years (Ga)
ago, and 542 million years ago. The Precambrian can be split into two
eons, the Archean and the Proterozoic, which can be further subdivided
into eras. Reconstruction of the Precambrian climate is difficult for various reasons, including the low number of reliable indicators and a fossil record that is generally not well preserved or extensive (especially compared to the Phanerozoic eon). Despite these issues, there is evidence for a number of major climate events throughout the Precambrian: the Great Oxygenation Event, which started around 2.3 Ga ago (the beginning of the Proterozoic), is indicated by biomarkers that demonstrate the appearance of photosynthetic organisms. Due to the high levels of oxygen produced in the atmosphere during the GOE, CH4 levels fell rapidly, cooling the atmosphere and causing the Huronian
glaciation. For about 1 Ga after the glaciation (2–0.8 Ga ago), the
Earth likely experienced warmer temperatures indicated by microfossils
of photosynthetic eukaryotes, and oxygen levels between 5 and 18% of the
Earth's current oxygen level. At the end of the Proterozoic, there is
evidence of global glaciation events of varying severity causing a 'Snowball Earth'. Snowball Earth is supported by different indicators such as glacial deposits, significant continental erosion called the Great Unconformity, and sedimentary rocks called cap carbonates that form after a deglaciation episode.
Phanerozoic climate
Changes in oxygen-18 ratios over the last 500 million years, indicating environmental change
Major climate drivers in the preindustrial ages were variations of the Sun, volcanic ash and exhalations, relative movements of the Earth with respect to the Sun, and tectonically induced effects on major sea currents, watersheds, and ocean oscillations. In the early Phanerozoic, increased atmospheric carbon dioxide concentrations have been linked to driving or amplifying increased global temperatures. Royer et al. (2004) found a climate sensitivity for the rest of the Phanerozoic that was calculated to be similar to today's range of values.
The difference in global mean temperatures between a fully
glacial Earth and an ice-free Earth is estimated at 10 °C, though far
larger changes would be observed at high latitudes and smaller ones at
low latitudes. One requirement for the development of large-scale ice sheets seems to
be the arrangement of continental land masses at or near the poles. The
constant rearrangement of continents by plate tectonics
can also shape long-term climate evolution. However, the presence or
absence of land masses at the poles is not sufficient to guarantee
glaciations or exclude polar ice caps. Evidence exists of past warm
periods in Earth's climate when polar land masses similar to Antarctica were home to deciduous forests rather than ice sheets.
The relatively warm local minimum between Jurassic and Cretaceous goes along with an increase of subduction and mid-ocean ridge volcanism due to the breakup of the Pangea supercontinent.
Superimposed on the long-term evolution between hot and cold
climates have been many short-term fluctuations in climate similar to,
and sometimes more severe than, the varying glacial and interglacial
states of the present ice age. Some of the most severe fluctuations, such as the Paleocene-Eocene Thermal Maximum, may be related to rapid climate changes due to sudden collapses of natural methane clathrate reservoirs in the oceans.
Figures: ice core data for the past 800,000 years (x-axis values represent "age before 1950", so today's date is on the left side of the graph and older time on the right); the blue curve is temperature, the red curve is atmospheric CO2 concentration, and the brown curve is dust flux. Note that glacial-interglacial cycles average ~100,000 years in length. A separate graph shows Holocene temperature variations.
The Quaternary geological period includes the current climate. There has been a cycle of ice ages for the past 2.2–2.1 million years (starting before the Quaternary in the late Neogene Period).
Note in the graphic on the right the strong 120,000-year
periodicity of the cycles, and the striking asymmetry of the curves.
This asymmetry is believed to result from complex interactions of
feedback mechanisms. It has been observed that ice ages deepen by
progressive steps, but the recovery to interglacial conditions occurs in
one big step.
The graph on the left shows the temperature change over the past
12,000 years, from various sources; the thick black curve is an average.
Climate forcing is the difference between radiant energy (sunlight) received by the Earth and the outgoing longwave radiation back to space. Such radiative forcing is quantified based on the CO2 amount at the tropopause, in units of watts per square meter of the Earth's surface. Dependent on the radiative balance of incoming and outgoing energy, the Earth either warms up or cools down. Earth's radiative balance changes with solar insolation and with the concentrations of greenhouse gases and aerosols. Climate change may be due to internal processes in Earth's spheres and/or to external forcings.
One example of a way this can be applied to study climatology is analyzing how the varying concentrations of CO2
affect the overall climate. This is done by using various proxies to estimate past greenhouse gas concentrations and comparing them with present-day values. Researchers are then able to assess the role of those gases in the progression of climate change throughout Earth's history.
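As one hedged example of such a comparison, the radiative forcing of a proxy-derived CO2 concentration relative to a preindustrial baseline is often approximated with the simplified logarithmic expression ΔF ≈ 5.35·ln(C/C0) W/m²; the concentration used below is illustrative, not a specific proxy result.

```python
# Sketch of turning a proxy-based CO2 estimate into a radiative forcing, using
# the widely quoted simplified expression  dF ≈ 5.35 * ln(C / C0)  W/m^2.
# The 1000 ppm value below is illustrative, not a specific proxy reconstruction.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) of CO2 at c_ppm relative to a c0_ppm baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"{co2_forcing(1000.0):+.2f} W/m^2")   # roughly +6.8 W/m^2 relative to 280 ppm
```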
Internal processes and forcings
The Earth's climate system involves the atmosphere, biosphere, cryosphere, hydrosphere, and lithosphere, and the sum of the processes in these spheres is what affects the climate. Greenhouse gases act as an internal forcing of the climate system. Particular interest in climate science and paleoclimatology focuses on the study of Earth's climate sensitivity in response to the sum of forcings. Analyzing the sum of these forcings allows scientists to make broad estimates about the Earth's climate system, including the evidence for long-term climate variability (eccentricity, obliquity, precession), feedback mechanisms (such as the ice-albedo effect), and anthropogenic influence.
The Milankovitch cycles determine Earth's distance from and orientation to the Sun; solar insolation is the total amount of solar radiation received by Earth.
Volcanic eruptions are considered an internal forcing.
Human changes to the composition of the atmosphere or to land use are further forcings; human activities cause anthropogenic greenhouse gas emissions, leading to global warming and associated climate changes.
Large asteroids that have cataclysmic impacts on Earth's climate are considered external forcings.
Mechanisms
On timescales of millions of years, the uplift of mountain ranges, the subsequent weathering of rocks and soils, and the subduction of tectonic plates are an important part of the carbon cycle. Weathering sequesters CO2 through the reaction of minerals with chemicals (especially silicate weathering with CO2), thereby removing CO2 from the atmosphere and reducing the radiative forcing. The opposite effect is volcanism, responsible for the natural greenhouse effect by emitting CO2 into the atmosphere and thus affecting glaciation (ice age) cycles. Jim Hansen suggested that humans emit CO2 10,000 times faster than natural processes have done in the past.
Ice sheet
dynamics and continental positions (and linked vegetation changes) have
been important factors in the long term evolution of the Earth's
climate. There is also a close correlation between CO2 and temperature, where CO2 has a strong control over global temperatures in Earth's history.
Nuclear fusion is a reaction in which two or more atomic nuclei
combine to form a larger nucleus. The difference in mass between the
reactants and products is manifested as either the release or the absorption of energy. This difference in mass arises as a result of the difference in nuclear binding energy between the atomic nuclei before and after the fusion reaction. Nuclear fusion is the process that powers all active stars, via many reaction pathways.
Animation of an electron's wave function as quantum tunneling allows transit through a barrier with a low probability. In the same fashion, an atomic nucleus can quantum tunnel through the Coulomb barrier to another nucleus, making a fusion reaction possible.
American chemist William Draper Harkins was the first to propose the concept of nuclear fusion in 1915. Francis William Aston's 1919 invention of the mass spectrometer allowed the discovery that four hydrogen atoms are heavier than one helium atom. Thus in 1920, Arthur Eddington correctly predicted fusion of hydrogen into helium could be the primary source of stellar energy.
In 1932, John Cockcroft and Ernest Walton accelerated protons into lithium, producing two alpha particles, where the intermediary nuclide was later confirmed to be the extremely short-lived beryllium-8. This has a claim to being the first artificial fusion reaction.
The Radiation Lab, only detecting the resulting energized protons and neutrons, misinterpreted the source as an exothermic disintegration of the deuterons, now known to be impossible. In May 1934, Mark Oliphant, Paul Harteck, and Ernest Rutherford at the Cavendish Laboratory, published an intentional deuterium fusion experiment, and made the discovery of both tritium and helium-3. This is widely considered the first experimental demonstration of fusion.
Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project.
In 1941, Enrico Fermi and Edward Teller had a conversation about the
possibility of a fission bomb creating conditions for thermonuclear
fusion. In 1942, Emil Konopinski brought Ruhlig's work on the deuterium–tritium reaction to the project's attention. J. Robert Oppenheimer
initially commissioned physicists at Chicago and Cornell to use the
Harvard University cyclotron to secretly investigate its cross-section,
and that of the lithium reaction (see below). Measurements were obtained
at Purdue, Chicago, and Los Alamos from 1942 to 1946. Theoretical
assumptions about DT fusion gave it a similar cross-section to DD.
However, in 1946 Egon Bretscher discovered a resonance enhancement giving the DT reaction a cross-section ~100 times larger.
From 1945, John von Neumann, Teller, and other Los Alamos scientists used ENIAC, one of the first electronic computers, to simulate thermonuclear weapon detonations.
The first artificial thermonuclear fusion reaction occurred during the 1951 US Greenhouse George nuclear test, using a small amount of deuterium–tritium gas. This produced the largest yield to date, at 225 kt, 15 times that of Little Boy. The first "true" thermonuclear weapon detonation, i.e. a two-stage device, was the 1952 Ivy Mike test of a liquid-deuterium-fusing device, yielding over 10 Mt. The key to this jump was the full utilization of the fission blast by the Teller–Ulam design.
The Soviet Union had begun their focus on a hydrogen bomb program earlier, and in 1953 carried out the RDS-6s
test. This had international impacts as the first air-deliverable bomb
using fusion, but yielded 400 kt and was limited by its single-stage
design. The first Soviet two-stage test was RDS-37 in 1955 yielding 1.5 Mt, using an independently reached version of the Teller–Ulam design.
Modern devices benefit from the use of solid lithium deuteride with an enrichment of lithium-6. This is due to the Jetter cycle, which involves the exothermic reaction n + 6Li → 4He + 3H (releasing about 4.8 MeV).
During thermonuclear detonations, this provides tritium for the
highly energetic DT reaction, and benefits from its neutron production,
creating a closed neutron cycle.
While fusion bomb detonations were loosely considered for energy production,
the possibility of controlled and sustained reactions remained the
scientific focus for peaceful fusion power. Research into developing
controlled fusion inside fusion reactors has been ongoing since the 1930s, with Los Alamos National Laboratory's
Scylla I device producing the first laboratory thermonuclear fusion in
1958, but the technology is still in its developmental phase.
The first experiments producing large amounts of controlled fusion power were experiments with mixes of deuterium and tritium in tokamaks. Experiments in the TFTR at the Princeton Plasma Physics Laboratory (PPPL) in Princeton, NJ, USA during 1993–1996 produced 1.6 GJ of fusion energy. The peak fusion power was 10.3 MW from 3.7×10¹⁸ reactions per second, and the peak fusion energy created in one discharge was 7.6 MJ. Subsequent experiments in the JET in 1997 achieved a peak fusion power of 16 MW (5.8×10¹⁸ reactions per second). The central Q, defined as the ratio of the local fusion power produced to the local applied heating power, was computed to be 1.3. A JET experiment in 2024 produced 69 MJ of fusion energy, consuming 0.2 mg of D and T.
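As a rough consistency check (not part of the reported analyses), the quoted peak reaction rates can be converted to power by multiplying by the roughly 17.6 MeV released per D–T fusion:

```python
# Back-of-envelope check that the quoted peak D-T reaction rates are consistent
# with the quoted peak fusion powers, taking ~17.6 MeV released per reaction.
MEV_TO_J = 1.602e-13           # joules per MeV
E_PER_REACTION = 17.6          # MeV per D-T fusion

def fusion_power(reactions_per_second):
    """Fusion power in watts for a given D-T reaction rate."""
    return reactions_per_second * E_PER_REACTION * MEV_TO_J

print(f"TFTR: {fusion_power(3.7e18) / 1e6:.1f} MW")   # ~10.4 MW, close to the quoted 10.3 MW
print(f"JET:  {fusion_power(5.8e18) / 1e6:.1f} MW")   # ~16.4 MW, close to the quoted 16 MW
```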
The US National Ignition Facility, which uses laser-driven inertial confinement fusion, was designed with a goal of achieving a fusion energy gain factor
(Q) of larger than one; the first large-scale laser target experiments
were performed in June 2009 and ignition experiments began in early
2011. On 13 December 2022, the United States Department of Energy
announced that on 5 December 2022, they had successfully accomplished
break-even fusion, "delivering 2.05 megajoules (MJ) of energy to the
target, resulting in 3.15 MJ of fusion energy output". The power supplied to the experimental test cell is, however, hundreds of times larger than the power delivered to the target.
Prior to this breakthrough, controlled fusion reactions had been
unable to produce break-even (self-sustaining) controlled fusion. The two most advanced approaches for it are magnetic confinement
(toroid designs) and inertial confinement (laser designs). Workable
designs for a toroidal reactor that theoretically will deliver ten times
more fusion energy than the amount needed to heat plasma to the
required temperatures are in development (see ITER).
The ITER facility is currently expected to initiate plasma experiments
in 2034, but is not expected to begin full deuterium–tritium fusion
until 2039.
One of the most recent breakthroughs to date in maintaining a
sustained fusion reaction occurred in France's WEST fusion reactor. It
maintained a 90 million degree plasma for a record time of six minutes.
This is a tokamak-style reactor which is the same style as the upcoming
ITER reactor.
Process
Fusion of deuterium with tritium creating helium-4, freeing a neutron, and releasing 17.59 MeV as kinetic energy of the products while a corresponding amount of mass disappears, in agreement with E = Δmc², where Δm is the decrease in the total rest mass of the particles
The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force, a manifestation of the strong interaction, which holds protons and neutrons tightly together in the atomic nucleus; and the Coulomb force, which causes positively charged protons in the nucleus to repel each other. Lighter nuclei (nuclei smaller than iron and nickel) are sufficiently
small and proton-poor to allow the nuclear force to overcome the Coulomb
force. This is because the nucleus is sufficiently small that all
nucleons feel the short-range attractive force at least as strongly as
they feel the infinite-range Coulomb repulsion. Building up nuclei from
lighter nuclei by fusion releases the extra energy from the net
attraction of particles. For larger nuclei, however, no energy is released, because the nuclear force is short-range and cannot act across larger nuclei.
Fusion powers stars and produces most elements lighter than cobalt in a process called nucleosynthesis.
The Sun is a main-sequence star, and, as such, generates its energy by
nuclear fusion of hydrogen nuclei into helium. In its core, the Sun
fuses 620 million metric tons of hydrogen and makes 616 million metric
tons of helium each second. The fusion of lighter elements in stars
releases energy and the mass that always accompanies it. For example, in
the fusion of two hydrogen nuclei to form helium, 0.645% of the mass is
carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation.
It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen.
When accelerated to high enough speeds, nuclei can overcome this
electrostatic repulsion and be brought close enough such that the
attractive nuclear force is greater than the repulsive Coulomb force. The strong force
grows rapidly once the nuclei are close enough, and the fusing nucleons
can essentially "fall" into each other and the result is fusion; this
is an exothermic process.
Energy released in most nuclear reactions is much larger than in chemical reactions, because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV—less than one-millionth of the 17.6 MeV released in the deuterium–tritium (D–T) reaction shown in the adjacent diagram. Fusion reactions have an energy density many times greater than nuclear fission; the reactions produce far greater energy per unit of mass even though individual fission reactions are generally much more energetic than individual fusion ones, which are themselves millions of times more energetic than chemical reactions. Via the mass–energy equivalence, fusion yields a 0.7% efficiency of reactant mass into energy. This can only be exceeded by the extreme cases of the accretion process involving neutron stars or black holes, approaching 40% efficiency, and antimatter annihilation at 100% efficiency. (The complete conversion of one gram of matter would expel 9×10¹³ joules of energy.)
The proton–proton chain reaction, branch I, dominates in stars the size of the Sun or smaller. The CNO cycle dominates in stars heavier than the Sun.
An important fusion process is the stellar nucleosynthesis that powers stars,
including the Sun. In the 20th century, it was recognized that the
energy released from nuclear fusion reactions accounts for the longevity
of stellar heat and light. The fusion of nuclei in a star, starting
from its initial hydrogen and helium abundance, provides that energy and
synthesizes new nuclei. Different reaction chains are involved,
depending on the mass of the star (and therefore the pressure and
temperature in its core).
Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was unknown; Eddington
correctly speculated that the source was fusion of hydrogen into helium,
liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see: metallicity). Eddington's paper reasoned that:
The leading theory of stellar energy, the contraction
hypothesis, should cause the rotation of a star to visibly speed up due
to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening.
The only other known plausible source of energy was conversion of
matter to energy; Einstein had shown some years earlier that a small
amount of matter was equivalent to a large amount of energy.
Francis Aston
had also recently shown that the mass of a helium atom was about 0.8%
less than the mass of the four hydrogen atoms which would, combined,
form a helium atom (according to the then-prevailing theory of atomic
structure which held atomic weight to be the distinguishing property
between elements; work by Henry Moseley and Antonius van den Broek
would later show that nucleic charge was the distinguishing property
and that a helium nucleus, therefore, consisted of two hydrogen nuclei
plus additional mass). This suggested that if such a combination could
happen, it would release considerable energy as a byproduct.
If a star contained just 5% of fusible hydrogen, it would suffice to
explain how stars got their energy. (It is now known that most
'ordinary' stars are usually made of around 70% to 75% hydrogen)
Further elements might also be fused, and other scientists had
speculated that stars were the "crucible" in which light elements
combined to create heavy elements, but without more accurate
measurements of their atomic masses nothing more could be said at the time.
All of these speculations were proven correct in the following decades.
The primary source of solar energy, and that of similar size stars, is the fusion of hydrogen to form helium (the proton–proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle, with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle
and other processes are more important. As a star uses up a substantial
fraction of its hydrogen, it begins to fuse heavier elements. In
massive cores, silicon-burning is the final fusion cycle, leading to a build-up of iron and nickel nuclei.
Nuclear binding energy
makes the production of elements heavier than nickel via fusion
energetically unfavorable. These elements are produced in non-fusion
processes: the s-process, r-process, and the variety of processes that can produce p-nuclei. Such processes occur in giant star shells, or supernovae, or neutron star mergers.
Brown dwarfs
Brown dwarfs fuse deuterium and in very high mass cases also fuse lithium.
White dwarfs
Carbon–oxygen white dwarfs, which accrete matter either from an active stellar companion or white dwarf merger, approach the Chandrasekhar limit of 1.44 solar masses. Immediately prior, carbon burning fusion begins, destroying the Earth-sized dwarf within one second, in a Type Ia supernova.
Much more rarely, helium white dwarfs may merge, which does not cause an explosion but begins helium burning in an extreme type of helium star.
Neutron stars
Some neutron stars accrete hydrogen and helium from an active stellar
companion. Periodically, the helium accretion reaches a critical level,
and a thermonuclear burn wave propagates across the surface, on the
timescale of one second.
Black hole accretion disks
Similar to stellar fusion, extreme conditions within black holeaccretion disks can allow fusion reactions. Calculations show the most energetic reactions occur around lower stellar mass black holes, below 10 solar masses, compared to those above 100. Beyond five Schwarzschild radii, carbon-burning and fusion of helium-3 dominates the reactions. Within this distance, around lower mass black holes, fusion of nitrogen, oxygen, neon, and magnesium can occur. In the extreme limit, the silicon-burning process can begin with the fusion of silicon and selenium nuclei.
In the period from approximately 10 seconds to 20 minutes after the Big Bang, the universe cooled from over 100 keV to 1 keV. This allowed protons and neutrons to combine into deuterium nuclei, beginning a rapid fusion chain through tritium and helium-3 and ending in predominantly helium-4, with a minimal fraction of lithium, beryllium, and boron nuclei.
Observational evidence shows that pockets of gas in the early universe became dense enough to collapse under their own gravity. This activated nuclear fusion with the formation of the first stars around 13.6 billion years ago.
Requirements
The nuclear binding energy curve. The formation of nuclei with masses up to iron-56 releases energy, as illustrated above.
A substantial energy barrier of electrostatic forces must be overcome
before fusion can occur. At large distances, two naked nuclei repel one
another because of the repulsive electrostatic force between their positively charged
protons. If two nuclei can be brought close enough together, however,
the electrostatic repulsion can be overcome by the quantum effect in
which nuclei can tunnel through the Coulomb barrier.
When a nucleon such as a proton or neutron
is added to a nucleus, the nuclear force attracts it to all the other
nucleons of the nucleus (if the atom is small enough), but primarily to
its immediate neighbors due to the short range of the force. The
nucleons in the interior of a nucleus have more neighboring nucleons
than those on the surface. Since smaller nuclei have a larger
surface-area-to-volume ratio, the binding energy per nucleon due to the nuclear force
generally increases with the size of the nucleus but approaches a
limiting value corresponding to that of a nucleus with a diameter of
about four nucleons. It is important to keep in mind that nucleons are quantum objects.
So, for example, since two neutrons in a nucleus are identical to each
other, the goal of distinguishing one from the other, such as which one
is in the interior and which is on the surface, is in fact meaningless,
and the inclusion of quantum mechanics is therefore necessary for proper
calculations.
The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from all
the other protons in the nucleus. The electrostatic energy per nucleon
due to the electrostatic force thus increases without limit as the atomic number of the nuclei grows.
The electrostatic force
between the positively charged nuclei is repulsive, but when the separation is small enough, the nuclei can tunnel quantum-mechanically through this barrier. Therefore, the prerequisite for fusion is that the two nuclei be
brought close enough together for a long enough time for quantum
tunneling to act.
The net result of the opposing electrostatic and strong nuclear
forces is that the binding energy per nucleon generally increases with
increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy
becomes negative and very heavy nuclei (all with more than 208
nucleons, corresponding to a diameter of about 6 nucleons) are not
stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are 62Ni, 58Fe, 56Fe, and 60Ni. Even though the nickel isotope 62Ni is more stable, the iron isotope 56Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create 62Ni through the alpha process.
An exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium, the next heavier element. This is because protons and neutrons are fermions, which according to the Pauli exclusion principle
cannot exist in the same nucleus in exactly the same state. Each proton
or neutron's energy state in a nucleus can accommodate both a spin up
particle and a spin down particle. Helium-4 has an anomalously large
binding energy because its nucleus consists of two protons and two
neutrons (it is a doubly magic
nucleus), so all four of its nucleons can be in the ground state. Any
additional nucleons would have to go into higher energy states. Indeed,
the helium-4 nucleus is so tightly bound that it is commonly treated as a
single quantum mechanical particle in nuclear physics, namely, the alpha particle.
The situation is similar if two nuclei are brought together. As
they approach each other, all the protons in one nucleus repel all the
protons in the other. Only when the two nuclei actually come close enough for long enough can the strong attractive nuclear force take over and overcome the repulsive electrostatic force. This can
also be described as the nuclei overcoming the so-called Coulomb barrier. The kinetic energy to achieve this can be lower than the barrier itself because of quantum tunneling.
The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton
is not stable, so neutrons must also be involved, ideally in such a way
that a helium nucleus, with its extremely tight binding, is one of the
products.
Using deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV. The (intermediate) result of the fusion is an unstable 5He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining 4He
nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is
many times more than what was needed to overcome the energy barrier.
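That 17.6 MeV figure follows directly from the mass defect of the reaction; the sketch below uses standard atomic-mass-table values in unified atomic mass units and is intended only as an illustrative calculation.

```python
# Sketch of the D-T energy release from the mass defect, E = delta_m * c^2.
# Masses are standard atomic-mass-table values in unified atomic mass units (u).
M_D, M_T = 2.014102, 3.016049       # deuterium, tritium
M_HE4, M_N = 4.002602, 1.008665     # helium-4, neutron
U_TO_MEV = 931.494                   # energy equivalent of 1 u, in MeV

delta_m = (M_D + M_T) - (M_HE4 + M_N)       # mass that "disappears" in the reaction
energy_mev = delta_m * U_TO_MEV

print(f"delta_m = {delta_m:.6f} u  ->  E = {energy_mev:.2f} MeV")   # about 17.59 MeV
```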
The
fusion reaction rate increases rapidly with temperature until it
maximizes and then gradually drops off. The DT rate peaks at a lower
temperature (about 70 keV, or 800 million kelvin) and at a higher value
than other reactions commonly considered for fusion energy.
The reaction cross section
(σ) is a measure of the probability of a fusion reaction as a function
of the relative velocity of the two reactant nuclei. If the reactants
have a distribution of velocities, e.g. a thermal distribution, then it
is useful to perform an average over the distributions of the product of
cross-section and velocity. This average is called the 'reactivity',
denoted ⟨σv⟩. The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities: f = n1n2⟨σv⟩.
If a species of nuclei is reacting with a nucleus like itself, such as the DD reaction, then the product n1n2 must be replaced by n²/2.
The reactivity ⟨σv⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV (expressed as the thermal energy kBT). At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.
The significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion.
This is an extremely challenging barrier to overcome on Earth, which
explains why fusion research has taken many years to reach the current
advanced technical state.
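To make the role of the reactivity concrete, the sketch below evaluates the volumetric reaction rate f = n_D·n_T·⟨σv⟩ and the corresponding fusion power density for assumed, illustrative plasma parameters; the reactivity value is a round number roughly representative of a D–T plasma near 10 keV, not a table lookup.

```python
# Illustration of the reaction-rate formula  f = n_D * n_T * <sigma v>.
# The reactivity below (~1e-22 m^3/s) is an assumed round number, roughly
# representative of a D-T plasma around 10 keV; densities are also illustrative.
MEV_TO_J = 1.602e-13
E_DT = 17.6 * MEV_TO_J                 # energy per D-T reaction, in joules

n_D = n_T = 0.5e20                     # ion densities in m^-3 (50/50 fuel mix)
sigma_v = 1.0e-22                      # assumed reactivity <sigma v> in m^3/s

rate = n_D * n_T * sigma_v             # fusions per m^3 per second
power_density = rate * E_DT            # W/m^3

print(f"rate = {rate:.2e} m^-3 s^-1, power density = {power_density/1e3:.0f} kW/m^3")
```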
Thermonuclear fusion is the process of atomic nuclei combining or
"fusing" using high temperatures to drive them close enough together for
this to become possible. Such temperatures cause the matter to become a
plasma
and, if confined, fusion reactions may occur due to collisions with
extreme thermal kinetic energies of the particles. There are two forms
of thermonuclear fusion: uncontrolled, in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons ("hydrogen bombs") and in most stars; and controlled, where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed.
Temperature is a measure of the average kinetic energy of particles, so by heating the material it will gain energy. After reaching sufficient temperature, given by the Lawson criterion, the energy of accidental collisions within the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together.
There are two effects that lower the actual temperature needed. One is the fact that temperature is the average
kinetic energy, implying that some nuclei at this temperature would
actually have much higher energy than 0.1 MeV, while others would be
much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunnelling.
The nuclei do not actually have to have enough energy to overcome the
Coulomb barrier completely. If they have nearly enough energy, they can
tunnel through the remaining barrier. For these reasons fuel at lower
temperatures will still undergo fusion events, at a lower rate.
Thermonuclear fusion is one of the methods being researched in the attempts to produce fusion power. If thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint.
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions.
Accelerating light ions is relatively easy, and can be done in an
efficient manner—requiring only a vacuum tube, a pair of electrodes,
and a high-voltage transformer; fusion can be observed with as little as
10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. The key problem with accelerator-based fusion (and with cold targets in
general) is that fusion cross sections are many orders of magnitude
lower than Coulomb interaction cross-sections. Therefore, the vast
majority of ions expend their energy emitting bremsstrahlung radiation and the ionization of atoms of the target. Devices referred to as sealed-tube neutron generators
are particularly relevant to this discussion. These small devices are
miniature particle accelerators filled with deuterium and tritium gas in
an arrangement that allows ions of those nuclei to be accelerated
against hydride targets, also containing deuterium and tritium, where
fusion takes place, releasing a flux of neutrons. Hundreds of neutron
generators are produced annually for use in the petroleum industry where
they are used in measurement equipment for locating and mapping oil
reserves.
A number of attempts to recirculate the ions that "miss"
collisions have been made over the years. One of the better-known
attempts in the 1970s was Migma, which used a unique particle storage ring
to capture ions into circular orbits and return them to the reaction
area. Theoretical calculations made during funding reviews pointed out
that the system would have significant difficulty scaling up to contain
enough fusion fuel to be relevant as a power source. In the 1990s, a new
arrangement using a field-reversed configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies as of 2021. A closely related approach is to merge two FRC's rotating in opposite directions, which is being actively studied by Helion Energy. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-11B that are too difficult to attempt using conventional approaches.
Fusion of very heavy target nuclei with accelerated ion beams is the
primary method of element synthesis. In early accelerator experiments of the late 1930s and early 1940s, deuteron beams were used to discover the first synthetic elements, such as technetium, neptunium, and plutonium.
Fusion of very heavy target nuclei with heavy ion beams has been used to discover superheavy elements.
Muon-catalyzed fusion
Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones
in the early 1980s. Net energy production from this reaction has been
unsuccessful because of the high energy required to create muons, their short 2.2 μs lifetime, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.
Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated energy levels, the D–D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron.
Although it makes a useful neutron generator, the apparatus is not
intended for power generation since it requires far more energy than it
produces. D–T fusion reactions have been observed with a tritiated erbium target.
Nuclear fusion–fission hybrid (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe
during the 1970s, but largely remained unexplored until a revival of
interest in 2009, due to the delays in the realization of pure fusion.
Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs
(fusion bombs) inside an underground cavity. As an energy source, the
system is the only fusion power system that could be demonstrated to
work using existing technology. However, it would also require a large,
continuous supply of nuclear bombs, making the economics of such a
system rather questionable.
Bubble fusion, also called sonofusion, was a proposed mechanism for achieving fusion via sonic cavitation which rose to prominence in the early 2000s. Subsequent attempts at replication failed and the principal investigator, Rusi Taleyarkhan, was judged guilty of research misconduct in 2008.
Confinement in thermonuclear fusion
The
key problem in achieving thermonuclear fusion is how to confine the hot
plasma. Due to the high temperature, the plasma cannot be in direct
contact with any solid material, so it has to be located in a vacuum.
Also, high temperatures imply high pressures. The plasma tends to
expand immediately and some force is necessary to act against it. This
force can take one of three forms: gravitation in stars, magnetic forces in magnetic confinement fusion reactors, or inertia, as the fusion reaction may occur before the plasma starts to expand, so that the plasma's inertia keeps the material together.
One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity. The mass needed, however, is so great that gravitational confinement is only found in stars—the least massive stars capable of sustained fusion are red dwarfs, while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough, after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon. In the most massive stars (at least 8–11 solar masses), the process is continued until some of their energy is produced by fusing lighter elements to iron. As iron has one of the highest binding energies, reactions producing heavier elements are generally endothermic.
Therefore, significant amounts of heavier elements are not formed
during stable periods of massive star evolution, but are formed in supernova explosions. Some lighter stars
also form these elements in their outer layers over long periods of time, by absorbing energy from fusion in the interior of the star and by capturing neutrons that are emitted from the fusion process.
All of the elements heavier than iron have some potential energy
to release, in theory. At the extremely heavy end of element production,
these heavier elements can produce energy in the process of being split again back toward the size of iron, in the process of nuclear fission. Nuclear fission thus releases energy that has been stored, sometimes billions of years before, during stellar nucleosynthesis.
Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre).
The fusion fuel can therefore be trapped using a strong magnetic field.
A variety of magnetic configurations exist, including the toroidal
geometries of tokamaks and stellarators and open-ended mirror confinement systems.
A third confinement principle is to apply a rapid pulse of energy to a
large part of the surface of a pellet of fusion fuel, causing it to
simultaneously "implode" and heat to very high pressure and temperature.
If the fuel is dense enough and hot enough, the fusion reaction rate
will be high enough to burn a significant fraction of the fuel before it
has dissipated. To achieve these extreme conditions, the initially cold
fuel must be explosively compressed. Inertial confinement is used in
the hydrogen bomb, where the driver is x-rays created by a fission bomb. Inertial confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser, ion, or electron beam, or a Z-pinch. Another method is to use conventional high explosive material to compress a fuel to fusion conditions. The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused hemispherical implosions to generate neutrons from D–D reactions. The simplest and most direct method proved to be in a predetonated stoichiometric mixture of deuterium–oxygen. The other successful method was using a miniature Voitenko compressor, where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere.
There are also electrostatic confinement fusion devices. These devices confine ions using electrostatic fields. The best known is the fusor.
This device has a cathode inside an anode wire cage. Positive ions fly
towards the negative inner cage, and are heated by the electric field in
the process. If they miss the inner cage they can collide and fuse.
Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are very low due to competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the
cage, by generating the field using a non-neutral cloud. These include a
plasma oscillating device, a Penning trap and the polywell. The technology is relatively immature, however, and many scientific and engineering questions remain.
The best-known inertial electrostatic confinement approach is the fusor. Starting in 1999, a number of amateurs have been able to achieve fusion using these homemade devices. Other IEC devices include: the Polywell, MIX POPS and Marble concepts.
Important reactions
Stellar reaction chains
At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm³), the energy release rate is only 276 μW/cm³, about a quarter of the volumetric rate at which a resting human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature, and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The exponential dependence of the fusion rate on temperature (roughly exp(−E/kT)) leads to the need to achieve temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: T ≈ (0.1–1.0)×10⁹ K.
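The comparison with human metabolism can be checked with a back-of-the-envelope Python sketch; the solar figure is the one quoted above, while the human metabolic power and body volume used here are rough assumptions, not figures from the text.

# Sketch: compare the Sun's core volumetric power output with human metabolism.
SOLAR_CORE_W_PER_M3 = 276e-6 * 1e6   # 276 uW/cm^3 converted to W/m^3 (= 276 W/m^3)

HUMAN_POWER_W = 100.0                # assumed resting metabolic rate [W]
HUMAN_VOLUME_M3 = 0.07               # assumed body volume [m^3]
human_w_per_m3 = HUMAN_POWER_W / HUMAN_VOLUME_M3

print(f"solar core: {SOLAR_CORE_W_PER_M3:.0f} W/m^3")
print(f"resting human: {human_w_per_m3:.0f} W/m^3")
print(f"ratio: {SOLAR_CORE_W_PER_M3 / human_w_per_m3:.2f}")
# ~0.2, i.e. on the order of a quarter: stellar-core conditions are useless
# as a template for a compact terrestrial power reactor.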
In artificial fusion, the primary fuel is not constrained to be
protons and higher temperatures can be used, so reactions with larger
cross-sections are chosen. Another concern is the production of
neutrons, which activate the reactor structure radiologically, but also
have the advantages of allowing volumetric extraction of the fusion
energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic.
To be a useful energy source, a fusion reaction must satisfy several criteria. It must:
Be exothermic
This limits the reactants to the low Z (number of protons) side of the curve of binding energy. It also makes helium-4 (⁴He) the most common product because of its extraordinarily tight binding, although ³He and ³H also show up.
Involve low atomic number (Z) nuclei
This is because the electrostatic repulsion that must be overcome before the nuclei are close enough to fuse (the Coulomb barrier) is directly related to the number of protons each nucleus contains, that is, its atomic number.
Have two reactants
At anything less than stellar densities, three-body collisions are
too improbable. In inertial confinement, both stellar densities and
temperatures are exceeded to compensate for the shortcomings of the
third parameter of the Lawson criterion, ICF's very short confinement
time.
Have two or more products
This allows simultaneous conservation of energy and momentum without relying on the electromagnetic force.
Conserve both protons and neutrons
The cross sections for the weak interaction are too small.
Few reactions meet these criteria. The following are those with the largest cross sections:
For reactions with two products, the energy is divided between them
in inverse proportion to their masses, as shown. In most reactions with
three products, the distribution of energy varies. For reactions that
can result in more than one set of products, the branching ratios are
given.
Some reaction candidates can be eliminated at once. The D–⁶Li reaction has no advantage compared to p⁺–¹¹B because it is roughly as difficult to burn but produces substantially more neutrons through D–D side reactions. There is also a p⁺–⁷Li reaction, but its cross section is far too low, except possibly when Ti > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p⁺–⁹Be reaction, which is not only difficult to burn, but ⁹Be can be easily induced to split into two alpha particles and a neutron.
In addition to the fusion reactions, the following reactions with
neutrons are important in order to "breed" tritium in "dry" fusion
bombs and some proposed fusion reactors:
The latter of the two equations was unknown when the U.S. conducted the Castle Bravo
fusion bomb test in 1954. Being just the second fusion bomb ever tested
(and the first to use lithium), the designers of the Castle Bravo
"Shrimp" had understood the usefulness of 6Li in tritium production, but had failed to recognize that 7Li fission would greatly increase the yield of the bomb. While 7Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout.
To evaluate the usefulness of these reactions, in addition to the
reactants, the products, and the energy released, one needs to know
something about the nuclear cross section.
Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨σv⟩/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/T² (see Lawson criterion). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨σv⟩/T² at that temperature is given for a few of these reactions in the following table.
fuel        T [keV]    ⟨σv⟩/T² [m³/s/keV²]
D–T         13.6       1.24×10⁻²⁴
D–D         15         1.28×10⁻²⁶
D–³He       58         2.24×10⁻²⁶
p⁺–⁶Li      66         1.46×10⁻²⁷
p⁺–¹¹B      123        3.01×10⁻²⁷
Note that many of the reactions form chains. For instance, a reactor fueled with T and ³He creates some D, which is then possible to use in the D–³He reaction if the energies are "right". An elegant idea is to combine the reactions (8) and (9). The ³He from reaction (8) can react with ⁶Li
in reaction (9) before completely thermalizing. This produces an
energetic proton, which in turn undergoes reaction (8) before
thermalizing. Detailed analysis shows that this idea would not work
well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate.
Neutronicity, confinement requirement, and power density
Any of the reactions above can in principle be the basis of fusion power
production. In addition to the temperature and cross section discussed
above, we must consider the total energy of the fusion products Efus, the energy of the charged fusion products Ech, and the atomic number Z of the non-hydrogenic reactant.
Specification of the D–D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the T and ³He products. T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The D–³He reaction is optimized at a much higher temperature, so the burnup at the optimum D–D temperature may be low. Therefore, it seems reasonable to assume the T but not the ³He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1):
For calculating the power of a reactor (in which the reaction rate is determined by the D–D step), we count the D–D fusion energy per D–D reaction as Efus = (4.03 MeV + 17.6 MeV) × 50% + (3.27 MeV) × 50% = 12.5 MeV and the energy in charged particles as Ech = (4.03 MeV + 3.5 MeV) × 50% + (0.82 MeV) × 50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium).
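This bookkeeping can be reproduced with a short Python sketch; the branch and reaction energies are the values quoted in the text, and it assumes (as above) that the tritium, but not the helium-3, is burned.

# Sketch: reproduce the D-D energy bookkeeping described above.
# Branch and reaction energies [MeV] are the values quoted in the text.
E_2I_TOTAL,  E_2I_CHARGED  = 4.03, 4.03   # D + D -> T + p (both products charged)
E_2II_TOTAL, E_2II_CHARGED = 3.27, 0.82   # D + D -> 3He + n (3He carries 0.82 MeV)
E_DT_TOTAL,  E_DT_CHARGED  = 17.6, 3.5    # D + T -> 4He + n (alpha carries 3.5 MeV)

# Each D-D branch occurs ~50% of the time; the tritium from branch (2i)
# is assumed to burn via D-T, while the 3He from branch (2ii) is not.
E_fus = 0.5 * (E_2I_TOTAL + E_DT_TOTAL) + 0.5 * E_2II_TOTAL
E_ch  = 0.5 * (E_2I_CHARGED + E_DT_CHARGED) + 0.5 * E_2II_CHARGED
E_per_deuteron = (2.0 / 5.0) * E_fus      # five deuterons consumed per (2i)+(2ii)+(1)

print(f"E_fus ~ {E_fus:.2f} MeV, E_ch ~ {E_ch:.2f} MeV per D-D reaction")
print(f"~{E_per_deuteron:.1f} MeV released per deuteron consumed")
# Consistent with the rounded 12.5 MeV, 4.2 MeV and 5.0 MeV figures above.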
Another unique aspect of the D–D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate.
With this choice, we tabulate parameters for four of the most important reactions:
fuel        Z    Efus [MeV]   Ech [MeV]   neutronicity
D–T         1    17.6         3.5         0.80
D–D         1    12.5         4.2         0.66
D–³He       2    18.3         18.3        ≈0.05
p⁺–¹¹B      5    8.7          8.7         ≈0.001
The last column is the neutronicity
of the reaction, the fraction of the fusion energy released as
neutrons. This is an important indicator of the magnitude of the
problems associated with neutrons like radiation damage, biological
shielding, remote handling, and safety. For the first two reactions it
is calculated as (Efus − Ech)/Efus.
For the last two reactions, where this calculation would give zero, the
values quoted are rough estimates based on side reactions that produce
neutrons in a plasma in thermal equilibrium.
Of course, the reactants should also be mixed in the optimal
proportions. This is the case when each reactant ion plus its associated
electrons accounts for half the pressure. Assuming that the total
pressure is fixed, this means that particle density of the
non-hydrogenic ion is smaller than that of the hydrogenic ion by a
factor 2/(Z + 1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/T². On the other hand, because the D–D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction.
Thus there is a "penalty" of 2/(Z + 1)
for non-hydrogenic fuels arising from the fact that they require more
electrons, which take up pressure without participating in the fusion
reaction. (It is usually a good assumption that the electron temperature
will be nearly equal to the ion temperature. Some authors, however,
discuss the possibility that the electrons could be maintained
substantially colder than the ions. In such a case, known as a "hot ion
mode", the "penalty" would not apply.) There is at the same time a
"bonus" of a factor 2 for 2 1D–2 1D because each ion can react with any of the other ions, not just a fraction of them.
We can now compare these reactions in the following table.
fuel        ⟨σv⟩/T²      penalty/bonus   inverse reactivity   Lawson criterion   power density [W/m³/kPa²]   inverse ratio of power density
D–T         1.24×10⁻²⁴   1               1                    1                  34                          1
D–D         1.28×10⁻²⁶   2               48                   30                 0.5                         68
D–³He       2.24×10⁻²⁶   2/3             83                   16                 0.43                        80
p⁺–⁶Li      1.46×10⁻²⁷   1/2             1700                 (not given)        0.005                       6800
p⁺–¹¹B      3.01×10⁻²⁷   1/3             1240                 500                0.014                       2500
The maximum value of ⟨σv⟩/T² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "inverse reactivity" are found by dividing 1.24×10⁻²⁴ by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the D–T reaction under comparable conditions. The column "Lawson criterion" weights these results with Ech and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D–T reaction. The next-to-last column is labeled "power density" and weights the practical reactivity by Efus. The final column indicates how much lower the fusion power density of the other reactions is compared to the D–T reaction and can be considered a measure of the economic potential.
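The "inverse reactivity" and "inverse ratio of power density" columns can be re-derived with a short Python sketch from the base quantities given in the tables above (⟨σv⟩/T², the 2/(Z+1) penalty or factor-2 D–D bonus, and Efus); small differences from the tabulated values are rounding.

# Sketch: re-derive two columns of the comparison table above.
# All input numbers are taken from the tables in the text.
reactions = {
    #            <sv>/T^2   penalty/bonus  E_fus [MeV]
    "D-T":      (1.24e-24,  1.0,           17.6),
    "D-D":      (1.28e-26,  2.0,           12.5),
    "D-3He":    (2.24e-26,  2.0 / 3.0,     18.3),
    "p-11B":    (3.01e-27,  1.0 / 3.0,     8.7),
}

ref_sv, ref_pb, ref_efus = reactions["D-T"]
for name, (sv, pb, efus) in reactions.items():
    # Effective reactivity relative to D-T, including the penalty/bonus factor
    inverse_reactivity = (ref_sv * ref_pb) / (sv * pb)
    # Weighting by the released energy gives the relative power-density shortfall
    power_density_ratio = inverse_reactivity * ref_efus / efus
    print(f"{name:6s} inverse reactivity ~ {inverse_reactivity:6.0f}, "
          f"power density lower by ~ {power_density_ratio:6.0f}x")
# Reproduces the 1 / 48 / 83 / 1240 and 1 / 68 / 80 / 2500 columns above.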
Bremsstrahlung losses in quasineutral, isotropic plasmas
The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that in aggregate neutralize the ions' bulk electrical charge and form a plasma.
The electrons will generally have a temperature comparable to or
greater than that of the ions, so they will collide with the ions and
emit x-ray radiation of 10–30 keV energy, a process known as Bremsstrahlung.
The huge size of the Sun and stars means that the x-rays produced
in this process will not escape and will deposit their energy back into
the plasma. They are said to be opaque to x-rays. But any terrestrial fusion reactor will be optically thin
for x-rays of this energy range. X-rays are difficult to reflect, but they are effectively absorbed (and converted into heat) in less than a millimetre of stainless steel (which is part of a reactor's shield). This
means the bremsstrahlung process is carrying energy out of the plasma,
cooling it.
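As a rough illustration of the scale of this loss, the following Python sketch evaluates a commonly quoted bremsstrahlung power-density estimate (of the form found in plasma formularies); both the formula's use here and the plasma parameters are illustrative assumptions, not values from this article.

# Sketch: order-of-magnitude bremsstrahlung loss for a hydrogenic plasma,
# using a commonly quoted estimate
# P_br ~ 1.69e-32 * n_e * sqrt(T_e[eV]) * sum(Z^2 * n_Z) W/cm^3, densities in cm^-3.
# The parameters below are illustrative assumptions.
import math

n_e = 1.0e14          # assumed electron density [cm^-3]
T_e_keV = 10.0        # assumed electron temperature [keV]
Z = 1                 # hydrogenic fuel ions
n_i = n_e / Z         # quasineutrality

P_brem_W_cm3 = 1.69e-32 * n_e * math.sqrt(T_e_keV * 1e3) * (Z**2 * n_i)
print(f"bremsstrahlung loss ~ {P_brem_W_cm3:.3f} W/cm^3 (~{P_brem_W_cm3 * 1e3:.0f} kW/m^3)")
# Because the loss scales with Z^2, high-Z impurities or high-Z fuels
# (e.g. boron, Z = 5) raise the radiative loss sharply.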
The ratio of fusion power produced to x-ray radiation lost to
walls is an important figure of merit. This ratio is generally maximized
at a much higher temperature than that which maximizes the power
density (see the previous subsection). The following table shows
estimates of the optimum temperature and the power ratio at that
temperature for several reactions:
fuel        Ti [keV]   Pfusion/PBremsstrahlung
D–T         50         140
D–D         500        2.9
D–³He       100        5.3
³He–³He     1000       0.72
p⁺–⁶Li      800        0.21
p⁺–¹¹B      300        0.57
The actual ratios of fusion to Bremsstrahlung power will likely be
significantly lower for several reasons. For one, the calculation
assumes that the energy of the fusion products is transmitted completely
to the fuel ions, which then lose energy to the electrons by
collisions, which in turn lose energy by Bremsstrahlung. However,
because the fusion products move much faster than the fuel ions, they
will give up a significant fraction of their energy directly to the
electrons. Secondly, the ions in the plasma are assumed to be purely
fuel ions. In practice, there will be a significant proportion of
impurity ions, which will then lower the ratio. In particular, the
fusion products themselves must remain in the plasma until they have given up their energy, and will
remain for some time after that in any proposed confinement scheme.
Finally, all channels of energy loss other than Bremsstrahlung have been
neglected. The last two factors are related. On theoretical and
experimental grounds, particle and energy confinement seem to be closely
related. In a confinement scheme that does a good job of retaining
energy, fusion products will build up. If the fusion products are
efficiently ejected, then energy confinement will be poor, too.
The temperatures maximizing the fusion power compared to the
Bremsstrahlung are in every case higher than the temperature that
maximizes the power density and minimizes the required value of the fusion triple product. This will not change the optimum operating point for D–T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D–T is even lower and the required confinement even more difficult to achieve. For D–D and D–³He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For ³He–³He, p⁺–⁶Li and p⁺–¹¹B
the Bremsstrahlung losses appear to make a fusion reactor using these
fuels with a quasineutral, isotropic plasma impossible. Some ways out of
this dilemma have been considered but rejected. This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with.
Mathematical description of cross section
Fusion under classical physics
In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometre, the energy needed for the fusion of two hydrogen nuclei is the Coulomb barrier at contact:
E ≈ e²/(4πε₀r), which for r of about one femtometre is on the order of 1 MeV.
This would imply that for the core of the Sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that hydrogen would reach the threshold is 10⁻²⁹⁰; that is, fusion would never occur. However, fusion in the Sun does occur due to quantum mechanics.
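The classical estimate can be made concrete with a minimal Python sketch, using the one-femtometre distance and the solar core temperature mentioned above; it only illustrates the scale of the mismatch between the barrier and the thermal energy.

# Sketch: classical Coulomb-barrier estimate for two hydrogen nuclei,
# using r ~ 1 fm as in the text and the solar core temperature ~1.4 keV.
K_E = 8.988e9          # Coulomb constant [N m^2 / C^2]
Q_E = 1.602e-19        # elementary charge [C]
R_CONTACT = 1.0e-15    # assumed contact distance [m] (~1 femtometre)

barrier_joules = K_E * Q_E**2 / R_CONTACT
barrier_keV = barrier_joules / Q_E / 1e3
kT_keV = 1.4           # solar core temperature from the text

print(f"Coulomb barrier ~ {barrier_keV:.0f} keV (~{barrier_keV / 1e3:.1f} MeV)")
print(f"barrier / kT ~ {barrier_keV / kT_keV:.0f}")
# The barrier is ~1000x the typical thermal energy, so the classical
# Boltzmann factor exp(-E/kT) is vanishingly small: classically,
# solar fusion should essentially never happen.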
Parameterization of cross section
The probability that fusion occurs is greatly increased compared to
the classical picture, thanks to the smearing of the effective radius as
the de Broglie wavelength as well as quantum tunneling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section,
which describes the probability that particles will fuse by giving a
characteristic area of interaction. An estimation of the fusion
cross-sectional area is often broken into three pieces:
σ ≈ σgeometry × T × R,
where σgeometry is the geometric cross section, T is the barrier transparency and R describes the reaction characteristics of the specific reaction.
σgeometry is of the order of the square of the de Broglie wavelength, σgeometry ≈ λ² = (ℏ/(mᵣv))² ∝ 1/ε, where mᵣ is the reduced mass of the system and ε is the center-of-mass energy of the system.
T can be approximated by the Gamow transparency, which has the form T ≈ exp(−√(εG/ε)), where the Gamow energy εG = (παZ₁Z₂)²·2mᵣc² comes from estimating the quantum tunneling probability through the potential barrier.
R contains all the nuclear physics of the specific reaction and takes very different values depending on the nature of the interaction. However, for most reactions, the variation of R(ε) is small compared to the variation from the Gamow factor, and so it is approximated by a function called the astrophysical S-factor, S(ε), which is weakly varying in energy. Putting these dependencies together, one approximation for the fusion cross section as a function of energy takes the form:
σ(ε) ≈ (S(ε)/ε)·exp(−√(εG/ε))
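The following Python sketch evaluates this form for the D–T pair. The Gamow energy follows from the expression above; the constant S-factor used here is a placeholder assumption, included only to show the shape of the energy dependence.

# Sketch: evaluate sigma(eps) ~ S(eps)/eps * exp(-sqrt(eps_G/eps)) for D-T.
# The constant S value is a hypothetical placeholder (arbitrary units).
import math

ALPHA = 1.0 / 137.036             # fine-structure constant
Z1, Z2 = 1, 1                     # D and T charge numbers
M_R_U = 2.0 * 3.0 / (2.0 + 3.0)   # reduced mass in atomic mass units (~1.2 u)
U_TO_KEV = 931494.0               # 1 u in keV/c^2

eps_G = (math.pi * ALPHA * Z1 * Z2) ** 2 * 2.0 * M_R_U * U_TO_KEV
S_PLACEHOLDER = 1.0               # hypothetical S-factor (arbitrary units)

def sigma(eps_keV: float) -> float:
    """Relative cross section vs centre-of-mass energy [keV]."""
    return S_PLACEHOLDER / eps_keV * math.exp(-math.sqrt(eps_G / eps_keV))

print(f"Gamow energy for D-T ~ {eps_G:.0f} keV")
for eps in (5, 20, 80):
    print(f"eps = {eps:3d} keV -> relative sigma = {sigma(eps):.3e}")
# The tunnelling exponential dominates: raising the energy from 5 to 80 keV
# increases the relative cross section by several orders of magnitude.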
More detailed forms of the cross-section can be derived through nuclear physics-based models and R-matrix theory.
Formulas of fusion cross sections
The Naval Research Lab's plasma physics formulary gives the total cross section in barns, as a function of the energy E (in keV) of the incident particle towards a target ion at rest, fit by a formula of the form:
σT(E) = (A5 + A2/((A4 − A3·E)² + 1)) / (E·(exp(A1/√E) − 1))
with the following coefficient values:
NRL Formulary Cross Section Coefficients
          DT(1)       DD(2i)      DD(2ii)     DHe3(3)     TT(4)       The3(6)
A1        45.95       46.097      47.88       89.27       38.39       123.1
A2        50200       372         482         25900       448         11250
A3        1.368×10⁻²  4.36×10⁻⁴   3.08×10⁻⁴   3.98×10⁻³   1.02×10⁻³   0
A4        1.076       1.22        1.177       1.297       2.09        0
A5        409         0           0           647         0           0
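As an illustration, the following Python sketch evaluates this fit with the DT(1) column of coefficients from the table above; treat it as a sketch of the parameterization under the stated form, not a reference implementation.

# Sketch: the NRL formulary cross-section fit evaluated with the D-T
# coefficients from the table above. E is the energy in keV of the
# incident particle on a target ion at rest; the result is in barns.
import math

# DT(1) coefficients from the table above
A1, A2, A3, A4, A5 = 45.95, 50200.0, 1.368e-2, 1.076, 409.0

def sigma_total_barns(E_keV: float) -> float:
    """sigma_T = (A5 + A2/((A4 - A3*E)^2 + 1)) / (E*(exp(A1/sqrt(E)) - 1))."""
    numerator = A5 + A2 / ((A4 - A3 * E_keV) ** 2 + 1.0)
    denominator = E_keV * (math.exp(A1 / math.sqrt(E_keV)) - 1.0)
    return numerator / denominator

for E in (20.0, 65.0, 110.0, 300.0):
    print(f"E = {E:6.1f} keV -> sigma_DT ~ {sigma_total_barns(E):.3f} barn")
# The D-T cross section peaks at a few barns for incident energies of
# roughly 100 keV, which is one reason D-T is the easiest fuel to burn.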
Bosch and Hale also report R-matrix calculated cross sections fitted to observation data with Padé rational approximating coefficients. With energy in units of keV and cross sections in units of millibarns, the S-factor has the form:
S(E) = (A1 + E·(A2 + E·(A3 + E·(A4 + E·A5)))) / (1 + E·(B1 + E·(B2 + E·(B3 + E·B4)))),
with the coefficient values:
Bosch-Hale coefficients for the fusion cross section
In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution,
meaning the particles have a range of energies centered around the
plasma temperature. The sun, magnetically confined plasmas and inertial
confinement fusion systems are well modeled to be in thermal
equilibrium. In these cases, the value of interest is the fusion
cross-section averaged across the Maxwell–Boltzmann distribution. The
Naval Research Lab's plasma physics formulary tabulates Maxwell-averaged fusion reactivities ⟨σv⟩ in units of cm³/s.
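The following Python sketch shows how such a Maxwell average can be computed numerically from a cross section. It uses the illustrative Gamow-form cross section with a placeholder S-factor from earlier rather than the tabulated NRL data, so only the procedure, not the absolute numbers, should be taken seriously.

# Sketch: numerically Maxwell-average a cross section to get a reactivity,
# <sigma v> = sqrt(8/(pi*m_r)) * (kT)^(-3/2) * integral of sigma(E)*E*exp(-E/kT) dE,
# with E the centre-of-mass energy.
import math

KEV_TO_J = 1.602e-16   # 1 keV in joules

def maxwell_averaged_reactivity(sigma_m2, kT_keV, m_r_kg, n_steps=20000):
    """<sigma v> [m^3/s] for a Maxwellian plasma at temperature kT [keV].

    sigma_m2: cross section [m^2] as a function of centre-of-mass energy [keV].
    """
    kT_J = kT_keV * KEV_TO_J
    prefactor = math.sqrt(8.0 / (math.pi * m_r_kg)) / kT_J ** 1.5
    integral = 0.0
    dE_keV = 20.0 * kT_keV / n_steps           # integrate out to 20 kT
    for i in range(1, n_steps + 1):
        E_keV = i * dE_keV
        integral += (sigma_m2(E_keV) * (E_keV * KEV_TO_J)
                     * math.exp(-E_keV / kT_keV) * dE_keV * KEV_TO_J)
    return prefactor * integral

# Demo with the illustrative Gamow-form cross section (placeholder S-factor):
EPS_G_KEV = 1175.0                             # D-T Gamow energy from the earlier sketch
S_BARN_KEV = 1.0                               # hypothetical S-factor [barn*keV]

def sigma_demo_m2(E_keV):
    return S_BARN_KEV / E_keV * math.exp(-math.sqrt(EPS_G_KEV / E_keV)) * 1e-28  # barn -> m^2

M_R_DT = 1.2 * 1.661e-27                       # D-T reduced mass [kg]
for kT in (5.0, 10.0, 20.0):
    avg = maxwell_averaged_reactivity(sigma_demo_m2, kT, M_R_DT)
    print(f"kT = {kT:4.1f} keV -> <sigma v> ~ {avg:.2e} m^3/s (placeholder S-factor)")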
NRL Formulary fusion reaction rates averaged over Maxwellian distributions