Tuesday, December 16, 2025

Helium-3

From Wikipedia, the free encyclopedia
Helium-3

General
Symbol: 3He
Names: helium-3, tralphium (obsolete)
Protons (Z): 2
Neutrons (N): 1

Nuclide data
Natural abundance: 0.000137% (atmosphere); 0.01% (Solar System)
Half-life (t1/2): stable
Isotope mass: 3.016029322 Da
Spin: 1/2 ħ
Parent isotopes: 3H (beta decay of tritium)

Helium-3 (3He; see also helion) is a light, stable isotope of helium with two protons and one neutron. (In contrast, the most common isotope, helium-4, has two protons and two neutrons.) Helium-3 and hydrogen-1 are the only stable nuclides with more protons than neutrons. It was discovered in 1939. Helium-3 atoms are fermionic and become a superfluid at a temperature of 2.491 mK.

Helium-3 occurs as a primordial nuclide, escaping from Earth's crust into its atmosphere and into outer space over millions of years. It is also thought to be a natural nucleogenic and cosmogenic nuclide, one produced when lithium is bombarded by natural neutrons, which can be released by spontaneous fission and by nuclear reactions with cosmic rays. Some found in the terrestrial atmosphere is a remnant of atmospheric and underwater nuclear weapons testing.

Nuclear fusion using helium-3 has long been viewed as a desirable future energy source. The fusion of two of its atoms would be aneutronic: it would not release the dangerous neutron radiation of conventional D–T fusion, although it would require much higher temperatures. Even so, the process may unavoidably create other reactions that would themselves cause the surrounding material to become radioactive.

Helium-3 is thought to be more abundant on the Moon than on Earth, having been deposited in the upper layer of regolith by the solar wind over billions of years, though still lower in abundance than in the Solar System's gas giants.

History

The existence of helium-3 was first proposed in 1934 by the Australian nuclear physicist Mark Oliphant while he was working at the University of Cambridge Cavendish Laboratory. Oliphant had performed experiments in which fast deuterons collided with deuteron targets (incidentally, the first demonstration of nuclear fusion). Isolation of helium-3 was first accomplished by Luis Alvarez and Robert Cornog in 1939. Helium-3 was thought to be a radioactive isotope until it was also found in samples of natural helium, which is mostly helium-4, taken both from the terrestrial atmosphere and from natural gas wells.

Physical properties

Due to its low atomic mass of 3.016 Da, helium-3 has some physical properties different from those of helium-4, which has a mass of 4.0026 Da. Because of the weak, induced dipole–dipole interaction between helium atoms, their macroscopic physical properties are mainly determined by their zero-point energy. The lower mass of helium-3 also gives it a higher zero-point energy than helium-4. This implies that helium-3 can overcome dipole–dipole interactions with less thermal energy than helium-4 can.

The quantum mechanical effects on helium-3 and helium-4 are significantly different because with two protons, two neutrons, and two electrons, helium-4 has an overall spin of zero, making it a boson, but with one fewer neutron, helium-3 has an overall spin of one half, making it a fermion.

Pure helium-3 gas boils at 3.19 K compared with helium-4 at 4.23 K, and its critical point is also lower at 3.35 K, compared with helium-4 at 5.2 K. Helium-3 has less than half the density of helium-4 when it is at its boiling point: 59 g/L compared to 125 g/L of helium-4 at a pressure of one atmosphere. Its latent heat of vaporization is also considerably lower at 0.026 kJ/mol compared with the 0.0829 kJ/mol of helium-4.

Superfluidity

Phase diagram for helium-3 ("bcc" indicates a body-centered cubic crystal lattice.)

An important property of helium-3 atoms, which distinguishes them from the more common helium-4, is that they contain an odd number of spin-1/2 particles, and therefore are composite fermions. This is a direct result of the addition rules for quantized angular momentum. In contrast, helium-4 atoms are bosons, containing an even number of spin-1/2 particles. At low temperatures (about 2.17 K), helium-4 undergoes a phase transition: A fraction of it enters a superfluid phase that can be roughly understood as a type of Bose–Einstein condensate. Such a mechanism is not available for fermionic helium-3 atoms. Many speculated that helium-3 could also become a superfluid at much lower temperatures, if the atoms formed into pairs analogous to Cooper pairs in the BCS theory of superconductivity. Each Cooper pair, having integer spin, can be thought of as a boson. During the 1970s, David Lee, Douglas Osheroff and Robert Coleman Richardson discovered two phase transitions along the melting curve, which were soon realized to be the two superfluid phases of helium-3. The transition to a superfluid occurs at 2.491 millikelvins on the melting curve. They were awarded the 1996 Nobel Prize in Physics for their discovery. Alexei Abrikosov, Vitaly Ginzburg, and Tony Leggett won the 2003 Nobel Prize in Physics for their work on refining understanding of the superfluid phase of helium-3.

In a zero magnetic field, there are two distinct superfluid phases of 3He, the A-phase and the B-phase. The B-phase is the low-temperature, low-pressure phase which has an isotropic energy gap. The A-phase is the higher temperature, higher pressure phase that is further stabilized by a magnetic field and has two point nodes in its gap. The presence of two phases is a clear indication that 3He is an unconventional superfluid (superconductor), since the presence of two phases requires an additional symmetry, other than gauge symmetry, to be broken. In fact, it is a p-wave superfluid, with spin one, S = 1 ħ, and angular momentum one, L = 1 ħ. The ground state corresponds to total angular momentum zero, J = S + L = 0 (vector addition). Excited states are possible with non-zero total angular momentum, J > 0, which are excited pair collective modes. These collective modes have been studied with much greater precision than in any other unconventional pairing system, because of the extreme purity of superfluid 3He. This purity is due to all 4He phase separating entirely and all other materials solidifying and sinking to the bottom of the liquid, making the A- and B-phases of 3He the most pure condensed matter state possible.

Natural abundance

Terrestrial abundance

3He is a primordial substance in the Earth's mantle, thought to have been trapped during the planet's initial formation. The ratio of 3He to 4He within the Earth's crust and mantle is less than that in the solar disk (as estimated using meteorite and lunar samples), with terrestrial materials generally containing lower 3He/4He ratios due to production of 4He from radioactive decay.

3He has a cosmological ratio of 300 atoms per million atoms of 4He, leading to the assumption that the original ratio of these primordial gases in the mantle was around 200–300 ppm when Earth was formed. Over the course of Earth's history, a significant amount of 4He has been generated by the alpha decay of uranium, thorium and other radioactive isotopes, to the point that only around 7% of the helium now in the mantle is primordial helium, thus lowering the total 3He:4He ratio to around 20 ppm. Ratios of 3He:4He in excess of the atmospheric ratio are indicative of a contribution of 3He from the mantle. Crustal sources are dominated by the 4He produced by radioactive decay.

The ratio of helium-3 to helium-4 in natural Earth-bound sources varies greatly. Samples of the lithium ore spodumene from Edison Mine, South Dakota were found to contain 12 parts of helium-3 to a million parts of helium-4. Samples from other mines showed 2 parts per million.

Helium itself is present as up to 7% of some natural gas sources, and large sources have over 0.5% (above 0.2% makes it viable to extract). The fraction of 3He in helium separated from natural gas in the U.S. was found to range from 70 to 242 parts per billion. Hence the US 2002 stockpile of 1 billion normal m3 would have contained about 12 to 43 kilograms (26 to 95 lb) of helium-3. According to American physicist Richard Garwin, about 26 cubic metres (920 cu ft) or almost 5 kilograms (11 lb) of 3He is available annually for separation from the US natural gas stream. If the process of separating out the 3He could employ as feedstock the liquefied helium typically used to transport and store bulk quantities, estimates for the incremental cost range from $34 to $300 per liter NTP, excluding the cost of infrastructure and equipment. Algeria's annual gas production is assumed to contain 100 million normal cubic metres of helium, and this would contain between 7 and 24 m3 of helium-3 (about 1 to 4 kg), assuming a similar 3He fraction.

3He is also present in the Earth's atmosphere. The natural abundance of 3He in atmospheric helium is 1.37×10−6 (1.37 parts per million). The partial pressure of helium in the Earth's atmosphere is about 0.52 Pa, and thus helium accounts for 5.2 parts per million of the total pressure (101325 Pa) in the Earth's atmosphere, and 3He thus accounts for 7.2 parts per trillion of the atmosphere. Since the atmosphere of the Earth has a mass of about 5.14×1018 kg, the mass of 3He in the Earth's atmosphere is the product of these numbers and the molecular weight ratio of helium-3 to air (3.016 to 28.95), giving a mass of 3815 tonnes of helium-3 in the Earth's atmosphere.
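
For readers who want to check this figure, the arithmetic can be reproduced in a few lines. The Python sketch below simply multiplies the quantities quoted above (partial pressure, isotopic abundance, atmospheric mass, molar masses); treating air as an ideal, well-mixed gas and the variable names are my own assumptions.

```python
# Back-of-the-envelope check of the atmospheric helium-3 inventory quoted above.
# All input values come from the surrounding text.

P_TOTAL_PA = 101325.0      # total atmospheric pressure (Pa)
P_HELIUM_PA = 0.52         # partial pressure of helium (Pa), from the text
HE3_PER_HE = 1.37e-6       # 3He / He atom ratio (atmospheric abundance)
M_ATMOSPHERE_KG = 5.14e18  # mass of Earth's atmosphere (kg)
MOLAR_MASS_HE3 = 3.016     # g/mol
MOLAR_MASS_AIR = 28.95     # g/mol (mean value for air)

he_mole_fraction = P_HELIUM_PA / P_TOTAL_PA        # ~5.1e-6 (about 5.2 ppm)
he3_mole_fraction = he_mole_fraction * HE3_PER_HE  # ~7e-12 (parts per trillion)

# convert the mole fraction to a mass fraction, then to an absolute mass
he3_mass_fraction = he3_mole_fraction * (MOLAR_MASS_HE3 / MOLAR_MASS_AIR)
he3_mass_tonnes = he3_mass_fraction * M_ATMOSPHERE_KG / 1000.0

print(f"3He mole fraction in air: {he3_mole_fraction:.1e}")
print(f"3He in the atmosphere:    {he3_mass_tonnes:,.0f} tonnes")  # ~3,800 tonnes
```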

3He is produced on Earth from three sources: lithium spallation, cosmic rays, and beta decay of tritium (3H). The contribution from cosmic rays is negligible within all except the oldest regolith materials, and lithium spallation reactions are a lesser contributor than the production of 4He by alpha particle emissions.

The total amount of helium-3 in the mantle may be in the range of 0.1–1 megatonnes. Some helium-3 finds its way up through deep-sourced hotspot volcanoes such as those of the Hawaiian Islands, but only 300 g per year is emitted to the atmosphere. Mid-ocean ridges emit another 3 kg per year. Around subduction zones, various sources produce helium-3 in natural gas deposits which possibly contain a thousand tonnes of helium-3 (although there may be 25 thousand tonnes if all ancient subduction zones have such deposits). Wittenberg estimated that United States crustal natural gas sources may have only half a tonne total. Wittenberg cited Anderson's estimate of another 1200 tonnes in interplanetary dust particles on the ocean floors. According to the 1994 study, extracting helium-3 from these sources would consume more energy than its fusion would release.

Moon

Materials on the Moon's surface contain helium-3 at concentrations between 1.4 and 15 ppb in sunlit areas, and may contain concentrations as much as 50 ppb in permanently shadowed regions. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith and use the helium-3 for fusion. Because of the low concentrations of helium-3, any mining equipment would need to process extremely large amounts of regolith (over 150 tonnes of regolith to obtain one gram of helium-3).

The primary objective of Indian Space Research Organisation's first lunar probe called Chandrayaan-1, launched on October 22, 2008, was reported in some sources to be mapping the Moon's surface for helium-3-containing minerals. No such objective is mentioned in the project's official list of goals, though many of its scientific payloads have had helium-3-related applications.

Cosmochemist and geochemist Ouyang Ziyuan from the Chinese Academy of Sciences who is now in charge of the Chinese Lunar Exploration Program has already stated on many occasions that one of the main goals of the program would be the mining of helium-3, from which operation "each year, three space shuttle missions could bring enough fuel for all human beings across the world".

In January 2006, the Russian space company RKK Energiya announced that it considers lunar helium-3 a potential economic resource to be mined by 2020, if funding can be found.

Not all writers feel the extraction of lunar helium-3 is feasible, or even that there will be a demand for it for fusion. Dwayne Day, writing in The Space Review in 2015, characterises helium-3 extraction from the Moon for use in fusion as magical thinking about an unproven technology, and questions the feasibility of lunar extraction, as compared to production on Earth.

Gas giants

Mining gas giants for helium-3 has also been proposed. The British Interplanetary Society's hypothetical Project Daedalus interstellar probe design was fueled by helium-3 mines in the atmosphere of Jupiter, for example.

Solar nebula (primordial) abundance

One early estimate of the primordial ratio of 3He to 4He in the solar nebula has been the measurement of their ratio in the atmosphere of Jupiter, measured by the mass spectrometer of the Galileo atmospheric entry probe. This ratio is about 1:10000, or 100 parts of 3He per million parts of 4He. This is roughly the same ratio of the isotopes as in lunar regolith, which contains 28 ppm helium-4 and 2.8 ppb helium-3 (which is at the lower end of actual sample measurements, which vary from about 1.4 to 15 ppb). Terrestrial ratios of the isotopes are lower by a factor of 100, mainly due to enrichment of helium-4 stocks in the mantle by billions of years of alpha decay from uranium, thorium as well as their decay products and extinct radionuclides.

Human production

Tritium decay

Virtually all helium-3 used in industry today is produced from the radioactive decay of tritium, given its very low natural abundance and its very high cost.

Production, sales and distribution of helium-3 in the United States are managed by the US Department of Energy (DOE) DOE Isotope Program.

While tritium has several different experimentally determined values of its half-life, NIST lists 4500±8 d (12.32±0.02 years). It decays into helium-3 by beta decay as in this nuclear equation:

3H → 3He+ + e− + ν̄e

Among the total released energy of 18.6 keV, the part taken by the electron's kinetic energy varies, with an average of 5.7 keV, while the remaining energy is carried off by the nearly undetectable electron antineutrino. Beta particles from tritium can penetrate only about 6.0 millimetres (0.24 in) of air, and they are incapable of passing through the dead outermost layer of human skin. The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) appropriate for absolute neutrino mass measurements in the laboratory (the most recent experiment being KATRIN).

The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting.

Tritium is a radioactive isotope of hydrogen and is typically produced by bombarding lithium-6 with neutrons in a nuclear reactor. The lithium nucleus absorbs a neutron and splits into helium-4 and tritium. Tritium decays into helium-3 with a half-life of 12.3 years, so helium-3 can be produced by simply storing the tritium until it undergoes radioactive decay. Because tritium forms a stable compound with oxygen (tritiated water) while helium-3 does not, the helium-3 that outgasses from the stored tritium can be collected continuously.
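
As a rough illustration of how this "store and wait" scheme scales, the sketch below applies simple exponential decay with the 12.32-year half-life quoted earlier. The one-kilogram starting inventory is an arbitrary illustrative figure, not a real stockpile number.

```python
# Minimal sketch of helium-3 accumulation from stored tritium, assuming simple
# exponential decay with a 12.32-year half-life (figure quoted earlier).
# The 1 kg starting inventory is purely illustrative.

T_HALF_YEARS = 12.32

def he3_grams(initial_tritium_g: float, years: float) -> float:
    """Grams of 3He accumulated after storing the given mass of 3H for `years`."""
    decayed_fraction = 1.0 - 0.5 ** (years / T_HALF_YEARS)
    # each decayed tritium atom yields one helium-3 atom of almost identical mass
    return initial_tritium_g * decayed_fraction

for years in (1, 5, 12.32, 25):
    print(f"{years:>5} y of storage -> {he3_grams(1000.0, years):6.1f} g of 3He per kg of 3H")
```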

Tritium is a critical component of nuclear weapons and historically it was produced and stockpiled primarily for this application. The decay of tritium into helium-3 reduces the explosive power of the fusion warhead, so periodically the accumulated helium-3 must be removed from warhead reservoirs and tritium in storage. Helium-3 removed during this process is marketed for other applications.

For decades this has been, and remains, the principal source of the world's helium-3. Since the signing of the START I Treaty in 1991 the number of nuclear warheads that are kept ready for use has decreased. This has reduced the quantity of helium-3 available from this source. Helium-3 stockpiles have been further diminished by increased demand, primarily for use in neutron radiation detectors and medical diagnostic procedures. US industrial demand for helium-3 reached a peak of 70000 litres (approximately 8 kg) per year in 2008. Price at auction, historically about $100 per litre, reached as high as $2000 per litre. Since then, demand for helium-3 has declined to about 6000 litres per year due to the high cost and efforts by the DOE to recycle it and find substitutes. Assuming a density of 114 g/m3, at $100/L helium-3 would be about a thirtieth as expensive as tritium (roughly $880/g vs. roughly $30000/g), while at $2000 per litre it would be about half as expensive as tritium ($17540/g vs. $30000/g).
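
The litre-to-gram price conversion above is easy to reproduce. The short sketch below uses the 114 g/m3 density figure from the text (0.114 g per litre of gas); the function and constant names are just for illustration.

```python
# Reproducing the price comparison above: convert a price per litre of
# helium-3 gas into a price per gram, using the 114 g/m^3 (0.114 g/L)
# density figure assumed in the text.

DENSITY_G_PER_L = 0.114        # 114 g/m^3, as assumed above
TRITIUM_PRICE_PER_G = 30000.0  # rough tritium price quoted above ($/g)

def price_per_gram(price_per_litre: float) -> float:
    return price_per_litre / DENSITY_G_PER_L

for per_litre in (100.0, 2000.0):
    per_gram = price_per_gram(per_litre)
    print(f"${per_litre:>6.0f}/L -> ${per_gram:>8,.0f}/g "
          f"({per_gram / TRITIUM_PRICE_PER_G:.2f}x the tritium price)")
```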

The DOE recognized the developing shortage of both tritium and helium-3, and began producing tritium by lithium irradiation at the Tennessee Valley Authority's Watts Bar Nuclear Generating Station in 2010. In this process tritium-producing burnable absorber rods (TPBARs) containing lithium in a ceramic form are inserted into the reactor in place of the normal boron control rods. Periodically the TPBARs are replaced and the tritium extracted.

Currently only two commercial nuclear reactors (Watts Bar Nuclear Plant Units 1 and 2) are being used for tritium production but the process could, if necessary, be vastly scaled up to meet any conceivable demand simply by utilizing more of the nation's power reactors. Substantial quantities of tritium and helium-3 could also be extracted from the heavy water moderator in CANDU nuclear reactors. India and Canada, the two countries with the largest heavy water reactor fleet, are both known to extract tritium from moderator/coolant heavy water, but those amounts are not nearly enough to satisfy global demand of either tritium or helium-3.

As tritium is also produced inadvertently in various processes in light water reactors (see Tritium for details), extraction from those sources could be another source of helium-3. If the annual discharge of tritium (per 2018 figures) at La Hague reprocessing facility is taken as a basis, the amounts discharged (31.2 g at La Hague) are not nearly enough to satisfy demand, even if 100% recovery is achieved.

Annual discharge of tritium from nuclear facilities

Location | Nuclear facility | Closest waters | Liquid (TBq) | Steam (TBq) | Total (TBq) | Total (mg) | Year
United Kingdom | Heysham nuclear power station B | Irish Sea | 396 | 2.1 | 398 | 1,115 | 2019
United Kingdom | Sellafield reprocessing facility | Irish Sea | 423 | 56 | 479 | 1,342 | 2019
Romania | Cernavodă Nuclear Power Plant Unit 1 | Black Sea | 140 | 152 | 292 | 872 | 2018
France | La Hague reprocessing plant | English Channel | 11,400 | 60 | 11,460 | 32,100 | 2018
South Korea | Wolseong Nuclear Power Plant | East Sea | 107 | 80.9 | 188 | 671 | 2020
Taiwan | Maanshan Nuclear Power Plant | Luzon Strait | 35 | 9.4 | 44 | 123 | 2015
China | Fuqing Nuclear Power Plant | Taiwan Strait | 52 | 0.8 | 52 | 146 | 2020
China | Sanmen Nuclear Power Station | East China Sea | 20 | 0.4 | 20 | 56 | 2020
Canada | Bruce Nuclear Generating Station A, B | Great Lakes | 756 | 994 | 1,750 | 4,901 | 2018
Canada | Darlington Nuclear Generating Station | Great Lakes | 220 | 210 | 430 | 1,204 | 2018
Canada | Pickering Nuclear Generating Station Units 1-4 | Great Lakes | 140 | 300 | 440 | 1,232 | 2015
United States | Diablo Canyon Power Plant Units 1, 2 | Pacific Ocean | 82 | 2.7 | 84 | 235 | 2019

Uses

Helium-3 spin echo

Helium-3 can be used in spin-echo experiments probing surface dynamics, which are underway at the Surface Physics Group at the Cavendish Laboratory in Cambridge and in the Chemistry Department at Swansea University.

Neutron detection

Helium-3 is an important isotope in instrumentation for neutron detection. It has a high absorption cross section for thermal neutron beams and is used as a converter gas in neutron detectors. The neutron is converted through the nuclear reaction

n + 3He → 3H + 1H + 0.764 MeV

into charged particles, a tritium ion (T, 3H) and a hydrogen ion, or proton (p, 1H), which are then detected through the charge cloud they create in the stopping gas of a proportional counter or a Geiger–Müller tube.
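
Because a thermal neutron carries almost no momentum, the two charged products fly apart back to back and share the 0.764 MeV in inverse proportion to their masses. The sketch below works this out; it is elementary kinematics, not a description of any particular detector's electronics.

```python
# Energy sharing in n + 3He -> 3H + p for thermal (essentially zero-momentum)
# neutrons: equal and opposite momenta, so E = p^2/(2m) gives the lighter
# proton the larger share of the 0.764 MeV Q-value.

Q_MEV = 0.764
MASS_PROTON = 1.0   # mass numbers are accurate enough for this estimate
MASS_TRITON = 3.0

e_proton = Q_MEV * MASS_TRITON / (MASS_PROTON + MASS_TRITON)
e_triton = Q_MEV * MASS_PROTON / (MASS_PROTON + MASS_TRITON)

print(f"proton: {e_proton:.3f} MeV")   # ~0.573 MeV
print(f"triton: {e_triton:.3f} MeV")   # ~0.191 MeV
```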

Furthermore, the absorption process is strongly spin-dependent, which allows a spin-polarized helium-3 volume to transmit neutrons with one spin component while absorbing the other. This effect is employed in neutron polarization analysis, a technique which probes for magnetic properties of matter.

The United States Department of Homeland Security had hoped to deploy detectors to spot smuggled plutonium in shipping containers by their neutron emissions, but the worldwide shortage of helium-3 following the drawdown in nuclear weapons production since the Cold War has to some extent prevented this. As of 2012, DHS determined the commercial supply of boron-10 would support converting its neutron detection infrastructure to that technology.

Cryogenics

Helium-3 refrigerators are devices used in experimental physics for obtaining temperatures down to about 0.2 kelvin. By evaporative cooling of helium-4, a 1-K pot liquefies a small amount of helium-3 in a small vessel called a helium-3 pot. Evaporative cooling of this liquid helium-3 at low pressure then cools the helium-3 pot to a fraction of a kelvin; the pumping is usually done by adsorption, because the high price of helium-3 means it is kept in a closed system to avoid losses.

A dilution refrigerator uses a mixture of helium-3 and helium-4 to reach cryogenic temperatures as low as a few thousandths of a kelvin.

Nuclear magnetic resonance

Helium-3 nuclei have an intrinsic nuclear spin of 1/2 ħ and a relatively high gyromagnetic ratio. Because of this, it is possible to observe helium-3 with nuclear magnetic resonance (NMR). This analytical technique, usually called 3He-NMR, can be used to identify helium-containing compounds. It is, however, limited by the low abundance of helium-3 in comparison to helium-4, which is itself not NMR-active.
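
To give a sense of the numbers, the sketch below computes the 3He Larmor (resonance) frequency at typical MRI field strengths. The gyromagnetic ratio of about 32.4 MHz/T (and 42.6 MHz/T for protons, shown for comparison) is an approximate literature value quoted here for illustration, not taken from the text above.

```python
# Larmor frequency f = (gamma / 2*pi) * B0 for helium-3 NMR/MRI.
# The gyromagnetic-ratio values below are approximate literature figures,
# used only to illustrate why 3He is a practical NMR nucleus.

GAMMA_HE3_MHZ_PER_T = 32.43   # |gamma/2pi| for 3He (approximate)
GAMMA_H1_MHZ_PER_T = 42.58    # |gamma/2pi| for 1H (approximate)

for b0_tesla in (1.5, 3.0):
    f_he3 = GAMMA_HE3_MHZ_PER_T * b0_tesla
    f_h1 = GAMMA_H1_MHZ_PER_T * b0_tesla
    print(f"B0 = {b0_tesla} T: 3He resonates at {f_he3:.1f} MHz (1H at {f_h1:.1f} MHz)")
```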

Helium-3 can be hyperpolarized using non-equilibrium means such as spin-exchange optical pumping. During this process, circularly polarized infrared laser light, tuned to the appropriate wavelength, is used to excite electrons in an alkali metal, such as caesium or rubidium inside a sealed glass vessel. The angular momentum is transferred from the alkali metal electrons to the noble gas nuclei through collisions. In essence, this process effectively aligns the nuclear spins with the magnetic field in order to enhance the NMR signal.

The hyperpolarized gas may then be stored at pressures of 10 atm, for up to 100 hours. Following inhalation, gas mixtures containing the hyperpolarized helium-3 gas can be imaged with an MRI scanner to produce anatomical and functional images of lung ventilation. This technique is also able to produce images of the airway tree, locate unventilated defects, measure the alveolar oxygen partial pressure, and measure the ventilation/perfusion ratio. This technique may be critical for the diagnosis and treatment management of chronic respiratory diseases such as chronic obstructive pulmonary disease (COPD), emphysema, cystic fibrosis, and asthma.

Because a helium atom, or even two helium atoms, can be encased in fullerene-like cages, the NMR spectroscopy of this element can be a sensitive probe for changes of the carbon framework around it. Using carbon-13 NMR to analyze fullerenes directly is complicated by the many subtle differences among the carbon atoms in anything but the simplest, most symmetric structures.

Radio energy absorber for tokamak plasma experiments

Both MIT's Alcator C-Mod tokamak and the Joint European Torus (JET) have experimented with adding a little helium-3 to a H–D plasma to increase the absorption of radio-frequency (RF) energy to heat the hydrogen and deuterium ions, a "three-ion" effect.

Nuclear fuel

Comparison of neutronicity for different reactions

Fuel generation | Reactants | Products | Q | n/MeV
First-generation fusion fuels | 2D + 2D | 3He + n | 3.268 MeV | 0.306
First-generation fusion fuels | 2D + 2D | 3T + p | 4.032 MeV | 0
First-generation fusion fuels | 2D + 3T | 4He + n | 17.571 MeV | 0.057
Second-generation fusion fuel | 2D + 3He | 4He + p | 18.354 MeV | 0
Net result of 2D burning (sum of first 4 rows) | 6 2D | 2(4He + n + p) | 43.225 MeV | 0.046
Third-generation fusion fuels | 3He + 3He | 4He + 2 p | 12.86 MeV | 0
Third-generation fusion fuels | 11B + 1p | 3 4He | 8.68 MeV | 0
Current nuclear fuel | 235U + n | 2 FP + 2.5n | ~200 MeV | 0.0075

3He can be produced by the low-temperature fusion of deuterium with a proton (D–p fusion): 2H + 1p → 3He + γ + 4.98 MeV. If the fusion temperature is below that at which the helium nuclei themselves can fuse, the reaction produces a high-energy helium-3 nucleus, which quickly acquires an electron, producing a stable light helium ion that can be utilized directly as a source of electricity without producing dangerous neutrons.

The fusion reaction rate increases rapidly with temperature until it maximizes and then gradually drops off. The DT rate peaks at a lower temperature (about 70 keV, or 800 million kelvins) and at a higher value than other reactions commonly considered for fusion energy.

3He can be used in fusion reactions by either of the reactions 2H + 3He → 4He + 1p + 18.3 MeV, or 3He + 3He → 4He + 2 1p + 12.86 MeV.

The conventional deuterium + tritium ("D–T") fusion process produces energetic neutrons which render reactor components radioactive with activation products. The appeal of helium-3 fusion stems from the aneutronic nature of its reaction products. Helium-3 itself is non-radioactive. The lone high-energy by-product, the proton, can be contained by means of electric and magnetic fields. The momentum energy of this proton (created in the fusion process) will interact with the containing electromagnetic field, resulting in direct net electricity generation.

Because of the higher Coulomb barrier, the temperatures required for 2H + 3He fusion are much higher than those of conventional D–T fusion. Moreover, since both reactants need to be mixed together to fuse, reactions between nuclei of the same reactant will occur, and the D–D reaction (2H + 2H) does produce a neutron. Reaction rates vary with temperature, but the D–3He reaction rate is never greater than 3.56 times the D–D reaction rate. Therefore, fusion using D–3He fuel at the right temperature and a D-lean fuel mixture can produce a much lower neutron flux than D–T fusion, but it is not clean, negating some of its main attraction.

The second possibility, fusing 3He with itself (3He + 3He), requires even higher temperatures (since now both reactants have a +2 charge), and thus is even more difficult than the D-3He reaction. It offers a theoretical reaction that produces no neutrons; the charged protons produced can be contained in electric and magnetic fields, which in turn directly generates electricity. 3He + 3He fusion is feasible as demonstrated in the laboratory and has immense advantages, but commercial viability is many years in the future.

The amounts of helium-3 needed as a replacement for conventional fuels are substantial by comparison to amounts currently available. The total amount of energy produced in the 2D + 3He reaction is 18.4 MeV, which corresponds to some 493 megawatt-hours (4.93×108 W·h) per three grams (one mole) of 3He. If the total amount of energy could be converted to electrical power with 100% efficiency (a physical impossibility), it would correspond to about 30 minutes of output of a gigawatt electrical plant per mole of 3He. Thus, a year's production (at 6 grams for each operation hour) would require 52.5 kilograms of helium-3. The amount of fuel needed for large-scale applications can also be put in terms of total consumption: electricity consumption by 107 million U.S. households in 2001 totaled 1,140 billion kW·h (1.14×1015 W⋅h). Again assuming 100% conversion efficiency, 6.7 tonnes per year of helium-3 would be required for that segment of the energy demand of the United States, 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency.
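
The fuel-requirement arithmetic in the preceding paragraph can be reproduced directly from the 18.4 MeV reaction energy and Avogadro's number, as in the sketch below. It keeps the text's idealised assumption of 100% conversion of fusion energy to electricity.

```python
# Reproducing the D + 3He fuel arithmetic above, under the text's idealised
# assumption of 100% conversion of fusion energy to electricity.

MEV_TO_JOULE = 1.602e-13
AVOGADRO = 6.022e23
ENERGY_PER_REACTION_MEV = 18.4     # per D + 3He reaction, from the text
MOLAR_MASS_HE3_G = 3.016

energy_per_mole_j = ENERGY_PER_REACTION_MEV * MEV_TO_JOULE * AVOGADRO
energy_per_mole_mwh = energy_per_mole_j / 3.6e9
print(f"Energy per mole (~3 g) of 3He: {energy_per_mole_mwh:.0f} MWh")  # ~493 MWh

# US household demand quoted above: 1.14e15 Wh per year
annual_demand_wh = 1.14e15
moles_needed = annual_demand_wh / (energy_per_mole_mwh * 1e6)
tonnes_needed = moles_needed * MOLAR_MASS_HE3_G / 1e6
print(f"3He needed for that demand:    {tonnes_needed:.1f} tonnes/year")  # roughly 7 t/yr
```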

A second-generation approach to controlled fusion power involves combining helium-3 and deuterium, 2D. This reaction produces an alpha particle and a high-energy proton. The most important potential advantage of this fusion reaction for power production as well as other applications lies in its compatibility with the use of electrostatic fields to control fuel ions and the fusion protons. High speed protons, as positively charged particles, can have their kinetic energy converted directly into electricity, through use of solid-state conversion materials as well as other techniques. Potential conversion efficiencies of 70% may be possible, as there is no need to convert proton energy to heat in order to drive a turbine-powered electrical generator.

He-3 power plants

There have been many claims about the capabilities of helium-3 power plants. According to proponents, fusion power plants operating on deuterium and helium-3 would offer lower capital and operating costs than their competitors due to less technical complexity, higher conversion efficiency, smaller size, the absence of radioactive fuel, no air or water pollution, and only low-level radioactive waste disposal requirements. Recent estimates suggest that about $6 billion in investment capital will be required to develop and construct the first helium-3 fusion power plant. Financial break even at today's wholesale electricity prices (5 US cents per kilowatt-hour) would occur after five 1-gigawatt plants were on line, replacing old conventional plants or meeting new demand.

The reality is not so clear-cut. The most advanced fusion programs in the world are inertial confinement fusion (such as the National Ignition Facility) and magnetic confinement fusion (such as ITER and Wendelstein 7-X). In the case of the former, there is no solid roadmap to power generation. In the case of the latter, commercial power generation is not expected until around 2050. In both cases, the type of fusion discussed is the simplest: D–T fusion. The reason for this is the very low Coulomb barrier for this reaction; for D + 3He, the barrier is much higher, and it is even higher for 3He–3He. The immense cost of reactors like ITER and the National Ignition Facility is largely due to their immense size, yet scaling up to higher plasma temperatures would require reactors far larger still. The 14.7 MeV proton and 3.6 MeV alpha particle from D–3He fusion, plus the higher conversion efficiency, mean that more electricity is obtained per kilogram than with D–T fusion (17.6 MeV), but not that much more. As a further downside, the rates of reaction for helium-3 fusion are not particularly high, requiring a reactor that is larger still, or more reactors, to produce the same amount of electricity.

In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle".

Alternatives to He-3

To attempt to work around this problem of massively large power plants that may not even be economical with D–T fusion, let alone the far more challenging D–3He fusion, a number of other reactors have been proposed, such as the Fusor, Polywell, and Focus fusion. Many of these concepts have fundamental problems with achieving a net energy gain, and they generally attempt to achieve fusion in thermal disequilibrium, something that could potentially prove impossible; consequently, these long-shot programs tend to have trouble garnering funding despite their low budgets. Unlike the "big" and "hot" fusion systems, if such systems worked, they could scale to the higher-barrier aneutronic fuels, and so their proponents tend to promote p–B fusion, which requires no exotic fuel such as helium-3.

Wetware computer

From Wikipedia, the free encyclopedia
Diversity of neuronal morphologies in the auditory cortex

A wetware computer is an organic computer (also known as an artificial organic brain or a neurocomputer) composed of organic material ("wetware") such as living neurons. Wetware computers composed of neurons differ from conventional computers because they use biological materials, and they offer the possibility of substantially more energy-efficient computing. While a wetware computer is still largely conceptual, there has been limited success with construction and prototyping, which has acted as a proof of the concept's realistic application to computing in the future. The most notable prototypes have stemmed from the research completed by biological engineer William Ditto during his time at the Georgia Institute of Technology. His work constructing a simple neurocomputer capable of basic addition from leech neurons in 1999 was a significant discovery for the concept. This research was a primary example driving interest in creating these artificially constructed, but still organic, brains.

Image: cultured brain cells forming a neural network, highlighting connections between neurons. Red stain marks neurites (axons and dendrites) and blue stain marks cell nuclei; such structures reflect how biological systems process information in wetware and organic computing.

Organic computers, or wetware, are a prospective technology that would replace the central processing unit, the traditional fundamental component of a desktop or personal computer. They utilize the organic matter of living tissue cells, which act like the transistors of a computer hardware system by acquiring, storing, and analyzing data. Wetware is the name given to the computational properties of living systems, particularly human neural tissue, which allow parallel and self-organizing information processing via biochemical and electrical interactions. Wetware is distinct from hardware systems in that it is based on dynamic mechanisms like synaptic plasticity and neurotransmitter diffusion, which provide unique benefits in terms of adaptability and robustness.

Origins and theoretical foundations

The term wetware came from cyberpunk fiction, notably through Gibson's Neuromancer, but was quickly taken up in scientific literature to describe computation by biological material. Theories of early biological computation borrowed from Alan Turing's morphogenesis model, which showed that chemical interactions could produce complex patterns without centralized control. Hopfield's associative memory networks also provided a foundation for biological information systems with fault tolerance and self-organization.

Major characteristics and processes

Biological wetware systems demonstrate dynamic reconfigurability underpinned by neuroplasticity and enable continuous learning and adaptation. Reaction-diffusion-based computing and molecular logic gates allow spatially parallel information processing unachievable in conventional systems. These systems also show fault tolerance and self-repair at the cellular and network level. The development of cerebral organoids—miniature lab-grown brains—demonstrates spontaneous learning behavior and suggests biological tissue as a viable computational substrate.

Overview

The concept of wetware is an application of specific interest to the field of computer manufacturing. Moore's law, which states that the number of transistors that can be placed on a silicon chip doubles roughly every two years, has acted as a goal for the industry for decades, but as the size of computers continues to decrease, meeting this goal has become more difficult, threatening to reach a plateau. Because of the difficulty of shrinking computers further, given the size limitations of transistors and integrated circuits, wetware provides an unconventional alternative. A wetware computer composed of neurons is an ideal concept because, unlike conventional materials which operate in binary (on/off), a neuron can shift between thousands of states, constantly altering its chemical conformation and redirecting electrical pulses through over 200,000 channels in any of its many synaptic connections. Because of this large difference in the possible settings for any one neuron, compared to the binary limitations of conventional computers, the space limitations are far less restrictive.

Background

The concept of wetware is distinct and unconventional, drawing only loose parallels with the hardware and software of conventional computers. While hardware is understood as the physical architecture of traditional computational devices, comprising integrated circuits and supporting infrastructure, software represents the encoded architecture of storage and instructions. Wetware is a separate concept that uses the formation of organic molecules, mostly complex cellular structures (such as neurons), to create a computational device such as a computer. In wetware, the ideas of hardware and software are intertwined and interdependent. The molecular and chemical composition of the organic or biological structure would represent not only the physical structure of the wetware but also the software, being continually reprogrammed by the discrete shifts in electrical pulses and chemical concentration gradients as the molecules change their structures to communicate signals. The responsiveness of a cell, proteins, and molecules to changing conformations, both within their structures and around them, ties the idea of internal programming and external structure together in a way that is alien to the current model of conventional computer architecture.

The structure of wetware represents a model where the external structure and internal programming are interdependent and unified; meaning that changes to the programming or internal communication between molecules of the device would represent a physical change in the structure. The dynamic nature of wetware borrows from the function of complex cellular structures in biological organisms. The combination of "hardware" and "software" into one dynamic, and interdependent system which uses organic molecules and complexes to create an unconventional model for computational devices is a specific example of applied biorobotics.

The cell as a model of wetware

Cells in many ways can be seen as a naturally occurring form of wetware, much as the human brain is the preexisting model system for complex wetware. In his book Wetware: A Computer in Every Living Cell (2009), Dennis Bray explains his theory that cells, the most basic form of life, are just highly complex computational structures, like a computer. To simplify one of his arguments, a cell can be seen as a type of computer using its own structured architecture. In this architecture, much like a traditional computer, many smaller components operate in tandem to receive input, process the information, and compute an output. In an overly simplified, non-technical analysis, cellular function can be broken into the following components: information and instructions for execution are stored as DNA in the cell; RNA acts as a source of distinctly encoded input; and ribosomes and other molecular machinery process that input to output a protein. Bray's argument in favor of viewing cells and cellular structures as models of natural computational devices is important when considering the more applied theories of wetware in biorobotics.

Biorobotics

Wetware and biorobotics are closely related concepts, both of which borrow from similar overall principles. A biorobotic structure can be defined as a system modeled on a preexisting organic complex or model such as cells (neurons) or more complex structures like organs (the brain) or whole organisms. Unlike wetware, a biorobotic system is not always composed of organic molecules; it may instead be composed of conventional material designed and assembled in a structure similar to, or derived from, a biological model. Biorobotics has many applications and is used to address the challenges of conventional computer architecture. Conceptually, designing a program, robot, or computational device after a preexisting biological model such as a cell, or even a whole organism, gives the engineer or programmer the benefit of incorporating the evolutionary advantages of the model into the structure.

Effects on users

Wetware technologies such as BCIs and neuromorphic chips offer new possibilities for user autonomy. For those with disabilities, such systems could restore motor or sensory functions and enhance quality of life. However, these technologies raise ethical questions: cognitive privacy, consent over biological data, and risk of exploitation.

Without proper oversight, wetware technologies may also widen inequality, favoring those with access to cognitive or physical enhancements. Strong ethical frameworks, inclusive development practices, and open systems of governance grounded in neuroethics will be essential to reduce these risks and to ensure that wetware advances benefit all segments of society.

Applications and goals

Basic neurocomputer composed of leech neurons

In 1999 William Ditto and his team of researchers at Georgia Institute of Technology and Emory University created a basic form of a wetware computer capable of simple addition by harnessing leech neurons. Leeches were used as a model organism due to the large size of their neurons and the ease associated with their collection and manipulation. However, these results have never been published in a peer-reviewed journal, prompting questions about the validity of the claims. The computer was able to complete basic addition through electrical probes inserted into the neurons. The manipulation of electrical currents through neurons was not a trivial accomplishment, however. Unlike conventional computer architecture, which is based on binary on/off states, neurons are capable of existing in thousands of states and communicate with each other through synaptic connections, each containing over 200,000 channels. Each can be dynamically shifted in a process called self-organization to constantly form and reform new connections. A conventional computer program called the dynamic clamp, capable of reading the electrical pulses from the neurons in real time and interpreting them, was written by Eve Marder, a neurobiologist at Brandeis University. This program was used to manipulate the electrical signals being input into the neurons so that they could represent numbers and communicate with each other to return the sum. While this computer is a very basic example of a wetware structure, it is a small example with fewer neurons than are found in a more complex organ. Ditto believes that by increasing the number of neurons present, the chaotic signals sent between them will self-organize into a more structured pattern, such as the regulation of heart neurons into the constant heartbeat found in humans and other living organisms.
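
A dynamic clamp works by reading a neuron's membrane potential in real time, computing the current that a virtual conductance would carry, and injecting the compensating current back into the cell. The sketch below shows that control loop in outline; the read/inject functions are placeholders of my own, not the interface of Marder's actual program.

```python
# Conceptual sketch of a dynamic-clamp loop: measure the membrane potential,
# compute the current a virtual (artificial) conductance would carry, and
# command that current back into the neuron. The I/O functions below are
# placeholders, not a real data-acquisition interface.

def read_membrane_potential_mV() -> float:
    """Stand-in for an analog read of Vm; real code would talk to a DAQ board."""
    return -65.0

def inject_current_nA(i_cmd: float) -> None:
    """Stand-in for the analog output that commands the injected current."""
    pass

def dynamic_clamp(g_nS: float, e_rev_mV: float, n_steps: int = 10_000) -> None:
    """Emulate an artificial conductance g with reversal potential e_rev."""
    for _ in range(n_steps):                 # real systems run this at tens of kHz
        v_m = read_membrane_potential_mV()
        i_pA = g_nS * (v_m - e_rev_mV)       # current through the virtual conductance
        inject_current_nA(-i_pA * 1e-3)      # inject the equal-and-opposite current

dynamic_clamp(g_nS=5.0, e_rev_mV=0.0)        # illustrative excitatory conductance
```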

Biological models for conventional computing

After his work creating a basic computer from leech neurons, Ditto continued to work not only with organic molecules and wetware but also on the concept of applying the chaotic nature of biological systems and organic molecules to conventional materials and logic gates. Chaotic systems have advantages for generating patterns and computing higher-order functions like memory, arithmetic logic, and input/output operations. In his article Construction of a Chaotic Computer Chip, Ditto discusses the programming advantages of chaotic systems, whose greater sensitivity allows the logic gates of his conceptual chaotic chip to be reconfigured rapidly. The main difference between a chaotic computer chip and a conventional computer chip is the reconfigurability of the chaotic system. Unlike a traditional computer chip, where a programmable gate array element must be reconfigured through the switching of many single-purpose logic gates, a chaotic chip can reconfigure all logic gates through the control of the pattern generated by the non-linear chaotic element.
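
The reconfigurable-gate idea can be illustrated with a toy "chaogate": a single chaotic map whose inputs perturb its initial state and whose output is thresholded, so that changing only a bias and a threshold turns the same element into different Boolean gates. The brute-force parameter search below is my own illustrative device, not Ditto's published design.

```python
# Toy "chaogate": the logistic map acts as a reconfigurable logic element.
# The two binary inputs perturb the map's initial state, the map is iterated
# once, and the result is thresholded. A different (bias, delta, threshold)
# triple turns the same element into a different gate; here the triple is
# found by a coarse brute-force search, an illustrative shortcut.
import itertools

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

def chaogate(a: int, b: int, bias: float, delta: float, threshold: float) -> int:
    x0 = bias + delta * (a + b)          # encode the inputs in the initial state
    return 1 if logistic(x0) > threshold else 0

def find_parameters(truth_table):
    """Search a coarse grid for parameters that reproduce the given gate."""
    grid = [i / 100.0 for i in range(5, 100, 5)]
    for bias, delta, threshold in itertools.product(grid, repeat=3):
        if all(chaogate(a, b, bias, delta, threshold) == out
               for (a, b), out in truth_table.items()):
            return bias, delta, threshold
    raise ValueError("no parameters found on this grid")

NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print("NAND parameters:", find_parameters(NAND))
print("XOR parameters: ", find_parameters(XOR))
```

Note that the same element also realizes XOR, a gate a single linear threshold unit cannot implement, which is the point of exploiting the non-linear (chaotic) response.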

Impact of wetware in cognitive biology

Cognitive biology evaluates cognition as a basic biological function. W. Tecumseh Fitch, a professor of cognitive biology at the University of Vienna, is a leading theorist on ideas of cellular intentionality. The idea is that not only do whole organisms have a sense of "aboutness", or intentionality, but single cells also carry a sense of intentionality through their ability to adapt and reorganize in response to certain stimuli. Fitch discusses the idea of nano-intentionality, specifically with regard to neurons and their ability to rearrange themselves to create neural networks. He considers the ability of cells such as neurons to respond independently to stimuli such as damage to be a form of "intrinsic intentionality" in cells, explaining that "while at a vastly simpler level than intentionality at the human cognitive level, I propose that this basic capacity of living things [response to stimuli] provides the necessary building blocks for cognition and higher-order intentionality." Fitch describes the value of his research to specific areas of computer science such as artificial intelligence and computer architecture. He states "If a researcher aims to make a conscious machine, doing it with rigid switches (whether vacuum tubes or static silicon chips) is barking up the wrong tree." Fitch believes that an important aspect of the development of areas such as artificial intelligence is wetware with nano-intentionality and an autonomous ability to adapt and restructure itself.

In a review of the above-mentioned research conducted by Fitch, Daniel Dennett, a professor at Tufts University, discusses the importance of the distinction between the concept of hardware and software when evaluating the idea of wetware and organic material such as neurons. Dennett discusses the value of observing the human brain as a preexisting example of wetware. He sees the brain as having "the competence of a silicon computer to take on an unlimited variety of temporary cognitive roles." Dennett disagrees with Fitch on certain areas, such as the relationship of software/hardware versus wetware, and what a machine with wetware might be capable of. Dennett highlights the importance of additional research into human cognition to better understand the intrinsic mechanisms by which the human brain can operate, to better create an organic computer.

Medical applications

Wetware computers should not be confused with brain-on-a-chip devices, which are mostly aimed at replacing animal models in preclinical drug screening. Modern wetware computers use similar technology derived from the brain-on-a-chip field, but medical applications of wetware computing specifically have not been established.

Ethical and philosophical implications

Wetware computers may have substantial ethical implications, for instance related to the potential for sentience and suffering, and to dual-use technology.

Moreover, in some cases the human brain itself may be connected as a kind of "wetware" to other information technology systems which may also have large social and ethical implications, including issues related to intimate access to people's brains. For example, in 2021 Chile became the first country to approve neurolaw that establishes rights to personal identity, free will and mental privacy.

The concept of artificial insects may raise substantial ethical questions, including questions related to the decline in insect populations.

It is an open question whether human cerebral organoids could develop a degree or form of consciousness. Whether and how they could acquire moral status, with related rights and limits, may also become a question in the future. There is research on how consciousness could be detected. As cerebral organoids may acquire human brain-like neural function, subjective experience and consciousness may be feasible. Moreover, it may be possible that they acquire such function upon transplantation into animals. A study notes that it may, in various cases, be morally permissible "to create self-conscious animals by engrafting human cerebral organoids, but in the case, the moral status of such animals should be carefully considered".

Applications

Wetware has driven innovations in brain-computer interfaces (BCIs), allowing neural activity to control external devices and enabling people with disabilities to regain communication and movement. Neuromorphic engineering, which mimics neural architectures using silicon, has resulted in low-power, highly adaptive artificial systems.

Synthetic biology has enabled the development of programmable biological processors for diagnostics and smart therapeutics. Brain organoids are also being used for computational pattern recognition and memory emulation. Large-scale international efforts like the Human Brain Project aim to simulate the entire human brain using insights from wetware.

Evaluating potential and limitations

The core advantage of wetware is its potential to overcome the rigidity and energy inefficiencies of binary transistor-based systems. Digital systems operate through fixed binary pathways and consume increasing energy as computational loads increase. Wetware, in contrast, uses decentralized and adaptive data flow that mimics biology. Notwithstanding the encouraging advances, several challenges hinder the effective utilization of wetware computing systems. Scalability is problematic due to the inherent variability of biological systems and their responsiveness to environmental factors, which makes large-scale implementation difficult. Additionally, the absence of standardization when combining silicon and biological systems hampers reproducibility and cooperation between research groups. Biological systems must also be carefully stabilized against genetic drift and contamination to maintain reliable computational functionality.

Good parts – Replacing binary systems with organic cell structures opens the door to decentralized adaptive systems. Cells naturally form clusters and connections, much like neurons transmitting electrical and biochemical signals. Such a shift would increase scalability and efficiency, enabling users to interact with information in an intuitive and organic manner. Still, biological systems are sensitive to environmental changes, which presents challenges for standardization and reproducibility. Additionally, ethical concerns remain, especially in using living neural tissue and lab-grown brain constructs.

Bad parts – Despite its promise, organic computing currently suffers from major limitations. Transistors still dominate computer architecture with a binary "on/off" model that restricts long-term energy efficiency and adaptability. As a result, personal computers in everyday use, whether for work, games, or research, often contribute to higher energy consumption and environmental impact.

Future applications

While there have been few major developments in the creation of an organic computer since the neuron-based calculator developed by Ditto in the 1990s, research continues to push the field forward; in 2023 a functioning computer was constructed by researchers at the University of Illinois Urbana-Champaign using 80,000 mouse neurons as a processor that can detect light and electrical signals. Projects such as Ditto's modeling of chaotic pathways in silicon chips have made discoveries in ways of organizing traditional silicon chips and structuring computer architecture to be more efficient and better structured. Ideas emerging from the field of cognitive biology also help to continue to push discoveries in ways of structuring systems for artificial intelligence, to better imitate preexisting systems in humans.

In a proposed fungal computer using basidiomycetes, information is represented by spikes of electrical activity, a computation is implemented in a mycelium network, and an interface is realized via fruit bodies.

Connecting cerebral organoids (including computer-like wetware) with other nerve tissues may become feasible in the future, as is the connection of physical artificial neurons (not necessarily organic) and the control of muscle tissue. External modules of biological tissue could trigger parallel trains of stimulation back into the brain. All-organic devices could be advantageous because they could be biocompatible, which may allow them to be implanted into the human body. This may enable treatments of certain diseases and injuries to the nervous system.

Prototypes

  • In late 2021, scientists, including two from Cortical Labs, demonstrated that grown brain cells integrated into digital systems can carry out goal-directed tasks with measurable performance scores. In particular, the human brain cells learned to play a simulated Pong (via electrophysiological stimulation), which they learned faster than known machine intelligence systems, albeit to a lower skill level than either AI or human players. Moreover, the study suggests it provides "first empirical evidence" of differences in information-processing capacity between neurons from different species, as the human brain cells performed better than mouse cells.
  • Also in December 2021, researchers from Max Planck Institute for Polymer Research reported the development of organic low-power neuromorphic electronics which they built into a robot, enabling it to learn sensorimotorically within the real world, rather than via simulations. For the chip, polymers were used and coated with an ion-rich gel to enable the material to carry an electric charge like real neurons.
  • In 2022, researchers from the Max Planck Institute for Polymer Research, demonstrated an artificial spiking neuron based on polymers that operates in the biological wetware, enabling synergetic operation between the artificial and biological components.

Companies active in wetware computing

Three companies are focusing on wetware computing using living neurons.

Convergence of AI and wetware

One technology developing today is the fusion of artificial intelligence (AI) with wetware. Modern research shows that hybrid systems combining living neural networks with AI can enable self-repair, real-time adaptation, and emotional intelligence. These systems are more flexible than conventional AI and can integrate learning and memory in real time. Such integration lays the foundation for AI that mirrors human cognition and behavior, potentially creating intelligent systems grounded in neuroscience.

Neural networks embodied in AI systems could facilitate continuous learning, emotional processing, and fault tolerance more than existing silicon-based implementations. Additionally, AI systems based on neuroethical principles could uphold transparency, fairness, and autonomy. While early research is ongoing, the integration of wetware and artificial intelligence seeks to redefine both fields with the possibility of creating more human-like, moral, and resilient intelligent systems.

Cognitive computer

From Wikipedia, the free encyclopedia

A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.

In 2023, IBM's proof-of-concept NorthPole chip (optimized for 2-, 4- and 8-bit precision) achieved remarkable performance in image recognition.

In 2013, IBM developed Watson, a cognitive computer that uses neural networks and deep learning techniques. The following year, it developed the TrueNorth microchip architecture, which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers. In 2017, Intel announced its own version of a cognitive chip, Loihi, which it intended to make available to university and research labs in 2018. Intel (most notably with its Pohoiki Beach and Pohoiki Springs systems), Qualcomm, and others are steadily improving neuromorphic processors.

IBM TrueNorth chip

DARPA SyNAPSE board with 16 TrueNorth chips

TrueNorth was a neuromorphic CMOS integrated circuit produced by IBM in 2014. It is a manycore processor network-on-chip design, with 4096 cores, each one having 256 programmable simulated neurons for a total of just over a million neurons. In turn, each neuron has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its basic transistor count is 5.4 billion.

In 2023, Zhejiang University and Alibaba developed Darwin3, a neuromorphic chip. Designed around 2023, it is considerably more recent than IBM's TrueNorth or Intel's Loihi.

Details

Because memory, computation, and communication are handled in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von Neumann-architecture bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density that is 1/10,000th that of conventional microprocessors. The SyNAPSE chip operates at lower temperatures and power because it only draws the power necessary for computation. Skyrmions have been proposed as models of the synapse on a chip.

The neurons are emulated using a Linear-Leak Integrate-and-Fire (LLIF) model, a simplification of the leaky integrate-and-fire model.
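
The LLIF dynamics can be summarised in a few lines. The following is a minimal sketch; the leak, threshold, and weights are illustrative assumptions, not TrueNorth's actual core configuration.

```python
# Minimal linear-leak integrate-and-fire (LLIF) neuron sketch.
# All parameters are illustrative, not TrueNorth's real settings.
import numpy as np

def llif_run(spikes_in, weights, leak=1, threshold=100, v_reset=0):
    """spikes_in: (T, N) binary input spikes; weights: (N,) integer weights.
    Returns the neuron's output spike train over T ticks."""
    v = 0
    out = []
    for t in range(spikes_in.shape[0]):
        v += int(spikes_in[t] @ weights)   # integrate weighted input spikes
        v -= leak                          # constant (linear) leak each tick
        if v >= threshold:
            out.append(1)
            v = v_reset                    # reset after firing
        else:
            out.append(0)
            v = max(v, 0)                  # clamp below at zero (a common choice)
    return np.array(out)

# Example: three inputs firing randomly for 50 ticks.
rng = np.random.default_rng(1)
spk = (rng.random((50, 3)) < 0.3).astype(int)
print(llif_run(spk, weights=np.array([40, 25, 10])).sum(), "output spikes")
```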

According to IBM, it does not have a clock, operates on unary numbers, and computes by counting to a maximum of 19 bits. The cores are event-driven by using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network on chip (NOC).

IBM developed a new software ecosystem to program and use TrueNorth. It included a simulator, a new programming language, an integrated programming environment, and libraries. This lack of backward compatibility with any previous technology (e.g., C++ compilers) poses serious vendor lock-in risks and other adverse consequences that may prevent its commercialization in the future.

Research

In 2018, a cluster of TrueNorth chips network-linked to a master computer was used in stereo vision research that attempted to extract the depth of rapidly moving objects in a scene.

IBM NorthPole chip

In 2023, IBM released its NorthPole chip, a proof-of-concept for dramatically improving performance by intertwining compute with memory on-chip, thus eliminating the von Neumann bottleneck. It blends approaches from IBM's 2014 TrueNorth system with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. It can run ResNet-50 or Yolo-v4 image-recognition tasks about 22 times faster, with 25 times less energy and 5 times less space, than GPUs built on the same 12-nm node process with which it was fabricated. It includes 224 MB of RAM and 256 processor cores and can perform 2,048 operations per core per cycle at 8-bit precision, and 8,192 operations at 2-bit precision. It runs at between 25 and 425 MHz. It is an inferencing chip, but it cannot yet handle models such as GPT-4 because of memory and accuracy limitations.
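
As a rough consistency check on the figures above, the peak 8-bit throughput implied by 256 cores, 2,048 operations per core per cycle, and the quoted clock range can be multiplied out directly; this is a back-of-the-envelope estimate, not an IBM-published benchmark.

```python
# Peak throughput implied by the quoted NorthPole figures (illustrative only).
cores = 256
ops_per_core_per_cycle_8bit = 2048
for f_mhz in (25, 425):
    ops_per_s = cores * ops_per_core_per_cycle_8bit * f_mhz * 1e6
    print(f"{f_mhz} MHz -> {ops_per_s / 1e12:.0f} tera-ops/s at 8-bit")
# At 425 MHz this works out to roughly 223 tera-ops/s; at 2-bit precision
# (8,192 ops per core per cycle) the corresponding figure is four times higher.
```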

Intel Loihi chip

Pohoiki Springs

Pohoiki Springs is a system that incorporates Intel's self-learning neuromorphic chip Loihi, introduced in 2017 and perhaps named after the Hawaiian seamount Lōʻihi. Intel claims Loihi is about 1000 times more energy efficient than general-purpose computing systems used to train neural networks. In theory, Loihi supports both machine learning training and inference on the same silicon, independently of a cloud connection, and more efficiently than convolutional neural networks or deep learning neural networks. As an example, Intel points to a system for monitoring a person's heartbeat, taking readings after events such as exercise or eating, and using the chip to normalize the data and work out the ‘normal’ heartbeat. It can then spot abnormalities and handle new events or conditions.

The first iteration of the chip was made using Intel's 14 nm fabrication process and houses 128 clusters of 1,024 artificial neurons each, for a total of 131,072 simulated neurons. This offers around 130 million synapses, far fewer than the human brain's 800 trillion synapses and behind IBM's TrueNorth. Loihi is available for research purposes to more than 40 academic research groups as a USB form factor device.

In October 2019, researchers from Rutgers University published a research paper to demonstrate the energy efficiency of Intel's Loihi in solving simultaneous localization and mapping.

In March 2020, Intel and Cornell University published a research paper to demonstrate the ability of Intel's Loihi to recognize different hazardous materials, which could eventually help to "diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide".

Pohoiki Beach

Pohoiki Beach is a system built from 64 Loihi chips. Intel's second-generation chip, Loihi 2, was released in September 2021; it boasts faster speeds, higher-bandwidth inter-chip communications for enhanced scalability, increased capacity per chip, a more compact size due to process scaling, and improved programmability.

Hala Point

Hala Point packages 1,152 Loihi 2 processors produced on Intel 3 process node in a six-rack-unit chassis. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming 2,600 watts of power. It includes over 2,300 embedded x86 processors for ancillary computations.

Intel claimed in 2024 that Hala Point, built from Loihi 2 chips, was the world’s largest neuromorphic system, offering 10 times more neuron capacity and up to 12 times higher performance than its predecessor, Pohoiki Springs. The Darwin3 chip has been claimed to exceed some of these specifications.

Hala Point provides up to 20 quadrillion operations per second (20 petaops), with efficiency exceeding 15 trillion 8-bit operations per second per watt (15 TOPS/W) on conventional deep neural networks.

Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing 16 PB/s of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 TB/s of inter-chip bandwidth.

The system can process its 1.15 billion neurons 20 times faster than a human brain. Its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.

Loihi-based systems can perform inference and optimization using 100 times less energy at speeds as much as 50 times faster than CPU/GPU architectures.

Intel claims that Hala Point could enable continuous learning for future AI models such as large language models (LLMs), though much further research is needed.

SpiNNaker

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group at the Department of Computer Science, University of Manchester.

Criticism

Critics argue that a room-sized computer – as in the case of IBM's Watson – is not a viable alternative to a three-pound human brain. Some also cite the difficulty of bringing so many elements together in a single system, such as disparate sources of information and computing resources.

In 2021, The New York Times published Steve Lohr's article "What Ever Happened to IBM’s Watson?", which described several costly failures of IBM Watson. One of them, a cancer-related project called the Oncology Expert Advisor, was abandoned in 2016; during the collaboration, Watson could not use patient data and struggled to decipher doctors’ notes and patient histories.

The development of large language models (LLMs) has placed a new emphasis on cognitive computers, because the Transformer architecture that underpins LLMs demands enormous energy from GPUs and conventional computers. Cognitive computers use far less energy, but spike-timing-dependent plasticity (STDP) and current neuron models cannot yet match the accuracy of backpropagation, so ANN-to-SNN weight translations, together with quantization techniques such as quantization-aware training (QAT), post-training quantization (PTQ), and progressive quantization, are becoming popular, with their own limitations; a minimal conversion sketch follows.
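
The sketch below illustrates one simple form of ANN-to-SNN translation, rate-based conversion with weight normalisation. The weights and data are random placeholders and the single-layer, leak-free setup is a simplifying assumption, not a production conversion pipeline.

```python
# Rate-based ANN-to-SNN conversion sketch (placeholder weights and data).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))            # "trained" ANN weights (placeholder)
x = rng.random(8)                      # one input sample in [0, 1]

ann_out = np.maximum(0.0, W @ x)       # ANN reference output (ReLU layer)

# Normalise weights so the largest pre-activation is ~1; this keeps the
# integrate-and-fire firing rates below saturation.
scale = max(np.max(W @ x), 1e-6)
W_snn = W / scale

# Run the converted layer as integrate-and-fire neurons with rate coding.
T = 1000                               # simulation steps
v = np.zeros(4)
spike_counts = np.zeros(4)
for _ in range(T):
    v += W_snn @ x                     # constant input current each step
    fired = v >= 1.0
    spike_counts += fired
    v[fired] -= 1.0                    # "reset by subtraction" preserves rates

print("ANN (normalised):", np.round(ann_out / scale, 3))
print("SNN firing rates:", np.round(spike_counts / T, 3))  # should roughly match
```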

Fusion rocket

From Wikipedia, the free encyclopedia
A schematic of a fusion-driven rocket by NASA

A fusion rocket is a theoretical design for a rocket driven by fusion propulsion that could provide efficient and sustained acceleration in space without the need to carry a large fuel supply. The design requires fusion power technology beyond current capabilities, and much larger and more complex rockets.

Fusion nuclear pulse propulsion is one approach to using nuclear fusion energy to provide propulsion.

Fusion's main advantage is its very high specific impulse, while its main disadvantage is the (likely) large mass of the reactor. A fusion rocket may produce less radiation than a fission rocket, reducing the shielding mass needed. The simplest way of building a fusion rocket is to use hydrogen bombs, as proposed in Project Orion, but such a spacecraft would be massive and the Partial Nuclear Test Ban Treaty prohibits the use of such bombs. For that reason, bomb-based rockets would likely be limited to operating only in space. An alternate approach uses electrical (e.g. ion) propulsion with electric power generated by fusion instead of direct thrust.

Electricity generation vs. direct thrust

Spacecraft propulsion methods such as ion thrusters require electric power to run, but are highly efficient. In some cases their thrust is limited by the amount of power that can be generated (for example, a mass driver). An electric generator running on fusion power could drive such a ship. One disadvantage is that conventional electricity production requires a low-temperature energy sink, which is difficult (i.e. heavy) in a spacecraft. Direct conversion of the kinetic energy of fusion products into electricity mitigates this problem.

One attractive possibility is to direct the fusion exhaust out the back of the rocket to provide thrust without the intermediate production of electricity. This would be easier with some confinement schemes (e.g. magnetic mirrors) than with others (e.g. tokamaks). It is also more attractive for "advanced fuels" (see aneutronic fusion). Helium-3 propulsion would use the fusion of helium-3 atoms as a power source. Helium-3, an isotope of helium with two protons and one neutron, could be fused with deuterium in a reactor, and the resulting energy release could expel propellant out the back of the spacecraft. Helium-3 is proposed as a power source for spacecraft mainly because of its lunar abundance: scientists estimate that 1 million tons of accessible helium-3 are present on the Moon. By contrast, only 20% of the power produced by the D-T reaction could be used this way, while the other 80% is released as neutrons, which cannot be directed by magnetic fields or solid walls, would be difficult to direct towards thrust, and may in turn require shielding (a rough check of these energy fractions is sketched below). Helium-3 is produced via beta decay of tritium, which can in turn be produced from deuterium, lithium, or boron.
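
A quick check of the 20% figure follows, using the standard textbook energy splits of the two reactions (the MeV values are well-established physics, not taken from this article).

```python
# Fraction of fusion energy carried by charged products (directable by
# magnetic fields) versus neutrons, for D-T and D-3He.
reactions = {
    # reaction: (energy in charged products [MeV], energy in neutrons [MeV])
    "D-T   -> 4He + n": (3.5, 14.1),
    "D-3He -> 4He + p": (3.6 + 14.7, 0.0),
}
for name, (charged, neutrons) in reactions.items():
    total = charged + neutrons
    print(f"{name}: {100 * charged / total:.0f}% of energy in charged products")
# D-T: ~20% (the alpha particle); D-3He: ~100%, which is why it is attractive
# for direct thrust through a magnetic nozzle.
```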

Even if a self-sustaining fusion reaction cannot be produced, it might be possible to use fusion to boost the efficiency of another propulsion system, such as a VASIMR engine.

Confinement alternatives

Magnetic

To sustain a fusion reaction, the plasma must be confined. The most widely studied configuration for terrestrial fusion is the tokamak, a form of magnetic confinement fusion. Tokamaks currently weigh a great deal, so the thrust-to-weight ratio would seem unacceptable. In 2001, NASA's Glenn Research Center proposed a small-aspect-ratio spherical torus reactor for its "Discovery II" conceptual vehicle design. "Discovery II" could deliver a crewed payload of 172 metric tons to Jupiter in 118 days (or to Saturn in 212 days) using 861 metric tons of hydrogen propellant, plus 11 metric tons of Helium-3-Deuterium (D-He3) fusion fuel. The hydrogen is heated by the fusion plasma debris to increase thrust, at the cost of reduced exhaust velocity (348–463 km/s) and hence increased propellant mass (an illustrative delta-v estimate from these figures is sketched below).
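
An illustrative delta-v estimate can be made with the Tsiolkovsky rocket equation. The vehicle dry mass below is a made-up assumption (the concept's structural and reactor mass is not given here), so the result is only an order-of-magnitude sketch.

```python
# Rough Tsiolkovsky delta-v estimate for the "Discovery II" figures above.
import math

v_e = 400e3                  # exhaust velocity, m/s (within the quoted 348-463 km/s)
payload = 172e3              # kg, crewed payload
dry_vehicle = 1000e3         # kg, hypothetical structure + reactor mass (assumed)
propellant = 861e3 + 11e3    # kg, hydrogen propellant + D-He3 fusion fuel

m0 = dry_vehicle + payload + propellant   # initial mass
m1 = dry_vehicle + payload                # final mass after the burn
delta_v = v_e * math.log(m0 / m1)
print(f"delta-v ~ {delta_v / 1e3:.0f} km/s")  # roughly 200+ km/s with these assumptions
```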

Inertial

The main alternative to magnetic confinement is inertial confinement fusion (ICF), such as that proposed by Project Daedalus. A small pellet of fusion fuel (with a diameter of a couple of millimeters) would be ignited by an electron beam or a laser. To produce direct thrust, a magnetic field forms the pusher plate. In principle, the Helium-3-Deuterium reaction or an aneutronic fusion reaction could be used to maximize the energy in charged particles and to minimize radiation, but it is highly questionable whether using these reactions is technically feasible. Both the detailed design studies in the 1970s, the Orion drive and Project Daedalus, used inertial confinement. In the 1980s, Lawrence Livermore National Laboratory and NASA studied an ICF-powered "Vehicle for Interplanetary Transport Applications" (VISTA). The conical VISTA spacecraft could deliver a 100-tonne payload to Mars orbit and return to Earth in 130 days, or to Jupiter orbit and back in 403 days. 41 tonnes of deuterium/tritium (D-T) fusion fuel would be required, plus 4,124 tonnes of hydrogen expellant. The exhaust velocity would be 157 km/s.

The very large necessary mass and the challenge of managing the heat produced in space may make an ICF reactor unworkable in space travel.

Magnetized target

Magnetized target fusion (MTF) is a relatively new approach that combines the best features of the more widely studied magnetic confinement fusion (i.e. good energy confinement) and inertial confinement fusion (i.e. efficient compression heating and wall-free containment of the fusing plasma) approaches. Like the magnetic approach, the fusion fuel is confined at low density by magnetic fields while it is heated into a plasma, but like the inertial confinement approach, fusion is initiated by rapidly squeezing the target to dramatically increase fuel density, and thus temperature. MTF uses "plasma guns" (i.e. electromagnetic acceleration techniques) instead of powerful lasers, leading to low-cost, low-weight compact reactors. The NASA/MSFC Human Outer Planets Exploration (HOPE) group has investigated a crewed MTF propulsion spacecraft capable of delivering a 164-tonne payload to Jupiter's moon Callisto using 106-165 metric tons of propellant (hydrogen plus either D-T or D-He3 fusion fuel) in 249–330 days. This design would thus be considerably smaller and more fuel-efficient, due to its higher exhaust velocity (700 km/s), than the previously mentioned "Discovery II" and "VISTA" concepts.

Inertial electrostatic

Another popular confinement concept for fusion rockets is inertial electrostatic confinement (IEC), such as in the Farnsworth-Hirsch fusor or the Polywell variation under development by Energy-Matter Conversion Corporation (EMC2). The University of Illinois has defined a 500-tonne "Fusion Ship II" concept capable of delivering a 100-tonne crewed payload to Jupiter's moon Europa in 210 days. Fusion Ship II utilizes ion rocket thrusters (343 km/s exhaust velocity) powered by ten D-He3 IEC fusion reactors, and would need 300 tonnes of argon propellant for a 1-year round trip to the Jupiter system. Robert Bussard published a series of technical articles discussing its application to spaceflight throughout the 1990s; his work was popularised by an article in Analog Science Fiction and Fact, in which Tom Ligon described how the fusor would make for a highly effective fusion rocket.

Antimatter

A still more speculative concept is antimatter-catalyzed nuclear pulse propulsion, which would use antimatter to catalyze a fission and fusion reaction, allowing much smaller fusion explosions to be created. During the 1990s, an abortive design effort was conducted at Penn State University under the name AIMStar. The project would require more antimatter than can currently be produced, and some technical hurdles would need to be overcome before it became feasible.

Development projects

Neuromorphic computing

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neuromorphic_computing

Neuromorphic computing is a computing approach inspired by the human brain's structure and function. It uses artificial neurons to perform computations, mimicking neural systems for tasks such as perception, motor control, and multisensory integration. These systems, implemented in analog, digital, or mixed-mode VLSI, prioritize robustness, adaptability, and learning by emulating the brain’s distributed processing across small computing elements. This interdisciplinary field integrates biology, physics, mathematics, computer science, and electronic engineering to develop systems that emulate the brain’s morphology and computational strategies. Neuromorphic systems aim to enhance energy efficiency and computational power for applications including artificial intelligence, pattern recognition, and sensory processing.

History

Carver Mead proposed one of the first applications for neuromorphic engineering in the late 1980s. In 2006, researchers at Georgia Tech developed a field programmable neural array, a silicon-based chip modeling neuron channel-ion characteristics. In 2011, MIT researchers created a chip mimicking synaptic communication using 400 transistors and standard CMOS techniques.

In 2012 HP Labs researchers reported that Mott memristors exhibit volatile behavior at low temperatures, enabling the creation of neuristors that mimic neuron behavior and support Turing machine components. Also in 2012, Purdue University researchers presented a neuromorphic chip design using lateral spin valves and memristors, noted for energy efficiency.

In 2013, the Blue Brain Project created detailed digital models of rodent brains.

Neurogrid, developed by Brains in Silicon at Stanford University, used 16 NeuroCore chips, each emulating 65,536 neurons, with high energy efficiency in 2014. The 2014 BRAIN Initiative and IBM’s TrueNorth chip also contributed to neuromorphic advancements.

The 2016 BrainScaleS project, a hybrid neuromorphic supercomputer at the University of Heidelberg, operated 864 times faster than biological neurons.

In 2017, Intel unveiled its Loihi chip, using an asynchronous artificial neural network for efficient learning and inference. Also in 2017 IMEC’s self-learning chip, based on OxRAM, demonstrated music composition by learning from minuets.

In 2019, the European Union funded neuromorphic quantum computing to explore quantum operations using neuromorphic systems. In 2022, MIT researchers developed artificial synapses using protons for analog deep learning. Also in 2022, researchers at the Max Planck Institute for Polymer Research developed an organic artificial spiking neuron for in-situ neuromorphic sensing and biointerfacing.

Researchers reported in 2024 that chemical systems in liquid solutions can detect sound at various wavelengths, offering potential for neuromorphic applications.

Neurological inspiration

Neuromorphic engineering emulates the brain’s structure and operations, focusing on the analog nature of biological computation and the role of neurons in cognition. The brain processes information via neurons using chemical signals, abstracted into mathematical functions. Neuromorphic systems distribute computation across small elements, similar to neurons, using methods guided by anatomical and functional neural maps from electron microscopy and neural connection studies.

Implementation

Neuromorphic systems employ hardware such as oxide-based memristors, spintronic memories, threshold switches, and transistors. Software implementations train spiking neural networks using error backpropagation.
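
Because the spike function is non-differentiable, such training commonly substitutes a smooth surrogate derivative in the backward pass. The sketch below shows this idea for a single leaky integrate-and-fire neuron; the constants, the toy loss, and the omission of temporal credit assignment through the leak are simplifying assumptions, not any particular framework's implementation.

```python
# Surrogate-gradient training of one leaky integrate-and-fire neuron (sketch).
import numpy as np

def spike(v, v_th=1.0):
    """Non-differentiable Heaviside spike function used in the forward pass."""
    return float(v >= v_th)

def surrogate_grad(v, v_th=1.0, k=10.0):
    """Smooth stand-in for d(spike)/dv used only in the backward pass:
    the derivative of a steep sigmoid centred on the threshold."""
    s = 1.0 / (1.0 + np.exp(-k * (v - v_th)))
    return k * s * (1.0 - s)

rng = np.random.default_rng(0)
x = rng.random(5)            # presynaptic activity over 5 time steps
w = 0.1                      # single synaptic weight to be learned
lr, decay = 0.5, 0.9         # learning rate, membrane leak factor

# Toy objective: make the neuron spike at every step, loss = (1 - spike)^2.
for epoch in range(50):
    v, grad = 0.0, 0.0
    for t in range(len(x)):
        v = decay * v + w * x[t]                 # leaky integration
        out = spike(v)
        # dLoss/dw ~ dLoss/dout * surrogate(v) * dv/dw, ignoring history terms.
        grad += -2.0 * (1.0 - out) * surrogate_grad(v) * x[t]
        v *= (1.0 - out)                         # reset membrane after a spike
    w -= lr * grad / len(x)
print("learned weight:", round(w, 3))
```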

Neuromemristive systems

Neuromemristive systems use memristors to implement neuroplasticity, focusing on abstract neural network models rather than detailed biological mimicry. These systems enable applications in speech recognition, face recognition, and object recognition, and can replace conventional digital logic gates. The Caravelli-Traversa-Di Ventra equation describes memristive memory evolution, revealing tunneling phenomena and Lyapunov functions.

Neuromorphic sensors

Neuromorphic principles extend to sensors that mimic biological vision by registering brightness changes at each pixel individually, rather than capturing full frames, which reduces power consumption. An example applied to detecting light is the retinomorphic sensor or, when employed in an array, an event camera.
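
The per-pixel change-detection principle can be sketched in a few lines. This is an illustrative model under assumed thresholds, not any specific sensor's pipeline.

```python
# Toy event-camera model: emit (t, y, x, polarity) events when a pixel's
# log-intensity changes by more than a threshold since its last event.
import numpy as np

def events_from_frames(frames, threshold=0.15):
    """frames: (T, H, W) array of intensities in (0, 1]."""
    log_f = np.log(frames + 1e-6)
    ref = log_f[0].copy()               # per-pixel reference level
    events = []
    for t in range(1, frames.shape[0]):
        diff = log_f[t] - ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_f[t, y, x]  # update reference only where events fired
        # Pixels with no change emit nothing, which is the source of the power saving.
    return events

frames = np.random.default_rng(2).random((10, 4, 4))
print(len(events_from_frames(frames)), "events from a 10-frame clip")
```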

Ethical considerations

Neuromorphic systems raise the same ethical questions as those for other approaches to artificial intelligence. Daniel Lim argued that advanced neuromorphic systems could lead to machine consciousness, raising concerns about whether civil rights and other protocols should be extended to them. Legal debates, such as in Acohs Pty Ltd v. Ucorp Pty Ltd, question ownership of work produced by neuromorphic systems, as non-human-generated outputs may not be copyrightable.

Self-reference

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Sel...