
Saturday, March 14, 2026

Gaia hypothesis

From Wikipedia, the free encyclopedia
The study of planetary habitability is partly based upon extrapolation from knowledge of the Earth's conditions, as the Earth is the only planet currently known to harbour life (The Blue Marble, 1972 Apollo 17 photograph).

The Gaia hypothesis (/ˈɡaɪ.ə/), also known as the Gaia theory, Gaia paradigm, or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating complex system that helps to maintain and perpetuate the conditions for life on the planet.

The Gaia hypothesis was formulated by the chemist James Lovelock and co-developed by the microbiologist Lynn Margulis in the 1970s. Following the suggestion by his neighbour, novelist William Golding, Lovelock named the hypothesis after Gaia, the primordial deity who was sometimes personified as the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis.

Topics related to the Gaia hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of the hydrosphere, and other environmental variables that affect the habitability of Earth.

The Gaia hypothesis was initially criticized for being teleological, that is, for implying that the Earth purposefully maintains an atmosphere suitable for life, an interpretation Lovelock rejected. The hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence.

Overview

The Gaia hypothesis argues that organisms co-evolve with their environment: organisms influence the abiotic environment, not just the biological one, and the abiotic environment in turn shapes the biota through a broadly Darwinian process, suggesting a reciprocally evolving life-habitat system. Lovelock presented evidence of this biotic-abiotic relationship in the 1995 second edition of his book The Ages of Gaia, which traces the evolution of the atmosphere from the early world of heat-loving and methanogenic bacteria to today's oxygen-enriched atmosphere, which supports more complex life than that of primordial times. As each individual species or other system pursues its self-interest, their combined actions may have counterbalancing effects on the abiotic and biotic environment. Opponents of this view sometimes cite events that produced dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one at the end of the Archaean and the beginning of the Proterozoic eons.

Less accepted versions of the Gaia hypothesis claim that changes in the biosphere are brought about through the coordination of living organisms, which maintain those conditions through homeostasis. In some versions of the Gaia hypothesis, all lifeforms are considered part of one single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms.

Among the precursors of the Gaia hypothesis are Russian scientists such as Piotr Alekseevich Kropotkin (1842–1921), Rafail Vasil’evich Rizpolozhensky (1862 – c. 1922), Vladimir Ivanovich Vernadsky (1863–1945), and Vladimir Alexandrovich Kostitzin (1886–1963).

The Gaia paradigm was an influence on the deep ecology movement.

Details

The Gaia hypothesis posits that the Earth is a self-regulating complex system involving the biosphere, the atmosphere, the hydrosphere and the pedosphere, tightly coupled as an evolving system. The hypothesis contends that this system as a whole, called Gaia, seeks a physical and chemical environment optimal for contemporary life.

Gaia evolves through a cybernetic feedback system operated by the biota, leading to broad stabilization of the conditions of habitability in a full homeostasis. Many processes in the Earth's surface, essential for the conditions of life, depend on the interaction of living forms, especially microorganisms, with inorganic elements. These processes establish a global control system that regulates Earth's surface temperature, atmosphere composition and ocean salinity, powered by the global thermodynamic disequilibrium state of the Earth system.

The existence of a planetary homeostasis influenced by living forms had been observed previously in the field of biogeochemistry, and it is also investigated in other fields such as Earth system science. The originality of the Gaia hypothesis lies in its claim that such homeostatic balance is actively pursued with the goal of maintaining the optimal conditions for life, even when terrestrial or external events menace them.

Regulation of global surface temperature

Rob Rohde's palaeotemperature graphs

Since life started on Earth, the energy provided by the Sun has increased by 25–30%; however, the surface temperature of the planet has remained within the levels of habitability, reaching quite regular low and high margins. Lovelock has also hypothesised that methanogens produced elevated levels of methane in the early atmosphere, creating conditions akin to petrochemical smog and, in some respects, to the atmosphere of Titan. This, he suggests, helped to screen out ultraviolet light until the formation of the ozone layer, maintaining a degree of homeostasis. However, Snowball Earth research has suggested that "oxygen shocks" and reduced methane levels led, during the Huronian, Sturtian and Marinoan/Varanger Ice Ages, to a world that very nearly became a solid "snowball". These epochs are evidence against the ability of the pre-Phanerozoic biosphere to fully self-regulate.

Processing of the greenhouse gas CO2, explained below, plays a critical role in the maintenance of the Earth temperature within the limits of habitability.

The CLAW hypothesis, inspired by the Gaia hypothesis, proposes a feedback loop that operates between ocean ecosystems and the Earth's climate. The hypothesis specifically proposes that particular phytoplankton that produce dimethyl sulfide are responsive to variations in climate forcing, and that these responses lead to a negative feedback loop that acts to stabilise the temperature of the Earth's atmosphere.

Currently, the growth of the human population and the environmental impact of its activities, such as the multiplication of greenhouse gases, may cause negative feedbacks in the environment to turn into positive feedbacks. Lovelock has stated that this could bring extremely accelerated global warming, though he has since said the effects will likely occur more slowly.

Daisyworld simulations

Plots from a standard black and white Daisyworld simulation

In response to the criticism that the Gaia hypothesis seemingly required unrealistic group selection and cooperation between organisms, James Lovelock and Andrew Watson developed a mathematical model, Daisyworld, in which ecological competition underpinned planetary temperature regulation.

Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies, which are assumed to occupy a significant portion of the surface. The colour of the daisies influences the albedo of the planet such that black daisies absorb more light and warm the planet, while white daisies reflect more light and cool the planet. The black daisies are assumed to grow and reproduce best at a lower temperature, while the white daisies are assumed to thrive best at a higher temperature. As the temperature rises closer to the value the white daisies like, the white daisies outreproduce the black daisies, leading to a larger percentage of white surface, and more sunlight is reflected, reducing the heat input and eventually cooling the planet. Conversely, as the temperature falls, the black daisies outreproduce the white daisies, absorbing more sunlight and warming the planet. The temperature will thus converge to the value at which the reproductive rates of the plants are equal.

Lovelock and Watson showed that, over a limited range of conditions, this negative feedback due to competition can stabilize the planet's temperature at a value that supports life even as the energy output of the Sun changes, while a planet without life would show wide temperature swings. The percentage of white and black daisies continually changes to keep the temperature at the value at which the plants' reproductive rates are equal, allowing both life forms to thrive.
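The mechanism just described can be sketched in a few dozen lines of code. The sketch below is loosely based on the standard Watson and Lovelock formulation, but the specific parameter values (albedos of 0.25/0.5/0.75, a growth optimum of 22.5 °C, and a simple linearized coupling between local and planetary temperature) are common textbook simplifications assumed here for illustration, not taken from this article.

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0                # nominal solar flux at luminosity 1.0 (W m^-2), an assumed value
ALBEDO = {"ground": 0.5, "white": 0.75, "black": 0.25}
Q = 20.0                 # linearized heat-transfer coefficient between patches (K)
GAMMA = 0.3              # daisy death rate
T_OPT = 295.5            # optimal growth temperature (22.5 degrees C, in K)

def growth(T):
    """Parabolic growth rate: maximal at T_OPT, zero outside roughly 5-40 degrees C."""
    return max(0.0, 1.0 - 0.003265 * (T_OPT - T) ** 2)

def equilibrium(luminosity, steps=5000, dt=0.05):
    """Integrate daisy coverage to a steady state for a given solar luminosity."""
    a_w, a_b = 0.01, 0.01                                    # initial daisy fractions
    for _ in range(steps):
        bare = 1.0 - a_w - a_b
        A = (bare * ALBEDO["ground"] + a_w * ALBEDO["white"]
             + a_b * ALBEDO["black"])                        # planetary albedo
        T_e = (S * luminosity * (1.0 - A) / SIGMA) ** 0.25   # planetary temperature
        T_w = Q * (A - ALBEDO["white"]) + T_e                # local patch temperatures
        T_b = Q * (A - ALBEDO["black"]) + T_e
        a_w += dt * a_w * (bare * growth(T_w) - GAMMA)       # logistic competition
        a_b += dt * a_b * (bare * growth(T_b) - GAMMA)
        a_w, a_b = max(a_w, 0.001), max(a_b, 0.001)          # seed stock never dies out
    return T_e, a_w, a_b

for L in (0.8, 1.0, 1.2):
    T_alive, a_w, a_b = equilibrium(L)
    T_dead = (S * L * (1.0 - ALBEDO["ground"]) / SIGMA) ** 0.25  # lifeless planet
    print(f"L={L:.1f}  alive={T_alive - 273.15:5.1f} C  dead={T_dead - 273.15:5.1f} C  "
          f"white={a_w:.2f}  black={a_b:.2f}")
```

Running it for increasing luminosities shows the daisy populations shifting so that the planetary temperature stays near the daisies' optimum, while the lifeless comparison temperature swings by roughly 30 °C over the same luminosity range.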

It has been suggested that the results were predictable because Lovelock and Watson selected examples that produced the responses they desired.

Regulation of oceanic salinity

Ocean salinity has been roughly constant, at about 3.5%, for a very long time. Salinity stability in oceanic environments is important, as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constancy of ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested that salinity may also be strongly influenced by the circulation of seawater through hot basaltic rocks, which emerges at hot-water vents on mid-ocean ridges. However, the composition of seawater is far from equilibrium, and it is difficult to explain this fact without the influence of organic processes. One suggested explanation lies in the formation of salt plains throughout Earth's history. It is hypothesized that these are created by bacterial colonies that fix ions and heavy metals during their life processes.
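A back-of-the-envelope steady-state calculation shows why this was a mystery: sodium's residence time in the ocean is short compared with the age of the oceans, so without removal processes salinity would climb steadily. The figures below are rough order-of-magnitude values assumed for illustration, not numbers taken from this article.

```python
# Illustrative steady-state box model of oceanic sodium.
# All three figures are rough literature-style estimates (assumptions).
OCEAN_MASS_KG = 1.4e21              # total mass of seawater
NA_CONC_KG_PER_KG = 0.0108          # ~10.8 g of sodium per kg of seawater
RIVER_NA_INPUT_KG_PER_YR = 2.0e11   # annual river delivery of sodium to the sea

inventory = OCEAN_MASS_KG * NA_CONC_KG_PER_KG          # standing stock of Na (kg)
residence_time_yr = inventory / RIVER_NA_INPUT_KG_PER_YR  # stock / input flux

print(f"Na inventory:   {inventory:.2e} kg")
print(f"Residence time: {residence_time_yr / 1e6:.0f} million years")
```

The result, on the order of 10^8 years, is far less than the roughly four billion years the oceans have existed, so a near-constant 3.5% salinity implies sinks that remove salt about as fast as rivers deliver it.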

In the biogeochemical processes of Earth, sources and sinks describe the movement of elements. The principal salt ions in the oceans and seas are sodium (Na+), chloride (Cl−), sulfate (SO42−), magnesium (Mg2+), calcium (Ca2+) and potassium (K+). The ratios of these ions change only slowly and are a conservative property of seawater, although many mechanisms cycle salts between particulate and dissolved forms. The main source of sodium salts is the weathering, erosion, and dissolution of rocks, whose products are transported by rivers and deposited in the oceans.

In 2001, Kenneth J. Hsü suggested that the Mediterranean Sea acts as one of Gaia's "kidneys": the "desiccation" of the Mediterranean, he argues, is evidence of a functioning Gaia "kidney". In this and earlier suggested cases, it is plate movements and physics, not biology, which perform the regulation. Earlier "kidney functions" were performed during the "deposition of the Cretaceous (South Atlantic), Jurassic (Gulf of Mexico), Permo-Triassic (Europe), Devonian (Canada), and Cambrian/Precambrian (Gondwana) saline giants."

Regulation of oxygen in the atmosphere

Levels of gases in the atmosphere in 420,000 years of ice core data from Vostok, Antarctica research station. Current period is at the left.

The Gaia hypothesis states that the Earth's atmospheric composition is kept at a dynamically steady state by the presence of life. The atmospheric composition provides the conditions that contemporary life has adapted to. All the atmospheric gases other than noble gases present in the atmosphere are either made by organisms or processed by them.

The stability of the atmosphere of Earth is not a consequence of chemical equilibrium. Oxygen is highly reactive and should eventually combine with the gases and minerals of the Earth's atmosphere and crust. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event. Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 40% of atmospheric volume. Traces of methane (at an amount of 100,000 tonnes produced per year) should not persist, as methane is combustible in an oxygen atmosphere.

Dry air in the atmosphere of Earth contains roughly (by volume) 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide, and small amounts of other gases including methane. Lovelock originally speculated that concentrations of oxygen above about 25% would increase the frequency of wildfires and conflagration of forests. This mechanism, however, would not raise oxygen levels if they became too low; if plants robustly over-produce O2, then perhaps only this high-oxygen wildfire regulator is necessary. Recent findings of fire-caused charcoal in Carboniferous and Cretaceous coal measures, from geologic periods when O2 did exceed 25%, have supported Lovelock's contention.

Processing of CO2

Gaia scientists see the participation of living organisms in the carbon cycle as one of the complex processes that maintain conditions suitable for life. The only significant natural source of atmospheric carbon dioxide (CO2) is volcanic activity, while the only significant removal is through the precipitation of carbonate rocks. Carbon precipitation, solution and fixation are influenced by bacteria and plant roots in soils, where they improve gaseous circulation, and in coral reefs, where calcium carbonate is deposited as a solid on the sea floor. Calcium carbonate is used by living organisms to manufacture calcareous tests and shells. When these organisms die, their shells sink. Some arrive at the bottom of shallow seas, where the heat and pressure of burial, and/or the forces of plate tectonics, eventually convert them to deposits of chalk and limestone. Much of the falling shell material, however, redissolves in the ocean below the carbonate compensation depth.

One of these organisms is Emiliania huxleyi, an abundant coccolithophore algae which may have a role in the formation of clouds. CO2 excess is compensated by an increase of coccolithophorid life, increasing the amount of CO2 locked in the ocean floor. Coccolithophorids, if the CLAW hypothesis turns out to be supported (see "Regulation of Global Surface Temperature" above), could help increase the cloud cover, hence control the surface temperature, help cool the whole planet and favor precipitation necessary for terrestrial plants. Lately the atmospheric CO2 concentration has increased and there is some evidence that concentrations of ocean algal blooms are also increasing.

Lichens and other organisms accelerate the weathering of rocks at the surface, while the decomposition of rocks also happens faster in the soil, thanks to the activity of roots, fungi, bacteria and subterranean animals. The flow of carbon dioxide from the atmosphere to the soil is therefore regulated with the help of living organisms. When CO2 levels rise in the atmosphere, the temperature increases and plants grow more. This growth brings higher consumption of CO2 by the plants, which transfer it into the soil, removing it from the atmosphere.
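The stabilising loop in this paragraph (more CO2 means a warmer surface, which means faster weathering and plant growth, which removes CO2 faster) can be caricatured with a toy relaxation model. Every parameter below is an arbitrary illustrative choice, made only so the sign and behaviour of the feedback are easy to see; this is not a climate model.

```python
# Toy negative-feedback model of the weathering/biotic CO2 sink.
# All parameter values are arbitrary illustrative assumptions.
SOURCE = 14.0        # constant volcanic CO2 input (arbitrary units per step)
K_UPTAKE = 0.05      # baseline fractional CO2 removal per step
T_REF = 15.0         # reference surface temperature (degrees C)
C_REF = 280.0        # CO2 level (ppm) at which source and sink balance
CLIMATE_SENS = 0.01  # warming (degrees C) per ppm of CO2 above the reference
T_BOOST = 0.1        # fractional speed-up of the sink per degree of warming

def step(c):
    """Advance atmospheric CO2 by one time step."""
    t = T_REF + CLIMATE_SENS * (c - C_REF)             # warmer when CO2 is high
    uptake = K_UPTAKE * (1.0 + T_BOOST * (t - T_REF))  # sink strengthens when warm
    return c + SOURCE - uptake * c

c = 400.0  # start from a CO2 excess
for _ in range(2000):
    c = step(c)
print(f"CO2 relaxes toward its set point: {c:.1f} ppm")
```

Starting from an excess of 400 ppm, the modelled CO2 relaxes back to its 280 ppm set point because the sink term strengthens whenever CO2, and hence temperature, is above the reference; reversing the sign of T_BOOST turns the same loop into a runaway positive feedback.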

History

Precedents

Earthrise taken from Apollo 8 by astronaut William Anders, December 24, 1968

The idea of the Earth as an integrated whole, a living being, has a long tradition. The mythical Gaia was the primal Greek goddess personifying the Earth, the Greek version of "Mother Nature" (from Ge = Earth, and Aia = PIE grandmother), or the Earth Mother. James Lovelock gave this name to his hypothesis after a suggestion from the novelist William Golding, who was living in the same village as Lovelock at the time (Bowerchalke, Wiltshire, UK). Golding's advice was based on Gea, an alternative spelling for the name of the Greek goddess, which is used as prefix in geology, geophysics and geochemistry. Golding later made reference to Gaia in his Nobel Prize acceptance lecture.

In the eighteenth century, as geology consolidated as a modern science, James Hutton maintained that geological and biological processes are interlinked. Later, the naturalist and explorer Alexander von Humboldt recognized the coevolution of living organisms, climate, and Earth's crust. In the twentieth century, Vladimir Vernadsky formulated a theory of Earth's development that is now one of the foundations of ecology. Vernadsky was a Ukrainian geochemist and was one of the first scientists to recognize that the oxygen, nitrogen, and carbon dioxide in the Earth's atmosphere result from biological processes. During the 1920s he published works arguing that living organisms could reshape the planet as surely as any physical force. Vernadsky was a pioneer of the scientific bases for the environmental sciences. His visionary pronouncements were not widely accepted in the West, and some decades later the Gaia hypothesis received the same type of initial resistance from the scientific community.

Around the turn of the 20th century, Aldo Leopold, a pioneer in the development of modern environmental ethics and in the movement for wilderness conservation, suggested a living Earth in his biocentric or holistic ethics regarding land.

It is at least not impossible to regard the earth's parts—soil, mountains, rivers, atmosphere etc,—as organs or parts of organs of a coordinated whole, each part with its definite function. And if we could see this whole, as a whole, through a great period of time, we might perceive not only organs with coordinated functions, but possibly also that process of consumption as replacement which in biology we call metabolism, or growth. In such case we would have all the visible attributes of a living thing, which we do not realize to be such because it is too big, and its life processes too slow.

— Aldo Leopold, quoted in Stephan Harding, Animate Earth

Another influence for the Gaia hypothesis and the environmental movement in general came as a side effect of the Space Race between the Soviet Union and the United States of America. During the 1960s, the first humans in space could see how the Earth looked as a whole. The photograph Earthrise taken by astronaut William Anders in 1968 during the Apollo 8 mission became, through the Overview Effect, an early symbol for the global ecology movement.

Formulation of the hypothesis

James Lovelock, 2005

Lovelock started defining the idea of a self-regulating Earth controlled by the community of living organisms in September 1965, while working at the Jet Propulsion Laboratory in California on methods of detecting life on Mars. The first paper to mention it was Planetary Atmospheres: Compositional and other Changes Associated with the Presence of Life, co-authored with C.E. Giffin. A main concept was that life could be detected on a planetary scale from the chemical composition of the atmosphere. According to data gathered by the Pic du Midi observatory, planets like Mars and Venus had atmospheres in chemical equilibrium. This difference from the Earth's atmosphere was considered to be proof that there was no life on these planets.

Lovelock formulated the Gaia hypothesis in journal articles in 1972 and 1974. An article in the New Scientist of February 6, 1975, and a popular book-length version of the hypothesis, published in 1979 as Gaia: A New Look at Life on Earth, began to attract scientific and critical attention.

Lovelock first called it the Earth feedback hypothesis; it was a way to explain the fact that combinations of chemicals, including oxygen and methane, persist in stable concentrations in the atmosphere of the Earth. Lovelock suggested detecting such combinations in other planets' atmospheres as a relatively reliable and cheap way to detect life.

Lynn Margulis

Later, other relationships such as sea creatures producing sulfur and iodine in approximately the same quantities as required by land creatures emerged and helped bolster the hypothesis.

In 1971 microbiologist Lynn Margulis joined Lovelock in the effort of fleshing out the initial hypothesis into scientifically testable concepts, contributing her knowledge about how microbes affect the atmosphere and the different layers of the surface of the planet. The American biologist had also drawn criticism from the scientific community with her advocacy of the theory on the origin of eukaryotic organelles and her contributions to endosymbiotic theory, which is now widely accepted. Margulis dedicated the last of eight chapters in her book, The Symbiotic Planet, to Gaia. However, she objected to the widespread personification of Gaia and stressed that Gaia is "not an organism", but "an emergent property of interaction among organisms". She defined Gaia as "the series of interacting ecosystems that compose a single huge ecosystem at the Earth's surface. Period". The book's most memorable "slogan" was actually quipped by a student of Margulis'.

James Lovelock called his first proposal the Gaia hypothesis but has also used the term Gaia theory. Lovelock states that the initial formulation was based on observation but still lacked a scientific explanation. The Gaia hypothesis has since been supported by a number of scientific experiments and has provided several useful predictions.

First Gaia conference

In 1985, the first public symposium on the Gaia hypothesis, Is The Earth a Living Organism? was held at University of Massachusetts Amherst, August 1–6. The principal sponsor was the National Audubon Society. Speakers included James Lovelock, Lynn Margulis, George Wald, Mary Catherine Bateson, Lewis Thomas, Thomas Berry, David Abram, John Todd, Donald Michael, Christopher Bird, Michael Cohen, and William Fields. Some 500 people attended.

Second Gaia conference

In 1988, climatologist Stephen Schneider organised the first Chapman Conference on Gaia for the American Geophysical Union, held in San Diego, California, on March 7, 1988.

During the "philosophical foundations" session of the conference, David Abram spoke on the influence of metaphor in science, and of the Gaia hypothesis as offering a new and potentially game-changing metaphorics, while James Kirchner criticised the Gaia hypothesis for its imprecision. Kirchner claimed that Lovelock and Margulis had not presented one Gaia hypothesis, but four:

  • CoEvolutionary Gaia: that life and the environment had evolved in a coupled way. Kirchner claimed that this was already accepted scientifically and was not new.
  • Homeostatic Gaia: that life maintained the stability of the natural environment, and that this stability enabled life to continue to exist.
  • Geophysical Gaia: that the Gaia hypothesis generated interest in geophysical cycles and therefore led to interesting new research in terrestrial geophysical dynamics.
  • Optimising Gaia: that Gaia shaped the planet in a way that made it an optimal environment for life as a whole. Kirchner claimed that this was not testable and therefore was not scientific.

Of Homeostatic Gaia, Kirchner recognised two alternatives. "Weak Gaia" asserted that life collectively tends to stabilise the environment, and that this stability favours the flourishing of all life. "Strong Gaia", according to Kirchner, asserted that life alters the environment in order to enable the flourishing of all life. Strong Gaia, Kirchner claimed, was untestable and therefore not scientific.

Lovelock and other Gaia-supporting scientists, however, did attempt to disprove the claim that the hypothesis is not scientific because it is impossible to test it by controlled experiment. For example, against the charge that Gaia was teleological, Lovelock and Andrew Watson offered the Daisyworld Model (and its modifications, above) as evidence against most of these criticisms. Lovelock said that the Daisyworld model "demonstrates that self-regulation of the global environment can emerge from competition amongst types of life altering their local environment in different ways".

Lovelock was careful to present a version of the Gaia hypothesis that had no claim that Gaia intentionally or consciously maintained the complex balance in her environment that life needed to survive. It would appear that the claim that Gaia acts "intentionally" was a statement in his popular initial book and was not meant to be taken literally. This new statement of the Gaia hypothesis was more acceptable to the scientific community. Most accusations of teleologism ceased, following this conference.

Third Gaia conference

By the time of the 2nd Chapman Conference on the Gaia Hypothesis, held at Valencia, Spain, on 23 June 2000, the situation had changed significantly. Rather than discussing teleological views of Gaia or "types" of Gaia hypotheses, the focus was on the specific mechanisms by which basic short-term homeostasis is maintained within a framework of significant long-term evolutionary structural change.

The major questions were:

  1. "How has the global biogeochemical/climate system called Gaia changed in time? What is its history? Can Gaia maintain stability of the system at one time scale but still undergo vectorial change at longer time scales? How can the geologic record be used to examine these questions?"
  2. "What is the structure of Gaia? Are the feedbacks sufficiently strong to influence the evolution of climate? Are there parts of the system determined pragmatically by whatever disciplinary study is being undertaken at any given time or are there a set of parts that should be taken as most true for understanding Gaia as containing evolving organisms over time? What are the feedbacks among these different parts of the Gaian system, and what does the near closure of matter mean for the structure of Gaia as a global ecosystem and for the productivity of life?"
  3. "How do models of Gaian processes and phenomena relate to reality and how do they help address and understand Gaia? How do results from Daisyworld transfer to the real world? What are the main candidates for "daisies"? Does it matter for Gaia theory whether we find daisies or not? How should we be searching for daisies, and should we intensify the search? How can Gaian mechanisms be collaborated with using process models or global models of the climate system that include the biota and allow for chemical cycling?"

In 1997, Tyler Volk argued that a Gaian system is almost inevitably produced as a result of an evolution towards far-from-equilibrium homeostatic states that maximise entropy production, and Axel Kleidon (2004) agreed stating: "...homeostatic behavior can emerge from a state of MEP associated with the planetary albedo"; "...the resulting behavior of a symbiotic Earth at a state of MEP may well lead to near-homeostatic behavior of the Earth system on long time scales, as stated by the Gaia hypothesis". M. Staley (2002) has similarly proposed "...an alternative form of Gaia theory based on more traditional Darwinian principles... In [this] new approach, environmental regulation is a consequence of population dynamics. The role of selection is to favor organisms that are best adapted to prevailing environmental conditions. However, the environment is not a static backdrop for evolution, but is heavily influenced by the presence of living organisms. The resulting co-evolving dynamical process eventually leads to the convergence of equilibrium and optimal conditions".

Fourth Gaia conference

A fourth international conference on the Gaia hypothesis, sponsored by the Northern Virginia Regional Park Authority and others, was held in October 2006 at the Arlington, Virginia campus of George Mason University.

Martin Ogle, Chief Naturalist for NVRPA and a long-time Gaia hypothesis proponent, organized the event. Lynn Margulis, Distinguished University Professor in the Department of Geosciences, University of Massachusetts Amherst, and long-time advocate of the Gaia hypothesis, was a keynote speaker. Among the many other speakers were Tyler Volk, co-director of the Program in Earth and Environmental Science at New York University; Donald Aitken, Principal of Donald Aitken Associates; Thomas Lovejoy, President of the Heinz Center for Science, Economics and the Environment; Robert Corell, Senior Fellow, Atmospheric Policy Program, American Meteorological Society; and noted environmental ethicist J. Baird Callicott.

Criticism

After initially receiving little attention from scientists (from 1969 until 1977), the Gaia hypothesis was for a period criticized by a number of scientists, including W. Ford Doolittle, Richard Dawkins and Stephen Jay Gould. Lovelock has said that because his hypothesis is named after a Greek goddess, and championed by many non-scientists, it was interpreted as a neo-Pagan religion. Many scientists also criticized the approach taken in his popular book Gaia, a New Look at Life on Earth for being teleological, that is, for holding that things are purposeful and aimed towards a goal. Responding to this critique in 1990, Lovelock stated, "Nowhere in our writings do we express the idea that planetary self-regulation is purposeful, or involves foresight or planning by the biota".

Stephen Jay Gould criticized Gaia as being "a metaphor, not a mechanism." He wanted to know the actual mechanisms by which self-regulating homeostasis was achieved. In his defense of Gaia, David Abram argues that Gould overlooked the fact that "mechanism", itself, is a metaphor—albeit an exceedingly common and often unrecognized metaphor—one which leads us to consider natural and living systems as though they were machines organized and built from outside (rather than as autopoietic or self-organizing phenomena). Mechanical metaphors, according to Abram, lead us to overlook the active or agentic quality of living entities, while the organismic metaphors of the Gaia hypothesis accentuate the active agency of both the biota and the biosphere as a whole. With regard to causality in Gaia, Lovelock argues that no single mechanism is responsible, that the connections between the various known mechanisms may never be known, that this is accepted in other fields of biology and ecology as a matter of course, and that specific hostility is reserved for his own hypothesis for other reasons.

Aside from clarifying his language and understanding of what is meant by a life form, Lovelock himself ascribes most of the criticism to a lack of understanding of non-linear mathematics by his critics, and a linearizing form of greedy reductionism in which all events have to be immediately ascribed to specific causes before the fact. He also states that most of his critics are biologists but that his hypothesis includes experiments in fields outside biology, and that some self-regulating phenomena may not be mathematically explainable.

Natural selection and evolution

Lovelock has suggested that global biological feedback mechanisms could evolve by natural selection, stating that organisms that improve their environment for their survival do better than those that damage their environment. However, in the early 1980s, W. Ford Doolittle and Richard Dawkins separately argued against this aspect of Gaia. Doolittle argued that nothing in the genome of individual organisms could provide the feedback mechanisms proposed by Lovelock, and therefore the Gaia hypothesis proposed no plausible mechanism and was unscientific. Dawkins meanwhile stated that for organisms to act in concert would require foresight and planning, which is contrary to the current scientific understanding of evolution. Like Doolittle, he also rejected the possibility that feedback loops could stabilize the system.

Margulis argued in 1999 that "Darwin's grand vision was not wrong, only incomplete. In accentuating the direct competition between individuals for resources as the primary selection mechanism, Darwin (and especially his followers) created the impression that the environment was simply a static arena". She wrote that the composition of the Earth's atmosphere, hydrosphere, and lithosphere are regulated around "set points" as in homeostasis, but those set points change with time.

Evolutionary biologist W. D. Hamilton called the concept of Gaia Copernican, adding that it would take another Newton to explain how Gaian self-regulation takes place through Darwinian natural selection. More recently, Ford Doolittle, building on his and Inkpen's ITSNTS ("It's The Song, Not The Singer") proposal, suggested that differential persistence can play a role similar to that of differential reproduction in evolution by natural selection, thereby offering a possible reconciliation between the theory of natural selection and the Gaia hypothesis.

Criticism in the 21st century

The Gaia hypothesis continues to be broadly skeptically received by the scientific community. For instance, arguments both for and against it were laid out in the journal Climatic Change in 2002 and 2003. A significant argument raised against it is the many examples where life has had a detrimental or destabilising effect on the environment rather than acting to regulate it. Several recent books have criticised the Gaia hypothesis, expressing views ranging from "... the Gaia hypothesis lacks unambiguous observational support and has significant theoretical difficulties" to "Suspended uncomfortably between tainted metaphor, fact, and false science, I prefer to leave Gaia firmly in the background" to "The Gaia hypothesis is supported neither by evolutionary theory nor by the empirical evidence of the geological record". The CLAW hypothesis, initially suggested as a potential example of direct Gaian feedback, has subsequently been found to be less credible as understanding of cloud condensation nuclei has improved. In 2009 the Medea hypothesis was proposed: that life has highly detrimental (biocidal) impacts on planetary conditions, in direct opposition to the Gaia hypothesis.

In a 2013 book-length evaluation of the Gaia hypothesis considering modern evidence from across the various relevant disciplines, Toby Tyrrell concluded that: "I believe Gaia is a dead end. Its study has, however, generated many new and thought provoking questions. While rejecting Gaia, we can at the same time appreciate Lovelock's originality and breadth of vision, and recognize that his audacious concept has helped to stimulate many new ideas about the Earth, and to champion a holistic approach to studying it". Elsewhere he presents his conclusion "The Gaia hypothesis is not an accurate picture of how our world works". This statement needs to be understood as referring to the "strong" and "moderate" forms of Gaia—that the biota obeys a principle that works to make Earth optimal (strength 5) or favourable for life (strength 4) or that it works as a homeostatic mechanism (strength 3). The latter is the "weakest" form of Gaia that Lovelock has advocated. Tyrrell rejects it. However, he finds that the two weaker forms of Gaia—Coevolutionary Gaia and Influential Gaia, which assert that there are close links between the evolution of life and the environment and that biology affects the physical and chemical environment—are both credible, but that it is not useful to use the term "Gaia" in this sense and that those two forms were already accepted and explained by the processes of natural selection and adaptation.

Anthropic principle

As emphasized by multiple critics, no plausible mechanism exists that would drive the evolution of negative feedback loops leading to planetary self-regulation of the climate. Indeed, multiple incidents in Earth's history (see the Medea hypothesis) have shown that the Earth and the biosphere can enter self-destructive positive feedback loops that lead to mass extinction events.

For example, the Snowball Earth glaciations appear to have resulted from the development of photosynthesis during a period when the Sun was cooler than it is now. (Biological mechanisms alone do not set the Earth's thermal regime: any understanding of glacial-interglacial cycles also requires study of the variations in the Earth's orbit around the Sun, the tilt of its axis of rotation, and the "wobble" in that rotation, which together drive the periodicity of Northern Hemisphere insolation.) The removal of carbon dioxide from the atmosphere, along with the oxidation of atmospheric methane by the released oxygen, dramatically diminished the greenhouse effect. The resulting expansion of the polar ice sheets decreased the fraction of sunlight absorbed by the Earth, producing a runaway ice–albedo positive feedback loop that ultimately glaciated nearly the entire surface of the planet. Escape from the frozen condition appears to have been directly due to the release of carbon dioxide and methane by volcanoes, whose activity at this scale is linked to pressure exerted on the Earth's crust and released during periods of ice sheet retreat; release of methane by microbes trapped underneath the ice could also have played a part. Lesser contributions to warming would have come from the fact that coverage of the Earth by ice sheets largely inhibited photosynthesis and lessened the removal of carbon dioxide from the atmosphere by the weathering of siliceous rocks. In the absence of tectonic activity, however, the snowball condition could have persisted indefinitely.

Geologic events with amplifying positive feedbacks (along with some possible biologic participation) led to the greatest mass extinction event on record, the Permian–Triassic extinction event about 250 million years ago. The precipitating event appears to have been volcanic eruptions in the Siberian Traps, a hilly region of flood basalts in Siberia. These eruptions released high levels of carbon dioxide and sulfur dioxide which elevated world temperatures and acidified the oceans. Estimates of the rise in carbon dioxide levels range widely, from as little as a two-fold increase, to as much as a twenty-fold increase. Amplifying feedbacks increased the warming to considerably greater than would be expected merely from the greenhouse effect of carbon dioxide: these include the ice albedo feedback, the increased evaporation of water vapor (another greenhouse gas) into the atmosphere, the release of methane from the warming of methane hydrate deposits buried under the permafrost and beneath continental shelf sediments, and increased wildfires. The rising carbon dioxide acidified the oceans, leading to widespread die-off of creatures with calcium carbonate shells, killing mollusks and crustaceans like crabs and lobsters and destroying coral reefs. Their demise led to disruption of the entire oceanic food chain. It has been argued that rising temperatures may have led to disruption of the chemocline separating sulfidic deep waters from oxygenated surface waters, which led to massive release of toxic hydrogen sulfide (produced by anaerobic bacteria) to the surface ocean and even into the atmosphere, contributing to the (primarily methane-driven) collapse of the ozone layer, and helping to explain the die-off of terrestrial animal and plant life.

According to the weak anthropic principle, our observation of such stabilizing feedback loops is an observer selection effect. In all the universe, it is only planets with Gaian properties that could have evolved intelligent, self-aware organisms capable of asking such questions. One can imagine innumerable worlds where life evolved with different biochemistries or where the worlds had different geophysical properties such that the worlds are presently dead due to runaway greenhouse effect, or else are in perpetual Snowball, or else due to one factor or another, life has been inhibited from evolving beyond the microbial level.

If no means exists for natural selection to operate at the biosphere level, then it would appear that the anthropic principle provides the only explanation for the survival of Earth's biosphere over geologic time. But in recent years, this strictly reductionistic view has been modified by recognition that natural selection can operate at multiple levels of the biological hierarchy — not just at the level of individual organisms. Traditional Darwinian natural selection requires reproducing entities that display inheritable properties or abilities that result in their having more offspring than their competitors. Successful biospheres clearly cannot reproduce to spawn copies of themselves, and so traditional Darwinian natural selection cannot operate. A mechanism for biosphere-level selection was proposed by Ford Doolittle: Although he had been a strong and early critic of the Gaia hypothesis, he had by 2015 started to think of ways whereby Gaia might be "Darwinised", seeking means whereby the planet could have evolved biosphere-level adaptations. Doolittle has suggested that differential persistence — mere survival — could be considered a legitimate mechanism for natural selection. As the Earth passes through various challenges, the phenomenon of differential persistence enables selected entities to achieve fixation by surviving the death of their competitors. Although Earth's biosphere is not competing against other biospheres on other planets, there are many competitors for survival on this planet. Collectively, Gaia constitutes the single clade of all living survivors descended from life’s last universal common ancestor (LUCA). Various other proposals for biosphere-level selection include sequential selection, entropic hierarchy, and considering Gaia as a holobiont-like system. 
Ultimately, differential persistence and sequential selection are variants of the anthropic principle, while the entropic-hierarchy and holobiont arguments may allow the emergence of Gaia to be understood without anthropic reasoning.

Mind uploading

From Wikipedia, the free encyclopedia
Schematic representation of a mind being uploaded from a human brain to a computer

Mind uploading is a hypothetical process of whole brain emulation in which a brain scan is used to completely emulate a person's mental state in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and have a sentient conscious mind.

Substantial mainstream research in related areas is being conducted in neuroscience and computer science, including animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. Supporters say many of the tools and ideas needed to achieve mind uploading already exist or are under active development, but they admit that others are as yet very speculative, though still in the realm of engineering possibility.

Mind uploading may be accomplished by either of two methods: copy-and-upload, or copy-and-delete by gradual replacement of neurons (which can be considered gradual destructive uploading) until the original organic brain no longer exists and a computer program emulating it takes control of the body. In the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain and then storing and copying that information into a computer system or another computational device. The biological brain may not survive the copying process, or may be deliberately destroyed during it. The simulated mind could inhabit a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, it could reside in a computer inside (or connected to, or remotely controlling) a robot or a biological or cybernetic body, not necessarily humanoid.

Among some futurists and within part of the transhumanist movement, mind uploading is treated as an important proposed life extension or immortality technology (known as "digital immortality"). Some believe mind uploading is the best way to preserve the human species, as opposed to cryonics. Other aims of mind uploading are to provide a permanent backup to our "mind-file", to enable interstellar space travel, and to offer a means for human culture to survive a global disaster by making a functional copy of a human society in a computing device. Some futurists consider whole-brain emulation a "logical endpoint" of computational neuroscience and neuroinformatics, both of which study brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and to at least weak superintelligence. Another approach is seed AI, which is not based on existing brains. Computer-based intelligence, such as an upload, could think much faster than a biological human, even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity: an exponential development of technology that exceeds human control and becomes unpredictable. Mind uploading is a central conceptual feature of numerous science fiction novels, films, and games.

Overview

Many neuroscientists believe that the human mind is largely an emergent property of the information processing of its neuronal network.

Neuroscientists have said that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:

Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.

Eminent computer scientists and neuroscientists, including Koch and Tononi, Douglas Hofstadter, Jeff Hawkins, Marvin Minsky, Randal A. Koene, and Rodolfo Llinás, have predicted that advanced computers will be capable of thought and even attain consciousness.

Many theorists have presented models of the brain and established a range of estimates of how much computing power is needed for partial and complete simulations. Using these models, some have estimated that uploading may be possible within decades if trends such as Moore's law continue. As of December 2022, this kind of technology is almost entirely theoretical.

Theoretical benefits and applications

"Immortality" or backup

In theory, if a mind's information and processes can be disassociated from a biological body, they are no longer tied to that body's limits and lifespan. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby—from a purely mechanistic perspective—reducing or eliminating such information's "mortality risk". This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington. From the perspective of the biological brain, the simulated brain may just be a copy, even if it is conscious and has an indistinguishable character. As such, the original biological being, before the uploading, might consider the digital twin a new and independent being rather than a future self.

Space exploration

An "uploaded astronaut" could be used instead of a "live" astronaut in human spaceflight, avoiding the perils of zero gravity, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.

Mind editing

Some researchers believe editing human brains is physically possible in theory, for example by performing neurosurgery with nanobots, but it would require particularly advanced technology. Editing an uploaded mind would be much easier, as long as the exact edits to be made are known. This would facilitate cognitive enhancement and the precise control of the emulated beings' well-being, motivations, and personalities.

Speed

Although the number of neuronal connections in the human brain is very large (around 100 trillion), the frequency of activation of biological neurons is limited to around 200 Hz, whereas electronic hardware can easily operate at multiple GHz. With sufficient hardware parallelism, a simulated brain could thus in theory run faster than a biological brain. Uploaded beings may therefore not only be more efficient but also have a faster rate of subjective experience than biological brains (e.g. experiencing an hour of lifetime in a single second of real time).
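As a back-of-envelope illustration of this argument, the potential speed-up can be computed directly from the two rates above. The figures in the sketch below (200 Hz neurons, a 2 GHz clock) are the rough order-of-magnitude assumptions from the text, not measured constants:

```python
# Rough order-of-magnitude sketch of the subjective speed-up argument.
# Both rates are illustrative assumptions taken from the text above.

NEURON_RATE_HZ = 200       # approximate peak firing rate of a biological neuron
HARDWARE_CLOCK_HZ = 2e9    # a modest 2 GHz electronic clock

# If a simulated brain could step its neurons at the hardware clock rate
# (a large "if": this assumes sufficient parallelism and no other bottleneck),
# the naive speed-up factor is simply the ratio of the two rates.
speedup = HARDWARE_CLOCK_HZ / NEURON_RATE_HZ   # 10,000,000x

# At that rate, one second of real time would correspond to this much
# subjective time for the upload:
subjective_seconds = 1 * speedup
subjective_hours = subjective_seconds / 3600

print(f"speed-up factor: {speedup:.0e}")
print(f"1 real second ≈ {subjective_hours:,.0f} subjective hours")
```

Under these assumptions the ratio alone gives a factor of ten million, which is why even the "hour per second" figure quoted in the text is conservative relative to the naive calculation.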

Relevant technologies and techniques

The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in an attempt to characterize and copy a brain's mental contents. The LCOL approach may take advantage of self-reports, life-logs, and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may focus on neurons' specific resolution, morphology, and spike times, the times at which they produce potential responses.

AI-based "digital self" applications

Related to the "loosely coupled off-loading" (LCOL) idea of using self-reports and life-logs to approximate aspects of a person's mental contents, some consumer applications use user-provided text (e.g., journaling) to build conversational AI companions or "digital selves"; this differs from whole-brain emulation because it does not involve brain scanning or simulation of brain tissue. Examples include AI companion/chatbot services such as Replika and Character.AI, and a mobile application called Mind Upload which describes itself as allowing users to "upload your consciousness" by submitting thoughts and memories.

Computational complexity

Estimates of how much processing power is needed to emulate a human brain at various levels, along with the fastest and slowest supercomputers from TOP500 and a $1,000 PC (note the logarithmic scale). The (exponential) trend line for the fastest supercomputer reflects a doubling every 14 months. Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg & Bostrom report is less certain about the level at which consciousness arises.

Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power will be available within a few decades. But the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.

Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.

Required computational capacity strongly depends on the chosen level of simulation model scale:

Level                                      CPU demand (FLOPS)   Memory demand (Tb)   $1M supercomputer (earliest year)
Analog network population model            10^15                10^2                 2008
Spiking neural network                     10^18                10^4                 2019
Electrophysiology                          10^22                10^4                 2033
Metabolome                                 10^25                10^6                 2044
Proteome                                   10^26                10^7                 2048
States of protein complexes                10^27                10^8                 2052
Distribution of complexes                  10^30                10^9                 2063
Stochastic behavior of single molecules    10^43                10^14                2111

Estimates from Sandberg & Bostrom, 2008
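The "earliest year" column can be roughly reproduced by combining the CPU-demand figures with the 14-month doubling time quoted for the fastest supercomputers. The sketch below is a naive Moore's-law extrapolation under stated assumptions (a $1 million machine delivering about 10^15 FLOPS in 2008, and uninterrupted exponential growth), not a forecast:

```python
import math

def earliest_year(target_flops, base_flops=1e15, base_year=2008,
                  doubling_months=14):
    """Estimate when a $1M machine reaches target_flops, assuming
    exponential growth with the given doubling time (a naive
    Moore's-law extrapolation, not a forecast)."""
    doublings = math.log2(target_flops / base_flops)
    return base_year + doublings * doubling_months / 12

# Spiking-neural-network level (10^18 FLOPS) from the table:
print(round(earliest_year(1e18)))   # → 2020, close to the table's 2019
```

Moving up three orders of magnitude in FLOPS costs about ten doublings (log2(1000) ≈ 10), i.e. roughly a dozen years at a 14-month doubling time, which is the spacing visible between successive rows of the table.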

Scanning and mapping scale of an individual

When modeling and simulating a specific brain, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole-brain simulation, this map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.

But if short-term memory and working memory include prolonged or repeated firing of neurons as well as intraneural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the brain scanning.

A full brain map has been estimated to occupy less than 2 × 10^16 bytes (20,000 TB) and would store the addresses of the connected neurons, the synapse type, and the "weight" of each of the brain's 10^15 synapses. But the biological complexities of true brain function (the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
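The storage estimate implies a budget of roughly 20 bytes of bookkeeping per synapse (connected-neuron address, synapse type, and weight). A quick check of the arithmetic, using only the figures from the text (the 20-bytes-per-synapse figure is inferred, not stated):

```python
# Sanity check of the brain-map storage estimate from the text.
SYNAPSES = 1e15           # estimated number of synapses in a human brain
BYTES_PER_SYNAPSE = 20    # inferred budget: neuron address + type + weight

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
terabytes = total_bytes / 1e12

print(f"{total_bytes:.0e} bytes ≈ {terabytes:,.0f} TB")  # 2e+16 bytes ≈ 20,000 TB
```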

Serial sectioning

Serial sectioning of a brain

A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer (for frozen samples at the nanoscale this requires a cryo-ultramicrotome), capturing the structure of the neurons and their interconnections. The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be very slow and labor-intensive, research is underway to automate the collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural net recreated in the system into which the mind was being uploaded.

There is uncertainty with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. But as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice to capture and simulate neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. But as the physiological genesis of mind is not currently known, this method may not be able to access all the biochemical information necessary to recreate a human brain with sufficient fidelity.

Brain imaging

Process from MRI acquisition to the whole brain structural network[35]
Magnetoencephalography

It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive techniques. Today, fMRI is often combined with MEG to create functional maps of human cortices during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolution.

Brain–computer interfaces

Brain–computer interfaces (BCIs) are sometimes discussed as indirectly relevant to mind uploading insofar as they aim to record (and in some cases stimulate) neural activity, but they do not by themselves provide the high-resolution structural and biochemical data typically assumed in whole-brain emulation proposals. Neuralink is a company developing an implantable BCI that uses flexible electrode "threads" inserted by a neurosurgical robot, described in a 2019 technical report in the Journal of Medical Internet Research. In 2024, Neuralink reported its first human implant; scientists noted the work is primarily aimed at assisting people with paralysis and that major challenges remain for long-term safety and performance.

Brain simulation

Ongoing work in brain simulation includes partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.

The Blue Brain Project, initiated by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry in order to accelerate experimental research on the brain. In 2009, after a successful simulation of part of a rat brain, the director, Henry Markram, said, "A detailed, functional artificial human brain can be built within the next 10 years." In 2013, Markram became the director of the new decade-long Human Brain Project. Less than two years later, the project was recognized to be mismanaged and its claims overblown, and Markram was asked to step down.

Commercial brain preservation proposals

In 2018, startup company Nectome, backed by Y Combinator, proposed a controversial preservation service using aldehyde-stabilized cryopreservation. The company's approach involves connecting terminal patients to a heart-lung machine while under general anesthesia to pump preservation chemicals through the carotid arteries while the person is still alive, a process the company calls "100 percent fatal". The service planned to operate under California's End of Life Option Act. Nectome won an $80,000 Brain Preservation Foundation prize for preserving a pig's brain at the synaptic level, and received a $960,000 federal grant from the National Institute of Mental Health. Nectome collaborated with MIT neuroscientist Edward Boyden.

Nanobots

One approach to digital immortality is gradually "replacing" neurons in the brain with advanced medical technology such as nanobiotechnology, possibly using wetware computer technology or using nanobots to read brain structure, as described by Alexey Turchin.

Issues

Philosophical issues

The main philosophical problem faced by mind uploading is the hard problem of consciousness: the difficulty of explaining how a physical entity such as a human can have qualia, phenomenal consciousness, or subjective experience. Many philosophical responses to the hard problem entail that mind uploading is fundamentally impossible, while others are compatible with mind uploading. Many proponents of mind uploading defend its feasibility by recourse to non-reductive physicalism, which includes the belief that consciousness is an emergent feature that arises from large neural network high-level patterns of organization that could be realized in other processing devices. Mind uploading relies on the idea that the human mind reduces to neural network paths and the weights of synapses in the brain. In contrast, many dualistic and idealistic accounts seek to avoid the hard problem of consciousness by explaining it in terms of immaterial (and presumably inaccessible) substances, which pose a fundamental challenge to the feasibility of artificial consciousness in general.

Assuming physicalism is true, the mind can be defined as the information state of the brain, so it exists only in the same sense as the information content of a data file or a computer software state. In this case, data specifying a neural network's information state could be captured and copied as a "computer file" from the brain and implemented in a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to mind uploading is copying the information state of a computer program from the memory of the computer on which it is running to another computer and then continuing its execution on the second computer. The second computer may have different hardware architecture, but it emulates the hardware of the first computer.
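The program-state analogy can be made concrete with a toy example: serialize a running program's state on one machine, then restore it and continue execution elsewhere. This only illustrates the analogy (a counter's state, not a mind), using standard-library JSON serialization:

```python
import json

class Counter:
    """A trivially stateful 'program' whose entire state is one number."""
    def __init__(self, value=0):
        self.value = value

    def step(self):
        self.value += 1
        return self.value

# Run on "machine A"...
a = Counter()
for _ in range(3):
    a.step()

# ...capture its complete information state as a portable snapshot...
snapshot = json.dumps({"value": a.value})

# ...and resume on "machine B", which may be entirely different hardware.
b = Counter(**json.loads(snapshot))
b.step()
print(b.value)  # 4 — execution continues where machine A left off
```

The analogy holds only because the counter's information state is trivially small and fully known; the philosophical dispute above concerns whether a mind's information state can be captured and whether the resumed copy is the same person.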

These philosophical issues have a long history. In 1775, Thomas Reid wrote: "I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being." Although the term hard problem of consciousness was coined in 1994, debate about the issue is ancient. Augustine of Hippo argued against physicalist "Academians" in the 5th century, writing that consciousness cannot be an illusion because only a conscious being can be deceived or experience an illusion. René Descartes, the founder of mind-body dualism, made a similar objection in the 17th century, coining the popular phrase "Je pense, donc je suis" ("I think, therefore I am"). Although physicalism was proposed in ancient times, Thomas Huxley was among the first to describe mental experience as merely an epiphenomenon of interactions within the brain, with no causal power of its own and entirely downstream from brain activity.

Many transhumanists and singularitarians hope to become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". But the philosopher and transhumanist Susan Schneider claims that, at best, uploading would create a copy of the original mind. Schneider agrees that consciousness has a computational basis, but does not agree that this means a person survives uploading. According to her, uploading would probably result in the death of one's brain, and only others could maintain the illusion that the original person survived. It is implausible to think that one's consciousness could leave one's brain for another location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and elsewhere. At best, a copy is created.

Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading. Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure have equal claims to the original identity, such that survival of the self is determined retroactively from a strictly subjective position. Some have also asserted that consciousness is a part of an extra-biological system yet to be discovered and therefore cannot yet be fully understood; without transference of consciousness, true uploading or perpetual immortality cannot be practically achieved.

Another potential consequence of mind uploading is that the decision to upload may create a mindless symbol manipulator instead of a conscious mind (a philosophical zombie). If a computer could process sensory inputs to generate the same outputs that a human mind does (speech, muscle movements, etc.) without having conscious experience, it may be impossible to determine whether the uploaded mind is conscious and not merely an automaton that behaves like a conscious being. Thought experiments like the Chinese room raise fundamental questions about mind uploading: if an upload behaves in ways highly indicative of consciousness, or insists that it is conscious, is it conscious? There might also be an absolute upper limit in processing speed above which consciousness cannot be sustained. The subjectivity of consciousness precludes a definitive answer to this question.

Many scientists, including Ray Kurzweil, believe that whether a separate entity is conscious is impossible to know with confidence, since consciousness is inherently subjective (see solipsism). Regardless, some scientists believe consciousness is the consequence of substrate-neutral computational processes. Other scientists, including Roger Penrose, believe consciousness may emerge from some form of quantum computation that depends on an organic substrate (see quantum mind).

In light of uncertainty about whether uploaded minds are conscious, Sandberg proposes a cautious approach:

Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.

Continuity of consciousness

Michael Cerullo's theory of consciousness is called "psychological branching identity". He argues that a person would survive gradual destructive uploading via scan-and-copy (or copy-and-delete), based on a theory grounded in emergent materialism, functionalism, and psychological continuity theory. According to him, psychological identity branches out, with each copy an authentic continuation of the original, ensuring the persistence of the original consciousness even if the substrate is destroyed in the process. The mind branches into distinct paths, all of which are continuations of the uploaded person.

The "fading qualia" and "dancing qualia" thought experiments proposed by Chalmers

The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness. The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals, before moving on to humans. In some cases the animals would only need to be euthanized so that their brains could be extracted, sliced, and scanned; in others, behavioral and in vivo measurements would be required, which might cause pain to living animals.

In addition, the resulting animal emulations might suffer, depending on one's views about consciousness. Bancroft argues for the plausibility of consciousness in brain simulations based on David Chalmers's "fading qualia" thought experiment. Bancroft concludes: "If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations." Chalmers has argued that such virtual realities would be genuine realities. But if mind uploading occurs and the uploads are not conscious, there may be a significant opportunity cost. In Superintelligence, Nick Bostrom expresses concern that a "Disneyland without children" could be built.

It might help reduce emulation suffering to develop virtual equivalents of anesthesia and to omit processing related to pain and/or consciousness, but some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident, owing to flaws in the emulation and to limited insight into which parts of their brains are suffering. Questions also arise about the moral status of partial brain emulations, and about neuromorphic emulations that are inspired by biological brains but built differently.

Brain emulations could be erased by computer viruses or malware without destroying the underlying hardware, which may make assassination easier than it is for physical humans. An attacker might also seize the emulation's computing power for their own use.

Many questions arise regarding the legal personhood of emulations. Would they be given the rights of biological humans? If a person makes an emulated copy of themselves and then dies, does the emulation inherit their property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights?

If simulated minds had rights, it might be difficult to ensure their protection. For example, social science researchers might be tempted to secretly subject simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same mind are exposed (serially or simultaneously) to different test conditions.

Public attitudes

Research on public attitudes toward mind uploading reveals complex psychological factors. A 2018 study by Michael Laakasuo found that approval or disapproval of mind uploading technology is significantly predicted by individual differences in disgust sensitivity, particularly sexual disgust and purity moral orientations, rather than rational thinking styles or personality traits. The research found that:

  • People with higher "purity" moral foundations (from Moral Foundations Theory) were more likely to condemn mind upload technology
  • Science fiction familiarity strongly predicted approval of mind uploading
  • Those with death anxiety were more accepting of mind uploading, viewing it as life extension rather than death
  • The target platform (computer, android, artificial brain) did not affect moral judgments—only the act of transfer itself mattered

The study suggests mind uploading may have broader appeal than previously thought among those seeking life extension or immortality.

Another study by Laakasuo found that attitudes toward mind uploading are predicted by belief in an afterlife; the existence of mind uploading technology may threaten religious and spiritual notions of immortality and divinity.

Political and economic implications

Emulations might be preceded by a technological arms race driven by first-strike advantages. Their emergence and existence may increase the risk of war, fueled by inequality, power struggles, strong loyalty and willingness to die among emulations, and new forms of racism, xenophobia, and religious prejudice. If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. Humans might react violently to the growing power of emulations, especially if it depresses human wages. Emulations might not trust each other, and even well-intentioned defensive measures might be interpreted as offensive.

Robin Hanson's book The Age of Em poses many hypotheses on the nature of a society of mind uploads, including that the most common minds would be copies of adults with personalities conducive to long hours of productive specialized work.

Emulation timelines and AI risk

Kenneth D. Miller, a professor of neuroscience at Columbia University and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is itself a formidable task, but far from sufficient. The brain's operation depends on the dynamics of electrical and biochemical signal exchange between neurons, so capturing them in a single "frozen" state may be insufficient. In addition, the nature of these signals may require modeling at the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of the "absolute" duplication of a mind will be insurmountable for several hundred years.

The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and their development will presumably continue. Brain emulations may also exist for a brief but significant period before non-emulation-based human-level AI arrives. If emulation technology arrives, it is debatable whether its advance should be accelerated or slowed.

Arguments for speeding up brain-emulation research:

  • If neuroscience rather than computing power is the bottleneck to brain emulation, emulation advances may be more erratic and unpredictable based on when new scientific discoveries happen. Limited computing power would mean the first emulations would run slower and so would be easier to adapt to, and there would be more time for the technology to transition through society.
  • Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, which could increase the "computing overhang" from excess hardware relative to neuroscience.
  • If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.

Arguments for slowing brain-emulation research:

  • Greater investment in brain emulation and associated cognitive science might enhance AI researchers' ability to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate risks from uncontrolled AI. Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding of the workings and functions of the brain's components, along with the technological know-how to emulate neurons. But reverse-engineering the Microsoft Windows code base is already hard, and reverse-engineering the brain is likely much harder. By a very narrow margin, the participants leaned toward the view that accelerating brain emulation would increase expected AI risk.
  • Waiting might give society more time to think about the consequences of brain emulation and develop institutions to improve cooperation.

Emulation research would also accelerate neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and psychological manipulation.

Emulations might be easier to control than de novo AI because:

  1. Human abilities, behavioral tendencies, and vulnerabilities are more thoroughly understood, thus control measures might be more intuitive and easier to plan.
  2. Emulations could more easily inherit human motivations.
  3. Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce risks of their rapid takeoff. Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition. Unlike AI, an emulation would not be able to rapidly expand beyond the size of a human brain. Emulations running at digital speeds would have less intelligence differential vis-à-vis AI and so might more easily control AI.

As a counterpoint to these considerations, Bostrom notes some downsides:

  1. Even if human behavior is better understood, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement.
  2. Emulations may not inherit all human motivations. Perhaps they would inherit people's darker motivations or would behave abnormally in the unfamiliar environment of cyberspace.
  3. Even if there is a slow takeoff toward emulations, there would still be a second transition to de novo AI later. Two intelligence explosions may mean more total risk.

Because of the postulated difficulties that a superintelligence generated by whole brain emulation would pose for the control problem, computer scientist Stuart J. Russell, in his book Human Compatible, rejects creating one, calling it "so obviously a bad idea".

Advocates

In 1979, Hans Moravec described and endorsed a mind-uploading procedure carried out by a robot brain surgeon. He used a similar description in 1988, calling the process "transmigration".

Ray Kurzweil, director of engineering at Google, has long predicted that people will be able to upload their brains to computers and become "digitally immortal" by 2045. He made this claim, for example, in his 2013 speech at the Global Futures 2045 International Congress in New York, an event organized around a similar set of beliefs. Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small website called the Mind Uploading Home Page and began advocating the idea in cryonics circles and elsewhere. That site has not been updated recently, but it has spawned other sites, including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure that could save countless lives.

Many transhumanists look forward to the development and deployment of mind-uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law.

Michio Kaku, in collaboration with Science, hosted the documentary Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole-brain emulation using an advanced MRI machine may enable people to be transported vast distances at near light-speed.

Gregory S. Paul's and Earl D. Cox's book Beyond Humanity: CyberEvolution and Future Minds is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are part of the "artificial life phenotype". Doyle's vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.

In fiction

Mind uploading—transferring one's personality to a computer—appears in several works of science fiction. It is distinct from transferring a consciousness from one human body to another. It is sometimes applied to a single person and sometimes to an entire society. Recurring themes in these stories include whether the computerized mind is truly conscious, and if so, whether identity is preserved. It is a common feature of the cyberpunk subgenre, sometimes taking the form of digital immortality.
