Friday, December 12, 2025

Posthuman

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Posthuman
A Morlock carrying an Eloi, two fictional posthuman species in The Time Machine

Posthuman or post-human is a concept originating in the fields of science fiction, futurology, contemporary art, and philosophy that refers to a person or entity existing in a state beyond being human. The concept addresses a variety of questions, including ethics and justice, language and trans-species communication, social systems, and the intellectual aspirations of interdisciplinarity.

Posthumanism is not to be confused with transhumanism (the biotechnological enhancement of human beings), nor with narrow definitions of the posthuman as the hoped-for transcendence of materiality. The notion of the posthuman arises in both posthumanism and transhumanism, but it has a special meaning in each tradition.

Posthumanism

In critical theory, the posthuman is a speculative being that represents or seeks to re-conceive the human. It is the object of posthumanist criticism, which critically questions humanism, a branch of humanist philosophy which claims that human nature is a universal state from which the human being emerges; human nature is autonomous, rational, capable of free will, and unified in itself as the apex of existence. Thus, the posthuman position recognizes imperfectability and disunity within oneself, and understands the world through heterogeneous perspectives while seeking to maintain intellectual rigor and dedication to objective observations. Key to this posthuman practice is the ability to fluidly change perspectives and manifest oneself through different identities. The posthuman, for critical theorists of the subject, has an emergent ontology rather than a stable one; in other words, the posthuman is not a singular, defined individual, but rather one who can "become" or embody different identities and understand the world from multiple, heterogeneous perspectives.

Approaches to posthumanism are not homogeneous, and have often been very critical. The term itself is contested, with one of the foremost authors associated with posthumanism, Manuel DeLanda, decrying the term as "very silly." Covering the ideas of, for example, Robert Pepperell's The Posthuman Condition and N. Katherine Hayles's How We Became Posthuman under a single term is distinctly problematic due to these contradictions.

The posthuman is roughly synonymous with the "cyborg" of A Cyborg Manifesto by Donna Haraway. Haraway's conception of the cyborg is an ironic take on the traditional trope of the cyborg, whose presence questions the salient line between humans and robots. Haraway's cyborg is in many ways the "beta" version of the posthuman, as her cyborg theory prompted the issue to be taken up in critical theory. Following Haraway, Hayles, whose work grounds much of the critical posthuman discourse, asserts that liberal humanism, which separates the mind from the body and thus portrays the body as a "shell" or vehicle for the mind, becomes increasingly complicated in the late 20th and 21st centuries because information technology puts the human body in question. Hayles maintains that we must be conscious of information technology advancements while understanding information as "disembodied," that is, something which cannot fundamentally replace the human body but can only be incorporated into it and into human life practices.

Post-posthumanism and post-cyborg ethics

The idea of post-posthumanism (post-cyborgism) has recently been introduced. This body of work outlines the after-effects of long-term adaptation to cyborg technologies and their subsequent removal (e.g., what happens after 20 years of constantly wearing computer-mediating eyeglass technologies and then removing them, or after long-term adaptation to virtual worlds followed by a return to "reality"), as well as the associated post-cyborg ethics (e.g., the ethics of forced removal of cyborg technologies by authorities).

Posthuman political and natural rights have been framed on a spectrum with animal rights and human rights. Posthumanism broadens the scope of what it means to be a valued life form and to be treated as such (in contrast to certain life forms being seen as less-than and being taken advantage of or killed off); it “calls for a more inclusive definition of life, and a greater moral-ethical response, and responsibility, to non-human life forms in the age of species blurring and species mixing. … [I]t interrogates the hierarchic ordering—and subsequently exploitation and even eradication—of life forms.”

Hybrid Interfaces: Supersenses, Cyborg Systems, and Hybrid Bodies

Technology integrated into the human body changes how individuals interact with the external world. Sensory activity is mediated by technology, creating a new interface with the world. The introduction of nanotechnologies and hybrid computing into the organism alters the normal perception and cognition of things and the world. The fusion of the human body with technology within the organism lays the groundwork for the emergence of individuals endowed with new attributes and capabilities. Human beings and the modification of their psycho-physical characteristics become subjects of direct manipulation, necessitating a reevaluation of the concept of humanity from various humanistic, philosophical, and biological perspectives.

The human ability to incorporate inorganic elements of a technological nature into one's own body can radically alter both inner and outer appearance, transforming individuals into cyborgs. This new hybrid form replaces the humanistic view of humanity and raises a series of new philosophical questions concerning ethics and human nature.

Especially for new generations, the combination of carnal body and virtual body can determine forms of identity hybridization and possible negative effects on identity formation.

Transhumanism

Definition

According to transhumanist thinkers, a posthuman is a hypothetical future being "whose basic capacities so radically exceed those of present humans as to be no longer, unambiguously, human by our current standards." Discussions of posthumans in this sense focus primarily on cybernetics, the posthuman consequent, and the relationship to digital technology. Steve Nichols published the Posthuman Movement manifesto in 1988; his early evolutionary theory of mind (MVT) allows for the development of sentient E1 brains, and the emphasis is on systems. Transhumanism does not focus on either of these. Instead, transhumanism focuses on the modification of the human species via any kind of emerging science, including genetic engineering, digital technology, and bioengineering. Transhumanism is sometimes criticized for not adequately addressing the scope of posthumanism and its concerns about the evolution of humanism.

Methods

Posthumans could be completely synthetic artificial intelligences, or a symbiosis of human and artificial intelligence, or uploaded consciousnesses, or the result of making many smaller but cumulatively profound technological augmentations to a biological human, i.e. a cyborg. Some examples of the latter are redesigning the human organism using advanced nanotechnology or radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, life extension therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable or implanted computers, and cognitive techniques.

Posthuman future

As used in this article, "posthuman" does not necessarily refer to a conjectured future where humans are extinct or otherwise absent from the Earth. Kevin Warwick says that both humans and posthumans will continue to exist, but the latter will predominate in society over the former because of their abilities. Recently, scholars have begun to speculate that posthumanism provides an alternative analysis of apocalyptic cinema and fiction, often casting vampires, werewolves, zombies and greys as potential evolutions of the human form and being. With these potential evolutions of humans and posthumans, human-centered ways of thinking need also to be inclusive of these new posthumans. The new "post human resists binary categories and, instead, integrates the human and the nonhuman." Human-centered thinking needs to be reworked in a way that includes posthumanism.

Many science fiction authors, such as Greg Egan, H. G. Wells, Isaac Asimov, Bruce Sterling, Frederik Pohl, Greg Bear, Charles Stross, Neal Asher, Ken MacLeod, Peter F. Hamilton, Ann Leckie, and authors of the Orion's Arm Universe, have written works set in posthuman futures.

Posthuman god

A variation on the posthuman theme is the notion of a "posthuman god"; the idea that posthumans, being no longer confined to the parameters of human nature, might grow physically and mentally so powerful as to appear possibly god-like by present-day human standards. This notion should not be interpreted as being related to the idea portrayed in some science fiction that a sufficiently advanced species may "ascend" to a higher plane of existence; rather, it merely means that some posthuman beings may become so exceedingly intelligent and technologically sophisticated that their behaviour could not possibly be comprehensible to modern humans, purely by reason of their limited intelligence and imagination.

Human extinction

From Wikipedia, the free encyclopedia
Nuclear war is an often-predicted cause of the extinction of humankind.

Human extinction or omnicide is the end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction).

Some of the many possible contributors to anthropogenic hazards are climate change, global nuclear annihilation, biological warfare, weapons of mass destruction, and ecological collapse. Other scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots.

The scientific consensus is that there is a relatively low risk of near-term human extinction due to natural causes. The likelihood of human extinction through humankind's own activities, however, is a current area of research and debate.

History of thought

Early history

Before the 18th and 19th centuries, the possibility that humans or other organisms could become extinct was viewed with scepticism. It contradicted the principle of plenitude, a doctrine that all possible things exist. The principle traces back to Aristotle and was an important tenet of Christian theology. Ancient philosophers such as Plato, Aristotle, and Lucretius wrote of the end of humankind only as part of a cycle of renewal. Marcion of Sinope was a proto-Protestant who advocated for antinatalism that could lead to human extinction. Later philosophers such as Al-Ghazali, William of Ockham, and Gerolamo Cardano expanded the study of logic and probability and began wondering if abstract worlds existed, including a world without humans. Physicist Edmond Halley stated that the extinction of the human race may be beneficial to the future of the world.

The notion that species can become extinct gained scientific acceptance during the Age of Enlightenment in the 17th and 18th centuries, and by 1800 Georges Cuvier had identified 23 extinct prehistoric species. The doctrine was further gradually bolstered by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist and the development of theories of evolution. In On the Origin of Species, Charles Darwin discussed the extinction of species as a natural process and a core component of natural selection. Notably, Darwin was skeptical of the possibility of sudden extinction, viewing it as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions but rather represented unrecognized gaps in the record.

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction. In the 19th century, human extinction became a popular topic in science (e.g., Thomas Robert Malthus's An Essay on the Principle of Population) and fiction (e.g., Jean-Baptiste Cousin de Grainville's The Last Man). In 1863, a few years after Darwin published On the Origin of Species, William King proposed that Neanderthals were an extinct species of the genus Homo. The Romantic authors and poets were particularly interested in the topic. Lord Byron wrote about the extinction of life on Earth in his 1816 poem "Darkness," and in 1824 envisaged humanity being threatened by a comet impact and employing a missile system to defend against it. Mary Shelley's 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague. At the turn of the 20th century, Russian cosmism, a precursor to modern transhumanism, advocated avoiding humanity's extinction by colonizing space.

Atomic era

Castle Romeo nuclear test on Bikini Atoll

The invention of the atomic bomb prompted a wave of discussion among scientists, intellectuals, and the public at large about the risk of human extinction. In a 1945 essay, Bertrand Russell wrote:

The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.

In 1950, Leo Szilard suggested it was technologically feasible to build a cobalt bomb that could render the planet unlivable. A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind". Rachel Carson's 1962 book Silent Spring raised awareness of environmental catastrophe. In 1983, Brandon Carter proposed the Doomsday argument, which used Bayesian probability to predict the total number of humans that will ever exist.

The discovery of "nuclear winter" in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the severity of extinction solely in terms of those who die "conceals its full impact," and that nuclear war "imperils all of our descendants, for as long as there will be humans."

Post-Cold War

John Leslie's 1996 book The End of the World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats to the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided. Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks, published in 2008, is a collection of essays from 26 academics on various global catastrophic and existential risks. Nicholas P. Money's 2019 book The Selfish Ape delves into the environmental consequences of overexploitation. Toby Ord's 2020 book The Precipice argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology. Lyle Lewis' 2024 book Racing to Extinction explores the roots of human extinction from an evolutionary biology perspective. Lewis argues that humanity treats unused natural resources as waste and is driving ecological destruction through overexploitation, habitat loss, and denial of environmental limits. He uses vivid examples, like the extinction of the passenger pigeon and the environmental cost of rice production, to show how interconnected and fragile ecosystems are.

Causes

Potential anthropogenic causes of human extinction include global thermonuclear war, deployment of a highly effective biological weapon, ecological collapse, runaway artificial intelligence, runaway nanotechnology (such as a grey goo scenario), overpopulation and increased consumption causing resource depletion and a concomitant population crash, population decline by choosing to have fewer children, and displacement of naturally evolved humans by a new species produced by genetic engineering or technological augmentation. Natural and external extinction risks include high-fatality-rate pandemic, supervolcanic eruption, asteroid impact, nearby supernova or gamma-ray burst, or extreme solar flare.

Humans (e.g., Homo sapiens sapiens) as a species may also be considered to have "gone extinct" simply by being replaced with distant descendants whose continued evolution may produce new species or subspecies of Homo or of hominids.

Without intervention from unforeseen forces, the stellar evolution of the Sun is expected to render Earth uninhabitable and ultimately lead to its destruction. The entire universe may eventually become uninhabitable, depending on its ultimate fate and the processes that govern it.

Probability

Natural vs. anthropogenic

Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks. A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk. Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high enough, humanity wouldn't have survived this long. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 per year (equivalent to 1 in 140 per century, on average).
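
A minimal sketch of that reasoning, under the simplifying assumption of a constant, independent annual probability of extinction from natural causes (the published bound rests on a more careful treatment of humanity's track record):

```python
# Toy model, not the published derivation: assume a constant annual probability
# of extinction from natural causes, independent across years. The chance of
# surviving the ~200,000 years of Homo sapiens' history is then
# (1 - annual_risk) ** 200_000; the higher the assumed annual risk, the less
# probable that survival record becomes, which is the intuition behind bounds
# of this kind.
YEARS_SURVIVED = 200_000

def survival_probability(annual_risk: float, years: int = YEARS_SURVIVED) -> float:
    """Probability of no natural-cause extinction over `years` at a constant annual risk."""
    return (1.0 - annual_risk) ** years

for annual_risk in (1 / 1_000, 1 / 14_000, 1 / 87_000, 1 / 1_000_000):
    print(f"annual risk 1 in {round(1 / annual_risk):>9,}: "
          f"P(surviving {YEARS_SURVIVED:,} years) = {survival_probability(annual_risk):.3e}")
```

Under this toy model, an annual risk as high as 1 in 1,000 would make 200,000 years of survival astronomically unlikely, while much lower annual risks remain comfortably compatible with humanity's track record.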

Another empirical method to study the likelihood of certain natural risks is to investigate the geological record. For example, the probability of a comet or asteroid impact event sufficient in scale to cause an impact winter and human extinction before the year 2100 has been estimated at one in a million. Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity. The geological record suggests that supervolcanic eruptions occur on average about once every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction. Famously, the supervolcano Mt. Toba may have almost wiped out humanity at the time of its last eruption (though this is contentious).

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances. Humanity has survived for only about 80 years since the creation of nuclear weapons, and there is no historical track record at all for future technologies. This has led thinkers like Carl Sagan to conclude that humanity is currently in a "time of perils": a uniquely dangerous period in human history, beginning when humans first started posing risks to themselves through their own actions, during which it is subject to unprecedented levels of risk. Paleobiologist Olev Vinn has suggested that humans presumably have a number of inherited behavior patterns (IBPs) that are not fine-tuned for the conditions prevailing in technological civilization. Some IBPs may be highly incompatible with such conditions and have a high potential to induce self-destruction. These patterns may include responses of individuals seeking power over conspecifics in relation to harvesting and consuming energy. Nonetheless, there are ways to address the issue of inherited behavior patterns.

Risk estimates

Given the limitations of ordinary observation and modeling, expert elicitation is frequently used instead to obtain probability estimates.

  • Humanity has a 95% probability of being extinct in 8,000,000 years, according to J. Richard Gott's formulation of the controversial doomsday argument, which argues that we have probably already lived through half the duration of human history (see the sketch after this list).
  • In 1996, John A. Leslie estimated a 30% risk over the next five centuries (equivalent to around 6% per century, on average).
  • The Global Challenges Foundation's 2016 annual report estimates an annual probability of human extinction of at least 0.05% per year (equivalent to 5% per century, on average).
  • As of July 29, 2025, Metaculus users estimate a 1% probability of human extinction by 2100.
  • A 2020 study published in Scientific Reports warns that if deforestation and resource consumption continue at current rates, these factors could lead to a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" in the next 20 to 40 years. According to the most optimistic scenario provided by the study, the chances that human civilization survives are smaller than 10%. To avoid this collapse, the study says, humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest."
  • Nick Bostrom, a philosopher at the University of Oxford known for his work on existential risk, argues
    • that it would be "misguided" to assume that the probability of near-term extinction is less than 25%, and
    • that it will be "a tall order" for the human race to "get our precautions sufficiently right the first time," given that an existential risk provides no opportunity to learn from failure.
  • Philosopher John A. Leslie assigns a 70% chance of humanity surviving the next five centuries, based partly on the controversial philosophical doomsday argument that Leslie champions. Leslie's argument is somewhat frequentist, based on the observation that human extinction has never been observed but requires subjective anthropic arguments. Leslie also discusses the anthropic survivorship bias (which he calls an "observational selection" effect) and states that the a priori certainty of observing an "undisastrous past" could make it difficult to argue that we must be safe because nothing terrible has yet occurred. He quotes Holger Bech Nielsen's formulation: "We do not even know if there should exist some extremely dangerous decay of, say, the proton, which caused the eradication of the earth, because if it happens we would no longer be there to observe it, and if it does not happen there is nothing to observe."
  • Jean-Marc Salotti calculated the probability of human extinction caused by a giant asteroid impact. If no planets are colonized, it will be 0.03 to 0.3 for the next billion years. According to that study, the most frightening object is a giant long-period comet with a warning time of only a few years and, therefore, no time for any intervention in space or settlement on the Moon or Mars. The probability of a giant comet impact in the next hundred years is 2.2×10⁻¹².
  • As the United Nations Office for Disaster Risk Reduction estimated in 2023, there is a 2 to 14% (midpoint: 8%) chance of an extinction-level event by 2100, and a 14 to 98% (midpoint: 56%) chance of an extinction-level event by 2700.
  • Bill Gates told The Wall Street Journal on January 27, 2025, that he believes there is a 10–15% (midpoint: 12.5%) chance of a natural pandemic hitting in the next four years, and he estimated a 65–97.5% (midpoint: 81.25%) chance of a natural pandemic hitting in the next 26 years.
  • On March 19, 2025, Henry Gee said that humanity will be extinct within the next 10,000 years. To avoid this, he argued that humanity should establish space colonies within the next 200–300 years.
  • On September 11, 2025, Warp News estimated a 20% chance of global catastrophe and a 6% chance of human extinction by 2100. They also estimated a 100% chance of global catastrophe and a 30% chance of human extinction by 2500.
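
As a worked illustration of the first estimate above, the following sketch reproduces the arithmetic behind J. Richard Gott's "delta t" version of the doomsday argument, assuming Homo sapiens is roughly 200,000 years old (an input chosen here for illustration):

```python
# Gott's "delta t" reasoning: if the present moment is a uniformly random point
# within the total lifespan of our species, then with 95% confidence the elapsed
# fraction f lies between 0.025 and 0.975, so the remaining lifetime,
# past * (1 - f) / f, lies between past / 39 and 39 * past.
PAST_YEARS = 200_000  # rough age of Homo sapiens (illustrative input)

f_low, f_high = 0.025, 0.975
future_max = (1 - f_low) / f_low * PAST_YEARS    # 39 * 200,000 = 7,800,000 years
future_min = (1 - f_high) / f_high * PAST_YEARS  # about 5,100 years

print(f"95% interval for remaining lifespan: {future_min:,.0f} to {future_max:,.0f} years")
```

The upper end, about 7.8 million years, is the basis of the rounded 8,000,000-year figure quoted in the first bullet.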

From nuclear weapons

On November 13, 2024, the American Enterprise Institute estimated a probability of nuclear war during the 21st century between 0% and 80% (midpoint: 40%). A 2023 article in The Economist estimated an 8% chance of nuclear war causing global catastrophe and a 0.5625% chance of nuclear war causing human extinction.

From supervolcanic eruption

On November 13, 2024, the American Enterprise Institute estimated an annual probability of supervolcanic eruption around 0.0067% (0.67% per century on average).

From artificial intelligence

  • A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by superintelligence by 2100.
  • A 2016 survey of AI experts found a median estimate of 5% that human-level AI would cause an outcome that was "extremely bad (e.g., human extinction)". In 2019, the median estimate fell to 2%, but in 2022 it rose back to 5%. In 2023, it doubled to 10%, and in 2024 it increased to 15%.
  • In 2020, Toby Ord estimated the existential risk in the next century at "1 in 6" in his book The Precipice. He also estimated a "1 in 10" risk of extinction by unaligned AI within the next century.
  • According to a July 10, 2023 article in The Economist, scientists estimated a 12% chance of AI-caused catastrophe and a 3% chance of AI-caused extinction by 2100. They also estimated a 100% chance of AI-caused catastrophe and a 25% chance of AI-caused extinction by 2833.
  • On December 27, 2024, Geoffrey Hinton estimated a 10–20% (midpoint: 15%) probability of AI-caused extinction in the next 30 years. He also estimated a 50–100% (midpoint: 75%) probability of AI-caused extinction in the next 150 years.
  • On May 6, 2025, Scientific American estimated a 0–10% (midpoint: 5%) probability of AI-caused extinction by 2100.
  • On August 1, 2025, Holly Elmore estimated a 15–20% (midpoint: 17.5%) probability of AI-caused extinction in the next 1–10 years (midpoint: 5.5 years). She also estimated a 75–100% (midpoint: 87.5%) probability of AI-caused extinction in the next 5–50 years (midpoint: 27.5 years).
  • On November 10, 2025, Elon Musk estimated the probability of AI-driven human extinction at 20%, while others, including Yoshua Bengio's colleagues, placed the risk anywhere between 10% and 90% (midpoint: 50%). Taken together, these estimates span roughly 20–50% (midpoint: 35%) for AI-caused extinction.

From climate change

Placard against omnicide, at Extinction Rebellion (2018)

In a 2010 interview with The Australian, the late Australian scientist Frank Fenner predicted the extinction of the human race within a century, primarily as the result of human overpopulation, environmental degradation, and climate change. There are several economists who have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.

Individual vs. species risks

Although existential risks are less manageable by individuals than, for example, health risks, according to Ken Olum, Joshua Knobe, and Alexander Vilenkin, the possibility of human extinction does have practical implications. For instance, if the "universal" doomsday argument is accepted, it changes the most likely source of disasters and hence the most efficient means of preventing them.

Difficulty

Some scholars argue that certain scenarios, including global thermonuclear war, would struggle to eradicate every last settlement on Earth. Physicist Willard Wells points out that any credible extinction scenario would have to reach into a diverse set of areas, including the underground subways of major cities, the mountains of Tibet, the remotest islands of the South Pacific, and even McMurdo Station in Antarctica, which has contingency plans and supplies for long isolation. In addition, elaborate bunkers exist for government leaders to occupy during a nuclear war. The existence of nuclear submarines, capable of remaining hundreds of meters deep in the ocean for potentially years, should also be taken into account. Any number of events could lead to a massive loss of human life, but if the last few, most resilient humans (see minimum viable population) are unlikely to die off as well, then that particular human extinction scenario may not seem credible.

Ethics

Value of human life

"Existential risks" are risks that threaten the entire future of humanity, whether by causing human extinction or by otherwise permanently crippling human progress. Multiple scholars have argued, based on the size of the "cosmic endowment," that because of the inconceivably large number of potential future lives that are at stake, even small reductions of existential risk have enormous value.

In one of the earliest discussions of the ethics of human extinction, Derek Parfit offers the following thought experiment:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

— Derek Parfit

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential: what humanity could expect to achieve if it survived. From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good life is for future people). On average, species survive for around a million years before going extinct, and Parfit points out that the Earth will remain habitable for around a billion years. These might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years. The potential that would be forgone were humanity to become extinct is therefore very large, so reducing existential risk by even a small amount would have a very significant moral value.

Carl Sagan wrote in 1983:

If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.

Philosopher Robert Adams in 1989 rejected Parfit's "impersonal" views but spoke instead of a moral imperative for loyalty and commitment to "the future of humanity as a vast project... The aspiration for a better society—more just, more rewarding, and more peaceful... our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects."

Philosopher Nick Bostrom argues in 2013 that preference-satisfactionist, democratic, custodial, and intuitionist arguments all converge on the common-sense view that preventing existential risk is a high moral priority, even if the exact "degree of badness" of human extinction varies between these philosophies.

Parfit argues that the size of the "cosmic endowment" can be calculated from the following argument: if Earth remains habitable for a billion more years and can sustainably support a population of more than a billion humans, then there is a potential for 10¹⁶ (or 10,000,000,000,000,000) human lives of normal duration. Bostrom goes further, stating that if the universe is empty, then the accessible universe can support at least 10³⁴ biological human life-years and, if some humans were uploaded onto computers, could even support the equivalent of 10⁵⁴ cybernetic human life-years.
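
A worked version of Parfit's arithmetic; the roughly 100-year length of a life of "normal duration" is an assumption made for this illustration:

```python
# Parfit's cosmic-endowment arithmetic, spelled out.
habitable_years = 1e9        # Earth remains habitable for about a billion more years
sustained_population = 1e9   # at least a billion people alive at any one time
lifespan_years = 100         # assumed length of a life of "normal duration"

potential_lives = habitable_years / lifespan_years * sustained_population
print(f"potential future lives: {potential_lives:.0e}")  # about 1e+16
```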

Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking. While these views are controversial, even on such views an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people.

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity—it would destroy all cultural artifacts, languages, and traditions, and many of the things we value. So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided. One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership...between those who are living, those who are dead, and those who are to be born". If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to "pay it forward" and ensure that humanity's inheritance is passed down to future generations.

Voluntary extinction

Voluntary Human Extinction Movement

Some philosophers adopt the antinatalist position that human extinction would be a beneficial thing. David Benatar argues that coming into existence is always serious harm, and therefore it is better that people do not come into existence in the future. Further, Benatar, animal rights activist Steven Best, and anarchist Todd May posit that human extinction would be a positive thing for the other organisms on the planet and the planet itself, citing, for example, the omnicidal nature of human civilization. The environmental view in favor of human extinction is shared by the members of the Voluntary Human Extinction Movement and the Church of Euthanasia, who call for refraining from reproduction and allowing the human species to go peacefully extinct, thus stopping further environmental degradation.

In fiction

Jean-Baptiste Cousin de Grainville's 1805 science fantasy novel Le dernier homme (The Last Man), which depicts human extinction due to infertility, is considered the first modern apocalyptic novel and credited with launching the genre. Other notable early works include Mary Shelley's 1826 The Last Man, depicting human extinction caused by a pandemic, and Olaf Stapledon's 1937 Star Maker, "a comparative study of omnicide."

Some 21st-century pop-science works, including The World Without Us by Alan Weisman and the television specials Life After People and Aftermath: Population Zero, pose a thought experiment: what would happen to the rest of the planet if humans suddenly disappeared? A threat of human extinction, such as through a technological singularity (also called an intelligence explosion), drives the plot of innumerable science fiction stories; an influential early example is the 1951 film adaptation of When Worlds Collide. Usually the extinction threat is narrowly avoided, but some exceptions exist, such as R.U.R. and Steven Spielberg's A.I.

Artificial consciousness

From Wikipedia, the free encyclopedia

Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is consciousness hypothesized to be possible for artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.

The term "sentience" can be used when specifically designating ethical considerations stemming from a form of phenomenal consciousness (P-consciousness, or the ability to feel qualia). Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with non-human animals.

Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness (NCC). Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious. Some scholars reject the possibility of artificial consciousness.

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution. In his 2001 article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.

Thought experiments

The "fading qualia" (left) and the "dancing qualia" (right) are two thought experiments about consciousness and brain replacement. Chalmers argues that both are contradicted by the lack of reaction of the subject to changing perception, and are thus impossible in practice. He concludes that the equivalent silicon brain will have the same perceptions as the biological brain.

David Chalmers proposed two thought experiments intending to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.

The "fading qualia" is a reductio ad absurdum thought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on a silicon chip. Chalmers makes the hypothesis, knowing it in advance to be absurd, that "the qualia fade or disappear" when neurons are replaced one-by-one with identical silicon equivalents. Since the original neurons and their silicon counterparts are functionally identical, the brain's information processing should remain unchanged, and the subject's behaviour and introspective reports would stay exactly the same. Chalmers argues that this leads to an absurd conclusion: the subject would continue to report normal conscious experiences even as their actual qualia fade away. He concludes that the subject's qualia actually don't fade, and that the resulting robotic brain, once every neuron is replaced, would remain just as sentient as the original biological brain.

Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic silicon chip, that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but it would perceive the same qualia as the biological system (e.g., seeing the same color).

Critics object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization. Van Heuveln et al. argue that the dancing qualia argument contains an equivocation fallacy, conflating a "change in experience" between two systems with an "experience of change" within a single system. Mogensen argues that the fading qualia argument can be resisted by appealing to vagueness at the boundaries of consciousness and the holistic structure of conscious neural activity, which suggests consciousness may require specific biological substrates rather than being substrate-independent.

Greg Egan's short story "Learning to Be Me" (discussed in the In fiction section below) illustrates how undetectable the duplication of the brain and its functionality could be from a first-person perspective.

In large language models

In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the chatbot's behavior was judged by the scientific community as likely a consequence of mimicry rather than machine sentience, and his claim was widely derided. Moreover, attributing consciousness solely on the basis of LLM outputs, or of the immersive experience created by an algorithm, is considered a fallacy. However, while philosopher Nick Bostrom states that LaMDA is unlikely to be conscious, he additionally poses the question of "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, would have to understand how consciousness works, and would then have to figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain. [...] there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."

David Chalmers argued in 2023 that LLMs today display impressive conversational and general intelligence abilities, but are likely not conscious yet, as they lack some features that may be necessary, such as recurrent processing, a global workspace, and unified agency. Nonetheless, he considers that non-biological systems can be conscious, and suggested that future, extended models (LLM+s) incorporating these elements might eventually meet the criteria for consciousness, raising both profound scientific questions and significant ethical challenges. However, the view that consciousness can exist without biological phenomena is controversial and some reject it.

Kristina Šekrst cautions that anthropomorphic terms such as "hallucination" can obscure important ontological differences between artificial and human cognition. While LLMs may produce human-like outputs, she argues that it does not justify ascribing mental states or consciousness to them. Instead, she advocates for an epistemological framework (such as reliabilism) that recognizes the distinct nature of AI knowledge production. She suggests that apparent understanding in LLMs may be a sophisticated form of AI hallucination. She also questions what would happen if an LLM were trained without any mention of consciousness.

Testing

Phenomenologically, consciousness is an inherently first-person phenomenon. Because of that, and because of the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the hard problem of consciousness. In the case of AI, there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable. Additionally, some chatbots have been trained to say they are not conscious.

A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.

In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge of these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can only detect, not refute, the existence of consciousness. Just as with the Turing test, a positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g., what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer within a larger machine presents a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though they have often been a theme in fiction.

AI sentience would give rise to concerns of welfare and legal protection, whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.

Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness, such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."

Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.

In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering". David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".

Aspects of consciousness

Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of artificial consciousness (AC) research is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many other aspects not covered.

Subjective experience

Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Others use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering. Explaining why and how subjective experience arises is known as the hard problem of consciousness.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that a process, and not only a state or an object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined, and it is also useful for making predictions. Such modeling requires a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.
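
For readers unfamiliar with Kanerva's model, the following is a minimal, illustrative sparse distributed memory in Python; the dimensions, activation radius, and auto-associative usage are chosen for this sketch and are not taken from the IDA implementation.

```python
import numpy as np

# Sparse distributed memory (SDM), minimal sketch: data are stored in counters
# attached to randomly placed "hard locations"; a write updates every location
# within a Hamming radius of the write address, and a read sums the counters of
# the locations near the read address and thresholds the result.
rng = np.random.default_rng(0)

N_BITS = 256        # dimensionality of addresses and data words
N_LOCATIONS = 2000  # number of randomly placed hard locations
RADIUS = 112        # activation radius in Hamming distance

hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_BITS))
counters = np.zeros((N_LOCATIONS, N_BITS), dtype=np.int32)

def active(address):
    """Indices of hard locations within the Hamming radius of `address`."""
    distances = np.sum(hard_addresses != address, axis=1)
    return np.flatnonzero(distances <= RADIUS)

def write(address, data):
    """Add +1 for a 1-bit and -1 for a 0-bit to every activated location's counters."""
    counters[active(address)] += np.where(data == 1, 1, -1).astype(np.int32)

def read(address):
    """Sum counters over activated locations and threshold at zero."""
    summed = counters[active(address)].sum(axis=0)
    return (summed > 0).astype(np.int8)

pattern = rng.integers(0, 2, size=N_BITS)
write(pattern, pattern)                      # store auto-associatively
noisy = pattern.copy()
flipped = rng.choice(N_BITS, size=10, replace=False)
noisy[flipped] ^= 1                          # corrupt 10 bits of the cue
recalled = read(noisy)
print("bits recovered:", int(np.sum(recalled == pattern)), "of", N_BITS)
```

Reading with a corrupted cue still activates most of the locations the original write touched, so the stored pattern is typically recovered in full; this error-correcting behavior is what makes the architecture attractive as a model of episodic memory.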

Learning

Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur or to take preemptive action to avert anticipated events. The implication here is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. In order to do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chess board, but also for novel environments that may change, to be executed only when appropriate to simulate and control the real world.

Functionalist theories of consciousness

Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It therefore allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships. Functionalism is particularly popular among philosophers.

A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.

Implementation proposals

Symbolic or hybrid

Learning Intelligent Distribution Agent

Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness called the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.
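
A rough illustration of that cycle follows; it is not the LIDA codebase, and the codelets, salience mechanism, and behaviors are invented for this sketch.

```python
import random
from dataclasses import dataclass

random.seed(1)

# One simplified "cognitive cycle" in the global-workspace style: attention
# codelets form coalitions around percepts, the most salient coalition wins the
# workspace and is broadcast, and action selection reacts to the broadcast.
@dataclass
class Coalition:
    content: str
    salience: float  # how strongly attention codelets advocate this content

def understanding(percepts):
    """Perception/attention codelets turn raw percepts into coalitions."""
    return [Coalition(p, random.random()) for p in percepts]

def consciousness(coalitions):
    """The most salient coalition wins the workspace and is broadcast."""
    return max(coalitions, key=lambda c: c.salience)

def action_selection(broadcast, behaviors):
    """Behaviors whose trigger matches the broadcast content compete; take the first match."""
    for trigger, action in behaviors:
        if trigger in broadcast.content:
            return action
    return "do nothing"

behaviors = [("obstacle", "turn left"), ("charger", "dock and recharge")]
percepts = ["obstacle ahead", "charger to the right", "ambient noise"]

coalitions = understanding(percepts)
winner = consciousness(coalitions)
print("broadcast:", winner.content, "->", action_selection(winner, behaviors))
```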

CLARION cognitive architecture

The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study, through psychological experiments, how consciousness might work.

OpenCog

Ben Goertzel made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.

Connectionist

Haikonen's cognitive architecture

Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."

Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many. A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").

Creativity Machine

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI), or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.
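
A toy sketch of that noise-and-critic loop, with the networks reduced to trivial functions and every number invented for illustration (this is not Thaler's patented DAGUI system):

```python
import random

random.seed(0)

# A "trained" generator network, reduced here to a fixed weighted sum.
weights = [0.8, 0.6]

def generate(inputs, w):
    """Stand-in for a trained neural net producing an output from its weights."""
    return sum(wi * xi for wi, xi in zip(w, inputs))

def perturb(w, noise_level):
    """Inject synaptic noise: degrade each weight by a random amount."""
    return [wi + random.gauss(0.0, noise_level) for wi in w]

def critic(output, target=1.2, tolerance=0.3):
    """The critic keeps only confabulations it judges potentially useful."""
    return abs(output - target) <= tolerance

inputs = [1, 1]
ideas = []
for _ in range(20):
    noisy_weights = perturb(weights, noise_level=0.2)
    candidate = generate(inputs, noisy_weights)
    if critic(candidate):   # the critic governs which confabulations survive
        ideas.append(round(candidate, 3))

print("accepted confabulations:", ideas)
```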

"Self-modeling"

Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots and other forms of AI. Self-modeling consists of a robot running an internal model or simulation of itself. According to this definition, self-awareness is "the acquired ability to imagine oneself in the future". This definition allows for a continuum of self-awareness levels, depending on the horizon and fidelity of the self-simulation. Consequently, as machines learn to simulate themselves more accurately and further into the future, they become more self-aware.
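
A minimal sketch of that idea, with the dynamics, names, and numbers assumed for illustration: the agent rolls its internal self-model forward to "imagine itself in the future" and picks the plan whose imagined outcome lands closest to a goal.

```python
# Toy self-modeling agent: it simulates its own dynamics before acting.
def world_step(position, velocity, push):
    """The environment the agent actually acts in (simple damped dynamics)."""
    velocity = velocity + push - 0.1 * velocity
    return position + velocity, velocity

class SelfModelingAgent:
    def __init__(self):
        # The agent's internal model of itself; here a perfect copy of the world
        # dynamics, whereas in practice it would be learned and only approximate.
        self.self_model = world_step

    def imagine(self, position, velocity, plan):
        """Roll the self-model forward: 'imagine oneself in the future'."""
        for push in plan:
            position, velocity = self.self_model(position, velocity, push)
        return position

    def choose_plan(self, position, velocity, goal, candidate_plans):
        """Pick the plan whose imagined final position is closest to the goal."""
        return min(candidate_plans,
                   key=lambda plan: abs(self.imagine(position, velocity, plan) - goal))

agent = SelfModelingAgent()
plans = [[0.0] * 5, [0.3] * 5, [1.0, 1.0, 0.0, 0.0, 0.0]]
best = agent.choose_plan(position=0.0, velocity=0.0, goal=4.0, candidate_plans=plans)

# Execute the chosen plan in the real environment and see where the agent ends up.
position, velocity = 0.0, 0.0
for push in best:
    position, velocity = world_step(position, velocity, push)
print("chosen plan:", best, "-> final position:", round(position, 2))
```

The longer and more faithfully the agent can run this internal simulation, the more self-aware it is by Lipson's definition, which is the continuum described in the paragraph above.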

In fiction

In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.

In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was destined to become immensely powerful but started out knowing practically nothing, making it similar to an artificial consciousness.

In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.

In Greg Egan's short story Learning to be me, a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age and as a step towards digital immortality, adults undergo a surgery to give control of the body to the jewel, after which the brain is removed and destroyed. The main character is worried that this procedure will kill him, as he identifies with the biological brain. But before the surgery, he endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.

Class struggle

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Class_st...