Saturday, April 4, 2026

Human extinction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_extinction
Nuclear war is an often predicted cause of the extinction of humankind.

Human extinction, or omnicide, is the end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction).

Some of the many possible contributors to anthropogenic hazards are climate change, global nuclear annihilation, biological warfare, weapons of mass destruction, and ecological collapse. Other scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots.

The scientific consensus is that there is a relatively low risk of near-term human extinction due to natural causes. The likelihood of human extinction through humankind's own activities, however, is a current area of research and debate.

History of thought

Early history

Before the 18th and 19th centuries, the possibility that humans or other organisms could become extinct was viewed with scepticism. It contradicted the principle of plenitude, a doctrine that all possible things exist. The principle traces back to Aristotle and was an important tenet of Christian theology. Ancient philosophers such as Plato, Aristotle, and Lucretius wrote of the end of humankind only as part of a cycle of renewal. Marcion of Sinope was a proto-Protestant who advocated for antinatalism that could lead to human extinction. Later philosophers such as Al-Ghazali, William of Ockham, and Gerolamo Cardano expanded the study of logic and probability and began wondering if abstract worlds existed, including a world without humans. The astronomer Edmond Halley stated that the extinction of the human race might be beneficial to the future of the world.

The notion that species can become extinct gained scientific acceptance during the Age of Enlightenment in the 17th and 18th centuries, and by 1800 Georges Cuvier had identified 23 extinct prehistoric species. The doctrine was further gradually bolstered by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist and the development of theories of evolution. In On the Origin of Species, Charles Darwin discussed the extinction of species as a natural process and a core component of natural selection. Notably, Darwin was skeptical of the possibility of sudden extinction, viewing it as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions but rather represented unrecognized gaps in the record.

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction. In the 19th century, human extinction became a popular topic in science (e.g., Thomas Robert Malthus's An Essay on the Principle of Population) and fiction (e.g., Jean-Baptiste Cousin de Grainville's The Last Man). In 1863, a few years after Darwin published On the Origin of Species, William King proposed that Neanderthals were an extinct species of the genus Homo. The Romantic authors and poets were particularly interested in the topic. Lord Byron wrote about the extinction of life on Earth in his 1816 poem "Darkness," and in 1824 envisaged humanity being threatened by a comet impact and employing a missile system to defend against it. Mary Shelley's 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague. At the turn of the 20th century, Russian cosmism, a precursor to modern transhumanism, advocated avoiding humanity's extinction by colonizing space.

Atomic era

Castle Romeo nuclear test on Bikini Atoll

The invention of the atomic bomb prompted a wave of discussion among scientists, intellectuals, and the public at large about the risk of human extinction. In a 1945 essay, Bertrand Russell stated:

The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.

In 1950, Leo Szilard suggested it was technologically feasible to build a cobalt bomb that could render the planet unlivable. A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind." Rachel Carson's 1962 book Silent Spring raised awareness of environmental catastrophe. In 1983, Brandon Carter proposed the Doomsday argument, which used Bayesian probability to predict the total number of humans that will ever exist.

The discovery of "nuclear winter" in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the severity of extinction solely in terms of those who die "conceals its full impact," and that nuclear war "imperils all of our descendants, for as long as there will be humans."

Post-Cold War

The end of the Cold War led to an explosion of literature about human extinction. John Leslie's 1996 book The End of the World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats to the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided. Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks, published in 2008, is a collection of essays from 26 academics on various global catastrophic and existential risks. Nicholas P. Money's 2019 book The Selfish Ape delves into the environmental consequences of overexploitation.

Toby Ord's 2020 book The Precipice argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology. Lyle Lewis' 2024 book Racing to Extinction explores the roots of human extinction from an evolutionary biology perspective. Lewis argues that humanity treats unused natural resources as waste and is driving ecological destruction through overexploitation, habitat loss, and denial of environmental limits. He uses vivid examples, like the extinction of the passenger pigeon and the environmental cost of rice production, to show how interconnected and fragile ecosystems are. Henry Gee's book The Rise and Fall of the Human Empire (2025) argues that humanity is on the brink of extinction due to environmental degradation and diminishing resources.

In 2022, a group of scientists called for a new research agenda to assess the possible catastrophic effects of climate change, including scenarios that could kill 10% of the world's population or even all of humanity. They argue that the IPCC should produce a report on catastrophic climate change, because the effects of extreme warming, such as famine, severe weather, war, and disease outbreaks, remain understudied. The researchers stress the importance of understanding potential tipping points and interacting threats in order to improve preparedness for worst-case scenarios.

Probability

Natural vs. anthropogenic

Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks. A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk. Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high enough, humanity wouldn't have survived this long. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 per year (equivalent to 1 in 140 per century, on average).
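
The logic can be made concrete with a minimal sketch (assuming, for illustration, a constant annual extinction probability p; the researchers' published formalization is more careful). The probability of surviving T years is

  \[ P(\text{survive } T \text{ years}) = (1 - p)^{T} \approx e^{-pT}. \]

Taking p = 1/14,000 and T = 200,000 years gives

  \[ (1 - 1/14{,}000)^{200{,}000} \approx e^{-14.3} \approx 6 \times 10^{-7}, \]

so any constant natural risk at or above that level would make humanity's observed 200,000-year survival astronomically improbable.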

Another empirical method to study the likelihood of certain natural risks is to investigate the geological record. For example, the probability that a comet or asteroid impact large enough to cause an impact winter and human extinction will occur before the year 2100 has been estimated at one in a million. Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity. The geological record suggests that supervolcanic eruptions occur on average about once every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction. Famously, the Toba supervolcano may have come close to wiping out humanity at the time of its last eruption (though this is contentious).

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances. Humanity has survived only 80 years since the creation of nuclear weapons, and there is no historical track record at all for future technologies. This has led thinkers like Carl Sagan to conclude that humanity is currently in a "time of perils": a uniquely dangerous period in human history, beginning when humans first became capable of posing such risks to themselves, in which it is subject to unprecedented levels of risk. Paleobiologist Olev Vinn has suggested that humans presumably carry a number of inherited behavior patterns (IBPs) that are not fine-tuned for the conditions prevailing in technological civilization. Some IBPs may be highly incompatible with such conditions and have a high potential to induce self-destruction; these may include the responses of individuals seeking power over conspecifics in relation to harvesting and consuming energy. Vinn nonetheless holds that there are ways to address the problems such inherited patterns pose.

Risk estimates

Given the limitations of ordinary observation and modeling, expert elicitation is frequently used instead to obtain probability estimates.

  • Humanity has a 95% probability of being extinct in 7,800,000 years, according to J. Richard Gott's formulation of the controversial doomsday argument, which argues that we have probably already lived through half the duration of human history (a worked sketch of this bound follows this list).
  • In 1996, John A. Leslie estimated a 30% risk over the next five centuries (equivalent to around 6% per century, on average).
  • The Global Challenges Foundation's 2016 annual report estimates a probability of human extinction of at least 0.05% per year (equivalent to 5% per century, on average).
  • As of July 29, 2025, Metaculus users estimate a 1% probability of human extinction by 2100.
  • A 2020 study published in Scientific Reports warns that if deforestation and resource consumption continue at current rates, these factors could lead to a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" in the next 20 to 40 years. According to the most optimistic scenario provided by the study, the chances that human civilization survives are smaller than 10%. To avoid this collapse, the study says, humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest."
  • Nick Bostrom, a philosopher at the University of Oxford known for his work on existential risk, argues
    • that it would be "misguided" to assume that the probability of near-term extinction is less than 25%, and
    • that it will be "a tall order" for the human race to "get our precautions sufficiently right the first time," given that an existential risk provides no opportunity to learn from failure.
  • Philosopher John A. Leslie assigns a 70% chance of humanity surviving the next five centuries, based partly on the controversial philosophical doomsday argument that he champions. Leslie's argument is somewhat frequentist, based on the observation that human extinction has never been observed, but it also requires subjective anthropic arguments. Leslie also discusses the anthropic survivorship bias (which he calls an "observational selection" effect) and states that the a priori certainty of observing an "undisastrous past" could make it difficult to argue that we must be safe because nothing terrible has yet occurred. He quotes Holger Bech Nielsen's formulation: "We do not even know if there should exist some extremely dangerous decay of, say, the proton, which caused the eradication of the earth, because if it happens we would no longer be there to observe it, and if it does not happen there is nothing to observe."
  • Jean-Marc Salotti calculated the probability of human extinction caused by a giant asteroid impact: if no planets are colonized, it is 0.03 to 0.3 over the next billion years. According to that study, the most dangerous object is a giant long-period comet, with a warning time of only a few years and therefore no time for any intervention in space or settlement on the Moon or Mars. The probability of a giant comet impact in the next hundred years is 2.2×10^-12.
  • As the United Nations Office for Disaster Risk Reduction estimated in 2023, there is a 2 to 14% chance of an extinction-level event by 2100.
  • Bill Gates told The Wall Street Journal on January 27, 2025, that he believes there is a 10–15% chance of a natural pandemic hitting in the next four years, but he estimated that there was also a 65–97.5% chance of a natural pandemic hitting in the next 26 years.
  • On March 19, 2025, Henry Gee said that humanity will be extinct within the next 10,000 years. To avoid this, he urged humanity to establish space colonies within the next 200–300 years.
  • On September 11, 2025, Warp News estimated a 20% chance of global catastrophe and a 6% chance of human extinction by 2100. They also estimated a 100% chance of global catastrophe and a 30% chance of human extinction by 2500.
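
For concreteness, the first bullet's figure follows from Gott's "delta t" reasoning roughly as follows (a sketch under Gott's own assumption that our present moment is uniformly random within humanity's total lifespan). With 95% confidence we are in the middle 95% of that lifespan, which bounds the future duration t_f in terms of the past duration t_p:

  \[ \frac{t_p}{39} \;\leq\; t_f \;\leq\; 39\,t_p \quad \text{(95\% confidence)}. \]

With t_p ≈ 200,000 years, the upper bound is 39 × 200,000 = 7.8 million years, matching the published figure.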

From nuclear weapons

On November 13, 2024, the American Enterprise Institute estimated the probability of nuclear war during the 21st century at between 0% and 80%. A 2023 article in The Economist estimated an 8% chance of nuclear war causing global catastrophe and a 0.5625% chance of nuclear war causing human extinction.

From supervolcanic eruption

On November 13, 2024, the American Enterprise Institute estimated an annual probability of supervolcanic eruption around 0.0067% (0.67% per century on average).

From artificial intelligence

  • A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by superintelligence by 2100.
  • A 2016 survey of AI experts found a median estimate of 5% that human-level AI would cause an outcome that was "extremely bad (e.g., human extinction)". In 2019 the median estimate fell to 2%, but in 2022 it rose back to 5%; in 2023 it doubled to 10%, and in 2024 it rose further to 15%.
  • In 2020, Toby Ord estimated existential risk in the next century at "1 in 6" in his book The Precipice. He also estimated a "1 in 10" risk of extinction by unaligned AI within the next century.
  • According to a July 10, 2023 article in The Economist, scientists estimated a 12% chance of AI-caused catastrophe and a 3% chance of AI-caused extinction by 2100. They also estimated a 100% chance of AI-caused catastrophe and a 25% chance of AI-caused extinction by 2833.
  • On December 27, 2024, Geoffrey Hinton estimated a 10-20% probability of AI-caused extinction in the next 30 years. He also estimated a 50-100% probability of AI-caused extinction in the next 150 years.
  • On May 6, 2025, Scientific American estimated a 0-10% probability of an AI-caused extinction by 2100.
  • On August 1, 2025, Holly Elmore estimated a 15-20% probability of an AI-caused extinction in the next 1-10 years. She also estimated a 75-100% probability of an AI-caused extinction in the next 5-50 years.
  • On November 10, 2025, Elon Musk estimated the probability of AI-driven human extinction at 20%, while others, including colleagues of Yoshua Bengio, placed the risk anywhere between 10% and 90%.

From climate change

Placard against omnicide, at Extinction Rebellion (2018)

In a 2010 interview with The Australian, the late Australian scientist Frank Fenner predicted the extinction of the human race within a century, primarily as the result of human overpopulation, environmental degradation, and climate change. Several economists have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.

Individual vs. species risks

Although existential risks are less manageable by individuals than, for example, health risks, according to Ken Olum, Joshua Knobe, and Alexander Vilenkin, the possibility of human extinction does have practical implications. For instance, if the "universal" doomsday argument is accepted, it changes the most likely source of disasters and hence the most efficient means of preventing them.

Difficulty

Some scholars argue that certain scenarios, including global thermonuclear war, would struggle to eradicate every last settlement on Earth. Physicist Willard Wells points out that any credible extinction scenario would have to reach into a diverse set of areas, including the underground subways of major cities, the mountains of Tibet, the remotest islands of the South Pacific, and even McMurdo Station in Antarctica, which has contingency plans and supplies for long isolation. In addition, elaborate bunkers exist for government leaders to occupy during a nuclear war. The existence of nuclear submarines, capable of remaining hundreds of meters deep in the ocean for potentially years, should also be taken into account. Any number of events could lead to a massive loss of human life, but if the last few, most resilient humans (see minimum viable population) are unlikely to die off as well, then that particular human extinction scenario may not seem credible.

Ethics

Value of human life

"Existential risks" are risks that threaten the entire future of humanity, whether by causing human extinction or by otherwise permanently crippling human progress. Multiple scholars have argued, based on the size of the "cosmic endowment," that because of the inconceivably large number of potential future lives that are at stake, even small reductions of existential risk have enormous value.

In one of the earliest discussions of the ethics of human extinction, Derek Parfit offers the following thought experiment:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

— Derek Parfit

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential—what humanity could expect to achieve if it survived. From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good is life for future people). On average, species survive for around a million years before going extinct. Parfit points out that the Earth will remain habitable for around a billion years. And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years. The size of the foregone potential that would be lost were humanity to become extinct is very large. Therefore, reducing existential risk by even a small amount would have a very significant moral value.
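
This calculus can be written compactly (an illustrative formalization; the symbols are chosen here, not drawn from the sources). If humanity survives for duration T, with average population N and average quality of life Q per person-year, the expected value at stake is roughly

  \[ V \;\approx\; T \times N \times Q . \]

Each factor multiplies the others, which is why scenarios that extend humanity's duration or size by many orders of magnitude dominate the calculation.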

Carl Sagan wrote in 1983:

If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.

Philosopher Robert Adams in 1989 rejected Parfit's "impersonal" views but spoke instead of a moral imperative for loyalty and commitment to "the future of humanity as a vast project... The aspiration for a better society—more just, more rewarding, and more peaceful... our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects."

Philosopher Nick Bostrom argued in 2013 that preference-satisfactionist, democratic, custodial, and intuitionist arguments all converge on the common-sense view that preventing existential risk is a high moral priority, even if the exact "degree of badness" of human extinction varies between these philosophies.

Parfit argues that the size of the "cosmic endowment" can be calculated from the following argument: If Earth remains habitable for a billion more years and can sustainably support a population of more than a billion humans, then there is a potential for 10^16 (or 10,000,000,000,000,000) human lives of normal duration. Bostrom goes further, stating that if the universe is empty, then the accessible universe can support at least 10^34 biological human life-years and, if some humans were uploaded onto computers, could even support the equivalent of 10^54 cybernetic human life-years.
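
Parfit's figure is straightforward arithmetic (assuming, for illustration, lives of roughly a century):

  \[ \frac{10^{9}\ \text{years} \times 10^{9}\ \text{people}}{10^{2}\ \text{years per life}} \;=\; 10^{16}\ \text{lives}. \]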

Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking. While these views are controversial, even on them an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people.

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity—it would destroy all cultural artifacts, languages, and traditions, and many of the things we value. So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided. One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership...between those who are living, those who are dead, and those who are to be born". If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to "pay it forward" and ensure that humanity's inheritance is passed down to future generations.

Voluntary extinction

Voluntary Human Extinction Movement

Some philosophers adopt the antinatalist position that human extinction would be a beneficial thing. David Benatar argues that coming into existence is always a serious harm, and therefore it is better that people do not come into existence in the future. Further, Benatar, animal rights activist Steven Best, and anarchist Todd May posit that human extinction would be a positive thing for the other organisms on the planet and the planet itself, citing, for example, the omnicidal nature of human civilization. The environmental view in favor of human extinction is shared by the members of the Voluntary Human Extinction Movement and the Church of Euthanasia, who call for refraining from reproduction and allowing the human species to go peacefully extinct, thus stopping further environmental degradation.

In fiction

Jean-Baptiste Cousin de Grainville's 1805 science fantasy novel Le dernier homme (The Last Man), which depicts human extinction due to infertility, is considered the first modern apocalyptic novel and credited with launching the genre. Other notable early works include Mary Shelley's 1826 The Last Man, depicting human extinction caused by a pandemic, and Olaf Stapledon's 1937 Star Maker, "a comparative study of omnicide."

Some 21st-century pop-science works, including The World Without Us by Alan Weisman and the television specials Life After People and Aftermath: Population Zero, pose a thought experiment: what would happen to the rest of the planet if humans suddenly disappeared? A threat of human extinction, such as through a technological singularity (also called an intelligence explosion), drives the plot of innumerable science fiction stories; an influential early example is the 1951 film adaptation of When Worlds Collide. Usually the extinction threat is narrowly avoided, but some exceptions exist, such as R.U.R. and Steven Spielberg's A.I.

Collapsology

From Wikipedia, the free encyclopedia

The terms collapsology and collapse studies are neologisms used to designate the transdisciplinary study of the risks of collapse of industrial civilization. It is concerned with the general collapse of societies induced by climate change, as well as "scarcity of resources, vast extinctions, and natural disasters."

Although the concept of civilizational or societal collapse had already existed for many years, collapsology focuses its attention on contemporary, industrial, and globalized societies.

Background

The word collapsology was coined and popularized by Pablo Servigne [fr] and Raphaël Stevens in their essay Comment tout peut s'effondrer. Petit manuel de collapsologie à l'usage des générations présentes (How Everything Can Collapse: A Manual for Our Times), published in France in 2015. The concept had already reached a wide audience with Jared Diamond's 2005 book Collapse. Use of the term has since spread, especially among journalists reporting on the deep adaptation writings of Jem Bendell.

Collapsology is based on the idea that humans impact their environment in a sustained and negative way, and promotes the concept of an environmental emergency, linked in particular to global warming and biodiversity loss. Collapsologists believe, however, that the collapse of industrial civilization could be the result of a combination of different crises: environmental, but also economic, geopolitical, democratic, and others.

Recent literature reviews have shown the maturation of collapsology as an academic field. Archaeologist Guy Middleton argues that collapse studies have evolved into "a more nuanced, self-critical, and sophisticated field" that moves beyond environmental determinism and apocalyptic narratives. This evolution has led to applied collapsology, which draws from archaeology and ancient history to inform contemporary sustainability policies and climate change adaptation strategies, making collapse research increasingly relevant for resilience planning. Moreover, Brozović's comprehensive analysis of over 400 academic works identified five key scholarly conversations within collapse research: past collapses (historical and archaeological studies), general explanations of collapse (theoretical frameworks), alternatives to collapse (resilience and adaptation strategies), fictional collapses (speculative fiction and dystopian literature), and future climate change and societal collapse (predictive and scenario-based studies). Additionally, Shackelford and colleagues developed innovative methodologies for systematically reviewing the growing body of existential risk literature, including risks of human extinction and civilizational collapse, using crowdsourcing and machine learning techniques to handle the overwhelming volume of relevant research.

Etymology

The word collapsology is a portmanteau derived from the Latin collapsus ('to fall, to collapse') and the suffix -logy, from the Greek logos ('study'), and is intended to name an approach of a scientific nature.

Since 2015, several words have been proposed to describe the various approaches dealing with the issue of collapse: collapsosophy to designate the philosophical approach, collapsopraxis to designate the ideology inspired by this study, and collapsonauts to designate people living with this idea in mind.

Distinction from eschatology

Unlike traditional eschatological thinking, collapsology is based on data and concepts from contemporary scientific research, primarily human understanding of climate change and ecological overshoot as caused by human economic and geopolitical systems. It does not endorse the idea of a cosmic, apocalyptic "end of the world", but instead hypothesizes the end of the current human world, the "thermo-industrial civilization".

This distinction is further stressed by historian Eric H. Cline, who points out that while the whole world has obviously not ended, civilizations have collapsed over the course of history, which makes the statement that "prophets have always predicted doom and been wrong" inapplicable to societal collapse.

Scientific basis

As early as 1972, The Limits to Growth, a report produced by MIT researchers, warned of the risks of exponential demographic and economic growth on a planet with limited resources.

As a systemic approach, collapsology is based on prospective studies such as The Limits to Growth, but also on the state of global and regional trends in the environmental, social and economic fields (such as the IPCC, IPBES or Global Environment Outlook (GEO) reports periodically published by the Early Warning and Assessment Division of the UNEP, etc.) and numerous scientific works as well as various studies, such as "A safe operating space for humanity" and "Approaching a state shift in Earth's biosphere", published in Nature in 2009 and 2012, "The trajectory of the Anthropocene: The Great Acceleration", published in 2015 in The Anthropocene Review, and "Trajectories of the Earth System in the Anthropocene", published in 2018 in the Proceedings of the National Academy of Sciences of the United States of America. There is evidence to support the importance of collective processing of the emotional aspects of contemplating societal collapse, and the inherent adaptiveness of these emotional experiences.

History

Precursors

Even if this neologism only appeared in 2015 and concerns the study of the collapse of industrial civilization, the study of the collapse of societies is older and is probably a concern of every civilization. Among the works on this theme (in a broad sense) one can mention those of Berossus (278 B.C.), Pliny the Younger (79 AD), Ibn Khaldun (1375), Montesquieu (1734), Thomas Robert Malthus (1766–1834), Edward Gibbon (1776), Georges Cuvier, (1821), Élisée Reclus (1905), Oswald Spengler (1918), Arnold Toynbee (1939), Günther Anders (1956), Samuel Noah Kramer (1956), Leopold Kohr (1957), Rachel Carson (1962), Paul Ehrlich (1969), Nicholas Georgescu-Roegen (1971), Donella Meadows, Dennis Meadows & Jørgen Randers (1972), René Dumont (1973), Hans Jonas (1979), Joseph Tainter (1988), Al Gore (1992), Hubert Reeves (2003), Richard Posner (2004), Jared Diamond (2005), Niall Ferguson (2013).

Arnold J. Toynbee

In his monumental (initially published in twelve volumes) and highly controversial work of contemporary historiography entitled A Study of History (1972), Arnold J. Toynbee (1889–1975) deals with the genesis of civilizations (chapter 2), their growth (chapter 3), their decline (chapter 4), and their disintegration (chapter 5). According to him, the mortality of civilizations is trivial evidence for the historian, as is the fact that they follow one another over a long period of time.

Joseph Tainter

In his book The Collapse of Complex Societies, the anthropologist and historian Joseph Tainter (born 1949) studies the collapse of various civilizations, including that of the Roman Empire, in terms of network theory, energy economics and complexity theory. For Tainter, an increasingly complex society eventually collapses because of the ever-increasing difficulty in solving its problems.

Jared Diamond

The American geographer, evolutionary biologist, and physiologist Jared Diamond (born 1937) had already evoked the theme of civilizational collapse in his 2005 book Collapse: How Societies Choose to Fail or Succeed. Relying on historical cases, notably the Rapa Nui civilization, the Vikings, and the Maya civilization, Diamond argues that humanity collectively faces, on a much larger scale, many of the same issues those civilizations did, with possibly catastrophic near-future consequences for many of the world's populations. The book, which approached civilizational collapse from archaeological, ecological, and biogeographical perspectives on ancient civilizations, resonated beyond the United States despite some criticism. Proponents of catastrophism who identify as "enlightened catastrophists" draw on Diamond's work, helping expand the relational ecology network, whose members believe that humanity is heading toward disaster.

Modern collapsologists

Since the invention of the term collapsology, many French personalities have gravitated in or around the collapsologist sphere. Not all share the same vision of civilizational collapse, and some even reject the term "collapsologist", but all agree that contemporary industrial civilization, and the biosphere as a whole, are on the verge of a global crisis of unprecedented proportions. According to them, the process is already under way, and all that remains possible is to try to reduce its devastating effects in the near future. The leaders of the movement are Yves Cochet and Agnès Sinaï of the Momentum Institute (a think tank exploring the causes of environmental and societal collapse of the thermo-industrial civilization and possible actions to adapt to it), along with Pablo Servigne and Raphaël Stevens, authors of the essay How Everything Can Collapse: A Manual for Our Times.

Beyond the French collapsologists mentioned above, one can mention: Aurélien Barrau (astrophysicist), Philippe Bihouix (engineer, low-tech developer), Dominique Bourg (philosopher), Valérie Cabanes (lawyer, seeking recognition of the crime of ecocide by the international criminal court), Jean-Marc Jancovici (energy-climate specialist), and Paul Jorion (anthropologist, sociologist).

In 2020 the French humanities and social science website Cairn.info published a dossier on collapsology titled The Age of Catastrophe, with contributions from historian François Hartog, economist Emmanuel Hache, philosopher Pierre Charbonnier, art historian Romain Noël, geoscientist Gabriele Salerno, and American philosopher Eugene Thacker.

Although the term remains little known in the English-speaking world, many publications deal with the same topic, for example David Wallace-Wells's 2017 article "The Uninhabitable Earth" and his 2019 bestselling book of the same name, arguably a mass-market work of collapsology that never uses the term. The concept is now gradually spreading on general and scientific English-speaking social networks. In his book Anti-Tech Revolution: Why and How, Ted Kaczynski also warned of the threat of catastrophic societal collapse.

Many-worlds interpretation

From Wikipedia, the free encyclopedia
The quantum-mechanical "Schrödinger's cat" paradox according to the many-worlds interpretation. In this interpretation, every quantum event is a branch point; the cat is both alive and dead, even after the box is opened, but the "alive" and "dead" cats are in different branches of the multiverse, both of which are equally real, but which do not interact with each other.

The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wave function collapse. This implies that all possible outcomes of quantum measurements are physically realized in different "worlds". The evolution of reality as a whole in MWI is rigidly deterministic and dynamically local. Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957. Bryce DeWitt popularized the formulation and named it many-worlds in the 1970s.

In modern versions of many-worlds, the subjective appearance of wave function collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s. MWI is considered a mainstream interpretation of quantum mechanics, along with the other decoherence interpretations, the Copenhagen interpretation, and hidden variable theories such as Bohmian mechanics.

In the many-worlds interpretation, the universal wavefunction evolves unitarily without collapse. Interactions lead to decoherence, producing dynamically independent components of the wavefunction that correspond to different macroscopic outcomes. These components are sometimes called "worlds", though they are emergent, approximate, and not fundamental entities. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner's friend, the Einstein–Podolsky–Rosen (EPR) paradox, and Schrödinger's cat, since the universal wavefunction contains components corresponding to every possible outcome of a quantum event.

Overview of the interpretation

The many-worlds interpretation's key idea is that the linear and unitary dynamics of quantum mechanics applies everywhere and at all times and so describes the whole universe. In particular, it models a measurement as a unitary transformation, a correlation-inducing interaction, between observer and object, without using a collapse postulate, and models observers as ordinary quantum-mechanical systems. This stands in contrast to the Copenhagen interpretation, in which a measurement is a "primitive" concept, not describable by unitary quantum mechanics; using the Copenhagen interpretation the universe is divided into a quantum and a classical domain, and the collapse postulate is central. In MWI, there is no division between classical and quantum: everything is quantum and there is no collapse. MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of an uncountable or undefinable number of increasingly divergent, non-communicating parallel universes or quantum worlds. Sometimes dubbed Everett worlds, each is an internally consistent and actualized alternative history or timeline.

The many-worlds interpretation uses decoherence to explain the measurement process and the emergence of a quasi-classical world. Wojciech H. Zurek, one of decoherence theory's pioneers, said: "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected." Zurek emphasizes that his work does not depend on a particular interpretation.

The many-worlds interpretation shares many similarities with the decoherent histories interpretation, which also uses decoherence to explain the process of measurement or wave function collapse. MWI treats the other histories or worlds as real, since it regards the universal wave function as the "basic physical entity" or "the fundamental entity, obeying at all times a deterministic wave equation". The decoherent histories interpretation, on the other hand, needs only one of the histories (or worlds) to be real.

Several authors, including Everett, John Archibald Wheeler and David Deutsch, call many-worlds a theory or metatheory, rather than just an interpretation. Everett argued that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world." Deutsch dismissed the idea that many-worlds is an "interpretation", saying that to call it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records".

Formulation

In his 1957 doctoral dissertation, Everett proposed that, rather than relying on external observation for analysis of isolated quantum systems, one could mathematically model an object, as well as its observers, as purely physical systems within the mathematical framework developed by Paul Dirac, John von Neumann, and others, discarding altogether the ad hoc mechanism of wave function collapse.

Relative state

Everett's original work introduced the concept of a relative state. Two (or more) subsystems, after a general interaction, become correlated, or as is now said, entangled. Everett noted that such entangled systems can be expressed as a sum of products of states, where the two or more subsystems are each in a state relative to each other. After a measurement or observation, one member of the pair (or triple, etc.) is the measured or observed system (the object), and another member is the measuring apparatus (which may include an observer), having recorded the state of the measured system. Each product of subsystem states in the overall superposition evolves over time independently of other products. Once the subsystems interact, their states have become correlated or entangled and can no longer be considered independent. In Everett's terminology, each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted.

In the example of Schrödinger's cat, after the box is opened, the entangled system is the cat, the poison vial and the observer. One relative triple of states would be the alive cat, the unbroken vial and the observer seeing an alive cat. Another relative triple of states would be the dead cat, the broken vial and the observer seeing a dead cat.
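
Schematically, the post-measurement entangled state has the standard relative-state form (an illustration of the decomposition, not a quotation from Everett):

  \[ |\Psi\rangle \;=\; a\,|\text{alive}\rangle\,|\text{unbroken}\rangle\,|\text{sees alive}\rangle \;+\; b\,|\text{dead}\rangle\,|\text{broken}\rangle\,|\text{sees dead}\rangle , \]

with \( |a|^{2} + |b|^{2} = 1 \). Each product term is one triple of relative states, and under the linear dynamics each term evolves independently of the other.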

In the example of a measurement of a continuous variable (e.g., position q), the object-observer system decomposes into a continuum of pairs of relative states: the object system's relative states become Dirac delta functions, each centered on a particular value of q, with the corresponding observer relative state representing an observer having recorded that value of q. The states of each pair of relative states are, post measurement, correlated with each other.

In Everett's scheme, there is no collapse; instead, the Schrödinger equation, or its relativistic analog in quantum field theory, holds all the time, everywhere. An observation or measurement is modeled by applying the wave equation to the entire system, comprising the object being observed and the observer. One consequence is that every observation causes the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches.

Thus the process of measurement or observation, or any correlation-inducing interaction, splits the system into sets of relative states, where each set of relative states, forming a branch of the universal wave function, is consistent within itself, and all future measurements (including by multiple observers) will confirm this consistency.

Renamed many-worlds

Everett had referred to the combined observer–object system as split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a branching tree, where each branch is a set of all the states relative to each other. Bryce DeWitt popularized Everett's work with a series of publications calling it the Many Worlds Interpretation. Focusing on the splitting process, DeWitt introduced the term "world" to describe a single branch of that tree, which is a consistent history. All observations or measurements within any branch are consistent within themselves.

Since many observation-like events have happened and are constantly happening, Everett's model implies that there are an enormous and growing number of simultaneously existing states or "worlds".

Properties

MWI removes the observer-dependent role in the quantum measurement process by replacing wave function collapse with the established mechanism of quantum decoherence. As the observer's role lies at the heart of all "quantum paradoxes" such as the EPR paradox and von Neumann's "boundary problem", this provides a clearer and easier approach to their resolution.

Since the Copenhagen interpretation requires the existence of a classical domain beyond the one described by quantum mechanics, it has been criticized as inadequate for the study of cosmology. While there is no evidence that Everett was inspired by issues of cosmology, he developed his theory with the explicit goal of allowing quantum mechanics to be applied to the universe as a whole, hoping to stimulate the discovery of new phenomena. This hope has been realized in the later development of quantum cosmology.

MWI is a realist, deterministic and dynamically local theory. It achieves this by removing wave function collapse, which is indeterministic and nonlocal, from the deterministic and local equations of quantum theory.

MWI (like other, broader multiverse theories) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe.

MWI depends crucially on the linearity of quantum mechanics, which underpins the superposition principle. If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds is invalid. All quantum field theories are linear and compatible with the MWI, a point Everett emphasized as a motivation for the MWI. While quantum gravity or string theory may be non-linear in this respect, there is as yet no evidence of this.

Weingarten and Taylor & McCulloch have made separate proposals for how to define wavefunction branches in terms of quantum circuit complexity.

Alternative to wavefunction collapse

As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) pass through the double slit, a calculation assuming wavelike behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves.

Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of "collapse" in which an indeterminate quantum system would probabilistically collapse onto, or select, just one determinate outcome to "explain" this phenomenon of observation. Wave function collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable.

Everett's PhD work provided such an interpretation. He argued that for a composite system—such as a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle)—the claim that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled: we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wave function collapse) the notion of a relativity of states.

Everett noticed that the unitary, deterministic dynamics alone entailed that after an observation is made each element of the quantum superposition of the combined subject–object wave function contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wave function collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wave function's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory: that the theory should define what is observed, not for the observables to define the theory.) Since the wave function appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wave function collapse from the theory.

Testability

In 1985, David Deutsch proposed a variant of the Wigner's friend thought experiment as a test of many-worlds versus the Copenhagen interpretation. It consists of an experimenter (Wigner's friend) making a measurement on a quantum system in an isolated laboratory, and another experimenter (Wigner) who would make a measurement on the first one. According to the many-worlds theory, the first experimenter would end up in a macroscopic superposition of seeing one result of the measurement in one branch, and another result in another branch. The second experimenter could then interfere these two branches in order to test whether the first experimenter is in fact in a macroscopic superposition or has collapsed into a single branch, as the Copenhagen interpretation predicts. Since then Lockwood, Vaidman, and others have made similar proposals, which require placing macroscopic objects in a coherent superposition and interfering them, a task currently beyond experimental capability.

Probability and the Born rule

Since the many-worlds interpretation's inception, physicists have been puzzled about the role of probability in it. As put by Wallace, there are two facets to the question: the incoherence problem, which asks why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and the quantitative problem, which asks why the probabilities should be given by the Born rule.
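
For reference, the rule in question is the standard textbook statement, which any Everettian account must recover: if a system is in the superposition

  \[ |\psi\rangle = \sum_i c_i\,|i\rangle \]

over orthonormal outcome states \( |i\rangle \), then a measurement yields outcome i with probability

  \[ P(i) = |c_i|^{2} = |\langle i|\psi\rangle|^{2}. \]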

Everett tried to answer these questions in the paper that introduced many-worlds. To address the incoherence problem, he argued that an observer who makes a sequence of measurements on a quantum system will in general have an apparently random sequence of results in their memory, which justifies the use of probabilities to describe the measurement process. To address the quantitative problem, Everett proposed a derivation of the Born rule based on the properties that a measure on the branches of the wave function should have. His derivation has been criticized as relying on unmotivated assumptions. Since then several other derivations of the Born rule in the many-worlds framework have been proposed. There is no consensus on whether this has been successful.

Frequentism

DeWitt and Graham and Farhi et al., among others, have proposed derivations of the Born rule based on a frequentist interpretation of probability. They try to show that in the limit of infinitely many measurements, no worlds would have relative frequencies that didn't match the probabilities given by the Born rule, but these derivations have been shown to be mathematically incorrect.

Decision theory

A decision-theoretic derivation of the Born rule was produced by David Deutsch (1999) and refined by Wallace and Saunders. They consider an agent who takes part in a quantum gamble: the agent makes a measurement on a quantum system, branches as a consequence, and each of the agent's future selves receives a reward that depends on the measurement result. The agent uses decision theory to evaluate the price they would pay to take part in such a gamble, and concludes that the price is given by the utility of the rewards weighted according to the Born rule. Some reviews have been positive, although these arguments remain highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes. For example, a New Scientist story on a 2007 conference about Everettian interpretations quoted physicist Andy Albrecht as saying, "This work will go down as one of the most important developments in the history of science." In contrast, the philosopher Huw Price, also attending the conference, found the Deutsch–Wallace–Saunders approach fundamentally flawed.

Symmetries and invariance

In 2005, Zurek produced a derivation of the Born rule based on the symmetries of entangled states; Schlosshauer and Fine argue that Zurek's derivation is not rigorous, as it does not define what probability is and has several unstated assumptions about how it should behave.

In 2016, Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman, proposed a similar approach based on self-locating uncertainty. In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. The Sebens–Carroll approach has been criticized by Adrian Kent, and Vaidman does not find it satisfactory.

Branch counting

In 2021, Simon Saunders produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule.
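
A toy illustration of the equal-norm decomposition (an example chosen here for illustration, not Saunders' own): the state \( \sqrt{2/3}\,|0\rangle + \sqrt{1/3}\,|1\rangle \) can be rewritten over fine-grained orthogonal states \( |e_1\rangle, |e_2\rangle, |e_3\rangle \) as

  \[ |\psi\rangle \;=\; \tfrac{1}{\sqrt{3}}\,|0\rangle|e_1\rangle \;+\; \tfrac{1}{\sqrt{3}}\,|0\rangle|e_2\rangle \;+\; \tfrac{1}{\sqrt{3}}\,|1\rangle|e_3\rangle , \]

three branches of equal 2-norm. Counting branches then gives P(0) = 2/3 and P(1) = 1/3, in accordance with the Born rule.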

Preferred basis problem

As originally formulated by Everett and DeWitt, the many-worlds interpretation had a privileged role for measurements: they determined which basis of a quantum system would give rise to the eponymous worlds. Without this the theory was ambiguous, as a quantum state can equally well be described (e.g.) as having a well-defined position or as being a superposition of two delocalized states. The assumption is that the preferred basis to use is the one which assigns a unique measurement outcome to each world. This special role for measurements is problematic for the theory, as it contradicts Everett and DeWitt's goal of having a reductionist theory and undermines their criticism of the ill-defined measurement postulate of the Copenhagen interpretation. This is known today as the preferred basis problem.

The preferred basis problem has been solved, according to Saunders and Wallace, among others, by incorporating decoherence into the many-worlds theory. In this approach, the preferred basis does not have to be postulated, but rather is identified as the basis stable under environmental decoherence. In this way measurements no longer play a special role; rather, any interaction that causes decoherence causes the world to split. Since decoherence is never complete, there will always remain some infinitesimal overlap between two worlds, making it arbitrary whether a pair of worlds has split or not. Wallace argues that this is not problematic: it only shows that worlds are not a part of the fundamental ontology, but rather of the emergent ontology, where these approximate, effective descriptions are routine in the physical sciences. Since in this approach the worlds are derived, it follows that they must be present in any other interpretation of quantum mechanics that does not have a collapse mechanism, such as Bohmian mechanics.

This approach to deriving the preferred basis has been criticized as creating circularity with derivations of probability in the many-worlds interpretation, as decoherence theory depends on probability and probability depends on the ontology derived from decoherence. Wallace contends that decoherence theory depends not on probability but only on the notion that one is allowed to do approximations in physics.

History

MWI originated in Everett's Princeton University PhD thesis, "The Theory of the Universal Wave Function", developed under his thesis advisor John Archibald Wheeler. A shorter summary was published in 1957 under the title "Relative State Formulation of Quantum Mechanics"; Wheeler contributed the term "relative state", while Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement. The phrase "many-worlds" is due to Bryce DeWitt, who was responsible for the wider popularization of Everett's theory, which had been largely ignored for a decade after its publication in 1957.

Everett's proposal was not without precedent. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". According to David Deutsch, this is the earliest known reference to many-worlds; Jeffrey A. Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger. Schrödinger's writings from the period also contain elements resembling the modal interpretation originated by Bas van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wave function as physical and treating it as information became interchangeable.

Leon Cooper and Deborah Van Vechten developed a very similar approach before reading Everett's work. H. Dieter Zeh also came to the same conclusions as Everett before reading his work, and then built a new theory of quantum decoherence based on these ideas.

According to people who knew him, Everett believed in the literal reality of the other quantum worlds. His son and wife reported that he "never wavered in his belief over his many-worlds theory". In their detailed review of Everett's work, Osnaghi, Freitas, and Freire Jr. note that Everett consistently used quotes around "real" to indicate a meaning within scientific practice.

Reception

MWI's initial reception was overwhelmingly negative, in the sense that it was ignored, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory. Everett had already left academia in 1957, never to return, and in 1980, Wheeler disavowed the theory.

Support

One of the strongest longtime advocates of MWI is David Deutsch. According to him, the single-photon interference pattern observed in the double-slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single-photon interference experiment is indistinguishable from the multiple-photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing, Deutsch suggested that the parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". He also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin.
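
As a concrete illustration of the speed-up Deutsch had in mind, here is a minimal NumPy simulation of the Deutsch algorithm in its modern, deterministic form (a sketch of ours, not code from Deutsch's paper): it decides whether a one-bit function f is constant or balanced with a single query to f, whereas any classical procedure needs two.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def oracle(f):
    """Build U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced, one query."""
    state = np.kron([1.0, 0.0], [0.0, 1.0])  # prepare |0>|1>
    state = np.kron(H, H) @ state            # Hadamard on both qubits
    state = oracle(f) @ state                # the single oracle query
    state = np.kron(H, I) @ state            # Hadamard on the input qubit
    p0 = state[0] ** 2 + state[1] ** 2       # P(first qubit measures 0)
    return "constant" if np.isclose(p0, 1.0) else "balanced"

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    print(deutsch(f))  # -> constant, constant, balanced, balanced
```

On the many-worlds reading Deutsch favors, the single oracle call evaluates f on both inputs in parallel, and the final Hadamard interferes the branches so that the measurement reveals the global property f(0) XOR f(1).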

Equivocal

Philosophers of science James Ladyman and Don Ross say that MWI could be true, but do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy.

Rejection

Some scientists consider some aspects of MWI to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them.

Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann worked toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists find MWI too extreme, though it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse".

Roger Penrose argues that the idea is flawed because it is based on an oversimplified version of quantum mechanics that does not account for gravity. In his view, applying conventional quantum mechanics to the universe implies the MWI, but the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics. According to Penrose, "the rules must change when gravity is involved". He further asserts that gravity helps anchor reality and "blurry" events have only one allowable outcome: "electrons, atoms, molecules, etc., are so minute that they require almost no amount of energy to maintain their gravity, and therefore their overlapping states. They can stay in that state forever, as described in standard quantum theory". On the other hand, "in the case of large objects, the duplicate states disappear in an instant due to the fact that these objects create a large gravitational field".

Philosopher of science Robert P. Crease says that MWI is "one of the most implausible and unrealistic ideas in the history of science" because it means that everything conceivable happens. Science writer Philip Ball calls MWI's implications fantasies, since "beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of 'just supposing'".

Theoretical physicist Gerard 't Hooft also dismisses the idea: "I do not believe that we have to live with the many-worlds interpretation. Indeed, it would be a stupendous number of parallel worlds, which are only there because physicists couldn't decide which of them is real."

Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title "Everett's interpretation and other bizarre theories". Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated.

Polls

A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".

Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."

In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory", Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people ... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"

A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing at the University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.

A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen; the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.

Speculative implications

DeWitt has said that Everett, Wheeler, and Graham "do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down." Tegmark affirmed that absurd or highly unlikely events are rare but inevitable under MWI: "Things inconsistent with the laws of physics will never happen—everything else will ... it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely." David Deutsch speculates in his book The Beginning of Infinity that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.

According to Ladyman and Ross, many seemingly physically plausible but unrealized possibilities, such as those discussed in other scientific fields, generally have no counterparts in other branches, because they are in fact incompatible with the universal wave function. According to Carroll, human decision-making, contrary to common misconceptions, is best thought of as a classical process, not a quantum one, because it works on the level of neurochemistry rather than fundamental particles. Human decisions do not cause the world to branch into equally realized outcomes; even for subjectively difficult decisions, the "weight" of realized outcomes is almost entirely concentrated in a single branch.

Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics that can purportedly distinguish between the Copenhagen interpretation of quantum mechanics and the many-worlds interpretation by a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. Most experts believe the experiment would not work in the real world, because the world with the surviving experimenter has a lower "measure" than the world before the experiment, making it less likely that the experimenter will experience their survival.
