Sunday, April 5, 2026

Doomsday Clock

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Doomsday_Clock
The Doomsday Clock pictured at its setting of "85 seconds to midnight", last changed on January 27, 2026
Frequency: Yearly
Inaugurated: June 1947
Most recent: January 27, 2026
Organized by: Bulletin of the Atomic Scientists
Website: thebulletin.org/doomsday-clock

The Doomsday Clock is a symbol that represents the estimated likelihood of a human-made global catastrophe, in the opinion of the nonprofit organization Bulletin of the Atomic Scientists.

Maintained since 1947, the Clock is a proxy mechanism for threats to humanity from unchecked scientific and technological advances: A hypothetical global catastrophe is represented by midnight on the Clock, with the Bulletin's opinion on how close the world is to "zero" represented by a certain number of minutes or seconds to midnight. This is assessed in January of each year. The main factors influencing the Clock are nuclear warfare, climate change, and artificial intelligence. The Bulletin's Science and Security Board monitors new developments in the life sciences and technology that could inflict irrevocable harm to humanity.

The Clock's original setting in 1947 was seven minutes to midnight. It has since been set backward eight times and forward 19 times. The farthest time from midnight was 17 minutes in 1991, and the closest is 85 seconds in 2026.

The Clock was moved to 150 seconds (2 minutes, 30 seconds) in 2017, then forward to two minutes to midnight in 2018, and left unchanged in 2019. It was moved forward to 100 seconds (1 minute, 40 seconds) in 2020, 90 seconds (1 minute, 30 seconds) in 2023, 89 seconds (1 minute, 29 seconds) in 2025, and 85 seconds (1 minute, 25 seconds) in 2026.

History

Cover of the 1947 Bulletin of the Atomic Scientists issue, featuring the Doomsday Clock at "seven minutes to midnight"

The Doomsday Clock's origin can be traced to the international group of researchers called the Chicago Atomic Scientists, who had participated in the Manhattan Project. After the atomic bombings of Hiroshima and Nagasaki, they began publishing a mimeographed newsletter and then the magazine Bulletin of the Atomic Scientists, which, since its inception, has depicted the Clock on every cover. The Clock was first represented in 1947, when Bulletin co-founder Hyman Goldsmith asked artist Martyl Langsdorf (wife of Manhattan Project research associate and Szilárd petition signatory Alexander Langsdorf Jr.) to design a cover for the magazine's June 1947 issue. As Eugene Rabinowitch, another co-founder of the Bulletin, explained later:

The Bulletin's Clock is not a gauge to register the ups and downs of the international power struggle; it is intended to reflect basic changes in the level of continuous danger in which mankind lives in the nuclear age...

Langsdorf chose a clock to reflect the urgency of the problem: like a countdown, the Clock suggests that destruction will naturally occur unless someone takes action to stop it.

In January 2007, designer Michael Bierut, who was on the Bulletin's Governing Board, redesigned the Doomsday Clock to give it a more modern feel. In 2009, the Bulletin ceased its print edition and became one of the first print publications in the U.S. to become entirely digital; the Clock is now found as part of the logo on the Bulletin's website. Information about the Doomsday Clock Symposium, a timeline of the Clock's settings, and multimedia shows about the Clock's history and culture can also be found on the Bulletin's website.

The 5th Doomsday Clock Symposium was held on November 14, 2013, in Washington, D.C.; it was a day-long event that was open to the public and featured panelists discussing various issues on the topic "Communicating Catastrophe". There was also an evening event at the Hirshhorn Museum and Sculpture Garden in conjunction with the Hirshhorn's then-current exhibit, "Damage Control: Art and Destruction Since 1950". The panel discussions, held at the American Association for the Advancement of Science, were streamed live from the Bulletin's website and can still be viewed there. Reflecting international events dangerous to humankind, the Clock has been adjusted 27 times since its inception in 1947, when it was set to "seven minutes to midnight".

The Doomsday Clock has become a universally recognized metaphor according to The Two-Way, an NPR blog. According to the Bulletin, the Clock attracts more daily visitors to the Bulletin's site than any other feature.

Basis for settings

"Midnight" has a deeper meaning besides the constant threat of war. There are various elements taken into consideration when the scientists from the Bulletin decide what Midnight and "global catastrophe" really mean in a particular year. They might include "politics, energy, weapons, diplomacy, and climate science"; potential sources of threat include nuclear threats, climate change, bioterrorism, and artificial intelligence. Members of the board judge Midnight by discussing how close they think humanity is to the end of civilization. In 1947, at the beginning of the Cold War, the Clock was started at seven minutes to midnight.

Fluctuations and threats

Before January 2020, the two tied-for-lowest points for the Doomsday Clock were in 1953 (when the Clock was set to two minutes until midnight, after the U.S. and the Soviet Union began testing hydrogen bombs) and in 2018, following the failure of world leaders to address tensions relating to nuclear weapons and climate change issues. In other years, the Clock's time has fluctuated from 17 minutes in 1991 to 2 minutes 30 seconds in 2017. Discussing the change in 2017, Lawrence Krauss, one of the scientists from the Bulletin, warned that political leaders must make decisions based on facts, and those facts "must be taken into account if the future of humanity is to be preserved". In an announcement about the status of the Clock, the Bulletin went as far as to call for action from "wise" public officials and "wise" citizens to attempt to steer human life away from catastrophe while humans still can.

On January 24, 2018, scientists moved the clock to two minutes to midnight, based on threats greatest in the nuclear realm. The scientists said, of recent moves by North Korea under Kim Jong-un and the administration of Donald Trump in the U.S.: "Hyperbolic rhetoric and provocative actions by both sides have increased the possibility of nuclear war by accident or miscalculation".

The clock was left unchanged in 2019 due to the twin threats of nuclear weapons and climate change, and the problem of those threats being "exacerbated this past year by the increased use of information warfare to undermine democracy around the world, amplifying risk from these and other threats and putting the future of civilization in extraordinary danger".

On January 23, 2020, the Clock was moved to 100 seconds (1 minute, 40 seconds) before midnight. The Bulletin's executive chairman, Jerry Brown, said "the dangerous rivalry and hostility among the superpowers increases the likelihood of nuclear blunder... Climate change just compounds the crisis". The "100 seconds to midnight" setting remained unchanged in 2021 and 2022.

On January 24, 2023, the Clock was moved to 90 seconds (1 minute, 30 seconds) before midnight, which was largely attributed to the risk of nuclear escalation that arose from the Russian invasion of Ukraine. Other reasons cited included climate change, biological threats such as COVID-19, and risks associated with disinformation and disruptive technologies.

On January 28, 2025, the Clock was moved to 89 seconds (1 minute, 29 seconds) before midnight. In addition to last year's concerns, the increased usage of artificial intelligence in both the battlefield and social media was noted as a new factor.

On January 27, 2026, the Clock was moved to 85 seconds (1 minute, 25 seconds) before midnight, the closest it has ever been set to midnight since its inception in 1947. According to the Bulletin, the move reflects a "failure of leadership": no significant action was taken that would have justified pushing the Clock back. "It is a hard truth, but this is our reality," said Alexandra Bell, the Bulletin's president and CEO.

Criticism

In 2016, Anders Sandberg of the Future of Humanity Institute stated that the "grab bag of threats" currently mixed together by the Clock can induce paralysis. People may be more likely to succeed at smaller, incremental challenges; for example, taking steps to prevent the accidental detonation of nuclear weapons was a small but significant step toward avoiding nuclear war. Alex Barasch in Slate argued that "putting humanity on a permanent, blanket high-alert isn't helpful when it comes to policy or science" and criticized the Bulletin for neither explaining nor attempting to quantify its methodology.

Cognitive psychologist Steven Pinker harshly criticized the Doomsday Clock as a political stunt, pointing to the words of its founder that its purpose was "to preserve civilization by scaring men into rationality". He stated that it is inconsistent and not based on any objective indicators of security, using as an example its being farther from midnight in 1962 during the Cuban Missile Crisis than in the "far calmer 2007". He argued it was another example of humanity's tendency toward historical pessimism, and compared it to other predictions of self-destruction that went unfulfilled.

Writing for the New Statesman, British journalist James Ball questioned the Clock's purpose and noted the Bulletin's lack of objective methodology for setting the Clock. Ball argued that the organization is no different from other doomsday cults preaching the end of the world, except that the Bulletin is secular rather than religious.

Conservative media outlets have often criticized the Bulletin and the Doomsday Clock. Keith Payne wrote in 2010 in National Review that the Clock overestimated the effects of "developments in the areas of nuclear testing and formal arms control". In 2018, Tristin Hopper in the National Post acknowledged that "there are plenty of things to worry about regarding climate change", but argued that climate change is not in the same league as total nuclear destruction. In addition, some critics accuse the Bulletin of pushing a political agenda.

Timeline

Doomsday Clock graph, 1947–2023. The lower points on the graph represent a higher probability of technologically or environmentally induced catastrophe, and the higher points represent a lower probability, in the opinion of the Bulletin.
Timeline of the Doomsday Clock
Year Minutes to midnight Time (24-h) Change (minutes) Reason
1947 7 23:53 0 The initial setting of the Doomsday Clock.
1949 3 23:57 −4 The Soviet Union tests its first atomic bomb, the RDS-1, starting the nuclear arms race.
1953 2 23:58 −1 The United States tests its first thermonuclear device in November 1952 as part of Operation Ivy, before the Soviet Union follows suit with the Joe 4 test in August. This remained the clock's closest approach to midnight (tied in 2018) until 2020.
1960 7 23:53 +5 In response to a perception of increased scientific cooperation and public understanding of the dangers of nuclear weapons (as well as political actions taken to avoid "massive retaliation"), the United States and Soviet Union cooperate and avoid direct confrontation in regional conflicts such as the 1956 Suez Crisis, the 1958 Second Taiwan Strait Crisis, and the 1958 Lebanon crisis. Scientists from various countries help establish the International Geophysical Year, a series of coordinated, worldwide scientific observations between nations allied with both the United States and the Soviet Union, and the Pugwash Conferences on Science and World Affairs, which allow Soviet and American scientists to interact.
1963 12 23:48 +5 The United States and the Soviet Union sign the Partial Test Ban Treaty, limiting atmospheric nuclear testing.
1968 7 23:53 −5 The involvement of the United States in the Vietnam War intensifies, the Indo-Pakistani War of 1965 takes place, and the Six-Day War occurs in 1967. France and China, two nations which have not signed the Partial Test Ban Treaty, acquire and test nuclear weapons (the 1960 Gerboise Bleue and the 1964 596, respectively) to assert themselves as global players in the nuclear arms race.
1969 10 23:50 +3 About 100 nations sign the Nuclear Non-Proliferation Treaty, and the United States also ratifies it.
1972 12 23:48 +2 The United States and the Soviet Union sign the first Strategic Arms Limitation Treaty (SALT I) and the Anti-Ballistic Missile (ABM) Treaty.
1974 9 23:51 −3 India tests a nuclear device (Smiling Buddha), and SALT II talks stall. Both the United States and the Soviet Union modernize multiple independently targetable reentry vehicles (MIRVs).
1980 7 23:53 −2 Unforeseeable end to deadlock in American–Soviet talks as the Soviet–Afghan War begins. As a result of the war, the U.S. Senate refuses to ratify the SALT II agreement.
1981 4 23:56 −3 The Soviet war in Afghanistan toughens the U.S.' nuclear posture. U.S. President Jimmy Carter withdraws the United States from the 1980 Summer Olympic Games in Moscow. The Carter administration considers ways in which the United States could win a nuclear war. Ronald Reagan becomes President of the United States, scraps further arms reduction talks with the Soviet Union, and argues that the only way to end the Cold War is to win it. Tensions between the United States and the Soviet Union contribute to the danger of nuclear annihilation as they each deploy intermediate-range missiles in Europe. The adjustment also accounts for the Iran hostage crisis, the Iran–Iraq War, China's atmospheric nuclear warhead test, the declaration of martial law in Poland, apartheid in South Africa, and human rights abuses across the world.[35][36]
1984 3 23:57 −1 Further escalation of the tensions between the United States and the Soviet Union, with the ongoing Soviet–Afghan War intensifying the Cold War. U.S. Pershing II medium-range ballistic missile and cruise missiles are deployed in Western Europe.[35] Ronald Reagan pushes to win the Cold War by intensifying the arms race between the superpowers. The Soviet Union and its allies (except Romania) boycott the 1984 Olympic Games in Los Angeles, as a response to the U.S.-led boycott in 1980.
1988 6 23:54 +3 In December 1987, the United States and the Soviet Union sign the Intermediate-Range Nuclear Forces Treaty, to eliminate intermediate-range nuclear missiles, and their relations improve.
1990 10 23:50 +4 The fall of the Berlin Wall and the Iron Curtain, along with the reunification of Germany, indicates that the Cold War is nearing its end.
1991 17 23:43 +7 The United States and Soviet Union sign the first Strategic Arms Reduction Treaty (START I), the US announces the removal of many tactical nuclear weapons in September 1991, and the Soviet Union takes similar steps, as well as announcing the complete cessation of all nuclear testing in October 1991. The Bulletin editorial, published November 26, 1991, announces that "the 40-year-long East-West nuclear arms race is over." One month after the Bulletin made this clock adjustment, the Soviet Union dissolves on December 26, 1991. This is the farthest from midnight the Clock has been since its inception.
1995 14 23:46 −3 Global military spending continues at Cold War levels amid concerns about post-Soviet nuclear proliferation of weapons and brainpower.
1998 9 23:51 −5 Both India (Pokhran-II) and Pakistan (Chagai-I) test nuclear weapons in a tit-for-tat show of aggression; the United States and Russia run into difficulties in further reducing stockpiles.
2002 7 23:53 −2 Little progress on global nuclear disarmament. United States rejects a series of arms control treaties and announces its intentions to withdraw from the Anti-Ballistic Missile Treaty, amid concerns about the possibility of a nuclear terrorist attack due to the amount of weapon-grade nuclear materials that are unsecured and unaccounted for worldwide.
2007 5 23:55 −2 North Korea tests a nuclear weapon in October 2006, Iran's nuclear ambitions, a renewed American emphasis on the military utility of nuclear weapons, the failure to adequately secure nuclear materials, and the continued presence of some 26,000 nuclear weapons in the United States and Russia. After assessing the dangers posed to civilization, climate change was added to the prospect of nuclear annihilation as the greatest threats to humanity.
2010 6 23:54 +1 Worldwide cooperation to reduce nuclear arsenals and limit effect of climate change. The New START agreement is ratified by both the United States and Russia, and more negotiations for further reductions in the American and Russian nuclear arsenal are already planned. The 2009 United Nations Climate Change Conference in Copenhagen results in the developing and industrialized countries agreeing to take responsibility for carbon emissions and to limit global temperature rise to 2 degrees Celsius.
2012 5 23:55 −1 Lack of global political action to address global climate change, nuclear weapons stockpiles, the potential for regional nuclear conflict, and nuclear power safety.
2015 3 23:57 −2 Concerns amid continued lack of global political action to address global climate change, the modernization of nuclear weapons in the United States and Russia, and the problem of nuclear waste.
2017 2+1/2 23:57:30 −1/2 (−30 s) United States President Donald Trump's comments over nuclear weapons, the threat of a renewed arms race between the U.S. and Russia, and the expressed disbelief in the scientific consensus over climate change by the Trump administration.
2018 2 23:58 −1/2 (−30 s) Failure of world leaders to deal with looming threats of nuclear war and climate change. At the time, this tied 1953 as the clock's closest approach to midnight. In 2019, the Bulletin reaffirmed the "two minutes to midnight" time, citing continuing climate change and the Trump administration's abandonment of U.S. efforts to lead the world toward decarbonization; U.S. withdrawal from the Paris Agreement, the Joint Comprehensive Plan of Action, and the Intermediate-Range Nuclear Forces Treaty; U.S. and Russian nuclear modernization efforts; information warfare threats and other dangers from "disruptive technologies" such as synthetic biology, artificial intelligence, and cyberwarfare.
2020 1+2/3 (100 s) 23:58:20 −1/3 (−20 s) Failure of world leaders to deal with the increased threats of nuclear war, such as the end of the Intermediate-Range Nuclear Forces Treaty (INF) between the United States and Russia as well as increased tensions between the U.S. and Iran, along with the continued neglect of climate change. Announced in units of seconds instead of minutes, this was the clock's closest approach to midnight, exceeding that of 1953 and 2018. The Bulletin concluded by stating that the current issues causing the adjustment are "the most dangerous situation that humanity has ever faced". In the annual statements for 2021 and 2022, issued in January of each year, the Bulletin left the "100 seconds to midnight" setting unchanged.
2023 1+1/2 (90 s) 23:58:30 −1/6 (−10 s) Due largely, but not exclusively, to the Russian invasion of Ukraine and the increased risk of nuclear escalation stemming from the conflict. Russia suspended its participation in the last remaining nuclear weapons treaty between it and the United States, New START. Russia also brought its war to the Chernobyl and Zaporizhzhia nuclear reactor sites, violating international protocols and risking widespread release of radioactive materials. North Korea resumed its nuclear rhetoric, launching an intermediate-range ballistic missile test over Japan in October 2022. Continuing threats posed by the climate crisis and the breakdown of global norms and institutions set up to mitigate risks associated with advancing technologies and biological threats such as COVID-19 also contributed to the time setting. This setting remained unchanged the following year.
2025 1+29/60 (89 s) 23:58:31 −1/60 (−1 s) The continuing Russian invasion of Ukraine and the Middle Eastern crisis, increased nuclear proliferation, effects of climate change, biological threats, and advancing technologies.
2026 1+5/12 (85 s) 23:58:35 −1/15 (−4 s) Russia's continued war in Ukraine, the U.S. and Israeli bombing of Iran, and border clashes between India and Pakistan. Other cited factors include ongoing tensions in Asia, including on the Korean Peninsula, as well as rising tensions in the Western hemisphere, and the expiration of the New START treaty on February 5, 2026. Rising nuclear proliferation, effects of climate change, biological threats, and advancing technologies have also continued. This is the closest to midnight the Clock has been since its inception.
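The "Time (24-h)" column in the table is simple arithmetic on the "minutes to midnight" setting. As an illustrative cross-check (not part of the original article), a short Python sketch converting a "seconds to midnight" setting into its 24-hour reading:

```python
def clock_reading(seconds_to_midnight):
    """Convert a "seconds to midnight" setting into a 24-hour reading."""
    total = 24 * 3600 - seconds_to_midnight  # seconds elapsed since 00:00
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    # Omit the seconds field when the setting is a whole number of minutes.
    return f"{h:02d}:{m:02d}:{s:02d}" if s else f"{h:02d}:{m:02d}"

print(clock_reading(7 * 60))  # 1947 setting -> 23:53
print(clock_reading(100))     # 2020 setting -> 23:58:20
print(clock_reading(85))      # 2026 setting -> 23:58:35
```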

Many-minds interpretation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Many-minds_interpretation

The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer. The concept was first introduced in 1970 by H. Dieter Zeh as a variant of the Hugh Everett interpretation in connection with quantum decoherence, and later (in 1981) explicitly called a many or multi-consciousness interpretation. The name many-minds interpretation was first used by David Albert and Barry Loewer in 1988.

History

Interpretations of quantum mechanics

The various interpretations of quantum mechanics typically involve explaining the mathematical formalism of quantum mechanics or creating a physical picture of the theory. While the mathematical structure has a strong foundation, there is still much debate about the physical and philosophical interpretation of the theory. These interpretations aim to tackle various concepts such as:

  1. Evolution of the state of a quantum system (given by the wavefunction), typically through the use of the Schrödinger equation. This concept is almost universally accepted, and is rarely put to debate.
  2. The measurement problem, which relates to what is called wavefunction collapse – the collapse of a quantum state into a definite measurement (i.e. a specific eigenstate of the wavefunction). The debate on whether this collapse actually occurs is a central problem in interpreting quantum mechanics.
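For reference (not part of the original article), the evolution in point 1 is governed by the time-dependent Schrödinger equation, with Ψ the wavefunction and Ĥ the Hamiltonian of the system:

```latex
i\hbar \frac{\partial}{\partial t}\,\Psi(t) = \hat{H}\,\Psi(t)
```

Because this evolution is linear and deterministic, a superposition fed into it stays a superposition, which is precisely why the collapse in point 2 requires a separate interpretive postulate.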

The standard solution to the measurement problem is the "Orthodox" or "Copenhagen" interpretation, which claims that the wave function collapses as the result of a measurement by an observer or apparatus external to the quantum system. An alternative interpretation, the many-worlds interpretation, was first described by Hugh Everett in 1957 (where it was called the relative state interpretation; the name "many-worlds" was coined by Bryce Seligman DeWitt starting in the 1960s and finalized in the 1970s). His formalism of quantum mechanics denied that a measurement requires a wavefunction collapse, instead suggesting that all that is truly necessary of a measurement is that a quantum connection is formed between the particle, the measuring device, and the observer.

The many-worlds interpretation

In the original relative state formulation, Everett proposed that there is one universal wavefunction that describes the objective reality of the whole universe. He stated that when subsystems interact, the total system becomes a superposition of these subsystems. This includes observers and measurement systems, which become part of one universal state (the wavefunction) that is always described via the Schrödinger Equation (or its relativistic alternative). That is, the states of the subsystems that interacted become "entangled" in such a way that any definition of one must necessarily involve the other. Thus, each subsystem's state can only be described relative to each subsystem with which it interacts (hence the name relative state).

Everett suggested that the universe is actually indeterminate as a whole. For example, consider an observer measuring some particle that starts in an undetermined state, as both spin-up and spin-down, that is, a superposition of both possibilities. When an observer measures that particle's spin, however, it always registers as either up or down. The problem of how to understand this sudden shift from "both up and down" to "either up or down" is called the measurement problem. According to the many-worlds interpretation, the act of measurement forces a "splitting" of the universe into two states, one spin-up and the other spin-down, with two branches extending from those two subsequently independent states. One branch measures up. The other measures down. Looking at the instrument informs the observer which branch he is on, but the system itself is indeterminate at this level and, by logical extension, presumably at any higher level.

The "worlds" in the many-worlds theory are then just the complete measurement histories up to and including the measurement in question, where the splitting happens. These "worlds" each describe a different state of the universal wave function and cannot communicate. There is no collapse of the wavefunction into one state or another; rather, an observer finds itself in the world corresponding to the measurements it has made, and is unaware of the other possibilities, which are equally real.

The many-minds interpretation

The many-minds interpretation of quantum theory is many-worlds with the distinction between worlds constructed at the level of the individual observer. Rather than the worlds branching, it is the observer's mind that branches.

The problem with this interpretation is that it implies the observer must be in a superposition with herself, and that seems strange. In their 1988 paper, Albert and Loewer argued that the mind of an observer cannot be in an indefinite state because an observer must answer the question about which state of a system he has observed with complete certainty. If the observer's mind were in a superposition of states, then it could not attain such certainty. To overcome this contradiction, they suggest that a mind must always be in a definite state and only the “bodies” of the minds are in a superposition.

Accordingly, when an observer measures a quantum system and becomes entangled with it, the result is a larger quantum system. To each possibility within this greater wave function, there corresponds a mental state of the brain. Ultimately, only one of these mental states is experienced, leading the others to branch off and become inaccessible, albeit real. In this way, every sentient being possesses an infinity of minds, whose prevalence corresponds to the amplitude of the wavefunction. As an observer checks a measurement, the probability of realizing a specific outcome directly correlates with the number of minds in which they see that outcome. It is in this way that the probabilistic nature of quantum measurements is obtained by the many-minds interpretation.
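The claim that the fraction of minds experiencing an outcome matches the usual quantum (Born-rule) probability, i.e. the squared amplitude, can be mimicked numerically. The following is an illustrative sketch (not from the original article), using a hypothetical two-outcome state and a large finite sample standing in for the interpretation's infinity of minds:

```python
import math
import random

# Hypothetical state a|up> + b|down>, normalized so |a|^2 + |b|^2 = 1.
a, b = math.sqrt(0.2), math.sqrt(0.8)
p_up = abs(a) ** 2  # Born-rule probability of experiencing "up"

rng = random.Random(0)
n_minds = 100_000  # finite stand-in for an infinity of minds
minds_up = sum(rng.random() < p_up for _ in range(n_minds))

# The fraction of minds that see "up" approaches |a|^2 = 0.2.
print(minds_up / n_minds)
```

The point of the sketch is only the bookkeeping: each "mind" independently lands on one definite outcome, and the relative frequency across minds reproduces the squared-amplitude weighting.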

Quantum non-locality in the many-minds interpretation

The body remains in an indeterminate state while the mind picks a stochastic result.

Consider an experiment that measures the polarization of two photons. When a photon is created, it has an indeterminate polarization. If a stream of these photons is passed through a polarization filter, 50% of the light passes through. This corresponds to each photon having a 50% chance of aligning with the filter and thus passing, or being misaligned (by 90 degrees relative to the filter) and being absorbed. Quantum mechanically, this means the photon is in a superposition of states in which it is either passed or absorbed. Next, include a second photon and polarization detector, with the two photons created in such a way that they are entangled. That is, when one photon takes on a polarization state, the other photon will always behave as if it has the same polarization. For simplicity, take the second filter to be either perfectly aligned with the first, or perfectly misaligned (a 90-degree difference in angle, such that the photon is absorbed). If the detectors are aligned, both photons are passed (they are said to agree). If they are misaligned, only the first passes and the second is absorbed (they disagree). Thus, the entanglement causes perfect correlations between the two measurements, regardless of separation distance, making the interaction non-local. This sort of experiment is further explained in Tim Maudlin's Quantum Non-Locality and Relativity, and can be related to Bell test experiments. Consider, then, the analysis of this experiment from the many-minds point of view:

No sentient observer

Consider the case where there is no sentient observer, i.e. no mind present to observe the experiment. In this case, the detector will be in an indefinite state. The photon is both passed and absorbed, and will remain in this state. The correlations are upheld in that none of the possible "minds", or wave-function states, corresponds to uncorrelated results.

One sentient observer

Now expand the situation to have one sentient being observing the device. They, too, enter the indefinite state. Their eyes, body, and brain see both results at the same time. The mind, however, stochastically chooses one of the outcomes, and that is what the mind sees. When this observer views the second detector, their body will see both results; their mind will choose the result that agrees with the first detector, and the observer will see the expected correlation. However, the observer's mind seeing one result does not directly affect the distant state; there is simply no wave function in which the expected correlations do not exist. The true correlation only happens when they actually view the second detector.

Two sentient observers

When two people look at two different detectors that scan entangled particles, both observers will enter an indefinite state, as with one observer. These results need not agree: the second observer's mind does not have to have results that correlate with the first's. When one observer tells the results to the second observer, their two minds cannot communicate and thus will only interact with the other's body, which is still indefinite. When the second observer responds, his body will respond with whatever result agrees with the first observer's mind. This means that both observers' minds will be in a state of the wavefunction that always gets the expected results, but individually their results could differ.

Non-locality of the many-minds interpretation

As we have thus seen, any correlations in the wavefunctions of each observer's minds become concrete only after the interaction between the different polarizers. The correlations on the level of individual minds correspond to the appearance of quantum non-locality (or, equivalently, violation of Bell's inequality). So the many-minds interpretation is either non-local, or it cannot explain the EPR–GHZ correlations.
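The correlations described above can be sketched numerically. The following is an illustrative toy model (not from the original article): the first photon passes its filter with probability 1/2, and the second photon then gives the same outcome with the quantum conditional probability cos²(Δθ), where Δθ is the angle between the two filters. Aligned filters (Δθ = 0°) always agree; crossed filters (Δθ = 90°) never do.

```python
import math
import random

def agreement_fraction(delta_deg, trials=20_000, seed=1):
    """Fraction of trials in which both detectors give the same outcome."""
    rng = random.Random(seed)
    p_same = math.cos(math.radians(delta_deg)) ** 2  # quantum correlation
    agree = 0
    for _ in range(trials):
        first_passes = rng.random() < 0.5             # filter 1: 50/50 pass
        same = rng.random() < p_same                  # does filter 2 match?
        second_passes = first_passes if same else not first_passes
        agree += first_passes == second_passes
    return agree / trials

print(agreement_fraction(0))   # aligned filters: always agree -> 1.0
print(agreement_fraction(90))  # crossed filters: never agree -> 0.0
```

Note that the model generates the second outcome directly from the first, which is exactly the non-local bookkeeping the text describes; no local assignment of pre-existing outcomes reproduces the full cos² pattern for all filter angles, which is the content of Bell's theorem.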

Support

There is currently no empirical evidence for the many-minds interpretation. However, there are theories that do not discredit the many-minds interpretation. In light of Bell's analysis of the consequences of quantum non-locality, empirical evidence is needed to avoid inventing novel fundamental concepts (hidden variables).[9] Two different solutions of the measurement problem then appear conceivable: consciousness causes collapse or Everett's relative state interpretation. In both cases a (suitably modified) psycho-physical parallelism can be re-established.

If neural processes can be described and analyzed, then some experiments could potentially be created to test whether affecting neural processes can have an effect on a quantum system. One could speculate about the details of this coupling between awareness and local physical systems on a purely theoretical basis, but searching for them experimentally through neurological and psychological studies would be preferable.

Objections

Nothing within quantum theory itself requires each possibility within a wave function to correspond to a mental state. As all physical states (i.e., brain states) are quantum states, their associated mental states should be as well. However, this is not what one experiences in physical reality. Albert and Loewer argue that the mind must be intrinsically different from the physical reality described by quantum theory; they thereby reject type-identity physicalism in favour of a non-reductive stance. Lockwood, by contrast, saves materialism through the notion of supervenience of the mental on the physical.

Nonetheless, the many-minds interpretation does not solve the mindless hulks problem, which is a problem of supervenience: mental states do not supervene on brain states, since a given brain state is compatible with different configurations of mental states.

Another serious objection is that workers in no collapse interpretations have produced no more than elementary models based on the definite existence of specific measuring devices. They have assumed, for example, that the Hilbert space of the universe splits naturally into a tensor product structure compatible with the measurement under consideration. They have also assumed, even when describing the behaviour of macroscopic objects, that it is appropriate to employ models in which only a few dimensions of Hilbert space are used to describe all the relevant behaviour.

Furthermore, as the many-minds interpretation is corroborated by our experience of physical reality, a notion of many unseen worlds and its compatibility with other physical theories (i.e. the principle of the conservation of mass) is difficult to reconcile. According to Schrödinger's equation, the mass-energy of the combined observed system and measurement apparatus is the same before and after. However, with every measurement process (i.e. splitting), the total mass-energy would seemingly increase.

Peter J. Lewis argues that the many-minds interpretation of quantum mechanics has absurd implications for agents facing life-or-death decisions.

In general, the many-minds theory holds that a conscious being who observes the outcome of a random zero-sum experiment will evolve into two successors in different observer states, each of whom observes one of the possible outcomes. Moreover, the theory advises one to favour choices in such situations in proportion to the probability that they will bring good results to one's various successors. But in a life-or-death case, such as an observer getting into the box with Schrödinger's cat, the observer will have only one successor, since one of the outcomes would ensure the observer's death. So it seems that the many-minds interpretation advises one to get into the box with the cat, since it is certain that one's only successor will emerge unharmed. See also quantum suicide and immortality.

Finally, it supposes that there is some physical distinction between a conscious observer and a non-conscious measuring device, so it seems to require eliminating the strong Church–Turing hypothesis or postulating a physical model for consciousness.

Saturday, April 4, 2026

Human extinction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_extinction
Nuclear war is an often predicted cause of the extinction of humankind.

Human extinction, or omnicide, is the end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction).

Some of the many possible contributors to anthropogenic hazards are climate change, global nuclear annihilation, biological warfare, weapons of mass destruction, and ecological collapse. Other scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots.

The scientific consensus is that there is a relatively low risk of near-term human extinction due to natural causes. The likelihood of human extinction through humankind's own activities, however, is a current area of research and debate.

History of thought

Early history

Before the 18th and 19th centuries, the possibility that humans or other organisms could become extinct was viewed with scepticism. It contradicted the principle of plenitude, a doctrine that all possible things exist. The principle traces back to Aristotle and was an important tenet of Christian theology. Ancient philosophers such as Plato, Aristotle, and Lucretius wrote of the end of humankind only as part of a cycle of renewal. Marcion of Sinope was a proto-Protestant who advocated for antinatalism that could lead to human extinction. Later philosophers such as Al-Ghazali, William of Ockham, and Gerolamo Cardano expanded the study of logic and probability and began wondering if abstract worlds existed, including a world without humans. Physicist Edmond Halley stated that the extinction of the human race may be beneficial to the future of the world.

The notion that species can become extinct gained scientific acceptance during the Age of Enlightenment in the 17th and 18th centuries, and by 1800 Georges Cuvier had identified 23 extinct prehistoric species. The doctrine was further gradually bolstered by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist and the development of theories of evolution. In On the Origin of Species, Charles Darwin discussed the extinction of species as a natural process and a core component of natural selection. Notably, Darwin was skeptical of the possibility of sudden extinction, viewing it as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions but rather represented unrecognized gaps in the record.

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction. In the 19th century, human extinction became a popular topic in science (e.g., Thomas Robert Malthus's An Essay on the Principle of Population) and fiction (e.g., Jean-Baptiste Cousin de Grainville's The Last Man). In 1863, a few years after Darwin published On the Origin of Species, William King proposed that Neanderthals were an extinct species of the genus Homo. The Romantic authors and poets were particularly interested in the topic. Lord Byron wrote about the extinction of life on Earth in his 1816 poem "Darkness," and in 1824 envisaged humanity being threatened by a comet impact and employing a missile system to defend against it. Mary Shelley's 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague. At the turn of the 20th century, Russian cosmism, a precursor to modern transhumanism, advocated avoiding humanity's extinction by colonizing space.

Atomic era

Castle Romeo nuclear test on Bikini Atoll

The invention of the atomic bomb prompted a wave of discussion among scientists, intellectuals, and the public at large about the risk of human extinction. In a 1945 essay, Bertrand Russell stated:

The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.

In 1950, Leo Szilard suggested it was technologically feasible to build a cobalt bomb that could render the planet unlivable. A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind." Rachel Carson's 1962 book Silent Spring raised awareness of environmental catastrophe. In 1983, Brandon Carter proposed the Doomsday argument, which used Bayesian probability to predict the total number of humans that will ever exist.

The discovery of "nuclear winter" in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the severity of extinction solely in terms of those who die "conceals its full impact," and that nuclear war "imperils all of our descendants, for as long as there will be humans."

Post-Cold War

The end of the Cold War led to an explosion of literature about human extinction. John Leslie's 1996 book The End of the World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats to the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided. Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks, published in 2008, is a collection of essays from 26 academics on various global catastrophic and existential risks. Nicholas P. Money's 2019 book The Selfish Ape delves into the environmental consequences of overexploitation. Toby Ord's 2020 book The Precipice argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology. Lyle Lewis' 2024 book Racing to Extinction explores the roots of human extinction from an evolutionary biology perspective. Lewis argues that humanity treats unused natural resources as waste and is driving ecological destruction through overexploitation, habitat loss, and denial of environmental limits. He uses vivid examples, like the extinction of the passenger pigeon and the environmental cost of rice production, to show how interconnected and fragile ecosystems are. Henry Gee's book The Rise and Fall of the Human Empire (2025) argues that humanity is on the brink of extinction due to environmental degradation and diminishing resources.

In 2022, a study led by a group of scientists called for a new research agenda to investigate the possible catastrophic effects of climate change, including scenarios that could kill 10% of the world's population or even all of humanity. The authors argue that the IPCC should produce a report on catastrophic climate change, because the effects of extreme warming, such as famine, severe weather, war, and disease outbreaks, have not been studied enough. The researchers stress the importance of understanding potential tipping points and interacting threats in order to improve preparedness for worst-case scenarios.

Probability

Natural vs. anthropogenic

Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks. A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk. Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high enough, humanity wouldn't have survived this long. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 per year (equivalent to 1 in 140 per century, on average).
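The bound described above can be sketched numerically. This is a minimal illustration, not the researchers' exact method: it assumes a constant annual extinction rate μ and treats the 200,000-year survival record as implausible below a relative likelihood ε, here taken as 10⁻⁶ purely for illustration (a value that happens to reproduce the quoted "1 in 14,000 per year" figure).

```python
import math

def natural_risk_upper_bound(track_record_years: float, epsilon: float) -> float:
    """Upper bound on a constant annual extinction rate mu, given that
    humanity has survived `track_record_years` years: the survival
    probability exp(-mu * T) must stay above `epsilon`, so
    mu <= -ln(epsilon) / T."""
    return -math.log(epsilon) / track_record_years

# 200,000 years of survival; epsilon = 1e-6 is an illustrative assumption
mu = natural_risk_upper_bound(200_000, 1e-6)
print(f"annual risk  < 1 in {1 / mu:,.0f}")         # roughly 1 in 14,500
print(f"century risk < 1 in {1 / (100 * mu):,.0f}")  # roughly 1 in 145
```

With these assumptions the bound lands near the article's "1 in 14,000 per year" and "1 in 140 per century" figures; a different choice of ε shifts the bound proportionally to ln(ε).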

Another empirical method to study the likelihood of certain natural risks is to investigate the geological record. For example, the probability of a comet or asteroid impact large enough to cause an impact winter and human extinction before the year 2100 has been estimated at one in a million. Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity. The geological record suggests that supervolcanic eruptions occur on average about once every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction. Famously, the Toba supervolcano may have nearly wiped out humanity at the time of its last eruption (though this is contentious).

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances. Nuclear weapons have existed for only about 80 years, and there is no historical track record at all for future technologies. This has led thinkers like Carl Sagan to conclude that humanity is currently in a "time of perils," a uniquely dangerous period in human history in which it is subject to unprecedented levels of risk, beginning from when humans first started posing risks to themselves through their actions. Paleobiologist Olev Vinn has suggested that humans presumably have a number of inherited behavior patterns (IBPs) that are not fine-tuned for the conditions prevailing in technological civilization. Some IBPs may be highly incompatible with such conditions and have a high potential to induce self-destruction; these may include the responses of individuals seeking power over conspecifics in relation to harvesting and consuming energy. Nonetheless, there are ways to address the issue of inherited behavior patterns.

Risk estimates

Given the limitations of ordinary observation and modeling, expert elicitation is frequently used instead to obtain probability estimates.

  • Humanity has a 95% probability of being extinct within 8,000,000 years, according to J. Richard Gott's formulation of the controversial doomsday argument, which holds that we have probably already lived through half the duration of human history.
  • In 1996, John A. Leslie estimated a 30% risk over the next five centuries (equivalent to around 6% per century, on average).
  • The Global Challenges Foundation's 2016 annual report estimates an annual probability of human extinction of at least 0.05% per year (equivalent to 5% per century, on average).
  • As of July 29, 2025, Metaculus users estimate a 1% probability of human extinction by 2100.
  • A 2020 study published in Scientific Reports warns that if deforestation and resource consumption continue at current rates, these factors could lead to a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" in the next 20 to 40 years. According to the most optimistic scenario provided by the study, the chances that human civilization survives are smaller than 10%. To avoid this collapse, the study says, humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest."
  • Nick Bostrom, a philosopher at the University of Oxford known for his work on existential risk, argues
    • that it would be "misguided" to assume that the probability of near-term extinction is less than 25%, and
    • that it will be "a tall order" for the human race to "get our precautions sufficiently right the first time," given that an existential risk provides no opportunity to learn from failure.
  • Philosopher John A. Leslie assigns a 70% chance of humanity surviving the next five centuries, based partly on the controversial philosophical doomsday argument that he champions. Leslie's argument is somewhat frequentist, based on the observation that human extinction has never been observed but requires subjective anthropic arguments. Leslie also discusses the anthropic survivorship bias (which he calls an "observational selection" effect) and states that the a priori certainty of observing an "undisastrous past" could make it difficult to argue that we must be safe because nothing terrible has yet occurred. He quotes Holger Bech Nielsen's formulation: "We do not even know if there should exist some extremely dangerous decay of, say, the proton, which caused the eradication of the earth, because if it happens we would no longer be there to observe it, and if it does not happen there is nothing to observe."
  • Jean-Marc Salotti calculated the probability of human extinction caused by a giant asteroid impact. If no planets are colonized, the probability is 0.03 to 0.3 for the next billion years. According to that study, the most frightening object is a giant long-period comet with a warning time of only a few years, leaving no time for any intervention in space or settlement on the Moon or Mars. The probability of a giant comet impact in the next hundred years is 2.2×10^-12.
  • As the United Nations Office for Disaster Risk Reduction estimated in 2023, there is a 2 to 14% chance of an extinction-level event by 2100.
  • Bill Gates told The Wall Street Journal on January 27, 2025, that he believes there is a 10–15% chance of a natural pandemic hitting in the next four years, but he estimated that there was also a 65–97.5% chance of a natural pandemic hitting in the next 26 years.
  • On March 19, 2025, Henry Gee predicted that humanity will be extinct within the next 10,000 years. To avoid this, he argued that humanity should establish space colonies within the next 200–300 years.
  • On September 11, 2025, Warp News estimated a 20% chance of global catastrophe and a 6% chance of human extinction by 2100. They also estimated a 100% chance of global catastrophe and a 30% chance of human extinction by 2500.
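The Gott estimate in the first bullet above can be reproduced with a few lines of arithmetic. This sketch uses the textbook "delta t" form of the argument and assumes roughly 200,000 years of past human history (the figure used elsewhere in this article); under the controversial premise that we observe humanity at a uniformly random point in its total lifespan, a 95% confidence interval puts the future duration between 1/39 and 39 times the past duration:

```python
def gott_interval(past_duration: float, confidence: float = 0.95):
    """Gott's delta-t argument: if we sit at a uniformly random point in
    the species' total lifespan, then with the given confidence the future
    duration lies between past*(1-c)/(1+c) and past*(1+c)/(1-c)."""
    c = confidence
    lower = past_duration * (1 - c) / (1 + c)
    upper = past_duration * (1 + c) / (1 - c)
    return lower, upper

# Assumed past duration: ~200,000 years of human history
lo, hi = gott_interval(200_000)
print(f"95% CI for future duration: {lo:,.0f} to {hi:,.0f} years")
# upper bound: 200,000 * 39 = 7.8 million years, i.e. roughly 8 million
```

The upper bound of about 7.8 million years is what the bullet rounds to "8,000,000 years"; the same formula also yields a lower bound of only about 5,100 years, which the argument's critics often highlight.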

From nuclear weapons

On November 13, 2024, the American Enterprise Institute estimated the probability of nuclear war during the 21st century at between 0% and 80%. A 2023 article in The Economist estimated an 8% chance of nuclear war causing global catastrophe and a 0.5625% chance of nuclear war causing human extinction.

From supervolcanic eruption

On November 13, 2024, the American Enterprise Institute estimated an annual probability of supervolcanic eruption around 0.0067% (0.67% per century on average).

From artificial intelligence

  • A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by superintelligence by 2100.
  • A 2016 survey of AI experts found a median estimate of 5% that human-level AI would cause an outcome that was "extremely bad (e.g., human extinction)". In 2019, the median estimate fell to 2%; in 2022 it returned to 5%; in 2023 it doubled to 10%; and in 2024 it rose to 15%.
  • In 2020, Toby Ord estimated existential risk in the next century at "1 in 6" in his book The Precipice. He also estimated a "1 in 10" risk of extinction by unaligned AI within the next century.
  • According to a July 10, 2023 article in The Economist, scientists estimated a 12% chance of AI-caused catastrophe and a 3% chance of AI-caused extinction by 2100. They also estimated a 100% chance of AI-caused catastrophe and a 25% chance of AI-caused extinction by 2833.
  • On December 27, 2024, Geoffrey Hinton estimated a 10-20% probability of AI-caused extinction in the next 30 years. He also estimated a 50-100% probability of AI-caused extinction in the next 150 years.
  • On May 6, 2025, Scientific American estimated a 0-10% probability of an AI-caused extinction by 2100.
  • On August 1, 2025, Holly Elmore estimated a 15-20% probability of an AI-caused extinction in the next 1-10 years. She also estimated a 75-100% probability of an AI-caused extinction in the next 5-50 years.
  • On November 10, 2025, Elon Musk estimated the probability of AI-driven human extinction at 20%, while others, including Yoshua Bengio's colleagues, placed the risk anywhere between 10% and 90%.

From climate change

Placard against omnicide, at Extinction Rebellion (2018)

In a 2010 interview with The Australian, the late Australian scientist Frank Fenner predicted the extinction of the human race within a century, primarily as the result of human overpopulation, environmental degradation, and climate change. There are several economists who have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.

Individual vs. species risks

Although existential risks are less manageable by individuals than, for example, health risks, according to Ken Olum, Joshua Knobe, and Alexander Vilenkin, the possibility of human extinction does have practical implications. For instance, if the "universal" doomsday argument is accepted, it changes the most likely source of disasters and hence the most efficient means of preventing them.

Difficulty

Some scholars argue that certain scenarios, including global thermonuclear war, would struggle to eradicate every last settlement on Earth. Physicist Willard Wells points out that any credible extinction scenario would have to reach into a diverse set of areas, including the underground subways of major cities, the mountains of Tibet, the remotest islands of the South Pacific, and even McMurdo Station in Antarctica, which has contingency plans and supplies for long isolation. In addition, elaborate bunkers exist for government leaders to occupy during a nuclear war. The existence of nuclear submarines, capable of remaining hundreds of meters deep in the ocean for potentially years, should also be taken into account. Any number of events could lead to a massive loss of human life, but if the last few most resilient humans (see minimum viable population) are unlikely to die off as well, then that particular extinction scenario may not seem credible.

Ethics

Value of human life

"Existential risks" are risks that threaten the entire future of humanity, whether by causing human extinction or by otherwise permanently crippling human progress. Multiple scholars have argued, based on the size of the "cosmic endowment," that because of the inconceivably large number of potential future lives that are at stake, even small reductions of existential risk have enormous value.

In one of the earliest discussions of the ethics of human extinction, Derek Parfit offers the following thought experiment:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

— Derek Parfit

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential—what humanity could expect to achieve if it survived. From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good is life for future people). On average, species survive for around a million years before going extinct. Parfit points out that the Earth will remain habitable for around a billion years. And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years. The size of the foregone potential that would be lost were humanity to become extinct is very large. Therefore, reducing existential risk by even a small amount would have a very significant moral value.

Carl Sagan wrote in 1983:

If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.

Philosopher Robert Adams in 1989 rejected Parfit's "impersonal" views but spoke instead of a moral imperative for loyalty and commitment to "the future of humanity as a vast project... The aspiration for a better society—more just, more rewarding, and more peaceful... our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects."

Philosopher Nick Bostrom argues in 2013 that preference-satisfactionist, democratic, custodial, and intuitionist arguments all converge on the common-sense view that preventing existential risk is a high moral priority, even if the exact "degree of badness" of human extinction varies between these philosophies.

Parfit argues that the size of the "cosmic endowment" can be calculated from the following argument: If Earth remains habitable for a billion more years and can sustainably support a population of more than a billion humans, then there is a potential for 10^16 (or 10,000,000,000,000,000) human lives of normal duration. Bostrom goes further, stating that if the universe is empty, then the accessible universe can support at least 10^34 biological human life-years and, if some humans were uploaded onto computers, could even support the equivalent of 10^54 cybernetic human life-years.
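Parfit's 10^16 figure is straightforward arithmetic. The sketch below assumes an average lifespan of 100 years, a round number the estimate implicitly relies on but does not state:

```python
habitable_years = 1e9   # Earth remains habitable ~1 billion more years
population      = 1e9   # sustainable population of ~1 billion people
lifespan_years  = 100   # assumed "normal duration" of a human life

# Number of successive lifetimes, times the people alive in each
potential_lives = (habitable_years / lifespan_years) * population
print(f"{potential_lives:.0e}")  # prints 1e+16
```

Bostrom's 10^34 and 10^54 life-year figures follow the same pattern of multiplying an available resource budget by an assumed per-life cost, just with astronomical rather than terrestrial inputs.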

Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking. While these views are controversial, even on these views an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people.

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity—it would destroy all cultural artifacts, languages, and traditions, and many of the things we value. So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided. One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership...between those who are living, those who are dead, and those who are to be born". If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to "pay it forward" and ensure that humanity's inheritance is passed down to future generations.

Voluntary extinction

Voluntary Human Extinction Movement

Some philosophers adopt the antinatalist position that human extinction would be a beneficial thing. David Benatar argues that coming into existence is always serious harm, and therefore it is better that people do not come into existence in the future. Further, Benatar, animal rights activist Steven Best, and anarchist Todd May posit that human extinction would be a positive thing for the other organisms on the planet and the planet itself, citing, for example, the omnicidal nature of human civilization. The environmental view in favor of human extinction is shared by the members of the Voluntary Human Extinction Movement and the Church of Euthanasia, who call for refraining from reproduction and allowing the human species to go peacefully extinct, thus stopping further environmental degradation.

In fiction

Jean-Baptiste Cousin de Grainville's 1805 science fantasy novel Le dernier homme (The Last Man), which depicts human extinction due to infertility, is considered the first modern apocalyptic novel and credited with launching the genre. Other notable early works include Mary Shelley's 1826 The Last Man, depicting human extinction caused by a pandemic, and Olaf Stapledon's 1937 Star Maker, "a comparative study of omnicide."

Some 21st-century pop-science works, including The World Without Us by Alan Weisman and the television specials Life After People and Aftermath: Population Zero, pose a thought experiment: what would happen to the rest of the planet if humans suddenly disappeared? A threat of human extinction, such as through a technological singularity (also called an intelligence explosion), drives the plot of innumerable science fiction stories; an influential early example is the 1951 film adaptation of When Worlds Collide. Usually the extinction threat is narrowly avoided, but some exceptions exist, such as R.U.R. and Steven Spielberg's A.I.

Collapsology

From Wikipedia, the free encyclopedia

The terms collapsology and collapse studies are neologisms used to designate the transdisciplinary study of the risks of collapse of industrial civilization. The field is concerned with the general collapse of societies induced by climate change, as well as "scarcity of resources, vast extinctions, and natural disasters."

Although the concept of civilizational or societal collapse had already existed for many years, collapsology focuses its attention on contemporary, industrial, and globalized societies.

Background

The word collapsology was coined and popularized by Pablo Servigne and Raphaël Stevens in their essay Comment tout peut s'effondrer. Petit manuel de collapsologie à l'usage des générations présentes (How Everything Can Collapse: A Manual for Our Times), published in France in 2015. It also developed into a movement building on Jared Diamond's earlier book Collapse. Use of the term has spread, especially among journalists reporting on the deep adaptation writings of Jem Bendell.

Collapsology is based on the idea that humans impact their environment in a sustained and negative way, and promotes the concept of an environmental emergency, linked in particular to global warming and biodiversity loss. Collapsologists believe, however, that the collapse of industrial civilization could result from a combination of different crises: environmental, but also economic, geopolitical, democratic, and others.

Recent literature reviews have shown the maturation of collapsology as an academic field. Archaeologist Guy Middleton argues that collapse studies have evolved into "a more nuanced, self-critical, and sophisticated field" that moves beyond environmental determinism and apocalyptic narratives. This evolution has led to applied collapsology, which draws from archaeology and ancient history to inform contemporary sustainability policies and climate change adaptation strategies, making collapse research increasingly relevant for resilience planning. Moreover, Brozović's comprehensive analysis of over 400 academic works identified five key scholarly conversations within collapse research: past collapses (historical and archaeological studies), general explanations of collapse (theoretical frameworks), alternatives to collapse (resilience and adaptation strategies), fictional collapses (speculative fiction and dystopian literature), and future climate change and societal collapse (predictive and scenario-based studies). Additionally, Shackelford and colleagues developed innovative methodologies for systematically reviewing the growing body of existential risk literature, including risks of human extinction and civilizational collapse, using crowdsourcing and machine learning techniques to handle the overwhelming volume of relevant research.

Etymology

The word collapsology is a portmanteau formed from the Latin collapsus ('fallen, collapsed') and the suffix -logy (from the Greek logos, 'study'), and is intended to name an approach of a scientific nature.

Since 2015, several words have been proposed to describe the various approaches dealing with the issue of collapse: collapsosophy to designate the philosophical approach, collapsopraxis to designate the ideology inspired by this study, and collapsonauts to designate people living with this idea in mind.

Distinction from eschatology

Unlike traditional eschatological thinking, collapsology is based on data and concepts from contemporary scientific research, primarily the human understanding of climate change and ecological overshoot as caused by human economic and geopolitical systems. It is not aligned with the idea of a cosmic, apocalyptic "end of the world", but rather hypothesizes the end of the current human world, the "thermo-industrial civilization".

This distinction is further stressed by historian Eric H. Cline, who points out that while the world as a whole has obviously not ended, civilizations have collapsed throughout history, making the claim that "prophets have always predicted doom and been wrong" inapplicable to societal collapse.

Scientific basis

As early as 1972, The Limits to Growth, a report produced by MIT researchers, warned of the risks of exponential demographic and economic growth on a planet with limited resources.

As a systemic approach, collapsology draws on prospective studies such as The Limits to Growth, on global and regional assessments of environmental, social, and economic trends (such as the IPCC and IPBES reports, and the Global Environment Outlook (GEO) reports periodically published by the Early Warning and Assessment Division of UNEP), and on numerous scientific works, including "A safe operating space for humanity" and "Approaching a state shift in Earth's biosphere", published in Nature in 2009 and 2012 respectively; "The trajectory of the Anthropocene: The Great Acceleration", published in 2015 in The Anthropocene Review; and "Trajectories of the Earth System in the Anthropocene", published in 2018 in the Proceedings of the National Academy of Sciences of the United States of America. There is also evidence supporting the importance of collective processing of the emotional aspects of contemplating societal collapse, and the inherent adaptiveness of these emotional experiences.

History

Precursors

Although the neologism only appeared in 2015 and concerns the study of the collapse of industrial civilization specifically, the study of the collapse of societies is older and has probably been a concern of every civilization. Among the works on this theme (in a broad sense) one can mention those of Berossus (278 BC), Pliny the Younger (79 AD), Ibn Khaldun (1375), Montesquieu (1734), Thomas Robert Malthus (1766–1834), Edward Gibbon (1776), Georges Cuvier (1821), Élisée Reclus (1905), Oswald Spengler (1918), Arnold Toynbee (1939), Günther Anders (1956), Samuel Noah Kramer (1956), Leopold Kohr (1957), Rachel Carson (1962), Paul Ehrlich (1969), Nicholas Georgescu-Roegen (1971), Donella Meadows, Dennis Meadows & Jørgen Randers (1972), René Dumont (1973), Hans Jonas (1979), Joseph Tainter (1988), Al Gore (1992), Hubert Reeves (2003), Richard Posner (2004), Jared Diamond (2005), and Niall Ferguson (2013).

Arnold J. Toynbee

In his monumental (initially published in twelve volumes) and highly controversial work of contemporary historiography entitled A Study of History (1972), Arnold J. Toynbee (1889–1975) deals with the genesis of civilizations (chapter 2), their growth (chapter 3), their decline (chapter 4), and their disintegration (chapter 5). According to him, the mortality of civilizations is, for the historian, a commonplace fact, as is the fact that they follow one another over a long period of time.

Joseph Tainter

In his book The Collapse of Complex Societies, the anthropologist and historian Joseph Tainter (born 1949) studies the collapse of various civilizations, including that of the Roman Empire, in terms of network theory, energy economics and complexity theory. For Tainter, an increasingly complex society eventually collapses because of the ever-increasing difficulty in solving its problems.

Jared Diamond

The American geographer, evolutionary biologist and physiologist Jared Diamond (born 1937) had already explored the theme of civilizational collapse in his 2005 book Collapse: How Societies Choose to Fail or Succeed. Drawing on historical cases, notably the Rapa Nui, Viking, and Maya civilizations, Diamond argues that humanity collectively faces, on a much larger scale, many of the same issues those civilizations did, with possibly catastrophic near-future consequences for many of the world's populations. The book resonated beyond the United States, despite some criticism. Proponents of catastrophism who identify themselves as "enlightened catastrophists" draw on Diamond's work, helping to expand the relational ecology network, whose members believe that humanity is heading toward disaster. Collapse approached civilizational collapse from archaeological, ecological, and biogeographical perspectives on ancient civilizations.

Modern collapsologists

Since the coining of the term collapsology, many French public figures have gravitated in or around the collapsologist sphere. Not all share the same vision of civilizational collapse, and some even reject the term "collapsologist", but all agree that contemporary industrial civilization, and the biosphere as a whole, are on the verge of a global crisis of unprecedented proportions. According to them, the process is already under way, and all that remains possible is to try to reduce its devastating effects in the near future. The leaders of the movement are Yves Cochet and Agnès Sinaï of the Momentum Institute (a think tank exploring the causes of environmental and societal risks of collapse of the thermo-industrial civilization, and possible actions for adapting to it), and Pablo Servigne and Raphaël Stevens, authors of the essay How Everything Can Collapse: A Manual for Our Times.

Beyond the French collapsologists mentioned above, one can mention: Aurélien Barrau (astrophysicist), Philippe Bihouix (engineer, low-tech developer), Dominique Bourg (philosopher), Valérie Cabanes (lawyer, seeking recognition of the crime of ecocide by the international criminal court), Jean-Marc Jancovici (energy-climate specialist), and Paul Jorion (anthropologist, sociologist).

In 2020 the French humanities and social science website Cairn.info published a dossier on collapsology titled The Age of Catastrophe, with contributions from historian François Hartog, economist Emmanuel Hache, philosopher Pierre Charbonnier, art historian Romain Noël, geoscientist Gabriele Salerno, and American philosopher Eugene Thacker.

Although the term remains little known in the English-speaking world, many publications address the same topic (for example, David Wallace-Wells's 2017 article "The Uninhabitable Earth" and his 2019 bestselling book of the same name, arguably a mass-market work of collapsology that does not use the term). The term is now gradually spreading on general and scientific English-speaking social networks. In his book Anti-Tech Revolution: Why and How, Ted Kaczynski also warned of the threat of catastrophic societal collapse.
