Tuesday, February 3, 2026

Doomsday argument

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Doomsday_argument
World population from 10,000 BC to AD 2000

The doomsday argument (DA), or Carter catastrophe, is a probabilistic argument that aims to predict the total number of humans who will ever live. It argues that if a human's birth rank is randomly sampled from the set of all humans who will ever live, it is improbable that one would be at the extreme beginning. This implies that the total number of humans is unlikely to be much larger than the number of humans born so far.

The doomsday argument was originally proposed by the astrophysicist Brandon Carter in 1983, leading to the initial name of the Carter catastrophe. The argument was subsequently championed by the philosopher John A. Leslie and has since been independently conceived by J. Richard Gott and Holger Bech Nielsen.

Summary

The premise of the argument is as follows: suppose that the total number of human beings who will ever exist is fixed. If so, the likelihood of a randomly selected person existing at a particular time in history would be proportional to the total population at that time. Given this, the argument posits that a person alive today should adjust their expectations about the future of the human race because their existence provides information about the total number of humans that will ever live.

If the total number of humans who were born or will ever be born is denoted by N, then the Copernican principle suggests that any one human is equally likely to find themselves at any position n within the total population N.

Equivalently, the fractional position f = n/N is uniformly distributed on (0, 1] even after learning the absolute position n. For example, there is a 95% chance that f is in the interval (0.05, 1), that is f > 0.05. In other words, one can assume with 95% certainty that any individual human would be within the last 95% of all the humans ever to be born. If the absolute position n is known, this argument implies a 95% confidence upper bound for N, obtained by rearranging n/N > 0.05 to give N < 20n.

If Leslie's figure is used, then approximately 60 billion humans have been born so far, so it can be estimated that there is a 95% chance that the total number of humans will be less than 20 × 60 billion = 1.2 trillion. Assuming that the world population stabilizes at 10 billion and a life expectancy of 80 years, it can be estimated that the remaining 1,140 billion humans will be born in 9,120 years. Although estimates vary depending on projections of world population over the coming centuries, the argument maintains that it is unlikely that more than 1.2 trillion humans will ever live.
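The arithmetic behind these figures can be sketched directly; the sketch below simply encodes the assumptions quoted above (Leslie's 60 billion births to date, a population stabilizing at 10 billion, and an 80-year life expectancy), not any additional data.

```python
# Doomsday-argument arithmetic from the summary above; every constant is one of
# the article's stated assumptions, not new data.
births_so_far = 60e9          # humans born to date (Leslie's figure)
confidence = 0.95             # confidence level used in the argument

# 95% upper bound on total births: n/N > 0.05  =>  N < 20n
total_births_bound = births_so_far / (1 - confidence)    # 1.2 trillion

remaining_births = total_births_bound - births_so_far     # 1,140 billion
stable_population = 10e9      # assumed stabilized world population
lifespan = 80.0               # assumed life expectancy, in years

# With a stable population, births per year ~ population / lifespan.
births_per_year = stable_population / lifespan
years_remaining = remaining_births / births_per_year

print(f"95% upper bound on total births: {total_births_bound:.3g}")
print(f"Years until that bound is reached: {years_remaining:.0f}")   # ~9120
```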

Aspects

Assume, for simplicity, that the total number of humans who will ever be born is 60 billion (N1), or 6,000 billion (N2). If there is no prior knowledge of the position that a currently living individual, X, has in the history of humanity, one may instead compute how many humans were born before X, and arrive at say 59,854,795,447, which would necessarily place X among the first 60 billion humans who have ever lived.

It is possible to sum the probabilities for each value of N and, therefore, to compute a statistical 'confidence limit' on N. For example, taking the numbers above with a flat prior (50% each for N1 and N2), the likelihood of X's particular birth rank is 1/N, and so is 100 times greater under N1 than under N2; applying Bayes' theorem, it is about 99% certain that N is smaller than 6 trillion.

Note that, as remarked above, this argument assumes that the prior probability for N is flat, or 50% for N1 and 50% for N2 in the absence of any information about X. On the other hand, it is possible to conclude, given X, that N2 is more likely than N1 if a different prior is used for N. More precisely, Bayes' theorem tells us that P(N|X) = P(X|N)P(N)/P(X), and the conservative application of the Copernican principle tells us only how to calculate P(X|N). Taking P(X) to be flat, we still have to assume the prior probability P(N) that the total number of humans is N. If the prior makes N2 much more likely than N1, then the posterior P(N|X) can end up weighted towards the bigger value of N despite the likelihood favoring N1. (Conversely, one might argue for a prior that falls off with N, for example because producing a larger population takes more time, increasing the chance that a low-probability but cataclysmic natural event will take place in that time.) A further, more detailed discussion, as well as relevant distributions P(N), are given below in the Counterarguments section.
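The two-hypothesis update described above can be written out explicitly. The sketch below assumes the flat prior and the 1/N likelihood from the Copernican/indifference reasoning; the birth rank is the illustrative figure quoted in the text.

```python
# Bayesian update over the two candidate totals from the 'Aspects' section.
N1, N2 = 60e9, 6000e9          # 60 billion vs 6,000 billion humans ever born
prior = {N1: 0.5, N2: 0.5}     # flat prior over the two hypotheses

rank = 59_854_795_447          # X's birth rank (the figure quoted above)

def likelihood(rank, N):
    # Under the indifference assumption, any particular rank <= N has
    # probability 1/N; ranks beyond N are impossible.
    return 1.0 / N if rank <= N else 0.0

unnormalized = {N: prior[N] * likelihood(rank, N) for N in prior}
evidence = sum(unnormalized.values())
posterior = {N: p / evidence for N, p in unnormalized.items()}

print(posterior)   # ~99% for N1 and ~1% for N2, the '99% certain' figure above
```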

The doomsday argument does not say that humanity cannot or will not exist indefinitely. It does not put any upper limit on the number of humans that will ever exist nor provide a date for when humanity will become extinct. An abbreviated form of the argument does make these claims, by confusing probability with certainty. However, the actual conclusion for the version used above is that there is a 95% chance of extinction within 9,120 years and a 5% chance that some humans will still be alive at the end of that period. (The precise numbers vary among specific doomsday arguments.)

Variations

This argument has generated a philosophical debate, and no consensus has yet emerged on its solution. The variants described below produce the DA by separate derivations.

Gott's formulation: "vague prior" total population

Gott specifically proposes the functional form for the prior distribution of the number of people who will ever be born (N). Gott's DA used the vague prior distribution:

P(N) = k/N,

where

  • P(N) is the probability, prior to discovering n (the number of humans born so far), that the total number of humans who will ever be born is N.
  • The constant, k, is chosen to normalize the sum of P(N). The value chosen is not important here, just the functional form (this is an improper prior, so no value of k gives a valid distribution, but Bayesian inference is still possible using it.)

Since Gott specifies the prior distribution of total humans, P(N), Bayes' theorem and the principle of indifference alone give us P(N|n), the probability of N humans being born if n is a random draw from N:

P(N|n) = P(n|N) P(N) / P(n).

This is Bayes' theorem for the posterior probability of the total population ever born, N, conditioned on the population born thus far, n. Now, using the indifference principle:

P(n|N) = 1/N, for n ≤ N.

The unconditioned n distribution of the current population is identical to the vague prior N probability density function, so:

P(n) = k/n,

giving P(N|n) for each specific N (through a substitution into the posterior probability equation):

P(N|n) = n/N^2.

The easiest way to produce the doomsday estimate with a given confidence (say 95%) is to pretend that N is a continuous variable (since it is very large) and integrate over the probability density from N = n to N = Z. (This will give a function for the probability that N ≤ Z):

P(N ≤ Z) = ∫ from N = n to Z of (n/N^2) dN = 1 - n/Z.

Defining Z = 20n gives:

P(N ≤ 20n) = 1 - n/(20n) = 1 - 1/20 = 95%.

This is the simplest Bayesian derivation of the doomsday argument:

The chance that the total number of humans that will ever be born (N) is greater than twenty times the total that have been born is below 5%.

The use of a vague prior distribution seems well-motivated as it assumes as little knowledge as possible about N, given that some particular function must be chosen. It is equivalent to the assumption that the probability density of one's fractional position remains uniformly distributed even after learning of one's absolute position (n).
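As a numerical sanity check of the derivation above, the posterior density n/N^2 can be integrated between N = n and N = 20n; the value of n below is purely illustrative.

```python
# Numerical check that the vague-prior posterior P(N | n) = n / N^2 gives
# P(N <= 20n) = 0.95.
import numpy as np

n = 60e9                      # births so far (illustrative value)
Z = 20 * n                    # candidate upper bound on the total N

N = np.logspace(np.log10(n), np.log10(Z), 200_000)   # grid from n to Z
density = n / N**2                                    # posterior density
prob_below_Z = np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(N))  # trapezoid rule

print(prob_below_Z)           # ~0.95, matching 1 - n/Z
```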

Gott's "reference class" in his original 1993 paper was not the number of births, but the number of years "humans" had existed as a species, which he put at 200,000. Also, Gott tried to give a 95% confidence interval between a minimum survival time and a maximum. Because of the 2.5% chance that he gives to underestimating the minimum, he has only a 2.5% chance of overestimating the maximum. This equates to 97.5% confidence that extinction occurs before the upper boundary of his confidence interval, which can be used in the integral above with Z = 40n, and n = 200,000 years:

P(N ≤ 8,000,000 years) = 1 - 200,000/8,000,000 = 97.5%.

This is how Gott produces a 97.5% confidence of extinction within N ≤ 8,000,000 years. The number he quoted was the likely time remaining, N − n = 7.8 million years. This was much higher than the temporal confidence bound produced by counting births, because it applied the principle of indifference to time. (Producing different estimates by sampling different parameters in the same hypothesis is Bertrand's paradox.) Similarly, there is a 97.5% chance that the present lies in the first 97.5% of human history, so there is a 97.5% chance that the total lifespan of humanity will be at least

200,000/0.975 ≈ 205,100 years, that is, about 5,100 years beyond the present;

In other words, Gott's argument gives a 95% confidence that humans will go extinct between 5,100 and 7.8 million years in the future.

Gott has also tested this formulation against the Berlin Wall and Broadway and off-Broadway plays.

Leslie's argument

Leslie's argument differs from Gott's version in that he does not assume a vague prior probability distribution for N. Instead, he argues that the force of the doomsday argument resides purely in the increased probability of an early doomsday once you take into account your birth position, regardless of your prior probability distribution for N. He calls this the probability shift.

Heinz von Foerster's doomsday equation

Heinz von Foerster argued that humanity's abilities to construct societies, civilizations and technologies do not result in self-inhibition. Rather, societies' success varies directly with population size. Von Foerster found that this model fits some 25 data points from the birth of Jesus to 1958, with only 7% of the variance left unexplained. Several follow-up letters (1961, 1962, ...) were published in Science showing that von Foerster's equation was still on track. The data continued to fit up until 1973. The most remarkable thing about von Foerster's model was that it predicted that the human population would reach infinity, or a mathematical singularity, on Friday, November 13, 2026. In fact, von Foerster did not imply that the world population on that day could actually become infinite. The real implication was that the world population growth pattern followed for many centuries prior to 1960 was about to come to an end and be transformed into a radically different pattern. Note that this prediction began to be fulfilled just a few years after the "doomsday" prediction was published.
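For illustration, von Foerster's fit is often summarized as a hyperbolic-growth law of roughly the form N(t) ≈ C/(T − t), with T falling in late 2026. The constants in the sketch below are rough, commonly quoted approximations rather than the exact parameters of the 1960 paper (which used an exponent slightly below one).

```python
# Rough hyperbolic-growth illustration of von Foerster's 'doomsday' fit.
# C and T are approximate, commonly quoted values, not the paper's exact fit.
C = 2.0e11        # person-years (approximate constant)
T = 2026.9        # the 'singularity' year (Friday, November 13, 2026)

def population(year):
    """World population implied by the hyperbolic law N(t) = C / (T - year)."""
    return C / (T - year)

for year in (1800, 1900, 1960, 2000):
    print(year, f"{population(year) / 1e9:.2f} billion")
# The pre-1960 values track the historical record fairly well; after the 1960s
# actual growth falls below the hyperbola, the 'change of pattern' noted above.
```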

Reference classes

The reference class from which n is drawn, and of which N is the ultimate size, is a crucial point of contention in the doomsday argument. The "standard" doomsday argument hypothesis skips over this point entirely, merely stating that the reference class is the number of "people". Given that you are human, the Copernican principle might be used to determine whether you were born exceptionally early; however, the term "human" has been heavily contested on practical and philosophical grounds. According to Nick Bostrom, consciousness is (part of) the discriminator between what is in and what is out of the reference class, and therefore extraterrestrial intelligence might have a significant impact on the calculation.

The following sub-sections relate to different suggested reference classes, each of which has had the standard doomsday argument applied to it.

SSSA: Sampling from observer-moments

Nick Bostrom, considering observation selection effects, has produced a Self-Sampling Assumption (SSA): "that you should think of yourself as if you were a random observer from a suitable reference class". If the "reference class" is the set of humans to ever be born, this gives N < 20n with 95% confidence (the standard doomsday argument). However, he has refined this idea to apply to observer-moments rather than just observers. He has formalized this as:

The strong self-sampling assumption (SSSA): Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class.

An application of the principle underlying SSSA (though this application is nowhere expressly articulated by Bostrom), is: If the minute in which you read this article is randomly selected from every minute in every human's lifespan, then (with 95% confidence) this event has occurred after the first 5% of human observer-moments. If the mean lifespan in the future is twice the historic mean lifespan, this implies 95% confidence that N < 10n (the average future human will account for twice the observer-moments of the average historic human). Therefore, the 95th percentile extinction-time estimate in this version is 4560 years.

Counterarguments

We are in the earliest 5%, a priori

One counterargument to the doomsday argument agrees with its statistical methods but disagrees with its extinction-time estimate. This position requires justifying why the observer cannot be assumed to be randomly selected from the set of all humans ever to be born, which implies that this set is not an appropriate reference class. Disagreeing with the doomsday argument in this way amounts to claiming that the observer is within the first 5% of humans to be born.

By analogy, if one is a member of 50,000 people in a collaborative project, the reasoning of the doomsday argument implies that there will never be more than a million members of that project, within a 95% confidence interval. However, if one's characteristics are typical of an early adopter, rather than typical of an average member over the project's lifespan, then it may not be reasonable to assume one has joined the project at a random point in its life. For instance, the mainstream of potential users will prefer to be involved when the project is nearly complete. However, if one enjoys the project's incompleteness, one already knows that one is unusual, even before discovering one's early involvement.

If one has measurable attributes that set one apart from the typical long-run user, the project doomsday argument can be refuted based on the fact that one could expect to be within the first 5% of members, a priori. The analogy to the total-human-population form of the argument is that confidence in a prediction of the distribution of human characteristics that places modern and historic humans outside the mainstream implies that it is already known, before examining n, that it is likely to be very early in N. This is an argument for changing the reference class.

For example, if one is certain that 99% of humans who will ever live will be cyborgs, but that only a negligible fraction of humans who have been born to date are cyborgs, one could be equally certain that at least one hundred times as many people remain to be born as have been.

Robin Hanson's paper sums up these criticisms of the doomsday argument:

All else is not equal; we have good reasons for thinking we are not randomly selected humans from all who will ever live.

Human extinction is distant, a posteriori

The a posteriori observation that extinction level events are rare could be offered as evidence that the doomsday argument's predictions are implausible; typically, extinctions of dominant species happen less often than once in a million years. Therefore, it is argued that human extinction is unlikely within the next ten millennia. (Another probabilistic argument, drawing a different conclusion than the doomsday argument.)

In Bayesian terms, this response to the doomsday argument says that our knowledge of history (or ability to prevent disaster) produces a prior marginal for N with a minimum value in the trillions. If N is distributed uniformly from 10^12 to 10^13, for example, then the probability of N < 1,200 billion inferred from n = 60 billion will be extremely small. This is an equally impeccable Bayesian calculation, rejecting the Copernican principle because we must be 'special observers' since there is no likely mechanism for humanity to go extinct within the next hundred thousand years.

This response is accused of overlooking the technological threats to humanity's survival, to which earlier life was not subject, and is specifically rejected by most academic critics of the doomsday argument (arguably excepting Robin Hanson).

The prior N distribution may make n very uninformative

Robin Hanson argues that N's prior may be exponentially distributed:

N = c e^(qU), where U is drawn from a uniform distribution on (0, 1].

Here, c and q are constants. If q is large, then our 95% confidence upper bound is on the uniform draw, not the exponential value of N.

The simplest way to compare this with Gott's Bayesian argument is to flatten the distribution from the vague prior by having the probability fall off more slowly with N (than inverse proportionally). This corresponds to the idea that humanity's growth may be exponential in time with doomsday having a vague prior probability density function in time. This would mean that N, the last birth, would have a distribution looking like the following:

Pr(N) = k/N^α, for some exponent 0 < α ≤ 1.

This prior N distribution is all that is required (with the principle of indifference) to produce the inference of N from n, and this is done in an identical way to the standard case, as described by Gott (equivalent to α = 1 in this distribution):

Substituting into the posterior probability equation:

Pr(N | n) = α n^α / N^(α+1).

Integrating the probability of any N above xn:

Pr(N > xn) = (1/x)^α.

For example, if x = 20, and α = 0.5, this becomes:

Pr(N > 20n) = (1/20)^0.5 ≈ 22%.

Therefore, with this prior, the chance of a trillion births is over 20%, rather than the 5% chance given by the standard DA. If α is reduced further by assuming a flatter prior N distribution, then the limits on N given by n become weaker. An α of one reproduces Gott's calculation with a birth reference class, and an α of around 0.5 could approximate his temporal confidence interval calculation (if the population were expanding exponentially). As α gets smaller, n becomes less and less informative about N. In the limit α → 0 this distribution approaches an (unbounded) uniform distribution, where all values of N are equally likely. This is Page et al.'s "Assumption 3", which they find few reasons to reject, a priori. (Although all distributions with α ≤ 1 are improper priors, this applies to Gott's vague-prior distribution also, and they can all be converted to produce proper integrals by postulating a finite upper population limit.) Since the probability of reaching a population of size 2N is usually thought of as the chance of reaching N multiplied by the survival probability from N to 2N, it follows that Pr(N) must be a monotonically decreasing function of N, but this does not necessarily require an inverse proportionality.
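A short sketch of how the flatter prior weakens the bound, using the tail formula reconstructed above, Pr(N > xn) = x^(−α); α = 1 recovers Gott's 5% figure.

```python
# Effect of the prior exponent alpha on the doomsday bound: Pr(N > x*n) = x**(-alpha).
# (Tail formula as reconstructed in the discussion above.)
x = 20   # multiple of the births-so-far count n

for alpha in (1.0, 0.75, 0.5, 0.25, 0.1):
    tail = x ** (-alpha)
    print(f"alpha = {alpha:4.2f}: Pr(N > {x}n) = {tail:.1%}")
# alpha = 1 gives the standard 5%; as alpha shrinks, n says less and less about
# N and the probability of a much larger total population approaches 100%.
```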

Infinite expectation

Another objection to the doomsday argument is that the expected total human population is actually infinite. The calculation is as follows:

The total human population N = n/f, where n is the human population to date and f is our fractional position in the total.
We assume that f is uniformly distributed on (0,1].
The expectation of N is

E[N] = ∫ from 0 to 1 of (n/f) df = ∞,

since the integral of 1/f diverges logarithmically at f = 0.

For a similar example of counterintuitive infinite expectations, see the St. Petersburg paradox.
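The divergence can be made concrete by truncating the integral: for every cutoff eps > 0, the contribution to E[N] from f > eps is n·ln(1/eps), which grows without bound as eps shrinks. The value of n below is illustrative.

```python
# E[N] = E[n / f] with f ~ Uniform(0, 1] is infinite: the lower bound
# n * ln(1/eps), from integrating n/f over (eps, 1], grows without limit.
import math

n = 60e9                                   # births so far (illustrative value)

for eps in (1e-2, 1e-4, 1e-8, 1e-16):
    lower_bound = n * math.log(1.0 / eps)  # integral of n/f over (eps, 1]
    print(f"eps = {eps:.0e}: E[N] >= {lower_bound:.3g}")
# No finite number bounds all of these, so the expectation itself is infinite,
# echoing the St. Petersburg paradox mentioned above.
```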

Self-indication assumption: The possibility of not existing at all

One objection is that the probability of a human existing at all depends on how many humans will ever exist (N). If this is a high number, then the probability of their existing is higher than if only a few humans will ever exist. Since they do indeed exist, this is evidence that the number of humans that will ever exist is high.

This objection, originally by Dennis Dieks (1992), is now known by Nick Bostrom's name for it: the "Self-Indication Assumption objection". It can be shown that some SIAs prevent any inference of N from n (the current population).

The SIA has been defended by Matthew Adelstein, arguing that all alternatives to the SIA imply the soundness of the doomsday argument, and other even stranger conclusions.

Caves' rebuttal

The Bayesian argument by Carlton M. Caves states that the uniform distribution assumption is incompatible with the Copernican principle, not a consequence of it.

Caves gives a number of examples to argue that Gott's rule is implausible. For instance, he says, imagine stumbling into a birthday party, about which you know nothing:

Your friendly enquiry about the age of the celebrant elicits the reply that she is celebrating her (tp=) 50th birthday. According to Gott, you can predict with 95% confidence that the woman will survive between [50]/39 = 1.28 years and 39[×50] = 1,950 years into the future. Since the wide range encompasses reasonable expectations regarding the woman's survival, it might not seem so bad, till one realizes that [Gott's rule] predicts that with probability 1/2 the woman will survive beyond 100 years old and with probability 1/3 beyond 150. Few of us would want to bet on the woman's survival using Gott's rule. (See Caves' online paper below.)

Caves' example exposes a weakness in J. Richard Gott's "Copernicus method" DA: it does not specify when the "Copernicus method" can be applied. But this criticism is less effective against more refined versions of the argument. Epistemological refinements of Gott's argument by philosophers such as Nick Bostrom specify that:

Knowing the absolute birth rank (n) must give no information on the total population (N).

Careful DA variants specified with this rule aren't shown implausible by Caves' "Old Lady" example above, because the woman's age is given prior to the estimate of her lifespan. Since human age gives an estimate of survival time (via actuarial tables) Caves' Birthday party age-estimate could not fall into the class of DA problems defined with this proviso.

To produce a comparable "Birthday Party Example" of the carefully specified Bayesian DA, we would need to completely exclude all prior knowledge of likely human life spans; in principle this could be done (e.g.: hypothetical Amnesia chamber). However, this would remove the modified example from everyday experience. To keep it in the everyday realm the lady's age must be hidden prior to the survival estimate being made. (Although this is no longer exactly the DA, it is much more comparable to it.)

Without knowing the lady's age, the DA reasoning produces a rule to convert the birthday (n) into a maximum lifespan with 50% confidence (N). Gott's Copernicus method rule is simply: Prob (N < 2n) = 50%. How accurate would this estimate turn out to be? Western demographics are now fairly uniform across ages, so a random birthday (n) could be (very roughly) approximated by a U(0, M] draw where M is the maximum lifespan in the census. In this 'flat' model, everyone shares the same lifespan so N = M. If n happens to be less than M/2, then Gott's 2n estimate of N will be under M, its true figure. The other half of the time, 2n overestimates M, and in this case (the one Caves highlights in his example) the subject will die before the 2n estimate is reached. In this "flat demographics" model Gott's 50% confidence figure is proven right 50% of the time.
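The "flat demographics" check can be simulated directly: everyone lives exactly M years, the observed age n is a uniform draw on (0, M], and Gott's claim N < 2n is compared with the truth. M below is an arbitrary illustrative lifespan.

```python
# Simulation of the 'flat demographics' test of Gott's 50% rule.
import numpy as np

rng = np.random.default_rng(1)
M = 80.0                                  # everyone's lifespan in this toy model
trials = 100_000

ages = rng.uniform(0.0, M, size=trials)   # observed age n, roughly U(0, M]
claim_correct = M < 2 * ages              # Gott's claim: true total N (= M) < 2n

print(claim_correct.mean())               # ~0.5: right about half the time,
                                          # exactly as the 50% confidence promises
```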

Self-referencing doomsday argument rebuttal

Some philosophers have suggested that only people who have contemplated the doomsday argument (DA) belong in the reference class "human". If that is the appropriate reference class, Carter defied his own prediction when he first described the argument (to the Royal Society). An attendant could have argued thus:

Presently, only one person in the world understands the Doomsday argument, so by its own logic there is a 95% chance that it is a minor problem which will only ever interest twenty people, and I should ignore it.

Jeff Dewynne and Professor Peter Landsberg suggested that this line of reasoning will create a paradox for the doomsday argument:

If a member of the Royal Society did pass such a comment, it would indicate that they understood the DA sufficiently well that in fact 2 people could be considered to understand it, and thus there would be a 5% chance that 40 or more people would actually be interested. Also, of course, ignoring something because you only expect a small number of people to be interested in it is extremely short sighted—if this approach were to be taken, nothing new would ever be explored, if we assume no a priori knowledge of the nature of interest and attentional mechanisms.

Conflation of future duration with total duration

Various authors have argued that the doomsday argument rests on an incorrect conflation of future duration with total duration. This occurs in the specification of the two time periods as "doom soon" and "doom deferred" which means that both periods are selected to occur after the observed value of the birth order. A rebuttal in Pisaturo (2009) argues that the doomsday argument relies on the equivalent of this equation:

P(HFS | Dp, X) / P(HFL | Dp, X) = [P(HFS | X) / P(HFL | X)] × [P(Dp | HTS, X) / P(Dp | HTL, X)],
where:
X = the prior information;
Dp = the data that past duration is tp;
HFS = the hypothesis that the future duration of the phenomenon will be short;
HFL = the hypothesis that the future duration of the phenomenon will be long;
HTS = the hypothesis that the total duration of the phenomenon will be short, i.e., that tt, the phenomenon's total longevity, equals tTS;
HTL = the hypothesis that the total duration of the phenomenon will be long, i.e., that tt, the phenomenon's total longevity, equals tTL, with tTL > tTS.

Pisaturo then observes:

Clearly, this is an invalid application of Bayes' theorem, as it conflates future duration and total duration.

Pisaturo takes numerical examples based on two possible corrections to this equation: considering only future durations and considering only total durations. In both cases, he concludes that the doomsday argument's claim, that there is a "Bayesian shift" in favor of the shorter future duration, is fallacious.

This argument is also echoed in O'Neill (2014). In this work O'Neill argues that a unidirectional "Bayesian Shift" is an impossibility within the standard formulation of probability theory and is contradictory to the rules of probability. As with Pisaturo, he argues that the doomsday argument conflates future duration with total duration by specification of doom times that occur after the observed birth order. According to O'Neill:

The reason for the hostility to the doomsday argument and its assertion of a "Bayesian shift" is that many people who are familiar with probability theory are implicitly aware of the absurdity of the claim that one can have an automatic unidirectional shift in beliefs regardless of the actual outcome that is observed. This is an example of the "reasoning to a foregone conclusion" that arises in certain kinds of failures of an underlying inferential mechanism. An examination of the inference problem used in the argument shows that this suspicion is indeed correct, and the doomsday argument is invalid. (pp. 216-217)

Confusion over the meaning of confidence intervals

Gelman and Robert assert that the doomsday argument confuses frequentist confidence intervals with Bayesian credible intervals. Suppose that every individual knows their number n and uses it to estimate an upper bound on N. Every individual has a different estimate, and these estimates are constructed so that 95% of them contain the true value of N and the other 5% do not. This, say Gelman and Robert, is the defining property of a frequentist lower-tailed 95% confidence interval. But, they say, "this does not mean that there is a 95% chance that any particular interval will contain the true value." That is, while 95% of the confidence intervals will contain the true value of N, this is not the same as N being contained in the confidence interval with 95% probability. The latter is a different property and is the defining characteristic of a Bayesian credible interval. Gelman and Robert conclude:

the Doomsday argument is the ultimate triumph of the idea, beloved among Bayesian educators, that our students and clients do not really understand Neyman–Pearson confidence intervals and inevitably give them the intuitive Bayesian interpretation.
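The distinction can be illustrated in simulation: fix one true N, give each individual a uniformly random birth rank n and the interval [n, 20n], and count how many intervals contain N. Around 95% do, which is the frequentist coverage property; it is a statement about the procedure, not about the probability of N given any single observed n. The population size below is arbitrary.

```python
# Frequentist coverage of the doomsday-style interval [n, 20n] for one fixed N.
import numpy as np

rng = np.random.default_rng(2)
true_N = 1_000_000                       # the fixed, 'unknown' total (illustrative)
people = 100_000

ranks = rng.integers(1, true_N + 1, size=people)     # each person's birth rank n
covered = (ranks <= true_N) & (true_N <= 20 * ranks) # does [n, 20n] contain N?

print(covered.mean())   # ~0.95 of the intervals contain N (coverage), which is
                        # not the same as P(N in this interval) = 0.95 for a
                        # particular observed n.
```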

Human extinction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_extinction
Nuclear war is an often-predicted cause of the extinction of humankind.

Human extinction or omnicide is the end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction).

Some of the many possible contributors to anthropogenic hazards are climate change, global nuclear annihilation, biological warfare, weapons of mass destruction, and ecological collapse. Other scenarios center on emerging technologies, such as advanced artificial intelligence, biotechnology, or self-replicating nanobots.

The scientific consensus is that there is a relatively low risk of near-term human extinction due to natural causes. The likelihood of human extinction through humankind's own activities, however, is a current area of research and debate.

History of thought

Early history

Before the 18th and 19th centuries, the possibility that humans or other organisms could become extinct was viewed with scepticism. It contradicted the principle of plenitude, a doctrine that all possible things exist. The principle traces back to Aristotle and was an important tenet of Christian theology. Ancient philosophers such as Plato, Aristotle, and Lucretius wrote of the end of humankind only as part of a cycle of renewal. Marcion of Sinope was a proto-Protestant who advocated for antinatalism that could lead to human extinction. Later philosophers such as Al-Ghazali, William of Ockham, and Gerolamo Cardano expanded the study of logic and probability and began wondering if abstract worlds existed, including a world without humans. Physicist Edmond Halley stated that the extinction of the human race may be beneficial to the future of the world.

The notion that species can become extinct gained scientific acceptance during the Age of Enlightenment in the 17th and 18th centuries, and by 1800 Georges Cuvier had identified 23 extinct prehistoric species. The doctrine was further gradually bolstered by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist and the development of theories of evolution. In On the Origin of Species, Charles Darwin discussed the extinction of species as a natural process and a core component of natural selection. Notably, Darwin was skeptical of the possibility of sudden extinction, viewing it as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions but rather represented unrecognized gaps in the record.

As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction. In the 19th century, human extinction became a popular topic in science (e.g., Thomas Robert Malthus's An Essay on the Principle of Population) and fiction (e.g., Jean-Baptiste Cousin de Grainville's The Last Man). In 1863, a few years after Darwin published On the Origin of Species, William King proposed that Neanderthals were an extinct species of the genus Homo. The Romantic authors and poets were particularly interested in the topic. Lord Byron wrote about the extinction of life on Earth in his 1816 poem "Darkness," and in 1824 envisaged humanity being threatened by a comet impact and employing a missile system to defend against it. Mary Shelley's 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague. At the turn of the 20th century, Russian cosmism, a precursor to modern transhumanism, advocated avoiding humanity's extinction by colonizing space.

Atomic era

Castle Romeo nuclear test on Bikini Atoll

The invention of the atomic bomb prompted a wave of discussion among scientists, intellectuals, and the public at large about the risk of human extinction. In a 1945 essay, Bertrand Russell wrote:

The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.

In 1950, Leo Szilard suggested it was technologically feasible to build a cobalt bomb that could render the planet unlivable. A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind". Rachel Carson's 1962 book Silent Spring raised awareness of environmental catastrophe. In 1983, Brandon Carter proposed the Doomsday argument, which used Bayesian probability to predict the total number of humans that will ever exist.

The discovery of "nuclear winter" in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the severity of extinction solely in terms of those who die "conceals its full impact," and that nuclear war "imperils all of our descendants, for as long as there will be humans."

Post-Cold War

John Leslie's 1996 book The End of the World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats to the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided. Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks, published in 2008, is a collection of essays from 26 academics on various global catastrophic and existential risks. Nicholas P. Money's 2019 book The Selfish Ape delves into the environmental consequences of overexploitation.

Toby Ord's 2020 book The Precipice argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology. Lyle Lewis' 2024 book Racing to Extinction explores the roots of human extinction from an evolutionary biology perspective. Lewis argues that humanity treats unused natural resources as waste and is driving ecological destruction through overexploitation, habitat loss, and denial of environmental limits. He uses vivid examples, like the extinction of the passenger pigeon and the environmental cost of rice production, to show how interconnected and fragile ecosystems are.

Causes

Potential anthropogenic causes of human extinction include global thermonuclear war, deployment of a highly effective biological weapon, ecological collapse, runaway artificial intelligence, runaway nanotechnology (such as a grey goo scenario), overpopulation and increased consumption causing resource depletion and a concomitant population crash, population decline by choosing to have fewer children, and displacement of naturally evolved humans by a new species produced by genetic engineering or technological augmentation. Natural and external extinction risks include high-fatality-rate pandemic, supervolcanic eruption, asteroid impact, nearby supernova or gamma-ray burst, or extreme solar flare.

Humans (e.g., Homo sapiens sapiens) as a species may also be considered to have "gone extinct" simply by being replaced with distant descendants whose continued evolution may produce new species or subspecies of Homo or of hominids.

Without intervention from unforeseen forces, the stellar evolution of the Sun is expected to render Earth uninhabitable and ultimately lead to its destruction. The entire universe may eventually become uninhabitable, depending on its ultimate fate and the processes that govern it.

Probability

Natural vs. anthropogenic

Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks. A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk. Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high enough, humanity wouldn't have survived this long. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 per year (equivalent to 1 in 140 per century, on average).

Another empirical method to study the likelihood of certain natural risks is to investigate the geological record. For example, the probability of a comet or asteroid impact event sufficient in scale to cause an impact winter and human extinction before the year 2100 has been estimated at one in a million. Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity. The geological record suggests that supervolcanic eruptions occur on average about once every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction. Famously, the supervolcano Mt. Toba may have almost wiped out humanity at the time of its last eruption (though this is contentious).

Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances. Humanity has only lived with nuclear weapons for about 80 years, and there is no historical track record at all for future technologies. This has led thinkers like Carl Sagan to conclude that humanity is currently in a "time of perils", a uniquely dangerous period in human history, beginning when humans first started posing risks to themselves through their own actions, during which it is subject to unprecedented levels of risk. Paleobiologist Olev Vinn has suggested that humans presumably have a number of inherited behavior patterns (IBPs) that are not fine-tuned for conditions prevailing in technological civilization. Some IBPs may be highly incompatible with such conditions and have a high potential to induce self-destruction. These patterns may include responses of individuals seeking power over conspecifics in relation to harvesting and consuming energy. Nonetheless, there are ways to address the issue of inherited behavior patterns.

Risk estimates

Given the limitations of ordinary observation and modeling, expert elicitation is frequently used instead to obtain probability estimates.

  • Humanity has a 97.5% probability of being extinct within 8,000,000 years, according to J. Richard Gott's formulation of the controversial doomsday argument, which argues that we have probably already lived through half the duration of human history.
  • In 1996, John A. Leslie estimated a 30% risk over the next five centuries (equivalent to around 6% per century, on average).
  • The Global Challenges Foundation's 2016 annual report estimates an annual probability of human extinction of at least 0.05% (equivalent to around 5% per century, on average).
  • As of July 29, 2025, Metaculus users estimate a 1% probability of human extinction by 2100.
  • A 2020 study published in Scientific Reports warns that if deforestation and resource consumption continue at current rates, these factors could lead to a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" in the next 20 to 40 years. According to the most optimistic scenario provided by the study, the chances that human civilization survives are smaller than 10%. To avoid this collapse, the study says, humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest."
  • Nick Bostrom, a philosopher at the University of Oxford known for his work on existential risk, argues
    • that it would be "misguided" to assume that the probability of near-term extinction is less than 25%, and
    • that it will be "a tall order" for the human race to "get our precautions sufficiently right the first time," given that an existential risk provides no opportunity to learn from failure.
  • Philosopher John A. Leslie assigns a 70% chance of humanity surviving the next five centuries, based partly on the controversial philosophical doomsday argument that Leslie champions. Leslie's argument is somewhat frequentist, based on the observation that human extinction has never been observed but requires subjective anthropic arguments. Leslie also discusses the anthropic survivorship bias (which he calls an "observational selection" effect) and states that the a priori certainty of observing an "undisastrous past" could make it difficult to argue that we must be safe because nothing terrible has yet occurred. He quotes Holger Bech Nielsen's formulation: "We do not even know if there should exist some extremely dangerous decay of, say, the proton, which caused the eradication of the earth, because if it happens we would no longer be there to observe it, and if it does not happen there is nothing to observe."
  • Jean-Marc Salotti calculated the probability of human extinction caused by a giant asteroid impact. If no other planets are colonized, the probability is 0.03 to 0.3 for the next billion years. According to that study, the most frightening object is a giant long-period comet with a warning time of only a few years and, therefore, no time for any intervention in space or settlement on the Moon or Mars. The probability of a giant comet impact in the next hundred years is 2.2×10^−12.
  • As the United Nations Office for Disaster Risk Reduction estimated in 2023, there is a 2 to 14% (midpoint: 8%) chance of an extinction-level event by 2100, and a 14 to 98% (midpoint: 56%) chance of an extinction-level event by 2700.
  • Bill Gates told The Wall Street Journal on January 27, 2025, that he believes there is a 10–15% (midpoint: 12.5%) chance of a natural pandemic hitting in the next four years, and a 65–97.5% (midpoint: 81.25%) chance of a natural pandemic hitting in the next 26 years.
  • On March 19, 2025, Henry Gee said that humanity will be extinct within the next 10,000 years. To avoid this, he argued that humanity should establish space colonies within the next 200-300 years.
  • On September 11, 2025, Warp News estimated a 20% chance of global catastrophe and a 6% chance of human extinction by 2100. They also estimated a 100% chance of global catastrophe and a 30% chance of human extinction by 2500.

From nuclear weapons

On November 13, 2024, the American Enterprise Institute estimated a probability of nuclear war during the 21st century of between 0% and 80% (midpoint: 40%). A 2023 article in The Economist estimated an 8% chance of nuclear war causing global catastrophe and a 0.5625% chance of nuclear war causing human extinction.

From supervolcanic eruption

On November 13, 2024, the American Enterprise Institute estimated an annual probability of supervolcanic eruption around 0.0067% (0.67% per century on average).

From artificial intelligence

  • A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by superintelligence by 2100.
  • A 2016 survey of AI experts found a median estimate of 5% for the probability that human-level AI would cause an outcome that was "extremely bad (e.g., human extinction)". In later surveys the median estimate fell to 2% in 2019, returned to 5% in 2022, rose to 10% in 2023, and reached 15% in 2024.
  • In 2020, Toby Ord estimated existential risk in the next century at "1 in 6" in his book The Precipice. He also estimated a "1 in 10" risk of extinction caused by unaligned AI within the next century.
  • According to a July 10, 2023 article in The Economist, scientists estimated a 12% chance of AI-caused catastrophe and a 3% chance of AI-caused extinction by 2100. They also estimated a 100% chance of AI-caused catastrophe and a 25% chance of AI-caused extinction by 2833.
  • On December 27, 2024, Geoffrey Hinton estimated a 10-20% (midpoint: 15%) probability of AI-caused extinction in the next 30 years. He also estimated a 50-100% (midpoint: 75%) probability of AI-caused extinction in the next 150 years.
  • On May 6, 2025, Scientific American estimated a 0-10% (midpoint: 5%) probability of an AI-caused extinction by 2100.
  • On August 1, 2025, Holly Elmore estimated a 15-20% (midpoint: 17.5%) probability of an AI-caused extinction in the next 1-10 years (midpoint: 5.5 years). She also estimated a 75-100% (midpoint: 87.5%) probability of an AI-caused extinction in the next 5-50 years (midpoint: 27.5 years).
  • On November 10, 2025, Elon Musk estimated the probability of AI-driven human extinction at 20%, while others, including Yoshua Bengio's colleagues, placed the risk anywhere between 10% and 90%.

From climate change

Placard against omnicide, at Extinction Rebellion (2018)

In a 2010 interview with The Australian, the late Australian scientist Frank Fenner predicted the extinction of the human race within a century, primarily as the result of human overpopulation, environmental degradation, and climate change. There are several economists who have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.

Individual vs. species risks

Although existential risks are less manageable by individuals than, for example, health risks, according to Ken Olum, Joshua Knobe, and Alexander Vilenkin, the possibility of human extinction does have practical implications. For instance, if the "universal" doomsday argument is accepted, it changes the most likely source of disasters and hence the most efficient means of preventing them.

Difficulty

Some scholars argue that certain scenarios, including global thermonuclear war, would struggle to eradicate every last settlement on Earth. Physicist Willard Wells points out that any credible extinction scenario would have to reach into a diverse set of areas, including the underground subways of major cities, the mountains of Tibet, the remotest islands of the South Pacific, and even McMurdo Station in Antarctica, which has contingency plans and supplies for long isolation. In addition, elaborate bunkers exist for government leaders to occupy during a nuclear war. The existence of nuclear submarines, capable of remaining hundreds of meters deep in the ocean for potentially years, should also be taken into account. Any number of events could lead to a massive loss of human life, but if the last few, most resilient humans (see minimum viable population) are unlikely to also die off, then that particular human extinction scenario may not seem credible.

Ethics

Value of human life

"Existential risks" are risks that threaten the entire future of humanity, whether by causing human extinction or by otherwise permanently crippling human progress. Multiple scholars have argued, based on the size of the "cosmic endowment," that because of the inconceivably large number of potential future lives that are at stake, even small reductions of existential risk have enormous value.

In one of the earliest discussions of the ethics of human extinction, Derek Parfit offers the following thought experiment:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

— Derek Parfit

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential—what humanity could expect to achieve if it survived. From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good is life for future people). On average, species survive for around a million years before going extinct. Parfit points out that the Earth will remain habitable for around a billion years. And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years. The size of the foregone potential that would be lost were humanity to become extinct is very large. Therefore, reducing existential risk by even a small amount would have a very significant moral value.

Carl Sagan wrote in 1983:

If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.

Philosopher Robert Adams in 1989 rejected Parfit's "impersonal" views but spoke instead of a moral imperative for loyalty and commitment to "the future of humanity as a vast project... The aspiration for a better society—more just, more rewarding, and more peaceful... our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects."

Philosopher Nick Bostrom argues in 2013 that preference-satisfactionist, democratic, custodial, and intuitionist arguments all converge on the common-sense view that preventing existential risk is a high moral priority, even if the exact "degree of badness" of human extinction varies between these philosophies.

Parfit argues that the size of the "cosmic endowment" can be calculated from the following argument: If Earth remains habitable for a billion more years and can sustainably support a population of more than a billion humans, then there is a potential for 10^16 (or 10,000,000,000,000,000) human lives of normal duration. Bostrom goes further, stating that if the universe is empty, then the accessible universe can support at least 10^34 biological human life-years and, if some humans were uploaded onto computers, could even support the equivalent of 10^54 cybernetic human life-years.
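The arithmetic behind Parfit's 10^16 figure is straightforward once the assumptions are spelled out: a billion further habitable years, a sustainable population of about a billion, and a "normal duration" here taken as roughly a century.

```python
# Rough arithmetic behind the 10^16 'cosmic endowment' figure quoted above.
habitable_years = 1e9         # Earth remains habitable for ~a billion more years
sustainable_population = 1e9  # sustainable population of (more than) a billion
normal_lifespan = 100.0       # 'normal duration' of a life, in years (assumed)

potential_lives = habitable_years * sustainable_population / normal_lifespan
print(f"{potential_lives:.0e} potential future human lives")   # ~1e16
```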

Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking. While these views are controversial, they would agree that an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people.

Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity—it would destroy all cultural artifacts, languages, and traditions, and many of the things we value. So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided. One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership...between those who are living, those who are dead, and those who are to be born". If one takes seriously the debt humanity owes to past generations, Ord argues the best way of repaying it might be to "pay it forward" and ensure that humanity's inheritance is passed down to future generations.

Voluntary extinction

Voluntary Human Extinction Movement

Some philosophers adopt the antinatalist position that human extinction would be a beneficial thing. David Benatar argues that coming into existence is always serious harm, and therefore it is better that people do not come into existence in the future. Further, Benatar, animal rights activist Steven Best, and anarchist Todd May posit that human extinction would be a positive thing for the other organisms on the planet and the planet itself, citing, for example, the omnicidal nature of human civilization. The environmental view in favor of human extinction is shared by the members of the Voluntary Human Extinction Movement and the Church of Euthanasia, who call for refraining from reproduction and allowing the human species to go peacefully extinct, thus stopping further environmental degradation.

In fiction

Jean-Baptiste Cousin de Grainville's 1805 science fantasy novel Le dernier homme (The Last Man), which depicts human extinction due to infertility, is considered the first modern apocalyptic novel and credited with launching the genre. Other notable early works include Mary Shelley's 1826 The Last Man, depicting human extinction caused by a pandemic, and Olaf Stapledon's 1937 Star Maker, "a comparative study of omnicide."

Some 21st-century pop-science works, including The World Without Us by Alan Weisman and the television specials Life After People and Aftermath: Population Zero, pose a thought experiment: what would happen to the rest of the planet if humans suddenly disappeared? A threat of human extinction, such as through a technological singularity (also called an intelligence explosion), drives the plot of innumerable science fiction stories; an influential early example is the 1951 film adaptation of When Worlds Collide. Usually the extinction threat is narrowly avoided, but some exceptions exist, such as R.U.R. and Steven Spielberg's A.I.
