Monday, September 3, 2018

Doomsday argument

From Wikipedia, the free encyclopedia
 
World population from 10,000 BC to AD 2000

The Doomsday argument (DA) is a probabilistic argument that claims to predict the number of future members of the human species given an estimate of the total number of humans born so far. Simply put, it says that supposing that all humans are born in a random order, chances are that any one human is born roughly in the middle.

It was first proposed in an explicit way by the astrophysicist Brandon Carter in 1983, from which it is sometimes called the Carter catastrophe; the argument was subsequently championed by the philosopher John A. Leslie and has since been independently discovered by J. Richard Gott and Holger Bech Nielsen. Similar principles of eschatology were proposed earlier by Heinz von Foerster, among others. A more general form was given earlier in the Lindy effect, in which for certain phenomena the future life expectancy is proportional to (though not necessarily equal to) the current age, and is based on decreasing mortality rate over time: old things endure.

Denoting by N the total number of humans who were ever or will ever be born, the Copernican principle suggests that any one human is equally likely (along with the other N − 1 humans) to find themselves at any position n of the total population N, so we can assume that our fractional position f = n/N is uniformly distributed on the interval [0, 1] prior to learning our absolute position.

Suppose further that f remains uniformly distributed on (0, 1) even after learning our absolute position n. Then, for example, there is a 95% chance that f lies in the interval (0.05, 1), that is, f > 0.05. In other words, we can be 95% certain that we are within the last 95% of all the humans ever to be born. If we know our absolute position n, this implies an upper bound for N, obtained by rearranging n/N > 0.05 to give N < 20n.

If Leslie's figure is used, then 60 billion humans have been born so far, so it can be estimated that there is a 95% chance that the total number of humans N will be less than 20 × 60 billion = 1.2 trillion. Assuming that the world population stabilizes at 10 billion and a life expectancy of 80 years, it can be estimated that the remaining 1140 billion humans will be born in 9120 years. Depending on the projection of world population in the forthcoming centuries, estimates may vary, but the main point of the argument is that it is unlikely that more than 1.2 trillion humans will ever live on Earth. This problem is similar to the famous German tank problem.
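
A minimal sketch of this arithmetic, assuming Leslie's 60 billion births, a stable population of 10 billion, and an 80-year life expectancy (the figures stated above):

```python
# Sketch of the 95%-confidence doomsday arithmetic described above.

n = 60e9                 # humans born so far (Leslie's figure)
confidence = 0.95

# Rearranging n/N > 1 - confidence gives the upper bound on N:
N_max = n / (1 - confidence)                     # 1.2e12, i.e. 1.2 trillion

population = 10e9        # assumed stable world population
life_expectancy = 80     # years
births_per_year = population / life_expectancy   # 125 million per year

remaining_births = N_max - n                     # 1,140 billion
years_remaining = remaining_births / births_per_year

print(f"N < {N_max:.2e} with {confidence:.0%} confidence")
print(f"remaining births span ~{years_remaining:,.0f} years")   # 9,120
```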

The title "Doomsday Argument" is arguably a misnomer. Its popularity as a way of referring to this concept is perhaps based on the widespread belief that there are more people now alive than have ever lived, which would make the current generation of humans statistically likely to be the last one. According to the Population Reference Bureau, however, the number of biologically modern humans who have ever lived and died is closer to 107 billion, which is considerably more than the 7 billion alive today. That being the case, the argument actually implies it is unlikely that this is the last generation. Instead, it paints a relatively optimistic portrait of how long humanity is likely to last, even given current population growth. It is further worth noting that even if the argument is accepted at face value, it does not entail extinction–humanity could conversely evolve into something distinctly enough different that people born after that point would no longer compose part of the same reference group. For both these reasons, the invocation of "doomsday" is misleading.

Aspects

Remarks

  • The step that converts N into an extinction time depends upon a finite human lifespan. If immortality becomes common, and the birth rate drops to zero, then the human race could continue forever even if the total number of humans N is finite.
  • A precise formulation of the Doomsday Argument requires the Bayesian interpretation of probability.
  • Even among Bayesians some of the assumptions of the argument's logic would not be acceptable; for instance, the fact that it is applied to a temporal phenomenon (how long something lasts) means that N's distribution simultaneously represents an "aleatory probability" (as a future event), and an "epistemic probability" (as a decided value about which we are uncertain).
  • The uniform distribution of f on (0, 1] is derived from two choices, which despite being the default are also arbitrary:
    • The principle of indifference, so that it is as likely for any other randomly selected person to be born after you as before you.
    • The assumption of no 'prior' knowledge on the distribution of N.

Simplification: two possible total numbers of humans

Assume for simplicity that the total number of humans who will ever be born is 60 billion (N1), or 6,000 billion (N2). If there is no prior knowledge of the position that a currently living individual, X, has in the history of humanity, we may instead compute how many humans were born before X, and arrive at (say) 59,854,795,447, which would roughly place X amongst the first 60 billion humans who have ever lived.

Now, if we assume that the number of humans who will ever be born equals N1, the probability that X is amongst the first 60 billion humans who have ever lived is of course 100%. However, if the number of humans who will ever be born equals N2, then the probability that X is amongst the first 60 billion humans who have ever lived is only 1%. Since X is in fact amongst the first 60 billion humans who have ever lived, this means that the total number of humans who will ever be born is more likely to be much closer to 60 billion than to 6,000 billion. In essence the DA therefore suggests that human extinction is more likely to occur sooner rather than later.

It is possible to sum the probabilities for each value of N and therefore to compute a statistical 'confidence limit' on N. For example, taking the numbers above, it is 99% certain that N is smaller than 6,000 billion.

Note that as remarked above, this argument assumes that the prior probability for N is flat, or 50% for N1 and 50% for N2 in the absence of any information about X. On the other hand, it is possible to conclude, given X, that N2 is more likely than N1, if a different prior is used for N. More precisely, Bayes' theorem tells us that P(N|X) = P(X|N)P(N)/P(X), and the conservative application of the Copernican principle tells us only how to calculate P(X|N). Taking P(X) to be flat, we still have to make an assumption about the prior probability P(N) that the total number of humans is N. If we conclude that N2 is much more likely than N1 (for example, because producing a larger population takes more time, increasing the chance that a low-probability but cataclysmic natural event will take place in that time), then P(N|X) can become more heavily weighted towards the bigger value of N. A further, more detailed discussion, as well as relevant distributions P(N), are given below in the Rebuttals section.
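
As an illustration (not part of the original text), the two-hypothesis update can be computed directly; changing the prior shows the sensitivity discussed above:

```python
# Sketch of the two-hypothesis Bayesian update described above.
N1, N2 = 60e9, 6000e9     # the two candidate totals

# Likelihood of finding X within the first 60 billion births:
like_N1 = 1.0             # certain if only 60 billion ever live
like_N2 = N1 / N2         # 1% if 6,000 billion ever live

def posterior_N1(prior_N1):
    prior_N2 = 1.0 - prior_N1
    evidence = like_N1 * prior_N1 + like_N2 * prior_N2
    return like_N1 * prior_N1 / evidence

print(posterior_N1(0.5))    # flat prior: ~0.99, the 99% figure above
print(posterior_N1(0.01))   # a prior strongly favouring N2: ~0.50
```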

What the argument is not

The Doomsday argument (DA) does not say that humanity cannot or will not exist indefinitely. It does not put any upper limit on the number of humans that will ever exist, nor provide a date for when humanity will become extinct.

An abbreviated form of the argument does make these claims, by confusing probability with certainty. However, the actual DA's conclusion is:
There is a 95% chance of extinction within 9,120 years.
The DA gives a 5% chance that some humans will still be alive at the end of that period. (These dates are based on the assumptions above; the precise numbers vary among specific Doomsday arguments.)

Variations

This argument has generated a lively philosophical debate, and no consensus has yet emerged on its solution. The variants described below produce the DA by separate derivations.

Gott's formulation: 'vague prior' total population

Gott specifically proposes the functional form for the prior distribution of the number of people who will ever be born (N). Gott's DA used the vague prior distribution:
P(N)={\frac {k}{N}}.
where
  • P(N) is the prior probability of N, held before discovering n, the total number of humans born so far.
  • The constant, k, is chosen to normalize the sum of P(N). The value chosen isn't important here, just the functional form (this is an improper prior, so no value of k gives a valid distribution, but Bayesian inference is still possible using it.)
Since Gott specifies the prior distribution of total humans, P(N), Bayes's theorem and the principle of indifference alone give us P(N|n), the probability of N humans being born if n is a random draw from N:
P(N\mid n)={\frac {P(n\mid N)P(N)}{P(n)}}.
This is Bayes's theorem for the posterior probability of total population ever born of N, conditioned on population born thus far of n. Now, using the indifference principle:
P(n\mid N)={\frac {1}{N}}.
The unconditioned n distribution of the current population is identical to the vague prior N probability density function, so:
P(n)={\frac {k}{n}},
giving P (N | n) for each specific N (through a substitution into the posterior probability equation):
P(N\mid n)={\frac {n}{N^{2}}}.
The easiest way to produce the doomsday estimate with a given confidence (say 95%) is to pretend that N is a continuous variable (since it is very large) and integrate over the probability density from N = n to N = Z. (This will give a function for the probability that N ≤ Z):
P(N\leq Z)=\int _{N=n}^{N=Z}P(N|n)\,dN ={\frac {Z-n}{Z}}
Defining Z = 20n gives:
P(N\leq 20n)={\frac {19}{20}}.
This is the simplest Bayesian derivation of the Doomsday Argument:
The chance that the total number of humans that will ever be born (N) is greater than twenty times the total that have been is below 5%
The use of a vague prior distribution seems well-motivated as it assumes as little knowledge as possible about N, given that any particular function must be chosen. It is equivalent to the assumption that the probability density of one's fractional position remains uniformly distributed even after learning of one's absolute position (n).
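
The integral above is easy to check numerically. Below is a minimal Python sketch (an illustration, not part of Gott's paper) that works in units of n, where the posterior density n/N² becomes 1/u² for u = N/n ≥ 1; the x = 40 case is used in the temporal version discussed next:

```python
# Numerical check of P(N <= Z) = (Z - n)/Z under Gott's posterior.
from scipy.integrate import quad

# In units of n (u = N/n), the posterior n/N^2 becomes 1/u^2 on u >= 1.
density = lambda u: 1.0 / u**2

for x in (2, 20, 40):     # Z = 2n (50%), 20n (95%), 40n (97.5%)
    prob, _ = quad(density, 1, x)
    print(f"P(N <= {x}n) = {prob:.4f}   closed form: {(x - 1) / x:.4f}")
```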

Gott's 'reference class' in his original 1993 paper was not the number of births, but the number of years 'humans' had existed as a species, which he put at 200,000. Also, Gott tried to give a 95% confidence interval between a minimum survival time and a maximum. Because of the 2.5% chance that he gives to underestimating the minimum he has only a 2.5% chance of overestimating the maximum. This equates to 97.5% confidence that extinction occurs before the upper boundary of his confidence interval.

The 2.5% tail complementing 97.5% confidence is one chance in forty, which can be used in the integral above with Z = 40n and n = 200,000 years:
P(N\leq 40[200000])={\frac {39}{40}}
This is how Gott produces a 97.5% confidence of extinction within N ≤ 8,000,000 years. The number he quoted was the likely time remaining, N − n = 7.8 million years. This was much higher than the temporal confidence bound produced by counting births, because it applied the principle of indifference to time. (Producing different estimates by sampling different parameters in the same hypothesis is Bertrand's paradox.)

His choice of 95% confidence bounds (rather than 80% or 99.9%, say) matched the scientifically accepted limit of statistical significance for hypothesis rejection. Therefore, he argued that the hypothesis: "humanity will cease to exist before 5,100 years or thrive beyond 7.8 million years" can be rejected.

Leslie's argument differs from Gott's version in that he does not assume a vague prior probability distribution for N. Instead he argues that the force of the Doomsday Argument resides purely in the increased probability of an early Doomsday once you take into account your birth position, regardless of your prior probability distribution for N. He calls this the probability shift.

Heinz von Foerster argued that humanity's abilities to construct societies, civilizations and technologies do not result in self-inhibition. Rather, societies' success varies directly with population size. Von Foerster found that this model fit some 25 data points from the birth of Jesus to 1958, with only 7% of the variance left unexplained. Several follow-up letters (1961, 1962, …) were published in Science showing that von Foerster's equation was still on track. The data continued to fit up until 1973. The most remarkable thing about von Foerster's model was that it predicted the human population would reach infinity, a mathematical singularity, on Friday, November 13, 2026. In fact, von Foerster did not imply that the world population on that day could actually become infinite. The real implication was that the world population growth pattern followed for many centuries prior to 1960 was about to come to an end and be transformed into a radically different pattern. Note that this prediction began to be fulfilled just a few years after the "doomsday" date was published.

Reference classes

One of the major areas of Doomsday Argument debate is the reference class from which n is drawn, and of which N is the ultimate size. The 'standard' Doomsday Argument hypothesis doesn't spend very much time on this point, and simply says that the reference class is the number of 'humans'. Given that you are human, the Copernican principle could be applied to ask if you were born unusually early, but the grouping of 'human' has been widely challenged on practical and philosophical grounds. Nick Bostrom has argued that consciousness is (part of) the discriminator between what is in and what is out of the reference class, and that extraterrestrial intelligences might affect the calculation dramatically.

The following sub-sections relate to different suggested reference classes, each of which has had the standard Doomsday Argument applied to it.

Sampling only WMD-era humans

The Doomsday clock shows the expected time to nuclear doomsday by the judgment of an expert board, rather than a Bayesian model. If the twelve hours of the clock symbolize the lifespan of the human species, its time of 23:53 (the setting when this version of the argument was framed) implies that we are among the last 1% of people who will ever be born (i.e., that n > 0.99N). J. Richard Gott's temporal version of the Doomsday argument (DA) would require very strong prior evidence to overcome the improbability of being born in such a special time.
If the clock's doomsday estimate is correct, there is less than 1 chance in 100 of seeing it show such a late time in human history, if observed at a random time within that history.
The scientists' warning can be reconciled with the DA, however. The Doomsday clock specifically estimates the proximity of atomic self-destruction, which has only been possible for about seventy years. If doomsday requires nuclear weaponry then the Doomsday Argument 'reference class' is people contemporaneous with nuclear weapons. In this model, the number of people living through, or born after, Hiroshima is n, and the number of people who ever will is N. Applying Gott's DA to these variable definitions gives a 50% chance of doomsday within the next seventy years.
"In this model, the clock's hands are so close to midnight because a condition of doomsday is living post-1945, a condition which applies now but not to the earlier 11 hours and 53 minutes of the clock's metaphorical human 'day'."
If your life is randomly selected from all lives lived under the shadow of the bomb, this simple model gives a 95% chance of doomsday within 1,400 years (twenty times the roughly seventy years elapsed so far).

The scientists' recent practice of moving the clock forward to warn of the dangers posed by global warming muddles this reasoning, however.

SSSA: Sampling from observer-moments

Nick Bostrom, considering observation selection effects, has produced a Self-Sampling Assumption (SSA): "that you should think of yourself as if you were a random observer from a suitable reference class". If the 'reference class' is the set of humans to ever be born, this gives N < 20n with 95% confidence (the standard Doomsday argument). However, he has refined this idea to apply to observer-moments rather than just observers. He has formalized this as:
The Strong Self-Sampling Assumption (SSSA): Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class.
If the minute in which you read this article is randomly selected from every minute in every human's lifespan then (with 95% confidence) this event has occurred after the first 5% of human observer-moments. If the mean lifespan in the future is twice the historic mean lifespan, this implies 95% confidence that N < 10n (the average future human will account for twice the observer-moments of the average historic human). Therefore, the 95th percentile extinction-time estimate in this version is 4560 years.
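
The arithmetic behind the 4560-year figure can be sketched as follows (the doubled future lifespan is the assumption stated above; the birth rate reuses the earlier stable-population figures):

```python
# Observer-moment version: with 95% confidence, future observer-moments
# total less than 19 times the past observer-moments.
n = 60e9            # births so far; past moments are proportional to n
ratio = 2           # assumed future mean lifespan / historic mean lifespan

# Future moments ~ ratio * remaining_births < 19 * n, so:
remaining_births = 19 * n / ratio            # 570 billion

births_per_year = 10e9 / 80                  # stable-population assumption
print(remaining_births / births_per_year)    # 4,560 years
```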

Rebuttals

We are in the earliest 5%, a priori

If one agrees with the statistical methods but still disagrees with the Doomsday argument (DA), this implies that:
  1. The current generation of humans is within the first 5% of humans to be born.
  2. This is not purely a coincidence.
Therefore, these rebuttals try to give reasons for believing that the currently living humans are some of the earliest beings.

For instance, if one is a member of 50,000 people in a collaborative project, the Doomsday Argument implies a 95% chance that there will never be more than a million members of that project. This can be refuted if one's other characteristics are typical of the early adopter. The mainstream of potential users will prefer to be involved when the project is nearly complete. If one were to enjoy the project's incompleteness, it is already known that he or she is unusual, prior to the discovery of his or her early involvement.

If one has measurable attributes that set one apart from the typical long-run user, the project DA can be refuted based on the fact that one could expect to be within the first 5% of members, a priori. The analogy to the total-human-population form of the argument is: confidence in a prediction of the distribution of human characteristics that places modern and historic humans outside the mainstream implies that it is already known, before examining n, that it is likely to be very early in N.

For example, if one is certain that 99% of humans who will ever live will be cyborgs, but that only a negligible fraction of humans who have been born to date are cyborgs, one could be equally certain that at least one hundred times as many people remain to be born as have been.

Robin Hanson's paper sums up these criticisms of the DA:
"All else is not equal; we have good reasons for thinking we are not randomly selected humans from all who will ever live."
Drawbacks of this rebuttal:
  1. The question of how the confident prediction is derived. An uncannily prescient picture of humanity's statistical distribution through all time is needed before we can pronounce ourselves extreme members of that population. (In contrast, project pioneers have clearly distinct psychology from the mainstream.)
  2. If the majority of humans who will ever live have characteristics that present-day observers do not share, some would argue that this is equivalent to the Doomsday argument, since people similar to those observing these matters will become extinct.

Critique: Human extinction is distant, a posteriori

The a posteriori observation that extinction-level events are rare could be offered as evidence that the DA's predictions are implausible; typically, extinctions of a dominant species happen less often than once in a million years. Therefore, it is argued that human extinction is unlikely within the next ten millennia. (This is another probabilistic argument, drawing a different conclusion than the DA.)

In Bayesian terms, this response to the DA says that our knowledge of history (or ability to prevent disaster) produces a prior marginal for N with a minimum value in the trillions. If N is distributed uniformly from 10^12 to 10^13, for example, then the probability of N < 1,200 billion inferred from n = 60 billion will be extremely small. This is an equally impeccable Bayesian calculation, rejecting the Copernican principle on the grounds that we must be 'special observers' since there is no likely mechanism for humanity to go extinct within the next hundred thousand years.

This response is accused of overlooking the technological threats to humanity's survival, to which earlier life was not subject, and is specifically rejected by most of the DA's academic critics (arguably excepting Robin Hanson).

In fact, many futurologists believe the empirical situation is worse than Gott's DA estimate. For instance, Sir Martin Rees believes that the technological dangers give an estimated human survival duration of ninety-five years (with 50% confidence.) Earlier prophets made similar predictions and were 'proven' wrong (e.g., on surviving the nuclear arms race). It is possible that their estimates were accurate, and that their common image as alarmists is a survivorship bias.

The prior N distribution may make n very uninformative

Robin Hanson argues that N's prior may be exponentially distributed:
N={\frac {e^{U(0,q]}}{c}}
Here, c and q are constants. If q is large, then our 95% confidence upper bound is on the uniform draw, not the exponential value of N.

The best way to compare this with Gott's Bayesian argument is to flatten the distribution from the vague prior by having the probability fall off more slowly with N (than inverse proportionally). This corresponds to the idea that humanity's growth may be exponential in time, with doomsday having a vague prior pdf in time. This would mean that N, the last birth, would have a distribution looking like the following:
\Pr(N)={\frac {k}{N^{\alpha }}},0<\alpha <1.
This prior N distribution is all that is required (with the principle of indifference) to produce the inference of N from n, and this is done in an identical way to the standard case, as described by Gott (equivalent to \alpha = 1 in this distribution):
\Pr(n)=\int _{N=n}^{N=\infty }\Pr(n\mid N)\Pr(N)\,dN=\int _{n}^{\infty }{\frac {k}{N^{(\alpha +1)}}}\,dN={\frac {k}{{\alpha }n^{\alpha }}}
Substituting into the posterior probability equation:
\Pr(N\mid n)={\frac {{\alpha }n^{\alpha }}{N^{(1+\alpha )}}}.
Integrating the probability of any N above xn:
\Pr(N>xn)=\int _{N=xn}^{N=\infty }\Pr(N\mid n)\,dN={\frac {1}{x^{\alpha }}}.
For example, if x = 20, and \alpha = 0.5, this becomes:
\Pr(N>20n)={\frac {1}{\sqrt {20}}}\simeq 22.4\%.
Therefore, with this prior, the chance of a trillion births is well over 20%, rather than the 5% chance given by the standard DA. If \alpha is reduced further by assuming a flatter prior N distribution, then the limits on N given by n become weaker. An \alpha of one reproduces Gott's calculation with a birth reference class, and \alpha around 0.5 could approximate his temporal confidence interval calculation (if the population were expanding exponentially). As \alpha \to 0 (gets smaller), n becomes less and less informative about N. In the limit this distribution approaches an (unbounded) uniform distribution, where all values of N are equally likely. This is Page et al.'s "Assumption 3", which they find few reasons to reject, a priori. (Although all distributions with \alpha \leq 1 are improper priors, this applies to Gott's vague-prior distribution also, and they can all be converted to produce proper integrals by postulating a finite upper population limit.)

Since the probability of reaching a population of size 2N is usually thought of as the chance of reaching N multiplied by the survival probability from N to 2N, it seems that Pr(N) must be a monotonically decreasing function of N, but this does not necessarily require an inverse proportionality.

A prior distribution with a very low \alpha parameter makes the DA's ability to constrain the ultimate size of humanity very weak.
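
Since the derivation above reduces to the closed form Pr(N > xn) = 1/x^α, a short numerical check (a sketch; the α values are arbitrary examples) shows how the bound weakens as α falls:

```python
# Pr(N > xn) under the prior k/N^alpha, checked against 1/x^alpha.
from scipy.integrate import quad

def tail_probability(alpha, x):
    # Posterior in units of n (u = N/n): alpha / u^(1 + alpha) on u >= 1.
    density = lambda u: alpha / u**(1 + alpha)
    prob, _ = quad(density, x, float("inf"))
    return prob

for alpha in (1.0, 0.5, 0.1):
    p = tail_probability(alpha, 20)
    print(f"alpha = {alpha}: Pr(N > 20n) = {p:.1%} (closed form {20**-alpha:.1%})")
```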

Infinite expectation

Another objection to the Doomsday Argument is that the expected total human population is actually infinite. The calculation is as follows:
The total human population N = n/f, where n is the human population to date and f is our fractional position in the total.
We assume that f is uniformly distributed on (0,1].
The expectation of N is
E(N)=\int _{0}^{1}{n \over f}\,df=n\ln(1)-n\ln(0)=+\infty .
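
A quick numerical illustration (a sketch; the cutoff values are arbitrary): truncating the integral at f = ε gives n ln(1/ε), which grows without bound as ε approaches 0.

```python
import math

n = 60e9
for eps in (1e-3, 1e-6, 1e-12):
    # E(N) with f restricted to (eps, 1]: integral of n/f df = n * ln(1/eps)
    print(f"f > {eps:g}:  truncated E(N) = {n * math.log(1 / eps):.2e}")
```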

Self-Indication Assumption: The possibility of not existing at all

One objection is that the possibility of your existing at all depends on how many humans will ever exist (N). If this is a high number, then the possibility of your existing is higher than if only a few humans will ever exist. Since you do indeed exist, this is evidence that the number of humans that will ever exist is high.

This objection, originally by Dennis Dieks (1992), is now known by Nick Bostrom's name for it: the "Self-Indication Assumption objection". It can be shown that some SIAs prevent any inference of N from n (the current population).

Caves' rebuttal

The Bayesian argument by Carlton M. Caves says that the uniform distribution assumption is incompatible with the Copernican principle, not a consequence of it.

He gives a number of examples to argue that Gott's rule is implausible. For instance, he says, imagine stumbling into a birthday party, about which you know nothing:
Your friendly enquiry about the age of the celebrant elicits the reply that she is celebrating her (t_p =) 50th birthday. According to Gott, you can predict with 95% confidence that the woman will survive between [50]/39 = 1.28 years and 39[×50] = 1,950 years into the future. Since the wide range encompasses reasonable expectations regarding the woman's survival, it might not seem so bad, till one realizes that [Gott's rule] predicts that with probability 1/2 the woman will survive beyond 100 years old and with probability 1/3 beyond 150. Few of us would want to bet on the woman's survival using Gott's rule.
Although this example exposes a weakness in J. Richard Gott's "Copernicus method" DA (that he does not specify when the "Copernicus method" can be applied) it is not precisely analogous with the modern DA; epistemological refinements of Gott's argument by philosophers such as Nick Bostrom specify that:
Knowing the absolute birth rank (n) must give no information on the total population (N).
Careful DA variants specified with this rule aren't shown implausible by Caves' "Old Lady" example above, because the woman's age is given prior to the estimate of her lifespan. Since human age gives an estimate of survival time (via actuarial tables), Caves' birthday-party age estimate could not fall into the class of DA problems defined with this proviso.

To produce a comparable "birthday party example" of the carefully specified Bayesian DA, we would need to completely exclude all prior knowledge of likely human life spans; in principle this could be done (e.g., a hypothetical amnesia chamber). However, this would remove the modified example from everyday experience. To keep it in the everyday realm the lady's age must be hidden prior to the survival estimate being made. (Although this is no longer exactly the DA, it is much more comparable to it.)

Without knowing the lady's age, the DA reasoning produces a rule to convert the current age (n) into an estimate of the total lifespan (N) with 50% confidence. Gott's Copernicus method rule is simply: Prob(N < 2n) = 50%. How accurate would this estimate turn out to be? Western demographics are now fairly uniform across ages, so a random age (n) could be (very roughly) approximated by a U(0, M] draw, where M is the maximum lifespan in the census. In this 'flat' model, everyone shares the same lifespan, so N = M. If n happens to be less than M/2, then Gott's 2n estimate of N will be under M, its true figure. The other half of the time, 2n overestimates M, and in this case (the one Caves highlights in his example) the subject will die before the 2n estimate is reached. In this 'flat demographics' model, Gott's 50% confidence figure is proven right 50% of the time.
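
This coverage claim is easy to verify by simulation. A minimal Monte Carlo sketch of the 'flat demographics' model (a toy model in which everyone shares one lifespan M and ages are sampled uniformly):

```python
# Monte Carlo check: Gott's rule Prob(N < 2n) = 50% in the flat model.
import random

M = 80.0                  # the shared lifespan in the toy model
trials = 100_000

# Gott's prediction N < 2n is correct exactly when 2n exceeds M.
correct = sum(2 * random.uniform(0, M) > M for _ in range(trials))
print(correct / trials)   # ~0.5, as the text claims
```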

Self-referencing doomsday argument rebuttal

Some philosophers have been bold enough to suggest that only people who have contemplated the Doomsday argument (DA) belong in the reference class 'human'. If that is the appropriate reference class, Carter defied his own prediction when he first described the argument (to the Royal Society). A member present could have argued thus:
Presently, only one person in the world understands the Doomsday argument, so by its own logic there is a 95% chance that it is a minor problem which will only ever interest twenty people, and I should ignore it.
Jeff Dewynne and Professor Peter Landsberg suggested that this line of reasoning will create a paradox for the Doomsday argument:

If a member did pass such a comment, it would indicate that they understood the DA sufficiently well that in fact 2 people could be considered to understand it, and thus there would be a 5% chance that 40 or more people would actually be interested. Also, of course, ignoring something because you only expect a small number of people to be interested in it is extremely short-sighted; if this approach were taken, nothing new would ever be explored, if we assume no a priori knowledge of the nature of interest and attentional mechanisms.

Additionally, because Carter did present and describe his argument, the people to whom he explained it inevitably contemplated the DA; one could then conclude that in the moment of explanation Carter created the basis for his own prediction.

Conflation of future duration with total duration

Various authors have argued that the doomsday argument rests on an incorrect conflation of future duration with total duration. This occurs in the specification of the two time periods as "doom soon" and "doom deferred", which means that both periods are selected to occur after the observed value of the birth order. A rebuttal in Pisaturo (2009) argues that the Doomsday Argument relies on the equivalent of this equation:
P(H_{TS}|D_{p}X)/P(H_{TL}|D_{p}X)=[P(H_{FS}|X)/P(H_{FL}|X)]\cdot [P(D_{p}|H_{TS}X)/P(D_{p}|H_{TL}X)],

where:
X = the prior information;
D_p = the data that the past duration is t_p;
H_FS = the hypothesis that the future duration of the phenomenon will be short;
H_FL = the hypothesis that the future duration of the phenomenon will be long;
H_TS = the hypothesis that the total duration of the phenomenon will be short, i.e., that t_t, the phenomenon's total longevity, = t_TS;
H_TL = the hypothesis that the total duration of the phenomenon will be long, i.e., that t_t, the phenomenon's total longevity, = t_TL, with t_TL > t_TS.
Pisaturo then observes:
Clearly, this is an invalid application of Bayes’ theorem, as it conflates future duration and total duration.
Pisaturo takes numerical examples based on two possible corrections to this equation: considering only future durations, and considering only total durations. In both cases, he concludes that the Doomsday Argument’s claim, that there is a ‘Bayesian shift’ in favor of the shorter future duration, is fallacious.

This argument is also echoed in O'Neill (2014). In this work the author argues that a unidirectional "Bayesian Shift" is an impossibility within the standard formulation of probability theory and is contradictory to the rules of probability. As with Pisaturo, he argues that the doomsday argument conflates future duration with total duration by specification of doom times that occur after the observed birth order. According to O'Neill:
The reason for the hostility to the doomsday argument and its assertion of a "Bayesian shift" is that many people who are familiar with probability theory are implicitly aware of the absurdity of the claim that one can have an automatic unidirectional shift in beliefs regardless of the actual outcome that is observed. This is an example of the "reasoning to a foregone conclusion" that arises in certain kinds of failures of an underlying inferential mechanism. An examination of the inference problem used in the argument shows that this suspicion is indeed correct and the doomsday argument is invalid. (pp. 216-217)

Mathematics-free explanation by analogy

Assume the human species is a car driver. The driver has encountered some bumps but no catastrophes, and the car (Earth) is still road-worthy. However, insurance is required. The cosmic insurer has not dealt with humanity before, and needs some basis on which to calculate the premium. According to the Doomsday Argument, the insurer need merely ask how long the car and driver have been on the road (currently at least 40,000 years without an "accident") and base the premium on a 50% chance that a fatal "accident" will occur within an equally long period in the future.

Consider a hypothetical insurance company that tries to attract drivers with long accident-free histories not because they necessarily drive more safely than newly qualified drivers, but for statistical reasons: the hypothetical insurer estimates that each driver looks for insurance quotes every year, so that the time since the last accident is an evenly distributed random sample between accidents. The chance of being more than halfway through an evenly distributed random sample is one-half, and (ignoring old-age effects) if the driver is more than halfway between accidents then he is closer to his next accident than his previous one. A driver who was accident-free for 10 years would be quoted a very low premium for this reason, but someone should not expect cheap insurance if he only passed his test two hours ago (equivalent to the accident-free record of the human species in relation to 40,000 years of geological time.)

Analogy to the estimated final score of a cricket batsman

A random in-progress cricket test match is sampled for a single piece of information: the current batsman's run tally so far. If the batsman is dismissed (rather than his team declaring because it has enough runs), what is the chance that he will end up with a score more than double his current total?
A rough empirical result is that the chance is half (on average).
The Doomsday argument (DA) is that even if we were completely ignorant of the game we could make the same prediction, or profit by offering a bet paying odds of 2-to-3 on the batsman doubling his current score.

Importantly, we can only offer the bet before the current score is given (this is necessary because the absolute value of the current score would give a cricket expert a lot of information about the chance of that tally doubling). It is necessary to be ignorant of the absolute run tally before making the prediction, because this is linked to the likely total; but if the likely total and absolute value are not linked, the survival prediction can be made after discovering the batsman's current score. Analogously, the DA says that if the absolute number of humans born gives no information on the number that will be, we can predict the species' total number of births after discovering that 60 billion people have ever been born: with 50% confidence it is 120 billion people, so that there is a better-than-even chance that the last human birth will occur before the 23rd century.

It is not true that the chance is half, whatever the number of runs currently scored; batting records give an empirical correlation between reaching a given score (50, say) and reaching any other, higher score (say 100). On average, the chance of doubling the current tally may be half, but the chance of reaching 100 having scored 50 is much lower than the chance of reaching ten from five. Thus, the absolute value of the score gives information about the likely final total the batsman will reach, beyond the scale-invariant ratio n/N alone.

An analogous Bayesian critique of the DA is that it implicitly relies on prior knowledge of the all-time human population distribution (the analogue of total runs scored), and that this prior knowledge is more significant than the finding of a low number of births until now (a low current run count).

There are two alternative methods of making a uniform draw of the current score (n):
  1. Take the runs actually scored by a dismissed player (say 200) and choose uniformly at random among these scoring increments, i.e., a U(0, 200] draw.
  2. Select a time randomly from the beginning of the match to the final dismissal.
The second sampling scheme will include those lengthy periods of a game during which a dismissed player is being replaced and the 'current batsman' is preparing to take the field with no runs. If people sample based on time of day rather than running score, they will often find that a new batsman has a score of zero when the total score that day was low, but they will rarely sample a zero if one batsman continued piling on runs all day long. Therefore, sampling a non-zero score tells us something about the likely final score the current batsman will achieve.

Choosing sampling method 2 rather than method 1 would give a different statistical link between current and final score: any non-zero score would imply that the batsman reached a high final total, especially if the time to replace a batsman is very long. This is analogous to the SIA refutation of the DA, on which N's distribution should include N = 0 states, which leads to the DA having reduced predictive power (in the extreme, no power to predict N from n at all).
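
A toy simulation can make the contrast between the two sampling schemes concrete. The model below is purely illustrative (geometric final scores, one run per unit of time, and a fixed scoreless changeover gap are all assumptions of the sketch, not facts about cricket): conditioning on a non-zero time-based sample raises the expected final score, as described above.

```python
import random

P_OUT = 0.02      # per-run dismissal chance: mean final score ~50 (toy assumption)
GAP = 30          # scoreless time units while batsmen change over (toy assumption)

def final_score():
    s = 0
    while random.random() > P_OUT:
        s += 1
    return s

def sample_scheme1():
    # Method 1: uniform draw among a dismissed player's scoring increments.
    s = final_score()
    current = random.randint(1, s) if s > 0 else 0
    return current, s

def sample_scheme2():
    # Method 2: uniform draw over time, including the scoreless gap.
    s = final_score()
    t = random.uniform(0, s + GAP)
    current = 0 if t >= s else int(t) + 1
    return current, s

for scheme in (sample_scheme1, sample_scheme2):
    draws = [scheme() for _ in range(100_000)]
    finals = [final for current, final in draws if current > 0]
    print(scheme.__name__, "mean final score given a non-zero sample:",
          round(sum(finals) / len(finals), 1))
```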

The Doomsday Argument as a tricky problem

Sometimes, the Doomsday Argument is presented as a probability problem using Bayes’ formula.

Hypotheses

Two hypotheses are in competition:
  1. Theory A says that humanity will disappear in 2150,
  2. and theory B says that it will disappear much later.
Under assumption A, a tenth of humanity was alive in the year 2000, and humanity has included 50 billion individuals.

Under assumption B, one thousandth of humanity was alive in the year 2000, and humanity has included 5 trillion individuals.

The first theory seems less likely, so its a priori probability is set at 1%, while the probability of the second is accordingly set at 99%.

Now consider an event E, for example: "a person is part of the 5 billion people alive in the year 2000". One may ask "What is the most likely hypothesis, if you take into account this event?" and apply Bayes' formula:
\mathbb {P} (A\mid E)={\frac {\mathbb {P} (E\mid A)\cdot \mathbb {P} (A)}{\mathbb {P} (E)}}
According to the above figures:
\mathbb {P} (E\mid A)=10\%\ ,\ \mathbb {P} (E\mid B)=0.10\%
Now with :
\mathbb {P} (A)={\frac {1}{100}}\ ,\ \mathbb {P} (B)={\frac {99}{100}}
We get :
\mathbb {P} (E)=\mathbb {P} (E\cap A)+\mathbb {P} (E\cap B)=\mathbb {P} (E\mid A)\cdot \mathbb {P} (A)+\mathbb {P} (E\mid B)\cdot \mathbb {P} (B)={\frac {19.9}{10000}}
Finally the probabilities have changed dramatically:
\mathbb {P} (A\mid E)={\frac {10}{19.9}}=50.25\%
\mathbb {P} (B\mid E)={\frac {9.9}{19.9}}=49.75\%
Because an individual was chosen randomly, the probability of the end of the world has significantly increased.

Attempted refutations

A potential refutation was provided in July 2003: Jean-Paul Delahaye showed that Bayes' formula introduces a "probabilistic anamorphosis", and demonstrated that it is prone to misleading errors made in good faith by its users. In 2011, Philippe Gay showed that many similar problems can lead to these mistakes: each replacement of a weighted average by a simple average leads to odd results.

In 2010,[18] Philippe Gay and Édouard Thomas described a slightly different understanding: the formula must take into account the number of humans involved in each case. These explanations show the same algebra:
\mathbb {P} (B\mid E)={\frac {0.1\%\times 5\cdot 10^{12}\times 99\%}{0.1\%\times 5\cdot 10^{12}\times 99\%+10\%\times 50\cdot 10^{9}\times 1\%}}={\frac {99\%}{99\%+1\%}}=99\%=\mathbb {P} (B)
Using a similar method, we get:
\mathbb {P} (A\mid E)={\frac {1\%}{99\%+1\%}}=1\%=\mathbb {P} (A)
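
The two treatments can be compared side by side. The sketch below (using the figures defined in this section) reproduces both the per-hypothesis update that yields 50.25% and the per-individual weighting of Gay and Thomas, which returns the original priors:

```python
# Both calculations from the section above, side by side.
pA, pB = 0.01, 0.99          # priors: doom in 2150 (A) vs. much later (B)
likeA, likeB = 0.10, 0.001   # P(E|A), P(E|B): chance of being alive in 2000
NA, NB = 50e9, 5000e9        # total humans ever born under A and under B

# Standard per-hypothesis Bayes update:
evidence = likeA * pA + likeB * pB
print(f"P(A|E) = {likeA * pA / evidence:.2%}")     # 50.25%

# Weighting each hypothesis by the number of individuals involved:
wA = likeA * NA * pA
wB = likeB * NB * pB
print(f"weighted P(A|E) = {wA / (wA + wB):.0%}")   # 1% = P(A): no shift
```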

Malthusian catastrophe

From Wikipedia, the free encyclopedia
 
A chart of estimated annual growth rates in world population, 1800–2005. Rates before 1950 are annualized historical estimates from the US Census Bureau. Red = USCB projections to 2025

A Malthusian catastrophe (also known as Malthusian check or Malthusian spectre) is a prediction that population growth will outpace agricultural production – that there will be too many people and not enough food.

Thomas Malthus

In 1798, Thomas Malthus wrote:
Famine seems to be the last, the most dreadful resource of nature. The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race. The vices of mankind are active and able ministers of depopulation. They are the precursors in the great army of destruction, and often finish the dreadful work themselves. But should they fail in this war of extermination, sickly seasons, epidemics, pestilence, and plague advance in terrific array, and sweep off their thousands and tens of thousands. Should success be still incomplete, gigantic inevitable famine stalks in the rear, and with one mighty blow levels the population with the food of the world.
— Thomas Malthus, 1798. An Essay on the Principle of Population. Chapter VII, p. 61
Notwithstanding the apocalyptic image conveyed by this particular paragraph, Malthus himself did not subscribe to the notion that mankind was fated for a "catastrophe" due to population overshooting resources. Rather, he believed that population growth was generally restricted by available resources:

The passion between the sexes has appeared in every age to be so nearly the same that it may always be considered, in algebraic language, as a given quantity. The great law of necessity which prevents population from increasing in any country beyond the food which it can either produce or acquire, is a law so open to our view...that we cannot for a moment doubt it. The different modes which nature takes to prevent or repress a redundant population do not appear, indeed, to us so certain and regular, but though we cannot always predict the mode we may with certainty predict the fact.
— Thomas Malthus, 1798. An Essay on the Principle of Population. Chapter IV.

Preventive vs. positive

Malthus proposed two kinds of population checks: preventive and positive.

A preventive check is a conscious decision to delay marriage or abstain from procreation based on a lack of resources. This type of check is unique to humanity, because it requires foresight. Malthus argued that man is incapable of ignoring the consequences of uncontrolled population growth, and would intentionally avoid contributing to it. According to Malthus, a positive check is any event or circumstance that shortens the human life span. The primary examples of this are war, plague and famine. However, poor health and economic conditions are also considered instances of positive checks.

Neo-Malthusian theory

Wheat yields in developing countries since 1961, in kg/ha. The steep rise in crop yields in the U.S. began in the 1940s. The percentage of growth was fastest in the early rapid growth stage. In developing countries maize yields are still rapidly rising.
 
After World War II, mechanized agriculture produced a dramatic increase in productivity of agriculture and the Green Revolution greatly increased crop yields, expanding the world's food supply while lowering food prices. In response, the growth rate of the world's population accelerated rapidly, resulting in predictions by Paul R. Ehrlich, Simon Hopkins, and many others of an imminent Malthusian catastrophe. However, populations of most developed countries grew slowly enough to be outpaced by gains in productivity.

By the early 21st century, many technologically developed countries had passed through the demographic transition, a complex social development encompassing a drop in total fertility rates in response to various fertility factors, including lower infant mortality, increased urbanization, and a wider availability of effective birth control.

On the assumption that the demographic transition is now spreading from the developed countries to less developed countries, the United Nations Population Fund estimates that human population may peak in the late 21st century rather than continue to grow until it has exhausted available resources.

World population from 1800 to 2100, based on UN 2004 projections (red, orange, green) and US Census Bureau historical estimates (black)
 
Growth in food production has been greater than population growth. Food per person increased since 1961.
 
Historians have estimated the total human population back to 10,000 BC. The figure above shows the trend of total population from 1800 to 2005, and from there in three projections out to 2100 (low, medium, and high). The United Nations population projections out to 2100 (the red, orange, and green lines) show a possible peak in the world's population occurring by 2040 in the first scenario, by 2100 in the second scenario, and never-ending growth in the third.

The graph of annual growth rates (at the top of the page) does not appear exactly as one would expect for long-term exponential growth. For exponential growth it should be a straight line at constant height, whereas in fact the graph from 1800 to 2005 is dominated by an enormous hump that began about 1920, peaked in the mid-1960s, and has been steadily eroding away for the last 40 years. The sharp fluctuation between 1959 and 1960 was due to the combined effects of the Great Leap Forward and a natural disaster in China. Also visible on this graph are the effects of the Great Depression, the two world wars, and possibly also the 1918 flu pandemic.

Though short-term trends, even on the scale of decades or centuries, cannot prove or disprove the existence of mechanisms promoting a Malthusian catastrophe over longer periods, the prosperity of a major fraction of the human population at the beginning of the 21st century, and the debatability of the predictions for ecological collapse made by Paul R. Ehrlich in the 1960s and 1970s, have led some people, such as economist Julian L. Simon, to question the catastrophe's inevitability.

A 2004 study by a group of prominent economists and ecologists, including Kenneth Arrow and Paul Ehrlich, suggests that the central concerns regarding sustainability have shifted from population growth to the consumption/savings ratio, due to shifts in population growth rates since the 1970s. Empirical estimates show that public policy (taxes or the establishment of more complete property rights) can promote more efficient consumption and investment that are sustainable in an ecological sense; that is, given the current (relatively low) population growth rate, the Malthusian catastrophe can be avoided by either a shift in consumer preferences or public policy that induces a similar shift.

A 2002 study by the UN Food and Agriculture Organization predicts that world food production will be in excess of the needs of the human population by the year 2030; however, that source also states that hundreds of millions will remain hungry (presumably due to economic realities and political issues).

Criticism

Karl Marx and Friedrich Engels argued that Malthus failed to recognize a crucial difference between humans and other species. In capitalist societies, as Engels put it, scientific and technological "progress is as unlimited and at least as rapid as that of population". Marx argued, even more broadly, that the growth of both a human population in toto and the "relative surplus population" within it, occurred in direct proportion to accumulation.

Henry George criticized Malthus's view that population growth was a cause of poverty, arguing that poverty was caused by the concentration of ownership of land and natural resources. George noted that humans are distinct from other species, because unlike most species humans can use their minds to leverage the reproductive forces of nature to their advantage. He wrote, "Both the jayhawk and the man eat chickens; but the more jayhawks, the fewer chickens, while the more men, the more chickens."

Ester Boserup suggested that population levels determined agricultural methods, rather than agricultural methods determining population.

Julian Simon was another economist who argued that there could be no global Malthusian catastrophe, because of two factors: (1) the existence of new knowledge, and educated people to take advantage of it, and (2) "economic freedom", that is, the ability of the world to increase production when there is a profitable opportunity to do so.

D.E.C. Eversley observed that Malthus appeared unaware of the extent of industrialization, and either ignored or discredited the possibility that it could improve living conditions of the poorer classes.

In contrast to these criticisms, some individuals, such as Joseph Tainter, argue that science has diminishing marginal returns and that scientific progress is becoming more difficult, harder to achieve, and more costly. (DJS: also see Ray Kurzweil.)

Julian Simon

From Wikipedia, the free encyclopedia

Julian Lincoln Simon (February 12, 1932 – February 8, 1998) was an American professor of business administration at the University of Maryland and a Senior Fellow at the Cato Institute at the time of his death, after previously serving as a longtime economics and business professor at the University of Illinois at Urbana-Champaign.

Simon wrote many books and articles, mostly on economic subjects. He is best known for his work on population, natural resources, and immigration. His work covers cornucopian views on lasting economic benefits from natural resources and continuous population growth, even despite limited or finite physical resources, empowered by human ingenuity, substitutes, and technological progress. His works are also cited by libertarians against government regulation.

He is also known for the famous Simon–Ehrlich wager, a bet he made with ecologist Paul R. Ehrlich. Ehrlich bet that the prices for five metals would increase over a decade, while Simon took the opposite stance. Simon won the bet, as the prices for the metals sharply declined during that decade.

Theory

Simon's 1981 book The Ultimate Resource is a criticism of what was then the conventional wisdom on resource scarcity, published within the context of the cultural background created by the best-selling and highly influential book The Population Bomb in 1968 by Paul R. Ehrlich and The Limits to Growth analysis published in 1972. The Ultimate Resource challenged the conventional wisdom on population growth, raw-material scarcity and resource consumption. Simon argues that our notions of increasing resource-scarcity ignore the long-term declines in wage-adjusted raw material prices. Viewed economically, he argues, increasing wealth and technology make more resources available; although supplies may be limited physically they may be regarded as economically indefinite as old resources are recycled and new alternatives are assumed to be developed by the market. Simon challenged the notion of an impending Malthusian catastrophe—that an increase in population has negative economic consequences; that population is a drain on natural resources; and that we stand at risk of running out of resources through over-consumption. Simon argues that population is the solution to resource scarcities and environmental problems, since people and markets innovate. His ideas were praised by Nobel Laureate economists Friedrich Hayek and Milton Friedman, the latter in a 1998 foreword to The Ultimate Resource II, but they have also attracted critics such as Paul R. Ehrlich, Albert Allen Bartlett and Herman Daly.

Simon examined different raw materials, especially metals, and their prices in historical times. He assumed that, temporary shortfalls aside, in the long run prices for raw materials remain at similar levels or even decrease. For example, aluminium was never as expensive as it was before 1886, and steel used for medieval armor carried a much higher price tag in current dollars than any modern parallel. A recent discussion of commodity index long-term trends supported his positions.

His 1984 book The Resourceful Earth (co-edited by Herman Kahn) is a similar criticism of the conventional wisdom on population growth and resource consumption, and a direct response to the Global 2000 report. For example, it predicted that "There is no compelling reason to believe that world oil prices will rise in the coming decades. In fact, prices may well fall below current levels". Indeed, oil prices trended downward for nearly the next two decades, before rising above 1984 levels in about 2003 or 2004. Oil prices have subsequently risen and fallen, and risen again. In 2008, the price of crude oil reached $100 per barrel, a level last attained in the 1860s (inflation adjusted). Later in 2008, the price again sharply fell, to a low of about $40, before rising again to a high around $125. From mid-2011 prices trended slowly downward until the middle of 2014, then fell dramatically by the end of 2015 to around $30. Since then prices have been relatively stable (below $50).

Simon was skeptical, in 1994, of claims that human activity caused global environmental damage, notably in relation to CFCs, ozone depletion and climate change, the latter primarily because of the perceived rapid switch from fears of global cooling and a new ice age (in the mid-1970s) to the later fears of global warming.

Simon also listed numerous claims about alleged environmental damage and health dangers from pollution as "definitely disproved". These included claims about lead pollution & IQ, DDT, PCBs, malathion, Agent Orange, asbestos, and the chemical contamination at Love Canal. He dismissed such concerns as a mere "value judgement."
But also, to a startling degree, the decision about whether the overall effect of a child or migrant is positive or negative depends on the values of whoever is making the judgment – your preference to spend a dollar now rather than to wait for a dollar-plus-something in twenty or thirty years, your preferences for having more or fewer wild animals alive as opposed to more or fewer human beings alive, and so on.

Influence

Simon was one of the founders of free-market environmentalism. An article entitled "The Doomslayer" profiling Julian Simon in Wired magazine inspired Danish climate skeptic Bjørn Lomborg to write the book The Skeptical Environmentalist.

Simon was also the first to suggest that airlines should provide incentives for travelers to give up their seats on overbooked flights, rather than arbitrarily taking random passengers off the plane (a practice known as "bumping"). Although the airline industry initially rejected it, his plan was later implemented with resounding success, as recounted by Milton Friedman in the foreword to The Ultimate Resource II. Economist James Heins said in 2009 that the practice had added $100 billion to the United States economy in the last 30 years. Simon gave away his idea to federal de-regulators and never received any personal profit from his solution.

Although not all of Simon's arguments were universally accepted, they contributed to a shift in opinion in the literature on demographic economics from a strongly Malthusian negative view of population growth to a more neutral view. More recent theoretical developments, based on the ideas of the demographic dividend and demographic window, have contributed to another shift, this time away from the debate viewing population growth as either good or bad.

Simon wrote a memoir, A Life Against the Grain, which was published by his wife after his death.

Wagers with rivals

Paul R. Ehrlich – first wager

Simon challenged Paul R. Ehrlich to a wager in 1980 over the price of metals a decade later; Simon had been challenging environmental scientists to the bet for some time. Ehrlich, John Harte, and John Holdren selected a basket of five metals that they thought would rise in price with increasing scarcity and depletion. Simon won the bet, with all five metals dropping in price.

Supporters of Ehrlich's position suggest that much of this price drop came because of an oil spike driving prices up in 1980 and a recession driving prices down in 1990, pointing out that the price of the basket of metals actually rose from 1950 to 1975. They also suggest that Ehrlich did not consider the prices of these metals to be critical indicators, and that Ehrlich took the bet with great reluctance. On the other hand, Ehrlich selected the metals to be used himself, and at the time of the bet called it an "astonishing offer" that he was accepting "before other greedy people jump in."

The total supply of three of these metals (chromium, copper, and nickel) increased during this time. Prices also declined for reasons specific to each of the five:
  • The price of tin went down because of an increased use of aluminium, a much more abundant, useful and inexpensive material.
  • Better mining technologies allowed for the discovery of vast nickel lodes, which ended the near monopoly that Canada had enjoyed on the market.
  • The price of tungsten fell due to the increased use of ceramics in cookware.
  • The price of chromium fell due to better smelting techniques.
  • The price of copper began to fall due to the invention of fiber optic cable (which is derived from sand), which serves a number of the functions once reserved only for copper wire.
In all of these cases, better technology allowed either more efficient use of existing resources or substitution with a more abundant and less expensive resource, as Simon predicted.

Paul R. Ehrlich – proposed second wager

In 1995, Simon issued a challenge for a second bet. Ehrlich declined, and proposed instead that they bet on a metric for human welfare. Ehrlich offered Simon a set of 15 metrics over 10 years, the victor to be determined by scientists chosen by the president of the National Academy of Sciences in 2005. There was no meeting of minds, because Simon felt that too many of the proposed metrics measured attributes of the world not directly related to human welfare, e.g. the amount of nitrous oxide in the atmosphere. For such indirect indicators to be considered "bad", Simon argued, they would ultimately have to have some measurable detrimental effect on actual human welfare. Ehrlich refused to leave out the measures Simon considered immaterial.

Simon summarized the bet with the following analogy:
Let me characterize their [Ehrlich and Schneider's] offer as follows. I predict, and this is for real, that the average performances in the next Olympics will be better than those in the last Olympics. On average, the performances have gotten better, Olympics to Olympics, for a variety of reasons. What Ehrlich and others say is that they don't want to bet on athletic performances, they want to bet on the conditions of the track, or the weather, or the officials, or any other such indirect measure.

David South

The same year as his second challenge to Ehrlich, Simon also entered a wager with David South, a professor in the Auburn University School of Forestry. The Simon–South wager concerned timber prices. Consistent with his cornucopian analysis of the issue in The Ultimate Resource, Simon wagered that at the end of a five-year term the consumer price of pine timber would have decreased; South wagered that it would increase. Before the five years had elapsed, Simon, seeing that market and extra-market forces were driving up the price of timber, conceded and paid South $1,000. Simon died before the agreed-upon end date of the bet, by which time timber prices had risen further.

Simon attributed his early exit from the bet to "the far-reaching quantity and price effects of logging restrictions in the Pacific Northwest." He believed this counted as interference from the U.S. government, which rendered the bet worthless according to his economic principles. Simon's bet had contemplated only the possibility of prices being driven up by Alabama's state government; he did not believe anything worthwhile was shown when U.S. logging restrictions drove the prices up.

Main statements and criticism

Jared Diamond (in his book Collapse), Albert Bartlett, and Garrett Hardin have described Simon as overly optimistic and some of his assumptions as out of line with natural limitations.
We now have in our hands—really, in our libraries—the technology to feed, clothe, and supply energy to an ever-growing population for the next seven billion years. (Simon in The State of Humanity: Steadily Improving, 1995)
Diamond claims that a continued stable growth rate of the Earth's population would result in extreme overpopulation long before the suggested time limit. In the prediction quoted above, however, Simon did not specify that he was assuming a fixed growth rate, as Diamond, Bartlett, and Hardin have done. Simon argued that people do not become poorer as the population expands: increasing numbers produce what they need to support themselves, and have prospered, and will continue to prosper, while food prices sink.
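
To make Diamond's objection concrete, a back-of-envelope calculation (an illustration using round numbers, not figures taken from Diamond) shows how quickly any fixed growth rate collides with a physical bound:

    import math

    # Illustration of the fixed-growth-rate objection (round numbers only):
    # exponential growth exhausts any physical bound long before deep time.
    current_pop = 1e10           # ~10 billion people, rounded up
    growth_rate = 0.01           # 1% per year, below 20th-century peak rates
    atoms_in_universe = 1e80     # common order-of-magnitude estimate

    years = math.log(atoms_in_universe / current_pop) / math.log(1 + growth_rate)
    print(f"{years:,.0f} years") # roughly 16,000 -- nowhere near 7 billion

At 1% annual growth, the population would outnumber the atoms of the observable universe in roughly 16,000 years, which is why the critics treat a seven-billion-year horizon as incompatible with any constant growth rate.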
There is no reason to believe that at any given moment in the future the available quantity of any natural resource or service at present prices will be much smaller than it is now, or non-existent. (Simon in The Ultimate Resource, 1981)
Diamond believes Simon implies it would be possible to produce metals, e.g. copper, from other elements, a notion he finds absurd. For Simon, human resource needs are small compared to the wealth of nature. He therefore argued that physical limitations play a minor role and that shortages of raw materials tend to be local and temporary. The main scarcity Simon pointed to is the amount of human brain power (i.e. "The Ultimate Resource"), which allows human activity to be perpetuated for a practically unlimited time. For example, before copper ore could become scarce and prices soar under globally increasing demand for copper wires and cabling, global data and telecommunication networks switched to glass-fiber backbone networks.
This is my long-run forecast in brief: ... The material conditions of life will continue to get better for most people, in most countries, most of the time, indefinitely. Within a century or two, all nations and most of humanity will be at or above today's Western living standards. I also speculate, however, that many people will continue to think and say that the conditions of life are getting worse.
This and other quotations in Wired reportedly prompted Bjørn Lomborg to write The Skeptical Environmentalist. Lomborg has stated that he began his research as an attempt to counter what he saw as Simon's anti-ecological arguments, but changed his mind after starting to analyze the data.

Legacy

The Institute for the Study of Labor established the annual Julian L. Simon Lecture to honor Simon's work in population economics. The University of Illinois at Urbana-Champaign held a symposium discussing Simon's work on April 24, 2002. The university also established the Julian Simon Memorial Faculty Scholar Endowment to fund an associate faculty member in the business school. India's Liberty Institute also holds a Julian Simon Memorial Lecture. The Competitive Enterprise Institute gives the Julian Simon Memorial Award annually to an economist in the vein of Simon; the first recipient was Stephen Moore, who had served as a research fellow under Simon in the 1980s.

Personal life

Simon was married to Rita James Simon, who was also a longtime member of the faculty at the University of Illinois at Urbana-Champaign and later became a public affairs professor at American University. Simon suffered from long-term depression, which left him only a few productive working hours a day. He also studied the psychology of depression and wrote a book on overcoming it. Simon was Jewish. He died of a heart attack at his home in Chevy Chase in 1998, at age 65.
