A molecular assembler, as defined by K. Eric Drexler, is a "proposed device able to guide chemical reactions by positioning reactive molecules with atomic precision". A molecular assembler is a kind of molecular machine. Some biological molecules such as ribosomes fit this definition. This is because they receive instructions from messenger RNA and then assemble specific sequences of amino acids to construct protein molecules. However, the term "molecular assembler" usually refers to theoretical human-made devices.
Beginning in 2007, the British Engineering and Physical Sciences Research Council has funded development of ribosome-like
molecular assemblers. Clearly, molecular assemblers are possible in
this limited sense. A technology roadmap project, led by the Battelle Memorial Institute and hosted by several U.S. National Laboratories, has explored a range of atomically precise fabrication technologies, including both early-generation and longer-term prospects for programmable molecular assembly; the report was released in December 2007.
In 2008, the Engineering and Physical Sciences Research Council
provided funding of £1.5 million over six years (equivalent to about £1.94 million, or US$2.69 million, in 2021) for research working towards mechanized mechanosynthesis, in partnership with the Institute for Molecular Manufacturing, amongst others.
The term "molecular assembler" has also been used in science fiction and popular culture
to refer to a wide range of fantastic atom-manipulating nanomachines,
many of which may be physically impossible in reality. Much of the
controversy regarding "molecular assemblers" results from the confusion
in the use of the name for both technical concepts and popular
fantasies. In 1992, Drexler introduced the related but better-understood
term "molecular manufacturing", which he defined as the programmed "chemical synthesis of complex structures by mechanically positioning reactive molecules, not by manipulating individual atoms".
This article mostly discusses "molecular assemblers" in the
popular sense. These include hypothetical machines that manipulate
individual atoms and machines with organism-like self-replicating
abilities, mobility, ability to consume food, and so forth. These are
quite different from devices that merely (as defined above) "guide
chemical reactions by positioning reactive molecules with atomic
precision".
Because synthetic molecular assemblers have never been
constructed and because of the confusion regarding the meaning of the
term, there has been much controversy as to whether "molecular
assemblers" are possible or simply science fiction. Confusion and
controversy also stem from their classification as nanotechnology,
which is an active area of laboratory research which has already been
applied to the production of real products; however, there had been,
until recently, no research efforts into the actual construction of "molecular assemblers".
Nonetheless, a 2013 paper by David Leigh's group, published in the journal Science, details a new method of synthesizing a peptide in a sequence-specific manner by using an artificial molecular machine that is guided by a molecular strand.
This functions in the same way as a ribosome building proteins by
assembling amino acids according to a messenger RNA blueprint. The
structure of the machine is based on a rotaxane, which is a molecular ring sliding along a molecular axle. The ring carries a thiolate
group, which removes amino acids in sequence from the axle,
transferring them to a peptide assembly site. In 2018, the same group
published a more advanced version of this concept in which the molecular
ring shuttles along a polymeric track to assemble an oligopeptide that can fold into an α-helix capable of performing the enantioselective epoxidation of a chalcone derivative (in a way reminiscent of the ribosome assembling an enzyme). In another paper published in Science in March 2015, chemists at the University of Illinois report a platform that automates the synthesis of 14 classes of small molecules, with thousands of compatible building blocks.
In 2017, David Leigh's group reported a molecular robot that could be programmed to construct any one of four different stereoisomers
of a molecular product by using a nanomechanical robotic arm to move a
molecular substrate between different reactive sites of an artificial
molecular machine.
An accompanying News and Views article, titled ‘A molecular assembler’,
outlined the operation of the molecular robot as effectively a
prototypical molecular assembler.
Nanofactories
A nanofactory is a proposed system in which nanomachines (resembling molecular assemblers, or industrial robot arms) would combine reactive molecules via mechanosynthesis
to build larger atomically precise parts. These, in turn, would be
assembled by positioning mechanisms of assorted sizes to build
macroscopic (visible) but still atomically-precise products.
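As a rough, purely illustrative calculation (the part sizes and the factor-of-two stages are assumptions for this sketch, not figures from any published design), convergent assembly spans the nanoscale-to-macroscale gap in remarkably few steps: if each stage joins sub-parts into a block of roughly twice the linear dimension, then going from building blocks of about 1 nm to a product of about 10 cm requires only about
\( \log_2\!\left(\tfrac{10^{-1}\,\mathrm{m}}{10^{-9}\,\mathrm{m}}\right) = \log_2\!\left(10^{8}\right) \approx 27 \)
doubling stages.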
A typical nanofactory would fit in a desktop box, in the vision of K. Eric Drexler published in Nanosystems: Molecular Machinery, Manufacturing and Computation (1992), a notable work of "exploratory engineering". During the 1990s, others extended the nanofactory concept, including an analysis of nanofactory convergent assembly by Ralph Merkle, a systems design of a replicating nanofactory architecture by J. Storrs Hall, Forrest Bishop's "Universal Assembler", the patented exponential assembly process by Zyvex,
and a top-level systems design for a 'primitive nanofactory' by Chris
Phoenix (Director of Research at the Center for Responsible
Nanotechnology). All of these nanofactory designs (and more) are
summarized in Chapter 4 of Kinematic Self-Replicating Machines (2004) by Robert Freitas and Ralph Merkle. The Nanofactory Collaboration,
founded by Freitas and Merkle in 2000, is a focused, ongoing effort
involving 23 researchers from 10 organizations and 4 countries that is
developing a practical research agenda specifically aimed at positionally-controlled diamond mechanosynthesis and diamondoid nanofactory development.
In 2005, a computer-animated short film
of the nanofactory concept was produced by John Burch, in collaboration
with Drexler. Such visions have been the subject of much debate, on
several intellectual levels. No one has discovered an insurmountable
problem with the underlying theories and no one has proved that the
theories can be translated into practice. However, the debate continues,
with some of it being summarized in the molecular nanotechnology article.
If nanofactories could be built, severe disruption to the world economy
would be one of many possible negative impacts, though it could be
argued that this disruption would have little negative effect, if
everyone had such nanofactories. Great benefits also would be
anticipated. Various works of science fiction have explored these and similar concepts. The potential for such devices was part of the mandate of a major UK study led by mechanical engineering professor Dame Ann Dowling.
Self-replication
"Molecular
assemblers" have been confused with self-replicating machines. To
produce a practical quantity of a desired product, the nanoscale size of
a typical science fiction universal molecular assembler requires an
extremely large number of such devices. However, a single such
theoretical molecular assembler might be programmed to self-replicate,
constructing many copies of itself. This would allow an exponential
rate of production. Then, after sufficient quantities of the molecular
assemblers were available, they would then be re-programmed for
production of the desired product. However, if self-replication of
molecular assemblers were not restrained then it might lead to
competition with naturally occurring organisms. This has been called ecophagy or the grey goo problem.
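To make the arithmetic of exponential replication explicit (a simple illustration with an assumed fixed replication time, not a prediction), a single assembler that copies itself every period \( \tau \) yields a population of
\( N(t) = 2^{\,t/\tau}, \)
so roughly 30 doublings already give \( 2^{30} \approx 10^{9} \) devices, after which the population could be switched from replication to production of the desired product.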
One method of building molecular assemblers is to mimic
evolutionary processes employed by biological systems. Biological
evolution proceeds by random variation combined with culling of the
less-successful variants and reproduction of the more-successful
variants. Production of complex molecular assemblers might be evolved
from simpler systems since "A complex system
that works is invariably found to have evolved from a simple system
that worked. . . . A complex system designed from scratch never works
and can not be patched up to make it work. You have to start over,
beginning with a system that works."
However, most published safety guidelines include "recommendations
against developing ... replicator designs which permit surviving
mutation or undergoing evolution".
Most assembler designs keep the "source code" external to the
physical assembler. At each step of a manufacturing process, that step
is read from an ordinary computer file and "broadcast" to all the
assemblers. If any assembler gets out of range of that computer, or when
the link between that computer and the assemblers is broken, or when
that computer is unplugged, the assemblers stop replicating. Such a
"broadcast architecture" is one of the safety features recommended by
the "Foresight Guidelines on Molecular Nanotechnology", and a map of the
137-dimensional replicator design space
recently published by Freitas and Merkle provides numerous practical
methods by which replicators can be safely controlled by good design.
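The control logic of such a broadcast architecture can be sketched in a few lines of Python (a hypothetical illustration only; the class and function names below are invented for this sketch and do not correspond to any published assembler design):

# Hypothetical sketch of a "broadcast architecture": the build instructions
# live only in an external controller, each step is broadcast to every
# assembler, and all assemblers halt as soon as the link is lost.

class BroadcastController:
    def __init__(self, build_steps):
        self.build_steps = build_steps      # the "source code", kept outside the assemblers

    def broadcast(self):
        yield from self.build_steps         # one step at a time, in order

class Assembler:
    def __init__(self):
        self.log = []

    def execute(self, step):
        self.log.append(step)               # stand-in for performing one positional-assembly step

def run(controller, assemblers, link_up=lambda: True):
    for step in controller.broadcast():
        if not link_up():                   # out of range, link broken, or controller unplugged
            break                           # every assembler stops, including any replication
        for assembler in assemblers:
            assembler.execute(step)

if __name__ == "__main__":
    controller = BroadcastController(["position molecule A", "bond A to B", "release product"])
    swarm = [Assembler() for _ in range(3)]
    run(controller, swarm)
    print(swarm[0].log)                     # ['position molecule A', 'bond A to B', 'release product']

In this toy model no assembler ever holds the full build program, so severing the broadcast link (the link_up check) is enough to stop both manufacturing and replication.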
One of the most outspoken critics of some concepts of "molecular assemblers" was Professor Richard Smalley (1943–2005), who shared the 1996 Nobel Prize in Chemistry for the discovery of fullerenes, a foundational contribution to nanotechnology.
Smalley believed that such assemblers were not physically possible and
introduced scientific objections to them. His two principal technical
objections were termed the "fat fingers problem" and the "sticky fingers
problem". He believed these would exclude the possibility of
"molecular assemblers" that worked by precision picking and placing of
individual atoms. Drexler and coworkers responded to these two issues in a 2001 publication.
Smalley also believed that Drexler's speculations about
apocalyptic dangers of self-replicating machines that have been equated
with "molecular assemblers" would threaten the public support for
development of nanotechnology. To address the debate between Drexler and
Smalley regarding molecular assemblers, Chemical & Engineering News published a point-counterpoint consisting of an exchange of letters that addressed the issues.
Regulation
Speculation
on the power of systems that have been called "molecular assemblers"
has sparked a wider political discussion on the implication of
nanotechnology. This is in part due to the fact that nanotechnology is a
very broad term and could include "molecular assemblers". Discussion of
the possible implications of fantastic molecular assemblers has
prompted calls for regulation of current and future nanotechnology.
There are very real concerns with the potential health and ecological impact of nanotechnology that is being integrated into manufactured products. Greenpeace, for instance, commissioned a report concerning nanotechnology in which it expresses concern about the toxicity of nanomaterials that have been introduced into the environment. However, the report makes only passing references to "assembler" technology. The UK Royal Society and Royal Academy of Engineering also commissioned a report entitled "Nanoscience and nanotechnologies: opportunities and uncertainties"
regarding the larger social and ecological implications of
nanotechnology. This report does not discuss the threat posed by
potential so-called "molecular assemblers".
Formal scientific review
In 2006, the U.S. National Academy of Sciences released the report of a study of molecular manufacturing as part of a longer report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative. The study committee reviewed the technical content of Nanosystems,
and in its conclusion states that no current theoretical analysis can
be considered definitive regarding several questions of potential system
performance, and that optimal paths for implementing high-performance
systems cannot be predicted with confidence. It recommends experimental
research to advance knowledge in this area:
"Although theoretical calculations can be made today, the
eventually attainable range of chemical reaction cycles, error rates,
speed of operation, and thermodynamic efficiencies
of such bottom-up manufacturing systems cannot be reliably predicted at
this time. Thus, the eventually attainable perfection and complexity of
manufactured products, while they can be calculated in theory, cannot
be predicted with confidence. Finally, the optimum research paths that
might lead to systems which greatly exceed the thermodynamic efficiencies
and other capabilities of biological systems cannot be reliably
predicted at this time. Research funding that is based on the ability of
investigators to produce experimental demonstrations that link to
abstract models and guide long-term vision is most appropriate to
achieve this goal."
One potential scenario that has been envisioned is out-of-control self-replicating molecular assemblers in the form of grey goo which consumes carbon to continue its replication. If unchecked, such mechanical replication could potentially consume whole ecoregions or the whole Earth (ecophagy), or it could simply outcompete natural lifeforms for necessary resources such as carbon, ATP, or UV light (which some nanomotor examples run on). However, the ecophagy
and 'grey goo' scenarios, like synthetic molecular assemblers, are
based upon still-hypothetical technologies that have not yet been
demonstrated experimentally.
A global catastrophic risk is a hypothetical future event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an existential risk.
Over the last two decades, a number of academic and non-profit
organizations have been established to research global catastrophic and
existential risks and formulate potential mitigation measures.
Definition and classification
Scope/intensity grid from Bostrom's paper "Existential Risk Prevention as Global Priority"
Defining global catastrophic risks
The
term global catastrophic risk "lacks a sharp definition", and generally
refers (loosely) to a risk that could inflict "serious damage to human
well-being on a global scale".
Humanity has suffered large catastrophes before. Some of these
have caused serious damage, but were only local in scope—e.g. the Black Death may have resulted in the deaths of a third of Europe's population, 10% of the global population at the time. Some were global, but were not as severe—e.g. the 1918 influenza pandemic killed an estimated 3-6% of the world's population.
Most global catastrophic risks would not be so intense as to kill the
majority of life on earth, but even if one did, the ecosystem and
humanity would eventually recover (in contrast to existential risks).
Similarly, in Catastrophe: Risk and Response, Richard Posner
singles out and groups together events that bring about "utter
overthrow or ruin" on a global, rather than a "local or regional",
scale. Posner highlights such events as worthy of special attention on cost–benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole.
Defining existential risks
Existential risks are defined as "risks that threaten the destruction of humanity's long-term potential." The instantiation of an existential risk (an existential catastrophe) would either cause outright human extinction or irreversibly lock in a drastically inferior state of affairs. Existential risks are a sub-class of global catastrophic risks, where the damage is not only global, but also terminal and permanent (preventing recovery and thus impacting both the current and all subsequent generations).
Non-extinction risks
While
extinction is the most obvious way in which humanity's long-term
potential could be destroyed, there are others, including unrecoverable collapse and unrecoverable dystopia.
A disaster severe enough to cause the permanent, irreversible collapse
of human civilisation would constitute an existential catastrophe, even
if it fell short of extinction. Similarly, if humanity fell under a totalitarian regime, and there were no chance of recovery—as imagined by George Orwell in his 1949 novel Nineteen Eighty-Four—such a dystopian future would also be an existential catastrophe. Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".
A dystopian scenario shares the key features of extinction and
unrecoverable collapse of civilisation—before the catastrophe, humanity
faced a vast range of bright futures to choose from; after the
catastrophe, humanity is locked forever in a terrible state.
Likelihood
Natural vs. anthropogenic
Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks. A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk.
Humanity has existed for at least 200,000 years, over which it has been
subject to a roughly constant level of natural risk. If the natural
risk were high, then it would be highly unlikely that humanity would
have survived as long as it has. Based on a formalization of this
argument, researchers have concluded that we can be confident that
natural risk is lower than 1 in 14,000 (and likely "less than one in
87,000") per year.
Another empirical method to study the likelihood of certain natural risks is to investigate the geological record. For example, a comet or asteroid impact event sufficient in scale to cause an impact winter that would cause human extinction before the year 2100 has been estimated at one-in-a-million. Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity.
The geological record suggests that supervolcanic eruptions are
estimated to occur on average about every 50,000 years, though most such
eruptions would not reach the scale required to cause human extinction. Famously, the supervolcano Mt. Toba may have almost wiped out humanity at the time of its last eruption (though this is contentious).
Since anthropogenic risk is a relatively recent phenomenon,
humanity's track record of survival cannot provide similar assurances.
Humanity has only survived 75 years since the creation of nuclear
weapons, and for future technologies there is no track record at all.
This has led thinkers like Carl Sagan to conclude that humanity is currently in a ‘time of perils’—a
uniquely dangerous period in human history, where it is subject to
unprecedented levels of risk, beginning from when we first started
posing risks to ourselves through our actions.
Risk estimates
Given the limitations of ordinary observation and modeling, expert
elicitation is frequently used instead to obtain probability estimates. In 2008, an informal survey of experts at a conference hosted by the Future of Humanity Institute
estimated a 19% risk of human extinction by the year 2100, though given
the survey's limitations these results should be taken "with a grain
of salt".
There have been a number of other estimates of existential risk, extinction risk, or a global collapse of civilisation:
In 1996, John Leslie estimated a 30% risk over the next five centuries (equivalent to around 9% per century, on average).
In 2002, Nick Bostrom
gave the following estimate of existential risk over the long term: ‘My
subjective opinion is that setting this probability lower than 25%
would be misguided, and the best estimate may be considerably higher.’
In 2003, Martin Rees estimated a 50% chance of collapse of civilisation in the twenty-first century.
The Global Challenges Foundation's 2016 annual report estimates an annual probability of human extinction of at least 0.05% per year.
A 2016 survey of AI experts found a median estimate of 5% that
human-level AI would cause an outcome that was "extremely bad (e.g.
human extinction)".
Metaculus users currently estimate a 3% probability of humanity going extinct before 2100.
Methodological challenges
Research
into the nature and mitigation of global catastrophic risks and
existential risks is subject to a unique set of challenges and
consequently not easily subject to the usual standards of scientific
rigour.
For instance, it is neither feasible nor ethical to study these risks
experimentally. Carl Sagan expressed this with regards to nuclear war:
“Understanding the long-term consequences of nuclear war is not a
problem amenable to experimental verification”.
Moreover, many catastrophic risks change rapidly as technology advances
and background conditions (such as international relations) change.
Another challenge is the general difficulty of accurately predicting the
future over long timescales, especially for anthropogenic risks which
depend on complex human political, economic and social systems. In addition to known and tangible risks, unforeseeable black swan extinction events may occur, presenting an additional methodological problem.
Lack of historical precedent
Humanity has never suffered an existential catastrophe and if one were to occur, it would necessarily be unprecedented. Therefore, existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the failure of a complete extinction event
to occur in the past is not evidence against its likelihood in the
future, because every world that has experienced such an extinction
event has no observers, so regardless of their frequency, no
civilization observes existential risks in its history. These anthropic
issues may partly be avoided by looking at evidence that does not have
such selection effects, such as asteroid impact craters on the Moon, or
directly evaluating the likely impact of new technology.
To understand the dynamics of an unprecedented, unrecoverable
global civilisational collapse (a type of existential risk), it may be
instructive to study the various local civilizational collapses that have occurred throughout human history. For instance, civilizations such as the Roman Empire
have ended in a loss of centralized governance and a major
civilization-wide loss of infrastructure and advanced technology.
However, these examples suggest that societies appear to be fairly resilient to catastrophe; for example, Medieval Europe survived
the Black Death without suffering anything resembling a civilization collapse despite losing 25 to 50 percent of its population.
Incentives and coordination
There are economic reasons that can explain why so little effort is going into existential risk reduction. It is a global public good, so we should expect it to be undersupplied by markets.
Even if a large nation invests in risk mitigation measures, that nation
will enjoy only a small fraction of the benefit of doing so.
Furthermore, existential risk reduction is an intergenerational
global public good, since most of the benefits of existential risk
reduction would be enjoyed by future generations, and though these
future people would in theory perhaps be willing to pay substantial sums
for existential risk reduction, no mechanism for such a transaction
exists.
Scope insensitivity influences how bad people consider the
extinction of the human race to be. For example, when people are
motivated to donate money to altruistic causes, the quantity they are
willing to give does not increase linearly with the magnitude of the
issue: people are roughly as willing to prevent the deaths of 200,000 or
2,000 birds. Similarly, people are often more concerned about threats to individuals than to larger groups.
In one of the earliest discussions of ethics of human extinction, Derek Parfit offers the following thought experiment:
I believe that if we destroy
mankind, as we now can, this outcome will be much worse than most people
think. Compare three outcomes:
(1) Peace.
(2) A nuclear war that kills 99% of the world's existing population.
(3) A nuclear war that kills 100%.
(2) would be worse than (1), and (3) would be worse than (2). Which is
the greater of these two differences? Most people believe that the
greater difference is between (1) and (2). I believe that the difference
between (2) and (3) is very much greater.
— Derek Parfit
The scale of what is lost in an existential catastrophe is determined
by humanity's long-term potential—what humanity could expect to achieve
if it survived. From a utilitarian
perspective, the value of protecting humanity is the product of its
duration (how long humanity survives), its size (how many humans there
are over time), and its quality (on average, how good is life for future
people).
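Schematically (a formalization of the preceding sentence rather than a precise calculation), this view values humanity's future as roughly the product
\( V \approx D \times S \times Q, \)
where \( D \) is how long humanity survives, \( S \) is the average population over that time, and \( Q \) is the average quality of life; an existential catastrophe drives \( D \), and hence \( V \), to approximately zero no matter how large the other factors could have been.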
On average, species survive for around a million years before going
extinct. Parfit points out that the Earth will remain habitable for
around a billion years. And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years.
The size of the foregone potential that would be lost, were humanity to
go extinct, is very large. Therefore, reducing existential risk by even
a small amount would have a very significant moral value.
Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking. While these views are controversial,
even their proponents would likely agree that an existential catastrophe would be among
the worst things imaginable. It would cut short the lives of eight
billion presently existing people, destroying all of what makes their
lives valuable, and most likely subjecting many of them to profound
suffering. So even setting aside the value of future generations, there
may be strong reasons to reduce existential risk, grounded in concern
for presently existing people.
Beyond utilitarianism, other moral perspectives lend support to
the importance of reducing existential risk. An existential catastrophe
would destroy more than just humanity—it would destroy all cultural
artefacts, languages, and traditions, and many of the things we value.
So moral viewpoints on which we have duties to protect and cherish
things of value would see this as a huge loss that should be avoided. One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership ... between those who are living, those who are dead, and those who are to be born".
If one takes seriously the debt humanity owes to past generations, Ord
argues the best way of repaying it might be to 'pay it forward', and
ensure that humanity's inheritance is passed down to future generations.
Several economists have discussed the importance of global catastrophic risks. For example, Martin Weitzman
argues that most of the expected economic damage from climate change
may come from the small chance that warming greatly exceeds the
mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.
Potential sources of risk
Some sources of catastrophic risk are anthropogenic (man-made), such as global warming, environmental degradation, engineered pandemics and nuclear war. On the other hand, some risks are non-anthropogenic or natural, such as meteor impacts or supervolcanoes.
Anthropogenic
Many experts—including those at the Future of Humanity Institute at the University of Oxford and the Centre for the Study of Existential Risk at the University of Cambridge—prioritize anthropogenic over natural risks due to their much greater estimated likelihood.
They are especially concerned by, and consequently focus on, risks
posed by advanced technology, such as artificial intelligence and
biotechnology.
It has been suggested that if AI systems rapidly become super-intelligent, they may take unforeseen actions or out-compete humanity. According to philosopher Nick Bostrom,
it is possible that the first super-intelligence to emerge would be
able to bring about almost any possible outcome it valued, as well as to
foil virtually any attempt to prevent it from achieving its objectives. Thus, even a super-intelligence indifferent to humanity could be dangerous if it perceived humans as an obstacle to unrelated goals. In Bostrom's book Superintelligence, he defines this as the control problem. Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have echoed these concerns, with Hawking theorizing that such an AI could "spell the end of the human race".
In 2009, the Association for the Advancement of Artificial Intelligence (AAAI) hosted a conference to discuss whether computers and robots might be able to acquire any sort of autonomy,
and how much these abilities might pose a threat or hazard. They noted
that some robots have acquired various forms of semi-autonomy, including
being able to find power sources on their own and being able to
independently choose targets to attack with weapons. They also noted
that some computer viruses can evade elimination and have achieved
"cockroach intelligence". They noted that self-awareness as depicted in
science-fiction is probably unlikely, but there are other potential
hazards and pitfalls.
Various media sources and scientific groups have noted separate trends
in differing areas which might together result in greater robotic
functionalities and autonomy, and which pose some inherent concerns.
A survey of AI experts estimated that the chance of human-level
machine learning having an "extremely bad (e.g., human extinction)"
long-term effect on humanity is 5%. A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by super-intelligence by 2100. Eliezer Yudkowsky believes risks from artificial intelligence are harder to predict than any other known risks due to bias from anthropomorphism.
Since people base their judgments of artificial intelligence on their
own experience, he claims they underestimate the potential power of AI.
Biotechnology can pose a global catastrophic risk in the form of bioengineered organisms (viruses, bacteria, fungi, plants or animals). In many cases the organism will be a pathogen of humans, livestock, crops or other organisms we depend upon (e.g. pollinators or gut bacteria). However, any organism able to catastrophically disrupt ecosystem functions, e.g. highly competitive weeds outcompeting essential crops, poses a biotechnology risk.
A biotechnology catastrophe may be caused by accidentally
releasing a genetically engineered organism from controlled
environments, by the planned release of such an organism which then
turns out to have unforeseen and catastrophic interactions with
essential natural or agro-ecosystems, or by intentional usage of biological agents in biological warfare or bioterrorism attacks. Pathogens may be intentionally or unintentionally genetically modified to change virulence and other characteristics. For example, a group of Australian researchers unintentionally changed characteristics of the mousepox virus while trying to develop a virus to sterilize rodents. The modified virus became highly lethal even in vaccinated and naturally resistant mice.
The technological means to genetically modify virus characteristics are
likely to become more widely available in the future if not properly
regulated.
Terrorist
applications of biotechnology have historically been infrequent. To
what extent this is due to a lack of capabilities or motivation is not
resolved. However, given current development, more risk from novel, engineered pathogens is to be expected in the future. Exponential growth has been observed in the biotechnology
sector, and Nouri and Chyba predict that this will lead to major
increases in biotechnological capabilities in the coming decades. They argue that risks from biological warfare and bioterrorism
are distinct from nuclear and chemical threats because biological
pathogens are easier to mass-produce and their production is hard to
control (especially as the technological capabilities are becoming
available even to individual users).
In 2008, a survey by the Future of Humanity Institute estimated a 2%
probability of extinction from engineered pandemics by 2100.
Nouri and Chyba propose three categories of measures to reduce risks from biotechnology and natural pandemics: regulation or prevention of potentially dangerous research, improved recognition of outbreaks, and development of facilities to mitigate disease outbreaks (e.g. better and/or more widely distributed vaccines).
Cyberattacks have the potential to destroy everything from personal data to electric grids. Christine Peterson, co-founder and past president of the Foresight Institute,
believes a cyberattack on electric grids has the potential to be a
catastrophic risk. She notes that little has been done to mitigate such
risks, and that mitigation could take several decades of readjustment.
An October 2017 report published in The Lancet stated that toxic air, water, soils, and workplaces were collectively responsible for nine million deaths worldwide in 2015, particularly from air pollution which was linked to deaths by increasing susceptibility to non-infectious diseases, such as heart disease, stroke, and lung cancer.
The report warned that the pollution crisis was exceeding "the envelope
on the amount of pollution the Earth can carry" and "threatens the
continuing survival of human societies".
A May 2020 analysis published in Scientific Reports found that if deforestation and resource consumption
continue at current rates they could culminate in a "catastrophic
collapse in human population" and possibly "an irreversible collapse of
our civilization" within the next several decades. The study says
humanity should pass from a civilization dominated by the economy to a
"cultural society" that "privileges the interest of the ecosystem above
the individual interest of its components, but eventually in accordance
with the overall communal interest." The authors also note that "while
violent events, such as global war or natural catastrophic events, are
of immediate concern to everyone, a relatively slow consumption of the
planetary resources may be not perceived as strongly as a mortal danger
for the human civilization."
Experimental technology accident
Nick
Bostrom suggested that in the pursuit of knowledge, humanity might
inadvertently create a device that could destroy Earth and the Solar
System.
Investigations in nuclear and high-energy physics could create unusual
conditions with catastrophic consequences. For example, scientists
worried that the first nuclear test might ignite the atmosphere. Others worried that the RHIC or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. These particular concerns have been challenged, but the general concern remains.
Biotechnology could lead to the creation of a pandemic, chemical warfare could be taken to an extreme, nanotechnology could lead to grey goo in which out-of-control self-replicating robots consume all living matter on earth while building more of themselves—in both cases, either deliberately or by accident.
This 1902 article attributes to Swedish Nobel laureate (for chemistry) Svante Arrhenius a theory that coal combustion could eventually lead to a degree of global warming causing human extinction.
Global warming
refers to the warming caused by human technology since the 19th century
or earlier. Projections of future climate change suggest further global
warming, sea level rise,
and an increase in the frequency and severity of some extreme weather
events and weather-related disasters. Effects of global warming include loss of biodiversity,
stresses to existing food-producing systems, increased spread of known
infectious diseases such as malaria, and rapid mutation of microorganisms.
In November 2017, a statement by 15,364 scientists from 184 countries
indicated that increasing levels of greenhouse gases from use of fossil
fuels, human population growth, deforestation, and overuse of land for
agricultural production, particularly by farming ruminants for meat consumption, are trending in ways that forecast an increase in human misery over coming decades.
Ever since Georgescu-Roegen and Daly published these views,
various scholars in the field have been discussing the existential
impossibility of allocating earth's finite stock of mineral resources
evenly among an unknown number of present and future generations. This
number of generations is likely to remain unknown to us, as there is little or no way of knowing in advance if or when mankind will ultimately face extinction. In effect, any conceivable intertemporal allocation of the stock will inevitably end up with universal economic decline at some future point.
Many nanoscale technologies are in development or currently in use. The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision.
Molecular manufacturing requires significant advances in
nanotechnology, but once achieved could produce highly advanced products
at low costs and in large quantities in nanofactories of desktop
proportions.
When nanofactories gain the ability to produce other nanofactories,
production may only be limited by relatively abundant factors such as
input materials, energy and software.
Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons. Being equipped with compact computers and motors, these could be increasingly autonomous and have a large range of capabilities.
Chris Phoenix and Mike Treder classify catastrophic risks posed by nanotechnology into three categories:
From augmenting the development of other technologies such as AI and biotechnology.
By enabling mass-production of potentially dangerous products that
cause risk dynamics (such as arms races) depending on how they are used.
From uncontrolled self-perpetuating processes with destructive effects.
Several researchers say the bulk of risk from nanotechnology comes
from the potential to lead to war, arms races and destructive global
government.
Several reasons have been suggested why the availability of nanotech
weaponry may with significant likelihood lead to unstable arms races
(compared to e.g. nuclear arms races):
A large number of players may be tempted to enter the race since the threshold for doing so is low;
The ability to make weapons with molecular manufacturing will be cheap and easy to hide;
Therefore, lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes;
Molecular manufacturing may reduce dependency on international trade, a potential peace-promoting factor;
Wars of aggression
may pose a smaller economic threat to the aggressor since manufacturing
is cheap and humans may not be needed on the battlefield.
Since self-regulation by all state and non-state actors seems hard to achieve, measures to mitigate war-related risks have mainly been proposed in the area of international cooperation.
International infrastructure may be expanded giving more sovereignty to
the international level. This could help coordinate efforts for arms
control. International institutions dedicated specifically to
nanotechnology (perhaps analogously to the International Atomic Energy
Agency IAEA) or general arms control may also be designed. One may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour. The Center for Responsible Nanotechnology also suggests some technical restrictions. Improved transparency regarding technological capabilities may be another important facilitator for arms control.
Grey goo is another catastrophic scenario, which was proposed by Eric Drexler in his 1986 book Engines of Creation and has been a theme in mainstream media and fiction. This scenario involves tiny self-replicating robots
that consume the entire biosphere using it as a source of energy and
building blocks. Nowadays, however, nanotech experts—including
Drexler—discredit the scenario. According to Phoenix, a "so-called grey
goo could only be the product of a deliberate and difficult engineering
process, not an accident".
The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Mistakenly launching a nuclear attack in response to a false alarm is one possible scenario; this nearly happened during the 1983 Soviet nuclear false alarm incident. Although the probability of a nuclear war per year is slim, Professor Martin Hellman
has described it as inevitable in the long run; unless the probability
approaches zero, inevitably there will come a day when civilization's
luck runs out. During the Cuban Missile Crisis, U.S. president John F. Kennedy estimated the odds of nuclear war at "somewhere between one out of three and even". The United States and Russia have a combined arsenal of 14,700 nuclear weapons, and there is an estimated total of 15,700 nuclear weapons in existence worldwide. Beyond nuclear, other military threats to humanity include biological warfare (BW). By contrast, chemical warfare, while able to create multiple local catastrophes, is unlikely to create a global one.
Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating large numbers of nuclear weapons would have immediate, short-term, and long-term effects on the climate, causing cold weather and reduced sunlight and photosynthesis that may generate significant upheaval in advanced civilizations.
However, while popular perception sometimes takes nuclear war as "the
end of the world", experts assign low probability to human extinction
from nuclear war. In 1982, Brian Martin
estimated that a US–Soviet nuclear exchange might kill 400–450 million
directly, mostly in the United States, Europe and Russia, and maybe
several hundred million more through follow-up consequences in those
same areas.
In 2008, a survey by the Future of Humanity Institute estimated a 4%
probability of extinction from warfare by 2100, with a 1% chance of
extinction from nuclear warfare.
M. King Hubbert's prediction of world petroleum production rates. Modern agriculture is heavily dependent on petroleum energy.
The 20th century saw a rapid increase in human population due to medical developments and massive increases in agricultural productivity such as the Green Revolution.
Between 1950 and 1984, as the Green Revolution transformed agriculture
around the globe, world grain production increased by 250%. The Green
Revolution in agriculture helped food production to keep pace with
worldwide population growth
or actually enabled population growth. The energy for the Green
Revolution was provided by fossil fuels in the form of fertilizers
(natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University,
and Mario Giampietro, senior researcher at the National Research
Institute on Food and Nutrition (INRAN), place in their 1994 study Food, Land, Population and the U.S. Economy the maximum U.S. population for a sustainable economy
at 200 million. To achieve a sustainable economy and avert disaster,
the United States must reduce its population by at least one-third, and
world population will have to be reduced by two-thirds, says the study.
The authors of this study believe that the agricultural crisis they describe will begin to affect the world after 2020, and will
become critical after 2050. Geologist Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.
Since supplies of petroleum and natural gas are essential to modern agriculture techniques, a fall in global oil supplies (see peak oil for global concerns) could cause spiking food prices and unprecedented famine in the coming decades.
Wheat is humanity's third-most-produced cereal. Extant fungal infections such as Ug99 (a kind of stem rust)
can cause 100% crop losses in most modern varieties. Little or no
treatment is possible and infection spreads on the wind. Should the
world's large grain-producing areas become infected, the ensuing crisis
in wheat availability would lead to price spikes and shortages in other
food products.
Non-anthropogenic
Of all species that have ever lived, 99% have gone extinct. Earth has experienced numerous mass extinction events, in which up to 96% of all species present at the time were eliminated. A notable example is the K-T extinction event,
which killed the dinosaurs. The types of threats posed by nature have
been argued to be relatively constant, though this has been disputed.
Several asteroids have collided with Earth in recent geological history. The Chicxulub asteroid, for example, was about six miles in diameter and is theorized to have caused the extinction of non-avian dinosaurs at the end of the Cretaceous.
No sufficiently large asteroid currently exists in an Earth-crossing
orbit; however, a comet of sufficient size to cause human extinction
could impact the Earth, though the annual probability may be less than
10−8. Geoscientist Brian Toon estimates that while a few people, such as "some fishermen in Costa Rica", could plausibly survive a six-mile meteorite, a sixty-mile meteorite would be large enough to "incinerate everybody".
Asteroids with around a 1 km diameter have impacted the Earth on
average once every 500,000 years; these are probably too small to pose
an extinction risk, but might kill billions of people. Larger asteroids are less common. Small near-Earth asteroids are regularly observed and can impact anywhere on the Earth injuring local populations. As of 2013, Spaceguard estimates it has identified 95% of all NEOs over 1 km in size.
In April 2018, the B612 Foundation reported "It's a 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council
warned that America is unprepared for an asteroid impact event, and has
developed and released the "National Near-Earth Object Preparedness
Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
A number of astronomical threats have been identified. Massive objects, e.g. a star, large planet or black hole,
could be catastrophic if a close encounter occurred in the Solar
System. In April 2008, it was announced that two simulations of
long-term planetary movement, one at the Paris Observatory and the other at the University of California, Santa Cruz, indicate a 1% chance that Mercury's
orbit could be made unstable by Jupiter's gravitational pull sometime
during the lifespan of the Sun. Were this to happen, the simulations
suggest a collision with Earth could be one of four possible outcomes
(the others being Mercury colliding with the Sun, colliding with Venus,
or being ejected from the Solar System altogether). If Mercury were to
collide with Earth, all life on Earth could be obliterated entirely: an
asteroid 15 km wide is believed to have caused the extinction of the
non-avian dinosaurs, whereas Mercury is 4,879 km in diameter.
Conjectured illustration of the scorched Earth after the Sun has entered the red giant phase, about seven billion years from now
If our universe lies within a false vacuum,
a bubble of lower-energy vacuum could come to exist by chance or
otherwise in our universe, and catalyze the conversion of our universe
to a lower energy state in a volume expanding at nearly the speed of
light, destroying all that we know without forewarning. Such an
occurrence is called vacuum decay.
Another cosmic threat is a gamma-ray burst, typically produced by a supernova
when a star collapses inward on itself and then "bounces" outward in a
massive explosion. Under certain circumstances, these events are thought
to produce massive bursts of gamma radiation emanating outward from the
axis of rotation of the star. If such an event were to occur oriented
towards the Earth, the massive amounts of gamma radiation could
significantly affect the Earth's atmosphere and pose an existential
threat to all life. Such a gamma-ray burst may have been the cause of
the Ordovician–Silurian extinction events. Neither this scenario nor the destabilization of Mercury's orbit are likely in the foreseeable future.
A powerful solar flare
or solar superstorm, which is a drastic and unusual decrease or
increase in the Sun's power output, could have severe consequences for
life on Earth.
Astrophysicists currently calculate that in a few billion years
the Earth will probably be swallowed by the expansion of the Sun into a red giant star.
Intelligent extraterrestrial life,
if existent, could invade Earth either to exterminate and supplant
human life, enslave it under a colonial system, steal the planet's
resources, or destroy the planet altogether.
Although the existence of alien life has never been confirmed, scientists such as Carl Sagan have postulated that extraterrestrial life is very likely to exist. In 1969, the "Extra-Terrestrial Exposure Law" was added to the United States Code of Federal Regulations (Title 14, Section 1211) in response to the possibility of biological contamination resulting from the U.S. Apollo Space Program. It was removed in 1991. Scientists consider such a scenario technically possible, but unlikely.
An article in The New York Times
discussed the possible threats for humanity of intentionally sending
messages aimed at extraterrestrial life into the cosmos in the context
of the SETI
efforts. Several renowned public figures such as Stephen Hawking and
Elon Musk have argued against sending such messages on the grounds that
extraterrestrial civilizations with technology are probably far more
advanced than humanity and could pose an existential threat to humanity.
There are numerous historical examples of pandemics that have had a devastating effect on a large number of people. The present, unprecedented scale and speed of human movement make it more difficult than ever to contain an epidemic through local quarantines,
and other sources of uncertainty and the evolving nature of the risk
mean natural pandemics may pose a realistic threat to human
civilization.
There are several classes of argument about the likelihood of
pandemics. One stems from history, where the limited size of historical
pandemics is evidence that larger pandemics are unlikely. This argument
has been disputed on grounds including the changing risk due to changing
population and behavioral patterns among humans, the limited historical
record, and the existence of an anthropic bias.
Another argument is based on an evolutionary model that predicts that naturally evolving pathogens will ultimately develop an upper limit to their virulence.
This is because pathogens with high enough virulence quickly kill their
hosts and reduce their chances of spreading the infection to new hosts
or carriers.
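A standard way to express this trade-off (a textbook-style sketch; the functional form is illustrative rather than specific to any particular pathogen) is through the basic reproduction number,
\( R_0(v) = \frac{\beta(v)}{\,v + \mu + \gamma\,}, \)
where \( \beta(v) \) is the transmission rate (typically increasing with virulence \( v \) but saturating), \( \mu \) is the background host death rate, and \( \gamma \) is the recovery rate; because very high virulence shortens the infectious period faster than it raises transmission, \( R_0 \) is maximized at an intermediate level of virulence.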
This model has limits, however, because the fitness advantage of
limited virulence is primarily a function of a limited number of hosts.
Any pathogen with a high virulence, high transmission rate and long
incubation time may have already caused a catastrophic pandemic before
its virulence is ultimately limited through natural selection. Additionally,
a pathogen that infects humans as a secondary host and primarily
infects another species (a zoonosis) has no constraints on its virulence in people, since the accidental secondary infections do not affect its evolution. Lastly, in models where virulence level and rate of transmission are related, high levels of virulence can evolve.
Virulence is instead limited by the existence of complex populations of
hosts with different susceptibilities to infection, or by some hosts
being geographically isolated. The size of the host population and competition between different strains of pathogens can also alter virulence.
Neither of these arguments applies to bioengineered pathogens, which pose entirely different pandemic risks. Experts
have concluded that "Developments in science and technology could
significantly ease the development and use of high consequence
biological weapons," and these "highly virulent and highly transmissible
[bio-engineered pathogens] represent new potential pandemic threats."
Natural climate change
Climate change
refers to a lasting change in the Earth's climate. The climate has
ranged from ice ages to warmer periods when palm trees grew in
Antarctica. It has been hypothesized that there was also a period called "snowball Earth" when all the oceans were covered in a layer of ice. These global climatic changes occurred slowly, before the climate became more stable near the end of the last major ice age. However, abrupt climate change
on the decade time scale has occurred regionally. A natural variation
into a new climate regime (colder or hotter) could pose a threat to
civilization.
In the history of the Earth, many ice ages
are known to have occurred. An ice age would have a serious impact on
civilization because vast areas of land (mainly in North America,
Europe, and Asia) could become uninhabitable. Currently, the world is in
an interglacial period
within a much older glacial event. The last glacial expansion ended
about 10,000 years ago, and all civilizations evolved later than this.
Scientists do not predict that a natural ice age will occur anytime
soon.
The amount of heat-trapping gases emitted into Earth's oceans and atmosphere will prevent the next ice age, which would otherwise begin in around 50,000 years, and likely further glacial cycles after that.
Yellowstone sits on top of three overlapping calderas
A geological event such as massive flood basalt, volcanism, or the eruption of a supervolcano could lead to a so-called volcanic winter, similar to a nuclear winter. One such event, the Toba eruption, occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory, the event may have reduced human populations to only a few tens of thousands of individuals. Yellowstone Caldera is another such supervolcano, having undergone 142 or more caldera-forming eruptions in the past 17 million years.
A massive volcano eruption would eject extraordinary volumes of volcanic
dust, toxic and greenhouse gases into the atmosphere with serious
effects on global climate (towards extreme global cooling: volcanic winter if short-term, and ice age if long-term) or global warming (if greenhouse gases were to prevail).
When the supervolcano at Yellowstone last erupted 640,000 years ago, the thinnest layers of the ash ejected from the caldera spread over most of the United States west of the Mississippi River and part of northeastern Mexico. The magma
covered much of what is now Yellowstone National Park and extended
beyond, covering much of the ground from Yellowstone River in the east
to Idaho Falls in the west, with some of the flows extending north beyond Mammoth Hot Springs.
According to a recent study, if the Yellowstone caldera erupted
again as a supervolcano, an ash layer one to three millimeters thick
could be deposited as far away as New York, enough to "reduce traction
on roads and runways, short out electrical transformers and cause
respiratory problems". There would be centimeters of thickness over much
of the U.S. Midwest, enough to disrupt crops and livestock, especially
if it happened at a critical time in the growing season. The
worst-affected city would likely be Billings, Montana, population 109,000, which the model predicted would be covered with ash estimated as 1.03 to 1.8 meters thick.
The main long-term effect is through global climate change, which reduces global temperatures by about 5–15 °C for a decade, together with the direct effects of ash deposits on crops. A
large supervolcano like Toba would deposit one or two meters thickness
of ash over an area of several million square kilometers.(1000 cubic
kilometers is equivalent to a one-meter thickness of ash spread over a
million square kilometers). If that happened in some densely populated
agricultural area, such as India, it could destroy one or two seasons of
crops for two billion people.
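The parenthetical volume-to-thickness equivalence follows from a one-line calculation; as a minimal check using only the figures quoted above (here h, V and A are simply shorthand for the deposited thickness, the erupted volume and the area covered, not symbols used in the cited studies):

\[
  h \;=\; \frac{V}{A}
    \;=\; \frac{1000\ \mathrm{km^3}}{10^{6}\ \mathrm{km^2}}
    \;=\; \frac{10^{12}\ \mathrm{m^3}}{10^{12}\ \mathrm{m^2}}
    \;=\; 1\ \mathrm{m}.
\]

By the same arithmetic, a deposit of one to two meters over several million square kilometers corresponds to an erupted volume of several thousand cubic kilometers.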
However, Yellowstone shows no signs of a supereruption at
present, and it is not certain that a future supereruption will occur
there.
Research published in 2011 found evidence that massive volcanic
eruptions caused large-scale coal combustion, supporting models for the
significant generation of greenhouse gases. Researchers have suggested
that massive volcanic eruptions through coal beds in Siberia would
generate significant greenhouse gases and cause a runaway greenhouse effect.
Massive eruptions can also throw enough pyroclastic debris and other
material into the atmosphere to partially block out the sun and cause a volcanic winter, as happened on a smaller scale in 1816 following the eruption of Mount Tambora, the so-called Year Without a Summer.
Such an eruption might cause the immediate deaths of millions of people
several hundred miles from the eruption, and perhaps billions of deaths
worldwide due to the failure of the monsoons, with the resulting major crop failures causing starvation on a profound scale.
A much more speculative concept is the verneshot: a hypothetical volcanic eruption caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory.
Proposed mitigation
Defense in depth is a useful framework for categorizing risk mitigation measures into three layers of defense:
Prevention: Reducing the probability of a catastrophe
occurring in the first place. Example: Measures to prevent outbreaks of
new highly-infectious diseases.
Response: Preventing the scaling of a catastrophe to the
global level. Example: Measures to prevent escalation of a small-scale
nuclear exchange into an all-out nuclear war.
Resilience: Increasing humanity's resilience (against
extinction) when faced with global catastrophes. Example: Measures to
increase food security during a nuclear winter.
Human extinction is most likely when all three defenses are weak,
that is, "by risks we are unlikely to prevent, unlikely to successfully
respond to, and unlikely to be resilient against".
The unprecedented nature of existential risks poses a special
challenge in designing risk mitigation measures since humanity will not
be able to learn from a track record of previous events.
Planetary management and respecting planetary boundaries have been proposed as approaches to preventing ecological catastrophes. Within the scope of these approaches, the field of geoengineering
encompasses the deliberate large-scale engineering and manipulation of
the planetary environment to combat or counteract anthropogenic changes
in atmospheric chemistry. Space colonization is a proposed alternative to improve the odds of surviving an extinction scenario. Solutions of this scope may require megascale engineering.
Food storage
has been proposed globally, but the monetary cost would be high.
Furthermore, it would likely add to the millions of deaths per year already caused by malnutrition.
The Svalbard Global Seed Vault is buried 400 feet (120 m) inside a mountain on an island in the Arctic.
It is designed to hold 2.5 billion seeds from more than 100 countries
as a precaution to preserve the world's crops. The surrounding rock is
−6 °C (21 °F) (as of 2015) but the vault is kept at −18 °C (0 °F) by
refrigerators powered by locally sourced coal.
More speculatively, if society continues to function and if the
biosphere remains habitable, calorie needs for the present human
population might in theory be met during an extended absence of
sunlight, given sufficient advance planning. Conjectured solutions
include growing mushrooms on the dead plant biomass left in the wake of
the catastrophe, converting cellulose to sugar, or feeding natural gas
to methane-digesting bacteria.
Global catastrophic risks and global governance
Insufficient global governance
creates risks in the social and political domain, but governance
mechanisms develop more slowly than technological and social change.
Governments, the private sector, and the general public have expressed
concern about the lack of governance mechanisms for dealing efficiently
with risks and for negotiating and adjudicating between diverse and
conflicting interests. This concern is underlined by an understanding of
the interconnectedness of global systemic risks.
In the absence or anticipation of global governance, national governments
can act individually to better understand, mitigate and prepare for
global catastrophes.
Climate emergency plans
In 2018, the Club of Rome
called for greater climate change action and published its Climate
Emergency Plan, which proposes ten action points to limit global average
temperature increase to 1.5 degrees Celsius. Further, in 2019, the Club published the more comprehensive Planetary Emergency Plan.
Organizations
The Bulletin of the Atomic Scientists
(est. 1945) is one of the oldest global risk organizations, founded
after the public became alarmed by the potential of atomic warfare in
the aftermath of WWII. It studies risks associated with nuclear war and
energy and famously maintains the Doomsday Clock established in 1947. The Foresight Institute
(est. 1986) examines the risks of nanotechnology and its benefits. It
was one of the earliest organizations to study the unintended
consequences of otherwise harmless technology gone haywire at a global
scale. It was founded by K. Eric Drexler who postulated "grey goo".
Beginning after 2000, a growing number of scientists,
philosophers and tech billionaires created organizations devoted to
studying global risks both inside and outside of academia.
Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence, with donors including Peter Thiel and Jed McCaleb. The Nuclear Threat Initiative
(est. 2001) seeks to reduce global threats from nuclear, biological, and
chemical weapons and to contain damage after an event. It maintains a nuclear material security index. The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe. Most of the research money funds projects at universities.
The Global Catastrophic Risk Institute (est. 2011) is a think tank for
catastrophic risk. It is funded by the NGO Social and Environmental
Entrepreneurs. The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks. The Future of Life Institute
(est. 2014) aims to support research and initiatives for safeguarding
life considering new technologies and challenges facing humanity. Elon Musk is one of its biggest donors.
The Center on Long-Term Risk (est. 2016), formerly known as the
Foundational Research Institute, is a British organization focused on
reducing risks of astronomical suffering (s-risks) from emerging technologies.
University-based organizations include the Future of Humanity Institute (est. 2005) which researches the questions of humanity's long-term future, particularly existential risk. It was founded by Nick Bostrom and is based at Oxford University. The Centre for the Study of Existential Risk
(est. 2012) is a Cambridge University-based organization which studies
four major technological risks: artificial intelligence, biotechnology,
global warming and warfare. All are man-made risks, as Huw Price
explained to the AFP news agency, "It seems a reasonable prediction
that some time in this or the next century intelligence will escape from
the constraints of biology". He added that when this happens "we're no
longer the smartest things around," and will risk being at the mercy of
"machines that are not malicious, but machines whose interests don't
include us." Stephen Hawking was an acting adviser. The Millennium Alliance for Humanity and the Biosphere
is a Stanford University-based organization focusing on many issues
related to global catastrophe by bringing together members of academia
in the humanities. It was founded by Paul Ehrlich among others. Stanford University also has the Center for International Security and Cooperation focusing on political cooperation to reduce global catastrophic risk. The Center for Security and Emerging Technology
was established in January 2019 at Georgetown's Walsh School of Foreign
Service and focuses on policy research into emerging technologies, with
an initial emphasis on artificial intelligence. It received a grant of US$55 million from Good Ventures, as suggested by the Open Philanthropy Project.
Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called the Global Alert and Response (GAR) which monitors and responds to global epidemic crises. GAR helps member states with training and the coordination of responses to epidemics. The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program which aims to prevent and contain naturally generated pandemics at their source. The Lawrence Livermore National Laboratory
has a division called the Global Security Principal Directorate which
researches issues such as bio-security and counter-terrorism on behalf
of the government.
History
Early history of thinking about human extinction
Before the 18th and 19th centuries, the possibility that humans or other organisms could go extinct was viewed with scepticism. It contradicted the principle of plenitude, a doctrine that all possible things exist. The principle traces back to Aristotle, and was an important tenet of Christian theology.
The doctrine was gradually undermined by evidence from the natural
sciences, particularly the discovery of fossil evidence of species that
appeared to no longer exist, and the development of theories of
evolution. In On the Origin of Species, Darwin discussed the extinction of species as a natural process and core component of natural selection.
Notably, Darwin was skeptical of the possibility of sudden extinctions,
viewing extinction as a gradual process. He held that the abrupt
disappearances of species from the fossil record were not evidence of
catastrophic extinctions, but rather reflected unrecognised gaps in the
record.
As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction. Beyond science, human extinction was explored in literature. The Romantic authors and poets were particularly interested in the topic. Lord Byron wrote about the extinction of life on earth in his 1816 poem ‘Darkness’, and in 1824 envisaged humanity being threatened by a comet impact, and employing a missile system to defend against it. Mary Shelley’s 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague.
The invention of the atomic bomb prompted a wave of discussion about
the risk of human extinction among scientists, intellectuals, and the
public at large. In a 1945 essay, Bertrand Russell
wrote that "[T]he prospect for the human race is sombre beyond all
precedent. Mankind are faced with a clear-cut alternative: either we
shall all perish, or we shall have to acquire some slight degree of
common sense." A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind".
The discovery of 'nuclear winter'
in the early 1980s, a specific mechanism by which nuclear war could
result in human extinction, again raised the issue to prominence.
Writing about these findings in 1983, Carl Sagan
argued that measuring the badness of extinction solely in terms of
those who die "conceals its full impact," and that nuclear war "imperils
all of our descendants, for as long as there will be humans."
Modern era
John Leslie's 1996 book The End of The World
was an academic treatment of the science and ethics of human
extinction. In it, Leslie considered a range of threats to humanity and
what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour,
in which he argues that advances in certain technologies create new
threats for the survival of humankind, and that the 21st century may be a
critical moment in history when humanity's fate is decided. Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks was published in 2008, a collection of essays from 26 academics on various global catastrophic and existential risks. Toby Ord's 2020 book The Precipice: Existential Risk and the Future of Humanity
argues that preventing existential risks is one of the most important
moral issues of our time. The book discusses, quantifies and compares
different existential risks, concluding that the greatest risks are
presented by unaligned artificial intelligence and biotechnology.