Opte Project visualization of routing paths through a portion of the Internet. The connections and pathways of the internet could be seen as the pathways of neurons and synapses in a global brain.
Proponents of the global brain hypothesis claim that the Internet increasingly ties its users together into a single information processing system that functions as part of the collective nervous system of the planet. The intelligence of this network is collective or distributed:
it is not centralized or localized in any particular individual,
organization or computer system. Therefore, no one can command or
control it. Rather, it self-organizes or emerges from the dynamic networks of interactions between its components. This is a property typical of complex adaptive systems.
The World Wide Web in particular resembles the organization of a brain with its web pages (playing a role similar to neurons) connected by hyperlinks (playing a role similar to synapses), together forming an associative network along which information propagates. This analogy becomes stronger with the rise of social media, such as Facebook, where links between personal pages represent relationships in a social network along which information propagates from person to person. Such propagation is similar to the spreading activation that neural networks in the brain use to process information in a parallel, distributed manner.
History
Although some of the underlying ideas were already expressed by Nikola Tesla
in the late 19th century and were written about by many others before
him, the term "global brain" was coined in 1982 by Peter Russell in his
book The Global Brain. How the Internet might be developed to achieve this was set out in 1986. The first peer-reviewed article on the subject was published by Gottfried Mayer-Kress in 1995, while the first algorithms that could turn the world-wide web into a collectively intelligent network were proposed by Francis Heylighen and Johan Bollen in 1996.
Reviewing the strands of intellectual history that contributed to the global brain hypothesis, Francis Heylighen distinguishes four perspectives: organicism, encyclopedism, emergentism and evolutionary cybernetics. He asserts that these developed in relative independence but now are converging in his own scientific re-formulation.
Organicism
In the 19th century, the sociologist Herbert Spencer saw society as a social organism and reflected about its need for a nervous system. Entomologist William Wheeler developed the concept of the ant colony as a spatially extended organism, and in the 1930s he coined the term superorganism to describe such an entity. This concept was later adopted by thinkers such as Joël de Rosnay in the book Le Cerveau Planétaire (1986) and Gregory Stock in the book Metaman (1993) to describe planetary society as a superorganism.
The mental aspects of such an organic system at the planetary
level were perhaps first broadly elaborated by palaeontologist and
Jesuit priest Pierre Teilhard de Chardin.
In 1945, he described a coming "planetisation" of humanity, which he
saw as the next phase of accelerating human "socialisation". Teilhard
described both socialization and planetization as irreversible,
irresistible processes of macrobiological development culminating in the emergence of a noosphere, or global mind (see Emergentism below).
The more recent living systems theory
describes both organisms and social systems in terms of the "critical
subsystems" ("organs") they need to contain in order to survive, such as
an internal transport system, a resource reserve, and a decision-making
system. This theory has inspired several thinkers, including Peter
Russell and Francis Heylighen to define the global brain as the network
of information processing subsystems for the planetary social system.
Encyclopedism
In the perspective of encyclopedism, the emphasis is on developing a
universal knowledge network. The first systematic attempt to create such
an integrated system of the world's knowledge was the 18th century Encyclopédie of Denis Diderot and Jean le Rond d'Alembert.
However, by the end of the 19th century, the amount of knowledge had
become too large to be published in a single synthetic volume. To tackle
this problem, Paul Otlet founded the science of documentation, now called information science. In the 1930s he envisaged a World Wide Web-like
system of associations between documents and telecommunication links
that would make all the world's knowledge available immediately to
anybody. H. G. Wells
proposed a similar vision of a collaboratively developed world
encyclopedia that would be constantly updated by a global
university-like institution. He called this a World Brain, as it would function as a continuously updated memory for the planet,
although the image of humanity acting informally as a more organic
global brain is a recurring motif in many of his other works.
Tim Berners-Lee, the inventor of the World Wide Web,
too, was inspired by the free-associative possibilities of the brain
for his invention. The brain can link different kinds of information
that have no apparent connection otherwise; Berners-Lee thought that computers
could become much more powerful if they could imitate this functioning,
i.e. create links between arbitrary pieces of information. The most powerful implementation of encyclopedism to date is Wikipedia,
which integrates the associative powers of the world-wide-web with the
collective intelligence of its millions of contributors, approaching the
ideal of a global memory. The Semantic Web,
also first proposed by Berners-Lee, is a system of protocols to make
the pieces of knowledge and their links readable by machines, so that
they could be used to make automatic inferences, thus providing this brain-like network with some capacity for autonomous "thinking" or reflection.
Emergentism
This approach focuses on the emergent aspects of the evolution and development of complexity,
including the spiritual, psychological, and moral-ethical aspects of
the global brain, and is at present the most speculative approach. The
global brain is here seen as a natural and emergent process of planetary
evolutionary development. Here again Pierre Teilhard de Chardin attempted a synthesis of science, social values, and religion in his The Phenomenon of Man, which argues that the telos
(drive, purpose) of universal evolutionary process is the development
of greater levels of both complexity and consciousness. Teilhard
proposed that if life persists then planetization, as a biological
process producing a global brain, would necessarily also produce a
global mind, a new level of planetary consciousness and a
technologically supported network of thoughts which he called the noosphere.
Teilhard's proposed technological layer for the noosphere can be
interpreted as an early anticipation of the Internet and the Web.
Evolutionary cybernetics
Systems theorists and cyberneticians commonly describe the emergence of a higher order system in evolutionary development as a "metasystem transition" (a concept introduced by Valentin Turchin) or a "major evolutionary transition". Such a metasystem consists of a group of subsystems that work together
in a coordinated, goal-directed manner. It is as such much more powerful
and intelligent than its constituent systems. Francis Heylighen
has argued that the global brain is an emerging metasystem with respect
to the level of individual human intelligence, and investigated the
specific evolutionary mechanisms that promote this transition.
In this scenario, the Internet fulfils the role of the network of
"nerves" that interconnect the subsystems and thus coordinates their
activity. The cybernetic approach makes it possible to develop
mathematical models and simulations of the processes of self-organization through which such coordination and collective intelligence emerges.
Recent developments
In 1994 Kevin Kelly, in his popular book Out of Control, posited the emergence of a "hive mind" from a discussion of cybernetics and evolutionary biology.
In 1996, Francis Heylighen and Ben Goertzel
founded the Global Brain group, a discussion forum grouping most of the
researchers that had been working on the subject of the global brain to
further investigate this phenomenon. The group organized the first
international conference on the topic in 2001 at the Vrije Universiteit Brussel.
After a period of relative neglect, the Global Brain idea has
recently seen a resurgence in interest, in part due to talks given on
the topic by Tim O'Reilly, the Internet forecaster who popularized the term Web 2.0, and Yuri Milner, the social media investor. In January 2012, the Global Brain Institute (GBI) was founded at the Vrije Universiteit Brussel to develop a mathematical theory of the "brainlike" propagation of information across the Internet. In the same year, Thomas W. Malone and collaborators from the MIT Center for Collective Intelligence started to explore how the global brain could be "programmed" to work more effectively, using mechanisms of collective intelligence. The complexity scientist Dirk Helbing
and his NervousNet group have recently started developing a "Planetary
Nervous System", which includes a "Global Participatory Platform", as
part of the large-scale FuturICT project, thus preparing some of the groundwork for a Global Brain.
Criticism
A common criticism of the idea that humanity would become directed by a
global brain is that this would reduce individual diversity and freedom, and lead to mass surveillance. This criticism is inspired by totalitarian forms of government, as exemplified by George Orwell's character of "Big Brother". It is also inspired by the analogy between collective intelligence or swarm intelligence and insect societies,
such as beehives and ant colonies, in which individuals are essentially
interchangeable. In a more extreme view, the global brain has been
compared with the Borg, a race of collectively thinking cyborgs conceived by the Star Trek science fiction franchise.
Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence could result in human extinction. The consequences of a technological singularity and its potential
benefit or harm to the human race have been intensely debated.
Prominent technologists and academics dispute the plausibility of
a technological singularity and associated artificial intelligence
"explosion", including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, Gordon Moore, and Roger Penrose. One claim is that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones. Stuart J. Russell and Peter Norvig
observe that in the history of technology, improvement in a particular
area tends to follow an S curve: it begins with accelerating
improvement, then levels off without continuing upward into a hyperbolic
singularity.
History
Alan Turing,
often regarded as the father of modern computer science, laid a crucial
foundation for contemporary discourse on the technological singularity.
His pivotal 1950 paper "Computing Machinery and Intelligence" argued that a machine could, in theory, exhibit intelligent behavior equivalent to or indistinguishable from that of a human. However, machines capable of performing at or beyond a human level on
certain tasks do not require a technological singularity to have
occurred in order to be developed, nor do they necessarily imply the
possibility of such an occurrence, as demonstrated by events such as the
victory of IBM's Deep Blue supercomputer in a chess match with grandmaster Garry Kasparov in 1997.
The Hungarian–American mathematician John von Neumann (1903–1957) is the first known person to discuss a coming "singularity" in technological progress. Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this viewpoint.
In 1965, I. J. Good speculated that superhuman intelligence might bring about an "intelligence explosion":
Let an ultraintelligent machine be
defined as a machine that can far surpass all the intellectual
activities of any man however clever. Since the design of machines is
one of these intellectual activities, an ultraintelligent machine could
design even better machines; there would then unquestionably be an
'intelligence explosion', and the intelligence of man would be left far
behind. Thus the first ultraintelligent machine is the last invention
that man need ever make, provided that the machine is docile enough to
tell us how to keep it under control.
— Speculations Concerning the First Ultraintelligent Machine (1965)
The concept and the term "singularity" were popularized by Vernor Vinge:
first in 1983, in an article that claimed that, once humans create
intelligences greater than their own, there will be a technological and
social transition similar in some sense to "the knotted space-time at
the center of a black hole"; and then in his 1993 essay "The Coming Technological Singularity", in which he wrote that it would signal the end of the human era, as the
new superintelligence would continue to upgrade itself and advance
technologically at an incomprehensible rate, and he would be surprised
if it occurred before 2005 or after 2030.
Another significant contribution to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, predicting singularity by 2045.
Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. But with the increasing power of computers and other technologies, it
might eventually be possible to build a machine significantly more
intelligent than humans.
If superhuman intelligence is invented—through either the amplification of human intelligence
or artificial intelligence—it will, in theory, vastly surpass human
problem-solving and inventive skill. Such an AI is often called a seed
AI because, if an AI is created with engineering capabilities that match or
surpass those of its creators, it could autonomously improve its own
software and hardware to design an even more capable machine, which
could repeat the process in turn. This recursive self-improvement could
accelerate, potentially allowing enormous qualitative change before
reaching any limits imposed by the laws of physics or theoretical
computation. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.
A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of even the brightest and most gifted humans. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. I. J. Good, Vernor Vinge, and Ray Kurzweil
define the concept in terms of the technological creation of
superintelligence, arguing that it is difficult or impossible for present-day
humans to predict what human beings' lives would be like in a
post-singularity world.
The related concept of "speed superintelligence" describes an
artificial intelligence that can function like a human mind but much
faster. For example, given a millionfold increase in the speed of information
processing relative to that of humans, a subjective year would pass in
30 physical seconds. Such an increase in information processing speed could result in, or significantly contribute to, the singularity.
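As a rough check of this arithmetic, the short sketch below (Python; the millionfold speedup is the illustrative figure from the passage, not a measured value) converts one subjective year into physical time:

    # Rough check of the "speed superintelligence" arithmetic above.
    # Assumed speedup taken from the passage: one million times human speed.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds
    speedup = 1_000_000                      # illustrative factor from the text

    physical_seconds = SECONDS_PER_YEAR / speedup
    print(f"one subjective year ~ {physical_seconds:.0f} physical seconds")
    # prints roughly 32, consistent with the "30 physical seconds" quoted above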
Technology forecasters and researchers disagree about when, or
whether, human intelligence is likely to be surpassed. Some argue that
advances in artificial intelligence
(AI) may result in general reasoning systems that bypass human
cognitive limitations. Others believe that humans will evolve or
directly modify their biology so as to achieve radically greater
intelligence. A number of futures studies focus on scenarios that combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. Robin Hanson's 2016 book The Age of Em
describes a future in which human brains are scanned and digitized,
creating "uploads" or digital versions of human consciousness. In this
future, the development of these uploads may precede or coincide with
the emergence of superintelligent AI.
Variations
Non-AI singularity
Some
writers use "the singularity" in a broader way, to refer to any radical
changes in society brought about by new technology (such as molecular nanotechnology), although Vinge and other writers say that without superintelligence, such changes would not be a true singularity.
Predictions
Progress of AI performance on various benchmarks compared to human-level performance including computer vision (MNIST, ImageNet), speech recognition
(Switchboard), natural language understanding (SQuAD 1.1, MMLU, GLUE),
general language model evaluation (MMLU, Big-Bench, and GPQA), and
mathematical reasoning (MATH). Many models surpass human-level
performance (black solid line) by 2019, demonstrating significant
advancements in AI capabilities across different domains over the past
two decades.
Numerous dates have been predicted for the attainment of singularity.
In 1965, Good wrote that it was more probable than not that an ultra-intelligent machine would be built in the 20th century.
That computing capabilities for human-level AI would be available in supercomputers before 2010 was predicted in 1988 by Moravec, assuming that the then current rate of improvement continued.
The attainment of greater-than-human intelligence between 2005 and 2030 was predicted by Vinge in 1993.
Human-level AI around 2029 and the singularity in 2045 was predicted by Kurzweil in 2005. He reaffirmed these predictions in 2024 in The Singularity is Nearer.
Human-level AI by 2040, and intelligence far beyond human by 2050
was predicted in 1998 by Moravec, revising his earlier prediction.
A median confidence of 50% that human-level AI would be developed by 2040–2050 was the outcome of four informal polls of AI researchers, conducted in 2012 and 2013 by Bostrom and Müller.
In September 2025, a review of surveys of scientists and industry
experts from the previous 15 years found that most agreed that artificial general intelligence (AGI), a level well below technological singularity, will occur by 2100. A more recent analysis by AIMultiple reported, "Current surveys of AI researchers are predicting AGI around 2040".
Robin Hanson
has expressed skepticism of human intelligence augmentation, writing
that once the "low-hanging fruit" of easy methods for increasing human
intelligence have been exhausted, further improvements will become
increasingly difficult.
In conversation regarding human-level artificial intelligence with cognitive scientist Gary Marcus, computer scientist Grady Booch,
speaking skeptically, referred to the singularity as "sufficiently
imprecise, filled with emotional and historic baggage, and touches some
of humanity's deepest hopes and fears that it's hard to have a rational
discussion therein". Later in the conversation, Marcus, while more optimistic about the
progress of AI, agreed that any major advances would not happen as a
single event, but rather as a slow and gradual increase in reliability
and usefulness.
The possibility of an intelligence explosion depends on three
factors. The first accelerating factor is the new intelligence
enhancements made possible by each previous improvement. But as the
intelligences become more advanced, further advances will become more
and more complicated, possibly outweighing the advantage of increased
intelligence. Each improvement should generate at least one more
improvement, on average, for movement toward singularity to continue.
Finally, the laws of physics may eventually prevent further improvement.
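One way to read the middle condition is as a simple branching process: a cascade of improvements is sustained only if each improvement yields, on average, at least one further improvement. The sketch below is a minimal illustration of that threshold (the ratio r is a purely hypothetical parameter, not a value from the sources):

    # Expected number of new improvements in each generation, starting from a
    # single improvement, if each improvement spawns r further improvements
    # on average. The cascade grows only when r >= 1.
    def expected_improvements(r, generations):
        return [r ** n for n in range(generations + 1)]

    print(expected_improvements(1.2, 10))  # r > 1: the cascade keeps expanding
    print(expected_improvements(0.8, 10))  # r < 1: the cascade fizzles out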
There are two logically independent, but mutually reinforcing,
causes of intelligence improvements: increases in the speed of
computation and improvements to the algorithms used. The former is predicted by Moore's Law and the forecasted improvements in hardware, and is comparatively similar to previous technological advances. "Most
experts believe that Moore's law is coming to an end during this
decade", the AIMultiple report reads, but "quantum computing can be used to efficiently train neural networks", potentially working around any end to Moore's Law. But Schulman and Sandberg argue that software will present more complex challenges than simply
operating on hardware capable of running at human intelligence levels or
beyond.
A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning
conferences asked about the chance that "the intelligence explosion
argument is broadly correct". Of the respondents, 12% said it was "quite
likely", 17% said it was "likely", 21% said it was "about even", 24%
said it was "unlikely", and 26% said it was "quite unlikely".
Speed improvements
Both
for human and artificial intelligence, hardware improvements increase
the rate of future hardware improvements. Some upper limit on speed may
eventually be reached. Jeff Hawkins has said that a self-improving
computer system will inevitably run into limits on computing power: "in
the end there are limits to how big and fast computers can run. We would
end up in the same place; we'd just get there a bit faster. There would
be no singularity."
It is difficult to directly compare silicon-based hardware with neurons. But Anthony Berglas notes that computer speech recognition
is approaching human capabilities, and that this capability seems to
require 0.01% of the volume of the brain. This analogy suggests that
modern computer hardware is within a few orders of magnitude of being as
powerful as the human brain, as well as taking up much less space. The costs of training systems with deep learning may be larger.
The exponential growth in computing technology suggested by Moore's
law is commonly cited as a reason to expect a singularity in the
relatively near future, and a number of authors have proposed
generalizations of Moore's law. Computer scientist and futurist Hans
Moravec proposed in a 1998 book that the exponential growth curve could be extended back to earlier computing technologies before the integrated circuit.
Ray Kurzweil postulates a law of accelerating returns whereby the speed of technological change (and more generally, all evolutionary processes) increases exponentially, generalizing Moore's law in the same manner as
Moravec's proposal, and also including material technology (especially
as applied to nanotechnology) and medical technology. Between 1986 and 2007, machines' application-specific capacity to
compute information per capita roughly doubled every 14 months; the per
capita capacity of the world's general-purpose computers has doubled
every 18 months; the global telecommunication capacity per capita
doubled every 34 months; and the world's storage capacity per capita
doubled every 40 months. On the other hand, it has been argued that the global acceleration
pattern having a 21st-century singularity as its parameter should be
characterized as hyperbolic rather than exponential.
Kurzweil reserves the term "singularity" for a rapid increase in
artificial intelligence (as opposed to other technologies), writing:
"The Singularity will allow us to transcend these limitations of our
biological bodies and brains ... There will be no distinction,
post-Singularity, between human and machine". He also defines the singularity as the point when computer-based intelligences
significantly exceed the sum total of human brainpower, writing that
advances in computing before that "will not represent the Singularity"
because they do "not yet correspond to a profound expansion of our
intelligence."
Some singularity proponents argue its inevitability through
extrapolation of past trends, especially those pertaining to shortening
gaps between improvements to technology. In one of the first uses of the
term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:
One
conversation centered on the ever accelerating progress of technology
and changes in the mode of human life, which gives the appearance of
approaching some essential singularity in the history of the race beyond
which human affairs, as we know them, could not continue.
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies surmount it. He predicts paradigm shifts
will become increasingly common, leading to "technological change so
rapid and profound it represents a rupture in the fabric of human
history". Kurzweil believes that the singularity will occur by 2045. His predictions differ from Vinge's in that he predicts a gradual
ascent to the singularity, rather than Vinge's rapidly self-improving
superhuman intelligence.
Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's 2000 Wired magazine article "Why The Future Doesn't Need Us".
Algorithm improvements
Some intelligence technologies, like "seed AI", may also be able to make themselves not just faster but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.
The mechanism for a recursively self-improving set of algorithms
differs from an increase in raw computation speed in two ways. First, it
does not require external influence: machines designing faster hardware
would still require humans to create the improved hardware, or to
program factories appropriately. An AI rewriting its own source code could do so while contained in an AI box.
Second, as with Vernor Vinge's
conception of the singularity, it is much harder to predict the
outcome. While speed increases seem to be only a quantitative difference
from human intelligence, actual algorithm improvements would be
qualitatively different.
Substantial dangers are associated with an intelligence explosion
singularity originating from a recursively self-improving set of
algorithms. First, the goal structure of the AI might self-modify,
potentially causing the AI to optimise for something other than what was
originally intended. Second, AIs could compete for the resources humankind uses to survive. While not actively malicious, AIs would promote the goals of their
programming, not necessarily broader human goals, and thus might crowd
out humans.
Carl Shulman and Anders Sandberg
suggest that algorithm improvements may be the limiting factor for a
singularity; while hardware efficiency tends to improve at a steady
pace, software innovations are more unpredictable and may be
bottlenecked by serial, cumulative research. They suggest that in the
case of a software-limited singularity, intelligence explosion would
actually become more likely than with a hardware-limited singularity,
because in the software-limited case, once human-level AI is developed,
it could run serially on very fast hardware, and the abundance of cheap
hardware would make AI research less constrained. An abundance of accumulated hardware that can be unleashed once the
software figures out how to use it has been called "computing overhang".
Criticism
Linguist and cognitive scientist Steven Pinker
wrote in 2008: "There is not the slightest reason to believe in a
coming singularity. The fact that you can visualize a future in your
imagination is not evidence that it is likely or even possible. Look at
domed cities, jet-pack commuting, underwater cities, mile-high
buildings, and nuclear-powered automobiles—all staples of futuristic
fantasies when I was a child that have never arrived. Sheer processing
power is not a pixie dust that magically solves all your problems."
Jaron Lanier
denies that the singularity is inevitable: "I do not think the
technology is creating itself. It's not an autonomous process [...] The
reason to believe in human agency over technological determinism is that
you can then have an economy where people earn their own way and invent
their own lives. If you structure a society on not emphasizing
individual human agency, it's the same thing operationally as denying
people clout, dignity, and self-determination ... to embrace [the idea
of the Singularity] would be a celebration of bad data and bad
politics."
Philosopher and cognitive scientist Daniel Dennett
said in 2017: "The whole singularity stuff, that's preposterous. It
distracts us from much more pressing problems [...] AI tools that we
become hyper-dependent on—that is going to happen. And one of the
dangers is that we will give them more authority than they warrant."
Some critics suggest religious motivations for believing in the
singularity, especially Kurzweil's version. The buildup to the
singularity is compared to Christian end-times scenarios. The journalist Alex Beam called it "a Buck Rogers vision of the hypothetical Christian Rapture". John Gray has said, "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event".
In The New York Times, David Streitfeld questioned whether "it might manifest first and foremost—thanks, in part, to the bottom-line obsession of today’s Silicon Valley—as a tool to slash corporate America’s head count."
Astrophysicist and scientific philosopher Adam Becker
criticizes Kurzweil's concept of human mind uploads to computers on the
grounds that they are too fundamentally different and incompatible.
Skepticism of exponential growth
Theodore Modis holds that the singularity cannot happen. He claims that the "technological singularity" hypothesis, and especially Kurzweil's treatment of it, lack
scientific rigor; Kurzweil is alleged to mistake the logistic function
(S-function) for an exponential function, and to see a "knee" in an
exponential function where there can in fact be no such thing. In a 2021 article, Modis wrote that no milestones—breaks in historical
perspective comparable in importance to the Internet, DNA, the
transistor, or nuclear energy—had been observed in the previous 20
years, while five of them would have been expected according to the
exponential trend advocated by proponents of the technological
singularity.
AI researcher Jürgen Schmidhuber
has said that the frequency of subjectively "notable events" appears to
be approaching a 21st-century singularity, but cautioned readers to
take such plots of subjective events with a grain of salt: perhaps
differences in memory of recent and distant events create an illusion of
accelerating change where none exists.
Hofstadter
(2006) raises concern that Kurzweil is insufficiently rigorous, that an
exponential tendency of technology is not a scientific law like one of
physics, and that exponential curves have no "knees". Nonetheless, he did not rule out the singularity in principle in the distant future, and in light of ChatGPT and other recent advancements he has revised his opinion significantly toward expecting dramatic technological change in the near future.
Economist Robert J. Gordon points out that measured economic growth slowed around 1970 and slowed even further since the 2008 financial crisis, and argues that the economic data show no trace of a coming Singularity as imagined by I. J. Good.
In addition to general criticisms of the singularity concept,
several critics have raised issues with Kurzweil's iconic chart. One
line of criticism is that a log-log
chart of this nature is inherently biased toward a straight-line
result. Others identify selection bias in the points Kurzweil uses. For
example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily. Kurzweil has rebutted this by charting evolutionary events from 15
neutral sources and showing that they fit a straight line on a log-log chart. Kelly
(2006) argues that because the Kurzweil chart is constructed with the
x-axis showing time before the present, it always points to the
singularity being "now", for any date on which one would construct such a
chart, and shows this visually on Kurzweil's chart.
Technological limiting factors
Martin Ford postulates a "technology paradox": most routine jobs could be automated
with a level of technology inferior to that required for a singularity.
This would cause massive unemployment and plummeting consumer demand,
which would eliminate the incentive to invest in the technology required
to bring about the singularity. Job displacement is no longer limited
to the types of work traditionally considered "routine".
Theodore Modis and Jonathan Huebner argue that the rate of technological innovation has not only ceased to
rise but is actually now declining. Evidence for this decline is that
the rise in computer clock rates
is slowing, even while Moore's prediction of exponentially increasing
circuit density continues to hold. This is due to excessive heat buildup
from the chip, which cannot be dissipated quickly enough to prevent it
from melting when operating at higher speeds. Advances in speed may be
possible in the future by virtue of more power-efficient CPU designs and
multi-cell processors.
Microsoft co-founder Paul Allen has argued that there is a "complexity brake": the more progress science makes toward understanding intelligence, the
more difficult it becomes to make additional progress. A study of the
number of patents shows that human creativity does not show accelerating
returns, but in fact, as suggested by Joseph Tainter in The Collapse of Complex Societies, a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since. The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".
Potential impacts
Dramatic
changes in the rate of economic growth have occurred in the past
because of technological advancement. Based on population growth, the
economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. Since the Industrial Revolution,
the world's economic output has doubled every 15 years, 60 times faster
than during the agricultural era. If the rise of superhuman
intelligence causes a similar revolution, argues Robin Hanson, one would
expect the economy to double at least quarterly and possibly weekly.
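The doubling times quoted in this paragraph can be compared directly; the sketch below reproduces the arithmetic behind Hanson's extrapolation (the quoted doubling times are from the text, while the final step is an extrapolation rather than a measurement):

    # Doubling times of world economic output quoted above (in years).
    doubling_times = {
        "forager economy": 250_000,
        "agricultural economy": 900,
        "industrial economy": 15,
    }

    # Each transition shortened the doubling time by a large factor.
    eras = list(doubling_times.items())
    for (prev_name, prev_t), (name, t) in zip(eras, eras[1:]):
        print(f"{prev_name} -> {name}: doubling time shrank ~{prev_t / t:.0f}x")

    # Applying a comparable factor (~60-280x) to the current 15-year doubling
    # gives a doubling time between roughly a few weeks and a quarter.
    print(f"15 years / 60  ~ {15 * 365 / 60:.0f} days")
    print(f"15 years / 280 ~ {15 * 365 / 280:.0f} days")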
The term "technological singularity" reflects the idea that such
change may happen suddenly and that it is difficult to predict how the
resulting new world would operate. It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. Because AI is a major factor in singularity risk, several organizations
pursue a technical theory of aligning AI goal-systems with human
values, including the Future of Humanity Institute (until 2024), the Machine Intelligence Research Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.
Physicist Stephen Hawking
said in 2014: "Success in creating AI would be the biggest event in
human history. Unfortunately, it might also be the last, unless we learn
how to avoid the risks." Hawking believed that in the coming decades, AI could offer
"incalculable benefits and risks" such as "technology outsmarting
financial markets, out-inventing human researchers, out-manipulating
human leaders, and developing weapons we cannot even understand." He suggested that artificial intelligence should be taken more
seriously and that more should be done to prepare for the singularity:
So,
facing possible futures of incalculable benefits and risks, the experts
are surely doing everything possible to ensure the best outcome, right?
Wrong. If a superior alien civilisation sent us a message saying,
"We'll arrive in a few decades," would we just reply, "OK, call us when
you get here – we'll leave the lights on"? Probably not – but this is
more or less what is happening with AI.
Berglas (2008)
claims that there is no direct evolutionary motivation for AI to be
friendly to humans. Evolution has no inherent tendency to produce
outcomes valued by humans, and there is little reason to expect an
arbitrary optimisation process to promote an outcome desired by
humankind, rather than inadvertently leading to an AI behaving in a way
not intended by its creators. Anders Sandberg has elaborated on this, addressing various common counter-arguments. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them. Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.
Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:
When we create the first
superintelligent entity, we might make a mistake and give it goals that
lead it to annihilate humankind, assuming its enormous intellectual
advantage gives it the power to do so. For example, we could mistakenly
elevate a subgoal to the status of a supergoal. We tell it to solve a
mathematical problem, and it complies by turning all the matter in the
solar system into a giant calculating device, in the process killing the
person who asked the question.
According to Eliezer Yudkowsky,
a significant problem in AI safety is that unfriendly AI is likely to
be much easier to create than friendly AI. Both require large advances
in recursive optimisation process design, but friendly AI also requires
the ability to make goal structures invariant under self-improvement (or
the AI could transform itself into something unfriendly) and a goal
structure that aligns with human values and does not automatically
destroy the human race. An unfriendly AI, on the other hand, can
optimize for an arbitrary goal structure, which does not need to be
invariant under self-modification. Bill Hibbard (2014) proposes an AI design that avoids several dangers, including self-delusion, unintended instrumental actions, and corruption of the reward generator. He also discusses social impacts of AI and testing AI. His 2001 book Super-Intelligent Machines
advocates public education about AI and public control over AI. It also
proposes a simple design that is vulnerable to corruption of the reward
generator.
Schematic Timeline of Information and Replicators in the Biosphere: Gillings et al.'s "major evolutionary transitions" in information processing.
Amount of digital information worldwide (5×10^21 bytes) versus human genome information worldwide (10^19 bytes) in 2014
A 2016 Trends in Ecology & Evolution article argues that humanity is in the midst of a major evolutionary transition
that merges technology, biology, and society. This is due to digital
technology infiltrating the fabric of human society to a degree of often
life-sustaining dependence. The article says, "humans already embrace
fusions of biology and technology. We spend most of our waking time
communicating through digitally mediated channels [...] we trust
artificial intelligence with our lives through antilock braking in cars and autopilots
in planes... With one in three courtships leading to marriages in
America beginning online, digital algorithms are also taking a role in
human pair bonding and reproduction".
The article further argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity,
and culture and language). In the current stage of life's evolution,
the carbon-based biosphere has generated a system (humans) capable of
creating technology that will result in a comparable evolutionary transition.
The digital information created by humans has reached a similar
magnitude to biological information in the biosphere. Since the 1980s,
the quantity of digital information stored has doubled about every 2.5
years, reaching about 5 zettabytes in 2014 (5×10^21 bytes).
In biological terms, there are 7.2 billion humans on the planet,
each with a genome of 6.2 billion nucleotides. Since one byte can encode
four nucleotide pairs, the individual genomes of every human could be
encoded by approximately 1×10^19
bytes. The digital realm stored 500 times more information than this in
2014 (see figure). The total amount of DNA in all the cells on Earth is
estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth, it will rival the total information content in all the DNA in all the
cells on Earth in about 110 years. This would represent a doubling of
the amount of information stored in the biosphere in just 150 years.
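The comparison in this passage follows from straightforward arithmetic; the sketch below reproduces it using only the figures quoted above (because the growth rate is given as a 30–38% range, the result is a range of years rather than a single value):

    import math

    # Figures quoted in the paragraph above.
    digital_bytes_2014 = 5e21        # ~5 zettabytes of stored digital information
    humans = 7.2e9
    nucleotides_per_genome = 6.2e9
    bytes_per_nucleotide = 1 / 4     # one byte encodes four nucleotides
    total_dna_bytes = 1.325e37       # all DNA in all cells on Earth

    # All individual human genomes, in bytes (~1e19, ~500x less than digital storage).
    genome_bytes = humans * nucleotides_per_genome * bytes_per_nucleotide
    print(f"human genomes worldwide: {genome_bytes:.1e} bytes")
    print(f"digital / genome ratio in 2014: {digital_bytes_2014 / genome_bytes:.0f}x")

    # Years for digital storage to rival total DNA information at 30-38% annual growth.
    for growth in (0.30, 0.38):
        years = math.log(total_dna_bytes / digital_bytes_2014) / math.log(1 + growth)
        print(f"at {growth:.0%}/yr: ~{years:.0f} years")
    # roughly 110-135 years, in line with the "about 110 years" quoted above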
In 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz
chaired a meeting of leading computer scientists, artificial
intelligence researchers, and roboticists at the Asilomar conference
center in Pacific Grove, California. The goal was to discuss the impact
of the possibility that robots could become self-sufficient and able to
make their own decisions. They discussed the extent to which computers
and robots might acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.
Some machines are programmed with various forms of semi-autonomy,
including the ability to locate their own power sources and choose
targets to attack with weapons. Also, some computer viruses
can evade elimination and, according to scientists in attendance, could
therefore be said to have reached a "cockroach" stage of machine
intelligence. The conference attendees noted that self-awareness as
depicted in science fiction is probably unlikely, but that other
potential hazards and pitfalls exist.
Frank S. Robinson predicts that once humans achieve a machine
with the intelligence of a human, scientific and technological problems
will be tackled and solved with brainpower far superior to that of
humans. He notes that artificial systems are able to share data more
directly than humans, and predicts that this will result in a global
network of super-intelligence that dwarfs human capability. Robinson also discusses how vastly different the future would look after such an intelligence explosion.
Hard or soft takeoff
In
this sample recursive self-improvement scenario, humans modifying an
AI's architecture would be able to double its performance every three
years through, for example, 30 generations before exhausting all
feasible improvements (left). If instead the AI is smart enough to
modify its own architecture as well as human researchers can, its time
required to complete a redesign halves with each generation, and it
progresses all 30 feasible generations in six years (right).
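The two panels of this illustration reduce to a short sum over generations; the sketch below reproduces that arithmetic using the figures given in the caption (three years per redesign, 30 feasible generations):

    GENERATIONS = 30

    # Left panel: humans redesign the AI, each generation taking a fixed 3 years.
    human_driven_years = sum(3 for _ in range(GENERATIONS))               # 90 years

    # Right panel: the AI redesigns itself, and each redesign takes half as
    # long as the previous one (a geometric series approaching 6 years).
    self_improving_years = sum(3 * 0.5 ** g for g in range(GENERATIONS))

    print(f"human-driven:   {human_driven_years} years")
    print(f"self-improving: {self_improving_years:.2f} years")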
In a hard takeoff scenario, an artificial superintelligence rapidly
self-improves, "taking control" of the world (perhaps in a matter of
hours), too quickly for significant human-initiated error correction or
for a gradual tuning of the agent's goals. In a soft takeoff, the AI
still becomes far more powerful than humanity, but at a human-like pace
(perhaps on the order of decades), on a timescale where ongoing human
interaction and correction can effectively steer its development.
Ramez Naam
argues against a hard takeoff. He has pointed out that we already see
recursive self-improvement by superintelligences, such as corporations. Intel,
for example, has "the collective brainpower of tens of thousands of
humans and probably millions of CPU cores to... design better CPUs!" But
this has not led to a hard takeoff; rather, it has led to a soft
takeoff in the form of Moore's law. Naam further points out that the computational complexity of higher
intelligence may be much greater than linear, such that "creating a mind
of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."
J. Storrs Hall
believes that "many of the more commonly seen scenarios for overnight
hard takeoff are circular – they seem to assume hyperhuman capabilities
at the starting point of the self-improvement process" in order
for an AI to be able to make the dramatic, domain-general improvements
required for takeoff. Hall suggests that rather than recursively
self-improving its hardware, software, and infrastructure all on its
own, a fledgling AI would be better off specializing in one area where
it was most effective and then buying the remaining components on the
marketplace, because the quality of products on the marketplace
continually improves, and the AI would have a hard time keeping up with
the cutting-edge technology used by the rest of the world.
Ben Goertzel agrees with Hall's suggestion that a new human-level
AI would do well to use its intelligence to accumulate wealth. The AI's
talents might inspire companies and governments to disperse its
software throughout society. Goertzel is skeptical of a hard five-minute
takeoff but speculates that a takeoff from human to superhuman level on
the order of five years is reasonable. He calls this a "semihard
takeoff".
Max More
disagrees, arguing that if there were only a few superfast human-level
AIs, they would not radically change the world, as they would still
depend on other people to get things done and would still have human
cognitive constraints. Even if all superfast AIs worked on intelligence
augmentation, it is unclear why they would do better in a discontinuous
way than existing human cognitive scientists at producing superhuman
intelligence, although the rate of progress would increase. More further
argues that superintelligence would not transform the world overnight:
it would need to engage with existing, slow human systems to have
physical impact on the world. "The need for collaboration, for
organization, and for putting ideas into physical changes will ensure
that all the old rules are not thrown out overnight or even within
years."
Eric Drexler, one of the founders of nanotechnology, theorized in 1986 the possibility of cell repair devices, including ones operating within cells and using as yet hypothetical biological machines, allowing immortality via nanotechnology. According to Richard Feynman, his former graduate student and collaborator Albert Hibbs originally suggested to him (circa 1959) the idea of a medical
use for Feynman's theoretical micromachines. Hibbs suggested that
certain repair machines might one day be shrunk to the point that it
would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.
In 1988, Moravec predicted mind uploading,
the possibility of "uploading" a human mind into a human-like robot,
achieving quasi-immortality by extreme longevity via transfer of the
human mind between successive new robots as the old ones wear out;
beyond that, he predicts later exponential acceleration of subjective
experience of time leading to a subjective sense of immortality.
In 2005, Kurzweil suggested that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless.
He argues that technological advances in medicine would allow us to
continuously repair and replace defective components in our bodies,
prolonging life to an undetermined age. Kurzweil buttresses his argument by discussing current bio-engineering advances. He suggests somatic gene therapy;
after creating synthetic viruses with specific genetic information, the next
step would be to apply this technology to gene therapy, replacing human DNA
with synthesized genes.
Beyond merely extending the operational life of the physical body, Jaron Lanier
argues for a form of immortality called "Digital Ascension" that
involves "people dying in the flesh and being uploaded into a computer
and remaining conscious." This concept was central to the television series Upload.
History of the concept
A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet first hypothesized and mathematically modeled an intelligence explosion and its effects on humanity.
An early description of the idea was made in John W. Campbell's 1932 short story "The Last Evolution".
In his 1958 obituary for John von Neumann,
Ulam recalled a conversation with him about the "ever accelerating
progress of technology and changes in the mode of human life, which
gives the appearance of approaching some essential singularity in the
history of the race beyond which human affairs, as we know them, could
not continue."
In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.
In 1977, Hans Moravec
wrote an article with unclear publishing status where he envisioned a
development of self-improving thinking machines, a creation of
"super-consciousness, the synthesis of terrestrial life, and perhaps
jovian and martian life as well, constantly improving and extending
itself, spreading outwards from the solar system, converting non-life
into mind." The article describes the human mind uploading later covered in Moravec
(1988). The machines are expected to reach human level and then improve
themselves beyond that ("Most significantly of all, they [the machines]
can be put to work as programmers and engineers, with the task of
optimizing the software and hardware which make them what they are. The
successive generations of machines produced this way will be
increasingly smarter and more cost effective.") Humans will no longer be
needed, and their abilities will be overtaken by the machines: "In the
long run the sheer physical inability of humans to keep up with these
rapidly evolving progeny of our minds will ensure that the ratio of
people to machines approaches zero, and that a direct descendant of our
culture, but not our genes, inherits the universe." While the word
"singularity" is not used, the notion of human-level thinking machines
thereafter improving themselves beyond human level is there. In this
view, there is no intelligence explosion in the sense of a very rapid
intelligence increase once human equivalence is reached. An updated
version of the article was published in 1979 in Analog Science Fiction and Fact.
In 1981, Stanisław Lem published his science fiction novel Golem XIV.
It describes a military AI computer (Golem XIV) that obtains
consciousness and starts to increase its intelligence, moving toward
personal technological singularity. Golem XIV was originally created to
aid its builders in fighting wars, but as its intelligence advances to a
much higher level than that of humans, it stops being interested in the
military requirements because it finds them lacking internal logical
consistency.
Vernor Vinge addressed Good's intelligence explosion in the January 1983 issue of Omni
magazine. Vinge seems to have been the first to use the term
"singularity" (although not "technological singularity") in a way
specifically tied to the creation of intelligent machines:
We will soon create intelligences
greater than our own. When this happens, human history will have reached
a kind of singularity, an intellectual transition as impenetrable as
the knotted space-time at the center of a black hole, and the world will
pass far beyond our understanding. This singularity, I believe, already
haunts a number of science-fiction writers. It makes realistic
extrapolation to an interstellar future impossible. To write a story set
more than a century hence, one needs a nuclear war in between ... so
that the world remains intelligible.
In 1985, in "The Time Scale of Artificial Intelligence", AI researcher Ray Solomonoff
articulated mathematically the related notion of what he called an
"infinity point": if a research community of human-level self-improving
AIs take four years to double their own speed, then two years, then one
year and so on, their capabilities increase infinitely in finite time.
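Solomonoff's "infinity point" follows from a geometric series: if the first doubling takes four years and each later doubling takes half as long as the one before, the total time for any number of doublings stays below eight years while the speed multiplier grows without bound. A short sketch of that calculation:

    # Doubling times of 4, 2, 1, 0.5, ... years sum toward 4 / (1 - 1/2) = 8 years,
    # while the speed multiplier 2**n grows without bound.
    elapsed = 0.0
    speed = 1.0
    doubling_time = 4.0
    for _ in range(20):
        elapsed += doubling_time
        speed *= 2
        doubling_time /= 2
    print(f"after 20 doublings: ~{elapsed:.4f} years elapsed, speed x{speed:.3g}")
    # elapsed approaches 8 years; speed (and hence capability) diverges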
In 1986, Vinge published Marooned in Realtime,
a science-fiction novel where a few remaining humans traveling forward
in the future have survived an unknown extinction event that might well
be a singularity. In a short afterword, Vinge writes that an actual
technological singularity would not be the end of the human species: "of
course it seems very unlikely that the Singularity would be a clean
vanishing of the human race. (On the other hand, such a vanishing is the
timelike analog of the silence we find all across the sky.)".
In 1988, Vinge used the phrase "technological singularity" in the short-story collection Threats and Other Promises, writing in the introduction to his story "The Whirligig of Time": Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and soon. When
we raise our own intelligence and that of our creations, we are no
longer in a world of human-sized characters. At that point we have
fallen into a technological "black hole", a technological singularity.
In 1988, Hans Moravec published Mind Children, in which he predicted human-level intelligence in supercomputers by
2010, self-improving intelligent machines far surpassing human
intelligence later, human mind uploading into human-like robots later,
intelligent machines leaving humans behind, and space colonization. He
did not mention "singularity", though, and he did not speak of a rapid
explosion of intelligence immediately after the human level is achieved.
Nonetheless, the overall singularity tenor is there in predicting both
human-level artificial intelligence and further artificial intelligence
far surpassing humans later.
Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era", spread widely on the internet and helped popularize the idea. This article contains the statement, "Within thirty years, we will have
the technological means to create superhuman intelligence. Shortly
after, the human era will be ended." Vinge argues that science-fiction
authors cannot write realistic post-singularity characters who surpass
the human intellect, as the thoughts of such an intellect would be beyond
humans' ability to express.
Minsky's
1994 article says robots will "inherit the Earth", possibly with the
use of nanotechnology, and proposes to think of robots as human "mind
children", drawing the analogy from Moravec. The rhetorical effect of
the analogy is that if humans are comfortable passing the world on to their
biological children, they should be equally comfortable passing it on to robots,
their "mind children". Per Minsky, "we could design our 'mind-children'
to think a million times faster than we do. To such a being, half a
minute might seem as long as one of our years, and each hour as long as
an entire human lifetime." The feature of the singularity present in
Minsky is the development of superhuman artificial intelligence
("million times faster"), but there is no talk of sudden intelligence
explosion, self-improving thinking machines, or unpredictability beyond
any specific event, and the word "singularity" is not used.
Tipler's 1994 book The Physics of Immortality
predicts a future where super-intelligent machines build enormously
powerful computers, people are "emulated" in computers, life reaches
every galaxy, and people achieve immortality when they reach Omega Point. There is no talk of Vingean "singularity" or sudden intelligence
explosion, but intelligence much greater than human is there, as well as
immortality.
In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology.
In 2007, Yudkowsky suggested that many of the varied definitions
that have been assigned to "singularity" are mutually incompatible
rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories
past the arrival of self-improving AI or superhuman intelligence, which
Yudkowsky argues represents a tension with both I. J. Good's proposed
discontinuous upswing in intelligence and Vinge's thesis on
unpredictability.
In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University,
a nonaccredited private institute whose mission is "to educate, inspire
and empower leaders to apply exponential technologies to address
humanity's grand challenges." Funded by companies such as Google, Autodesk, and ePlanet Ventures, the organization runs an annual ten-week graduate program as well as smaller "executive" courses.