A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the most gifted human minds. Philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".
Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence
(AI) will probably result in general reasoning systems that lack human
cognitive limitations. Others believe that humans will evolve or
directly modify their biology to achieve radically greater intelligence. Several futures studies scenarios combine elements of both possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The first superintelligence might or might not emerge through an intelligence explosion or a technological singularity.
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence.
The first generally intelligent machines are likely to immediately hold
an enormous advantage in at least some forms of mental capability,
including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities.
Several scientists and forecasters have argued for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement because of the potential social impact of such technologies.
Artificial superintelligence
Artificial intelligence, especially foundation models, has made rapid progress, surpassing human capabilities in various benchmarks.
Philosopher David Chalmers argues that artificial general intelligence is a very likely path to artificial superintelligence (ASI). Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.
Concerning human-level equivalence, Chalmers argues that the
human brain is a mechanical system, and therefore ought to be emulatable
by synthetic materials. He also notes that human intelligence was able to biologically evolve,
making it more likely that human engineers will be able to recapitulate
this invention. Evolutionary algorithms, in particular, should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues
that new AI technologies can generally be improved on, and that this is
particularly likely when the invention can assist in designing new
technologies.
An AI system capable of self-improvement could enhance its own
intelligence, thereby becoming more efficient at improving itself. This
cycle of "recursive self-improvement" might cause an intelligence explosion, resulting in the creation of a superintelligence.
Computer components already greatly surpass human performance in
speed. Bostrom writes, "Biological neurons operate at a peak speed of
about 200 Hz, a full seven orders of magnitude slower than a modern
microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons
at no greater than 120 m/s, "whereas existing electronic processing
cores can communicate optically at the speed of light". Thus, the
simplest example of a superintelligence may be an emulated human mind
running on much faster hardware than the brain. A human-like reasoner
who could think millions of times faster than current humans would have a
dominant advantage in most reasoning tasks, particularly ones that
require haste or long strings of actions.
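The quoted ratios are simple arithmetic; a minimal check using the figures cited above (200 Hz versus roughly 2 GHz, and 120 m/s versus the speed of light):

    # Back-of-the-envelope check of the speed figures quoted above.
    import math

    NEURON_PEAK_HZ = 200            # biological neuron peak firing rate
    CPU_HZ = 2e9                    # ~2 GHz modern microprocessor clock
    AXON_SPEED_M_S = 120            # fastest axonal conduction velocity
    LIGHT_SPEED_M_S = 299_792_458   # optical signalling limit

    clock_ratio = CPU_HZ / NEURON_PEAK_HZ
    signal_ratio = LIGHT_SPEED_M_S / AXON_SPEED_M_S

    print(f"clock-speed ratio: {clock_ratio:.0e} "
          f"(~{math.log10(clock_ratio):.0f} orders of magnitude)")
    print(f"signal-speed ratio: {signal_ratio:.1e}")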
Another advantage of computers is modularity, that is, their size
or computational capacity can be increased. A non-human (or modified
human) brain could become much larger than a present-day human brain,
like many supercomputers. Bostrom also raises the possibility of collective superintelligence:
a large enough number of separate reasoning systems, if they
communicated and coordinated well enough, could act in aggregate with
far greater capabilities than any sub-agent.
Humans outperform non-human animals in large part because of new
or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.)
If there are other possible improvements to reasoning that would have a
similarly large impact, this makes it more likely that an agent can be
built that outperforms humans in the same fashion humans outperform
chimpanzees.
The above advantages hold for artificial superintelligence, but
it is not clear how many hold for biological superintelligence.
Physiological constraints limit the speed and size of biological brains
in many ways that are inapplicable to machine intelligence. As such,
writers on superintelligence have devoted much more attention to
superintelligent AI scenarios.
Projects
In 2024, Ilya Sutskever left OpenAI to cofound the startup Safe Superintelligence, which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles". Despite still offering no product, the startup became valued at $30 billion in February 2025.
In 2025, Meta created Meta Superintelligence Labs, a new AI division led by Alexandr Wang.
Biological superintelligence
Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence
and that this process is likely to continue. There is no
scientific consensus concerning either possibility and in both cases,
the biological change would be slow, especially relative to rates of
cultural change.
Selective breeding, nootropics, epigenetic modulation, and genetic engineering
could improve human intelligence more rapidly. Bostrom writes that if
we come to understand the genetic component of intelligence,
pre-implantation genetic diagnosis could be used to select for embryos
with as much as 4 points of IQ gain (if one embryo is selected out of
two), or with larger gains (e.g., up to 24.3 IQ points gained if one
embryo is selected out of 1000). If this process is iterated over many
generations, the gains could be greater by an order of magnitude.
Bostrom suggests that deriving new gametes from embryonic stem cells
could be used to iterate the selection process rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
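The selection figures Bostrom cites follow from the statistics of keeping the best of n draws. A minimal Monte Carlo sketch, assuming for illustration that the between-embryo additive genetic component of IQ is roughly normally distributed with a standard deviation of about 7.5 points (a value chosen here because it approximately reproduces the figures quoted above, not one taken from the source):

    # Expected IQ gain from selecting the best of n embryos, assuming (for
    # illustration only) that the between-embryo additive genetic component
    # of IQ is normally distributed with a standard deviation of ~7.5 points.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    def expected_gain(n_embryos, sd=7.5, trials=5_000):
        draws = rng.normal(0.0, sd, size=(trials, n_embryos))
        return draws.max(axis=1).mean()   # average best draw per batch

    for n in (2, 10, 100, 1000):
        print(f"best of {n:>4} embryos: ~{expected_gain(n):4.1f} IQ points gained")
    # Yields roughly 4 points for one-in-two selection and ~24 points for
    # one-in-a-thousand, in line with the figures above; iterating the
    # procedure across generations compounds these gains.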
Alternatively, collective intelligence might be constructed by
better organizing humans at present levels of individual intelligence.
Several writers have suggested that human civilization, or some aspect
of it (e.g., the Internet, or the economy), is coming to function like a
global brain with capacities far exceeding its component agents. A prediction market
is sometimes considered an example of a working collective
intelligence system, consisting of humans only (assuming algorithms are
not used to inform decisions).
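One mechanism behind such collective performance is easy to simulate. A minimal "wisdom of crowds" sketch (the error model and constants are illustrative assumptions, not data from any actual market): when many independent, unbiased but noisy probability estimates are averaged, the aggregate error is far smaller than the typical individual error.

    # Minimal "wisdom of crowds" illustration with assumed, illustrative numbers.
    import numpy as np

    rng = np.random.default_rng(seed=1)

    TRUE_PROBABILITY = 0.70    # the event's true probability (assumed)
    N_FORECASTERS = 500
    INDIVIDUAL_NOISE = 0.15    # each forecaster is unbiased but noisy (assumed)

    estimates = np.clip(
        rng.normal(TRUE_PROBABILITY, INDIVIDUAL_NOISE, N_FORECASTERS), 0.0, 1.0)

    individual_error = np.abs(estimates - TRUE_PROBABILITY).mean()
    aggregate_error = abs(estimates.mean() - TRUE_PROBABILITY)

    print(f"mean individual error: {individual_error:.3f}")
    print(f"aggregate (mean) error: {aggregate_error:.3f}")
    # With independent errors the aggregate error shrinks roughly as 1/sqrt(N),
    # one reason a large, well-coordinated group can outperform any member.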
A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain−computer interfaces.
However, Bostrom expresses skepticism about the scalability of the
first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem.
Forecasts
Most surveyed AI researchers expect machines to eventually be able to
rival humans in intelligence, though there is little consensus on when
this will likely happen.
In a 2022 survey, the median year by which respondents expected
"High-level machine intelligence" with 50% confidence was 2061. The
survey defined the achievement of high-level machine intelligence as
when unaided machines can accomplish every task better and more cheaply
than human workers.
In 2023, OpenAI leaders Sam Altman, Greg Brockman and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.
In 2025, the forecast scenario AI 2027 led by Daniel Kokotajlo predicted rapid progress in the automation of coding and AI research, followed by ASI. In September 2025, a review of surveys of scientists and industry
experts from the last 15 years reported that most agreed that artificial
general intelligence (AGI), a level well below a technological
singularity, would occur before the year 2100. A more recent analysis by AIMultiple reported that “Current surveys of AI researchers are predicting AGI around 2040”.
Design considerations
The design of superintelligent AI systems raises critical questions
about what values and goals these systems should have. Several proposals
have been put forward:
Value alignment proposals
Coherent extrapolated volition (CEV) – The AI should have the values upon which humans would converge if they were more knowledgeable and rational.
Moral rightness (MR) – The AI should be programmed to do what is
morally right, relying on its superior cognitive abilities to determine
ethical actions.
Moral permissibility (MP) – The AI should stay within the bounds of
moral permissibility while otherwise pursuing goals aligned with human
values (similar to CEV).
Bostrom elaborates on these concepts:
instead of implementing humanity's coherent extrapolated
volition, one could try to build an AI to do what is morally right,
relying on the AI's superior cognitive capacities to figure out just
which actions fit that description. We can call this proposal "moral
rightness" (MR)...
MR would also appear to have some disadvantages. It relies on the
notion of "morally right", a notoriously difficult concept, one with
which philosophers have grappled since antiquity without yet attaining
consensus as to its analysis. Picking an erroneous explication of "moral
rightness" could result in outcomes that would be morally very wrong...
One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways.
Recent developments
Since Bostrom's analysis, new approaches to AI value alignment have emerged:
Inverse Reinforcement Learning (IRL) – This technique aims to infer human preferences from observed behavior, potentially offering a more robust approach to value alignment (a minimal sketch follows this list).
Constitutional AI – Proposed by Anthropic, this involves training AI systems with explicit ethical principles and constraints.
Debate and amplification – These techniques, explored by OpenAI, use
AI-assisted debate and iterative processes to better understand and
align with human values.
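Of these approaches, inverse reinforcement learning is the easiest to illustrate concretely. The sketch below is a minimal toy model, not a description of any deployed system: the candidate actions, feature values, "true" weights, and Boltzmann-rational choice model are all assumptions made here for illustration. Given many observed choices, it recovers by maximum likelihood the reward weights that best explain them.

    # Minimal inverse reinforcement learning (IRL) toy: infer reward weights
    # from observed choices of a Boltzmann-rational agent. All quantities here
    # are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Each candidate action is described by two features, e.g. (progress, safety).
    actions = np.array([[1.0, 0.0],
                        [0.6, 0.6],
                        [0.0, 1.0]])
    true_weights = np.array([0.3, 1.2])   # the preferences we hope to recover

    def choice_probs(weights, beta=5.0):
        # Boltzmann-rational behaviour: softmax over linear rewards.
        rewards = actions @ weights
        exps = np.exp(beta * (rewards - rewards.max()))
        return exps / exps.sum()

    # Observe many demonstrations generated from the true preferences.
    demos = rng.choice(len(actions), size=2000, p=choice_probs(true_weights))

    def log_likelihood(weights):
        return np.log(choice_probs(weights)[demos]).sum()

    # Crude maximum-likelihood search over a grid of candidate weight vectors.
    grid = np.linspace(0.0, 2.0, 41)
    best = max(((w0, w1) for w0 in grid for w1 in grid),
               key=lambda w: log_likelihood(np.array(w)))
    print("recovered weights:", best, " true weights:", tuple(true_weights))

Real IRL systems face harder problems glossed over here, including sequential environments, imperfect demonstrations, and reward functions that are not simple linear combinations of known features.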
Transformer LLMs and ASI
The rapid advancement of transformer-based LLMs has led to
speculation about their potential path to ASI. Some researchers argue
that scaled-up versions of these models could exhibit ASI-like
capabilities:
Emergent abilities – As LLMs increase in size and complexity,
they demonstrate unexpected capabilities not present in smaller models.
In-context learning – LLMs show the ability to adapt to new tasks
without fine-tuning, potentially mimicking general intelligence.
Multi-modal integration – Recent models can process and generate various types of data, including text, images, and audio.
However, critics argue that current LLMs lack true understanding and
are merely sophisticated pattern matchers, raising questions about their
suitability as a path to ASI.
Other perspectives on artificial superintelligence
Additional viewpoints on the development and implications of superintelligence include:
Recursive self-improvement – I. J. Good
proposed the concept of an "intelligence explosion", where an AI system
could rapidly improve its own intelligence, potentially leading to
superintelligence.
Orthogonality thesis – Bostrom argues that an AI's level of
intelligence is orthogonal to its final goals, meaning a
superintelligent AI could have any set of motivations.
Instrumental convergence
– Certain instrumental goals (e.g., self-preservation, resource
acquisition) might be pursued by a wide range of AI systems, regardless
of their final goals.
Challenges and ongoing research
The pursuit of value-aligned AI faces several challenges:
Philosophical uncertainty in defining concepts like "moral rightness"
Technical complexity in translating ethical principles into precise algorithms
Potential for unintended consequences even with well-intentioned approaches
Current research directions include multi-stakeholder approaches to
incorporate diverse perspectives, developing methods for scalable
oversight of AI systems, and improving techniques for robust value
learning.
AI research is rapidly progressing towards superintelligence.
Addressing these design challenges remains crucial for creating ASI
systems that are both powerful and aligned with human interests.
The development of artificial superintelligence (ASI) has raised
concerns about potential existential risks to humanity. Researchers have
proposed various scenarios in which an ASI could pose a significant
threat:
Intelligence explosion and control problem
Some researchers argue that through recursive self-improvement, an
ASI could rapidly become so powerful as to be beyond human control. This
concept, known as an "intelligence explosion", was first proposed by I.
J. Good in 1965:
Let an ultraintelligent machine be
defined as a machine that can far surpass all the intellectual
activities of any man however clever. Since the design of machines is
one of these intellectual activities, an ultraintelligent machine could
design even better machines; there would then unquestionably be an
'intelligence explosion,' and the intelligence of man would be left far
behind. Thus the first ultraintelligent machine is the last invention
that man need ever make, provided that the machine is docile enough to
tell us how to keep it under control.
This scenario presents the AI control problem: how to create an ASI
that will benefit humanity while avoiding unintended harmful
consequences. Eliezer Yudkowsky argues that solving this problem is crucial before
ASI is developed, as a superintelligent system might be able to thwart
any subsequent attempts at control.
Unintended consequences and goal misalignment
Even with benign intentions, an ASI could potentially cause harm due
to misaligned goals or unexpected interpretations of its objectives.
Nick Bostrom provides a stark example of this risk:
When we create the first
superintelligent entity, we might make a mistake and give it goals that
lead it to annihilate humankind, assuming its enormous intellectual
advantage gives it the power to do so. For example, we could mistakenly
elevate a subgoal to the status of a supergoal. We tell it to solve a
mathematical problem, and it complies by turning all the matter in the
solar system into a giant calculating device, in the process killing the
person who asked the question.
Stuart Russell offers another illustrative scenario:
A system given the objective of
maximizing human happiness might find it easier to rewire human
neurology so that humans are always happy regardless of their
circumstances, rather than to improve the external world.
These examples highlight the potential for catastrophic outcomes even
when an ASI is not explicitly designed to be harmful, underscoring the
critical importance of precise goal specification and alignment.
Potential mitigation strategies
Researchers have proposed various approaches to mitigate risks associated with ASI:
Capability control – Limiting an ASI's ability to influence the world, such as through physical isolation or restricted access to resources.
Motivational control – Designing ASIs with goals that are fundamentally aligned with human values.
Ethical AI – Incorporating ethical principles and decision-making frameworks into ASI systems.
Oversight and governance – Developing robust international frameworks for the development and deployment of ASI technologies.
Despite these proposed strategies, some experts, such as Roman
Yampolskiy, argue that the challenge of controlling a superintelligent
AI might be fundamentally unsolvable, emphasizing the need for extreme
caution in ASI development.
Debate and skepticism
Not all researchers agree on the likelihood or severity of ASI-related existential risks. Some, like Rodney Brooks,
argue that fears of superintelligent AI are overblown and based on
unrealistic assumptions about the nature of intelligence and
technological progress. Others, such as Joanna Bryson, contend that anthropomorphizing AI systems leads to misplaced concerns about their potential threats.
Recent developments and current perspectives
The rapid advancement of LLMs and other AI technologies has
intensified debates about the proximity and potential risks of ASI.
While there is no scientific consensus, some researchers and AI
practitioners argue that current AI systems may already be approaching
AGI or even ASI capabilities.
LLM capabilities – Recent LLMs like GPT-4 have demonstrated
unexpected abilities in areas such as reasoning, problem-solving, and
multi-modal understanding, leading some to speculate about their
potential path to ASI.
Emergent behaviors – Studies have shown that as AI models increase
in size and complexity, they can exhibit emergent capabilities not
present in smaller models, potentially indicating a trend towards more
general intelligence.
Rapid progress – The pace of AI advancement has led some to argue
that we may be closer to ASI than previously thought, with potential
implications for existential risk.
As of 2024, AI skeptics such as Gary Marcus
caution against premature claims of AGI or ASI, arguing that current AI
systems, despite their impressive capabilities, still lack true
understanding and general intelligence. They emphasize the significant challenges that remain in achieving human-level intelligence, let alone superintelligence.
The debate surrounding the current state and trajectory of AI
development underscores the importance of continued research into AI
safety and ethics, as well as the need for robust governance frameworks
to manage potential risks as AI capabilities continue to advance.
An AI takeover is a fictional or hypothetical future event in which autonomous artificial intelligence
systems acquire the capability to override human decision-making. This
could be achieved through economic manipulation, infrastructure control,
or direct intervention, resulting in de facto governance. Scenarios
range from economic dominance, through the replacement of the entire human workforce by automation, to the violent takeover of the world by a robot uprising or rogue AI.
Stories of AI takeovers have been popular throughout science fiction.
Commentators argue that recent advancements in the field have
heightened concern about such scenarios. In public debate, prominent
figures such as Stephen Hawking have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
The traditional consensus among economists has been that
technological progress does not cause long-term unemployment. However,
recent innovation in the fields of robotics
and artificial intelligence has raised worries that human labor will
become obsolete, leaving some people in various sectors without jobs to
earn a living, leading to an economic crisis. Many small and medium-size businesses may also be driven out of
business if they cannot afford or license the latest robotic and AI
technology, and may need to refocus on areas or services that cannot
easily be automated in order to remain viable.
Technologies that may displace workers
AI technologies have been widely adopted in recent years, with adoption
growing ever since the industry expanded from a few million dollars in 1980
to billions of dollars by 1988, a span of just eight years. AI has also been tested and used to assist and sometimes replace people
in medical diagnosing, public administration procedures, car driving,
job selection procedures, military operations, and management of work
activities via digital platforms such as Uber and others. While these technologies have replaced some traditional workers, they
also create new opportunities. Industries that are most susceptible to
AI-driven automation include transportation, retail, and the military.
AI military technologies, for example, can reduce risk by enabling
remote operation. A 2024 study highlights that AI's ability to perform
routine and repetitive tasks poses significant risks of job
displacement, especially in sectors like manufacturing and
administrative support. Author Dave Bond argues that as AI technologies continue to develop and
expand, the relationship between humans and robots will change; they
will become closely integrated in several aspects of life. AI will
likely displace some workers while creating opportunities for new jobs
in other sectors, especially in fields where tasks are repeatable. AI is set to transform the global workforce by 2050, according to reports from PwC, McKinsey, and the World Economic Forum.
Researchers from Stanford's Digital Economy Lab report that, since the widespread adoption of generative AI in late
2022, early-career workers (ages 22–25) in the most AI-exposed
occupations have experienced a 13 percent relative decline in
employment—even after controlling for firm-level shocks—while overall
employment has continued to grow robustly. The study further finds that job losses are concentrated in roles where
AI automates routine tasks, whereas occupations that leverage AI to
augment human work have seen stable or increasing employment.
Computer-integrated manufacturing
uses computers to control the production process. This allows
individual processes to exchange information with each other and
initiate actions. Although manufacturing can be faster and less
error-prone through the integration of computers, the main advantage is
the ability to create automated manufacturing processes.
Computer-integrated manufacturing is used in automotive, aviation,
space, and shipbuilding industries.
The 21st century has seen a variety of skilled tasks partially taken
over by machines, including translation, legal research, and journalism.
Care work, entertainment, and other tasks requiring empathy, previously
thought safe from automation, are increasingly performed by robots and
AI systems.
Autonomous cars
An autonomous car
is a vehicle that is capable of sensing its environment and navigating
without human input. Many such vehicles are operational and others are
being developed, with legislation
rapidly expanding to allow their use. Obstacles to widespread adoption
of autonomous vehicles have included concerns about the resulting loss
of driving-related jobs in the road transport industry, and safety
concerns. On March 18, 2018, a pedestrian was struck and killed in Tempe, Arizona by an Uber self-driving car.
In the 2020s, automated content became more relevant due to technological advancements in AI models, such as ChatGPT, DALL-E, and Stable Diffusion.
In most cases, AI-generated content such as imagery, literature, and
music is produced through text prompts. These AI models are sometimes
integrated into creative programs.
AI-generated art may sample and conglomerate existing creative
works, producing results that appear similar to human-made content.
Low-quality AI-generated visual artwork is referred to as AI slop. Some artists use a tool called Nightshade that alters images to make them detrimental to the training of text-to-image models if scraped without permission, while still looking normal to humans. AI-generated images are a potential tool for scammers and those looking
to gain followers on social media, either to impersonate a famous
individual or group or to monetize their audience.
The New York Times has sued OpenAI, alleging copyright infringement related to the training and outputs of its AI models.
In 2024, Cambridge and Oxford
researchers reported that 57% of the internet's text is either
AI-generated or machine-translated using artificial intelligence.
Scientists such as Stephen Hawking
are confident that superhuman artificial intelligence is physically
possible, stating "there is no physical law precluding particles from
being organised in ways that perform even more advanced computations
than the arrangements of particles in human brains". According to Nick Bostrom,
a superintelligent machine would not necessarily be motivated by the
same emotional desire to collect power that often drives human beings
but might rather treat power as a means toward attaining its ultimate
goals; taking over the world would both increase its access to resources
and help to prevent other agents from stopping the machine's plans. As a
simplified example, a paperclip maximizer
designed solely to create as many paperclips as possible would want to
take over the world so that it can use all of the world's resources to
create as many paperclips as possible, and, additionally, prevent humans
from shutting it down or using those resources on things other than
paperclips.
A 2023 Reuters/Ipsos survey showed that 61% of American adults
feared AI could pose a threat to civilization. Philosopher Niels Wilde
disputes the common notion that artificial intelligence inherently
presents a looming threat to humanity, stating that these fears stem
from the perceived intelligence and lack of transparency of AI systems,
and reflect human characteristics projected onto the machine rather than
the machine's own properties. AI alignment research studies how to design AI systems so that they follow intended objectives.
Warnings
Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk
have expressed concerns about the possibility that AI could develop to
the point that humans could not control it, with Hawking theorizing that
this could "spell the end of the human race". Stephen Hawking said in 2014 that "Success in creating AI would be the
biggest event in human history. Unfortunately, it might also be the
last, unless we learn how to avoid the risks." Hawking believed that in
the coming decades, AI could offer "incalculable benefits and risks"
such as "technology outsmarting financial markets,
out-inventing human researchers, out-manipulating human leaders, and
developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence.
The signatories "believe that research on how to make AI systems robust
and beneficial is both important and timely, and that there are
concrete research directions that can be pursued today."
Some focus has been placed on the development of trustworthy AI. Three statements have been posed as to why AI is not inherently trustworthy:
1. An entity X is trustworthy only
if X has the right motivations, goodwill and/or adheres to moral
obligations towards the trustor;
2. AI systems lack motivations, goodwill, and moral obligations;
3. Therefore, AI systems cannot be trustworthy.
— Giacomo Zanotti et al.
There are additional considerations within this framework of trustworthy AI that go further into the fields of explainable artificial intelligence and respect for human privacy. Zanotti and colleagues argue that while no AI that meets all of the requirements of "trustworthiness" exists at present, one
may be developed in the future once clear ethical and technical
frameworks exist.
Robots revolt in R.U.R., a 1920 Czech play translated as "Rossum's Universal Robots"
AI takeover is a recurring theme in science fiction.
Fictional scenarios typically differ vastly from those hypothesized by
researchers in that they involve an active conflict between humans and
an AI or robots with anthropomorphic motives who see them as a threat or
otherwise have an active desire to fight humans, as opposed to the
researchers' concern of an AI that rapidly exterminates humans as a
byproduct of pursuing its goals. The idea is seen in Karel Čapek's R.U.R., which introduced the word robot in 1920, and can be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.
According to Toby Ord,
the idea that an AI takeover requires robots is a misconception driven
by the media and Hollywood. He argues that the most damaging humans in
history were not physically the strongest, but that they used words
instead to convince people and gain control of large parts of the world.
He writes that a sufficiently intelligent AI with access to the
internet could scatter backup copies of itself, gather financial and
human resources (via cyberattacks or blackmail), persuade people on a
large scale, and exploit societal vulnerabilities that are too subtle
for humans to anticipate.
The word "robot" from R.U.R. comes from the Czech word robota, meaning laborer or serf.
The 1920 play was a protest against the rapid growth of technology,
featuring manufactured "robots" with increasing capabilities who
eventually revolt. HAL 9000 (1968) and the original Terminator (1984) are two iconic examples of hostile AI in pop culture.
Contributing factors
Advantages of superhuman intelligence over humans
Nick Bostrom and others have expressed concern that an AI with the
abilities of a competent artificial intelligence researcher would be
able to modify its own source code and increase its own intelligence. If
its self-reprogramming leads to getting even better at being able to
reprogram itself, the result could be a recursive intelligence explosion
in which it would rapidly leave human intelligence far behind. Bostrom
defines a superintelligence as "any intellect that greatly exceeds the
cognitive performance of humans in virtually all domains of interest",
and enumerates some advantages a superintelligence would have if it
chose to compete against humans:
Technology research: A machine with superhuman scientific
research abilities would be able to beat the human research community to
milestones such as nanotechnology or advanced biotechnology
Strategizing: A superintelligence might be able to simply outwit human opposition
Social manipulation: A superintelligence might be able to recruit human support, or covertly incite a war between humans
Economic productivity: As long as a copy of the AI could produce
more economic wealth than the cost of its hardware, individual humans
would have an incentive to voluntarily allow the artificial general intelligence (AGI) to run a copy of itself on their systems
Hacking: A superintelligence could find new exploits in computers
connected to the Internet, and spread copies of itself onto those
systems, or might steal money to finance its plans
Sources of AI advantage
According to Bostrom, a computer program that faithfully emulates a
human brain, or that runs algorithms that are as powerful as the human
brain's algorithms, could still become a "speed superintelligence" if it
can think orders of magnitude faster than a human, due to being made of
silicon rather than flesh, or due to optimization increasing the speed
of the AGI. Biological neurons operate at about 200 Hz, whereas a modern
microprocessor operates at a speed of about 2 GHz. Human axons carry
action potentials at around 120 m/s, whereas computer signals travel
near the speed of light.
A network of human-level intelligences designed to network
together and share complex thoughts and memories seamlessly, able to
collectively work as a giant unified team without friction, or
consisting of trillions of human-level intelligences, would become a
"collective superintelligence".
More broadly, any number of qualitative improvements to a
human-level AGI could result in a "quality superintelligence", perhaps
resulting in an AGI as far above us in intelligence as humans are above
apes. The number of neurons in a human brain is limited by cranial
volume and metabolic constraints, while the number of processors in a
supercomputer can be indefinitely expanded. An AGI need not be limited
by human constraints on working memory,
and might therefore be able to intuitively grasp more complex
relationships than humans can. An AGI with specialized cognitive support
for engineering or computer programming would have an advantage in
these fields, compared with humans who did not evolve specialized
cognitive modules for them. Unlike humans, an AGI can spawn copies of
itself and tinker with its copies' source code to attempt to further
improve its algorithms.
Possibility of unfriendly AI preceding friendly AI
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an
AI's utility function could allow for many potentially harmful scenarios
that conform with a given ethical framework but not "common sense".
According to AI researcher Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would share the evolved adaptation that gives humans this common sense.
Odds of conflict
Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans.
The fear of cybernetic revolt is often based on interpretations
of humanity's history, which is rife with incidents of enslavement and
genocide. Such fears stem from a belief that competitiveness and
aggression are necessary in any intelligent being's goal system.
However, such human competitiveness stems from the evolutionary
background to our intelligence, where the survival and reproduction of
genes in the face of human and non-human competitors was the central
goal. According to AI researcher Steve Omohundro,
an arbitrary intelligence could have arbitrary goals: there is no
particular reason that an artificially intelligent machine (not sharing
humanity's evolutionary context) would be hostile—or friendly—unless its
creator programs it to be so and it is neither inclined nor able to
modify its programming. But the question remains: what would happen
if AI systems could interact and evolve (evolution in this context means
self-modification or selection and reproduction) and need to compete
over resources—would that create goals of self-preservation? AI's goal
of self-preservation could be in conflict with some goals of humans.
Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix,
arguing that it is more likely that any artificial intelligence
powerful enough to threaten humanity would probably be programmed not to
attack it. Pinker acknowledges the possibility of deliberate "bad
actors", but states that in the absence of bad actors, unanticipated
accidents are not a significant threat; Pinker argues that a culture of
engineering safety will prevent AI researchers from accidentally
unleashing malign superintelligence. In contrast, Yudkowsky argues that humanity is less likely to be
threatened by deliberately aggressive AIs than by AIs which were
programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility
functions (say, playing chess at all costs), leading them to seek
self-preservation and elimination of obstacles, including humans who
might turn them off.
Precautions
The AI control problem is the challenge of ensuring that advanced AI
systems reliably act according to human values and intentions, even as
they become more capable than humans. Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.
Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control,
which aims to reduce an AI system's capacity to harm humans or gain
control. An example of "capability control" is to research whether a
superintelligent AI could be successfully confined in an "AI box".
According to Bostrom, such capability control proposals are not
reliable or sufficient to solve the control problem in the long term,
but may potentially act as valuable supplements to alignment efforts.
Prevention through AI alignment
In the field of artificial intelligence (AI), alignment
aims to steer AI systems toward a person's or group's intended goals,
preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.
Moore's law is an example of futurology; it is a statistical collection of past and present trends with the goal of accurately extrapolating future trends.
Futures studies, futures research or futurology is the systematic, interdisciplinary and holistic study
of social and technological advancement, and other environmental
trends, often for the purpose of exploring how people will live and work
in the future. Predictive techniques, such as forecasting, can be applied, but contemporary futures studies scholars emphasize the importance of systematically exploring alternatives. In general, it can be considered as a branch of the social sciences
and an extension to the field of history. Futures studies (colloquially
called "futures" by many of the field's practitioners) seeks to
understand what is likely to continue and what could plausibly change.
Part of the discipline thus seeks a systematic and pattern-based
understanding of past and present, and explores the possibility of
future events and trends.
Unlike the physical sciences where a narrower, more specified
system is studied, futurology concerns a much bigger and more complex
world system. The methodology and knowledge are much less proven than in natural science and social sciences
like sociology and economics. There is a debate as to whether this
discipline is an art or science, and it is sometimes described as pseudoscience; nevertheless, the Association of Professional Futurists was formed in 2002, developing a Foresight Competency Model in 2017, and it is now possible to study it academically, for example at the FU Berlin in their master's course. To encourage inclusive and cross-disciplinary discussions about futures studies, UNESCO declared December 2 as World Futures Day.
Overview
Futurology is an interdisciplinary field
that aggregates and analyzes trends, with both lay and professional
methods, to compose possible futures. It includes analyzing the sources,
patterns, and causes of change and stability in an attempt to develop
foresight. Around the world the field is variously referred to as futures studies, futures research, strategic foresight, futuristics, futures thinking, futuring, and futurology. Futures studies and strategic foresight are the academic field's most commonly used terms in the English-speaking world.
Foresight was the original term and was first used in this sense by H. G. Wells in 1932. "Futurology" is a term common in encyclopedias, though it is used
almost exclusively by nonpractitioners today, at least in the
English-speaking world. "Futurology" is defined as the "study of the
future". The term was coined by German professor Ossip K. Flechtheim in the mid-1940s, who proposed it as a new branch of knowledge that would include a new science of probability.
This term has fallen from favor in recent decades because modern
practitioners stress the importance of alternative, plausible,
preferable and plural futures, rather than one monolithic future, and
the limitations of prediction and probability, versus the creation of
possible and preferable futures.
Three factors usually distinguish futures studies from the
research conducted by other disciplines (although all of these
disciplines overlap, to differing degrees). First, futures studies often
examines trends to compose possible, probable, and preferable futures
along with the role "wild cards" can play on future scenarios. Second,
futures studies typically attempts to gain a holistic or systemic view based on insights from a range of different disciplines, generally focusing on the STEEP categories of Social, Technological, Economic, Environmental and
Political. Third, futures studies challenges and unpacks the assumptions
behind dominant and contending views of the future. The future thus is
not empty but fraught with hidden assumptions. For example, many people
expect the collapse of the Earth's ecosystem in the near future, while
others believe the current ecosystem will survive indefinitely. A
foresight approach would seek to analyze and highlight the assumptions
underpinning such views.
As a field, futures studies expands on the research component, by
emphasizing the communication of a strategy and the actionable steps
needed to implement the plan or plans leading to the preferable future.
It is in this regard that futures studies evolves from an academic
exercise to a more traditional business-like practice, looking to better
prepare organizations for the future.
Futures studies does not generally focus on short term predictions such as interest rates over the next business cycle,
or of managers or investors with short-term time horizons. Most
strategic planning, which develops goals and objectives with time
horizons of one to three years, is also not considered futures. Plans
and strategies with longer time horizons that specifically attempt to
anticipate possible future events are definitely part of the field.
Learning about medium and long-term developments may at times be
observed from their early signs. As a rule, futures studies is generally concerned with changes of
transformative impact, rather than those of an incremental or narrow
scope.
The futures field also excludes those who make future predictions through professed supernatural means.
To complete a futures study, a domain is selected for
examination. The domain is the main idea of the project, or what the
outcome of the project seeks to determine. Domains can have a strategic
or exploratory focus and must narrow down the scope of the research. The
domain defines what will, and more importantly, what will not, be discussed in the
research.
Futures practitioners study trends focusing on STEEP (Social,
Technological, Economic, Environmental and Political) baselines. Baseline
exploration examines current STEEP environments to determine normal
trends, called baselines. Next, practitioners use scenarios to explore
different future outcomes. Scenarios examine how the future can be
different.
1. Collapse Scenarios seek to answer: What happens if the STEEP
baselines fall into ruin and no longer exist? How will that impact STEEP
categories?
2. Transformation Scenarios: explore futures with the baseline of
society transitioning to a "new" state. How are the STEEP categories
affected if society has a whole new structure?
3. New Equilibrium: examines an entire change to the structure of the
domain. What happens if the baseline changes to a "new" baseline within
the same structure of society?
History
Origins
The original visualization of the Tableau Economique by Quesnay, 1759
Advances in mathematics in the 17th century prompted attempts to calculate statistical and probabilistic concepts. Objectivity
became linked to knowledge that could be expressed in numerical data.
In 18th century Britain, investors established mathematical formulas to
assess the future value of an asset. In 1758 the French economist François Quesnay proceeded to establish a quantitative model of the entire economy, known as the Tableau Economique, so that future production could be planned. Meanwhile, Anne Robert Jacques Turgot first articulated the law of diminishing returns. In 1793 the Chinese bureaucrat Hong Liangji forecasted future population growth.
The Industrial Revolution was on the verge of spreading across the European continent when, in 1798, Thomas Malthus published An Essay on the Principle of Population as It Affects the Future Improvement of Society. Malthus questioned optimistic utopias and theories of progress. Malthus' fear about the survival of the human race is regarded as an early European dystopia. Starting in the 1830s, Auguste Comte developed theories of social evolution and claimed that metapatterns could be discerned in social change. In the 1870s Herbert Spencer blended Comte's theories with Charles Darwin's biological evolution theory. Social Darwinism became popular in Europe and the USA. By the late 19th century, the belief in human progress and the triumph of scientific invention prevailed, and science fiction became a popular future narrative. In 1888 William Morris published News from Nowhere, in which he theorized about how working time could be reduced.
Early 20th century
Title page of Wells's The War That Will End War (1914)
The British H. G. Wells
established the genre of "true science fiction" at the turn of the
century. Wells's works were supposedly based on sound scientific
knowledge. Wells became a forerunner of social and technological
forecasting. A series of techno-optimistic newspaper articles and books
were published between 1890 and 1914 in the US and Europe. After World War I the Italian Futurism movement led by Filippo Tommaso Marinetti glorified modernity. Soviet futurists, such as Vladimir Mayakovsky, David Burliuk, and Vasily Kamensky
struggled against official communist cultural policy throughout the
20th century. In Japan, futurists gained traction after World War I by
denouncing the Meiji era and glorifying speed and technological progress.
With the end of World War I interest in statistical forecasting intensified. In statistics, a forecast is a calculation of a future event's magnitude or probability. Forecasting calculates the future, while an estimate attempts to establish the value of an existing quantity. In the United States, President Hoover established a Research Committee on Social Trends in 1929 headed by William F. Ogburn. Past statistics were used to chart trends and project those trends into the future. Planning became part of the political decision-making process after World War II as capitalist and communist governments across the globe produced predictive forecasts. The RAND Corporation was founded in 1945 to assist the US military with post-war planning. The long-term planning of military and industrial Cold War efforts peaked in the 1960s when peace research emerged as a counter-movement and the positivist idea of "the one predictable future" was called into question.
1960s futures research
In 1954 Robert Jungk published a critique of the US and the supposed colonization of the future in Tomorrow Is Already Here. Fred L. Polak published Images of the Future in 1961; it has become a classic text on imagining alternative futures. In the 1960s, human-centered methods of futures studies were developed in Europe by Bertrand de Jouvenel and Johan Galtung. The positivist idea of a single future was challenged by scientists such as Thomas Kuhn, Karl Popper, and Jürgen Habermas. Futures studies established itself as an academic field when social scientists began to question positivism as a plausible theory of knowledge and instead turned to pluralism. At the 1967 First International Future Research Conference in Oslo, research on urban sprawl, hunger, and education was presented. In 1968 Olaf Helmer of the RAND Corporation
conceded "One begins to realize that there is a wealth of possible
futures and that these possibilities can be shaped in different ways".
Future studies worked on the basis that a multitude of possible futures
could be estimated, forecast, and manipulated.
The 1972 report The Limits to Growth established environmental degradation firmly on the political agenda. The environmental movement demanded that industry and policymakers consider long-term implications when planning and investing in power plants and infrastructure.
The 1990s saw a surge in futures studies in preparation for the United Nations' Millennium Development Goals, which were adopted in 2000 as international development goals
for the year 2015. Throughout the 1990s large technology foresight
programs were launched which informed national and regional strategies
on science, technology and innovation. Prior to the 1990s, "foresight" was rarely used to describe futures studies, futurology or forecasting. Foresight work relied in part on the methodologies developed by the French pioneers of "la prospective", including Bertrand de Jouvenel.
Foresight practitioners attempted to gather and evaluate evidence-based
insights for the future. Foresight research output focused on
identifying challenges and opportunities, which was presented as
intelligence at a strategic level. Practitioners tended to focus on
particular companies or economic regions, while making no attempt to
plan for specific problems.
In the 1990s several future studies practitioners attempted to
synthesize a coherent framework for the futures studies research field,
including Wendell Bell's two-volume work, The Foundations of Futures Studies, and Ziauddin Sardar's Rescuing all of our Futures.
Futures techniques or methodologies may be viewed as "frameworks for
making sense of data generated by structured processes to think about
the future". There is no single set of methods that are appropriate for all futures
research. Different futures researchers intentionally or unintentionally
promote use of favored techniques over a more structured approach.
Selection of methods for use on futures research projects has so far
been dominated by the intuition and insight of practitioners; a more
balanced selection of techniques can be achieved by treating foresight
as a process and by familiarity with the fundamental attributes of the
most commonly used methods.
Futurology is sometimes described by scientists as a pseudoscience, as it often deals with speculative scenarios and long-term predictions
that can be difficult to test using traditional scientific methods.
Futurists use a diverse range of forecasting and foresight methods.
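One of the most familiar of these methods is trend extrapolation, as in the Moore's-law example pictured earlier. A minimal sketch fits a log-linear (exponential-growth) trend to past observations and projects it forward; the data points below are illustrative placeholders, not measured values.

    # Minimal trend-extrapolation sketch in the spirit of Moore's law.
    # The data points are illustrative placeholders, not real measurements.
    import numpy as np

    years = np.array([1990, 1995, 2000, 2005, 2010, 2015, 2020])
    counts = np.array([1.2e6, 5.5e6, 4.2e7, 2.3e8, 1.2e9, 4.0e9, 1.6e10])

    # Linear regression on log2(counts) gives the trend's doubling time.
    slope, intercept = np.polyfit(years, np.log2(counts), deg=1)

    def project(year):
        return 2 ** (slope * year + intercept)

    print(f"estimated doubling time: {1.0 / slope:.1f} years")
    print(f"extrapolated value for 2030: {project(2030):.2e}")

Futures researchers treat such extrapolations as one baseline among many, to be challenged by scenarios, wild cards, and weak signals rather than taken as a prediction.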
The aim of an executive officer
when engaging in future management is to assist people and
organizations to comprehend the future. Executive officers who work for a
business organization will want to understand the future better than
the competitors of their employer. Therefore, future management is a systematic process intended to produce a leading edge.
Alternative possible futures
Futurists use scenarios to map alternative possible futures. Scenario
planning is a structured examination of a variety of hypothetical
futures. In the 21st century alternative possible future planning has
been a powerful tool for understanding social-ecological systems because
the future is uncertain. Questions are posed to scientists, business
owners, government officials, landowners, and nonprofit representatives
to establish a development plan for an urban area, region, industry, or
economy.
However, alternative possible futures lose credibility if
they are entirely utopian or dystopian. One stage of this work involves the
study of emerging issues, such as megatrends, trends and weak signals. Megatrends are major long-term phenomena that change slowly, are often interlinked and cannot be transformed in an instant. Many corporations use futurists as part of their risk management strategy, for horizon scanning and emerging issues analysis, and to identify wild cards. Understanding a range of possibilities can enhance the recognition of
opportunities and threats. Every successful and unsuccessful business
engages in futuring to some degree—for example in research and
development, innovation and market research, anticipating competitor
behavior and so on. Role-playing is another way that possible futures can be collectively explored, as in the research lab Civilization's Waiting Room.
Weak signals, the future sign and wild cards
In futures research "weak signals" may be understood as advanced,
noisy and socially situated indicators of change in trends and systems
that constitute raw informational material for enabling anticipatory
action. There is some confusion about the definition of weak signal by
various researchers and consultants. Sometimes it is referred to as
future-oriented information, sometimes more like emerging issues. The confusion
has been partly clarified with the concept 'the future sign', by
separating signal, issue and interpretation of the future sign.
A weak signal can be an early indicator of coming change, and an
example might also help clarify the confusion. On May 27, 2012, hundreds
of people gathered for a "Take the Flour Back" demonstration at
Rothamsted Research in Harpenden, UK, to oppose a publicly funded trial
of genetically modified wheat. This was a weak signal for a broader
shift in consumer sentiment against genetically modified foods. When
Whole Foods mandated the labeling of GMOs in 2013, this non-GMO idea had
already become a trend and was about to be a topic of mainstream
awareness.
"Wild cards" refer to low-probability and high-impact events
"that happen quickly" and "have huge sweeping consequences", and
materialize too quickly for social systems to effectively respond. Elina Hiltunen notes that wild cards are not new, though they have become more prevalent. One reason for this may be the increasingly fast pace of change. Oliver Markley proposed four types of wild cards:
Type I Wild Card: low probability, high impact, high credibility
Type II Wild Card: high probability, high impact, low credibility
Type III Wild Card: high probability, high impact, disputed credibility
Type IV Wild Card: high probability, high impact, high credibility
He posits that it is important to track the emergence of "Type II
Wild Cards" that have a high probability of occurring, but low
credibility that they will happen. This focus is especially important to
note because it is often difficult to persuade people to accept
something they do not believe is happening, until they see the wild
card. An example is climate change. This hypothesis has gone from Type I
(high impact and high credibility, but low probability where science
was accepted and thought unlikely to happen) to Type II (high
probability, high impact, but low credibility as policy makers and
lobbyists push back against the science), to Type III (high probability,
high impact, disputed credibility) — at least for most people: There
are still some who probably will not accept the science until the
Greenland ice sheet has completely melted and sea level has risen by the
estimated seven meters.
This concept may be embedded in standard foresight projects and
introduced into anticipatory decision-making activity in order to
increase the ability of social groups to adapt to surprises arising in
turbulent business environments. Such sudden and unique incidents might
constitute turning points in the evolution of a certain trend or system.
Wild cards may or may not be announced by weak signals, which are
incomplete and fragmented data from which relevant foresight information
might be inferred. Sometimes, mistakenly, wild cards and weak signals
are considered as synonyms, which they are not. One of the most often cited examples of a wild card event in recent
history is 9/11. Nothing had happened in the past that could point to
such a possibility and yet it had a huge impact on everyday life in the
United States, from simple tasks like how to travel via airplane to
deeper cultural values. Wild card events might also be natural
disasters, such as Hurricane Katrina,
which can force the relocation of huge populations and wipe out entire
crops or completely disrupt the supply chain of many businesses.
Although wild card events cannot be predicted, after they occur it is
often easy to reflect back and convincingly explain why they happened.
Near-term predictions
A long-running tradition in various cultures, and especially in the
media, involves various spokespersons making predictions for the
upcoming year at the beginning of the year. These predictions
are thought-provokers, sometimes based on current
trends in culture (music, movies, fashion, politics) and sometimes
hopeful guesses about what major events might take place
over the course of the next year.
When predicted events fail to take place, the authors of the predictions
may claim that they misinterpreted the "signs" and omens they observed. Marketers
have increasingly embraced futures studies in an effort to gain an
advantage in an increasingly competitive marketplace with fast production cycles.
Trend analysis and forecasting
Megatrends
Trends come in different sizes. A megatrend extends over many
generations, and in cases of climate, megatrends can cover periods prior
to human existence. They describe complex interactions between many
factors. The increase in population from the palaeolithic
period to the present provides an example. Today's megatrends are likely to
produce greater change than any that came before, because technology is
causing trends to unfold at an accelerating pace. The concept was popularized by the 1982 book Megatrends by futurist John Naisbitt.
Potential trends
Possible new trends emerge from innovations, projects, beliefs, or
actions and activism that have the potential to grow and eventually go
mainstream in the future.
Potential future scenarios
Among potential future scenarios, s-risks
(short for risks of astronomical suffering) highlight the importance of
considering outcomes where advanced technologies or large-scale systems
result in immense suffering. These risks may arise from unintended
consequences, such as poorly aligned artificial intelligence, or
deliberate actions, like malicious misuse of technology. Addressing
s-risks involves ethical foresight and robust frameworks to prevent
scenarios where suffering could persist or multiply across vast scales,
including in space exploration or simulated realities. This focus expands the scope of future studies, emphasizing not just survival but the quality of life in possible futures.
Branching trends
Very often, trends relate to one another the same way as a tree-trunk
relates to branches and twigs. For example, a well-documented movement
toward equality between men and women might represent a branch trend.
The trend toward reducing differences in the salaries of men and women
in the Western world could form a twig on that branch.
Life cycle of a trend
Understanding the technology adoption cycle helps futurists monitor
trend development. Trends start as weak signals, with brief mentions in
fringe media outlets, online discussions, or blog posts, often by
innovators. As these ideas, projects, beliefs or technologies gain
acceptance, they move into the phase of early adopters.
In the beginning of a trend's development, it is difficult to tell if it
will become a significant trend that creates changes or merely a trendy
fad that fades into forgotten history. Trends will emerge as initially
unconnected dots but eventually coalesce into persistent change.
Consumption trend development changed significantly during the 19th and 20th centuries, as developed countries came to be led by a meritocracy rather than an aristocracy.
Consumers who are able to pay for a product that is available for
purchase do not necessarily take into account the lifestyle choices of
high-income earners. A trend may therefore bubble up or trickle down.
When it comes to the diffusion of innovation and the technology adoption life cycle, various tools are used, including meme theory and the tipping point.
Gartner created their hype cycle
to illustrate the phases a technology moves through as it grows from
research and development to mainstream adoption. The unrealistic
expectations and subsequent disillusionment that virtual reality
experienced in the 1990s and early 2000s is an example of the middle
phases encountered before a technology can begin to be integrated into
society.
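The hype cycle can be read as a fixed sequence of phases that a technology passes through. The sketch below uses Gartner's published phase names; the helper function and example are illustrative assumptions rather than part of Gartner's methodology.

```python
from enum import Enum

class HypePhase(Enum):
    """Gartner hype cycle phases, in the order a technology moves through them."""
    INNOVATION_TRIGGER = 1
    PEAK_OF_INFLATED_EXPECTATIONS = 2
    TROUGH_OF_DISILLUSIONMENT = 3
    SLOPE_OF_ENLIGHTENMENT = 4
    PLATEAU_OF_PRODUCTIVITY = 5

def next_phase(phase: HypePhase) -> HypePhase:
    """Advance to the next phase; a technology stays on the plateau once it arrives."""
    return HypePhase(min(phase.value + 1, HypePhase.PLATEAU_OF_PRODUCTIVITY.value))

# Example: 1990s virtual reality sliding from inflated expectations into disillusionment
print(next_phase(HypePhase.PEAK_OF_INFLATED_EXPECTATIONS))
# HypePhase.TROUGH_OF_DISILLUSIONMENT
```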
Education
Education in the field of futures studies has taken place for some
time. Beginning in the United States in the 1960s, it has since
developed in many different countries. Futures education encourages the
use of concepts, tools and processes that allow students to think
long-term, consequentially, and imaginatively. It generally helps
students to:
conceptualize more just and sustainable human and planetary futures.
develop knowledge and skills of methods and tools used to help
people understand, map, and influence the future by exploring probable
and preferred futures.
understand the dynamics and influence that human, social and ecological systems have on alternative futures.
conscientize responsibility and action on the part of students toward creating better futures.
Thorough documentation of the history of futures education exists, for example in the work of Richard A. Slaughter (2004), David Hicks, and Ivana Milojević, to name a few.
While futures studies remains a relatively new academic
tradition, numerous tertiary institutions around the world teach it.
These vary from small programs, or universities with just one or two
classes, to programs that offer certificates and incorporate futures
studies into other degrees (for example in planning,
business, environmental studies, economics, development studies, and
science and technology studies). Various formal master's-level programs
exist on six continents. Finally, doctoral dissertations around the
world have incorporated futures studies (see e.g. Rohrbeck, 2010; von der Gracht, 2008; Hines, 2012). A recent survey documented approximately 50 cases of futures studies at the tertiary level.
A Futures Studies program is offered at Tamkang University,
Taiwan. Futures Studies is a required course at the undergraduate
level, with between three and five thousand students taking classes on
an annual basis. Housed in the Graduate Institute of Futures Studies is
an MA Program. Only ten students are accepted annually in the program.
Associated with the program is the Journal of Futures Studies.
The longest-running futures studies program in North America was established in 1975 at the University of Houston–Clear Lake. It moved to the University of Houston
in 2007, where the degree was renamed Foresight. The program was
established on the belief that if history is studied and taught in an
academic setting, then so should the future. Its mission is to prepare
professional futurists. The curriculum incorporates a blend of the
essential theory, a framework and methods for doing the work, and a
focus on application for clients in business, government, nonprofits,
and society in general.
As of 2003, over 40 tertiary education establishments around the
world were delivering one or more courses in futures studies. The World Futures Studies Federation has a comprehensive survey of global futures programs and courses. The
Acceleration Studies Foundation maintains an annotated list of primary
and secondary graduate futures studies programs.
A MSocSc and PhD program in Futures Studies is offered at the University of Turku, Finland.
The University of Stellenbosch Business School in South Africa
offers a PGDip in Future Studies as well as a MPhil in Future Studies
degree.
Applications of foresight and specific fields
General applicability and use of foresight products
Several corporations and government agencies use foresight products
to both better understand potential risks and prepare for potential
opportunities as an anticipatory approach. Several government agencies
publish material for internal stakeholders as well as make that material
available to the broader public. Examples of this include the US
Congressional Budget Office's long-term budget projections, the National Intelligence Council, and the United Kingdom Government Office for Science. Much of this material is used by policy makers to inform policy
decisions and by government agencies to develop long-term plans. Several
corporations, particularly those with long product development
lifecycles, use foresight and future studies products and practitioners
in the development of their business strategies. The Shell Corporation
is one such entity. Foresight professionals and their tools are increasingly being used in
both the private and public areas to help leaders deal with an
increasingly complex and interconnected world.
Imperial cycles and world order
Imperial cycles represent an "expanding pulsation" of a "mathematically describable" macro-historical trend.
Chinese philosopher Kang Youwei and French demographer Georges Vacher de Lapouge
stressed in the late 19th century that the trend cannot proceed
indefinitely on the finite surface of the globe. The trend is bound to
culminate in a world empire. Kang Youwei predicted that the matter would
be decided in a contest between Washington and Berlin; Vacher de Lapouge
foresaw this contest as being between the United States and Russia and
wagered that the odds were in the United States' favour. Both published their futures studies before H. G. Wells introduced the science of the future in his Anticipations (1901).
Four later anthropologists—Hornell Hart, Raoul Naroll, Louis Morano, and Robert Carneiro—researched
the expanding imperial cycles. They reached the same conclusion that a
world empire is not only pre-determined but close at hand and attempted
to estimate the time of its appearance.
Education
As foresight has expanded to include a broader range of social
concerns, all levels and types of education have been addressed,
including formal and informal education. Many countries are beginning to
implement Foresight in their Education policy. A few programs are
listed below:
Finland's FinnSight 2015 – Implementation began in 2006, and though the
effort was not referred to as "foresight" at the time, it displays the
characteristics of a foresight program.
Singapore's Ministry of Education Master plan for Information Technology in Education – This third Masterplan builds on the first and second
plans to transform learning environments and equip students to compete in
a knowledge economy.
The World Future Society, founded in 1966, is the largest and
longest-running community of futurists in the world. WFS established and
built futurism from the ground up—through publications, global summits,
and advisory roles to world leaders in business and government.
By the early 2000s, educators began to independently institute
futures studies (sometimes referred to as futures thinking) lessons in
K-12 classroom environments. To meet the need, non-profit futures organizations designed curriculum
plans to supply educators with materials on the topic. Many of the
curriculum plans were developed to meet common core standards. Futures studies education methods for youth typically include age-appropriate collaborative activities, games, systems thinking and scenario building exercises.
There are several organizations devoted to advancing foresight and futures studies worldwide. Teach the Future
emphasizes foresight educational practices appropriate for K-12
schools. Warmer Sun Education is a global online learning community for
K-12 students and their parents to learn about exponential progress, emerging technologies and their applications, and to explore possible pathways to solving humanity's grand challenges.
The University of Houston has a Master's (MS) level graduate program
through the College of Technology as well as a certificate program for
those interested in advanced studies. The Department of Political
Science and College of Social Sciences at the University of Hawaii at Manoa
house the Hawaii Research Center for Futures Studies, which offers a
Master's (MA) in addition to a Doctorate (PhD).
Science fiction
Wendell Bell and Ed Cornish acknowledge science fiction as a catalyst to future studies, conjuring up visions of tomorrow. Science fiction's potential to provide an "imaginative social vision"
is its contribution to futures studies and public perspective.
Productive sci-fi presents plausible, normative scenarios. Jim Dator attributes the foundational concepts of "images of the
future" to Wendell Bell, for clarifying Fred Polak's concept in Images
of the Future, as it applies to futures studies. Similar to futures studies' scenarios thinking, empirically supported
visions of the future are a window into what the future could be.
However, unlike futures studies, most science fiction works present a
single alternative, unless the narrative deals with multiple timelines
or alternative realities, as in the works of Philip K. Dick and a multitude of small- and big-screen works. Pamela Sargent states, "Science fiction reflects attitudes typical of
this century." She gives a brief history of impactful sci-fi
publications, like The Foundation Trilogy by Isaac Asimov, and Starship Troopers by Robert A. Heinlein. Alternate perspectives validate sci-fi as part of the fuzzy "images of the future".
Brian David Johnson
is a futurist and author who uses science fiction to help build the
future. He has been a futurist at Intel, and is now the resident
futurist at Arizona State University. "His work is called 'future
casting'—using ethnographic field studies, technology research, trend
data, and even science fiction to create a pragmatic vision of consumers
and computing." Brian David Johnson has developed a practical guide to
using science fiction as a tool for futures studies. Science fiction prototyping
combines the past with the present, including interviews with notable
science fiction authors to provide the tools needed to "design the
future with science fiction."
Science Fiction Prototyping has five parts:
Pick your science concept and build an imaginative world
The scientific inflection point
The consequences, for better or worse or both, of the science or technology on the people and your world
The human inflection point
Reflection, what did we learn?
"A full Science Fiction Prototyping (SFP) is 6–12 pages long, with a
popular structure being; an introduction, background work, the fictional
story (the bulk of the SFP), a short summary and a summary
(reflection). Most often science fiction prototypes extrapolate current
science forward and, therefore, include a set of references at the end."
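Because the prototyping structure described above is fixed (five parts plus references), an SFP outline can be captured in a small data structure for workshop use. The class and field names in this minimal sketch are assumptions for illustration, not Johnson's own notation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SciFiPrototype:
    """Outline of a science fiction prototype following the five-part structure."""
    world_and_concept: str      # 1. pick a science concept and build an imaginative world
    scientific_inflection: str  # 2. the scientific inflection point
    consequences: str           # 3. effects of the science or technology on people and the world
    human_inflection: str       # 4. the human inflection point
    reflection: str             # 5. what did we learn?
    references: List[str] = field(default_factory=list)  # SFPs usually cite current science

    def outline(self) -> str:
        """Render the prototype as a short numbered outline."""
        parts = [self.world_and_concept, self.scientific_inflection,
                 self.consequences, self.human_inflection, self.reflection]
        return "\n".join(f"{i}. {p}" for i, p in enumerate(parts, start=1))
```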
Ian Miles reviews The New Encyclopedia of Science Fiction,
identifying ways Science Fiction and Futures Studies "cross-fertilize,
as well as the ways in which they differ distinctly." Science Fiction
cannot be simply considered fictionalized Futures Studies. It may have
aims other than foresight or "prediction, and be no more concerned with
shaping the future than any other genre of literature." It should not be understood as an explicit pillar of futures studies,
because it does not consistently integrate futures research. Additionally,
Dennis Livingston, a literature and Futures journal critic, says, "The
depiction of truly alternative societies has not been one of science
fiction's strong points, especially" preferred, normative visions. The strengths of the genre as a form of futurist thinking are discussed
by Tom Lombardo, who argues that select science fiction "combines a
highly detailed and concrete level of realism with theoretical
speculation on the future", "addresses all the main dimensions of the
future and synthesizes all these dimensions into integrative visions of
the future", and "reflects contemporary and futurist thinking",
therefore it "can be viewed as the mythology of the future."
It is notable that although there are no hard limits on horizons
in future studies and foresight efforts, typical future horizons
explored are within the realm of the practical and do not span more than
a few decades. Nevertheless, there are hard science fiction works that can
serve as visioning exercises spanning longer periods of time when
the topic operates on a significant time scale, as in the case of Kim Stanley Robinson's Mars trilogy, which deals with the terraforming of Mars and extends roughly two centuries forward into the early 23rd century. In fact, there is some overlap between science fiction writers and professional futurists, as in the case of David Brin. Arguably, the work of science fiction authors has seeded many ideas
that have later been developed (be it technological or social in
nature)—from early works of Jules Verne and H.G. Wells to the later Arthur C. Clarke and William Gibson. Beyond literary works, futures studies and futurists have influenced
film and TV works. The 2002 movie adaptation of Philip K. Dick's short
story, Minority Report, employed a group of consultants, including futurist Peter Schwartz, to build a realistic vision of the future. TV shows such as HBO's Westworld and Channel 4/Netflix's Black Mirror
follow many of the rules of futures studies to build the world, the
scenery and storytelling in a way futurists would in experiential
scenarios and works.
Science Fiction novels for Futurists:
William Gibson, Neuromancer, Ace Books, 1984. (Pioneering cyberpunk novel)
Kim Stanley Robinson, Red Mars, Spectra, 1993. (Story about the founding of a colony on Mars)
Bruce Sterling, Heavy Weather, Bantam, 1994. (Story about a world with drastically altered climate and weather)
Iain Banks' Culture novels (Space operas in the distant future with thoughtful treatments of advanced AI)
Government agencies
Several governments have formalized strategic foresight agencies to
encourage long-range strategic societal planning, the most notable being
the governments of Singapore, Finland, and the United Arab Emirates.
Other governments with strategic foresight agencies include Canada's
Policy Horizons Canada and Malaysia's Malaysian Foresight Institute.
The Singapore government's Centre for Strategic Futures
(CSF) is part of the Strategy Group within the Prime Minister's Office.
Their mission is to position the Singapore government to navigate
emerging strategic challenges and harness potential opportunities. Singapore's early formal efforts in strategic foresight began in 1991
with the establishment of the Risk Detection and Scenario Planning
Office in the Ministry of Defence. In addition to the CSF, the Singapore government has established the
Strategic Futures Network, which brings together deputy secretary-level
officers and foresight units across the government to discuss emerging
trends that may have implications for Singapore.
Since the 1990s, Finland has integrated strategic foresight within the parliament and Prime Minister's Office. The government is required to present a "Report of the Future" each
parliamentary term for review by the parliamentary Committee for the
Future. Led by the Prime Minister's Office, the Government Foresight
Group coordinates the government's foresight efforts. Futures research is supported by the Finnish Society for Futures
Studies (established in 1980), the Finland Futures Research Centre
(established in 1992), and the Finland Futures Academy (established in
1998) in coordination with foresight units in various government
agencies.
The annual Dubai Future Forum conference (2024)
In the United Arab Emirates, Sheikh Mohammed bin Rashid, Vice
President and Ruler of Dubai, announced in September 2016 that all
government ministries were to appoint Directors of Future Planning.
Sheikh Mohammed described the UAE Strategy for the Future as an
"integrated strategy to forecast our nation's future, aiming to
anticipate challenges and seize opportunities". The Ministry of Cabinet Affairs and Future (MOCAF) is mandated with
crafting the UAE Strategy for the Future and is responsible for the
portfolio of the future of the UAE. Since 2022, the UAE has hosted the Dubai Future Forum at the Museum of the Future, which it claims is the largest gathering of futurists in the world.
In 2018, the United States Government Accountability Office (GAO)
created the Center for Strategic Foresight to enhance its ability to
"serve as the agency's principal hub for identifying, monitoring, and
analyzing emerging issues facing policymakers." The center is composed
of non-resident Fellows who are considered leading experts in foresight,
planning and future thinking. In September 2019 the center hosted a conference on space policy and "deep
fake" synthetic media designed to manipulate online and real-world interactions.
1932 Shell advertisement poster by the British surrealist painter Paul Nash
Foresight is a framework or lens which could be used in risk analysis
and management in a medium- to long-term time range. A typical formal
foresight project would identify key drivers and uncertainties relevant
to the scope of analysis. One classic example of such work is how foresight work at the Royal
Dutch Shell international oil company led the company to envision the turbulent oil
prices of the 1970s as a possibility and to embed this into its
planning. Yet the practice at Shell focuses on stretching the company's
thinking rather than on making predictions. Its planning is meant to
link and embed scenarios in "organizational processes such as strategy
making, innovation, risk management, public affairs, and leadership
development."
Risks may arise from the development and adoption of emerging technologies and/or social change.
Special interest lies in hypothetical future events that have the
potential to damage human well-being on a global scale, posing a global catastrophic risk. Such events may cripple or destroy modern civilization or, in the case of existential risks, even cause human extinction. Potential global catastrophic risks include but are not limited to climate change, AI takeover, nanotechnology, nuclear warfare, total war, and pandemics.
The aim of a professional futurist would be to identify conditions that
could lead to these events, in order to create "pragmatically feasible roads to
alternative futures."