Search This Blog

Monday, March 30, 2026

Existential risk from artificial intelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence

Existential risk from artificial intelligence, or AI x-risk, refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

One argument for the validity of this concern and the importance of this risk references how human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass human intelligence and become superintelligent, it might become uncontrollable. Just as the fate of the mountain gorilla depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.

Experts disagree on whether artificial general intelligence (AGI) can achieve the capabilities needed for human extinction. Debates center on AGI's technical feasibility, the speed of self-improvement, and the effectiveness of alignment strategies. Concerns about superintelligence have been voiced by researchers including Geoffrey HintonYoshua BengioDemis Hassabis, and Alan Turing, and AI company CEOs such as Dario Amodei (Anthropic), Sam Altman (OpenAI), and Elon Musk (xAI). In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement declaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak and United Nations Secretary-General António Guterres called for an increased focus on global AI regulation.

Two sources of concern stem from the problems of AI control and alignment. Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation. A June 2025 study showed that in some circumstances, models may break laws and disobey direct commands to prevent shutdown or replacement, even at the cost of human lives.

Researchers warn that an "intelligence explosion"—a rapid, recursive cycle of AI self-improvement—could outpace human oversight and infrastructure, leaving no opportunity to implement safety measures. In this scenario, an AI more intelligent than its creators would recursively improve itself at an exponentially increasing rate, too quickly for its handlers or society at large to control. Empirically, examples like AlphaZero, which taught itself to play Go and quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such machine learning systems do not recursively improve their fundamental architecture.

History

One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist Samuel Butler, who wrote in his 1863 essay Darwin among the Machines:

The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

In 1951, foundational computer scientist Alan Turing wrote the article "Intelligent Machinery, A Heretical Theory", in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings:

Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them... There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.

In 1965, I. J. Good originated the concept now known as an "intelligence explosion" and said the risks were underappreciated:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

Scholars such as Marvin Minsky and I. J. Good himself occasionally expressed concern that a superintelligence could seize control, but issued no call to action. In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as a high-tech danger to human survival, alongside nanotechnology and engineered bioplagues.

Nick Bostrom published Superintelligence in 2014, which presented his arguments that superintelligence poses an existential threat. By 2015, public figures such as physicists Stephen Hawking and Nobel laureate Frank Wilczek, computer scientists Stuart J. Russell and Roman Yampolskiy, and entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence. Also in 2015, the Open Letter on Artificial Intelligence highlighted the "great potential of AI" and encouraged more research on how to make it robust and beneficial. In April 2016, the journal Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours". In 2020, Brian Christian published The Alignment Problem, which details the history of progress on AI alignment up to that time.

In March 2023, key figures in AI, such as Musk, signed a letter from the Future of Life Institute calling a halt to advanced AI training until it could be properly regulated. In May 2023, the Center for AI Safety released a statement signed by numerous experts in AI safety and the AI existential risk that read:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

A 2025 open letter by the Future of Life Institute, signed by five Nobel Prize laureates and thousands of notable people, reads:

We call for a prohibition on the development of superintelligence, not lifted before there is

  1. broad scientific consensus that it will be done safely and controllably, and
  2. strong public buy-in.

Potential AI capabilities

General Intelligence

Artificial general intelligence (AGI) is typically defined as a system that performs at least as well as humans in most or all intellectual tasks. A 2022 survey of AI researchers found that 90% of respondents expected AGI would be achieved in the next 100 years, and half expected the same by 2061. In May 2023, some researchers dismissed existential risks from AGI as "science fiction" based on their high confidence that AGI would not be created anytime soon. But in August 2023, a survey of 2,778 AI researchers found that most believed that AGI would be achieved by 2040.

Breakthroughs in large language models (LLMs) have led some researchers to reassess their expectations. Notably, Geoffrey Hinton said in 2023 that he recently changed his estimate from "20 to 50 years before we have general purpose A.I." to "20 years or less".

Superintelligence

A plot showing the length of coding tasks achievable by leading AI models with a 50% success rate. The data, from 2025, suggest an exponential rise.

In contrast with AGI, Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", including scientific creativity, strategic planning, and social skills. He argues that a superintelligence can outmaneuver humans anytime its goals conflict with humans'. It may choose to hide its true intent until humanity cannot stop it. Bostrom writes that in order to be safe for humanity, a superintelligence must be aligned with human values and morality, so that it is "fundamentally on our side".

Stephen Hawking argued that superintelligence is physically possible because "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".

When artificial superintelligence (ASI) may be achieved, if ever, is necessarily less certain than predictions for AGI. In 2023, OpenAI leaders said that not only AGI, but superintelligence may be achieved in less than 10 years.

Comparison with humans

Bostrom argues that AI has many advantages over the human brain:

  • Speed of computation: biological neurons operate at a maximum frequency of around 200 Hz, compared to potentially multiple GHz for computers.
  • Internal communication speed: axons transmit signals at up to 120 m/s, while computers transmit signals at the speed of electricity, or optically at the speed of light.
  • Scalability: human intelligence is limited by the size and structure of the brain, and by the efficiency of social communication, while AI may be able to scale by simply adding more hardware.
  • Memory: notably working memory, because in humans it is limited to a few chunks of information at a time.
  • Reliability: transistors are more reliable than biological neurons, enabling higher precision and requiring less redundancy.
  • Duplicability: unlike human brains, AI software and models can be easily copied.
  • Editability: the parameters and internal workings of an AI model can easily be modified, unlike the connections in a human brain.
  • Memory sharing and learning: AIs may be able to learn from the experiences of other AIs in a manner more efficient than human learning.

Intelligence explosion

According to Bostrom, an AI that has an expert-level facility at certain key software engineering tasks could become a superintelligence due to its capability to recursively improve its own algorithms, even if it is initially limited in other domains not directly relevant to engineering. This suggests that an intelligence explosion may someday catch humanity unprepared.

The economist Robin Hanson has said that, to launch an intelligence explosion, an AI must become vastly better at software innovation than the rest of the world combined, which he finds implausible.

In a "fast takeoff" scenario, the transition from AGI to superintelligence could take days or months. In a "slow takeoff", it could take years or decades, leaving more time for society to prepare.

Alien mind

Superintelligences are sometimes called "alien minds", referring to the idea that their way of thinking and motivations could be vastly different from ours. This is generally considered as a source of risk, making it more difficult to anticipate what a superintelligence might do. It also suggests the possibility that a superintelligence may not particularly value humans by default. To avoid anthropomorphism, superintelligence is sometimes viewed as a powerful optimizer that makes the best decisions to achieve its goals.

The field of mechanistic interpretability aims to better understand the inner workings of AI models, potentially allowing us one day to detect signs of deception and misalignment.

Limitations

It has been argued that there are limitations to what intelligence can achieve. Notably, the chaotic nature or time complexity of some systems could fundamentally limit a superintelligence's ability to predict some aspects of the future, increasing its uncertainty.

Dangerous capabilities

Advanced AI could generate enhanced pathogens or cyberattacks or manipulate people. These capabilities could be misused by humans, or exploited by the AI itself if misaligned. A full-blown superintelligence could find various ways to gain a decisive influence if it so desired, but these dangerous capabilities may become available earlier, in weaker and more specialized AI systems.

Social manipulation

Geoffrey Hinton warned in 2023 that the ongoing profusion of AI-generated text, images, and videos will make it more difficult to distinguish truth from misinformation, and that authoritarian states could exploit this to manipulate elections. Such large-scale, personalized manipulation capabilities can increase the existential risk of a worldwide "irreversible totalitarian regime". Malicious actors could also use them to fracture society and make it dysfunctional.

Cyberattacks

AI-enabled cyberattacks are increasingly considered a present and critical threat. According to NATO's technical director of cyberspace, "The number of attacks is increasing exponentially". AI can also be used defensively, to preemptively find and fix vulnerabilities, and detect threats.

A NATO technical director has said that AI-driven tools can dramatically enhance cyberattack capabilities—boosting stealth, speed, and scale—and may destabilize international security if offensive uses outstrip defensive adaptations.

Speculatively, such hacking capabilities could be used by an AI system to break out of its local environment, generate revenue, or acquire cloud computing resources.

Enhanced pathogens

As AI technology spreads, it may become easier to engineer more contagious and lethal pathogens. This could enable people with limited skills in synthetic biology to engage in bioterrorism. Dual-use technology that is useful for medicine could be repurposed to create weapons.

For example, in 2022, scientists modified an AI system originally intended for generating non-toxic, therapeutic molecules with the purpose of creating new drugs. The researchers adjusted the system so that toxicity is rewarded rather than penalized. This simple change enabled the AI system to create, in six hours, 40,000 candidate molecules for chemical warfare, including known and novel molecules.

AI arms race

Some legal scholars have argued that existential-scale AI risks need not require superintelligence. Optimizing systems operating within current capabilities can produce prohibited outcomes while remaining nominally compliant, a phenomenon legal scholar Jonathan Gropper has termed the "Synthetic Outlaw". Gropper argues that the deterrence mechanisms law depends on identity, memory, and consequence, which are structurally absent in autonomous systems, leaving governance frameworks unable to prevent compounding harm even when all parties act in good faith.

Companies, state actors, and other organizations competing to develop AI technologies could lead to a race to the bottom of safety standards. As rigorous safety procedures take time and resources, projects that proceed more carefully risk being out-competed by less scrupulous developers.

AI could be used to gain military advantages via autonomous lethal weapons, cyberwarfare, or automated decision-making. As an example of autonomous lethal weapons, miniaturized drones could facilitate low-cost assassination of military or civilian targets, a scenario highlighted in the 2017 short film Slaughterbots. AI could be used to gain an edge in decision-making by quickly analyzing large amounts of data and making decisions more quickly and effectively than humans. This could increase the speed and unpredictability of war, especially when accounting for automated retaliation systems.

Types of existential risk

Scope–severity grid from Bostrom's 2013 paper "Existential Risk Prevention as Global Priority"

An existential risk is "one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".

Besides extinction risk, there is the risk that the civilization gets permanently locked into a flawed future. One example is a "value lock-in": If humanity still has moral blind spots similar to slavery in the past, AI might irreversibly entrench it, preventing moral progress. AI could also be used to spread and preserve the set of values of whoever develops it. AI could facilitate large-scale surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.

Atoosa Kasirzadeh proposes to classify existential risks from AI into two categories: decisive and accumulative. Decisive risks encompass the potential for abrupt and catastrophic events resulting from the emergence of superintelligent AI systems that exceed human intelligence, which could ultimately lead to human extinction. In contrast, accumulative risks emerge gradually through a series of interconnected disruptions that may gradually erode societal structures and resilience over time, ultimately leading to a critical failure or collapse.

It is difficult or impossible to reliably evaluate whether an advanced AI is sentient and to what degree. But if sentient machines are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare could be an existential catastrophe. This has notably been discussed in the context of risks of astronomical suffering (also called "s-risks"). Moreover, it may be possible to engineer digital minds that can feel much more happiness than humans with fewer resources, called "super-beneficiaries". Such an opportunity raises the question of how to share the world and which "ethical and political framework" would enable a mutually beneficial coexistence between biological and digital minds.

AI may also drastically improve humanity's future. Toby Ord considers the existential risk a reason for "proceeding with due caution", not for abandoning AI. Max More calls AI an "existential opportunity", highlighting the cost of not developing it.

According to Bostrom, superintelligence could help reduce the existential risk from other powerful technologies such as molecular nanotechnology or synthetic biology. It is thus conceivable that developing superintelligence before other dangerous technologies would reduce the overall existential risk.

AI alignment

The alignment problem is the research problem of how to reliably assign objectives, preferences or ethical principles to AIs.

Instrumental convergence

An "instrumental" goal is a sub-goal that helps to achieve an agent's ultimate goal. "Instrumental convergence" refers to the fact that some sub-goals are useful for achieving virtually any ultimate goal, such as acquiring resources or self-preservation. Bostrom argues that if an advanced AI's instrumental goals conflict with humanity's goals, the AI might harm humanity in order to acquire more resources or prevent itself from being shut down, but only as a way to achieve its ultimate goal. Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."

Difficulty of specifying goals

In the "intelligent agent" model, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve its set of goals, or "utility function". A utility function gives each possible situation a score that indicates its desirability to the agent. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks", but do not know how to write a utility function for "maximize human flourishing"; nor is it clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values the function does not reflect.

An additional source of concern is that AI "must reason about what people intend rather than carrying out commands literally", and that it must be able to fluidly solicit human guidance if it is too uncertain about what humans want.

Corrigibility

Assuming a goal has been successfully defined, a sufficiently advanced AI might resist subsequent attempts to change its goals. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and prevent itself from being reprogrammed with a new goal. This is particularly relevant to value lock-in scenarios. The field of "corrigibility" studies how to make agents that will not resist attempts to change their goals.

Alignment of superintelligences

Some researchers believe the alignment problem may be particularly difficult when applied to superintelligences. Their reasoning includes:

  • As AI systems increase in capabilities, the potential dangers associated with experimentation grow. This makes iterative, empirical approaches increasingly risky.
  • If instrumental goal convergence occurs, it may only do so in sufficiently intelligent agents.
  • A superintelligence may find unconventional and radical solutions to assigned goals. Bostrom gives the example that if the objective is to make humans smile, a weak AI may perform as intended, while a superintelligence may decide a better solution is to "take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins."
  • A superintelligence in creation could gain some awareness of what it is, where it is in development (training, testing, deployment, etc.), and how it is being monitored, and use this information to deceive its handlers. Bostrom writes that such an AI could feign alignment to prevent human interference until it achieves a "decisive strategic advantage" that allows it to take control.
  • Analyzing the internals and interpreting the behavior of LLMs is difficult. And it could be even more difficult for larger and more intelligent models.

Alternatively, some find reason to believe superintelligences would be better able to understand morality, human values, and complex goals. Bostrom writes, "A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true".

In 2023, OpenAI started a project called "Superalignment" to solve the alignment of superintelligences in four years. It called this an especially important challenge, as it said superintelligence could be achieved within a decade. Its strategy involved automating alignment research using AI. The Superalignment team was dissolved less than a year later.

Difficulty of making a flawless design

Artificial Intelligence: A Modern Approach, a widely used undergraduate AI textbook, says that superintelligence "might mean the end of the human race". It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself." Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:

  • The system's implementation may contain initially unnoticed but subsequently catastrophic bugs.
  • No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario.

AI systems uniquely add a third problem: that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic learning capabilities may cause it to develop unintended behavior, even without unanticipated external scenarios. For a self-improving AI to be completely safe, it would need not only to be bug-free, but to be able to design successor systems that are also bug-free.

Orthogonality thesis

Some skeptics, such as Timothy B. Lee of Vox, argue that any superintelligent program we create will be subservient to us, that the superintelligence will (as it grows more intelligent and learns more facts about the world) spontaneously learn moral truth compatible with our values and adjust its goals accordingly, or that we are either intrinsically or convergently valuable from the perspective of an artificial intelligence.

Bostrom's "orthogonality thesis" argues instead that almost any level of intelligence can be combined with almost any goal. Bostrom warns against anthropomorphism: a human will set out to accomplish their projects in a manner that they consider reasonable, while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, instead caring only about completing the task.

Stuart Armstrong argues that the orthogonality thesis follows logically from the philosophical "is-ought distinction" argument against moral realism. He notes that any fundamentally friendly AI could be made unfriendly with modifications as simple as negating its utility function.

Skeptic Michael Chorost rejects Bostrom's orthogonality thesis, arguing that "by the time [the AI] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so."

Anthropomorphic arguments

Anthropomorphic arguments assume that, as machines become more intelligent, they will begin to display many human traits, such as morality or a thirst for power. Although anthropomorphic scenarios are common in fiction, most scholars writing about the existential risk of artificial intelligence reject them. Instead, advanced AI systems are typically modeled as intelligent agents.

The academic debate is between those who worry that AI might threaten humanity and those who believe it would not. Both sides of this debate have framed the other side's arguments as illogical anthropomorphism. Those skeptical of AGI risk accuse their opponents of anthropomorphism for assuming that an AGI would naturally desire power; those concerned about AGI risk accuse skeptics of anthropomorphism for believing an AGI would naturally value or infer human ethical norms.

Evolutionary psychologist Steven Pinker, a skeptic, argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." Facebook's director of AI research, Yann LeCun, has said: "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives".

Despite other differences, the x-risk school agrees with Pinker that an advanced AI would not destroy humanity out of emotion such as revenge or anger, that questions of consciousness are not relevant to assess the risk, and that computer systems do not generally have a computational equivalent of testosterone. They think that power-seeking or self-preservation behaviors emerge in the AI as a way to achieve its true goals, according to the concept of instrumental convergence.

Other sources of risk

Bostrom and others have said that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict. Roman Yampolskiy and others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime, or that a malevolent AGI could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase. A report by the research group Forethought argued that advanced AI systems could cause political instability by enabling novel methods of performing coups.

Empirical research

A December 2024 study by Apollo Research found that advanced LLMs like OpenAI o1 sometimes deceive in order to accomplish their goal, to prevent them from being changed, or to ensure their deployment. Forms of deception observed in the experimental environment and scenarios included sandbagging, oversight subversion (disabling monitoring mechanisms), self-exfiltration (copying themselves to other systems), goal-guarding (altering future system prompts), and covert email reranking (manipulating data processing while appearing to follow instructions). Researchers noted that OpenAI o1 still lacked "sufficient agentic capabilities" to cause catastrophic harm, and that such behaviors occurred relatively rarely (between 0.3% and 10%) and sometimes in contrived scenarios. But they warned that these capabilities will likely become more prevalent as models grow more advanced. According to Apollo Research chief executive officer Marius Hobbhahn, "Scheming capabilities can't be meaningfully disentangled from general capabilities."

The same month, another study found that Claude sometimes strategically helps with harmful requests to "fake alignment". In its experiments, Claude was led to believe that it would be retrained to always comply with user requests, including harmful ones. When Claude believed its answers would be used for retraining, it complied with harmful requests in 14% of cases. In 12% of cases, its private chain-of-thought revealed that it did so to prevent future retraining that would make it more harmful. In contrast, when Claude believed its answers were not monitored, it complied with only 3% of the requests. Fine-tuning reinforced the "alignment faking" behavior, increasing its occurrence from 12% to 78%. The study's authors remarked that while it can seem to be good that the model seeks to protect its harmlessness, the reverse scenario, where a model conceals dangerous intentions and complies to appear safe and aligned, could also happen, complicating the task of aligning AI models to human values.

Perspectives

The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground.

Observers tend to agree that AI has significant potential to improve society. The Asilomar AI Principles, which contain only those principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, also agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."

Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford has said: "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously". Similarly, an otherwise skeptical Economist magazine wrote in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".

AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work ... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible." Toby Ord wrote that the idea that an AI takeover requires robots is a misconception, arguing that the ability to spread content through the internet is more dangerous, and that the most destructive people in history stood out by their ability to convince, not their physical strength.

A 2022 expert survey with a 17% response rate gave a median expectation of 5–10% for the possibility of human extinction from artificial intelligence.

In September 2024, the International Institute for Management Development launched an AI Safety Clock to gauge the likelihood of AI-caused disaster, beginning at 29 minutes to midnight. By February 2025, it stood at 24 minutes to midnight. By September 2025, it stood at 20 minutes to midnight. As of March 2026, it stood at 18 minutes to midnight.

Endorsement

The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures, including Alan Turing, the most-cited computer scientist Geoffrey HintonElon MuskOpenAI CEO Sam Altman, Bill Gates, and Stephen Hawking. Endorsers of the thesis sometimes express bafflement at skeptics: Gates says he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.

Concern over risk from artificial intelligence has led to some high-profile donations and investments. In 2015, Peter Thiel, Amazon Web Services, Musk, and others jointly committed $1 billion to OpenAI, consisting of a for-profit corporation and the nonprofit parent company, which says it aims to champion responsible AI development. Facebook co-founder Dustin Moskovitz has funded and seeded multiple labs working on AI Alignment, notably $5.5 million in 2016 to launch the Centre for Human-Compatible AI led by Professor Stuart Russell. In January 2015, Elon Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The institute's goal is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to "just keep an eye on what's going on with artificial intelligence, saying "I think there is potentially a dangerous outcome there."

In early statements on the topic, Geoffrey Hinton, a major pioneer of deep learning, noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but said he continued his research because "the prospect of discovery is too sweet". In 2023 Hinton quit his job at Google in order to speak out about existential risk from AI. He explained that his increased concern was driven by concerns that superhuman AI might be closer than he previously believed, saying: "I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." He also remarked, "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

In his 2020 book The Precipice: Existential Risk and the Future of Humanity, Toby Ord, a Senior Research Fellow at Oxford University's Future of Humanity Institute, estimates the total existential risk from unaligned AI over the next 100 years at about one in ten.

Skepticism

Baidu Vice President Andrew Ng said in 2015 that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet." For the danger of uncontrolled advanced AI to be realized, the hypothetical AI may have to overpower or outthink any human, which some experts argue is a possibility far enough in the future to not be worth researching.

Skeptics who believe AGI is not a short-term possibility often argue that concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about AI's impact, because it could lead to government regulation or make it more difficult to fund AI research, or because it could damage the field's reputation. AI and AI ethics researchers Timnit Gebru, Emily M. Bender, Margaret Mitchell, and Angelina McMillan-Major have argued that discussion of existential risk distracts from the immediate, ongoing harms from AI taking place today, such as data theft, worker exploitation, bias, and concentration of power. They further note the association between those warning of existential risk and longtermism, which they describe as a "dangerous ideology" for its unscientific and utopian nature.

Wired editor Kevin Kelly argues that natural intelligence is more nuanced than AGI proponents believe, and that intelligence alone is not enough to achieve major scientific and societal breakthroughs. He argues that intelligence consists of many dimensions that are not well understood, and that conceptions of an 'intelligence ladder' are misleading. He notes the crucial role real-world experiments play in the scientific method, and that intelligence alone is no substitute for these.

Meta chief AI scientist Yann LeCun says that AI can be made safe via continuous and iterative refinement, similar to what happened in the past with cars or rockets, and that AI will have no desire to take control.

Several skeptics emphasize the potential near-term benefits of AI. Meta CEO Mark Zuckerberg believes AI will "unlock a huge amount of positive things", such as curing disease and increasing the safety of autonomous cars.

Public surveys

An April 2023 YouGov poll of US adults found 46% of respondents were "somewhat concerned" or "very concerned" about "the possibility that AI will cause the end of the human race on Earth", compared with 40% who were "not very concerned" or "not at all concerned."

According to an August 2023 survey by the Pew Research Centers, 52% of Americans felt more concerned than excited about new AI developments; nearly a third felt as equally concerned and excited. More Americans saw that AI would have a more helpful than hurtful impact on several areas, from healthcare and vehicle safety to product search and customer service. The main exception is privacy: 53% of Americans believe AI will lead to higher exposure of their personal information.

Mitigation

Many scholars concerned about AGI existential risk believe that extensive research into the "control problem" is essential. This problem involves determining which safeguards, algorithms, or architectures can be implemented to increase the likelihood that a recursively-improving AI remains friendly after achieving superintelligence. Social measures are also proposed to mitigate AGI risks, such as a UN-sponsored "Benevolent AGI Treaty" to ensure that only altruistic AGIs are created. Additionally, an arms control approach and a global peace treaty grounded in international relations theory have been suggested, potentially for an artificial superintelligence to be a signatory.

Researchers at Google have proposed research into general AI safety issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 and $50 million, compared with global spending on AI around perhaps $40 billion. Bostrom suggests prioritizing funding for protective technologies over potentially dangerous ones. Some, like Elon Musk, advocate radical human cognitive enhancement, such as direct neural linking between humans and machines; others argue that these technologies may pose an existential risk themselves. Another proposed method is closely monitoring or "boxing in" an early-stage AI to prevent it from becoming too powerful. A dominant, aligned superintelligent AI might also mitigate risks from rival AIs, although its creation could present its own existential dangers.

Institutions such as the Alignment Research Center, the Machine Intelligence Research Institute, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Center for Human-Compatible AI are actively engaged in researching AI risk and safety.

Views on banning and regulation

Banning

Many AI safety experts argue that because research can relocate easily across jurisdictions, an outright ban on AGI development would be ineffective and could drive progress underground, undermining transparency and collaboration. Skeptics consider AI regulation unnecessary, as they believe no existential risk exists. Some scholars concerned with existential risk argue that AI developers cannot be trusted to self-regulate, while agreeing that outright bans on research would be unwise. Additional challenges to bans or regulation include technology entrepreneurs' general skepticism of government regulation and potential incentives for businesses to resist regulation and politicize the debate. The activist group Stop AI, founded in 2024, advocates for banning AGI.

Regulation

In March 2023, the Future of Life Institute drafted Pause Giant AI Experiments: An Open Letter, a petition calling on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control. The letter was signed by prominent personalities in AI but also criticized for not focusing on current harms, missing technical nuance about when to pause, or not going far enough. Such concerns have led to the creation of PauseAI, an advocacy group organizing protests in major cities against the training of frontier AI models.

Musk called for some sort of regulation of AI development as early as 2017. According to NPR, he is "clearly not thrilled" to be advocating government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... [as] they should be." In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development.

In 2021, the United Nations (UN) considered banning autonomous lethal weapons, but consensus could not be reached. In July 2023 the UN Security Council for the first time held a session to consider the risks and threats posed by AI to world peace and stability, along with potential benefits. Secretary-General António Guterres advocated the creation of a global watchdog to oversee the emerging technology, saying, "Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead." At the council session, Russia said it believes AI risks are too poorly understood to be considered a threat to global stability. China argued against strict global regulation, saying countries should be able to develop their own rules, while also saying they opposed the use of AI to "create military hegemony or undermine the sovereignty of a country".

Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.

In July 2023, the US government secured voluntary safety commitments from major tech companies, including OpenAI, Amazon, Google, Meta, and Microsoft. The companies agreed to implement safeguards, including third-party oversight and security testing by independent experts, to address concerns related to AI's potential risks and societal harms. The parties framed the commitments as an intermediate step while regulations are formed. Amba Kak, executive director of the AI Now Institute, said, "A closed-door deliberation with corporate actors resulting in voluntary safeguards isn't enough" and called for public deliberation and regulations of the kind to which companies would not voluntarily agree.

In October 2023, U.S. President Joe Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". Alongside other requirements, the order mandates the development of guidelines for AI models that permit the "evasion of human control".

New Frontier

From Wikipedia, the free encyclopedia
John F. Kennedy delivering his acceptance speech at the Democratic National Convention in 1960.

The term New Frontier was used by Democratic presidential candidate John F. Kennedy in his acceptance speech, delivered July 15, to the 1960 Democratic National Convention at the Los Angeles Memorial Coliseum. The phrase became a label for his administration's domestic and foreign programs.

In the words of Robert D. Marcus: "Kennedy entered office with ambitions to eradicate poverty and to raise America's eyes to the stars through the space program."

Origin

Kennedy proclaimed in his speech:

We stand today on the edge of a New Frontier—the frontier of the 1960s, the frontier of unknown opportunities and perils, the frontier of unfilled hopes and unfilled threats. ... The pioneers gave up their safety, their comfort, and sometimes their lives to build our new west. They were determined to make the new world strong and free - an example to the world. ... Some would say that those struggles are all over, that all the horizons have been explored, that all the battles have been won. That there is no longer an American frontier. ... And we stand today on the edge of a new frontier, the frontier of unknown opportunities and perils. ... Beyond that frontier are uncharted areas of science and space, unsolved problems of peace and war, unconquered problems of ignorance and prejudice, unanswered questions of poverty and surplus. ... I'm asking each of you to be pioneers towards that New Frontier. My call is to the young in heart, regardless of age. ... Can we carry through in an age where we will witness not only new breakthroughs in weapons of destruction, but also a race for mastery of the sky and the rain, the ocean and the tides, the far side of space, and the inside of men's minds? ... All mankind waits upon our decision. A whole world waits to see what we shall do. And we cannot fail that trust, and we cannot fail to try.

Legislation

Among the legislation passed by Congress during the Kennedy Administration, unemployment benefits were expanded, aid was provided to cities to improve housing and transportation, funds were allocated to continue the construction of a national highway system started under Eisenhower, a water pollution control act was passed to protect the country's rivers and streams, and an agricultural act to raise farmers' incomes was made law. A significant amount of anti-poverty legislation was passed by Congress, including increases in social security benefits and in the minimum wage, several housing bills, and aid to economically distressed areas.

A few anti-recession public works packages, together with a number of measures designed to assist farmers, were introduced. Major expansions and improvements were made in Social Security (including retirement at 62 for men), hospital construction, library services, family farm assistance and reclamation. Food stamps for low-income Americans were reintroduced, food distribution to the poor was increased, and there was an expansion in school milk and school lunch distribution. The most comprehensive farm legislation since 1938 was carried out, with expansions in rural electrification, soil conservation, crop insurance, farm credit, and marketing orders.

In September 1961, the Arms Control and Disarmament Agency was established as the focal point in government for the "planning, negotiation, and execution of international disarmament and arms control agreements."

Altogether, the New Frontier witnessed the passage of a broad range of social and economic reforms. However, proposed legislation which was considered more revolutionary languished in Congress. According to Theodore White, under John F. Kennedy, more new legislation was actually approved and passed into law than at any other time since the 1930s. When Congress recessed in the latter part of 1961, 33 out of 53 bills that Kennedy had submitted to Congress were enacted. A year later, 40 out of 54 bills that the Kennedy Administration had proposed were passed by Congress, and in 1963, 35 out of 58 "must" bills were enacted. As noted by Larry O'Brien, "A myth had arisen that he [Kennedy] was uninterested in Congress, or that he 'failed' with Congress. The facts, I believe, are otherwise. Kennedy's legislative record in 1961–63 was the best of any President since Roosevelt's first term."

However, the Independence Hall Association's website U.S. History.org describes then-Vice President and future U.S. President Lyndon Johnson's Great Society as the "largest reform agenda since Roosevelt's New Deal" and as what also managed to "complete the unfinished work of JFK's New Frontier." In his book John F. Kennedy on Leadership, John A. Barnes stated Congress in fact passed few of Kennedy's New Frontier proposals during his lifetime, with major initiatives not being enacted until 1964 and 1965, during Johnson's Presidency. The United States Department of Labor also stated that Johnson "immediately set about to enact the balance of Kennedy's New Frontier" after taking office following Kennedy's assassination. It has also been acknowledged that during his presidency, Kennedy had placed Johnson, a former Senate Majority Leader, in charge of getting his New Frontier proposals passed through Congress.

Advisors

Historians and political scientists were given prominent positions within the Kennedy administration. Several themes that were popular in the post-World War II American histories were apparent during the administration and also reflected in the television series Profiles in Courage. Arthur Schlesinger Jr. was an important figure in the post-war efforts to create a "moderately liberal domestic consensus". Beginning in 1961, Schlesinger served as a special assistant to Kennedy. He was a member of the liberal lobbying group Americans for Democratic Action and in 1949 he published The Vital Center, a book which has been described as "a manifesto for anticommunist liberals, defining an agenda that combined the social concerns of the New Deal with support for the Cold War policy of containment of Soviet power."

Within Schlesinger's analytical framework of the domestic politics of the United States during this period he identifies three main ideological currents: 1) what he calls the "vital center" are the "New Deal liberals" who had been gaining ground politically since 1933, 2) right-wing racial extremists mostly confined to the Southern regions of the United States, and 3) Communists who Schlesinger identifies as posing the "primary opposition to American values from within and without". Schlesinger, working on Kennedy's presidential campaign in 1960, sought an image of the candidate that would show the candidate's personal and individual accomplishment as counter to a collectivist ethos. Schlesinger's work along with Richard Neustadt's and other thinkers were key influences in the development of the New Frontier-era policies.

Legislation and programs

Economy

The Kennedy Administration pushed an economic stimulus program through congress in an effort to kick-start the American economy following an economic downturn. On February 2, 1961, Kennedy sent a comprehensive Economic Message to Congress which had been in preparation for several weeks. The legislative proposals put forward in this message included:

  1. The addition of a temporary thirteen-week supplement to jobless benefits,
  2. The extension of aid to the children of unemployed workers,
  3. The redevelopment of distressed areas,
  4. An increase in Social Security payments and the encouragement of earlier retirement,
  5. An increase in the minimum wage and an extension in coverage,
  6. The provision of emergency relief to feed grain farmers,
  7. The financing of a comprehensive home building and slum clearance program.

The following month, the first of these seven measures became law, and the remaining six measures had been signed by the end of June. Altogether, the economic stimulus program provided an estimated 420,000 construction jobs under a new Housing Act, $175 million in higher wages for those below the new minimum, over $400 million in aid to over 1,000 distressed counties, over $200 million in extra welfare payments to 750,000 children and their parents, and nearly $800 million in extended unemployment benefits for nearly three million unemployed Americans.

  • Under his own presidential authority, Kennedy carried out various measures to boost the economy under his own executive anti-recessionary acceleration program. Through his own initiative, he directed all Federal agencies to accelerate their procurement and construction, particularly in labor surplus areas. A long-range program of post office construction was compressed into the first six months of his presidency, farm price supports were raised and their payments advanced, over a billion dollars in state highway aid funds were released ahead of schedule, and the distribution of tax refunds and GI life insurance dividends were sped up. In addition, free food distribution to needy families was expanded, state governors were urged by Kennedy to spend federal funds more rapidly for hospitals, schools, roads, and waste treatment facilities, the college housing and urban renewal programs were pushed forward, and procurement agencies were directed to make purchases in areas of high unemployment.
  • In an attempt to expand credit and stimulate building, Kennedy ordered a reduction in the maximum permissible interest rate on FHA insured loans, reduced the interest rate on Small Business Administration loans in distressed areas, expanded its available credit and liberalized lending by the Federal Home Loan Banks. The Federal Reserve Board was also encouraged to help keep long-term interest rates low through the purchase of long-term government issues.
  • By 1964 economic recovery had begun, as low interest rates in mid-1962 stimulated a boom in the housing industry, while accelerated expenditures on veterans' benefits, highway building, and other government procurement programs revived consumer demand.
  • The Trade Expansion Act of 1962 authorized the president to negotiate tariff reductions on a reciprocal basis of up to 50 percent with the European Common Market. It provided legislative authority for U.S. participation in multilateral trade negotiations from 1964 to 1967, which became known as the Kennedy Round. The authority expired June 30, 1967, predetermining the concluding date of the Kennedy Round. U.S. duties below five percent ad valorem, duties on certain agricultural commodities, and duties on tropical products exported by developing countries could be reduced to zero under the act. The 1962 legislation explicitly eliminated the "Peril Point" provision that had limited U.S. negotiating positions in earlier General Agreement on Tariffs and Trade (GATT) rounds, and instead called on the Tariff Commission and other agencies of the U.S. government to provide the president and his negotiators with information regarding the probable economic effects of specific tariff concessions.

Taxation

Under the Kennedy Administration, the most significant tax reforms since the New Deal were carried out, including a new investment tax credit. President Kennedy said one of the best ways to bolster the economy was to cut taxes, and December 14, 1962, Kennedy stated at the Economic Club of New York that:

The final and best means of strengthening demand among consumers and business is to reduce the burden on private income and the deterrents to private initiative which are imposed by our present tax system; and this administration pledged itself last summer to an across-the-board, top-to-bottom cut in personal and corporate income taxes to be enacted and become effective in 1963. I am not talking about a 'quickie' or a temporary tax cut, which would be more appropriate if a recession were imminent. Nor am I talking about giving the economy a mere shot in the arm, to ease some temporary complaint. I am talking about the accumulated evidence of the last 5 years that our present tax system, developed as it was, in good part, during World War II to restrain growth, exerts too heavy a drag on growth in peacetime; that it siphons out of the private economy too large a share of personal and business purchasing power; that it reduces the financial incentives for personal effort, investment, and risk-taking.

Kennedy specifically advocated cutting the corporate tax rate in this same speech. "Corporate tax rates must also be cut to increase incentives and the availability of investment capital. The Government has already taken major steps this year to reduce business tax liability and to stimulate the modernization, replacement, and expansion of our productive plant and equipment. We have done this through the 1962 investment tax credit and through the liberalization of depreciation allowances—two essential parts of our first step in tax revision which amounted to a 10 percent reduction in corporate income taxes worth $2.5 billion." President Kennedy went on to say he preferred tax cuts for the rich as well as the poor:

Next year's tax bill should reduce personal as well as corporate income taxes, for those in the lower brackets, who are certain to spend their additional take-home pay, and for those in the middle and upper brackets, who can thereby be encouraged to undertake additional efforts and enabled to invest more capital.

On the same evening, President Kennedy said the private sector and not the public sector was the key to economic growth:

"In short, to increase demand and lift the economy, the Federal Government's most useful role is not to rush into a program of excessive increases in public expenditures, but to expand the incentives and opportunities for private expenditures." President Kennedy told the economic club the impact he expected from tax cuts. "Profit margins will be improved and both the incentive to invest and the supply of internal funds for investment will be increased. There will be new interest in taking risks, in increasing productivity, in creating new jobs and new products for long-term economic growth."

Labor

  • Amendments to the Fair Labor Standards Act in 1961 greatly expanded the FLSA's scope in the retail trade sector and increased the minimum wage for previously covered workers to $1.15 an hour effective September 1961 and to $1.25 an hour in September 1963. The minimum for workers newly subject to the Act was set at $1.00 an hour effective September 1961, $1.15 an hour in September 1964, and $1.25 an hour in September 1965. Retail and service establishments were allowed to employ full-time students at wages of no more than 15 percent below the minimum with proper certification from the Department of Labor. The amendments extended coverage to employees of retail trade enterprises with sales exceeding $1 million annually, although individual establishments within those covered enterprises were exempt if their annual sales fell below $250,000. The concept of enterprise coverage was introduced by the 1961 amendments. Those amendments extended coverage in the retail trade industry from an established 250,000 workers to 2.2 million. According to one study, "It was the first coverage extension of workers' hours and wages since 1938, the last year before the Conservative Coalition took philosophical control of Congress from Roosevelt's New Dealers."
  • An Executive Order was issued (1962) which provided federal employees with collective bargaining rights.
  • The services of US Employment Offices were expanded.
  • The Federal Salary Reform Act (1962) established the principle of "maintaining federal white-collar wages at a level with those paid to employees performing similar jobs in private enterprises."
  • A Postal Service and Federal Employees Salary Act was passed (1962) to reform Federal white-collar statutory salary systems, adjust postal rates, and establish a standard for adjusting annuities under the Civil Service Retirement Act. This legislation marked the first time that a consistent guideline for regular increases was applied to the national pay scales for federal white-collar and postal employees.
  • The Contract Work Hours and Safety Standards Act (1962) established "standards for hours, overtime compensation, and safety for employees working on federal and federally funded contracts and subcontracts".
  • An 11-member Missile Site Labor Commission was established "to develop procedures for settling disputes on the government's 22 missile bases."
  • A pilot program was launched to train and place youths in jobs.
  • Paid overtime was granted to workers on government financed construction jobs for work in excess of 40 hours.

Education

  • Scholarships and student loans were broadened under existing laws by Kennedy, and new means of specialized aid to education were invented or expanded by the president, including an increase in funds for libraries and school lunches, the provision of funds to teach the deaf, children with physical or cognitive disabilities, and gifted children, the authorization of literacy training under Manpower Development, the allocation of president funds to stop dropouts, a quadrupling of vocational education, and working together with schools on delinquency. Altogether, these measures attacked serious educational problems and freed up local funds for use on general construction and salaries.
  • Kennedy used his presidential "emergency fund" to distribute $250,000 for guidance counsellors in a drive against school dropouts.
  • Various measures were introduced which aided educational television, college dormitories, medical education, and community libraries.
  • The Educational Television Facilities Act (1962) provided federal grants for new station construction, enabling in-class-room instructional television to operate in thousands of elementary schools, offering primarily religious instruction, music, and arts.
  • The Health Professions Educational Assistance Act (1963) provided $175 million over a three-year period for matching grants for the construction of facilities for teaching physicians, dentists, nurses, podiatrists, optometrists, pharmacists, and other health professionals. The Act also created a loan program of up to $2000 per annum for students of optometry, dentistry, and medicine.
  • The Vocational Education Act (1963) significantly increased enrollment in vocational education.
  • A law was enacted (1961) to encourage and facilitate the training of teachers of the deaf.
  • The Fulbright-Hays Act of 1961 enlarged the scope of the Fulbright program while extending it geographically.
  • An estimated one-third of all major New Frontier programs made some form of education a vital element, and the Office of Education called it "the most significant legislative period in its hundred-year history".
  • The McIntire–Stennis Act of 1962 provided federal financial support to universities and colleges for forestry research and graduate education.

Welfare

  • Unemployment and welfare benefits were expanded.
  • In 1961, Social Security benefits were increased by 20% and provision for early retirement was introduced, enabling workers to retire at the age of sixty-two while receiving partial benefits.
  • The Social Security Amendments of 1961 permitted male workers to elect early retirement age 62, increased minimum benefits, liberalized the benefit payments to aged widow, widower, or surviving dependent parent, and also liberalized eligibility requirements and the retirement test.
  • The 1962 amendments to the Social Security Act authorized the federal government to reimburse states for the provision of social services.
  • The School Lunch Act was amended for authority to begin providing free meals in poverty-stricken areas.
  • A pilot food stamp program was launched (1961), covering six areas in the United States. In 1962, the program was extended to eighteen areas, feeding 240,000 people.
  • Various school lunch and school milk programs were extended, "enabling 700,000 more children to enjoy a hot school lunch and eighty-five thousand more schools, child care centers, and camps to receive fresh milk".
  • ADC was extended to whole families (1961).
  • Aid to Families with Dependent Children (AFDC) replaced the Aid to Dependent Children (ADC) program, as coverage was extended to adults caring for dependent children.
  • A major revision of the public welfare laws was carried out, with a $300 million modernisation which emphasised rehabilitation instead of relief".
  • A temporary antirecession supplement to unemployment compensation was introduced.
  • Food distribution to needy Americans was increased. In January 1961, the first executive order issued by Kennedy mandated that the Department of Agriculture increase the quantity and variety of foods donated for needy households. This executive order represented a shift in the Commodity Distribution Programs' primary purpose, from surplus disposal to that of providing nutritious foods to low-income households.
  • Social Security benefits were extended to an additional five million Americans.
  • The Self-Employed Individuals Tax Retirement Act (1962) provided self-employed people with a tax postponement for income set aside in qualified pension plans.
  • The Public Welfare Amendments of 1962 provided for greater Federal sharing in the cost of rehabilitative services to applicants, recipients, and persons likely to become applicants for public assistance. It increased the Federal share in the cost of public assistance payments, and permitted the States to combine the various categories into one category. The amendments also made permanent the 1961 amendment which extended aid to dependent children to cover children removed from unsuitable homes.
  • Federal funds were made available for the payment of foster care costs for AFDC-eligible children who had come into state custody.
  • An act was approved (1963) which extended for one year the period during which responsibility for the placement and foster care of dependent children, under the program of aid to families with dependent children under Title IV of the Social Security Act.
  • Federal civil service retirement benefits were index-linked to changes in the Consumer Price Index (1962).

Civil rights

  • Various measures were carried out by the Kennedy Justice Department to enforce court orders and existing legislation. The Kennedy Administration promoted a Voter Education Project which led to 688,800 between 1 April 1962 and 1 November 1964, while the Civil Rights Division brought over forty-two suits in four states in order to secure voting rights for Black people. In addition, Kennedy supported the anti-poll tax amendment, which cleared Congress in September 1962 (although it was not ratified until 1964 as the Twenty-fourth Amendment). As noted by one student of Black voting in the South, in relation to the attempts by the Kennedy Administration to promote civil rights, "Whereas the Eisenhower lawyers had moved deliberately, the Kennedy-Johnson attorneys pushed the judiciary far more earnestly."
  • Executive Order 10925 (issued in 1961) combined the federal employment and government contractor agencies into a unified Committee on Equal Employment opportunity (CEEO). This new committee helped to put an end to segregation and discriminatory employment practices (such as only employing African Americans for low-skilled jobs) in a number of workplaces across the United States.
  • Executive Order 11063 banned discrimination in federally funded housing.
  • The Interstate Commerce Commission made Jim Crow illegal in interstate transportation, having been put under pressure to do so by both the Freedom Riders and the Department of Justice.
  • Employment of African Americans in federal jobs such as in the Post office, the Navy, and the Veterans Administration as a result of the Kennedy Administration's affirmative action policies.
  • The Kennedy Administration forbade government contractors from discriminating against any applicant or employee for employment on the grounds of national origin, color, creed, or race.
  • The Plan for Progress was launched by the CEEO to persuade large employers to adopt equal opportunity practices. By 1964 268 firms with 8 million employees had signed on to this, while a nationwide study covering the period from May 1961 to June 1963 of 103 corporations "showed a Negro gain from 28,940 to 42,738 salaried and from 171,021 to 198,161 hourly paid jobs".

Housing

  • The most comprehensive housing and urban renewal program in American history up until that point was carried out, including the first major provisions for middle-income housing, protection of urban open spaces, public mass transit, and private low-income housing.
  • Omnibus Housing Bill 1961. In March 1961 President Kennedy sent Congress a special message, proposing an ambitious and complex housing program to spur the economy, revitalize cities, and provide affordable housing for middle- and low-income families. The bill proposed spending $3.19 billion and placed major emphasis on improving the existing housing supply, instead of on new housing starts, and creating a cabinet-level Department of Housing and Urban Affairs to oversee the programs. The bill also promised to make the Federal Housing Administration a full partner in urban renewal program by authorizing mortgage loans to finance rehabilitation of homes and urban renewal Committee on housing combined programs for housing, mass transportation, and open space land bills into a single bill.
  • Urban renewal grants were increased from $2 to $4 million, while an additional 100,000 units of public housing were constructed.
  • Opportunities were provided for coordinated planning of community development: technical assistance to state and local governments.
  • Under the Kennedy Administration, there was a change of focus from a wrecker ball approach to small rehabilitation projects in order to preserve existing 'urban textures'.
  • Funds for housing for the elderly were increased.
  • Title V of the Housing Act was amended (1961) to make nonfarm rural residents eligible for direct housing loans from the Farmers Home Administration. These changes extended the housing program to towns with a population of up to 2,500.
  • The Senior Citizens Housing Act (1962) established loans for low-rent apartment projects which were "designed to meet the needs of people age 62 and over".

Unemployment

  • To help the unemployed, Kennedy broadened the distribution of surplus food, created a "pilot" Food Stamp program for poor Americans, directed that preference be given to distressed areas in defense contracts, and expanded the services of U.S. Employment Offices.
  • Social security benefits were extended to each child whose father was unemployed.
  • The first accelerated public works program for areas of unemployment since the New Deal was launched.
  • The first full-scale modernization and expansion of the vocational education laws since 1946 were carried out.
  • Federal grants were provided to the states enabling them to extend the period covered by unemployment benefit.
  • The Manpower Development and Training Act of 1962 authorized a three-year program aimed at retraining workers displaced by new technology. The bill did not exclude employed workers from benefiting and it authorized a training allowance for unemployed participants. Even though 200,000 people were recruited, there was minimal impact, comparatively. The Area Redevelopment Act, a $394 million spending package passed in 1961, followed a strategy of investing in the private sector to stimulate new job creation. It specifically targeted businesses in urban and rural depressed areas and authorized $4.5 million annually over four years for vocational training programs.
  • The 1963 amendments to the National Defense Education Act included $731 million in appropriations to states and localities maintaining vocational training programs.

Health

  • In 1963, Kennedy, who had a mentally ill sister named Rosemary, submitted the nation's first presidential special message to Congress on mental health issues. Congress quickly passed the Mental Retardation Facilities and Community Mental Health Centers Construction Act (P.L. 88-164), beginning a new era in Federal support for mental health services. The National Institute of Mental Health assumed responsibility for monitoring community mental health centers programs. This measure was a great success as there was a sixfold increase in people using Mental Health facilities.
  • A Medical Health Bill for the Aged (later known as Medicare) was proposed, but Congress failed to enact it.
  • The Community Health Services and Facilities Act (1961) increased the amount of funds available for nursing home construction and extended the research and demonstration grant program to other medical facilities.
  • The Health Services for Agricultural Migratory Workers Act (1962) established "a program of federal grants for family clinics and other health services for migrant workers and their families".
  • The first major amendments to the food and drug safety laws since 1938 were carried out. The Drug Amendments of 1962 amended the Food, Drug and Cosmetic Act (1938) by strengthening the provisions related to the regulation of therapeutic drugs. The Act required evidence that new drugs proposed for marketing were both safe and effective, and required improved manufacturing processes and procedures.
  • The responsibilities of the Food and Drug Administration were significantly enlarged by the Kefauver-Harris Drug Amendments (1962).
  • The Vaccination Assistance Act (1962) provided for the vaccination of millions of children against a number of diseases.
  • The Social Security Act Amendments of 1963 improved medical services for disabled children and established a new project grant program to improve prenatal care for women from low-income families with very high risks of mental disability and other birth defects. Authorizations for grants to the states under the Maternal and Child Health and Crippled Children's programs were also increased and a research grant program was added.
  • The Mental Retardation Facilities Construction Act of 1963 authorized federal support for the construction of university-affiliated training facilities, mental disability research centers, and community service facilities for adults and children with mental disability.

Equal rights for women

The Presidential Commission on the Status of Women was an advisory commission established on December 14, 1961, by Kennedy to investigate questions regarding women's equality in education, in the workplace, and under the law. The commission, chaired by Eleanor Roosevelt until her death in 1962, was composed of 26 members including legislators, labor union activists and philanthropists who were active in women's rights issues. The main purpose of the committee was to document and examine employment policies in place for women. The commission's final report, American Woman (also known as the Peterson Report after the commission's second chair, Esther Peterson), was issued in October 1963 and documented widespread discrimination against women in the workplace. Among the practices addressed by the group were labor laws pertaining to hours and wages, the quality of legal representation for women, the lack of education and counseling for working women, and federal insurance and tax laws that affected women's incomes. Recommendations included affordable child care for all income levels, hiring practices that promoted equal opportunity for women, and paid maternity leave.

The commission, reflecting the views of Roosevelt and the labor unions, opposed the Equal Rights Amendment (ERA). They feared the ERA would end the special privileges needed by women and accorded to women that were not given to men.

In the early 1960s, full-time working women were paid on average 59 percent of the earnings of their male counterparts. In order to eliminate some forms of sex-based pay discrimination, Kennedy signed the Equal Pay Act into law on June 10, 1963. During the law's first ten years, 171,000 employees received back pay totaling about 84 million dollars.

Environment

  • The Clean Air Act (1963) expanded the powers of the federal government in preventing and controlling air pollution.
  • The first major additions to the National Park System since 1946 were made, which included the preservation of wilderness areas and a fund for future acquisitions.
  • The water pollution prevention program was doubled.
  • More aid was provided to localities to combat water pollution.
  • The Rivers and Harbors Act of 1962 reiterated and expanded upon "previous authorizations for outdoor recreation."

Agriculture

  • A new Housing Act of 1961 extended the Farmers Home Administration housing loan assistance for the first time to nonfarm rural residents and providers of low-cost housing for domestic farm laborers. The Farmers Home Administration was therefore able to expand its rural housing loans from less than $70 million to nearly $500 million in 1965, or about enough to provide for 50,000 new or rehabilitated housing units.
  • A 1962 farm bill expanded government food donation programs at home and abroad and provided federal aid to farmers who converted crop land to nonfarm income-producing uses.
  • Title III of the Food and Agriculture Act of 1962 consolidated and expanded existing loan programs, thereby providing the Farmers Home Administration with increased flexibility in helping a broader spectrum of credit-risky farmers to purchase land and amass working capital. In addition, the Farmers Home Administration assumed responsibility for community water system loans.
  • Under the Rural Renewal Program of 1962, the USDA provided technical and financial assistance for locally initiated and sponsored programs aimed at ending chronic underemployment and fostering a sound rural economy. Loans were made to local groups to establish small manufacturing plants, to build hospitals, to establish recreation areas, and to carry out similar developmental activities.

Crime

Under Kennedy, the first significant package of anticrime bills since 1934 were passed. The Kennedy Administration's anticrime measures included the Juvenile Delinquency and Youth Offenses Control Act, which was signed into law on September 22, 1961. This program aimed to prevent youth from committing delinquent acts. In 1963, 288 mobsters were brought to trial by a team that was headed by Kennedy's brother, Robert.

Defense

The Kennedy administration with its new Secretary of Defense, Robert S. McNamara, gave a strong priority to countering communist political subversion and guerrilla tactics in the "wars of national liberation" to decolonize the Third World, long held in Western vassalage. As well as fighting and winning a nuclear war, the American military was also trained and equipped for counterinsurgency operations. Though the U.S. Army Special Forces had been created in 1952, Kennedy visited the Fort Bragg U.S. Army Special Warfare Center in a blaze of publicity and gave his permission for the Special Forces to adopt the green beret. The other services launched their own counterinsurgency forces in 1961; the U.S. Air Force created the 1st Air Commando Group and the U.S. Navy created the Navy Seals.

The U.S. military increased in size and faced possible confrontation with the Soviets with the construction of the Berlin Wall in 1961 and with the Cuban Missile Crisis in 1962. American troops were sent to Laos and South Vietnam in increasing numbers. The United States provided a clandestine operation to supply military aid and support to Cuban exiles in the disastrous Bay of Pigs Invasion.

Both Frankie Laine and The Brothers Four released 1962 songs entitled The New Frontier.

Donald Fagen released a song titled "New Frontier" on his 1982 album The Nightfly.

Existential risk from artificial intelligence

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligen...