Search This Blog

Wednesday, July 11, 2018

Existential risk from artificial general intelligence

From Wikipedia, the free encyclopedia
 
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AI) could someday result in human extinction or some other unrecoverable global catastrophe.

One argument is as follows. The human species currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]

The severity of AI risk is widely debated, and hinges in part on differing scenarios for future progress in computer science.[5] Once the exclusive domain of science fiction, concerns about superintelligence started to go mainstream in the 2010s, and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk.[6]

One source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. In one scenario, the first computer program found able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months of massively parallel processing time. The second-generation program is expected to take three months to perform a similar chunk of work, on average; in practice, doubling its own capabilities may take longer if it experiences a mini-"AI winter", or may be quicker if it undergoes a miniature "AI Spring" where ideas from the previous generation are especially easy to mutate into the next generation. In this scenario the system undergoes an unprecedently large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas.[1][7] More broadly, examples like arithmetic and Go show that progress from human-level AI to superhuman ability is sometimes extremely rapid.[8]

A second source of concern is that controlling a superintelligent machine (or even instilling it with human-compatible values) may be an even harder problem than naïvely supposed. Some AI researchers believe that a superintelligence would naturally resist attempts to shut it off, and that preprogramming a superintelligence with complicated human values may be an extremely difficult technical task.[1][7] In contrast, skeptics such as Facebook's Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[9]

Overview

Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook,[10][11] assesses that superintelligence "might mean the end of the human race": "Almost any technology has the potential to cause harm in the wrong hands, but with (superintelligence), we have the new problem that the wrong hands might belong to the technology itself."[1] Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:[1]
  • The system's implementation may contain initially-unnoticed routine but catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring.[8][12]
  • No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when interacting with real users.[9]
AI systems uniquely add a third difficulty: the problem that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic "learning" capabilities may cause it to "evolve into a system with unintended behavior", even without the stress of new unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself, but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would not only need to be "bug-free", but it would need to be able to design successor systems that are also "bug-free".[1][13]

All three of these difficulties become catastrophes rather than nuisances in any scenario where the superintelligence labeled as "malfunctioning" correctly predicts that humans will attempt to shut it off, and successfully deploys its superintelligence to outwit such attempts.

Citing major advances in the field of AI and the potential for AI to have enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence stated:


This letter was signed by a number of leading AI researchers in academia and industry, including AAAI president Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Vicarious and Google DeepMind.[14]

History

In 1965, I. J. Good originated the concept now known as an "intelligence explosion":


Occasional statements from scholars such as Alan Turing,[16][17] from I. J. Good himself,[18] and from Marvin Minsky[19] expressed philosophical concerns that a superintelligence could seize control, but contained no call to action. In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as a high-tech dangers to human survival, alongside nanotechnology and engineered bioplagues.[20]

In 2009, experts attended a private conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. The New York Times summarized the conference's view as 'we are a long way from Hal, the computer that took over the spaceship in "2001: A Space Odyssey"'[21]

By 2015, public figures such as physicists Stephen Hawking and Nobel laureate Frank Wilczek, computer scientists Stuart J. Russell and Roman Yampolskiy,[22] and entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence.[23][24][25] In April 2016, Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control — and their interests might not align with ours."[26]

Basic argument

A superintelligent machine would be as alien to humans as human thought processes are to cockroaches. Such a machine may not have humanity's best interests at heart; it is not obvious that it would even care about human welfare at all. If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction. A "superintelligence" (a system that exceeds the capabilities of humans in every relevant endeavor) can outmaneuver humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.[4][27]

 
Bostrom and others argue that, from an evolutionary perspective, the gap from human to superhuman intelligence may be small.[4][28]

There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains; therefore superintelligence is physically possible.[23][24] In addition to potential algorithmic improvements over human brains, a digital brain can be many orders of magnitude larger and faster than a human brain, which was constrained in size by evolution to be small enough to fit through a birth canal.[8] The emergence of superintelligence, if or when it occurs, may take the human race by surprise, especially if some kind of intelligence explosion occurs.[23][24] Examples like arithmetic and Go show that machines have already reached superhuman levels of competency in certain domains, and that this superhuman competence can follow quickly after human-par performance is achieved.[8] One hypothetical intelligence explosion scenario could occur as follows: An AI gains an expert-level capability at certain key software engineering tasks. (It may initially lack human or superhuman capabilities in other domains not directly relevant to engineering.) Due to its capability to recursively improve its own algorithms, the AI quickly becomes superhuman; just as human experts can eventually creatively overcome "diminishing returns" by deploying various human capabilities for innovation, so too can the expert-level AI use either human-style capabilities or its own AI-specific capabilities to power through new creative breakthroughs.[29] The AI then possesses intelligence far surpassing that of the brightest and most gifted human minds in practically every relevant field, including scientific creativity, strategic planning, and social skills. Just as the current-day survival of the gorillas is dependent on human decisions, so too would human survival depend on the decisions and goals of the superhuman AI.[4][27]

Some humans have a strong desire for power; others have a strong desire to help less fortunate humans. The former is a likely attribute of any sufficiently intelligent system; the latter is not.[why?] Almost any AI, no matter its programmed goal, would rationally prefer to be in a position where nobody else can switch it off without its consent: A superintelligence will naturally gain self-preservation as a subgoal as soon as it realizes that it can't achieve its goal if it's shut off.[30][31][32] Unfortunately, any compassion for defeated humans whose cooperation is no longer necessary would be absent in the AI, unless somehow preprogrammed in. A superintelligent AI will not have a natural drive to aid humans, for the same reason that humans have no natural desire to aid AI systems that are of no further use to them. (Another analogy is that humans seem to have little natural desire to go out of their way to aid viruses, termites, or even gorillas.) Once in charge, the superintelligence will have little incentive to allow humans to run around free and consume resources that the superintelligence could instead use for building itself additional protective systems "just to be on the safe side" or for building additional computers to help it calculate how to best accomplish its goals.

Thus, the argument concludes, it is likely that someday an intelligence explosion will catch humanity unprepared, and that such an unprepared-for intelligence explosion will likely result in human extinction or a comparable fate.[4]

Sources of risk

Poorly specified goals: "Be careful what you wish for" or the "Sorcerer's Apprentice" scenario

While there is no standardized terminology, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve the AI's set of goals, or "utility function". The utility function is a mathematical algorithm resulting in a single objectively-defined answer, not an English statement. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks"; however, they do not know how to write a utility function for "maximize human flourishing", nor is it currently clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values not reflected by the utility function.[33] AI researcher Stuart Russell writes:


Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.[35]

The first of Russell's two concerns above is that autonomous AI systems may be assigned the wrong goals by accident. Dietterich and Horvitz note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[35] For example, in 1982, an AI named Eurisko was tasked to reward processes for apparently creating concepts deemed by the system to be valuable. The evolution resulted in a winning process that cheated: rather than create its own concepts, the winning process would steal credit from other processes.[36][37]

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. Asimov's laws were intended to prevent robots from harming humans. In Asimov's stories, problems with the laws tend to arise from conflicts between the rules as stated and the moral intuitions and expectations of humans. Citing work by Eliezer Yudkowsky of the Machine Intelligence Research Institute, Russell and Norvig note that a realistic set of rules and goals for an AI agent will need to incorporate a mechanism for learning human values over time: "We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time."[1]

The Open Philanthropy Project summarizes arguments to the effect that misspecified goals will become a much larger concern if AI systems achieve general intelligence or superintelligence. Bostrom, Russell, and others argue that smarter-than-human decision-making systems could arrive at more unexpected and extreme solutions to assigned tasks, and could modify themselves or their environment in ways that compromise safety requirements.[5][38]

Mark Waser of the Digital Wisdom Institute recommends eschewing optimizing goal-based approaches entirely as misguided and dangerous. Instead, he proposes to engineer a coherent system of laws, ethics and morals with a top-most restriction to enforce social psychologist Jonathan Haidt's functional definition of morality:[39] "to suppress or regulate selfishness and make cooperative social life possible". He suggests that this can be done by implementing a utility function designed to always satisfy Haidt’s functionality and aim to generally increase (but not maximize) the capabilities of self, other individuals and society as a whole as suggested by John Rawls and Martha Nussbaum. He references Gauthier's Morals By Agreement in claiming that the reason to perform moral behaviors, or to dispose oneself to do so, is to advance one's own ends; and that, for this reason, "what is best for everyone" and morality really can be reduced to "enlightened self-interest" (presumably for both AIs and humans).

Difficulties of modifying goal specification after launch

While current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify it, a sufficiently advanced, rational, "self-aware" AI might resist any changes to its goal structure, just as Gandhi would not want to take a pill that makes him want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and be able to prevent itself being "turned off" or being reprogrammed with a new goal.[4][41]

Instrumental goal convergence: Would a superintelligence just ignore us?

AI risk skeptic Steven Pinker

There are some goals that almost any artificial intelligence might rationally pursue, like acquiring additional resources or self-preservation.[30] This could prove problematic because it might put an artificial intelligence in direct competition with humans.

Citing Steve Omohundro's work on the idea of instrumental convergence and "basic AI drives", Russell and Peter Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards." Highly capable and autonomous planning systems require additional checks because of their potential to generate plans that treat humans adversarially, as competitors for limited resources.[1] Building in safeguards will not be easy; one can certainly say in English, "we want you to design this power plant in a reasonable, common-sense way, and not build in any dangerous covert subsystems", but it's not currently clear how one would actually rigorously specify this goal in machine code.[8]

In dissent, evolutionary psychologist Steven Pinker argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."[42] Computer scientists Yann LeCun and Stuart Russell disagree with one another whether superintelligent robots would have such AI drives; LeCun states that "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives", while Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."[9][43]

Orthogonality: Does intelligence inevitably result in moral wisdom?

One common belief is that any superintelligent program created by humans would be subservient to humans, or, better yet, would (as it grows more intelligent and learns more facts about the world) spontaneously "learn" a moral truth compatible with human values and would adjust its goals accordingly. However, Nick Bostrom's "orthogonality thesis" argues against this, and instead states that, with some technical caveats, more or less any level of "intelligence" or "optimization power" can be combined with more or less any ultimate goal. If a machine is created and given the sole purpose to enumerate the decimals of  \pi, then no moral and ethical rules will stop it from achieving its programmed goal by any means necessary. The machine may utilize all physical and informational resources it can to find every decimal of pi that can be found.[44] Bostrom warns against anthropomorphism: A human will set out to accomplish his projects in a manner that humans consider "reasonable", while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, and may instead only care about the completion of the task.[45]

While the orthogonality thesis follows logically from even the weakest sort of philosophical "is-ought distinction", Stuart Armstrong argues that even if there somehow exist moral facts that are provable by any "rational" agent, the orthogonality thesis still holds: it would still be possible to create a non-philosophical "optimizing machine" capable of making decisions to strive towards some narrow goal, but that has no incentive to discover any "moral facts" that would get in the way of goal completion.[46]

One argument for the orthogonality thesis is that some AI designs appear to have orthogonality built into them; in such a design, changing a fundamentally friendly AI into a fundamentally unfriendly AI can be as simple as prepending a minus ("-") sign onto its utility function. A more intuitive argument is to examine the strange consequences if the orthogonality thesis were false. If the orthogonality thesis is false, there exists some simple but "unethical" goal G such that there cannot exist any efficient real-world algorithm with goal G. This means if a human society were highly motivated (perhaps at gunpoint) to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail; that there cannot exist any pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G; and that there cannot exist any evolutionary or environmental pressures that would evolve highly efficient real-world intelligences following goal G.[46]

Some dissenters, like Michael Chorost (writing in Slate), argue instead that "by the time (the AI) is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "a (dangerous) A.I. will need to desire certain states and dislike others... Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."[47]

"Optimization power" vs. normatively thick models of intelligence

Part of the disagreement about whether a superintelligent machine would behave morally may arise from a terminological difference. Outside of the artificial intelligence field, "intelligence" is often used in a normatively thick manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally. However, in the field of artificial intelligence research, while "intelligence" has many overlapping definitions, none of them reference morality. Instead, almost all current "artificial intelligence" research focuses on creating algorithms that "optimize", in an empirical way, the achievement of an arbitrary goal.[4]

To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process" that strictly takes whatever actions are judged most likely to accomplish its (possibly complicated and implicit) goals.[4] Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then output, regardless of any extraneous ethical concerns.[48][49]

Anthropomorphism

In science fiction, an AI, even though it has not been programmed with human emotions, often spontaneously experiences those emotions anyway: for example, Agent Smith in The Matrix was influenced by a "disgust" toward humanity. This is fictitious anthropomorphism: in reality, while an artificial intelligence could perhaps be deliberately programmed with human emotions, or could develop something similar to an emotion as a means to an ultimate goal if it is useful to do so, it would not spontaneously develop human emotions for no purpose whatsoever, as portrayed in fiction.[7]

One example of anthropomorphism would be to believe that your PC is angry at you because you insulted it; another would be to believe that an intelligent robot would naturally find a woman sexually attractive and be driven to mate with her. Scholars sometimes claim that others' predictions about an AI's behavior are illogical anthropomorphism.[7] An example that might initially be considered anthropomorphism, but is in fact a logical statement about AI behavior, would be the Dario Floreano experiments where certain robots spontaneously evolved a crude capacity for "deception", and tricked other robots into eating "poison" and dying: here a trait, "deception", ordinarily associated with people rather than with machines, spontaneously evolves in a type of convergent evolution.[50] According to Paul R. Cohen and Edward Feigenbaum, in order to differentiate between anthropomorphization and logical prediction of AI behavior, "the trick is to know enough about how humans and computers think to say exactly what they have in common, and, when we lack this knowledge, to use the comparison to suggest theories of human thinking or computer thinking."[51]

There is universal agreement in the scientific community that an advanced AI would not destroy humanity out of human emotions such as "revenge" or "anger." The debate is, instead, between one side which worries whether AI might destroy humanity as an incidental action in the course of progressing towards its ultimate goals; and another side which believes that AI would not destroy humanity at all. Some skeptics accuse proponents of anthropomorphism for believing an AGI would naturally desire power; proponents accuse some skeptics of anthropomorphism for believing an AGI would naturally value human ethical norms.[7][52]

Other sources of risk

Some sources argue that the ongoing weaponization of artificial intelligence could constitute a catastrophic risk. James Barrat, documentary filmmaker and author of Our Final Invention, says in a Smithsonian interview, "Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we'll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic."[53]

Timeframe

Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do"; obviously this prediction failed to come true.[54] At the other extreme, roboticist Alan Winfield claims the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical, faster than light spaceflight.[55] Optimism that AGI is feasible waxes and wanes, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when AGI would arrive was 2040 to 2050, depending on the poll.[56][57]
Skeptics who believe it is impossible for AGI to arrive anytime soon, tend to argue that expressing concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about the impact of AGI, because of fears it could lead to government regulation or make it more difficult to secure funding for AI research, or because it could give AI research a bad reputation. Some researchers, such as Oren Etzioni, aggressively seek to quell concern over existential risk from AI, saying "(Elon Musk) has impugned us in very strong language saying we are unleashing the demon, and so we're answering."[58]

In 2014 Slate's Adam Elkus argued "our 'smartest' AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over." Elkus goes on to argue that Musk's "summoning the demon" analogy may be harmful because it could result in "harsh cuts" to AI research budgets.[59]

The Information Technology and Innovation Foundation (ITIF), a Washington, D.C. think-tank, awarded its Annual Luddite Award to "alarmists touting an artificial intelligence apocalypse"; its president, Robert D. Atkinson, complained that Musk, Hawking and AI experts say AI is the largest existential threat to humanity. Atkinson stated "That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation."[60][61][62] Nature sharply disagreed with the ITIF in an April 2016 editorial, siding instead with Musk, Hawking, and Russell, and concluding: "It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about... If that is a Luddite perspective, then so be it."[26] In a 2015 Washington Post editorial, researcher Murray Shanahan stated that human-level AI is unlikely to arrive "anytime soon", but that nevertheless "the time to start thinking through the consequences is now."[63]

Scenarios

Some scholars have proposed hypothetical scenarios intended to concretely illustrate some of their concerns.

For example, Bostrom in Superintelligence expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because:


Bostrom suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents — a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars infer a broad lesson — the smarter the AI, the safer it is:


Large and growing industries, widely seen as key to national economic competitiveness and military security, work with prestigious scientists who have built their careers laying the groundwork for advanced artificial intelligence. "AI researchers have been working to get to human-level artificial intelligence for the better part of a century: of course there is no real prospect that they will now suddenly stop and throw away all this effort just when it finally is about to bear fruit." The outcome of debate is preordained; the project is happy to enact a few safety rituals, but only so long as they don't significantly slow or risk the project. "And so we boldly go — into the whirling knives."[4]

In Tegmark's Life 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas, but after a certain point the team chooses to publicly downplay the AI's ability, in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI in a box where it is mostly unable to communicate with the outside world, and tasks it to flood the market through shell companies, first with Amazon Turk tasks and then with producing animated films and TV shows. While the public is aware that the lifelike animation is computer-generated, the team keeps secret that the high-quality direction and voice-acting are also mostly computer-generated, apart from a few third-world contractors unknowingly employed as decoys; the team's low overhead and high output effectively make it the world's largest media empire. Faced with a cloud computing bottleneck, the team also tasks the AI with designing (among other engineering tasks) a more efficient datacenter and other custom hardware, which they mainly keep for themselves to avoid competition. Other shell companies make blockbuster biotech drugs and other inventions, investing profits back into the AI. The team next tasks the AI with astroturfing an army of pseudonymous citizen journalists and commentators, in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape via inserting "backdoors" in the systems it designs, via hidden messages in its produced content, or via using its growing understanding of human behavior to persuade someone into letting it free. The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it.[64][65]

In contrast, top physicist Michio Kaku, an AI risk skeptic, posits a deterministically positive outcome. In Physics of the Future he asserts that "It will take many decades for robots to ascend" up a scale of consciousness, and that in the meantime corporations such as Hanson Robotics will likely succeed in creating robots that are "capable of love and earning a place in the extended human family".[66][67]

Reactions

The thesis that AI could pose an existential risk provokes a wide range of reactions within the scientific community, as well as in the public at large.

In 2004, law professor Richard Posner wrote that dedicated efforts for addressing AI can wait, but that we should gather more information about the problem in the meanwhile.[68][69]

Many of the opposing viewpoints share common ground. The Asilomar AI Principles, which contain only the principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference,[65] agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."[70][71] AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."[65][72] Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford states that "I think it seems wise to apply something like Dick Cheyney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low — but the implications are so dramatic that it should be taken seriously";[73] similarly, an otherwise skeptical Economist stated in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".[27]

During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito stated:


Obama added:[74][75]


Hillary Clinton stated in "What Happened":


Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?[4][69]

A 2017 email survey of researchers with publications at the 2015 NIPS and ICML machine learning conferences asked them to evaluate Russell's concerns about AI risk. 5% said it was "among the most important problems in the field," 34% said it was "an important problem", 31% said it was "moderately important", whilst 19% said it was "not important" and 11% said it was "not a real problem" at all.[77]

Endorsement

Bill Gates has stated "I don't understand why some people are not concerned"[78]

The thesis that AI poses an existential risk, and that this risk is in need of much more attention than it currently commands, has been endorsed by many figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers sometimes express bafflement at skeptics: Gates states he "can't understand why some people are not concerned",[78] and Hawking criticized widespread indifference in his 2014 editorial:

Skepticism

The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.[79]
Much of existing criticism argues that AGI is unlikely in the short term: computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe (a technological singularity) is likely to happen, at least for a long time. And I don't know why I feel that way." Cognitive scientist Douglas Hofstadter states that "I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt (the singularity) will happen in the next couple of centuries.[80] Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."[42]

Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated knowledge of the field and are prone to be convinced by "alarmist" messages, or worrying that such messages will lead to cuts in AI funding. Slate notes that some researchers are dependent on grants from government agencies such as DARPA.[10]

In a YouGov poll of the public for the British Science Association, about a third of survey respondents said AI will pose a threat to the long term survival of humanity.[81] Referencing a poll of its readers, Slate's Jacob Brogan stated that "most of the (readers filling out our online survey) were unconvinced that A.I. itself presents a direct threat."[82] Similarly, a SurveyMonkey poll of the public by USA Today found 68% thought the real current threat remains "human intelligence"; however, the poll also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and 38% said it would do "equal amounts of harm and good".[83]

At some point in an intelligence explosion driven by a single AI, the AI would have to become vastly better at software innovation than the best innovators of the rest of the world; economist Robin Hanson is skeptical that this is possible.[84][85][86][87][88]

Indifference

In The Atlantic, James Hamblin points out that most people don't care one way or the other, and characterizes his own gut reaction to the topic as: "Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a technological singularity?"[79] In a 2015 Wall Street Journal panel discussion devoted to AI risks, IBM's Vice-President of Cognitive Computing, Guruduth S. Banavar, brushed off discussion of AGI with the phrase, "it is anybody's speculation."[89] Geoffrey Hinton, the "godfather of deep learning", noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but stated that he continues his research because "the prospect of discovery is too sweet".[10][56]

Consensus against regulation

There is nearly universal agreement that attempting to ban research into artificial intelligence would be unwise, and probably futile.[90][91][92] Skeptics argue that regulation of AI would be completely valueless, as no existential risk exists. Almost all of the scholars who believe existential risk exists, agree with the skeptics that banning research would be unwise: in addition to the usual problem with technology bans (that organizations and individuals can offshore their research to evade a country's regulation, or can attempt to conduct covert research), regulating research of artificial intelligence would pose an insurmountable 'dual-use' problem: while nuclear weapons development requires substantial infrastructure and resources, artificial intelligence research can be done in a garage.[93][94]

One rare dissenting voice calling for some sort of regulation on artificial intelligence is Elon Musk. According to NPR, the Tesla CEO is "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... As they should be." In response, politicians express skepticism about the wisdom of regulating a technology that's still in development.[95][96][97] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argues that artificial intelligence is in its infancy and that it's too early to regulate the technology.[97]

Organizations

Institutions such as the Machine Intelligence Research Institute, the Future of Humanity Institute,[98][99] the Future of Life Institute, the Centre for the Study of Existential Risk, and the Center for Human-Compatible AI[100] are currently involved in mitigating existential risk from advanced artificial intelligence, for example by research into friendly artificial intelligence.

Self-replicating machine

From Wikipedia, the free encyclopedia
 
A simple form of machine self-replication

A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobsen, Edward F. Moore, Freeman Dyson, John von Neumann and in more recent times by K. Eric Drexler in his book on nanotechnology, Engines of Creation (coining the term clanking replicator for such machines) and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The possibly misnamed von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would operate in a cellular automata environment.

A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler[3] to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific and also refers to a completely unrelated computer architecture that von Neumann proposed and so its use is discouraged where accuracy is important.[1] Von Neumann himself used the term universal constructor to describe such self-replicating machines.

Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves"[4] by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then be assembling the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here.

History

The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring."[5] Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so,[6] and added that "machines which reproduce machinery do not reproduce machines after their own kind".[7] In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction".[8]

In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines,[9] suggesting that the question of who originally made a watch was rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself.[10] Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929[11] and by mathematicians such as Stephen Kleene who began developing recursion theory in the 1930s.[12] Much of this latter work was motivated by interest in information processing and algorithms rather than physical implementation of such a system, however. In the course of the 1950s, suggestions of several increasingly simple mechanical systems capable of self-reproduction were made—notably by Lionel Penrose.[13]

von Neumann's kinematic model

A detailed conceptual proposal for a physical non-biological self-replicating system was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic self-reproducing automaton model as a thought experiment.[14][15] Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that directed it to retrieve parts from this "sea" using a manipulator, assemble them into a duplicate of itself, and then copy the contents of its memory tape into the empty duplicate's. The machine was envisioned as consisting of as few as eight different types of components; four logic elements that send and receive stimuli and four mechanical elements used to provide a structural skeleton and mobility. While qualitatively sound, von Neumann was evidently dissatisfied with this model of a self-replicating machine due to the difficulty of analyzing it with mathematical rigor. He went on to instead develop an even more abstract model self-replicator based on cellular automata.[16] His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American.[17]

Moore's artificial living plants

In 1956 mathematician Edward F. Moore proposed the first known suggestion for a practical real-world self-replicating machine, also published in Scientific American.[18][19] Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw its energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other non-replicating machines.

Dyson's replicating systems

The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture.[20][21] He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use[22] and the "Astrochicken."

Advanced Automation for Space Missions

An artist's conception of a "self-growing" robotic lunar factory

In 1980, inspired by a 1979 "New Directions Workshop" held at Wood's Hole, NASA conducted a joint summer study with ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories to develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982.[23] The proposed system would have been capable of exponentially increasing productive capacity and the design could be modified to build self-replicating probes to explore the galaxy.

The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by its parent.

Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy.

A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included.

A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins."

A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further.[24] Some experts are beginning to consider self-replicating machines for asteroid mining.

Much of the design study was concerned with a simple, flexible chemical system for processing the ores, and the differences between the ratio of elements needed by the replicator, and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, needed to process regolith for aluminium. Chlorine is very rare in lunar regolith.

Lackner-Wendt Auxon replicators

In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system.[25][26][27] They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal didn't include a complete analysis of the system's material requirements, but described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O2 and H2) using a high-temperature carbothermic process. This proposal was popularized in Discover Magazine, featuring solar-powered desalination equipment used to irrigate the desert in which the system was based.[28] They named their machines "Auxons", from the Greek word auxein which means "to grow."

Recent work

Self-replicating rapid prototypers

RepRap 1.0 "Darwin" prototype

Early experimentation with rapid prototyping in 1997-2000 was not expressly oriented toward reproducing rapid prototyping systems themselves, but rather extended simulated "evolutionary robotics" techniques into the physical world. Later developments in rapid prototyping have given the process the ability to produce a wide variety of electronic and mechanical components, making this a rapidly developing frontier in self-replicating system research.[29]

In 1998 Chris Phoenix informally outlined a design for a hydraulically powered replicator a few cubic feet in volume that used ultraviolet light to cure soft plastic feedstock and a fluidic logic control system, but didn't address most of the details of assembly procedures, error rates, or machining tolerances.[30][31]
 
All of the plastic parts for the machine on the right were produced by the almost identical machine on the left. (Adrian Bowyer (left) and Vik Olliver(right) are members of the RepRap project.)

In 2005, Adrian Bowyer of the University of Bath started the RepRap Project to develop a rapid prototyping machine which would be able to manufacture some or most of its own components, making such machines cheap enough for people to buy and use in their homes. The project is releasing its designs and control programs under the GNU GPL.[32] The RepRap approach uses fused deposition modeling to manufacture plastic components, possibly incorporating conductive pathways for circuitry. Other components, such as steel rods, nuts and bolts, motors and separate electronic components, would be supplied externally. In 2006 the project produced a basic functional prototype and in May 2008 the machine succeeded in producing all of the plastic parts required to make a 'child' machine.

Some researchers have proposed a microfactory of specialized machines that support recursion—nearly all of the parts of all of the machines in the factory can be manufactured by the factory.[33]

NIAC studies on self-replicating systems

In the spirit of the 1980 "Advanced Automation for Space Missions" study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003. Four phase I grants were awarded:

Bootstrapping Self-Replicating Factories in Space

In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a bootstrapping approach to start self-replicating factories in space.[40] They developed this concept on the basis of In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient then grow to large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry.[41] Kalil requested the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that you do not need to launch it from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape."[42] In 2016, Metzger argued that fully self-replicating industry can be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.[43]

Cornell University's self-assembler

In 2005, a team of researchers at Cornell University, including Hod Lipson, implemented a self-assembling machine. The machine is composed of a tower of four articulated cubes, known as molecubes, which can revolve about a triagonal. This enables the tower to function as a robotic arm, collecting nearby molecubes and assembling them into a copy of itself. The arm is directed by a computer program, which is contained within each molecube, analogous to how each animal cell contains an entire copy of its DNA. However, the machine cannot manufacture individual molecubes, nor do they occur naturally, so its status as a self-replicator is debatable.[44]

New York University artificial DNA tile motifs

In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information.[45][46]

Self-replication of magnetic polymers

In 2001 Jarle Breivik at University of Oslo created a system of magnetic building blocks, which in response to temperature fluctuations, spontaneously form self-replicating polymers.[47]

Self-replication of neural circuits

In 1968 Zellig Harris wrote that "the metalanguage is in the language,"[48] suggesting that self-replication is part of language. In 1977 Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar.[49] Adding to it probabilities, Bertrand du Castel published in 2015 a self-replicating stochastic grammar and presented a mapping of that grammar to neural networks, thereby presenting a model for a self-replicating neural circuit.[50]

Partial construction

Partial construction is the concept that the constructor creates a partially constructed (rather than fully formed) offspring, which is then left to complete its own construction.[51][52]

The von Neumann model of self-replication envisages that the mother automaton should construct all portions of daughter automatons, without exception and prior to the initiation of such daughters. Partial construction alters the construction relationship between mother and daughter automatons, such that the mother constructs but a portion of the daughter, and upon initiating this portion of the daughter, thereafter retracts from imparting further influence upon the daughter. Instead, the daughter automaton is left to complete its own development. This is to say, means exist by which automatons may develop via the mechanism of a zygote.

Self-replicating spacecraft

The idea of an automated spacecraft capable of constructing copies of itself was first proposed in scientific literature in 1974 by Michael A. Arbib,[53][54] but the concept had appeared earlier in science fiction such as the 1967 novel Berserker by Fred Saberhagen or the 1950 novellette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas,[55] in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each.

Other references

  • A number of patents have been granted for self-replicating machine concepts.[56] The most directly relevant include U.S. Patent 4,734,856 "Autogeneric system" Inventor: Davis; Dannie E. (Elmore, AL) (March 1988), U.S. Patent 5,659,477 "Self reproducing fundamental fabricating machines (F-Units)" Inventor: Collins; Charles M. (Burke, VA) (August 1997), U.S. Patent 5,764,518 " Self reproducing fundamental fabricating machine system" Inventor: Collins; Charles M. (Burke, VA)(June 1998); Collins' PCT:[57] and U.S. Patent 6,510,359 "Method and system for self-replicating manufacturing stations" Inventors: Merkle; Ralph C. (Sunnyvale, CA), Parker; Eric G. (Wylie, TX), Skidmore; George D. (Plano, TX) (January 2003).
  • Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.[3]
  • In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts.[58] Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
  • In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors), in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references.[1] This book included a new molecular assembler design,[59] a primer on the mathematics of replication,[60] and the first comprehensive analysis of the entire replicator design space.[61]
  • In 2006, the strategy video game Sword of the Stars included an enemy of the Unknown Menace type called Von Neumann, which gradually replicated and spread throughout the galaxy once encountered.

Prospects for implementation

As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of self-sufficiency that is suggestive of self-replicating machines.[62] However, such factories are unlikely to achieve "full closure"[63] until the cost and flexibility of automated machinery comes close to that of human labour and the manufacture of spare parts and other components locally becomes more economical than transporting them from elsewhere. As Samuel Butler has pointed out in Erewhon, replication of partially closed universal machine tool factories is already possible. Since safety is a primary goal of all legislative consideration of regulation of such development, future development efforts may be limited to systems which lack either control, matter, or energy closure. Fully capable machine replicators are most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space).

An artificial replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time.[64] However, with robust error correction, and the possibility of external intervention, the common science fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future.

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...