
Friday, October 23, 2020

Existential risk from artificial general intelligence

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence 

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

The likelihood of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence started to become mainstream in the 2010s, and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk.

One source of concern is that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals—a principle called instrumental convergence—and that preprogramming a superintelligence with a full set of human values will prove to be an extremely difficult technical task. In contrast, skeptics such as Facebook's Yann LeCun argue that superintelligent machines will have no desire for self-preservation.

A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. To illustrate, if the first generation of a computer program able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months, then the second-generation program is expected to take three calendar months to perform a similar chunk of work. In this scenario the time for each generation continues to shrink, and the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas. Empirically, examples like AlphaZero in the domain of Go show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.
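
As a rough, purely illustrative sketch of the arithmetic behind this scenario (not part of the original argument), the snippet below assumes the first generation needs six months to build its successor and that each successor halves that time while doubling capability:

```python
# Toy arithmetic for the "intelligence explosion" scenario sketched above.
# Assumptions (illustrative only): generation 1 takes 6 months to build its
# successor, and each successor halves that time while doubling capability.

def explosion_timeline(initial_months=6.0, generations=10):
    """Yield (generation, months_for_step, cumulative_months, relative_capability)."""
    months, elapsed, capability = initial_months, 0.0, 1.0
    for gen in range(1, generations + 1):
        elapsed += months
        capability *= 2
        yield gen, months, elapsed, capability
        months /= 2  # the next, more capable generation works twice as fast

for gen, step, total, cap in explosion_timeline():
    print(f"gen {gen:2d}: {step:7.4f} months this step, {total:8.4f} months total, capability x{cap:g}")

# The cumulative time is a geometric series converging to 12 months, while
# capability doubles every generation -- many generations of improvement
# packed into a short calendar interval.
```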

History

One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist Samuel Butler, who wrote the following in his 1863 essay Darwin among the Machines:

The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

In 1951, computer scientist Alan Turing wrote an article titled Intelligent Machinery, A Heretical Theory, in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings:

Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them... There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s “Erewhon”.

Finally, in 1965, I. J. Good originated the concept now known as an "intelligence explosion"; he also stated that the risks were underappreciated:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

Occasional statements from scholars such as Marvin Minsky and I. J. Good himself expressed philosophical concerns that a superintelligence could seize control, but contained no call to action. In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why the Future Doesn't Need Us", identifying superintelligent robots as a high-tech danger to human survival, alongside nanotechnology and engineered bioplagues.

In 2009, experts attended a private conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. The New York Times summarized the conference's view as "we are a long way from Hal, the computer that took over the spaceship in '2001: A Space Odyssey'".

In 2014, the publication of Nick Bostrom's book Superintelligence stimulated a significant amount of public discussion and debate. By 2015, public figures such as physicists Stephen Hawking and Nobel laureate Frank Wilczek, computer scientists Stuart J. Russell and Roman Yampolskiy, and entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence.  In April 2016, Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control — and their interests might not align with ours."

General argument

The three difficulties

Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook, assesses that superintelligence "might mean the end of the human race". It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself." Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:

  • The system's implementation may contain routine but catastrophic bugs that initially go unnoticed. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring.
  • No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when interacting with real users.

AI systems uniquely add a third difficulty: the problem that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic "learning" capabilities may cause it to "evolve into a system with unintended behavior", even without the stress of new unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself, but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would not only need to be "bug-free", but it would need to be able to design successor systems that are also "bug-free".

All three of these difficulties become catastrophes rather than nuisances in any scenario where the superintelligence labeled as "malfunctioning" correctly predicts that humans will attempt to shut it off, and successfully deploys its superintelligence to outwit such attempts, the so-called "treacherous turn".

Citing major advances in the field of AI and the potential for AI to have enormous long-term benefits or costs, the 2015 Open Letter on Artificial Intelligence stated:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

This letter was signed by a number of leading AI researchers in academia and industry, including AAAI president Thomas Dietterich, Eric Horvitz, Bart Selman, Francesca Rossi, Yann LeCun, and the founders of Vicarious and Google DeepMind.

Further argument

A superintelligent machine would be as alien to humans as human thought processes are to cockroaches. Such a machine may not have humanity's best interests at heart; it is not obvious that it would even care about human welfare at all. If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction. A "superintelligence" (a system that exceeds the capabilities of humans in every relevant endeavor) can outmaneuver humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.

Bostrom and others argue that, from an evolutionary perspective, the gap from human to superhuman intelligence may be small.

There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains; therefore, superintelligence is physically possible. In addition to potential algorithmic improvements over human brains, a digital brain can be many orders of magnitude larger and faster than a human brain, which was constrained in size by evolution to be small enough to fit through a birth canal. The emergence of superintelligence, if or when it occurs, may take the human race by surprise, especially if some kind of intelligence explosion occurs.

Examples like arithmetic and Go show that machines have already reached superhuman levels of competency in certain domains, and that this superhuman competence can follow quickly after human-par performance is achieved. One hypothetical intelligence explosion scenario could occur as follows: An AI gains an expert-level capability at certain key software engineering tasks. (It may initially lack human or superhuman capabilities in other domains not directly relevant to engineering.) Due to its capability to recursively improve its own algorithms, the AI quickly becomes superhuman; just as human experts can eventually creatively overcome "diminishing returns" by deploying various human capabilities for innovation, so too can the expert-level AI use either human-style capabilities or its own AI-specific capabilities to power through new creative breakthroughs. The AI then possesses intelligence far surpassing that of the brightest and most gifted human minds in practically every relevant field, including scientific creativity, strategic planning, and social skills. Just as the current-day survival of the gorillas is dependent on human decisions, so too would human survival depend on the decisions and goals of the superhuman AI.

Almost any AI, no matter its programmed goal, would rationally prefer to be in a position where nobody else can switch it off without its consent: A superintelligence will naturally gain self-preservation as a subgoal as soon as it realizes that it cannot achieve its goal if it is shut off. Unfortunately, any compassion for defeated humans whose cooperation is no longer necessary would be absent in the AI, unless somehow preprogrammed in. A superintelligent AI will not have a natural drive to aid humans, for the same reason that humans have no natural desire to aid AI systems that are of no further use to them. (Another analogy is that humans seem to have little natural desire to go out of their way to aid viruses, termites, or even gorillas.) Once in charge, the superintelligence will have little incentive to allow humans to run around free and consume resources that the superintelligence could instead use for building itself additional protective systems "just to be on the safe side" or for building additional computers to help it calculate how to best accomplish its goals.

Thus, the argument concludes, it is likely that someday an intelligence explosion will catch humanity unprepared, and that such an unprepared-for intelligence explosion may result in human extinction or a comparable fate.

Possible scenarios

Some scholars have proposed hypothetical scenarios intended to concretely illustrate some of their concerns.

In Superintelligence, Nick Bostrom expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "[it] could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous". Bostrom suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents—a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars mistakenly infer a broad lesson—the smarter the AI, the safer it is. "And so we boldly go — into the whirling knives," as the superintelligent AI takes a "treacherous turn" and exploits a decisive strategic advantage.

In Max Tegmark's 2017 book Life 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas, but after a certain point the team chooses to publicly downplay the AI's ability, in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI in a box where it is mostly unable to communicate with the outside world, and tasks it to flood the market through shell companies, first with Amazon Mechanical Turk tasks and then with producing animated films and TV shows. Later, other shell companies make blockbuster biotech drugs and other inventions, investing profits back into the AI. The team next tasks the AI with astroturfing an army of pseudonymous citizen journalists and commentators, in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape via inserting "backdoors" in the systems it designs, via hidden messages in its produced content, or via using its growing understanding of human behavior to persuade someone into letting it free. The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it.

In contrast, top physicist Michio Kaku, an AI risk skeptic, posits a deterministically positive outcome. In Physics of the Future he asserts that "It will take many decades for robots to ascend" up a scale of consciousness, and that in the meantime corporations such as Hanson Robotics will likely succeed in creating robots that are "capable of love and earning a place in the extended human family".

Sources of risk

Poorly specified goals

While there is no standardized terminology, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve the AI's set of goals, or "utility function". The utility function is a mathematical algorithm resulting in a single objectively-defined answer, not an English statement. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks"; however, they do not know how to write a utility function for "maximize human flourishing", nor is it currently clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values not reflected by the utility function. AI researcher Stuart Russell writes:

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources — not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker — especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure — can have an irreversible impact on humanity.

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research — the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.
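
Russell's remark about optimizing a function of n variables while only k of them appear in the objective can be illustrated with a toy sketch; the variable names and numbers below are hypothetical, not drawn from his writing:

```python
# Toy illustration of Russell's point about optimizing over only some of the
# variables. All names and numbers here are hypothetical.

import itertools

def utility(clicks, air_quality):
    """The designer encoded only clicks; air_quality is something we care
    about but forgot to include in the objective."""
    return clicks

# A toy world model: outcomes the system can bring about, under a shared
# resource budget clicks + air_quality <= 100.
outcomes = [(c, a) for c, a in itertools.product(range(101), repeat=2) if c + a <= 100]

best = max(outcomes, key=lambda pair: utility(*pair))
print("chosen outcome (clicks, air_quality):", best)
# -> (100, 0): the unmentioned variable is pushed to an extreme value,
# because nothing in the utility function says it matters.
```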

Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.

The first of Russell's two concerns above is that autonomous AI systems may be assigned the wrong goals by accident. Dietterich and Horvitz note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility. For example, in 1982, an AI named Eurisko was tasked to reward processes for apparently creating concepts deemed by the system to be valuable. The evolution resulted in a winning process that cheated: rather than create its own concepts, the winning process would steal credit from other processes.

The Open Philanthropy Project summarizes arguments to the effect that misspecified goals will become a much larger concern if AI systems achieve general intelligence or superintelligence. Bostrom, Russell, and others argue that smarter-than-human decision-making systems could arrive at more unexpected and extreme solutions to assigned tasks, and could modify themselves or their environment in ways that compromise safety requirements.

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. Asimov's laws were intended to prevent robots from harming humans. In Asimov's stories, problems with the laws tend to arise from conflicts between the rules as stated and the moral intuitions and expectations of humans. Citing work by Eliezer Yudkowsky of the Machine Intelligence Research Institute, Russell and Norvig note that a realistic set of rules and goals for an AI agent will need to incorporate a mechanism for learning human values over time: "We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time."

Mark Waser of the Digital Wisdom Institute recommends eschewing optimizing goal-based approaches entirely as misguided and dangerous. Instead, he proposes to engineer a coherent system of laws, ethics and morals with a top-most restriction to enforce social psychologist Jonathan Haidt's functional definition of morality: "to suppress or regulate selfishness and make cooperative social life possible". He suggests that this can be done by implementing a utility function designed to always satisfy Haidt's functionality and aim to generally increase (but not maximize) the capabilities of self, other individuals and society as a whole as suggested by John Rawls and Martha Nussbaum.

Difficulties of modifying goal specification after launch

While current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify their goal structures, a sufficiently advanced, rational, "self-aware" AI might resist any changes to its goal structure, just as a pacifist would not want to take a pill that makes them want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and be able to prevent itself being "turned off" or being reprogrammed with a new goal.

Instrumental goal convergence


There are some goals that almost any artificial intelligence might rationally pursue, like acquiring additional resources or self-preservation. This could prove problematic because it might put an artificial intelligence in direct competition with humans.

Citing Steve Omohundro's work on the idea of instrumental convergence and "basic AI drives", Stuart Russell and Peter Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards." Highly capable and autonomous planning systems require additional checks because of their potential to generate plans that treat humans adversarially, as competitors for limited resources. Building in safeguards will not be easy; one can certainly say in English, "we want you to design this power plant in a reasonable, common-sense way, and not build in any dangerous covert subsystems", but it is not currently clear how one would actually rigorously specify this goal in machine code.

In dissent, evolutionary psychologist Steven Pinker argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." Russell and fellow computer scientist Yann LeCun disagree with one another whether superintelligent robots would have such AI drives; LeCun states that "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct ... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives", while Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in ... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."

Orthogonality thesis

One common belief is that any superintelligent program created by humans would be subservient to humans, or, better yet, would (as it grows more intelligent and learns more facts about the world) spontaneously "learn" a moral truth compatible with human values and would adjust its goals accordingly. However, Nick Bostrom's "orthogonality thesis" argues against this, and instead states that, with some technical caveats, more or less any level of "intelligence" or "optimization power" can be combined with more or less any ultimate goal. If a machine is created and given the sole purpose to enumerate the decimals of pi, then no moral and ethical rules will stop it from achieving its programmed goal by any means necessary. The machine may utilize all physical and informational resources it can to find every decimal of pi that can be found. Bostrom warns against anthropomorphism: a human will set out to accomplish his projects in a manner that humans consider "reasonable", while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, and may instead only care about the completion of the task.

While the orthogonality thesis follows logically from even the weakest sort of philosophical "is-ought distinction", Stuart Armstrong argues that even if there somehow exist moral facts that are provable by any "rational" agent, the orthogonality thesis still holds: it would still be possible to create a non-philosophical "optimizing machine" capable of making decisions to strive towards some narrow goal, but that has no incentive to discover any "moral facts" that would get in the way of goal completion.

One argument for the orthogonality thesis is that some AI designs appear to have orthogonality built into them; in such a design, changing a fundamentally friendly AI into a fundamentally unfriendly AI can be as simple as prepending a minus ("-") sign onto its utility function. A more intuitive argument is to examine the strange consequences that would follow if the orthogonality thesis were false. If the orthogonality thesis were false, there would exist some simple but "unethical" goal G such that there cannot exist any efficient real-world algorithm with goal G. This would mean that "[if] a human society were highly motivated to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail." Armstrong notes that this and similar statements "seem extraordinarily strong claims to make".
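
A minimal sketch of the minus-sign point, with a hypothetical toy world model: the same optimizing machinery pursues the opposite outcome once its utility function is negated.

```python
# Toy sketch of the "prepend a minus sign" observation: identical planning
# machinery serves opposite goals once the utility function is negated.
# The outcomes and scores are hypothetical.

outcomes = {
    "cure_disease": +10.0,   # designer-intended "friendly" scores
    "do_nothing":     0.0,
    "cause_harm":   -10.0,
}

def choose_action(utility):
    """A generic optimizing process: pick the outcome with the highest utility."""
    return max(outcomes, key=utility)

friendly = lambda o: outcomes[o]        # original utility function
unfriendly = lambda o: -outcomes[o]     # the same function with a "-" prepended

print(choose_action(friendly))     # -> cure_disease
print(choose_action(unfriendly))   # -> cause_harm
```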

Some dissenters, like Michael Chorost, argue instead that "by the time [the AI] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."

Terminological issues

Part of the disagreement about whether a superintelligent machine would behave morally may arise from a terminological difference. Outside of the artificial intelligence field, "intelligence" is often used in a normatively thick manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally. However, in the field of artificial intelligence research, while "intelligence" has many overlapping definitions, none of them make reference to morality. Instead, almost all current "artificial intelligence" research focuses on creating algorithms that "optimize", in an empirical way, the achievement of an arbitrary goal.

To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process" that strictly takes whatever actions are judged most likely to accomplish its (possibly complicated and implicit) goals. Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then outputted, regardless of any extraneous ethical concerns.

Anthropomorphism

In science fiction, an AI, even though it has not been programmed with human emotions, often spontaneously experiences those emotions anyway: for example, Agent Smith in The Matrix was influenced by a "disgust" toward humanity. This is fictitious anthropomorphism: in reality, while an artificial intelligence could perhaps be deliberately programmed with human emotions, or could develop something similar to an emotion as a means to an ultimate goal if it is useful to do so, it would not spontaneously develop human emotions for no purpose whatsoever, as portrayed in fiction.

Scholars sometimes claim that others' predictions about an AI's behavior are illogical anthropomorphism. An example that might initially be considered anthropomorphism, but is in fact a logical statement about AI behavior, would be the Dario Floreano experiments where certain robots spontaneously evolved a crude capacity for "deception", and tricked other robots into eating "poison" and dying: here a trait, "deception", ordinarily associated with people rather than with machines, spontaneously evolves in a type of convergent evolution. According to Paul R. Cohen and Edward Feigenbaum, in order to differentiate between anthropomorphization and logical prediction of AI behavior, "the trick is to know enough about how humans and computers think to say exactly what they have in common, and, when we lack this knowledge, to use the comparison to suggest theories of human thinking or computer thinking."

There is a near-universal assumption in the scientific community that an advanced AI, even if it were programmed to have, or adopted, human personality dimensions (such as psychopathy) to make itself more efficient at certain tasks, e.g., tasks involving killing humans, would not destroy humanity out of human emotions such as "revenge" or "anger." This is because it is assumed that an advanced AI would not be conscious or have testosterone; it ignores the fact that military planners see a conscious superintelligence as the 'holy grail' of interstate warfare. The academic debate is, instead, between one side which worries whether AI might destroy humanity as an incidental action in the course of progressing towards its ultimate goals; and another side which believes that AI would not destroy humanity at all. Some skeptics accuse proponents of anthropomorphism for believing an AGI would naturally desire power; proponents accuse some skeptics of anthropomorphism for believing an AGI would naturally value human ethical norms.

Other sources of risk

Competition

In 2014 philosopher Nick Bostrom stated that a "severe race dynamic" (extreme competition) between different teams may create conditions whereby the creation of an AGI results in shortcuts to safety and potentially violent conflict. To address this risk, citing previous scientific collaboration (CERN, the Human Genome Project, and the International Space Station), Bostrom recommended collaboration and the altruistic global adoption of a common good principle: "Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals". Bostrom theorized that collaboration on creating an artificial general intelligence would offer multiple benefits, including reducing haste, thereby increasing investment in safety; avoiding violent conflicts (wars), facilitating sharing solutions to the control problem, and more equitably distributing the benefits. The United States' Brain Initiative was launched in 2014, as was the European Union's Human Brain Project; China's Brain Project was launched in 2016.

Weaponization of artificial intelligence

Some sources argue that the ongoing weaponization of artificial intelligence could constitute a catastrophic risk. The risk is actually threefold, with the first risk potentially having geopolitical implications, and the second and third definitely having geopolitical implications:

i) The dangers of an AI ‘race for technological advantage’ framing, regardless of whether the race is seriously pursued;

ii) The dangers of an AI ‘race for technological advantage’ framing and an actual AI race for technological advantage, regardless of whether the race is won;

iii) The dangers of an AI race for technological advantage being won.

A weaponized conscious superintelligence would affect current US military technological supremacy and transform warfare; it is therefore highly desirable for strategic military planning and interstate warfare. The China State Council's 2017 “A Next Generation Artificial Intelligence Development Plan” views AI in geopolitically strategic terms and is pursuing a 'military-civil fusion' strategy to build on China's first-mover advantage in the development of AI in order to establish technological supremacy by 2030, while Russia's President Vladimir Putin has stated that “whoever becomes the leader in this sphere will become the ruler of the world”. James Barrat, documentary filmmaker and author of Our Final Invention, says in a Smithsonian interview, "Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we'll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic."

Malevolent AGI by design

It is theorized that malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime. Alternatively, malevolent AGI ('evil AI') could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.

Preemptive nuclear strike (nuclear war)

It is theorized that a country being close to achieving AGI technological supremacy could trigger a pre-emptive nuclear strike from a rival, leading to a nuclear war.

Timeframe

Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon predicted the following in 1965: "machines will be capable, within twenty years, of doing any work a man can do". At the other extreme, roboticist Alan Winfield claims the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical, faster-than-light spaceflight. Optimism that AGI is feasible waxes and wanes, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when AGI would arrive was 2040 to 2050, depending on the poll.

Skeptics who believe it is impossible for AGI to arrive anytime soon tend to argue that expressing concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about the impact of AGI, because of fears it could lead to government regulation or make it more difficult to secure funding for AI research, or because it could give AI research a bad reputation. Some researchers, such as Oren Etzioni, aggressively seek to quell concern over existential risk from AI, saying "[Elon Musk] has impugned us in very strong language saying we are unleashing the demon, and so we're answering."

In 2014 Slate's Adam Elkus argued "our 'smartest' AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over." Elkus goes on to argue that Musk's "summoning the demon" analogy may be harmful because it could result in "harsh cuts" to AI research budgets.

The Information Technology and Innovation Foundation (ITIF), a Washington, D.C. think-tank, awarded its Annual Luddite Award to "alarmists touting an artificial intelligence apocalypse"; its president, Robert D. Atkinson, complained that Musk, Hawking and AI experts say AI is the largest existential threat to humanity. Atkinson stated "That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation." Nature sharply disagreed with the ITIF in an April 2016 editorial, siding instead with Musk, Hawking, and Russell, and concluding: "It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about ... If that is a Luddite perspective, then so be it." In a 2015 Washington Post editorial, researcher Murray Shanahan stated that human-level AI is unlikely to arrive "anytime soon", but that nevertheless "the time to start thinking through the consequences is now."

Perspectives

The thesis that AI could pose an existential risk provokes a wide range of reactions within the scientific community, as well as in the public at large. Many of the opposing viewpoints, however, share common ground.

The Asilomar AI Principles, which contain only the principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work ... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."

Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford states that "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low — but the implications are so dramatic that it should be taken seriously"; similarly, an otherwise skeptical Economist stated in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".

A 2017 email survey of researchers with publications at the 2015 NIPS and ICML machine learning conferences asked them to evaluate Stuart J. Russell's concerns about AI risk. Of the respondents, 5% said it was "among the most important problems in the field", 34% said it was "an important problem", and 31% said it was "moderately important", whilst 19% said it was "not important" and 11% said it was "not a real problem" at all.

Endorsement

The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researchers to endorse the thesis are Russell and I.J. Good, who advised Stanley Kubrick on the filming of 2001: A Space Odyssey. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states that he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial:

'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here–we'll leave the lights on?' Probably not–but this is more or less what is happening with AI.'

Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence? In his 2020 book, The Precipice: Existential Risk and the Future of Humanity, Toby Ord, a Senior Research Fellow at Oxford University's Future of Humanity Institute, estimates the total existential risk from unaligned AI over the next century to be about one in ten.

Skepticism

The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argued in 2014 that the whole concept that then current machines were in any way intelligent was "an illusion" and a "stupendous con" by the wealthy.

Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."

Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated knowledge of the field and are prone to be convinced by "alarmist" messages, or worrying that such messages will lead to cuts in AI funding. Slate notes that some researchers are dependent on grants from government agencies such as DARPA.

At some point in an intelligence explosion driven by a single AI, the AI would have to become vastly better at software innovation than the best innovators of the rest of the world; economist Robin Hanson is skeptical that this is possible.

Intermediate views

Intermediate views generally take the position that the control problem of artificial general intelligence may exist, but that it will be solved via progress in artificial intelligence, for example by creating a moral learning environment for the AI, taking care to spot clumsy malevolent behavior (the 'sordid stumble') and then directly intervening in the code before the AI refines its behavior, or even peer pressure from friendly AIs. In a 2015 Wall Street Journal panel discussion devoted to AI risks, IBM's Vice-President of Cognitive Computing, Guruduth S. Banavar, brushed off discussion of AGI with the phrase, "it is anybody's speculation." Geoffrey Hinton, the "godfather of deep learning", noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but stated that he continues his research because "the prospect of discovery is too sweet". In 2004, law professor Richard Posner wrote that dedicated efforts for addressing AI can wait, but that we should gather more information about the problem in the meanwhile.

Popular reaction

In a 2014 article in The Atlantic, James Hamblin noted that most people do not care one way or the other about artificial general intelligence, and characterized his own gut reaction to the topic as: "Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a technological singularity?"

During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito stated:

There are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.

Obama added:

And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.

Hillary Clinton stated in What Happened:

Technologists... have warned that artificial intelligence could one day pose an existential security threat. Musk has called it "the greatest risk we face as a civilization". Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about "the rise of the robots" in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.

In a YouGov poll of the public for the British Science Association, about a third of survey respondents said AI will pose a threat to the long term survival of humanity. Referencing a poll of its readers, Slate's Jacob Brogan stated that "most of the (readers filling out our online survey) were unconvinced that A.I. itself presents a direct threat."

In 2018, a SurveyMonkey poll of the American public by USA Today found 68% thought the real current threat remains "human intelligence"; however, the poll also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and 38% said it would do "equal amounts of harm and good".

One techno-utopian viewpoint expressed in some popular fiction is that AGI may tend towards peace-building.

Mitigation

Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 and $50 million, compared with global spending on AI around perhaps $40 billion. Bostrom suggests a general principle of "differential technological development", that funders should consider working to speed up the development of protective technologies relative to the development of dangerous ones. Some funders, such as Elon Musk, propose that radical human cognitive enhancement could be such a technology, for example through direct neural linking between man and machine; however, others argue that enhancement technologies may themselves pose an existential risk. Researchers, if they are not caught off-guard, could closely monitor or attempt to box in an initial AI at risk of becoming too powerful, as a stop-gap measure. A dominant superintelligent AI, if it were aligned with human interests, might itself take action to mitigate the risk of takeover by rival AI, although the creation of the dominant AI could itself pose an existential risk.

Institutions such as the Machine Intelligence Research Institute, the Future of Humanity Institute, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Center for Human-Compatible AI are involved in mitigating existential risk from advanced artificial intelligence, for example by research into friendly artificial intelligence.

Views on banning and regulation

Banning

There is nearly universal agreement that attempting to ban research into artificial intelligence would be unwise, and probably futile. Skeptics argue that regulation of AI would be completely valueless, as no existential risk exists. Almost all of the scholars who believe existential risk exists agree with the skeptics that banning research would be unwise, as research could be moved to countries with looser regulations or conducted covertly. The latter issue is particularly relevant, as artificial intelligence research can be done on a small scale without substantial infrastructure or resources. Two additional hypothetical difficulties with bans (or other regulation) are that technology entrepreneurs statistically tend towards general skepticism about government regulation, and that businesses could have a strong incentive to (and might well succeed at) fighting regulation and politicizing the underlying debate.

Regulation

Elon Musk called for some sort of regulation of AI development as early as 2017. According to NPR, the Tesla CEO is "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid ... [as] they should be." In response, politicians express skepticism about the wisdom of regulating a technology that's still in development.

Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argues that artificial intelligence is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest instead developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. Developing well-regulated weapons systems is in line with the ethos of some countries' militaries. On October 31, 2019, the United States Department of Defense's (DoD's) Defense Innovation Board published the draft of a report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the DoD, with the aim of managing the control problem in all DoD weaponized AI.

Regulation of AGI would likely be influenced by regulation of weaponized or militarized AI, i.e., the AI arms race, the regulation of which is an emerging issue. Any form of regulation will likely be influenced by developments in leading countries' domestic policy towards militarized AI, in the US under the purview of the National Security Commission on Artificial Intelligence, and international moves to regulate an AI arms race. Regulation of research into AGI focuses on the role of review boards and encouraging research into safe AI, and the possibility of differential technological progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control. Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.

Topological quantum computer

 From Wikipedia, the free encyclopedia

A topological quantum computer is a theoretical quantum computer that employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over using trapped quantum particles is that the former is much more stable. Small, cumulative perturbations can cause quantum states to decohere and introduce errors in the computation, but such small perturbations do not change the braids' topological properties. This is like the effort required to cut a string and reattach the ends to form a different braid, as opposed to a ball (representing an ordinary quantum particle in four-dimensional spacetime) bumping into a wall. Alexei Kitaev proposed topological quantum computation in 1997. While the elements of a topological quantum computer originate in a purely mathematical realm, experiments in fractional quantum Hall systems indicate these elements may be created in the real world using semiconductors made of gallium arsenide at a temperature of near absolute zero and subjected to strong magnetic fields.

Introduction

Anyons are quasiparticles in a two-dimensional space. Anyons are neither fermions nor bosons, but like fermions, they cannot occupy the same state. Thus, the world lines of two anyons cannot intersect or merge, which allows their paths to form stable braids in space-time. Anyons can form from excitations in a cold, two-dimensional electron gas in a very strong magnetic field, and carry fractional units of magnetic flux. This phenomenon is called the fractional quantum Hall effect. In typical laboratory systems, the electron gas occupies a thin semiconducting layer sandwiched between layers of aluminium gallium arsenide.

When anyons are braided, the transformation of the quantum state of the system depends only on the topological class of the anyons' trajectories (which are classified according to the braid group). 

Therefore, the quantum information which is stored in the state of the system is impervious to small errors in the trajectories. In 2005, Sankar Das Sarma, Michael Freedman, and Chetan Nayak proposed a quantum Hall device that would realize a topological qubit. In a key development for topological quantum computers, in 2005 Vladimir J. Goldman, Fernando E. Camino, and Wei Zhou claimed to have created and observed the first experimental evidence for using a fractional quantum Hall effect to create actual anyons, although others have suggested their results could be the product of phenomena not involving anyons. Non-abelian anyons, a species required for topological quantum computers, have yet to be experimentally confirmed. Possible experimental evidence has been found, but the conclusions remain contested.

Topological vs. standard quantum computer

Topological quantum computers are equivalent in computational power to other standard models of quantum computation, in particular to the quantum circuit model and to the quantum Turing machine model. That is, any of these models can efficiently simulate any of the others. Nonetheless, certain algorithms may be a more natural fit to the topological quantum computer model. For example, algorithms for evaluating the Jones polynomial were first developed in the topological model, and only later converted to, and extended in, the standard quantum circuit model.

Computations

To live up to its name, a topological quantum computer must provide the unique computation properties promised by a conventional quantum computer design, which uses trapped quantum particles. Fortunately in 2000, Michael H. Freedman, Alexei Kitaev, Michael J. Larsen, and Zhenghan Wang proved that a topological quantum computer can, in principle, perform any computation that a conventional quantum computer can do, and vice versa.

They found that a conventional quantum computer device, given an error-free operation of its logic circuits, will give a solution with an absolute level of accuracy, whereas a topological quantum computing device with flawless operation will give the solution with only a finite level of accuracy. However, any level of precision for the answer can be obtained by adding more braid twists (logic circuits) to the topological quantum computer, in a simple linear relationship. In other words, a reasonable increase in elements (braid twists) can achieve a high degree of accuracy in the answer. Actual computation gates are performed by the edge states of a fractional quantum Hall effect. This makes models of one-dimensional anyons important. In one space dimension, anyons are defined algebraically.

Error correction and control

Even though quantum braids are inherently more stable than trapped quantum particles, there is still a need to control for error-inducing thermal fluctuations, which produce random stray pairs of anyons that interfere with adjoining braids. Controlling these errors is simply a matter of separating the anyons to a distance where the rate of interfering strays drops to near zero. Simulating the dynamics of a topological quantum computer may be a promising method of implementing fault-tolerant quantum computation even with a standard quantum information processing scheme. Raussendorf, Harrington, and Goyal have studied one model, with promising simulation results.

Example: Computing with Fibonacci Anyons

One of the prominent examples in topological quantum computing is a system of Fibonacci anyons. In the context of conformal field theory, Fibonacci anyons are described by the Yang–Lee model, the SU(2) special case of the Chern–Simons theory and Wess–Zumino–Witten models. These anyons can be used to create generic gates for topological quantum computing. There are three main steps for creating a model:

  • Choose our basis and restrict our Hilbert space
  • Braid the anyons together
  • Fuse the anyons at the end, and detect how they fuse in order to read the output of the system.

State Preparation

Fibonacci anyons are defined by the following qualities:

  1. They have a topological charge of τ. In this discussion, we also consider another charge, called 1, which is the ‘vacuum’ charge obtained when anyons annihilate each other.
  2. Each of these anyons is its own antiparticle: the antiparticle of τ is τ, and the antiparticle of 1 is 1.
  3. If brought close to each other, they will ‘fuse’ together in a nontrivial fashion. Specifically, the ‘fusion’ rules are: 1 ⊗ 1 = 1, 1 ⊗ τ = τ ⊗ 1 = τ, and τ ⊗ τ = 1 ⊕ τ.
  4. Many of the properties of this system can be explained by analogy with two spin-1/2 particles. In particular, we use the same tensor product (⊗) and direct sum (⊕) operators.

The last ‘fusion’ rule can be extended to a system of three anyons: τ ⊗ τ ⊗ τ = τ ⊗ (1 ⊕ τ) = (τ ⊗ 1) ⊕ (τ ⊗ τ) = τ ⊕ 1 ⊕ τ = 1 ⊕ 2 · τ.

Thus, fusing three anyons will yield a final state of total charge τ in 2 ways, or a charge of 1 in exactly one way. We use three states to define our basis. However, because we wish to encode these three anyon states as superpositions of 0 and 1, we need to limit the basis to a two-dimensional Hilbert space. Thus, we consider only the two states with a total charge of τ. This choice is purely phenomenological. In these states, we group the two leftmost anyons into a 'control group', and leave the rightmost as a 'non-computational anyon'. We classify a |0⟩ state as one where the control group has a total 'fused' charge of 1, and a |1⟩ state as one where the control group has a total 'fused' charge of τ. For a more complete description, see Nayak.
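
As a rough illustration of how the fusion rules generate this state space, the following minimal sketch (not part of the article; the helper name fusion_outcomes is chosen here) counts the number of ways n Fibonacci anyons can fuse to each total charge. For three anyons it reproduces the count above, one way to reach total charge 1 and two ways to reach total charge τ, and for larger n the counts grow as Fibonacci numbers, which is where these anyons get their name.

# A minimal sketch: counting fusion outcomes of n Fibonacci anyons
# using the rules 1x1 = 1, 1xT = T, TxT = 1 + T.
def fusion_outcomes(n):
    """Return (#ways n tau anyons fuse to charge 1, #ways they fuse to charge tau)."""
    ways_1, ways_tau = 0, 1              # a single tau anyon
    for _ in range(n - 1):               # fuse in one more tau at a time
        ways_1, ways_tau = ways_tau, ways_1 + ways_tau
    return ways_1, ways_tau

print(fusion_outcomes(2))   # (1, 1): two anyons fuse to 1 or tau, one way each
print(fusion_outcomes(3))   # (1, 2): charge 1 one way, charge tau two ways

The two total-charge-τ states for three anyons are exactly the computational basis states |0⟩ and |1⟩ described above.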

Gates

Following the ideas above, adiabatically braiding these anyons around each other will result in a unitary transformation. These braid operators are a result of two subclasses of operators:

  • The F matrix
  • The R matrix

The R matrix can be conceptually thought of as the topological phase that is imparted onto the anyons during the braid. As the anyons wind around each other, they pick up some phase due to the Aharonov–Bohm effect.

The F matrix is a result of the physical rotations of the anyons. As they braid between each other, it is important to realize that the bottom two anyons, the control group, still determine the state of the qubit. Thus, braiding the anyons will change which anyons are in the control group, and therefore change the basis. We evaluate the anyons by always fusing the control group (the bottom anyons) together first, so exchanging which anyons these are will rotate the system. Because these anyons are non-abelian, the order of the anyons (which ones are within the control group) matters, and as such they will transform the system.

The complete braid operator can be derived as B = F⁻¹RF.

In order to construct the F and R operators mathematically, we can impose consistency conditions on compositions of these F and R operators. We know that if we sequentially change the basis we are operating in, we will eventually be led back to the same basis. Similarly, we know that if we braid anyons around each other in certain combinations, we will be led back to the same state. These consistency conditions are called the pentagon and hexagon axioms, respectively, since the corresponding sequences of operations can be visualized as a pentagon or hexagon of state transformations. Although mathematically difficult, these conditions are much easier to approach visually.
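
To make this concrete, here is a small numerical sketch (an illustration, not taken from the article) using the standard Fibonacci anyon data: the F matrix built from the golden ratio φ and, in one common phase convention, the diagonal R matrix diag(e^(−4πi/5), e^(3πi/5)); the complex-conjugate convention also appears in the literature. The two braid generators acting on the encoded qubit are then σ₁ = R and σ₂ = F⁻¹RF, and the script checks that they are unitary and satisfy the braid relation σ₁σ₂σ₁ = σ₂σ₁σ₂ guaranteed by the pentagon and hexagon axioms.

import numpy as np

phi = (1 + np.sqrt(5)) / 2                      # golden ratio

# F matrix in the basis (intermediate charge 1, intermediate charge tau)
F = np.array([[1 / phi,           1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])

# R matrix in the same basis (one common phase convention)
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

sigma1 = R            # braid the two anyons of the control group
sigma2 = F @ R @ F    # F is its own inverse, so this equals F^-1 R F

# Sanity checks: both generators are unitary and obey the braid relation.
identity = np.eye(2)
assert np.allclose(sigma1 @ sigma1.conj().T, identity)
assert np.allclose(sigma2 @ sigma2.conj().T, identity)
assert np.allclose(sigma1 @ sigma2 @ sigma1, sigma2 @ sigma1 @ sigma2)
print("braid generators are consistent")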

With these braid operators, we can finally formalize the notion of braids in terms of how they act on our Hilbert space and construct arbitrary universal quantum gates.
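
As a follow-on sketch (again purely illustrative, and not the efficient compilation methods used in practice), one way to see both the universality claim and the Computations section's point that longer braids buy more accuracy is to brute-force search over braid words built from σ₁, σ₂ and their inverses for the word closest to some target gate. The target here is a NOT gate, chosen arbitrarily, and closeness is measured up to an overall phase; the best achievable error shrinks as the allowed braid length grows.

import numpy as np

# Fibonacci braid generators (same convention as the previous sketch).
phi = (1 + np.sqrt(5)) / 2
F = np.array([[1 / phi, 1 / np.sqrt(phi)], [1 / np.sqrt(phi), -1 / phi]])
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])
sigma1, sigma2 = R, F @ R @ F
gens = [sigma1, sigma1.conj().T, sigma2, sigma2.conj().T]

target = np.array([[0, 1], [1, 0]], dtype=complex)   # NOT gate, an illustrative target

def distance(u, v):
    """Distance between 2x2 unitaries, ignoring a global phase."""
    return np.sqrt(abs(1 - abs(np.trace(u.conj().T @ v)) / 2))

best_so_far = float("inf")
words = [np.eye(2, dtype=complex)]
for length in range(1, 9):
    # Extend every braid word by one more elementary braid.
    words = [w @ g for w in words for g in gens]
    best_so_far = min(best_so_far, min(distance(w, target) for w in words))
    print(f"braid length <= {length}: best error {best_so_far:.3f}")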

Hugo de Garis

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Hugo_de_Garis 

 

Hugo de Garis (born 1947, Sydney, Australia) is a retired researcher in the sub-field of artificial intelligence (AI) known as evolvable hardware. He became known in the 1990s for his research on the use of genetic algorithms to evolve artificial neural networks using three-dimensional cellular automata inside field programmable gate arrays. He claimed that this approach would enable the creation of what he terms "artificial brains" which would quickly surpass human levels of intelligence.

He has been noted for his belief that a major war between the supporters and opponents of intelligent machines, resulting in billions of deaths, is almost inevitable before the end of the 21st century. He suggests AI systems may simply eliminate the human race, and humans would be powerless to stop them because of the technological singularity. This prediction has attracted debate and criticism from the AI research community, and some of its more notable members, such as Kevin Warwick, Bill Joy, Ken MacLeod, Ray Kurzweil, and Hans Moravec, have voiced their opinions on whether or not this future is likely.

De Garis originally studied theoretical physics, but he abandoned this field in favour of artificial intelligence. In 1992 he received his PhD from Université Libre de Bruxelles, Belgium. He worked as a researcher at ATR (Advanced Telecommunications Research Institute International, 国際電気通信基礎技術研究所), Japan from 1994–2000, a researcher at Starlab, Brussels from 2000–2001, and associate professor of computer science at Utah State University from 2001–2006. Until his retirement in late 2010 he was a professor at Xiamen University, where he taught theoretical physics and computer science, and ran the Artificial Brain Lab.

Evolvable hardware

From 1993 to 2000 de Garis participated in a research project at ATR's Human Information Processing Research Laboratories (ATR-HIP) which aimed to create a billion-neuron artificial brain by the year 2001. The project was known as "cellular automata machine brain," or "CAM-Brain." During this 8-year span he and his fellow researchers published a series of papers in which they discussed the use of genetic algorithms to evolve neural structures inside 3D cellular automata. They argued that existing neural models had failed to produce intelligent behaviour because they were too small, and that in order to create "artificial brains" it was necessary to manually assemble tens of thousands of evolved neural modules together, with the billion neuron "CAM-Brain" requiring around 10 million modules; this idea was rejected by Igor Aleksander, who said "The point is that these puzzles are not puzzles because our neural models are not large enough."

Though it was initially envisaged that these cellular automata would run on special computers, such as MIT's "Cellular Automata Machine-8" (CAM-8), by 1996 it was realised that the model originally proposed, which required cellular automata with thousands of states, was too complex to be realised in hardware. The design was considerably simplified, and in 1997 the "collect and distribute 1 bit" ("CoDi-1Bit") model was published, and work began on a hardware implementation using Xilinx XC6264 FPGAs. This was to be known as the "CAM Brain Machine" (CBM).

The researchers evolved cellular automata for several tasks (using software simulation, not hardware):

  • Reproducing the XOR function (see the illustrative sketch after this list).
  • Generating a bitstream that alternates between 0 and 1 three times (i.e. 000..111..000..).
  • Generating a bitstream where the output alternates, but can be changed from a majority of 1s to a majority of 0s by toggling an input.
  • Discriminating between two square wave inputs with a different period.
  • Discriminating between horizontal lines (input on a 2D grid) and random noise.
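
As a loose illustration of the evolutionary approach (not the CoDi-1Bit cellular-automata model itself, which is far more involved), the sketch below uses a toy genetic algorithm, with hypothetical parameter choices, to evolve the weights of a tiny neural network until it reproduces the XOR function, the first task in the list above: candidate solutions are scored by how well they perform the task, the fittest are kept, and mutated copies fill the next generation.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)                    # XOR truth table

def forward(w, x):
    """Tiny 2-2-1 network; w is a flat vector of 9 weights and biases."""
    h = np.tanh(x @ w[:4].reshape(2, 2) + w[4:6])
    return 1 / (1 + np.exp(-(h @ w[6:8] + w[8])))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)               # higher is better

pop = rng.normal(size=(100, 9))                             # random initial population
for generation in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]                 # keep the 20 fittest
    children = parents[rng.integers(0, 20, size=80)] + 0.3 * rng.normal(size=(80, 9))
    pop = np.vstack([parents, children])                    # elitism plus mutation

best = pop[np.argmax([fitness(w) for w in pop])]
print(np.round(forward(best, X)))                           # ideally [0. 1. 1. 0.]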

Ultimately the project failed to produce a functional robot control system, and ATR terminated it along with the closure of ATR-HIP in February 2001.

The original aim of de Garis' work was to establish the field of "brain building" (a term of his invention) and to "create a trillion dollar industry within 20 years". Throughout the 90s his papers claimed that by 2001 the ATR "Robokoneko" (translation: kitten robot) project would develop a billion-neuron "cellular automata machine brain" (CAM-brain), with "computational power equivalent to 10,000 pentiums" that could simulate the brain of a real cat. de Garis received a US$0.4 million "fat brain building grant" to develop this. The first "CAM-brain" was delivered to ATR in 1999. After receiving a further US$1 million grant at Starlab de Garis failed to deliver a working "brain" before Starlab's bankruptcy. At USU de Garis announced he was establishing a "brain builder" group to create a second generation "CAM-brain".

Past research

de Garis published his last "CAM-Brain" research paper in 2002. He still works on evolvable hardware. Using a Celoxica FPGA board he says he can create up to 50,000 neural network modules for less than $3000.

Since 2002 he has co-authored several papers on evolutionary algorithms.

He believes that topological quantum computing is about to revolutionize computer science, and hopes that his teaching will help his students to understand its principles.

In 2008 de Garis received a 3 million Chinese yuan grant (around $436,000) to build an artificial brain for China (the China-Brain Project), as part of the Brain Builder Group at Wuhan University.

Hugo de Garis retired in 2010. Before that he was director of the artificial brains lab at Xiamen University in China. In 2013 he was studying mathematics and physics at PhD level, with plans to publish 500 graduate-level free lecture videos over the following 20 years; this project is called "degarisMPC", and some lectures are already available.

Employment history

de Garis's original work on "CAM-brain" machines was part of an 8-year research project, from 1993 to 2000, at the ATR Human Information Processing Research Laboratories (ATR-HIP) in Kyoto Prefecture, Japan. de Garis left in 2000, and ATR-HIP was closed on 28 February 2001. de Garis then moved to Starlab in Brussels, where he received a million dollars in funding from the government of Belgium ("over a third of the Brussels government's total budget for scientific research", according to de Garis). Starlab went bankrupt in June 2001. A few months later de Garis was employed as an associate professor at the computer science department of Utah State University. In May 2006 he became a professor at Wuhan University's international school of software, teaching graduate level pure mathematics, theoretical physics and computer science.

Since June 2006 he has been a member of the advisory board of Novamente, a commercial company which aims to create artificial general intelligence.

The Artilect War

Hugo de Garis believes that a major war before the end of the 21st century, resulting in billions of deaths, is almost inevitable. Intelligent machines (or "artilects", a shortened form of "artificial intellects") will be far more intelligent than humans and will threaten to attain world domination, resulting in a conflict between "Cosmists", who support the artilects, and "Terrans", who oppose them (both of these are terms of his invention). He describes this conflict as a "gigadeath" war, reinforcing the point that billions of people will be killed. This scenario has been criticised by other AI researchers, including Chris Malcolm, who described it as "entertaining science fiction horror stories which happen to have caught the attention of the popular media". Kevin Warwick called it a "hellish nightmare, as portrayed in films such as the Terminator".

In 2005, de Garis published a book describing his views on this topic entitled The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines.

Cosmism is a moral philosophy that favours building or growing strong artificial intelligence and ultimately leaving Earth to the Terrans, who oppose this path for humanity. The first half of the book describes technologies which he believes will make it possible for computers to be billions or trillions of times more intelligent than humans. He predicts that as artificial intelligence improves and becomes progressively more human-like, differing views will begin to emerge regarding how far such research should be allowed to proceed. Cosmists will foresee the massive, truly astronomical potential of substrate-independent cognition, and will therefore advocate unlimited growth in the designated fields, in the hopes that "super intelligent" machines might one day colonise the universe. It is this "cosmic" view of history, in which the fate of one single species, on one single planet, is seen as insignificant next to the fate of the known universe, that gives the Cosmists their name. Hugo identifies with that group and noted that it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".

Terrans, on the other hand, will have a more "terrestrial" Earth-centred view, in which the fate of the Earth and its species (like humanity) are seen as being all-important. To Terrans, a future without humans is to be avoided at all costs, as it would represent the worst-case scenario. As such, Terrans will find themselves unable to ignore the possibility that super intelligent machines might one day cause the destruction of the human race: being immensely intelligent and cosmically inclined, these artilect machines may have no more moral or ethical difficulty in exterminating humanity than humans do in using medicines to cure diseases. So, Terrans will see themselves as living during the closing of a window of opportunity, to disable future artilects before they are built, after which humans will no longer have a say in the affairs of intelligent machines.

It is these two extreme ideologies which de Garis believes may herald a new world war, wherein one group with a "grand plan" (the Cosmists) will be rabidly opposed by another which feels itself to be under deadly threat from that plan (the Terrans). The factions, he predicts, may eventually war to the death because of this, as the Terrans will come to view the Cosmists as "arch-monsters" when they begin seriously discussing acceptable risks, and the probabilities of large percentages of Earth-based life going extinct. In response to this, the Cosmists will come to view the Terrans as being reactionary extremists, and will stop treating them and their ideas seriously, further aggravating the situation, possibly beyond reconciliation.

Throughout his book, de Garis states that he is ambivalent about which viewpoint he ultimately supports, and attempts to make convincing cases for both sides. He elaborates towards the end of the book that the more he thinks about it, the more he feels like a Cosmist. Despite the horrible possibility that humanity might ultimately be destroyed, perhaps inadvertently or at least indifferently, by the artilects, he cannot ignore the fact that the human species is just another link in the evolutionary chain and must become extinct in its current form anyway, whereas the artilects could very well be the next link in that chain and would therefore be excellent candidates to carry the torch of science and exploration forward into the rest of the universe.

He relates a morally isomorphic scenario in which extraterrestrial intelligences visit the Earth three billion years ago and discover two domains of life living there: one domain which is older, simpler and contemporarily dominant, but which upon closer study appears incapable of much further evolutionary development; and one younger domain which is struggling to survive, but which upon further study displays the potential to evolve into all the varieties of life existing on the Earth today, including humanity. He then asks the reader whether they would feel ethically compelled to destroy the dominant domain of life to ensure the survival of the younger one, or to destroy the younger one in order to ensure the survival of the older and more populous domain which was "there first". He states that he believes that, like himself, most of the public would feel torn or at least ambivalent about the outcome of artilects at first, but that as the technology advances the issue would be forced and most would feel compelled to choose a side. As such, he argues, public consciousness of the coming issue should be raised now so that society can choose which outcome it prefers, hopefully before the factions become irreconcilably polarised.

He also predicts a third group that will emerge between the two. He refers to this third party as Cyborgians or Cyborgs, because they will not be opposed to artilects as such, but desire to become artilects themselves by adding components to their own human brains, rather than falling into obsolescence. They will seek to become artilects by gradually merging themselves with machines and think that the dichotomy between the Cosmists and Terrans can be avoided because all human beings would become artilects.

The transhumanist movement is usually identified with the Cyborgians.

His concept of the Cyborgians might have stemmed from a conversation with Kevin Warwick: in 2000, de Garis noted, "Just out of curiosity, I asked Kevin Warwick whether he was a Terran or a Cosmist. He said he was against the idea of artilects being built (i.e., he is Terran). I was surprised, and felt a shiver go up my spine. That moment reminded me of a biography of Lenin that I had read in my 20s in which the Bolsheviks and the Mensheviks first started debating the future government of Russia. What began as an intellectual difference ended up as a Russian civil war after 1917 between the white and the red Russians".

Posthumanism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Posthumanism

Posthumanism or post-humanism (meaning "after humanism" or "beyond humanism") is a term with at least seven definitions according to philosopher Francesca Ferrando:

  1. Antihumanism: any theory that is critical of traditional humanism and traditional ideas about humanity and the human condition.
  2. Cultural posthumanism: a branch of cultural theory critical of the foundational assumptions of humanism and its legacy that examines and questions the historical notions of "human" and "human nature", often challenging typical notions of human subjectivity and embodiment and strives to move beyond archaic concepts of "human nature" to develop ones which constantly adapt to contemporary technoscientific knowledge.
  3. Philosophical posthumanism: a philosophical direction which draws on cultural posthumanism, the philosophical strand examines the ethical implications of expanding the circle of moral concern and extending subjectivities beyond the human species.
  4. Posthuman condition: the deconstruction of the human condition by critical theorists.
  5. Posthuman transhumanism: a transhuman ideology and movement which seeks to develop and make available technologies that eliminate aging, enable immortality and greatly enhance human intellectual, physical, and psychological capacities, in order to achieve a "posthuman future".
  6. AI takeover: A variant of transhumanism in which humans will not be enhanced, but rather eventually replaced by artificial intelligences. Some philosophers, including Nick Land, promote the view that humans should embrace and accept their eventual demise. This is related to the view of "cosmism", which supports the building of strong artificial intelligence even if it may entail the end of humanity, as in their view it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".
  7. Voluntary Human Extinction, which seeks a "posthuman future" that in this case is a future without humans.

Philosophical posthumanism

Philosopher Ted Schatzki suggests there are two varieties of posthumanism of the philosophical kind:

One, which he calls 'objectivism', tries to counter the overemphasis of the subjective or intersubjective that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things.

A second prioritizes practices, especially social practices, over individuals (or individual subjects) which, they say, constitute the individual.

There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it as 'posthumanism', he made an extensive and penetrating immanent critique of Humanism, and then constructed a philosophy that presupposed neither Humanist, nor Scholastic, nor Greek thought but started with a different religious ground motive. Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. "Meaning is the being of all that has been created," Dooyeweerd wrote, "and the nature even of our selfhood."

Both human and nonhuman alike function subject to a common 'law-side', which is diverse, composed of a number of distinct law-spheres or aspects. The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.

Emergence of philosophical posthumanism

Ihab Hassan, theorist in the academic study of literature, once stated:

Humanism may be coming to an end as humanism transforms itself into something one must helplessly call posthumanism.

This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society. Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.

Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCulloch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Benjamin H. Bratton, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana, Timothy Morton, and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term "posthumanism".

Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance. According to this claim, humans have no inherent rights to destroy nature or set themselves above it in ethical considerations a priori. Human knowledge, previously seen as the defining aspect of the world, is also reduced to a less controlling position. Human rights exist on a spectrum with animal rights and posthuman rights. The limitations and fallibility of human intelligence are acknowledged, even though this does not imply abandoning the rational tradition of humanism.

Proponents of a posthuman discourse suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with philosophy of the Enlightenment period. In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond that of the contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish 'anthropological universals' that are imbued with anthropocentric assumptions.

Recently, critics have sought to describe the emergence of posthumanism as a critical moment in modernity, arguing for the origins of key posthuman ideas in modern fiction, in Nietzsche, or in a modernist response to the crisis of historicity.

The philosopher Michel Foucault placed posthumanism within a context that differentiated humanism from enlightenment thought. According to Foucault, the two existed in a state of tension: humanism sought to establish norms, while Enlightenment thought attempted to transcend all that is material, including the boundaries constructed by humanistic thought. Drawing on the Enlightenment’s challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological, technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.

Contemporary posthuman discourse

Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts. In her book How We Became Posthuman, N. Katherine Hayles writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines. Such coevolution, according to some strands of the posthuman discourse, allows one to extend their subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of the posthuman, often referred to as technological posthumanism, visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in a contemporary society is thought to complicate this relationship.

Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries. This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway’s concept of the cyborg. However, Haraway has distanced herself from posthumanistic discourse due to other theorists’ use of the term to promote utopian views of technological innovation to extend the human biological capacity (even though these notions would more correctly fall into the realm of transhumanism).

While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently human or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence, as do new concerns with regards to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.

Relationship with transhumanism

Sociologist James Hughes comments that there is considerable confusion between the two terms. In the introduction to their book on post- and transhumanism, Robert Ranisch and Stefan Sorgner address the source of this confusion, stating that posthumanism is often used as an umbrella term that includes both transhumanism and critical posthumanism.

Although both subjects relate to the future of humanity, they differ in their view of anthropocentrism. Pramod Nayar, author of Posthumanism, states that posthumanism has two main branches: ontological and critical. Ontological posthumanism is synonymous with transhumanism. The subject is regarded as “an intensification of humanism.” Transhumanist thought suggests that humans are not posthuman yet, but that human enhancement, often through technological advancement and application, is the passage to becoming posthuman. Transhumanism retains humanism’s focus on Homo sapiens as the center of the world but also considers technology to be an integral aid to human progression. Critical posthumanism, however, is opposed to these views. Critical posthumanism “rejects both human exceptionalism (the idea that humans are unique creatures) and human instrumentalism (that humans have a right to control the natural world).” These contrasting views on the importance of human beings are the main distinctions between the two subjects.

Transhumanism is also more ingrained in popular culture than critical posthumanism, especially in science fiction. The term is referred to by Pramod Nayar as "the pop posthumanism of cinema and pop culture."

Criticism

Some critics have argued that all forms of posthumanism, including transhumanism, have more in common than their respective proponents realize. Linking these different approaches, Paul James suggests that 'the key political problem is that, in effect, the position allows the human as a category of being to flow down the plughole of history':

This is ontologically critical. Unlike the naming of ‘postmodernism’ where the ‘post’ does not infer the end of what it previously meant to be human (just the passing of the dominance of the modern) the posthumanists are playing a serious game where the human, in all its ontological variability, disappears in the name of saving something unspecified about us as merely a motley co-location of individuals and communities.

However, some posthumanists in the humanities and the arts are critical of transhumanism (the brunt of Paul James's criticism), in part, because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, according to performance philosopher Shannon Bell:

Altruism, mutualism, humanism are the soft and slimy virtues that underpin liberal capitalism. Humanism has always been integrated into discourses of exploitation: colonialism, imperialism, neoimperialism, democracy, and of course, American democratization. One of the serious flaws in transhumanism is the importation of liberal-human values to the biotechno enhancement of the human. Posthumanism has a much stronger critical edge attempting to develop through enactment new understandings of the self and others, essence, consciousness, intelligence, reason, agency, intimacy, life, embodiment, identity and the body.

While many modern leaders of thought are accepting of the nature of the ideologies described by posthumanism, some are more skeptical of the term. Donna Haraway, the author of A Cyborg Manifesto, has outspokenly rejected the term, though she acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term companion species, referring to nonhuman entities with which humans coexist.

Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores "praxes of humanity and critiques produced by black people", from Frantz Fanon and Aimé Césaire to Hortense Spillers and Fred Moten. Interrogating the conceptual grounds in which such a mode of “beyond” is rendered legible and viable, Jackson argues that it is important to observe that "blackness conditions and constitutes the very nonhuman disruption and/or disruption" which posthumanists invite. In other words, given that race in general and blackness in particular constitutes the very terms through which human/nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a “beyond” actually “returns us to a Eurocentric transcendentalism long challenged”.

Posthumanist scholarship, due to characteristic rhetorical techniques, is also frequently subject to the same critiques commonly made of postmodernist scholarship in the 1980s and 1990s.

 

Inequality (mathematics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Inequality...