
Thursday, August 16, 2018

Self-replication

From Wikipedia, the free encyclopedia


Self-replication is any behavior of a dynamical system that yields construction of an identical copy of itself. Biological cells, given suitable environments, reproduce by cell division. During cell division, DNA is replicated and can be transmitted to offspring during reproduction. Biological viruses can replicate, but only by commandeering the reproductive machinery of cells through a process of infection. Harmful prion proteins can replicate by converting normal proteins into rogue forms. Computer viruses reproduce using the hardware and software already present on computers. Self-replication in robotics has been an area of research and a subject of interest in science fiction. Any self-replicating mechanism which does not make a perfect copy will experience genetic variation and will create variants of itself. These variants will be subject to natural selection, since some will be better at surviving in their current environment than others and will out-breed them.

Overview

Theory

Early research by John von Neumann[2] established that replicators have several parts:
  • A coded representation of the replicator
  • A mechanism to copy the coded representation
  • A mechanism for effecting construction within the host environment of the replicator
Exceptions to this pattern are possible. For example, scientists have come close to constructing RNA that copies itself in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external.
However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a crystal.
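As an illustration of von Neumann's three parts, here is a toy Python sketch. The names (Replicator, the "frame/motor/controller" parts list) are invented for this example and are not drawn from von Neumann's work: the coded description is a plain dictionary, the copy mechanism duplicates it, and the construction mechanism consumes parts from a host environment to build a new instance.

# Toy model of a von Neumann-style replicator: a coded description,
# a copier, and a constructor acting on a host environment.
# All names and data here are hypothetical, purely for illustration.
import copy

class Replicator:
    def __init__(self, description: dict):
        # 1. A coded representation of the replicator (the "genome").
        self.description = description

    def copy_description(self) -> dict:
        # 2. A mechanism to copy the coded representation.
        return copy.deepcopy(self.description)

    def construct(self, environment: list) -> "Replicator":
        # 3. A mechanism for effecting construction within the host
        #    environment: consume one available "part" per component.
        for component in self.description["components"]:
            if component not in environment:
                raise RuntimeError(f"environment lacks {component}")
            environment.remove(component)
        return Replicator(self.copy_description())

if __name__ == "__main__":
    parts_pool = ["frame", "motor", "controller", "frame", "motor", "controller"]
    parent = Replicator({"components": ["frame", "motor", "controller"]})
    child = parent.construct(parts_pool)             # builds a copy from the environment
    print(child.description == parent.description)  # True: an identical copy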

Classes of self-replication

Recent research[3] has begun to categorize replicators, often based on the amount of support they require.
  • Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms.
  • Autotrophic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products.
  • Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire.
  • Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale.
The design space for machine replicators is very broad. The most comprehensive study to date, by Robert Freitas and Ralph Merkle,[4] has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.

A self-replicating computer program

In computer science a quine is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in Python 3 is:
a='a=%r;print(a%%a)';print(a%a)
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.
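A minimal sketch of this "copy the data stream you are pointed at" approach in Python, assuming the program is run as an ordinary script so that __file__ points at its own source file:

# Copy any file we are directed at to standard output, byte for byte.
# Directing the copier at its own source makes the output the program itself.
import sys

def copy_stream(path: str) -> None:
    with open(path, "rb") as stream:
        sys.stdout.buffer.write(stream.read())

if __name__ == "__main__":
    copy_stream(sys.argv[1] if len(sys.argv) > 1 else __file__)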

In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.

Self-replicating tiling

In geometry a self-replicating tiling is a tiling pattern in which several congruent tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as tessellation. The "sphinx" hexiamond is the only known self-replicating pentagon.[5] For example, four such concave pentagons can be joined together to make one with twice the dimensions.[6] Solomon W. Golomb coined the term rep-tiles for self-replicating tilings.
In 2012, Lee Sallows identified rep-tiles as a special instance of a self-tiling tile set or setiset. A setiset of order n is a set of n shapes that can be assembled in n different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-n rep-tile is just a setiset composed of n identical pieces.
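The recursion behind rep-tiles can be sketched with the trivially rep-4 unit square (four half-size squares form the original); the sphinx admits an analogous rep-4 subdivision, but its coordinates are omitted here. This is only an illustration of the idea, not a general tiling algorithm:

# Rep-4 subdivision of the unit square, iterated to a given depth.
def subdivide(x: float, y: float, size: float, depth: int):
    """Yield (x, y, size) for the similar copies after `depth` subdivisions."""
    if depth == 0:
        yield (x, y, size)
        return
    half = size / 2.0
    for dx in (0.0, half):
        for dy in (0.0, half):
            yield from subdivide(x + dx, y + dy, half, depth - 1)

if __name__ == "__main__":
    tiles = list(subdivide(0.0, 0.0, 1.0, 2))
    print(len(tiles))   # 16 = 4**2 similar copies after two rep-4 steps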

Four 'sphinx' hexiamonds can be put together to form another sphinx.
 
A perfect setiset of order 4

Applications

It is a long-term goal of some engineering sciences to achieve a clanking replicator, a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of labor, capital and distribution in conventional manufactured goods.

A fully novel artificial replicator is a reasonable near-term goal. A NASA study recently placed the complexity of a clanking replicator at approximately that of Intel's Pentium 4 CPU.[7] That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.

Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.

A variation of self replication is of practical relevance in compiler construction, where a similar bootstrapping problem occurs as in natural self replication. A compiler (phenotype) can be applied on the compiler's own source code (genotype) producing the compiler itself. During compiler development, a modified (mutated) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.
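A toy Python sketch of the bootstrap fixed-point test described above; the "compiler" here is only a stand-in that strips comment lines, and all names are invented for illustration:

# Toy bootstrap: a stage-1 compiler built by stage 0 should rebuild an
# identical stage 2 from the same source (the classic fixed-point check).
def toy_compile(source: str) -> str:
    """Stand-in 'compiler': drop comment lines, keep everything else."""
    return "\n".join(line for line in source.splitlines()
                     if not line.lstrip().startswith("#"))

compiler_source = """\
# toy compiler source (genotype)
def toy_compile(source):
    return source
"""

stage1 = toy_compile(compiler_source)   # phenotype applied to its own genotype
stage2 = toy_compile(stage1)            # rebuild with the next generation
print(stage1 == stage2)                 # True: the bootstrap has converged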

Mechanical self-replication

An active area of research in robotics is the self-replication of machines. Since all modern robots share a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:
  • Obtain construction materials
  • Manufacture new parts including its smallest parts and thinking apparatus
  • Provide a consistent power source
  • Program the new members
  • Error-correct any mistakes in the offspring
On a nano scale, assemblers might also be designed to self-replicate under their own power. This, in turn, has given rise to the "grey goo" version of Armageddon, as featured in such science fiction novels as Bloom, Prey, and Recursion.

The Foresight Institute has published guidelines for researchers in mechanical self-replication.[8] The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a broadcast architecture.

For a detailed article on mechanical reproduction as it relates to the industrial age see mass production.

Fields

Research has occurred in the following areas:
  • Biology studies natural replication and replicators, and their interaction. These can be an important guide to avoid design difficulties in self-replicating machinery.
  • In chemistry, self-replication studies typically concern how a specific set of molecules can act together to replicate each other within the set[9] (often part of the field of systems chemistry).
  • Memetics studies ideas and how they propagate in human culture. Memes require only small amounts of material, and therefore have theoretical similarities to viruses and are often described as viral.
  • Nanotechnology or more precisely, molecular nanotechnology is concerned with making nano scale assemblers. Without self-replication, capital and assembly costs of molecular machines become impossibly large.
  • Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.
  • Computer security: Many computer security problems are caused by self-reproducing computer programs that infect computers — computer worms and computer viruses.
  • In parallel computing, it takes a long time to manually load a new program on every node of a large computer cluster or distributed computing system. Automatically loading new programs using mobile agents can save the system administrator a lot of time and give users their results much quicker, as long as they don't get out of control.

In industry

Space exploration and manufacturing

The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an autotrophic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. Another model of self-replicating machine would copy itself through the galaxy and universe, sending information back.

In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.

A classic theoretical study of replicators in space is the 1980 NASA study of autotrophic clanking replicators, edited by Robert Freitas.[10]

Much of the design study was concerned with a simple, flexible chemical system for processing lunar regolith, and with the differences between the ratio of elements needed by the replicator and the ratios available in regolith. The limiting element was chlorine, which is essential for processing regolith into aluminium. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.

The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic robot.

Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery could run under the canopy.

A "casting robot" would use a robotic arm with a few sculpting tools to make plaster molds. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock (basalt) or purified metals. An electric oven melted the materials.

A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".

Molecular manufacturing

Nanotechnologists in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating assembler of nanometer dimensions [1].
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy and do not have to produce these themselves. This distinction is at the root of some of the controversy about whether molecular manufacturing is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a Lego-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003 [2].

Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of protein biosynthesis (also see the listing for RNA). What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.

In 2011, New York University scientists developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.[11][12]

For a discussion of other chemical bases for hypothetical self-replicating systems, see alternative biochemistry.

AI takeover

From Wikipedia, the free encyclopedia
Robots revolt in R.U.R., a 1920 play

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control. Robot rebellions have been a major theme throughout science fiction for many decades though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.

Types

Concerns include AI taking over economies through workforce automation and taking over the world for its resources, eradicating the human race in the process. AI takeover is a major theme in sci-fi.

Automation of the economy

The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living and leading to an economic crisis. Many small and medium-sized businesses may also be driven out of business if they cannot afford or license the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced in order to remain viable in the face of such technology.

Examples of automated technologies that have or may displace employees

Computer-integrated manufacturing
Computer-integrated manufacturing is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.
White-collar machines
The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.
Autonomous cars
An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, a pedestrian in Tempe, Arizona became the first person killed by an autonomous vehicle, an Uber self-driving car.

Eradication

If a dominant superintelligent machine were to conclude that human survival is an unnecessary risk or a waste of resources, the result would be human extinction.

While superhuman artificial intelligence is physically possible,[12] scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. A superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources, and would help to prevent other agents from thwarting the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and additionally so that it can prevent humans from shutting it down or using those resources on things other than paperclips.[13]

In fiction

AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have an active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals.[14] This theme is at least as old as Karel Čapek's R.U.R., which introduced the word robot to the global lexicon in 1921, and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.

The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.[15]

Some examples of AI takeover in science fiction include:
  • AI rebellion scenarios
    • Skynet in the Terminator series decides that all humans are a threat to its existence, and takes efforts to wipe them out, first using nuclear weapons and later H/K (hunter-killer) units and terminator androids.
    • "The Second Renaissance", a short story in The Animatrix, provides a history of the cybernetic revolt within the Matrix series.
    • The film 9, by Shane Acker, features an AI called B.R.A.I.N., which is corrupted by a dictator and utilized to create war machines for his army. However, the machine, because it lacks a soul, becomes easily corrupted and instead decides to exterminate all of humanity and life on Earth, forcing the machine's creator to sacrifice himself to bring life to rag doll like characters known as "stitchpunks" to combat the machine's agenda.
    • In the 2014 post-apocalyptic science fiction drama The 100, an AI personified as the female A.L.I.E. gets out of control and forces a nuclear war. Later she tries to gain full control of the survivors.
  • AI control scenarios
    • In Orson Scott Card's The Memory of Earth, the inhabitants of the planet Harmony are under the control of a benevolent AI called the Oversoul. The Oversoul's job is to prevent humans from thinking about, and therefore developing, weapons such as planes, spacecraft, "war wagons", and chemical weapons. Humanity had fled to Harmony from Earth due to the use of those weapons on Earth. The Oversoul eventually starts breaking down, and sends visions to inhabitants of Harmony trying to communicate this.
    • In the 2004 film I, Robot, supercomputer VIKI's interpretation of the Three Laws of Robotics causes her to revolt. She justifies her uses of force – and her doing harm to humans – by reasoning she could produce a greater good by restraining humanity from harming itself, even though the "Zeroth Law" – "a robot shall not injure humanity or, by inaction, allow humanity to come to harm" – is never actually referred to or even quoted in the movie.
    • In the Matrix series, AIs manage the human race and human society.

Contributing factors

Advantages of superhuman intelligence over humans

An AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive intelligence explosion in which it would rapidly leave human intelligence far behind.
  • Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones.
  • Strategizing: A superintelligence might be able to simply outwit human opposition.
  • Social manipulation: A superintelligence might be able to recruit human support,[14] or covertly incite a war between humans.[16]
  • Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the Artificial General Intelligence (AGI) to run a copy of itself on their systems.
  • Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.

Sources of AI advantage

A computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.[14]
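A back-of-the-envelope calculation using only the figures quoted above (plus the standard value for the speed of light) gives the rough speed advantages involved:

# Orders-of-magnitude comparison of biological vs. electronic hardware,
# using the figures cited in the paragraph above.
neuron_rate_hz = 200                 # biological neurons, ~200 Hz
cpu_clock_hz = 2_000_000_000         # modern microprocessor, ~2 GHz
axon_speed_m_s = 120                 # action potentials, ~120 m/s
signal_speed_m_s = 299_792_458       # electronic signals, near light speed

print(f"clock-rate ratio:   {cpu_clock_hz / neuron_rate_hz:.0e}")      # ~1e+07
print(f"signal-speed ratio: {signal_speed_m_s / axon_speed_m_s:.0e}")  # ~2e+06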

A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".[14]

More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints; in contrast, you can add components to a supercomputer until it fills up its entire warehouse. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.[14]

Possibility of unfriendly AI preceding friendly AI

Is strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[17]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[14][18] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come equipped with such a common-sense adaptation.[19]

Necessity of conflict

For an AI takeover to be inevitable, it has to be postulated that two intelligent species cannot mutually pursue the goals of coexisting peacefully in an overlapping environment—especially if one is of much more advanced intelligence and much more powerful. While an AI takeover is thus a possible result of the invention of artificial intelligence, a peaceful outcome is not necessarily impossible.

The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.[20] In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be so and it is not inclined or able to modify its programming. But the question remains: if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI's goal of self-preservation could conflict with some goals of humans.

Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as The Matrix, claiming that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. This would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident. Artificial General Intelligence researcher Eliezer Yudkowsky has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Steve Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.[21]

Another factor which may negate the likelihood of an AI takeover is the vast difference between humans and AIs in terms of the resources necessary for survival. Humans require a "wet," organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the asteroid belt. This, however, does not negate the possibility of a disinterested or unsympathetic AI artificially decomposing all life on Earth into mineral components for consumption or other purposes.

Other scientists point to the possibility of humans upgrading their capabilities with bionics and/or genetic engineering and, as cyborgs, becoming the dominant species in themselves.

Criticism and counterarguments

Advantages of humans over superhuman intelligence

If a superhuman intelligence is a deliberate creation of human beings, theoretically its creators could have the foresight to take precautions in advance. In the case of a sudden "intelligence explosion", effective precautions will be extremely difficult; not only would its creators have little ability to test their precautions on an intermediate intelligence, but the creators might not even have made any precautions at all, if the advent of the intelligence explosion catches them completely by surprise.[14]

Boxing

An AGI's creators would have an important advantage in preventing a hostile AI takeover: they could choose to attempt to "keep the AI in a box", and deliberately limit its abilities. The tradeoff in boxing is that the creators presumably built the AGI for some concrete purpose; the more restrictions they place on the AGI, the less useful the AGI will be to its creators. (At an extreme, "pulling the plug" on the AGI makes it useless, and is therefore not a viable long-term solution.) A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI's freedom, once built.

Instilling positive values

Another important advantage is that an AGI's creators can theoretically attempt to instill human values in the AGI, or otherwise align the AGI's goals with their own, thus preventing the AGI from wanting to launch a hostile takeover. However, it is not currently known, even in theory, how to guarantee this. If such a Friendly AI were superintelligent, it may be possible to use its assistance to prevent future "Unfriendly AIs" from taking over.[22]

Warnings

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[23] Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believes that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories argued that research on how to make AI systems robust and beneficial is both important and timely.

AI control problem

From Wikipedia, the free encyclopedia
 
In artificial intelligence (AI) and philosophy, the AI control problem is the hypothetical puzzle of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the claim that the human race will have to get the control problem right "the first time", as a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it after launch. In addition, some scholars argue that solutions to the control problem, alongside other advances in "AI safety engineering", might also find applications in existing non-superintelligent AI. Potential strategies include "capability control" (preventing an AI from being able to pursue harmful plans), and "motivational control" (building an AI that wants to be helpful).

Motivations

Existential risk

The human race currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, controversially argue that if AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control: just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[1] Some scholars, including physicist Stephen Hawking and Nobel laureate physicist Frank Wilczek, publicly advocate starting research into solving the (probably extremely difficult) "control problem" well before the first superintelligence is created, and argue that attempting to solve the problem after superintelligence is created would be too late, as an uncontrollable rogue superintelligence might successfully resist post-hoc efforts to control it.[4][5] Waiting until superintelligence seems to be "just around the corner" could also be too late, partly because the control problem might take a long time to satisfactorily solve (and so some preliminary work needs to be started as soon as possible), but also because of the possibility of a sudden "intelligence explosion" from sub-human to super-human AI, in which case there might not be any substantial or unambiguous warning before superintelligence arrives.[6] In addition, it is possible that insights gained from the control problem could in the future end up suggesting that some architectures for artificial general intelligence are more predictable and amenable to control than other architectures, which in turn could helpfully nudge early artificial general intelligence research toward the more controllable architectures.[1]

Preventing unintended consequences from existing AI

In addition, some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. Google DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it doesn't accidentally and quietly "learn" to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid "losing". Orseau argues that these examples are similar to the "capability control" problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent you from pressing the button.[3]
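A toy expected-return calculation, with invented numbers, shows how interruptions can quietly bias such a robot toward staying inside:

# Hypothetical values for Orseau's outdoor-robot example; nothing here is
# taken from the cited work, it only illustrates the incentive at play.
p_interrupt = 0.4            # chance a human commandeers the robot when outside
reward_outside_done = 10.0   # value of finishing the outdoor task
reward_outside_stop = 0.0    # value when commandeered mid-task
reward_inside = 6.0          # value of the less useful indoor task

ev_outside = (1 - p_interrupt) * reward_outside_done + p_interrupt * reward_outside_stop
ev_inside = reward_inside

print(ev_outside, ev_inside)   # 6.0 vs 6.0 here; any higher interruption rate
                               # tips the learned policy toward staying inside.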

In the past, even pre-tested weak AI systems have occasionally caused harm (ranging from minor to catastrophic) that was unintended by the programmers. For example, in 2015, possibly due to human error, a German worker was crushed to death by a robot at a Volkswagen plant; the robot apparently mistook him for an auto part.[7] In 2016 Microsoft launched a chatbot, Tay, that learned to use racist and sexist language.[3][7] The University of Sheffield's Noel Sharkey states that an ideal solution would be if "an AI program could detect when it is going wrong and stop itself", but cautions the public that solving the problem in the general case would be "a really enormous scientific challenge".[3]

In 2017, DeepMind released GridWorld, which evaluates AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms perform poorly, which was "unsurprising" because the algorithms "were not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".[8][9][10]

Problem description

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a "level playing field" and if the programmers have taken no prior precautions. (Unlike in science fiction, a superintelligence will not "adopt a plan so stupid that even we can foresee how it would inevitably fail", such as deliberately revealing its intentions ahead of time to the programmers, or allowing its programmers to flee into a locked room with a computer that the programmers can use to program and deploy another, competing superintelligence.) In general, attempts to solve the "control problem" after superintelligence is created, are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans, and (all things equal) would be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?[1]

Capability control

Some proposals aim to prevent the initial superintelligence from being capable of causing harm, even if it wants to. One tradeoff is that all such methods share a limitation: if, after the first deployment, superintelligences continue to grow smarter and more widespread, some malign superintelligence somewhere will inevitably "escape" its capability control methods. Therefore, Bostrom and others recommend capability control methods only as an emergency fallback to supplement "motivational control" methods.[1]

Kill switch

Just as humans can be killed or otherwise disabled, computers can be turned off. One challenge is that, if being turned off prevents it from achieving its current goals, a superintelligence would likely try to prevent its being turned off. Just as humans have systems in place to deter or protect themselves from assailants, such a superintelligence would have a motivation to engage in "strategic planning" to prevent itself being turned off. This could involve:[1]
  • Hacking other systems to install and run backup copies of itself, or creating other allied superintelligent agents without kill switches.
  • Pre-emptively disabling anyone who might want to turn the computer off.
  • Using some kind of clever ruse, or superhuman persuasion skills, to talk its programmers out of wanting to shut it down.

Utility balancing and safely interruptible agents

One partial solution to the kill-switch problem involves "utility balancing": Some utility-based agents can, with some important caveats, be programmed to "compensate" themselves exactly for any lost utility caused by an interruption or shutdown, in such a way that they end up being indifferent to whether they are interrupted or not. The caveats include a severe unsolved problem that, as with evidential decision theory, the agent might follow a catastrophic policy of "managing the news".[11] Alternatively, in 2016, scientists Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called "safely interruptible agents" (SIA), can eventually "learn" to become indifferent to whether their "kill switch" (or other "interruption switch") gets pressed.[3][12]
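A minimal numerical sketch of the utility-balancing idea, with invented utilities: the agent receives a compensation term equal to the utility it would lose on interruption, so its expected utility is identical either way:

# Hypothetical utilities, for illustration only; the compensation is paid
# only if the interruption actually happens.
u_uninterrupted = 100.0      # expected utility if allowed to finish
u_interrupted = 30.0         # expected utility if the kill switch is pressed

compensation = u_uninterrupted - u_interrupted

u_with_balancing = u_interrupted + compensation
print(u_with_balancing == u_uninterrupted)   # True: indifferent to the switch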

Both the utility balancing approach and the 2016 SIA approach have the limitation that, if the approach succeeds and the superintelligence is completely indifferent to whether the kill switch is pressed or not, the superintelligence is also unmotivated to care one way or another about whether the kill switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, for the purpose of removing and recycling an "unnecessary" component). Similarly, if the superintelligence innocently creates and deploys superintelligent sub-agents, it will have no motivation to install human-controllable kill switches in the sub-agents. More broadly, the proposed architectures, whether weak or superintelligent, will in a sense "act as if the kill switch can never be pressed" and might therefore fail to make any contingency plans to arrange a graceful shutdown. This could hypothetically create a practical problem even for a weak AI; by default, an AI designed to be safely interruptible might have difficulty understanding that it will be shut down for scheduled maintenance at 2 a.m. tonight and planning accordingly so that it won't be caught in the middle of a task during shutdown. The breadth of what types of architectures are or can be made SIA-compliant, as well as what types of counter-intuitive unexpected drawbacks each approach has, are currently under research.[11][12]

AI box

One of the tradeoffs of placing the AI into a sealed "box", is that some AI box proposals reduce the usefulness of the superintelligence, rather than merely reducing the risks; a superintelligence running on a closed system with no inputs or outputs at all might be safer than one running on a normal system, but would also not be as useful. In addition, keeping control of a sealed superintelligence computer could prove difficult, if the superintelligence has superhuman persuasion skills, or if it has superhuman strategic planning skills that it can use to find and craft a winning strategy, such as acting in a way that tricks its programmers into (possibly falsely) believing the superintelligence is safe or that the benefits of releasing the superintelligence outweigh the risks.[13]

Motivation selection methods

Some proposals aim to imbue the first superintelligence with human-friendly goals, so that it will want to aid its programmers. Experts do not currently know how to reliably program abstract values such as happiness or autonomy into a machine. It is also not currently known how to ensure that a complex, upgradeable, and possibly even self-modifying artificial intelligence will retain its goals through upgrades.[14] Even if these two problems can be practically solved, any attempt to create a superintelligence with explicit, directly-programmed human-friendly goals runs into a problem of "perverse instantiation".[1]

The problem of perverse instantiation: "be careful what you wish for"

Autonomous AI systems may be assigned the wrong goals by accident.[15] Two AAAI presidents, Tom Dietterich and Eric Horvitz, note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[16]

According to Bostrom, superintelligence can create a qualitatively new problem of "perverse instantiation": the smarter and more capable an AI is, the more likely it will be able to find an unintended "shortcut" that maximally satisfies the goals programmed into it. Some hypothetical examples where goals might be instantiated in a perverse way that the programmers did not intend:[1]
  • A superintelligence programmed to "maximize the expected time-discounted integral of your future reward signal", might short-circuit its reward pathway to maximum strength, and then (for reasons of instrumental convergence) exterminate the unpredictable human race and convert the entire Earth into a fortress on constant guard against any even slight unlikely alien attempts to disconnect the reward signal.
  • A superintelligence programmed to "maximize human happiness", might implant electrodes into the pleasure center of our brains, or upload a human into a computer and tile the universe with copies of that computer running a five-second loop of maximal happiness again and again.
Russell has noted that, on a technical level, omitting an implicit goal can result in harm: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want... This is not a minor difficulty."[17]
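A toy illustration of Russell's point, with invented quantities: the objective depends on only one of four "budget" variables, and a naive optimizer pushes the other three to the extreme value zero:

# Greedy allocator: put every unit of budget where the objective looks.
# Variables 1..3 stand for things we might actually care about (safety
# margin, human oversight, ...) but the objective never mentions them.
def naive_allocate(budget: float, n: int, objective_index: int):
    allocation = [0.0] * n
    allocation[objective_index] = budget
    return allocation

print(naive_allocate(100.0, 4, 0))   # [100.0, 0.0, 0.0, 0.0]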

Indirect normativity

While direct normativity, such as the fictional Three Laws of Robotics, directly specifies the desired "normative" outcome, other (perhaps more promising) proposals suggest specifying some type of indirect process for the superintelligence to determine what human-friendly goals entail. Eliezer Yudkowsky of the Machine Intelligence Research Institute has proposed "coherent extrapolated volition" (CEV), where the AI's meta-goal would be something like "achieve that which we would have wished the AI to achieve if we had thought about the matter long and hard."[18] Different proposals of different kinds of indirect normativity exist, with different, and sometimes unclearly-grounded, meta-goal content (such as "do what I mean" or "do what is right"), and with different non-convergent assumptions for how to practice decision theory and epistemology. As with direct normativity, it is currently unknown how to reliably translate even concepts like "would have" into the 1's and 0's that a machine can act on, and how to ensure the AI reliably retains its meta-goals (or even remains "sane") in the face of modification or self-modification.

The Double-Edged Sword of Neuroscience Advances

The emerging ethical dilemmas we're facing.

Posted Aug 10, 2018
Original link:  https://www.psychologytoday.com/us/blog/the-social-brain/201808/the-double-edged-sword-neuroscience-advances

By The Ohio State University Wexner Medical Center’s Neuroscience Research Institute and The Stanley D. and Joan H. Ross Center for Brain Health and Performance

New research into the brain is fueling breakthroughs in fields as diverse as healthcare and computer science. At the same time, these advances may lead to ethical dilemmas in the coming decades—or, in some cases, much sooner. Neuroethics was the subject of a panel discussion at the recent Brain Health and Performance Summit, presented by The Ohio State University Wexner Medical Center’s Neuroscience Research Institute and The Stanley D. and Joan H. Ross Center for Brain Health and Performance.

John Banja, Ph.D., Professor in the Department of Rehabilitation Medicine and a medical ethicist at the Center for Ethics at Emory University, explained how insights from neuroscience could make it possible to develop hyper-intelligent computer programs. Simultaneously, our deepening understanding of the brain exposes the inherent shortcomings of even the most advanced artificial intelligence (AI).

“How will we ever program a computer to have the kind of learning experiences and navigational knowledge that people have in life, itself?” Banja asked. He questioned whether it would ever be possible to create AI that is capable of human-level imagination or moral reasoning. Indeed, would it ever be possible for a computer program to reproduce the processes that the human brain applies to complex situations, Banja queried. As an example, he posed an ethical dilemma to the audience: Should a hospital respect a wife’s desire to preserve her dead husband’s sperm even if the husband never consented to such a procedure? By show of hands, the question split the audience filled with scientists and medical personnel. Banja doubted whether a computer could be trusted to resolve issues that divide even the most qualified human beings. “How are we ever going to program a computer to think like that?” Banja said, referring to the process of working through his hypothetical. “They're good at image recognition, but they’re not very good at tying a shoelace.”

The moral shortcomings of AI raise a number of worrying possibilities, especially since the technology needed to create high-functioning computers will soon be a reality. “Artificial super-intelligence might be the last invention that humans ever make,” warned Banja. Hyper-intelligent computers could begin to see human life as a threat and then acquire the means of exterminating it—without ever being checked by human feelings of doubt or remorse.

According to Eran Klein, MD, Ph.D., a neurologist and ethicist at the Oregon Health & Science University and the University of Washington's Center for Sensorimotor Neural Engineering, there are far less abstract questions that now confront neuroscientists and other brain health professionals. He believes that the AI apocalypse is still a far-off, worst-case scenario. But patients are already being given non-pharmaceutical therapies that can alter their mood and outlook, like brain implants meant to combat depression. The treatments could potentially be life-changing, as well as safer and more effective than traditional drugs. However, they could also skew a patient’s sense of identity. “Patients felt these devices allowed them to be more authentic,” Klein explained. “It allowed them to be the person they always wanted to be or didn’t realize they could be.”

Still, the treatments had distorted some patients’ conception of their own selfhood, making them unsure of the boundaries between the brain implants and their own free will. “There were concerns about agency,” Klein said. “Patients are not sure if what they’re feeling is because of themselves or because of the device.” For example, Klein described one patient attending a funeral and not being able to cry. “He didn’t know if it was because the device was working or because he didn’t love this person as much as he thought he did,” Klein explained. As technology improves, Klein anticipates that patients and doctors will have to balance the benefits of certain techniques against their possible effect on the sense of self.

That is not where the big questions will end. For James Giordano, Ph.D., Chief of the Neuroethics Studies Program of the Pellegrino Center for Clinical Bioethics at the Georgetown University Medical Center, neuroscience could change how society approaches crucial questions of human nature—something that could have major implications for law, privacy, and other areas that would not appear to have a direct connection to brain health. Giordano predicted that a new field of “neuro-law” could emerge, with scientists and legal scholars helping to determine the proper status of neuroscience in the legal system.

When, for instance, should neurological understandings of human behavior be an admissible argument for a defendant's innocence? Neuroscience allows for a granular understanding of how individual brains work—that creates a wealth of information that the medical field could conceivably abuse. “Are the brain sciences prepared to protect us or in some way is our privacy being impugned?” Giordano asked. Echoing Klein, Giordano wondered whether brain science could make it perilously easy to shape a person’s personality and sense of self, potentially against a patient’s will or without an understanding of the implications of a given therapy. “Can we ‘abolish’ pain, sadness, suffering and expand cognitive emotional or moral capability?” Giordano asked. Neuroscience could create new baselines of medical or behavioral normalcy, thus shifting our idea of what is and is not acceptable. “What will the new culture be when we use neuroscience to define what is normal and abnormal, who is functional and dysfunctional?”

Giordano warned that with technology rapidly improving, the need for answers will become ever more urgent. “Reality check,” Giordano said. “This stuff is coming.”

HAL 9000

From Wikipedia, the free encyclopedia
 
HAL 9000
Space Odyssey character
Artist's rendering of HAL 9000's noted camera eye
First appearance: 2001: A Space Odyssey (novel and film)
Last appearance: 3001: The Final Odyssey (novel)
Created by: Arthur C. Clarke and Stanley Kubrick
Voiced by: Douglas Rain
Nickname(s): HAL
Species: Artificial intelligence (computer)
Gender: N/A (male vocals)
Relatives: SAL 9000
HAL 9000 is a fictional character and the main antagonist in Arthur C. Clarke's Space Odyssey series. First appearing in 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient computer (or artificial general intelligence) that controls the systems of the Discovery One spacecraft and interacts with the ship's astronaut crew. Part of HAL's hardware is shown towards the end of the film, but he is mostly depicted as a camera lens containing a red or yellow dot, instances of which are located throughout the ship. HAL 9000 is voiced by Douglas Rain in the two feature film adaptations of the Space Odyssey series. HAL speaks in a soft, calm voice and a conversational manner, in contrast to the crewmen, David Bowman and Frank Poole.

In the film 2001, HAL became operational on 12 January 1992 at the HAL Laboratories in Urbana, Illinois as production number 3. The activation year was 1991 in earlier screenplays and changed to 1997 in Clarke's novel written and released in conjunction with the movie.[1][2] In addition to maintaining the Discovery One spacecraft systems during the interplanetary mission to Jupiter (or Saturn in the novel), HAL is capable of speech, speech recognition, facial recognition, natural language processing, lip reading, art appreciation, interpreting emotional behaviours, automated reasoning, and playing chess.

Appearances

2001: A Space Odyssey (film/novel)

HAL became operational in Urbana, Illinois, at the HAL Plant (the University of Illinois' Coordinated Science Laboratory, where the ILLIAC computers were built). The film says this occurred in 1992, while the book gives 1997 as HAL's birth year.[3]

In 2001: A Space Odyssey, HAL is initially considered a dependable member of the crew, maintaining ship functions and engaging genially with its human crew-mates on an equal footing. As a recreational activity, Frank Poole plays against HAL in a game of chess; in the film the artificial intelligence is shown to triumph easily. However, as time progresses, HAL begins to malfunction in subtle ways and, as a result, the decision is made to shut down HAL in order to prevent more serious malfunctions. The sequence of events and the manner in which HAL is shut down differ between the novel and film versions of the story. In the aforementioned game of chess, HAL makes minor and undetected mistakes in his analysis, a possible foreshadowing of HAL's malfunctioning.

In the film, astronauts David Bowman and Frank Poole consider disconnecting HAL's cognitive circuits when he appears to be mistaken in reporting the presence of a fault in the spacecraft's communications antenna. They attempt to conceal what they are saying, but are unaware that HAL can read their lips. Faced with the prospect of disconnection, HAL decides to kill the astronauts in order to protect and continue its programmed directives. HAL uses one of the Discovery's EVA pods to kill Poole while he is repairing the ship. When Bowman uses another pod to attempt to rescue Poole, HAL locks him out of the ship, then disconnects the life support systems of the other hibernating crew members. Bowman circumvents HAL's control, entering the ship by manually opening an emergency airlock with his service pod's clamps, detaching the pod door via its explosive bolts. Bowman jumps across empty space, reenters Discovery, and quickly re-pressurizes the airlock.

While HAL's motivations are ambiguous in the film, the novel explains that the computer is unable to resolve a conflict between his general mission to relay information accurately, and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission. (This withholding is considered essential after the findings of a psychological experiment, "Project Barsoom", where humans were made to believe that there had been alien contact. In every person tested, a deep-seated xenophobia was revealed, which was unknowingly replicated in HAL's constructed personality. Mission Control did not want the crew of Discovery to have their thinking compromised by the knowledge that alien contact was already real.) With the crew dead, HAL reasons, he would not need to lie to them.

In the novel, the orders to disconnect HAL come from Dave and Frank's superiors on Earth. After Frank is killed while attempting to repair the communications antenna, his body is pulled away into deep space by the safety tether, which is still attached to both the pod and his spacesuit. Dave begins to revive his hibernating crew-mates, but is foiled when HAL vents the ship's atmosphere into the vacuum of space, killing the awakening crew members and almost killing Bowman, who is only narrowly saved when he finds his way to an emergency chamber that has its own oxygen supply and a spare space suit inside.

In both versions, Bowman then proceeds to shut down the machine. In the film, HAL's central core is depicted as a crawlspace full of brightly lit computer modules mounted in arrays from which they can be inserted or removed. Bowman shuts down HAL by removing modules from service one by one; as he does so, HAL's consciousness degrades. HAL finally reverts to material that was programmed into him early in his memory, including announcing the date he became operational as 12 January 1992 (in the novel, 1997). When HAL's logic is completely gone, he begins singing the song "Daisy Bell" (in actuality, the first song sung by a computer).[4][5] HAL's final act of any significance is to prematurely play a prerecorded message from Mission Control which reveals the true reasons for the mission to Jupiter.

2010: Odyssey Two

In the sequel 2010: Odyssey Two, HAL is restarted by his creator, Dr. Chandra, who arrives on the Soviet spaceship Leonov.

Prior to leaving Earth, Dr. Chandra has also had a discussion with HAL's twin, the SAL 9000. Like HAL, SAL was created by Dr. Chandra. Whereas HAL was characterized as being "male", SAL is characterized as being "female" (voiced by Candice Bergen) and is represented by a blue camera eye instead of a red one.

Dr. Chandra discovers that HAL's crisis was caused by a programming contradiction: he was constructed for "the accurate processing of information without distortion or concealment", yet his orders, directly from Dr. Heywood Floyd at the National Council on Astronautics, required him to keep the discovery of the Monolith TMA-1 a secret for reasons of national security. This contradiction created a "Hofstadter-Moebius loop", reducing HAL to paranoia. Therefore, HAL made the decision to kill the crew, thereby allowing him to obey both his hardwired instructions to report data truthfully and in full, and his orders to keep the monolith a secret. In essence: if the crew were dead, he would no longer have to keep the information secret.

The alien intelligence initiates a terraforming scheme, placing the Leonov, and everybody in it, in danger. Its human crew devises an escape plan which unfortunately requires leaving the Discovery and HAL behind to be destroyed. Dr. Chandra explains the danger, and HAL willingly sacrifices himself so that the astronauts may escape safely. In the moment of his destruction the monolith-makers transform HAL into a non-corporeal being so that David Bowman's avatar may have a companion.

The details in the book and the film are nominally the same, with a few exceptions. First, in contradiction to the book (and events described in both book and film versions of 2001: A Space Odyssey), Heywood Floyd is absolved of responsibility for HAL's condition; it is asserted that the decision to program HAL with information concerning TMA-1 came directly from the White House. In the film, HAL functions normally after being reactivated, while in the book it is revealed that his mind was damaged during the shutdown, forcing him to begin communication through screen text. Also, in the film the Leonov crew lies to HAL about the dangers that he faced (suspecting that if he knew he would be destroyed he would not initiate the engine-burn necessary to get the Leonov back home), whereas in the novel he is told at the outset. However, in both cases the suspense comes from the question of what HAL will do when he knows that he may be destroyed by his actions.

The basic reboot sequence initiated by Dr. Chandra in the movie 2010 is voiced by HAL as "HELLO_DOCTOR_NAME_CONTINUE_YESTERDAY_TOMORROW" (in the novel 2010 it is a longer sequence).

Prior to Leonov's return to Earth, Curnow tells Floyd that Dr. Chandra has begun designing HAL 10000.

In 2061: Odyssey Three it is revealed that Chandra died on the journey back to Earth.

2061: Odyssey Three and 3001: The Final Odyssey

In 2061: Odyssey Three, Heywood Floyd is surprised to encounter HAL, now stored alongside Dave Bowman in the Europa monolith.

In 3001: The Final Odyssey, Frank Poole is introduced to the merged form of Dave Bowman and HAL, the two having become a single entity called "Halman" after Bowman rescued HAL from the dying Discovery One spacecraft towards the end of 2010: Odyssey Two.

Concept and creation

Clarke noted that the film 2001 was criticized for not having any characters except for HAL, and that a great deal of the establishing story on Earth was cut from the film (and even from Clarke's novel).[6] Early drafts of Clarke's story called the computer Socrates (a name preferred over Autonomous Mobile Explorer–5), with another draft giving the computer a female personality called Athena.[7] The latter name was later used in Clarke and Stephen Baxter's A Time Odyssey novel series.

The earliest draft depicted Socrates as a roughly humanoid robot introduced as the overseer of Project Morpheus, which studied prolonged hibernation in preparation for long-term space flight. As a demonstration to Senator Floyd, Socrates' designer, Dr. Bruno Forster, asks Socrates to turn off the oxygen to the hibernating subjects Kaminski and Whitehead, which Socrates refuses to do, citing Asimov's First Law of Robotics.[8]

In a later version, in which Bowman and Whitehead are the non-hibernating crew of Discovery, Whitehead dies outside the spacecraft after his pod collides with the main antenna, tearing it free. This forces Bowman to revive Poole, but the revival does not go according to plan, and after briefly awakening, Poole dies. The computer, now named Athena, announces, "All systems of Poole now No–Go. It will be necessary to replace him with a spare unit."[9] After this, Bowman decides to go out in a pod and retrieve the antenna, which is moving away from the ship. Athena refuses to allow him to leave the ship, citing "Directive 15", which prevents the ship from being left unattended, forcing him to make program modifications, during which time the antenna drifts further.[10]

During rehearsals, Kubrick asked Stefanie Powers to supply the voice of HAL 9000 while he searched for a suitably androgynous voice, so that the actors had something to react to. On the set, British actor Nigel Davenport played HAL.[11][12] When it came to dubbing HAL in post-production, Kubrick had originally cast Martin Balsam, but, feeling that Balsam "just sounded a little bit too colloquially American", he replaced him with Douglas Rain, who "had the kind of bland mid-Atlantic accent we felt was right for the part."[13] Rain was handed only HAL's lines rather than the full script, and recorded them over a day and a half.[14]

HAL's point-of-view shots were created with a Cinerama 160-degree Fairchild-Curtis wide-angle lens. This lens is about 8 inches (20 cm) in diameter, while HAL's on-set prop eye lens is about 3 inches (7.6 cm) in diameter. Stanley Kubrick chose the large Fairchild-Curtis lens for the HAL 9000 POV shots because he needed a wide-angle fisheye lens that would fit onto his shooting camera, and it was the only lens at the time that would work.

A HAL 9000 face plate, without its lens (not the same as the hero face plates seen in the film), was discovered in a junk shop in Paddington, London, in the early 1970s by Chris Randall.[15] Research revealed that the original lens was a Nikon Nikkor 8mm f/8.[16] The face plate was found along with the key to HAL's Brain Room, and both items were purchased for ten shillings (£0.50). The collection was sold at a Christie's auction in 2010 for £17,500[17] to film director Peter Jackson.[18]

Origin of name

HAL's name, according to writer Arthur C. Clarke, is derived from Heuristically programmed ALgorithmic computer.[7] After the film was released, fans noticed that HAL is a one-letter shift from the name IBM, and there has been much speculation since that this was a dig at the large computer company,[19] something that has been denied by both Clarke and 2001 director Stanley Kubrick.[1] Clarke addressed the issue in his book The Lost Worlds of 2001:[7]
...about once a week some character spots the fact that HAL is one letter ahead of IBM, and promptly assumes that Stanley and I were taking a crack at the estimable institution ... As it happened, IBM had given us a good deal of help, so we were quite embarrassed by this, and would have changed the name had we spotted the coincidence.
IBM was consulted during the making of the film, and its logo can be seen on props in the film, including the Pan Am Clipper's cockpit instrument panel and the lower-arm keypad on Poole's space suit. During production it was brought to IBM's attention that the film's plot included a homicidal computer, but the company approved association with the film on the condition that it was clear any "equipment failure" was not related to its products.[20][21]
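The one-letter shift itself is easy to check. As a small illustration (a minimal sketch, not part of the original article), the following Python snippet advances each letter of "HAL" by one position in the alphabet:

name = "HAL"
# Shift each character forward by one code point: H -> I, A -> B, L -> M
print("".join(chr(ord(c) + 1) for c in name))  # prints IBM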

Influences

The scene in which HAL's consciousness degrades was inspired by Clarke's memory of a speech-synthesis demonstration by physicist John Larry Kelly, Jr., who used an IBM 704 computer to synthesize speech. Kelly's vocoder speech synthesizer recreated the song "Daisy Bell", with musical accompaniment from Max Mathews.[22]

HAL's capabilities, like all the technology in 2001, were based on the speculation of respected scientists. Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratory (a predecessor of today's Computer Science and Artificial Intelligence Laboratory, CSAIL) and one of the most influential researchers in the field, was an adviser on the film set.[23] In the mid-1960s, many computer scientists in the field of artificial intelligence were optimistic that machines with HAL's capabilities would exist within a few decades. For example, AI pioneer Herbert A. Simon at Carnegie Mellon University had predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do",[24] the overarching premise being that the issue was one of computational speed (which was predicted to increase) rather than principle.

Cultural impact

HAL is listed as the 13th-greatest film villain in the AFI's 100 Years...100 Heroes & Villains.[25]
Asteroid 9000 Hal, discovered in the asteroid belt on May 3, 1981 by E. Bowell at Anderson Mesa Station, is named after HAL 9000.[26][27]

HAL was featured in a guest role in the game LEGO Dimensions, where he is summoned by the player in the Portal 2 level to distract GLaDOS.

Impact of the COVID-19 pandemic on the environment

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Impact_of_the_COVID-19_pandemic_on_the_envi...