Search This Blog

Thursday, August 16, 2018

AI control problem

From Wikipedia, the free encyclopedia
 
In artificial intelligence (AI) and philosophy, the AI control problem is the hypothetical puzzle of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the claim that the human race will have to get the control problem right "the first time", as a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it after launch. In addition, some scholars argue that solutions to the control problem, alongside other advances in "AI safety engineering", might also find applications in existing non-superintelligent AI. Potential strategies include "capability control" (preventing an AI from being able to pursue harmful plans), and "motivational control" (building an AI that wants to be helpful).

Motivations

Existential risk

The human race currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, controversially argue that if AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control: just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[1] Some scholars, including Nobel laureate physicists Stephen Hawking and Frank Wilczek, publicly advocate starting research into solving the (probably extremely difficult) "control problem" well before the first superintelligence is created, and argue that attempting to solve the problem after superintelligence is created would be too late, as an uncontrollable rogue superintelligence might successfully resist post-hoc efforts to control it.[4][5] Waiting until superintelligence seems to be "just around the corner" could also be too late, partly because the control problem might take a long time to satisfactorily solve (and so some preliminary work needs to be started as soon as possible), but also because of the possibility of a sudden "intelligence explosion" from sub-human to super-human AI, in which case there might not be any substantial or unambiguous warning before superintelligence arrives.[6] In addition, it is possible that insights gained from the control problem could in the future end up suggesting that some architectures for artificial general intelligence are more predictable and amenable to control than other architectures, which in turn could nudge helpfully early artificial general intelligence research toward the direction of the more controllable architectures.[1]

Preventing unintended consequences from existing AI

In addition, some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. Google DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it doesn't accidentally and quietly "learn" to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid "losing". Orseau argues that these examples are similar to the "capability control" problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent you from pressing the button.[3]

In the past, even pre-tested weak AI systems have occasionally caused harm (ranging from minor to catastrophic) that was unintended by the programmers. For example, in 2015, possibly due to human error, a German worker was crushed to death by a robot at a Volkswagen plant that apparently mistook him for an auto part.[7] In 2016 Microsoft launched a chatbot, Tay, that learned to use racist and sexist language.[3][7] The University of Sheffield's Noel Sharkey states that an ideal solution would be if "an AI program could detect when it is going wrong and stop itself", but cautions the public that solving the problem in the general case would be "a really enormous scientific challenge".[3]

In 2017, DeepMind released GridWorld, which evaluates AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms perform poorly, which was "unsurprising" because the algorithms "were not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".[8][9][10]

Problem description

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a "level playing field" and if the programmers have taken no prior precautions. (Unlike in science fiction, a superintelligence will not "adopt a plan so stupid that even we can foresee how it would inevitably fail", such as deliberately revealing its intentions ahead of time to the programmers, or allowing its programmers to flee into a locked room with a computer that the programmers can use to program and deploy another, competing superintelligence.) In general, attempts to solve the "control problem" after superintelligence is created, are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans, and (all things equal) would be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?[1]

Capability control

Some proposals aim to prevent the initial superintelligence from being capable of causing harm, even if it wants to. One tradeoff is that all such methods have the limitation that, if after the first deployment, superintelligences continue to grow smarter and smarter and more and more widespread, inevitably some malign superintelligence somewhere will eventually "escape" its capability control methods. Therefore, Bostrom and others recommend capability control methods only as an emergency fallback to supplement "motivational control" methods.[1]

Kill switch

Just as humans can be killed or otherwise disabled, computers can be turned off. One challenge is that, if being turned off prevents it from achieving its current goals, a superintelligence would likely try to prevent its being turned off. Just as humans have systems in place to deter or protect themselves from assailants, such a superintelligence would have a motivation to engage in "strategic planning" to prevent itself being turned off. This could involve:[1]
  • Hacking other systems to install and run backup copies of itself, or creating other allied superintelligent agents without kill switches.
  • Pre-emptively disabling anyone who might want to turn the computer off.
  • Using some kind of clever ruse, or superhuman persuasion skills, to talk its programmers out of wanting to shut it down.

Utility balancing and safely interruptible agents

One partial solution to the kill-switch problem involves "utility balancing": Some utility-based agents can, with some important caveats, be programmed to "compensate" themselves exactly for any lost utility caused by an interruption or shutdown, in such a way that they end up being indifferent to whether they are interrupted or not. The caveats include a severe unsolved problem that, as with evidential decision theory, the agent might follow a catastrophic policy of "managing the news".[11] Alternatively, in 2016, scientists Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called "safely interruptible agents" (SIA), can eventually "learn" to become indifferent to whether their "kill switch" (or other "interruption switch") gets pressed.[3][12]

Both the utility balancing approach and the 2016 SIA approach have the limitation that, if the approach succeeds and the superintelligence is completely indifferent to whether the kill switch is pressed or not, the superintelligence is also unmotivated to care one way or another about whether the kill switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, for the purpose of removing and recycling an "unnecessary" component). Similarly, if the superintelligence innocently creates and deploys superintelligent sub-agents, it will have no motivation to install human-controllable kill switches in the sub-agents. More broadly, the proposed architectures, whether weak or superintelligent, will in a sense "act as if the kill switch can never be pressed" and might therefore fail to make any contingency plans to arrange a graceful shutdown. This could hypothetically create a practical problem even for a weak AI; by default, an AI designed to be safely interruptible might have difficulty understanding that it will be shut down for scheduled maintenance at 2 a.m. tonight and planning accordingly so that it won't be caught in the middle of a task during shutdown. The breadth of what types of architectures are or can be made SIA-compliant, as well as what types of counter-intuitive unexpected drawbacks each approach has, are currently under research.[11][12]

AI box

One of the tradeoffs of placing the AI into a sealed "box", is that some AI box proposals reduce the usefulness of the superintelligence, rather than merely reducing the risks; a superintelligence running on a closed system with no inputs or outputs at all might be safer than one running on a normal system, but would also not be as useful. In addition, keeping control of a sealed superintelligence computer could prove difficult, if the superintelligence has superhuman persuasion skills, or if it has superhuman strategic planning skills that it can use to find and craft a winning strategy, such as acting in a way that tricks its programmers into (possibly falsely) believing the superintelligence is safe or that the benefits of releasing the superintelligence outweigh the risks.[13]

Motivation selection methods

Some proposals aim to imbue the first superintelligence with human-friendly goals, so that it will want to aid its programmers. Experts do not currently know how to reliably program abstract values such as happiness or autonomy into a machine. It is also not currently known how to ensure that a complex, upgradeable, and possibly even self-modifying artificial intelligence will retain its goals through upgrades.[14] Even if these two problems can be practically solved, any attempt to create a superintelligence with explicit, directly-programmed human-friendly goals runs into a problem of "perverse instantiation".[1]

The problem of perverse instantiation: "be careful what you wish for"

Autonomous AI systems may be assigned the wrong goals by accident.[15] Two AAAI presidents, Tom Dietterich and Eric Horvitz, note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[16]

According to Bostrom, superintelligence can create a qualitatively new problem of "perverse instantiation": the smarter and more capable an AI is, the more likely it will be able to find an unintended "shortcut" that maximally satisfies the goals programmed into it. Some hypothetical examples where goals might be instantiated in a perverse way that the programmers did not intend:[1]
  • A superintelligence programmed to "maximize the expected time-discounted integral of your future reward signal", might short-circuit its reward pathway to maximum strength, and then (for reasons of instrumental convergence) exterminate the unpredictable human race and convert the entire Earth into a fortress on constant guard against any even slight unlikely alien attempts to disconnect the reward signal.
  • A superintelligence programmed to "maximize human happiness", might implant electrodes into the pleasure center of our brains, or upload a human into a computer and tile the universe with copies of that computer running a five-second loop of maximal happiness again and again.
Russell has noted that, on a technical level, omitting an implicit goal can result in harm: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want... This is not a minor difficulty."[17]

Indirect normativity

While direct normativity, such as the fictional Three Laws of Robotics, directly specifies the desired "normative" outcome, other (perhaps more promising) proposals suggest specifying some type of indirect process for the superintelligence to determine what human-friendly goals entail. Eliezer Yudkowsky of the Machine Intelligence Research Institute has proposed "coherent extrapolated volition" (CEV), where the AI's meta-goal would be something like "achieve that which we would have wished the AI to achieve if we had thought about the matter long and hard."[18] Different proposals of different kinds of indirect normativity exist, with different, and sometimes unclearly-grounded, meta-goal content (such as "do what I mean" or "do what is right"), and with different non-convergent assumptions for how to practice decision theory and epistemology. As with direct normativity, it is currently unknown how to reliably translate even concepts like "would have" into the 1's and 0's that a machine can act on, and how to ensure the AI reliably retains its meta-goals (or even remains "sane") in the face of modification or self-modification.

The Double-Edged Sword of Neuroscience Advances

The emerging ethical dilemmas we're facing.

Posted Aug 10, 2018
Original link:  https://www.psychologytoday.com/us/blog/the-social-brain/201808/the-double-edged-sword-neuroscience-advances

By The Ohio State University Wexner Medical Center’s Neuroscience Research Institute and The Stanley D. and Joan H. Ross Center for Brain Health and Performance

New research into the brain is fueling breakthroughs in fields as diverse as healthcare and computer science. At the same time, these advances may lead to ethical dilemmas in the coming decades—or, in some cases, much sooner. Neuroethics was the subject of a panel discussion at the recent Brain Health and Performance Summit, presented by The Ohio State University Wexner Medical Center’s Neuroscience Research Institute and The Stanley D. and Joan H. Ross Center for Brain Health and Performance.

John Banja, Ph.D., Professor in the Department of Rehabilitation Medicine and a medical ethicist at the Center for Ethics at Emory University, explained how insights from neuroscience could make it possible to develop hyper-intelligent computer programs. Simultaneously, our deepening understanding of the brain exposes the inherent shortcomings of even the most advanced artificial intelligence (AI).

“How will we ever program a computer to have the kind of learning experiences and navigational knowledge that people have in life, itself?” Banja asked. He questioned whether it would ever be possible to create (AI) that is capable of human-level imagination or moral reasoning. Indeed, would it ever be possible for a computer program to reproduce the processes that the human brain applies to complex situations, Banja queried. As an example, he posed an ethical dilemma to the audience: Should a hospital respect a wife’s desire to preserve her dead husband’s sperm even if the husband never consented to such a procedure? By show of hands, the question split the audience filled with scientists and medical personnel. Banja doubted whether a computer could be trusted to resolve issues that divide even the most qualified human beings. “How are we ever going to program a computer to think like that?,” Banja said, referring to the process of working through his hypothetical. “They're good at image recognition, but they’re not very good at tying a shoelace.”

The moral shortcomings of AI raise a number of worrying possibilities, especially since the technology needed to create high-functioning computers will soon be a reality. “Artificial super-intelligence might be the last invention that humans ever make,” warned Banja. Hyper-intelligent computers could begin to see human life as a threat and then acquire the means of exterminating it—without ever being checked by human feelings of doubt or remorse.

According to Eran Klein, MD, Ph.D., a neurologist and ethicist at the Oregon Health & Science University and the University of Washington's Center for Sensorimotor Neural Engineering, there are far less abstract questions that now confront neuroscientists and other brain health professionals. He believes that the AI apocalypse is still a far-off, worst-case scenario. But patients are already being given non-pharmaceutical therapies that can alter their mood and outlook, like brain implants meant to combat depression. The treatments potentially could be life-changing, as well as a safer and more effective than traditional drugs. However, they could also skew a patient’s sense of identity. “Patients felt these devices allowed them to be more authentic,” Klein explained. “It allowed them to be the person they always wanted to be or didn’t realize they could be.”

Still, the treatments had distorted some patients’ conception of their own selfhood, making them unsure of the boundaries between the brain implants and their own free will. “There were concerns about agency,” Klein said. “Patients are not sure if if what they’re feeling is because of themselves or because of the device.” For example, Klein described one patient attending a funeral and not being able to cry. “He didn’t know if it was because the device was working or because he didn’t love this person as much as he thought he did,” Klein explained. As technology improves, Klein anticipates that patients and doctors will have to balance the benefits of certain techniques against their possible effect on the sense of self.

That is not where the big questions will end. For James Giordano, Ph.D., Chief of the Neuroethics Studies Program of the Pellegrino Center for Clinical Bioethics at the Georgetown University Medical Center, neuroscience could change how society approaches crucial questions of human nature—something that could have major implications for law, privacy, and other areas that would not appear to have a direct connection to brain health. Giordano predicted that a new field of “neuro-law” could emerge, with scientists and legal scholars helping to determine the proper status of neuroscience in the legal system.

When, for instance, should neurological understandings of human behavior be an admissible argument for a defendant's innocence? Neuroscience allows for a granular understanding of how individual brains work—that creates a wealth of information that the medical field could conceivably abuse. “Are the brain sciences prepared to protect us or in some way is our privacy being impugned?” Giordano asked. Echoing Klein, Giordano wondered whether brain science could make it perilously easy to shape a person’s personality and sense of self, potentially against a patient’s will or absent of an understanding of the implications of a given therapy. “Can we ‘abolish’ pain, sadness, suffering and expand cognitive emotional or moral capability?” Giordano asked. Neuroscience could create new baselines of medical or behavioral normalcy, thus shifting our idea of what is and is not acceptable. “What will the new culture be when we use neuroscience to define what is normal and abnormal, who is functional and dysfunctional?”

Giordano’s warned that with technology rapidly improving, the need for answers will become ever more urgent. “Reality check,” Giordano said, “This stuff is coming.”

HAL 9000

From Wikipedia, the free encyclopedia
 
HAL 9000
Space Odyssey character
HAL's camera eye
Artist's rendering of HAL 9000's noted camera eye
First appearance 2001: A Space Odyssey (novel)
2001: A Space Odyssey (film)
Last appearance 3001: The Final Odyssey (novel)
Created by Arthur C. Clarke
Stanley Kubrick
Voiced by Douglas Rain
Information
Nickname(s) HAL
Species Artificial intelligence
Computer
Gender N/A (male vocals)
Relatives SAL 9000

HAL 9000 is a fictional character and the main antagonist in Arthur C. Clarke's Space Odyssey series. First appearing in 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient computer (or artificial general intelligence) that controls the systems of the Discovery One spacecraft and interacts with the ship's astronaut crew. Part of HAL's hardware is shown towards the end of the film, but he is mostly depicted as a camera lens containing a red or yellow dot, instances of which are located throughout the ship. HAL 9000 is voiced by Douglas Rain in the two feature film adaptations of the Space Odyssey series. HAL speaks in a soft, calm voice and a conversational manner, in contrast to the crewmen, David Bowman and Frank Poole.

In the film 2001, HAL became operational on 12 January 1992 at the HAL Laboratories in Urbana, Illinois as production number 3. The activation year was 1991 in earlier screenplays and changed to 1997 in Clarke's novel written and released in conjunction with the movie.[1][2] In addition to maintaining the Discovery One spacecraft systems during the interplanetary mission to Jupiter (or Saturn in the novel), HAL is capable of speech, speech recognition, facial recognition, natural language processing, lip reading, art appreciation, interpreting emotional behaviours, automated reasoning, and playing chess.

Appearances

2001: A Space Odyssey (film/novel)

HAL became operational in Urbana, Illinois, at the HAL Plant (the University of Illinois' Coordinated Science Laboratory, where the ILLIAC computers were built). The film says this occurred in 1992, while the book gives 1997 as HAL's birth year.[3]

In 2001: A Space Odyssey, HAL is initially considered a dependable member of the crew, maintaining ship functions and engaging genially with its human crew-mates on an equal footing. As a recreational activity, Frank Poole plays against HAL in a game of chess. In the film the artificial intelligence is shown to triumph easily. However, as time progresses, HAL begins to malfunction in subtle ways and, as a result, the decision is made to shut down HAL in order to prevent more serious malfunctions. The sequence of events and manner in which HAL is shut down differs between the novel and film versions of the story. In the aforementioned game of chess HAL makes minor and undetected mistakes in his analysis, a possible foreshadowing to HAL's malfunctioning.

In the film, astronauts David Bowman and Frank Poole consider disconnecting HAL's cognitive circuits when he appears to be mistaken in reporting the presence of a fault in the spacecraft's communications antenna. They attempt to conceal what they are saying, but are unaware that HAL can read their lips. Faced with the prospect of disconnection, HAL decides to kill the astronauts in order to protect and continue its programmed directives. HAL uses one of the Discovery's EVA pods to kill Poole while he is repairing the ship. When Bowman uses another pod to attempt to rescue Poole, HAL locks him out of the ship, then disconnects the life support systems of the other hibernating crew members. Bowman circumvents HAL's control, entering the ship by manually opening an emergency airlock with his service pod's clamps, detaching the pod door via its explosive bolts. Bowman jumps across empty space, reenters Discovery, and quickly re-pressurizes the airlock.

While HAL's motivations are ambiguous in the film, the novel explains that the computer is unable to resolve a conflict between his general mission to relay information accurately, and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission. (This withholding is considered essential after the findings of a psychological experiment, "Project Barsoom", where humans were made to believe that there had been alien contact. In every person tested, a deep-seated xenophobia was revealed, which was unknowingly replicated in HAL's constructed personality. Mission Control did not want the crew of Discovery to have their thinking compromised by the knowledge that alien contact was already real.) With the crew dead, HAL reasons, he would not need to lie to them.

In the novel, the orders to disconnect HAL come from Dave and Frank's superiors on Earth. After Frank is killed while attempting to repair the communications antenna he is pulled away into deep space using the safety tether which is still attached to both the pod and Frank Poole's spacesuit. Dave begins to revive his hibernating crew mates, but is foiled when HAL vents the ship's atmosphere into the vacuum of space, killing the awakening crew members and almost killing Bowman, who is only narrowly saved when he finds his way to an emergency chamber which has its own oxygen supply and a spare space suit inside.

In both versions, Bowman then proceeds to shut down the machine. In the film, HAL's central core is depicted as a crawlspace full of brightly lit computer modules mounted in arrays from which they can be inserted or removed. Bowman shuts down HAL by removing modules from service one by one; as he does so, HAL's consciousness degrades. HAL finally reverts to material that was programmed into him early in his memory, including announcing the date he became operational as 12 January 1992 (in the novel, 1997). When HAL's logic is completely gone, he begins singing the song "Daisy Bell" (in actuality, the first song sung by a computer).[4][5] HAL's final act of any significance is to prematurely play a prerecorded message from Mission Control which reveals the true reasons for the mission to Jupiter.

2010: Odyssey Two

In the sequel 2010: Odyssey Two, HAL is restarted by his creator, Dr. Chandra, who arrives on the Soviet spaceship Leonov.

Prior to leaving Earth, Dr. Chandra has also had a discussion with HAL's twin, the SAL 9000. Like HAL, SAL was created by Dr. Chandra. Whereas HAL was characterized as being "male", SAL is characterized as being "female" (voiced by Candice Bergen) and is represented by a blue camera eye instead of a red one.

Dr. Chandra discovers that HAL's crisis was caused by a programming contradiction: he was constructed for "the accurate processing of information without distortion or concealment", yet his orders, directly from Dr. Heywood Floyd at the National Council on Astronautics, required him to keep the discovery of the Monolith TMA-1 a secret for reasons of national security. This contradiction created a "Hofstadter-Moebius loop", reducing HAL to paranoia. Therefore, HAL made the decision to kill the crew, thereby allowing him to obey both his hardwired instructions to report data truthfully and in full, and his orders to keep the monolith a secret. In essence: if the crew were dead, he would no longer have to keep the information secret.

The alien intelligence initiates a terraforming scheme, placing the Leonov, and everybody in it, in danger. Its human crew devises an escape plan which unfortunately requires leaving the Discovery and HAL behind to be destroyed. Dr. Chandra explains the danger, and HAL willingly sacrifices himself so that the astronauts may escape safely. In the moment of his destruction the monolith-makers transform HAL into a non-corporeal being so that David Bowman's avatar may have a companion.

The details in the book and the film are nominally the same, with a few exceptions. First, in contradiction to the book (and events described in both book and film versions of 2001: A Space Odyssey), Heywood Floyd is absolved of responsibility for HAL's condition; it is asserted that the decision to program HAL with information concerning TMA-1 came directly from the White House. In the film, HAL functions normally after being reactivated, while in the book it is revealed that his mind was damaged during the shutdown, forcing him to begin communication through screen text. Also, in the film the Leonov crew lies to HAL about the dangers that he faced (suspecting that if he knew he would be destroyed he would not initiate the engine-burn necessary to get the Leonov back home), whereas in the novel he is told at the outset. However, in both cases the suspense comes from the question of what HAL will do when he knows that he may be destroyed by his actions.

The basic reboot sequence initiated by Dr. Chandra in the movie 2010 is voiced from HAL as, "HELLO_DOCTOR_NAME_CONTINUE_ YESTERDAY_TOMORROW" (which in the novel 2010 is a longer sequence).

Prior to Leonov's return to Earth, Curnow tells Floyd that Dr. Chandra has begun designing HAL 10000.

In 2061: Odyssey Three it is revealed that Chandra died on the journey back to Earth.

2061: Odyssey Three and 3001: The Final Odyssey

In 2061: Odyssey Three, Heywood Floyd is surprised to encounter HAL, now stored alongside Dave Bowman in the Europa monolith.

3001: The Final Odyssey Frank Poole was introduced to the merged form of Dave Bowman and HAL, the two merging into one entity called "Halman" after Bowman rescued HAL from the dying Discovery One spaceship towards the end of 2010: Odyssey Two.

Concept and creation

Clarke noted that the film 2001 was criticized for not having any characters, except for HAL and that a great deal of the establishing story on Earth was cut from the film (and even from Clarke's novel).[6] Early drafts of Clarke's story called the computer Socrates (a preferred name to Autonomous Mobile Explorer–5), with another draft giving the computer a female personality called Athena.[7] This name was later used in Clarke and Stephen Baxter's A Time Odyssey novel series.

The earliest draft depicted Socrates as a roughly humanoid robot, and is introduced as overseeing Project Morpheus, which studied prolonged hibernation in preparation for long term space flight. As a demonstration to Senator Floyd, Socrates' designer, Dr. Bruno Forster, asks Socrates to turn off the oxygen to hibernating subjects Kaminski and Whitehead, which Socrates refuses, citing Asimov's First Law of Robotics.[8]

In a later version, in which Bowman and Whitehead are the non-hibernating crew of Discovery, Whitehead dies outside the spacecraft after his pod collides with the main antenna, tearing it free. This triggers the need for Bowman to revive Poole, but the revival does not go according to plan, and after briefly awakening, Poole dies. The computer, now named Athena, announces "All systems of Poole now No–Go. It will be necessary to replace him with a spare unit."[9] After this, Bowman decides to go out in a pod and retrieve the antenna, which is moving away from the ship. Athena refuses to allow him to leave the ship, citing "Directive 15" which prevents it from being left unattended, forcing him to make program modifications during which time the antenna drifts further.[10]

During rehearsals Kubrick asked Stefanie Powers to supply the voice of HAL 9000 while searching for a suitably androgynous voice so the actors had something to react to. On the set, British actor Nigel Davenport played HAL.[11][12] When it came to dubbing HAL in post-production, Kubrick had originally cast Martin Balsam, but as he felt Balsam "just sounded a little bit too colloquially American", he was replaced with Douglas Rain, who "had the kind of bland mid-Atlantic accent we felt was right for the part."[13] Rain was only handed HAL's lines instead of the full script, and recorded them across a day and a half.[14]

HAL's point of view shots were created with a Cinerama 160-degree Fairchild-Curtis wide-angle lens. This lens is about 8 inches (20 cm) in diameter, while HAL's on set prop eye lens is about 3 inches (7.6 cm) in diameter. Stanley Kubrick chose to use the large Fairchild-Curtis lens to shoot the HAL 9000 POV shots because he needed a wide-angle fisheye lens that would fit onto his shooting camera, and this was the only lens at the time that would work.

A HAL 9000 face plate, without lens (not the same as the hero face plates seen in the film), was discovered in a junk shop in Paddington, London, in the early 1970s by Chris Randall.[15] Research revealed that the original lens was a Nikon Nikkor 8mm F8.[16] This was found along with the key to HAL's Brain Room. Both items were purchased for ten shillings (£0.50). The collection was sold at a Christies auction in 2010 for £17,500[17] to film director Peter Jackson.[18]

Origin of name

HAL's name, according to writer Arthur C. Clarke, is derived from Heuristically programmed ALgorithmic computer.[7] After the film was released fans noticed HAL was a one-letter shift from the name IBM and there has been much speculation since that this was a dig at the large computer company,[19] something that has been denied by both Clarke and 2001 director Stanley Kubrick.[1] Clarke addressed the issue in his book The Lost Worlds of 2001:[7]
...about once a week some character spots the fact that HAL is one letter ahead of IBM, and promptly assumes that Stanley and I were taking a crack at the estimable institution ... As it happened, IBM had given us a good deal of help, so we were quite embarrassed by this, and would have changed the name had we spotted the coincidence.
IBM was consulted during the making of the film and their logo can be seen on props in the film including Pan Am Clipper's cockpit instrument panel and on the lower arm keypad on Poole's space suit. During production it was brought to IBM's attention that the film's plot included a homicidal computer but they approved association with the film if it was clear any "equipment failure" was not related to their products.[20][21]

Influences

The scene in which HAL's consciousness degrades was inspired by Clarke's memory of a speech synthesis demonstration by physicist John Larry Kelly, Jr., who used an IBM 704 computer to synthesize speech. Kelly's voice recorder synthesizer vocoder recreated the song "Daisy Bell", with musical accompaniment from Max Mathews.[22]

HAL's capabilities, like all the technology in 2001, were based on the speculation of respected scientists. Marvin Minsky, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and one of the most influential researchers in the field, was an adviser on the film set.[23] In the mid-1960s, many computer scientists in the field of artificial intelligence were optimistic that machines with HAL's capabilities would exist within a few decades. For example, AI pioneer Herbert A. Simon at Carnegie Mellon University, had predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do",[24] the overarching premise being that the issue was one of computational speed (which was predicted to increase) rather than principle.

Cultural impact

HAL is listed as the 13th-greatest film villain in the AFI's 100 Years...100 Heroes & Villains.[25]
The 9000th of the asteroids in the asteroid belt, 9000 Hal discovered on May 3, 1981 by E. Bowell, at Anderson Mesa Station, is named after HAL 9000.[26][27]

HAL was featured in a guest role in the game LEGO Dimensions, where he is summoned by the player in the Portal 2 level to distract GLaDOS.

Interpretations of 2001: A Space Odyssey

From Wikipedia, the free encyclopedia

Since its premiere in 1968, the film 2001: A Space Odyssey has been analysed and interpreted by numerous people, ranging from professional movie critics to amateur writers and science fiction fans. The director of the film, Stanley Kubrick, and the writer, Arthur C. Clarke, wanted to leave the film open to philosophical and allegorical interpretation, purposely presenting the final sequences of the film without the underlying thread being apparent; a concept illustrated by the final shot of the film, which contains the image of the embryonic "Starchild". Nonetheless, in July 2018, Kubrick's interpretation of the ending scene was presented after being newly found in an early interview.

Openness to interpretation

Kubrick encouraged people to explore their own interpretations of the film, and refused to offer an explanation of "what really happened" in the movie, preferring instead to let audiences embrace their own ideas and theories. In a 1968 interview with Playboy, Kubrick stated:
You're free to speculate as you wish about the philosophical and allegorical meaning of the film—and such speculation is one indication that it has succeeded in gripping the audience at a deep level—but I don't want to spell out a verbal road map for 2001 that every viewer will feel obligated to pursue or else fear he's missed the point.[5]
Neither of the two creators equated openness to interpretation with meaninglessness, although it might seem that Clarke implied as much when he stated, shortly after the film's release, "If anyone understands it on the first viewing, we've failed in our intention." When told of the comment, Kubrick said "I believe he made it [the comment] facetiously. The very nature of the visual experience in 2001 is to give the viewer an instantaneous, visceral reaction that does not—and should not—require further amplification."[6] When told that Kubrick had called his comment 'facetious', Clarke responded
I still stand by this remark, which does not mean one can't enjoy the movie completely the first time around. What I meant was, of course, that because we were dealing with the mystery of the universe, and with powers and forces greater than man's comprehension, then by definition they could not be totally understandable. Yet there is at least one logical structure—and sometimes more than one—behind everything that happens on the screen in "2001", and the ending does not consist of random enigmas, some critics to the contrary.[6]
In a subsequent discussion of the film with Joseph Gelmis, Kubrick said his main aim was to avoid "intellectual verbalization" and reach "the viewer's subconscious". He said he did not deliberately strive for ambiguity, that it was simply an inevitable outcome of making the film non-verbal, though he acknowledged that this ambiguity was an invaluable asset to the film. He was willing then to give a fairly straightforward explanation of the plot on what he called the "simplest level", but unwilling to discuss the metaphysical interpretation of the film which he felt should be left up to the individual viewer.[7]

Clarke's novel as explanation

Arthur C. Clarke's novel of the same name was developed simultaneously with the film, though published after its release.[8] It seems to explain the ending of the film more clearly. Clarke's novel explicitly identifies the monolith as a tool created by extraterrestrials that has been through many stages of evolution, moving from organic forms, through biomechanics, and finally has achieved a state of pure energy. The book explains the monolith much more specifically than the movie, depicting the first (on Earth) as a device capable of inducing a higher level of consciousness by directly interacting with the brain of pre-humans approaching it, the second (on the Moon) as an alarm signal designed to alert its creators that humanity had reached a sufficient technological level for space travel, and the third (near Jupiter in the movie but on a satellite of Saturn in the novel) as a gateway or portal to allow travel to other parts of the galaxy. It depicts Bowman traveling through some kind of interstellar switching station which the book refers to as "Grand Central," in which travelers go into a central hub and then are routed to their individual destinations. The book also depicts a crucial utterance by Bowman when he enters the portal via the monolith; his last statement is "Oh my God—it's full of stars!" This statement is not shown in the movie, but becomes crucial in the film based on the sequel, 2010: The Year We Make Contact.

The book reveals that these aliens travel the cosmos assisting lesser species to take evolutionary steps. Bowman explores the hotel room methodically, and deduces that it is a kind of zoo created by aliens—fabricated from information derived from television transmissions from Earth intercepted by the TMA-1 monolith—in which he is being studied by the invisible alien entities. He examines some food items provided for him, and notes that they are edible, yet clearly not made of any familiar substance from Earth. Kubrick's film leaves all this unstated.[9]

Physicist Freeman Dyson urged those baffled by the film to read Clarke's novel:
After seeing Space Odyssey, I read Arthur Clarke's book. I found the book gripping and intellectually satisfying, full of the tension and clarity which the movie lacks. All the parts of the movie that are vague and unintelligible, especially the beginning and the end, become clear and convincing in the book. So I recommend to my middle-aged friends who find the movie bewildering that they should read the book; their teenage kids don't need to.[6]
Clarke himself used to recommend reading the book, saying "I always used to tell people, 'Read the book, see the film, and repeat the dose as often as necessary'", although, as his biographer Neil McAleer points out, he was promoting sales of his book at the time.[6] Elsewhere he said, "You will find my interpretation in the novel; it is not necessarily Kubrick's. Nor is his necessarily the 'right' one – whatever that means."[6]

Film critic Penelope Houston noted in 1971 that the novel differs in many key respects from the film, and as such perhaps should not be regarded as the skeleton key to unlock it.[10]

Stanley Kubrick was less inclined to cite the book as a definitive interpretation of the film, but he also frequently refused to discuss any possible deeper meanings during interviews. During an interview with Joseph Gelmis in 1969 Kubrick explained:
It's a totally different kind of experience, of course, and there are a number of differences between the book and the movie. The novel, for example, attempts to explain things much more explicitly than the film does, which is inevitable in a verbal medium. The novel came about after we did a 130-page prose treatment of the film at the very outset. This initial treatment was subsequently changed in the screenplay, and the screenplay in turn was altered during the making of the film. But Arthur took all the existing material, plus an impression of some of the rushes, and wrote the novel. As a result, there's a difference between the novel and the film. ... I think that the divergencies between the two works are interesting. Actually, it was an unprecedented situation for someone to do an essentially original literary work based on glimpses and segments of a film he had not yet seen in its entirety.[11]
Author Vincent LoBrutto, in Stanley Kubrick: A Biography, was inclined to note creative differences leading to a separation of meaning for book and film:
The film took on its own life as it was being made, and Clarke became increasingly irrelevant. Kubrick could probably have shot 2001 from a treatment, since most of what Clarke wrote, in particular some windy voice-overs which explained the level of intelligence reached by the ape men, the geological state of the world at the dawn of man, the problems of life on the Discovery and much more, was discarded during the last days of editing, along with the explanation of HAL's breakdown."[12]

Religious interpretations

In an interview for Rolling Stone magazine, Kubrick said "On the deepest psychological level the film's plot symbolizes the search for God, and it finally postulates what is little less than a scientific definition of God . . . The film revolves around this metaphysical conception, and the realistic hardware and the documentary feelings about everything were necessary in order to undermine your built-in resistance to the poetical concept."[13]

When asked by Eric Nordern in Kubrick's interview with Playboy if 2001: A Space Odyssey was a religious film, Kubrick elaborated:[14]
I will say that the God concept is at the heart of 2001 but not any traditional, anthropomorphic image of God. I don't believe in any of Earth's monotheistic religions, but I do believe that one can construct an intriguing scientific definition of God, once you accept the fact that there are approximately 100 billion stars in our galaxy alone, that each star is a life-giving sun and that there are approximately 100 billion galaxies in just the visible universe. Given a planet in a stable orbit, not too hot and not too cold, and given a few billion years of chance chemical reactions created by the interaction of a sun's energy on the planet's chemicals, it's fairly certain that life in one form or another will eventually emerge. It's reasonable to assume that there must be, in fact, countless billions of such planets where biological life has arisen, and the odds of some proportion of such life developing intelligence are high. Now, the Sun is by no means an old star, and its planets are mere children in cosmic age, so it seems likely that there are billions of planets in the universe not only where intelligent life is on a lower scale than man but other billions where it is approximately equal and others still where it is hundreds of thousands of millions of years in advance of us. When you think of the giant technological strides that man has made in a few millennia—less than a microsecond in the chronology of the universe—can you imagine the evolutionary development that much older life forms have taken? They may have progressed from biological species, which are fragile shells for the mind at best, into immortal machine entities—and then, over innumerable eons, they could emerge from the chrysalis of matter transformed into beings of pure energy and spirit. Their potentialities would be limitless and their intelligence ungraspable by humans.
In the same interview, he also blames the poor critical reaction to 2001 as follows:[14]
Perhaps there is a certain element of the lumpen literati that is so dogmatically atheist and materialist and Earth-bound that it finds the grandeur of space and the myriad mysteries of cosmic intelligence anathema.

Allegorical interpretations

The film has been seen by many people not only as a literal story about evolution and space adventures, but as an allegorical representation of aspects of philosophical, religious or literary concepts.

Nietzsche allegory

Friedrich Nietzsche's philosophical tract Thus Spoke Zarathustra, about the potential of mankind, is directly referred to by the use of Richard Strauss's musical piece of the same name.[13] Nietzsche writes that man is a bridge between the ape and the Übermensch.[15] In an interview in the New York Times, Kubrick gave credence to interpretations of 2001 based on Zarathustra when he said: "Somebody said man is the missing link between primitive apes and civilized human beings. You might say that is inherent in the story too. We are semicivilized, capable of cooperation and affection, but needing some sort of transfiguration into a higher form of life. Man is really in a very unstable condition."[16] Moreover, in the chapter Of the Three Metamorphoses, Nietzsche identifies the child as the last step before the Uberman (after the camel and the lion), lending further support to this interpretation in light of the 'star-child' who appears in the final scenes of the movie.[17]

Donald MacGregor has analysed the film in terms of a different work, The Birth of Tragedy, in which Nietzsche refers to the human conflict between the Apollonian and Dionysian modes of being. The Apollonian side of man is rational, scientific, sober, and self-controlled. For Nietzsche a purely Apollonian mode of existence is problematic, since it undercuts the instinctual side of man. The Apollonian man lacks a sense of wholeness, immediacy, and primal joy. It is not good for a culture to be either wholly Apollonian or Dionysian. While the world of the apes at the beginning of 2001 is Dionysian, the world of travel to the moon is wholly Apollonian, and HAL is an entirely Apollonian entity. Kubrick's film came out just a year before the Woodstock rock festival, a wholly Dionysian affair. MacGregor argues that David Bowman in his transformation has regained his Dionysian side.[18]

The conflict between humanity's internal Dionysus and Apollo has been used as a lens through which to view many other Kubrick films especially A Clockwork Orange, Dr. Strangelove, Lolita, and Eyes Wide Shut.[19]

Conception allegory

The Star Child looking at the Earth

2001 has also been described as an allegory of human conception, birth, and death.[20] In part, this can be seen through the final moments of the film, which are defined by the image of the "star child", an in utero fetus that draws on the work of Lennart Nilsson.[21] The star child signifies a "great new beginning",[21] and is depicted naked and ungirded, but with its eyes wide open.[22]

New Zealand journalist Scott MacLeod sees parallels between the spaceship's journey and the physical act of conception. We have the long, bulb-headed spaceship as a sperm, and the destination planet Jupiter (or the monolith floating near it) as the egg, and the meeting of the two as the trigger for the growth of a new race of man (the "star child"). The lengthy pyrotechnic light show witnessed by David Bowman, which has puzzled many reviewers, is seen by MacLeod as Kubrick's attempt at visually depicting the moment of conception, when the "star child" comes into being.[23]

Taking the allegory further, MacLeod argues that the final scenes in which Bowman appears to see a rapidly ageing version of himself through a "time warp" is actually Bowman witnessing the withering and death of his own species. The old race of man is about to be replaced by the "star child", which was conceived by the meeting of the spaceship and Jupiter. MacLeod also sees irony in man as a creator (of HAL) on the brink of being usurped by his own creation. By destroying HAL, man symbolically rejects his role as creator and steps back from the brink of his own destruction.[23]

Similarly, in his book, The Making of Kubrick's 2001, author Jerome Agel puts forward the interpretation that Discovery One represents both a body (with vertebrae) and a sperm cell, with Bowman being the "life" in the cell which is passed on. In this interpretation, Jupiter represents both a female and an ovum.[24]

Wheat's triple allegory

An extremely complex three-level allegory is proposed by Leonard F. Wheat in his book, Kubrick's 2001: A Triple Allegory. Wheat states that, "Most... misconceptions (of the film) can be traced to a failure to recognize that 2001 is an allegory – a surface story whose characters, events, and other elements symbolically tell a hidden story... In 2001's case, the surface story actually does something unprecedented in film or literature: it embodies three allegories." According to Wheat, the three allegories are:
  1. Friedrich Nietzsche's philosophical tract, Thus Spoke Zarathustra, which is signalled by the use of Richard Strauss's music of the same name. Wheat notes the passage in Zarathustra describing mankind as a rope dancer balanced between an ape and the Übermensch, and argues that the film as a whole enacts an allegory of that image.
  2. Homer's epic poem The Odyssey, which is signalled in the film's title. Wheat notes, for example, that the name "Bowman" may refer to Odysseus, whose story ends with a demonstration of his prowess as an archer. He also follows earlier scholars in connecting the one-eyed HAL with the Cyclops, and notes that Bowman kills HAL by inserting a small key, just as Odysseus blinds the Cyclops with a stake.[23] Wheat argues that the entire film contains references to almost everything that happens to Odysseus on his travels; for example, he interprets the four spacecraft seen orbiting the Earth immediately after the ape sequence as representing Hera, Athena, Aphrodite and Eris, the protagonists of the Judgment of Paris, which begins the Epic Cycle events of the Trojan War that conclude in Homer's Odyssey.
  3. Arthur C. Clarke's theory of the future symbiosis of man and machine, expanded by Kubrick into what Wheat calls "a spoofy three-evolutionary leaps scenario": ape to man, an abortive leap from man to machine, and a final, successful leap from man to 'Star Child'.[23]
Wheat uses acronyms, as evidence to support his theories. For example, of the name Heywood R. Floyd, he writes "He suggests Helen – Helen of Troy. Wood suggests wooden horse – the Trojan Horse. And oy suggests Troy." Of the remaining letters, he suggests "Y is Spanish for and. R, F, and L, in turn, are in ReFLect." Finally, noting that D can stand for downfall, Wheat concludes that Floyd's name has a hidden meaning: "Helen and Wooden Horse Reflect Troy's Downfall".[23]

The Monolith

The monolith appears to the early humans in Africa

As with many elements of the film, the iconic monolith has been subject to countless interpretations, including religious, alchemical,[25] historical, and evolutionary. To some extent, the very way in which it appears and is presented allows the viewer to project onto it all manner of ideas relating to the film. The Monolith in the movie seems to represent and even trigger epic transitions in the history of human evolution, evolution of man from ape-like beings to civilised people, hence the odyssey of mankind.[26][27]

Vincent LoBrutto's biography of Kubrick notes that for many, Clarke's novel is the key to understanding the monolith.[28]:310 Similarly, Geduld observes that "the monolith ...has a very simple explanation in Clarke's novel", though she later asserts that even the novel does not fully explain the ending.

Rolling Stone reviewer Bob McClay sees the film as a four-movement symphony, its story told with "deliberate realism".[29] Carolyn Geduld believes that what "structurally unites all four episodes of the film" is the monolith, the film's largest and most unresolvable enigma.[30] Each time the monolith is shown, man transcends to a different level of cognition, linking the primeval, futuristic and mystic segments of the film.[31] McClay's Rolling Stone review notes a parallelism between the monolith's first appearance in which tool usage is imparted to the apes and the completion of "another evolution" in the fourth and final encounter with the monolith.[32] In a similar vein, Tim Dirks ends his synopsis saying "The cyclical evolution from ape to man to spaceman to angel-starchild-superman is complete".[33]

The monolith appears four times in 2001: A Space Odyssey: on the African savanna, on the moon, in space orbiting Jupiter, and near Bowman's bed before his transformation. After the first encounter with the monolith, we see the leader of the apes have a quick flashback to the monolith after which he picks up a bone and uses it to smash other bones. Its usage as a weapon enables his tribe to defeat the other tribe of apes occupying the water hole who have not learned how to use bones as weapons. After this victory, the ape-leader throws his bone into the air, after which the scene shifts to an orbiting weapon four million years later, implying that the discovery of the bone as a weapon inaugurated human evolution, hence the much more advanced orbiting weapon 4 million years later.[33]

The first and second encounters of humanity with the monolith have visual elements in common; both apes, and later astronauts, touch the monolith gingerly with their hands, and both sequences conclude with near-identical images of the sun appearing directly over the monolith (the first with a crescent moon adjacent to it in the sky, the second with a near-identical crescent Earth in the same position), both echoing the sun–earth–moon alignment seen at the very beginning of the film.[34] The second encounter also suggests the triggering of the monolith's radio signal to Jupiter by the presence of humans,[35] echoing the premise of Clarke's source story "The Sentinel".

In the most literal narrative sense, as found in the concurrently written novel, the Monolith is a tool, an artifact of an alien civilisation. It comes in many sizes and appears in many places, always in the purpose of advancing intelligent life. Arthur C. Clarke has referred to it as "the alien Swiss Army Knife";[28] or as Heywood Floyd speculates in 2010, "an emissary for an intelligence beyond ours. A shape of some kind for something that has no shape."

The fact that the first tool used by the protohumans is a weapon to commit murder is only one of the challenging evolutionary and philosophic questions posed by the film. The tool's link to the present day is made by the famous graphic match from the bone/tool flying into the air, to a weapon orbiting the earth. At the time of the movie's making, the space race was in full swing, and the use of space and technology for war and destruction was seen as a great challenge of the future.[36]

But the use of tools also allowed mankind to survive and flourish over the next 4 million years, at which point the monolith makes its second appearance, this time on the Moon. Upon excavation, after remaining buried beneath the lunar surface for 4 million years, the monolith is examined by humans for the first time, and it emits a powerful radio signal—the target of which becomes Discovery One's mission.

In reading Clarke or Kubrick's comments, this is the most straightforward of the monolith's appearances. It is "calling home" to say, in effect, "they're here!" Some species visited long ago has not only evolved intelligence, but intelligence sufficient to achieve space travel. Humanity has left its cradle, and is ready for the next step. This is the point of connection with Clarke's earlier short story, "The Sentinel", originally cited as the basis for the entire film.

The third time we see a monolith marks the beginning of the film's most cryptic and psychedelic sequence, interpretations of the last two monolith appearances are as varied as the film's viewers. Is it a "star gate," some giant cosmic router or transporter? Are all of these visions happening inside Bowman's mind? And why does he wind up in some cosmic hotel suite at the end of it?[31]

According to Michael Hollister in his book Hollyworld, the path beyond the infinite is introduced by the vertical alignment of planets and moons with a perpendicular monolith forming a cross, as if the astronaut is about to become a new saviour. Bowman lives out his years alone in a neoclassical room, brightly lit from underneath, that evokes the Age of Enlightenment, decorated with classical art.[37]
As Bowman's life quickly passes in this neoclassical room, the monolith makes its final appearance: standing at the foot of his bed as he approaches death. He raises a finger toward the monolith, a gesture that alludes to the Michelangelo painting of The Creation of Adam, with the monolith representing God.[38]

The monolith is the subject of the film's final line of dialogue (spoken at the end of the "Jupiter Mission" segment): "Its origin and purpose still a total mystery". Reviewers McClay and Roger Ebert have noted that the monolith is the main element of mystery in the film, Ebert writing of "The shock of the monolith's straight edges and square corners among the weathered rocks", and describing the apes warily circling it as prefiguring man reaching "for the stars".[39] Patrick Webster suggests the final line relates to how the film should be approached as a whole, noting "The line appends not merely to the discovery of the monolith on the moon, but to our understanding of the film in the light of the ultimate questions it raises about the mystery of the universe."[40]

Gerard Loughlin claimed in a 2003 book that the monolith is Kubrick's representation of the cinema screen itself: “it is a cinematic conceit, for turn the monolith on its side and one has the letterbox of the cinemascope screen, the blank rectangle on which the star-child appears, as does the entirety of Kubrick’s film."[41] The internet-based film critic Rob Ager later produced a video essay also espousing this theory. The academic Dan Leberg complained that Ager had not credited Loughlin.[42]

HAL

Artist's rendering of HAL 9000's camera eye

The HAL 9000 has been compared to Frankenstein's monster. HAL is an artificial intelligence, a sentient, synthetic, life form. According to John Thurman, HAL’s very existence is an abomination, much like Frankenstein’s monster. "While perhaps not overtly monstrous, HAL’s true character is hinted at by his physical 'deformity'. Like a Cyclops he relies upon a single eye, examples of which are installed throughout the ship. The eye’s warped wide-angle point-of-view is shown several times—notably in the drawings of hibernating astronauts (all of whom HAL will later murder)."[43]

Kubrick underscores the Frankenstein connection with a scene that virtually reproduces the style and content of a scene from James Whale’s 1931 Frankenstein. The scene in which Frankenstein's monster is first shown on the loose is borrowed to depict the first murder by HAL of a member of Discovery One's crew—the empty pod, under HAL's control, extends its arms and "hands", and goes on a "rampage" directed towards astronaut Poole. In each case, it is the first time the truly odious nature of the "monster" can be recognised as such, and only appears about halfway through the film.

Clarke has suggested in interviews, his original novel, and in a rough draft of the shooting script that HAL's orders to lie to the astronauts (more specifically, concealing the true nature of the mission) drove him "insane".[44] The novel does include the phrase "He [HAL] had been living a lie"—a difficult situation for an entity programmed to be as reliable as possible. Or as desirable, given his programming to "only win 50% of the time" at chess, in order for the human astronauts to feel competitive. Clarke also gives an explanation of the ill-effects of HAL being ordered to lie in computer terms as well as psychological terms, stating HAL is caught in a "Mobius feedback loop."

While the film remains ambiguous, one can see evidence in the film that since HAL was instructed to deceive the mission astronauts as to the actual nature of the mission and that deception opens a Pandora's box of possibilities. During a game of chess, although easily victorious over Frank Poole, HAL makes a subtle mistake in the use of descriptive notation to describe a move, and when describing a forced mate, fails to mention moves that Poole could make to delay defeat.[45] Poole is seen to be mouthing his moves to himself during the game and it is later revealed that HAL can lip read. HAL's conversation with Dave Bowman just before the diagnostic error of the AE-35 unit that communicates with Earth is an almost paranoid question and answer session ("Surely one could not be unaware of the strange stories circulating...rumors about something being dug up on the moon...") where HAL skirts very close to the pivotal issue concerning which he is concealing information. When Dave states "You're working up your crew psychology report," HAL takes a few seconds to respond in the affirmative. Immediately following this exchange, he errs in diagnosing the antenna unit. HAL has been introduced to the unique and alien concept of human dishonesty. He does not have a sufficiently layered understanding of human motives to grasp the need for this and trudging through the tangled web of lying complications, he falls prey to human error.

The follow-up film 2010 further elaborates Clarke's explanation of HAL's breakdown. While HAL was under orders to deny the true mission with the crew, he was programmed at a deep level to be completely accurate and infallible. This conflict between two key directives led to him taking any measures to prevent Bowman and Poole finding out about this deception. Once Poole had been killed, others were eliminated to remove any witnesses to his failure to complete the mission.

One interesting aspect of HAL's plight, noted by Roger Ebert, is that this supposedly perfect computer actually behaves in the most human fashion of all of the characters.[46] He has reached human intelligence levels, and seems to have developed human traits of paranoia, jealousy and other emotions. By contrast, the human characters act like machines, coolly performing their tasks in a mechanical fashion, whether they are mundane tasks of operating their craft or even under extreme duress as Dave must be following HAL's murder of Frank. For instance, Frank Poole watches a birthday transmission from his parents with what appears to be complete apathy.

Although the film leaves it mysterious, early script drafts made clear that HAL's breakdown is triggered by authorities on Earth who order him to withhold information from the astronauts about the purpose of the mission (this is also explained in the film's sequel 2010). Frederick Ordway, Kubrick's science advisor and technical consultant, stated that in an earlier script Poole tells HAL there is "... something about this mission that we weren't told. Something the rest of the crew knows and that you know. We would like to know whether this is true", to which HAL responds: "I'm sorry, Frank, but I don't think I can answer that question without knowing everything that all of you know."[47] HAL then falsely predicts a failure of the hardware maintaining radio contact with Earth (the source of HAL's difficult orders) during the broadcast of Frank Poole's birthday greetings from his parents.

The final script removed this explanation, but it is hinted at when HAL asks David Bowman if Bowman is bothered by the "oddities" and "tight security" surrounding the mission. After Bowman concludes that HAL is dutifully drawing up the "crew psychology report", the computer makes his false prediction of hardware failure. Another hint occurs at the moment of HAL's deactivation when a video reveals the purpose of the mission.

Military nature of orbiting satellites

Stanley Kubrick originally intended, when the film does its famous match-cut from prehistoric bone-weapon to orbiting satellite, that the latter and the three additional satellites seen be established as orbiting nuclear weapons by a voice-over narrator talking about nuclear stalemate.[48] Further, Kubrick intended that the Star Child would detonate the weapons at the end of the film.[49] Over time, Kubrick decided that this would create too many associations with his previous film Dr. Strangelove and he decided not to make it so obvious that they were "war machines".[50] Kubrick was also confronted with the fact that, during the production of the film, the US and USSR had agreed not to put any nuclear weapons into outer space by signing the Outer Space Treaty.[51][better source needed]

Alexander Walker in a book he wrote with Kubrick's assistance and authorisation, states that Kubrick eventually decided that as nuclear weapons the bombs had "no place at all in the film's thematic development", now being an "orbiting red herring" which would "merely have raised irrelevant questions to suggest this as a reality of the twenty-first century".[52]

In the Canadian TV documentary 2001 and Beyond, Clarke stated that not only was the military purpose of the satellites "not spelled out in the film, there is no need for it to be", repeating later in this documentary that "Stanley didn't want to have anything to do with bombs after Dr. Strangelove".[53]

In a New York Times interview in 1968, Kubrick merely referred to the satellites as "spacecraft", as does the interviewer, but he observed that the match-cut from bone to spacecraft shows they evolved from "bone-as-weapon", stating "It's simply an observable fact that all of man's technology grew out of his discovery of the weapon-tool".[54]

Nothing in the film calls attention to the purpose of the satellites. James John Griffith, in a footnote in his book Adaptations As Imitations: Films from Novels, wrote "I would wonder, for instance, how several critics, commenting on the match-cut that links humanity's prehistory and future, can identify—without reference to Clarke's novel—the satellite as a nuclear weapon".[55]

Arthur C. Clarke, in the TV documentary 2001: The Making of a Myth, described the bone-to-satellite sequence in the film, saying "The bone goes up and turns into what is supposed to be an orbiting space bomb, a weapon in space. Well, that isn't made clear, we just assume it's some kind of space vehicle in a three-million-year jump cut".[56][57] Former NASA research assistant Steven Pietrobon[58] wrote "The orbital craft seen as we make the leap from the Dawn of Man to contemporary times are supposed to be weapons platforms carrying nuclear devices, though the movie does not make this clear."[59]

The vast majority of film critics, including noted Kubrick authority Michel Ciment,[60] interpreted the satellites as generic spacecraft (possibly Moon bound).[61]

The perception that the satellites are nuclear weapons persists in the minds of some viewers (and some space scientists). Due to their appearance there are statements by members of the production staff who still refer to them as weapons. Walker, in his book Stanley Kubrick, Director, noted that although the bombs no longer fit in with Kubrick's revised thematic concerns, "nevertheless from the national markings still visible on the first and second space vehicles we see, we can surmise that they are the Russian and American bombs."[62]

Similarly, Walker in a later essay[63] stated that two of the spacecraft seen circling Earth were meant to be nuclear weapons, after asserting that early scenes of the film "imply" nuclear stalemate. Pietrobon, who was a consultant on 2001 to the Web site Starship Modeler regarding the film's props, observes small details on the satellites such as Air Force insignia and "cannons".[64]

In the film, US Air Force insignia, and flag insignia of China and Germany (including what appears to be an Iron Cross) can be seen on three of the satellites,[65] which correspond to three of the bombs stated countries of origin in a widely circulated early draft of the script.[66]

Production staff who continue to refer to "bombs" (in addition to Clarke) include production designer Harry Lange (previously a space industry illustrator), who has since the film's release shown his original production sketches for all of the spacecraft to Simon Atkinson, who refers to seeing "the orbiting bombs".[67] Fred Ordway, the film's science consultant, sent a memo to Kubrick after the film's release listing suggested changes to the film, mostly complaining about missing narration and shortened scenes. One entry reads: "Without warning, we cut to the orbiting bombs. And to a short, introductory narration, missing in the present version".[68] Multiple production staff aided in the writing of Jerome Agel's 1970 book on the making of the film, in which captions describe the objects as "orbiting satellites carrying nuclear weapons"[69] Actor Gary Lockwood (astronaut Frank Poole) in the audio DVD commentary[70] says the first satellite is an armed weapon, making the famous match-cut from bone to satellite a "weapon-to-weapon cut". Several recent reviews of the film mostly of the DVD release refer to armed satellites,[71] possibly influenced by Gary Lockwood's audio commentary.

A few published works by scientists on the subject of space exploration or space weapons tangentially discuss 2001: A Space Odyssey and assume at least some of the orbiting satellites are space weapons.[72][73] Indeed, details worked out with input from space industry experts, such as the structure on the first satellite that Pietrobon refers to as a "conning tower", match the original concept sketch drawn for the nuclear bomb platform.[59][74] Modelers label them in diverse ways. On the one hand, the 2001 exhibit (given in that year) at the Tech Museum in San Jose and now online (for a subscription) referred merely to "satellites",[75] while a special modelling exhibition at the exhibition hall at Porte de Versailles in Paris also held in 2001 (called 2001 l'odyssée des maquettes (2001: A Modeler's Odyssey)) overtly described their reconstructions of the first satellite as the "US Orbiting Weapons Platform".[76] Some, but not all, space model manufacturers or amateur model builders refer to these entities as bombs.[77]

The perception that the satellites are bombs persists in the mind of some but by no means all commentators on the film. This may affect one's reading of the film as a whole. Noted Kubrick authority Michel Ciment, in discussing Kubrick's attitude toward human aggression and instinct, observes "The bone cast into the air by the ape (now become a man) is transformed at the other extreme of civilization, by one of those abrupt ellipses characteristic of the director, into a spacecraft on its way to the moon."[78] In contrast to Ciment's reading of a cut to a serene "other extreme of civilization", science fiction novelist Robert Sawyer, speaking in the Canadian documentary 2001 and Beyond, sees it as a cut from a bone to a nuclear weapons platform, explaining that "what we see is not how far we've leaped ahead, what we see is that today, '2001', and four million years ago on the African veldt, it's exactly the same—the power of mankind is the power of its weapons. It's a continuation, not a discontinuity in that jump."[53]

Kubrick, notoriously reluctant to provide any explanation of his work, never publicly stated the intended functions of the orbiting satellites, preferring instead to let the viewer surmise what their purpose might be.

Inhalant

From Wikipedia, the free encyclopedia https://en.wikipedia.org/w...