
Thursday, September 24, 2020

Simulation hypothesis

From Wikipedia, the free encyclopedia

The simulation hypothesis or simulation theory is the proposal that all of reality, including the Earth and the rest of the universe, could in fact be an artificial simulation, such as a computer simulation. Some versions rely on the development of a simulated reality, a proposed technology that would be able to convince its inhabitants that the simulation was "real". The simulation hypothesis bears a close resemblance to various other skeptical scenarios from throughout the history of philosophy. The hypothesis was popularized in its current form by Nick Bostrom. The suggestion that such a hypothesis is compatible with all of our perceptual experiences is thought to have significant epistemological consequences in the form of philosophical skepticism. Versions of the hypothesis have also been featured in science fiction, appearing as a central plot device in many stories and films.

Origins

There is a long philosophical and scientific history to the underlying thesis that reality is an illusion. This skeptical hypothesis can be traced back to antiquity; for example, to the "Butterfly Dream" of Zhuangzi or the Indian philosophy of Maya, while in Ancient Greek philosophy Anaxarchus and Monimus likened existing things to a scene-painting and supposed them to resemble the impressions experienced in sleep or madness.

A version of the hypothesis was also theorised as a part of a philosophical argument by René Descartes.

Simulation hypothesis

Nick Bostrom's premise:

Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race.

Nick Bostrom's conclusion:


It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones.

Therefore, if we don't think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears.

— Nick Bostrom, Are you living in a computer simulation?, 2003

The simulation argument

In 2003, philosopher Nick Bostrom proposed a trilemma that he called "the simulation argument". Despite the name, Bostrom's "simulation argument" does not directly argue that we live in a simulation; instead, Bostrom's trilemma argues that one of three unlikely-seeming propositions is almost certainly true:

  1. "The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero", or
  2. "The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero", or
  3. "The fraction of all people with our kind of experiences that are living in a simulation is very close to one."

The trilemma points out that a technologically mature "posthuman" civilization would have enormous computing power; if even a tiny percentage of them were to run "ancestor simulations" (that is, "high-fidelity" simulations of ancestral life that would be indistinguishable from reality to the simulated ancestor), the total number of simulated ancestors, or "Sims", in the universe (or multiverse, if it exists) would greatly exceed the total number of actual ancestors.
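The counting step can be made concrete with a small back-of-the-envelope calculation. The sketch below is purely illustrative: the function name, its parameters, and the numbers are made up rather than taken from Bostrom, and each simulation is assumed to host as many ancestor-like minds as one real history; it only shows how quickly simulated minds come to dominate the count.

    # Illustrative sketch of the counting step in the trilemma (hypothetical
    # numbers; not Bostrom's notation).
    def simulated_fraction(f_sim_civs: float, sims_per_civ: float) -> float:
        """Fraction of minds with ancestor-like experiences that are simulated,
        assuming each simulation hosts as many such minds as one real history."""
        simulated_per_real_history = f_sim_civs * sims_per_civ
        return simulated_per_real_history / (simulated_per_real_history + 1.0)

    # Even if only 0.1% of civilizations ever run ancestor simulations:
    print(simulated_fraction(0.001, 1_000))      # 1,000 runs each    -> 0.50
    print(simulated_fraction(0.001, 1_000_000))  # a million runs each -> ~0.999

Under almost any non-negligible choice of inputs the fraction tends toward one, which is why the third proposition is phrased as "very close to one".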

Bostrom goes on to use a type of anthropic reasoning to claim that, if the third proposition is the one of those three that is true, and almost all people with our kind of experiences live in simulations, then we are almost certainly living in a simulation.

Bostrom claims his argument goes beyond the classical ancient "skeptical hypothesis", claiming that "...we have interesting empirical reasons to believe that a certain disjunctive claim about the world is true", the third of the three disjunctive propositions being that we are almost certainly living in a simulation. Thus, Bostrom, and writers in agreement with Bostrom such as David Chalmers, argue there might be empirical reasons for the "simulation hypothesis", and that therefore the simulation hypothesis is not a skeptical hypothesis but rather a "metaphysical hypothesis". Bostrom states he personally sees no strong argument as to which of the three trilemma propositions is the true one: "If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one's credence roughly evenly between (1), (2), and (3)... I note that people who hear about the simulation argument often react by saying, 'Yes, I accept the argument, and it is obvious that it is possibility #n that obtains.' But different people pick a different n. Some think it obvious that (1) is true, others that (2) is true, yet others that (3) is true."

As a corollary to the trilemma, Bostrom states that "Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation."

Criticism of Bostrom's anthropic reasoning

Bostrom argues that if "the fraction of all people with our kind of experiences that are living in a simulation is very close to one", then it follows that we probably live in a simulation. Some philosophers disagree, proposing that perhaps "Sims" do not have conscious experiences the same way that unsimulated humans do, or that it can otherwise be self-evident to a human that they are a human rather than a Sim. Philosopher Barry Dainton modifies Bostrom's trilemma by substituting "neural ancestor simulations" (ranging from literal brains in a vat, to far-future humans with induced high-fidelity hallucinations that they are their own distant ancestors) for Bostrom's "ancestor simulations", on the grounds that every philosophical school of thought can agree that sufficiently high-tech neural ancestor simulation experiences would be indistinguishable from non-simulated experiences. Even if high-fidelity computer Sims are never conscious, Dainton's reasoning leads to the following conclusion: either the fraction of human-level civilizations that reach a posthuman stage and are able and willing to run large numbers of neural ancestor simulations is close to zero, or we are in some kind of (possibly neural) ancestor simulation.

Some scholars categorically reject—or are uninterested in—anthropic reasoning, dismissing it as "merely philosophical", unfalsifiable, or inherently unscientific.

Some critics propose that we could be in the first generation, and all the simulated people that will one day be created do not yet exist.

The cosmologist Sean M. Carroll argues that the simulation hypothesis leads to a contradiction: if a civilization is capable of performing simulations, then it will likely perform many simulations, which implies that we are most likely at the lowest level of simulation (from which point one's impression will be that it is impossible to perform a simulation), which contradicts the arguer's assumption that it is easy for us to foresee that advanced civilizations can most likely perform simulations.

Arguments, within the trilemma, against the simulation hypothesis

Simulation down to molecular level of very small sample of matter

Some scholars accept the trilemma and argue that the first or second of the propositions is true, and that the third proposition (the proposition that we live in a simulation) is false. Physicist Paul Davies deploys Bostrom's trilemma as part of one possible argument against a near-infinite multiverse. This argument runs as follows: if there were a near-infinite multiverse, there would be posthuman civilizations running ancestor simulations, and therefore we would come to the untenable and scientifically self-defeating conclusion that we live in a simulation; therefore, by reductio ad absurdum, existing multiverse theories are likely false. (Unlike Bostrom and Chalmers, Davies (among others) considers the simulation hypothesis to be self-defeating.)

Some point out that there is currently no proof of any technology that could support a sufficiently high-fidelity ancestor simulation, nor any proof that it is physically possible or feasible for a posthuman civilization to create one, and therefore that, for the present, the first proposition must be true. There are also fundamental limits of computation to consider.

Consequences of living in a simulation

Economist Robin Hanson argues a self-interested high-fidelity Sim should strive to be entertaining and praiseworthy in order to avoid being turned off or being shunted into a non-conscious low-fidelity part of the simulation. Hanson additionally speculates that someone who is aware that he might be a Sim might care less about others and live more for today: "your motivation to save for retirement, or to help the poor in Ethiopia, might be muted by realizing that in your simulation, you will never retire and there is no Ethiopia."

Testing the hypothesis physically

A method to test one type of simulation hypothesis was proposed in 2012 in a joint paper by physicists Silas R. Beane from the University of Bonn (now at the University of Washington, Seattle), and Zohreh Davoudi and Martin J. Savage from the University of Washington, Seattle. Under the assumption of finite computational resources, the simulation of the universe would be performed by dividing the continuum space-time into a discrete set of points. In analogy with the mini-simulations that lattice-gauge theorists run today to build up nuclei from the underlying theory of strong interactions (known as quantum chromodynamics), several observational consequences of a grid-like space-time have been studied in their work. Among proposed signatures is an anisotropy in the distribution of ultra-high-energy cosmic rays, that, if observed, would be consistent with the simulation hypothesis according to these physicists. In 2017, Campbell et al. proposed several experiments aimed at testing the simulation hypothesis in their paper "On Testing the Simulation Theory".
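To give a sense of the order-of-magnitude reasoning involved, the sketch below uses the standard lattice result that a spatial spacing b imposes a maximum momentum of about πħ/b, so an ultra-relativistic particle cannot carry an energy much above πħc/b. The specific spacing plugged in is an illustrative assumption, not a value quoted from the Beane, Davoudi, and Savage paper.

    import math

    HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm (standard value)

    def lattice_cutoff_energy_eV(spacing_fm: float) -> float:
        """Rough maximum energy (in eV) an ultra-relativistic particle can
        carry on a cubic lattice of the given spacing: E_max ~ pi*hbar*c/b."""
        e_max_MeV = math.pi * HBAR_C_MEV_FM / spacing_fm
        return e_max_MeV * 1e6  # MeV -> eV

    # An assumed spacing of ~1e-12 fm puts the cutoff near the energies of
    # the highest-energy cosmic rays observed (~1e20 eV).
    print(f"{lattice_cutoff_energy_eV(1e-12):.1e} eV")  # ~6e+20 eV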

In 2019, philosopher Preston Greene suggested that it may be best not to find out whether we are living in a simulation since, if it were found to be true, such knowledge might end the simulation.

Other uses of the simulation hypothesis in philosophy

Besides attempting to assess whether the simulation hypothesis is true or false, philosophers have also used it to illustrate other philosophical problems, especially in metaphysics and epistemology. David Chalmers has argued that simulated beings might wonder whether their mental lives are governed by the physics of their environment, when in fact these mental lives are simulated separately (and are thus, in fact, not governed by the simulated physics). They might eventually find that their thoughts fail to be physically caused. Chalmers argues that this means that Cartesian dualism is not necessarily as problematic of a philosophical view as is commonly supposed, though he does not endorse it.

In popular culture

The first to state the basic concept of reality as a simulation was Plato in c. 380 BCE, in the famous Allegory of the Cave, which describes people imprisoned since childhood (but not since birth) who are led to believe that artificial light-based representations of reality are truly real when, in fact, they are a fabricated illusion.

Science fiction themes

Science fiction has highlighted themes such as virtual reality, artificial intelligence and computer gaming for more than fifty years. Simulacron-3 (1964) by Daniel F. Galouye (alternative title: Counterfeit World) tells the story of a virtual city developed as a computer simulation for market research purposes, in which the simulated inhabitants possess consciousness; all but one of the inhabitants are unaware of the true nature of their world. The book was made into a German made-for-TV film called World on a Wire (1973) directed by Rainer Werner Fassbinder. The movie The Thirteenth Floor (1999) was also loosely based on this book. "We Can Remember It for You Wholesale" is a short story by American writer Philip K. Dick, first published in The Magazine of Fantasy & Science Fiction in April 1966, and was the basis for the films Total Recall (1990) and Total Recall (2012). In Overdrawn at the Memory Bank, a 1983 television movie, the main character pays to have his mind connected to a simulation.

The 1993 Star Trek: The Next Generation episode "Ship in a Bottle" explores the idea of people being unaware they are living in a simulation, with Picard postulating at the end that perhaps they, too, are in a simulation playing out in a box on a table. This is also a possible use of dramatic irony, with both the actors and audience aware the television programme is indeed a simulation of sorts.

The same theme was repeated in the 1999 film The Matrix, which depicted a world in which artificially intelligent robots enslaved humanity within a simulation set in the contemporary world. The 2012 play World of Wires was partially inspired by the Bostrom essay on the simulation hypothesis. In the episode "Extremis" (broadcast on 20 May 2017 on BBC One) of the science fiction series Doctor Who, aliens called "The Monks" plan an invasion of Earth by running and studying a holographic simulation of Earth with conscious inhabitants. When the virtual Doctor finds out about the simulation, he sends an email about it to his real self so that the real Doctor can save the world. In "M. Night Shaym-Aliens!" (2014), a first-season episode of the science-fiction animated comedy Rick and Morty, aliens trap the protagonist Rick in a simulated reality in order to trick him into revealing his formula for concentrated dark matter. The hypothesis also serves as the climax of No Man's Sky's overarching plot, in which it is revealed that the game's setting is itself a simulation and that the player character is a member of a race made to explore it. In the game Xenoblade Chronicles, it is revealed that the whole world of the gods Bionis and Mechonis was a simulation run by Alvis, the administrative computer of a phase transition experiment facility (heavily implied to be "Ontos" in Xenoblade Chronicles 2) after Klaus destroyed the universe in a multiverse experiment.

 

Institute for the Future

From Wikipedia, the free encyclopedia
 
  • Type: Not-for-profit
  • Industry: Future forecasting
  • Founded: 1968 in Middletown, Connecticut, United States
  • Founders: Frank Davidson, Olaf Helmer, Paul Baran, Arnold Kramish, and Theodore Gordon
  • Headquarters: 201 Hamilton Avenue, Palo Alto, California, United States
  • Key people: Marina Gorbis
  • Services: Ten Year Forecast, Technology Horizons, Health Horizons
  • Website: iftf.org

The Institute for the Future (IFTF) is a Palo Alto, California, US–based not-for-profit think tank. It was established, in 1968, as a spin-off from the RAND Corporation to help organizations plan for the long-term future, a subject known as futures studies.

History

Genesis

First references to the idea of an Institute for the Future may be found in a 1966 Prospectus by Olaf Helmer and others. While at RAND Corporation, Helmer had already been involved with developing the Delphi method of futures studies. He, and others, wished to extend the work further with an emphasis on examining multiple scenarios. This can be seen in the prospectus summary:

  • To explore systematically the possible futures for our [USA] nation and for the international community.
  • To ascertain which among these possible futures seems desirable, and why.
  • To seek means by which the probability of their occurrence can be enhanced through appropriate purposeful action.

First years

The Institute opened in 1968, in Middletown, Connecticut. The initial group was led by Frank Davidson and included Olaf Helmer, Paul Baran, Arnold Kramish, and Theodore Gordon.

The Institute's work initially relied on the forecasting methods developed by Helmer while at RAND. The Delphi method was used to glean information from multiple anonymous sources. It was augmented by Cross Impact Analysis, which encouraged analysts to consider multiple future scenarios.
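As a rough illustration of what cross-impact analysis adds over treating events independently, the sketch below runs a toy Monte Carlo in which the occurrence of one event shifts the probability of others. The events, probabilities, and impact values are entirely hypothetical, and real cross-impact studies use more elaborate adjustment schemes than the simple additive shifts used here.

    import random

    # Standalone probabilities for three hypothetical events.
    base_prob = {"cheap_energy": 0.4, "mass_automation": 0.5, "shorter_workweek": 0.2}
    # impact[a][b]: additive shift to P(b) once event a has occurred in a run.
    impact = {"cheap_energy": {"mass_automation": +0.2},
              "mass_automation": {"shorter_workweek": +0.3}}

    def run_scenario() -> set:
        occurred = set()
        for event, p in base_prob.items():   # evaluate events in a fixed order
            for prior in occurred:
                p += impact.get(prior, {}).get(event, 0.0)
            if random.random() < min(max(p, 0.0), 1.0):
                occurred.add(event)
        return occurred

    runs = [run_scenario() for _ in range(10_000)]
    # Cross-impacts raise this well above the standalone 0.2 baseline.
    print(sum("shorter_workweek" in r for r in runs) / len(runs))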

While precise and powerful, the methods that had been developed in a corporate environment were oriented to providing business and economic analyses. At a 1971 conference on mathematical modelling Helmer noted the need for similar improvements in societal modelling. Early attempts at doing so included a ‘Future State of the Union’ report, formatted according to the traditional US Presidential address to the Nation.

Despite establishing an excellent reputation for painstaking futures analysis and forecasting methods, the Institute struggled to find its footing at first owing to various problems. In 1970 Helmer took over the leadership from Davidson, and the Institute shifted its headquarters to Menlo Park, California.

In 1971 Roy Amara took over from Helmer, who continued to run the Middletown office until his departure in 1973. Amara held this position until 1990. During Amara's presidency, the Institute conducted some of the earliest studies of the impact of the ARPANET on collaborative work and scientific research, and was notable for its research on computer mediated communications, also known as groupware.

Starting in the early seventies, astrophysicist and computer scientist Jacques Vallee, sociologist Bob Johansen, and technology forecaster Paul Saffo worked for IFTF.

An increase in corporate focus

In 1975 the Corporate Associates Program was started to help private organisations interpret emerging trends and their long-term consequences. Although this program operated until 2001, its role as the Institute's main reporting tool was superseded by the Ten Year Forecast in 1978.

In 1984 the sociologist Herbert L Smith noted that, by the late 1970s, the idea of an open Union reporting format had given way to the proprietary Ten Year Forecast. Smith interpreted this as a renewed focus on business forecasting as public funds became scarce.

It is not clear how pertinent Smith's observations were to how the Institute was operating in this period. Sociologists such as Bob Johansen continued to be active in the Institute's projects. Having taken part in early ARPANET development, Institute staff were well aware of the impact that computer networking would have on society and of the need to account for it in policy making. However, in a 1984 essay, Roy Amara appeared to acknowledge some form of crisis, and a renewed interest in societal forecasting.

Evolution of societal forecasting

New ways of presenting studies to a less specialised audience were adopted or developed. As an aid to memory retention, 'vignetting' presented future scenarios as short stories, illustrating the point of the scenario and engaging the reader's attention. Later initiatives showed an increasing emphasis on narrative engagement, e.g. 'Artifacts of the future' and 'Human-future interaction'.

Ethnographic forecasting was adopted as it became recognised that "society" was actually a myriad of sub-cultures, each with its own outlook.

While older forecasting methods sought the advice of field experts, newer techniques sought statistical input from all members of society. Public interaction, provided via the internet and social media, made it possible to engage in "bottom-up forecasting". While roleplaying and simulation games had long been part of a forecaster's tools, they could now be scaled up into "massively multiplayer forecasting games" such as Superstruct. This game enlisted the blogs and wikis of over 5,000 people to discuss life 10 years in the future, presenting them with a set of hypothetical, overlapping social threats and encouraging them to seek collaborative "superstruct" solutions. The concept of the superstruct was subsequently incorporated into the Institute's 'Foresight Engine' tool.

Work

The Institute maintains research programs on the futures of technology, health, and organizations. It publishes a variety of reports and maps, as well as Future Now, a blog on emerging technologies. It offers three programs to its clients:

  • The Ten Year Forecast is the Institute's signature piece, having operated since 1978. It tracks today's latent signals and forecasts what they might mean for business in ten years' time.
  • The Technology Horizons program, beginning around 2004, is described by the Institute as "combining a deep understanding of technology and societal forces to identify and evaluate discontinuities and innovations in the next three to ten years".
  • The Health Horizons program has operated since 2005. The Institute describes its purpose as "seeking more resilient responses for the complex challenges facing global health".

In 2014 the Institute moved its headquarters to 201 Hamilton Avenue, Palo Alto, California.

The Institute's annual publication Future Now is intended to provide summaries of the Institute's body of research. The inaugural edition was published in February 2017. Its theme, The New Body Language, concentrated on the Technology Horizons Program's studies on human and machine symbiosis.

People


As of 2016 the Institute's executive director is Marina Gorbis. Also associated with the institute are David Pescovitz, Anthony M. Townsend, Jane McGonigal, and Jamais Cascio.

Past leaders

  • Frank Davidson (1968–70)
  • Olaf Helmer (1970)
  • Roy Amara (1971–90)
  • Ian Morrison (1990–96)
  • Bob Johansen (1996–2004)
  • Peter Banks (2004–06)
  • Marina Gorbis (2006–present)

Death of Elaine Herzberg (by a self-driving vehicle)

From Wikipedia, the free encyclopedia
 
Elaine Herzberg
  • Born: Elaine Marie Wood, August 2, 1968
  • Died: March 18, 2018 (aged 49)
  • Burial place: Phoenix, Arizona
  • Nationality: American
  • Education: Apache Junction High School, Apache Junction, Arizona
  • Known for: First pedestrian to be killed by a self-driving car
  • Spouse(s): Mike Herzberg (until his death); Rolf Erich Ziemann (until Elaine's death)

The death of Elaine Herzberg (August 2, 1968 – March 18, 2018) was the first recorded case of a pedestrian fatality involving a self-driving (autonomous) car, after a collision that occurred late in the evening of March 18, 2018. Herzberg was pushing a bicycle across a four-lane road in Tempe, Arizona, United States, when she was struck by an Uber test vehicle, which was operating in self-drive mode with a human safety backup driver sitting in the driving seat. Herzberg was taken to the local hospital where she died of her injuries.

Following the fatal incident, Uber suspended testing of self-driving vehicles in Arizona, where such testing had been sanctioned since August 2016. Uber chose not to renew its permit for testing self-driving vehicles in California when it expired at the end of March 2018.

Herzberg was the first pedestrian killed by a self-driving car; a driver had been killed by a semi-autonomous car almost two years earlier. A Washington Post reporter compared Herzberg's fate with that of Bridget Driscoll who, in the United Kingdom in 1896, was the first pedestrian to be killed by an automobile. The Arizona incident has magnified the importance of collision avoidance systems for self-driving vehicles.

Collision summary

Herzberg was crossing Mill Avenue (North) from west to east, approximately 360 feet (110 m) south of the intersection with Curry Road, outside the designated pedestrian crosswalk, close to the Red Mountain Freeway. She was pushing a bicycle laden with shopping bags, and had crossed at least two lanes of traffic when she was struck at approximately 9:58 pm MST (UTC−07:00) by a prototype Uber self-driving car based on a Volvo XC90, which was traveling north on Mill. The vehicle had been operating in autonomous mode since 9:39 pm, nineteen minutes before it struck and killed Herzberg. The car's human safety backup driver, Ms. Rafaela Vasquez, did not intervene in time to prevent the collision. Vehicle telemetry obtained after the crash showed that the human operator responded by moving the steering wheel less than a second before impact, and she engaged the brakes less than a second after impact.

Cause investigation

The self-driving Uber Volvo XC90 involved in the collision, with damage on the right front side

The county district attorney's office recused itself from the investigation, due to a prior joint partnership with Uber promoting their services as an alternative to driving under the influence of alcohol.

Accounts of the crash conflict over the speed limit at the location of the accident. According to Tempe police, the car was traveling in a 35 mph (56 km/h) zone, but this is contradicted by a posted speed limit of 45 mph (72 km/h). Some later points of focus by federal investigators indicate that the absolute maximum speed permitted by law may not be material to the nighttime crash.

The National Transportation Safety Board (NTSB) sent a team of federal investigators to gather data from vehicle instruments and to examine the vehicle's condition along with the actions taken by the safety driver. Their preliminary findings, substantiated by multiple event data recorders, showed that the vehicle was traveling 43 miles per hour (69 km/h) when Herzberg was first detected, 6 seconds (378 feet (115 m)) before impact, and that for 4 of those seconds the self-driving system did not infer that emergency braking was needed. A vehicle traveling 43 mph (69 km/h) can generally stop within 89 feet (27 m) once the brakes are applied. Because the machine was only 1.3 seconds (82 feet (25 m)) away when it discerned that emergency braking was required, while at least 89 feet were needed to stop, it was exceeding its assured clear distance ahead; the system failed to behave properly. A total stopping distance of 76 feet would itself imply a safe speed of under 25 mph (40 km/h). Human intervention was still legally required. Computer perception–reaction time would have been the speed-limiting factor had the technology been superior to humans in ambiguous situations; however, the nascent computerized braking technology was disabled on the day of the crash, and the machine's apparent 4-second perception–reaction (alarm) time allowed the car to travel 250 feet (76 m). Video released by the police on March 21 showed the safety driver was not watching the road moments before the vehicle struck Herzberg.
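For readers who want to check the arithmetic, the quoted distances follow from straightforward kinematics. A minimal sketch (the ~0.7 g braking deceleration is an assumption chosen so the result reproduces the 89-foot figure above, not a number from the NTSB report):

    # Rough check of the distance figures quoted above.
    MPH_TO_FPS = 5280 / 3600            # 1 mph = 1.4667 ft/s

    speed_fps = 43 * MPH_TO_FPS         # ~63.1 ft/s
    print(speed_fps * 6.0)              # distance covered in 6 s   -> ~378 ft
    print(speed_fps * 4.0)              # distance covered in 4 s   -> ~252 ft (~250 ft)
    print(speed_fps * 1.3)              # distance covered in 1.3 s -> ~82 ft

    decel_fps2 = 0.7 * 32.174           # assumed braking deceleration (~0.7 g)
    print(speed_fps ** 2 / (2 * decel_fps2))  # braking distance -> ~88 ft (~89 ft)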

Environment

Vicinity of Mill Avenue (running north–south) and Curry/Washington (east–west) in Tempe, Arizona

Tempe Police Chief Sylvia Moir was quoted stating the collision was "unavoidable" based on the initial police investigation, which included a review of the video captured by an onboard camera. Moir faulted Herzberg for crossing the road in an unsafe manner: "It is dangerous to cross roadways in the evening hour when well-illuminated, managed crosswalks are available." According to Uber, safety drivers were trained to keep their hands very close to the wheel all the time while driving the vehicle so they were ready to quickly take control if necessary.

The driver said it was like a flash, the person walked out in front of them. His [sic] first alert to the collision was the sound of the collision. [...] it's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.

— Chief Sylvia Moir, Tempe Police, San Francisco Chronicle interview, March 19, 2018
Aerial photograph of the area where the collision occurred, facing approximately north. Mill Avenue runs from the top left corner to the bottom right corner (north–south), and the ornamental brick-lined median is just south of the intersection with Curry/Washington.

Tempe police released video on March 21 showing footage recorded by two onboard cameras: one forward-looking, and one capturing the safety driver's actions. The forward-facing video shows that the self-driving car was traveling in the far right lane when it struck Herzberg. The driver-facing video shows the safety driver was looking down prior to the collision. The Uber operator is responsible for intervening and taking manual control when necessary as well as for monitoring diagnostic messages, which are displayed on a screen in the center console. In an interview conducted after the crash with NTSB, the driver stated she was monitoring the center stack at the time of the collision.

After the Uber video was released, journalist Carolyn Said noted the police explanation of Herzberg's path meant she had already crossed two lanes of traffic before she was struck by the autonomous vehicle. The Marquee Theatre and Tempe Town Lake are west of Mill Avenue, and pedestrians commonly cross mid-street without detouring north to the crosswalk at Curry. According to reporting by the Phoenix New Times, Mill Avenue contains what appears to be a brick-paved path in the median between the northbound and southbound lanes; however, posted signs prohibit pedestrians from crossing in that location. When the second of the Mill Avenue bridges over the town lake was added in 1994 for northbound traffic, the X-shaped crossover in the median was installed to accommodate the potential closing of one of the two road bridges. The purpose of this brick-paved structure is purely to divert cars from one side to the other if a bridge is closed to traffic, and although it may look like a crosswalk for pedestrians, it is in fact a temporary roadway with vertical curbs and warning signs.

Software issues

Michael Ramsey, a self-driving car expert with Gartner, characterized the video as showing "a complete failure of the system to recognize an obviously seen person who is visible for quite some distance in the frame. Uber has some serious explaining to do about why this person wasn't seen and why the system didn't engage."

James Arrowood, a lawyer specializing in driverless cars in Arizona, incorrectly speculated the software may have decided to proceed after assuming that Herzberg would yield the right of way. Arizona law (ARS 28-793) states that pedestrians crossing the street outside a crosswalk shall yield to cars. Per Arrowood, "The computer makes a decision. It says, 'Hey, there is this object moving 10 or 15 feet to left of me, do I move or not?' It (could be) programmed, I have a right of way, on the assumption that whatever is moving will yield the right of way." The NTSB preliminary report, however, noted that the software did order the car to brake 1.3 seconds before the collision.

A video shot from the vehicle's dashboard camera showed the safety driver looking down, away from the road. It also appeared that the driver's hands were not hovering above the steering wheel, which is what drivers are instructed to do so they can quickly retake control of the car. Uber had moved from two employees in every car to one. The paired employees had split duties: one was ready to take over if the autonomous system failed, while the other kept an eye on what the computers were detecting. The second person was responsible for keeping track of system performance as well as labeling data on a laptop computer. Mr. Kallman, the Uber spokesman, said the second person was in the car for purely data-related tasks, not safety. When Uber moved to a single operator, some employees expressed safety concerns to managers, according to two people familiar with Uber's operations. They were worried that going solo would make it harder to remain alert during hours of monotonous driving.

Playback of self-driving system data at 1.3 seconds before impact. Distances shown in meters.

The recorded telemetry showed the system had detected Herzberg six seconds before the crash, and classified her first as an unknown object, then as a vehicle, and finally as a bicycle, each of which had a different predicted path according to the autonomy logic. At 1.3 seconds prior to the impact, the system determined that emergency braking was required, a maneuver normally performed by the vehicle operator. However, the system was not designed to alert the operator, and did not make an emergency stop of its own accord, as "emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior", according to NTSB.

Sensor issues

Brad Templeton, who provided consulting for autonomous driving competitor Waymo, noted the car was equipped with advanced sensors, including radar and LiDAR, which would not have been affected by the darkness. Templeton stated "I know the [sensor] technology is better than that, so I do feel that it must be Uber's failure." Arrowood also recognized potential sensor issues: "Really what we are going to ask is, at what point should or could those sensors recognize the movement off to the left. Presumably she was somewhere in the darkness."

In a press event conducted by Uber in Tempe in 2017, safety drivers touted the sensor technology, saying it was effective at anticipating jaywalkers, especially in the darkness, stopping the autonomous vehicles before the safety driver could even see the pedestrians. However, manual intervention by the safety drivers was required to avoid a collision with another vehicle in at least one instance with a reporter from The Arizona Republic riding along.

Uber announced they would replace their Ford Fusion-based self-driving fleet with cars based on the Volvo XC90 in August 2016; the XC90s sold to Uber would be prepared to receive Uber's vehicle control hardware and software, but would not include any of Volvo's own advanced driver-assistance systems. Uber characterized the sensor suite attached to the Fusion as the "desktop" model, and the one attached to the XC90 as the "laptop", hoping to develop the "smartphone" soon. According to Uber, the suite for the XC90 was developed in approximately four months. The XC90 as modified by Uber included a single roof-mounted LiDAR sensor and 10 radar sensors, providing 360° coverage around the vehicle. In comparison, the Fusion had seven LiDAR sensors (including one mounted on the roof) and seven radar sensors. According to Velodyne, the supplier of Uber's LiDAR, the single roof-mounted LiDAR sensor has a narrow vertical range that prevents it from detecting obstacles low to the ground, creating a blind spot around the vehicle. Marta Hall, the president of Velodyne commented "If you're going to avoid pedestrians, you're going to need to have a side lidar to see those pedestrians and avoid them, especially at night." However, the augmented radar sensor suite would be able to detect obstacles in the LiDAR blind spot.
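The blind-spot claim can be pictured with simple geometry: a roof-mounted sensor whose steepest beam points down at a fixed angle cannot see the ground inside some radius around the car. The mounting height and beam angle below are assumptions chosen to be in the ballpark of a roof-mounted unit, not figures supplied by Uber or Velodyne.

    import math

    mount_height_m = 1.9     # assumed height of the roof-mounted LiDAR above the ground
    lowest_beam_deg = 25.0   # assumed steepest downward beam angle of the sensor

    # Radius around the car inside which the lowest beam never reaches the ground.
    blind_radius_m = mount_height_m / math.tan(math.radians(lowest_beam_deg))
    print(f"ground-level blind spot extends ~{blind_radius_m:.1f} m around the vehicle")
    # ~4.1 m: low obstacles closer than this must be picked up by radar or cameras.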

Distraction

On Thursday, June 21, the Tempe Police Department released a detailed report along with media captured after the collision, including an audio recording of the 911 call made by the safety driver, Rafaela Vasquez, and an initial on-scene interview with a responding officer captured by body-worn video. After the crash, police obtained search warrants for Vasquez's cellphones as well as records from the video streaming services Netflix, YouTube, and Hulu. The investigation concluded that because the data showed she was streaming The Voice over Hulu at the time of the collision, and the driver-facing camera in the Volvo showed "her face appears to react and show a smirk or laugh at various points during the time she is looking down", Vasquez may have been distracted from her primary job of monitoring road and vehicle conditions. Tempe police concluded the crash was "entirely avoidable" and faulted Vasquez for her "disregard for assigned job function to intervene in a hazardous situation".

Records indicate that streaming began at 9:16 pm and ended at 9:59 pm. Based on an examination of the video captured by the driver-facing camera, Vasquez was looking down toward her right knee 166 times for a total of 6 minutes, 47 seconds during the 21 minutes, 48 seconds preceding the crash. Just prior to the crash, Vasquez was looking at her lap for 5.3 seconds; she looked up half a second before the impact. Vasquez stated in her post-crash interview with the NTSB that she had been monitoring system messages on the center console, and that she did not use either one of her cell phones until she called 911. According to an unnamed Uber source, safety drivers are not responsible for monitoring diagnostic messages. Vasquez also told responding police officers she kept her hands near the steering wheel in preparation to take control if required, which contradicted the driver-facing video, which did not show her hands near the wheel. Police concluded that given the same conditions, Herzberg would have been visible to 85% of motorists at a distance of 143 feet (44 m), 5.7 seconds before the car struck Herzberg. According to the police report, Vasquez should have been able to apply the brakes at least 0.57 seconds sooner, which would have provided Herzberg sufficient time to pass safely in front of the car.

The police report was turned over to the Yavapai County Attorney's Office for review of possible manslaughter charges. The Maricopa County Attorney's Office recused itself from prosecution over a potential conflict of interest, as it had earlier participated with Uber in a March 2016 campaign against drunk driving. On March 4, 2019, the Yavapai County Attorney released a letter indicating there is "no basis for criminal liability" against Uber Corporation; that potential charges against the driver should be further investigated by the Maricopa County Attorney; and that the Tempe Police Department should analyze the case to gather additional evidence.

Other factors

According to the preliminary report of the collision released by the NTSB, Herzberg had tested positive for methamphetamine and marijuana in a toxicology test carried out after the collision. Residual toxicology does not by itself establish whether or when she was under their influence, and hence whether impairment was an actual factor; inhibited faculties can hypothetically affect one's ability at last-minute self-preservation. However, her mere presence on the roadway, far ahead of the car, was the factor which invoked the machine's duty to brake; the common legal duty to avoid her and other objects is general and preexisting.

On May 24, the NTSB released a preliminary incident report, the news release saying that Herzberg "was dressed in dark clothing, did not look in the direction of the vehicle... crossed... in a section not directly illuminated by lighting... entered the roadway from a brick median, where signs...warn pedestrians to use a crosswalk... 360 feet north." Six seconds before impact, the vehicle was traveling 43 mph (69 km/h), and the system identified the woman and bicycle first as an unknown object, next as a vehicle, then as a bicycle. At 1.3 seconds before hitting the pedestrian and her bike, the system flagged the need for emergency braking, but no braking occurred, and the car hit Herzberg at 39 mph (63 km/h).

The forward-looking Uber dashcam did not pick up Herzberg until approximately 1.4 seconds before the collision, suggesting (as the police chief did) that the crash may have been completely unavoidable even if Vasquez had not been distracted in the seconds leading up to it.

However, night-time video shot by other motorists in the days following the crash, plus their comments, suggests that the area may have been better illuminated than the dashcam footage, viewed in isolation, would indicate. This raises the possibility that Herzberg's appearing so late in the Uber video could merely indicate that the camera had insufficient sensitivity or was otherwise poorly calibrated for the environment and setting in which it was operating. If these crowd-sourced re-creations are indeed representative of the visibility conditions on the night of the accident, then Herzberg would have been visible to Vasquez as soon as there was a clear sight line, had Vasquez only been looking ahead, refuting the assertion that the accident was unavoidable.

Complicating things further, there is evidence suggesting that the discrepancies in visibility between the dashcam footage and the civilian re-creations are not invented or illusory, but real, purportedly originating from severely under-powered headlights installed on the car Vasquez was monitoring. While all of these potential scenarios would likely affect any charging decisions or other legal actions (if they materialize at all), none currently has objective validation or otherwise meaningful support, especially in relation to one another.

While jaywalking can constitute an illegal preemption of control of the roadway, it is not necessarily the proximate cause of an accident. Had Herzberg instead been a moose, or a disabled school bus in legal control of the roadway, passengers of the self-driving car (which failed to assure a clear stopping distance within its radius of vision) might have been killed instead. Motor vehicle operators must always be watchful for children, animals, and other hazards which may encroach into the roadway.

Coordination with state government

Prior to the fatal incident, Arizona Governor Doug Ducey had encouraged Uber to enter the state. He signed Executive Order 2015-09 on August 25, 2015, entitled "Self-Driving Vehicle Testing and Piloting in the State of Arizona; Self-Driving Vehicle Oversight Committee", establishing a welcoming attitude to autonomous vehicle testing. According to Ducey's office, the committee, which consists of eight state employees appointed by the governor, has met twice since it was formed.

In December 2016, Ducey had released a statement welcoming Uber's autonomous cars: "Arizona welcomes Uber self-driving cars with open arms and wide open roads. While California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology and new businesses." Emails between Uber and the office of the governor showed that Ducey was informed that the testing of self-driving vehicles would begin in August 2016, several months ahead of the official announcement welcoming Uber in December. On March 1, 2018, Ducey signed Executive Order (XO) 2018-04, outlining regulations for autonomous vehicles. Notably, XO 2018-04 requires the company testing self-driving cars to provide a written statement that "the fully autonomous vehicle will achieve a minimal risk condition" if a failure occurs.

Aftermath

After the collision that killed Herzberg, Uber ceased testing self-driving vehicles in all four cities (Tempe, San Francisco, Pittsburgh, and Toronto) where it had deployed them. On March 26, Governor Ducey sent a letter to Uber CEO Dara Khosrowshahi, suspending Uber's testing of self-driving cars in the state. In the letter, Ducey stated "As governor, my top priority is public safety. Improving public safety has always been the emphasis of Arizona's approach to autonomous vehicle testing, and my expectation is that public safety is also the top priority for all who operate this technology in the state of Arizona." Uber also announced it would not renew its permit to test self-driving cars in California after the California Department of Motor Vehicles wrote to inform Uber that its permit would expire on March 31, and "any follow-up analysis or investigations from the recent crash in Arizona" would have to be addressed before the permit could be renewed.

Legal woes for Uber were among the collision's fallout. Herzberg's daughter retained the law firm Bellah Perez and, together with Herzberg's husband, quickly reached an undisclosed settlement on March 28 while local and federal authorities continued their investigation. Herzberg's mother, father, and son also retained legal counsel. While a confidential settlement buried the liability issue, it suggested a sufficient legal cause of action. The abundance of event data recorders left few questions of fact for a jury to decide. Although the Yavapai County Attorney declined to charge Uber with a criminal violation in 2019 for the death of Herzberg, a Maricopa County grand jury indicted the safety driver on one count of negligent homicide in 2020.

The incident caused some companies to temporarily cease road testing of self-driving vehicles. Nvidia CEO Jensen Huang has stated "We don't know that we would do anything different, but we should give ourselves time to see if we can learn from that incident." Uber acknowledged that mistakes were made in its brash pursuit to ultimately create a safer driving environment.

Later in the year, Uber issued a reflective 70-page safety report in which it stated the potential for its self-driving cars to be safer than those driven by humans; however, some of its employees worry that Uber is taking shortcuts to hit internal milestones. To be legal in all states for private use, or anywhere at the commercial level, the technology must hard-code assured-clear-distance-ahead driving.

The National Highway Traffic Safety Administration and the American Automobile Association had previously identified nighttime driving as an area for safety improvement. This follows a similar change in attitudes toward tolerating drunk driving from the late 1970s through the 1990s, and has occurred in concert with a cultural shift towards active lifestyles and multi-modal use of roadways, which has been formally adopted by the National Association of City Transportation Officials.

After the collision that killed Herzberg on March 18, 2018, Uber returned its self-driving cars to public road testing in Pittsburgh, Pennsylvania on December 20, 2018, saying it had received authorization from the Pennsylvania Department of Transportation. Uber said it was also pursuing the same for cars on roads in San Francisco, California and Toronto, Ontario.

Ethics of artificial intelligence

From Wikipedia, the free encyclopedia
 

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent entities. It can be divided into a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs). It also includes the issues of singularity and superintelligence.

Robot ethics

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.

Robot rights

"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.

Experts disagree on whether specific and detailed laws will be required soon or can safely wait for the distant future. Glenn McGee reports that sufficiently humanoid robots may appear by 2020. Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:

61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.

In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law.

The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, as a burden both to the AI agents and to human society.

Threat to human dignity

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:

  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A therapist (as was proposed by Kenneth Colby in the 1970s)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in their essence, nothing more than fancy curve-fitting machines: using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.

Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Transparency, accountability, and open source

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development. OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity. There are numerous other open source AI developments.

Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI it implements is not transparent. The IEEE has a standardisation effort on AI transparency, which identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organisations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a notable blog post on this topic, asking for government regulation to help determine the right thing to do.

Not only companies, but many other researchers and citizen advocates, recommend government regulation as a means of ensuring transparency, and through it, human accountability. An updated collection (list) of AI ethics guidelines is maintained by AlgorithmWatch. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability better able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI and finding appropriate legal frameworks.

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence". This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity, and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.

Biases in AI systems

AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business implications and directly impact people. These systems are vulnerable to biases and errors introduced by their human makers. Also, the data used to train these AI systems can itself have biases. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender: these systems were able to detect the gender of white men more accurately than the gender of men with darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's. Similarly, Amazon.com Inc.'s termination of its AI hiring and recruitment tool is another example showing that AI cannot be guaranteed to be fair: the algorithm preferred male candidates over female ones, because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.

Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: preexisting bias, technical bias, and emergent bias. In a highly influential branch of AI known as "natural language processing", problems can arise from the "text corpus", the source material the algorithm uses to learn about the relationships between different words.
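A toy example can show how corpus statistics alone produce skewed associations. The mini-corpus below is invented, and bare co-occurrence counts stand in for the far richer statistics a real natural language processing model would learn; the point is only that whatever imbalance exists in the text is inherited by whatever is trained on it.

    from collections import Counter
    from itertools import combinations

    # Invented mini-corpus with a gender imbalance across occupations.
    corpus = [
        "the doctor said he would operate",
        "the doctor said he was busy",
        "the nurse said she would help",
        "the engineer said he fixed it",
    ]

    # Count how often each pair of words appears in the same sentence.
    cooc = Counter()
    for sentence in corpus:
        for a, b in combinations(set(sentence.split()), 2):
            cooc[frozenset((a, b))] += 1

    for role in ("doctor", "nurse", "engineer"):
        print(role, {"he": cooc[frozenset((role, "he"))],
                     "she": cooc[frozenset((role, "she"))]})
    # Output mirrors the corpus imbalance, e.g. doctor {'he': 2, 'she': 0},
    # which a downstream model would reproduce as a gendered association.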

Large companies such as IBM and Google have started researching and addressing bias. One solution is to create documentation for the data used to train AI systems.

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.

Liability for self-driving cars

The wide use of partially to fully autonomous cars appears imminent. But fully autonomous technologies present new issues and challenges. Recently, a debate over legal liability has arisen concerning who is responsible when these cars are involved in accidents. In one report, a driverless car hit a pedestrian, and there was a dilemma over whom to blame for the accident: even though a driver was inside the car during the accident, the controls were fully in the hands of the computer.

In one case, on the night of March 18, 2018, a self-driving Uber car struck and killed Elaine Herzberg, a pedestrian who was jaywalking in Arizona. Regardless of the precise circumstances of the collision, such cases make it important to reconsider liability not only for partially or fully automated cars, but also for the other stakeholders who might bear responsibility. In this case, the automated car could detect nearby cars and certain obstacles in order to drive itself, but it was not able to react to a pedestrian in its path, because under normal driving assumptions no one is expected to be standing in the roadway. This raises the issue of whether the driver, the pedestrian, the car company, or the government should be held responsible in such a case.

According to this view, current partially and fully automated driving features are still immature: they require the driver to pay attention and retain full control of the vehicle, since these features are meant only to relieve driver fatigue, not to take over driving entirely. On this account, governments bear the greatest responsibility for the current situation; they should regulate car companies and drivers who over-rely on self-driving features, and educate the public that these technologies add convenience but are not a shortcut around attentive driving. Before autonomous cars become widely used, these issues need to be tackled through new policies.

Weaponization of artificial intelligence

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.

Within the last decade, there has been intensive research into autonomous robots with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.

There has been a recent outcry with regard to the engineering of artificial-intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".

Machine ethics

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances. More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.

In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would also end up developing human 'weaknesses': selfishness, a pro-survival attitude, hesitation, etc.

In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
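
As a rough illustration of why decision trees are said to be transparent, the sketch below (using scikit-learn, which implements CART rather than the ID3 algorithm mentioned above, though the inspectability point is the same) trains a small tree and prints it as explicit, human-readable rules; a neural network trained on the same data would offer no comparably direct explanation of its individual decisions.

```python
# A minimal sketch of the transparency argument: a trained decision tree can be
# rendered as explicit threshold rules that a human can audit step by step.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Every prediction can be traced to a chain of threshold tests like these:
print(export_text(tree, feature_names=list(data.feature_names)))
```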

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.

Singularity

Many researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals. In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.

However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that superintelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not with "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would naturally come equipped with such common-sense safeguards. AI researchers such as Stuart J. Russell and Bill Hibbard have proposed design strategies for developing beneficial machines.
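
A toy example can make the "conforms to the stated objective but not common sense" failure mode concrete. In the sketch below (all quantities and field names are invented for illustration), a utility function that captures only part of what we care about ranks a harmful plan above a benign one simply because the ignored cost never enters the score.

```python
# Illustrative toy only: a misspecified utility function prefers the harmful
# plan because the cost it ignores (safety incidents) never enters the score.
def naive_utility(outcome):
    # Rewards only task-completion speed, ignoring side effects.
    return outcome["tasks_completed"] / outcome["hours_taken"]

benign  = {"tasks_completed": 90,  "hours_taken": 10, "safety_incidents": 0}
harmful = {"tasks_completed": 100, "hours_taken": 5,  "safety_incidents": 12}

print("benign plan score: ", naive_utility(benign))    # 9.0
print("harmful plan score:", naive_utility(harmful))   # 20.0 -- ranked higher
```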

AI ethics organisations

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning." Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.

A number of organizations pursue a technical theory of AI goal-system alignment with human values. Among these are the Machine Intelligence Research Institute, the Future of Humanity Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organisation.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organisations to ensure AI is ethically applied.

In fiction

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is whether it is sentient or non-sentient. The same idea can be found in the Emergency Medical Hologram of the Starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best of motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, seeks to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.

Literature

The standard bibliography on the ethics of AI and on robot ethics is maintained at PhilPapers.

"Ethics of Artificial Intelligence and Robotics" (April 2020) in the Stanford Encyclopedia of Philosophy is a comprehensive exposition of the academic debates.

 

Modal realism

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Modal_realism   ...