
Saturday, December 1, 2018

Virtual reality

From Wikipedia, the free encyclopedia

Researchers with the European Space Agency in Darmstadt, Germany, exploring virtual reality for controlling planetary rovers and satellites in orbit

Virtual reality (VR) is an interactive computer-generated experience taking place within a simulated environment. It incorporates mainly auditory and visual feedback, but may also allow other types of sensory feedback, such as haptic feedback. This immersive environment can be similar to the real world or it can be fantastical. Augmented reality systems may also be considered a form of VR; they layer virtual information over a live camera feed viewed through a headset, smartphone, or tablet, giving the user the ability to view three-dimensional images blended with the real world.

Current VR technology most commonly uses virtual reality headsets or multi-projected environments, sometimes in combination with physical environments or props, to generate realistic images, sounds and other sensations that simulate a user's physical presence in a virtual or imaginary environment. A person using virtual reality equipment is able to "look around" the artificial world, move around in it, and interact with virtual features or items. The effect is commonly created by VR headsets consisting of a head-mounted display with a small screen in front of the eyes, but can also be created through specially designed rooms with multiple large screens.

VR systems that include transmission of vibrations and other sensations to the user through a game controller or other devices are known as haptic systems. This tactile information is generally known as force feedback in medical, video gaming, and military training applications.

Etymology and terminology

Paramount for the sensation of immersion into virtual reality are a high frame rate (at least 95 fps) and low latency.

"Virtual" has had the meaning of "being something in essence or effect, though not actually or in fact" since the mid-1400s. The term "virtual" has been used in the computer sense of "not physically existing but made to appear by software" since 1959. In 1938, the French avant-garde playwright Antonin Artaud described the illusory nature of characters and objects in the theatre as "la réalité virtuelle" in a collection of essays, Le Théâtre et son double. The English translation of this book, published in 1958 as The Theater and its Double, is the earliest published use of the term "virtual reality". The term "artificial reality", coined by Myron Krueger, has been in use since the 1970s. The term "virtual reality" was first used in a science fiction context in The Judas Mandala, a 1982 novel by Damien Broderick.

A "cyberspace" is a networked virtual reality.

Virtual reality shares some elements with "augmented reality" (AR). AR is a type of virtual reality technology that blends what the user sees in their real surroundings with digital content generated by computer software. The software-generated images typically enhance how the real surroundings look in some way. Some AR systems use a camera to capture the user's surroundings, while others use a see-through display that the user looks at (e.g., Microsoft's HoloLens, Magic Leap).

Technology

The Virtual Reality Modeling Language (VRML), first introduced in 1994, was intended for the development of "virtual worlds" without dependency on headsets. The Web3D Consortium was subsequently founded in 1997 to develop industry standards for web-based 3D graphics. The consortium then developed X3D from the VRML framework as an archival, open-source standard for web-based distribution of VR content.

All modern VR displays are based on technology developed for smartphones, including gyroscopes and motion sensors for tracking head, hand, and body positions; small HD screens for stereoscopic displays; and small, lightweight and fast processors. These components made VR relatively affordable for independent developers, and led to the 2012 Oculus Rift Kickstarter offering the first independently developed VR headset.
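The rotational head tracking these sensors enable can be illustrated with a toy example. The sketch below is a minimal complementary filter of the kind commonly used to fuse gyroscope and accelerometer readings into an orientation estimate; the sensor values and constants are hypothetical, not taken from any particular headset.

```python
import math

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    """Estimate head pitch by fusing gyroscope and accelerometer readings."""
    # Integrating the gyroscope is smooth and fast but drifts over time.
    gyro_pitch = pitch + gyro_rate * dt
    # The accelerometer gives an absolute (but noisy) gravity reference.
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Blend the two: trust the gyro for fast motion, and let the
    # accelerometer slowly cancel the accumulated drift.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Simulated readings: head tilting upward at 0.5 rad/s, sampled at 100 Hz.
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, gyro_rate=0.5,
                                 accel=(-4.8, 0.0, 8.6), dt=0.01)
print(f"estimated pitch after one second: {pitch:.2f} rad")
```

Real headsets extend the same idea to all three rotation axes (and, with external sensors, to position), typically with more sophisticated filters.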

Independent production of VR images and video has been aided by the development of omnidirectional cameras, also known as 360-degree cameras or VR cameras, which can record in all directions, although at low resolutions or in highly compressed formats for online streaming of 360-degree video. In contrast, photogrammetry is increasingly used to combine several high-resolution photographs to create detailed 3D objects and environments for VR applications.

History

View-Master, a stereoscopic visual simulator, was introduced in 1939

The exact origins of virtual reality are disputed, partly because of how difficult it has been to formulate a definition for the concept of an alternative existence. The development of perspective in Renaissance Europe created convincing depictions of spaces that did not exist, in what has been referred to as the "multiplying of artificial worlds". Other elements of virtual reality appeared as early as the 1860s. Antonin Artaud took the view that illusion was not distinct from reality, advocating that spectators at a play should suspend disbelief and regard the drama on stage as reality. The first references to the more modern concept of virtual reality came from science fiction.

1950–1970

The Sensorama was designed in the 1960s

Morton Heilig wrote in the 1950s of an "Experience Theatre" that could encompass all the senses in an effective manner, thus drawing the viewer into the onscreen activity. He built a prototype of his vision dubbed the Sensorama in 1962, along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). Predating digital computing, the Sensorama was a mechanical device. Heilig also developed what he referred to as the "Telesphere Mask" (patented in 1960). The patent application described the device as "a telescopic television apparatus for individual use...The spectator is given a complete sensation of reality, i.e. moving three dimensional images which may be in colour, with 100% peripheral vision, binaural sound, scents and air breezes".

Around the same time, Douglas Engelbart used computer screens both as input and output devices. In 1968, Ivan Sutherland, with the help of his student Bob Sproull, created what was widely considered to be the first head-mounted display (HMD) system for use in immersive simulation applications. It was primitive both in terms of user interface and realism, and the HMD to be worn by the user was so heavy that it had to be suspended from the ceiling. The graphics comprising the virtual environment were simple wire-frame model rooms. The formidable appearance of the device inspired its name, The Sword of Damocles.

1970–1990

The VR industry mainly provided VR devices for medical, flight simulation, automobile industry design, and military training purposes from 1970 to 1990.

David Em became the first artist to produce navigable virtual worlds at NASA's Jet Propulsion Laboratory (JPL), where he was Artist in Residence from 1977 to 1984.

The Aspen Movie Map was created at MIT in 1978. The program was a crude virtual simulation of Aspen, Colorado, in which users could wander the streets in one of three modes: summer, winter, and polygons.

In 1979, Eric Howlett developed the Large Expanse, Extra Perspective (LEEP) optical system. The system created a stereoscopic image with a field of view wide enough to create a convincing sense of space. Users of the system were impressed by the sensation of depth (field of view) in the scene and the corresponding realism. The original LEEP system was redesigned for the NASA Ames Research Center in 1985 for their first virtual reality installation, the VIEW (Virtual Interactive Environment Workstation), by Scott Fisher. The LEEP system provides the basis for most modern virtual reality headsets.

Battlezone, a 1980 Atari arcade video game, used 3D vector graphics to immerse the player in a VR world

Atari founded a research lab for virtual reality in 1982, but the lab was closed after two years due to the Atari Shock (the North American video game crash of 1983). However, its former employees, such as Tom Zimmerman, Scott Fisher, Jaron Lanier, Michael Naimark, and Brenda Laurel, continued their research and development on VR-related technologies.

A VPL Research DataSuit, a full-body outfit with sensors for measuring the movement of arms, legs, and trunk. Developed circa 1989. Displayed at the Nissho Iwai showroom in Tokyo

During the 1980s, the term "virtual reality" was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research in 1985. VPL Research developed several VR devices like the Data Glove, the EyePhone, and the Audio Sphere. VPL licensed the Data Glove technology to Mattel, which used it to make an accessory known as the Power Glove. While the Power Glove was hard to use and not popular, at US$75 it was an early affordable VR device.

1990–2000

In 1991, Carolina Cruz-Neira, Daniel J. Sandin and Thomas A. DeFanti from the Electronic Visualization Laboratory created the first cubic immersive room, the CAVE (Cave Automatic Virtual Environment). Developed as Cruz-Neira's PhD thesis, it involved a multi-projected environment, similar to the holodeck, allowing people to see their own bodies in relation to others in the room.

Between 1989 and 1992, Nicole Stenger created Angels, the first real-time interactive immersive movie. The interaction was facilitated with a dataglove and high-resolution goggles.

In 1992, Louis Rosenberg created the Virtual Fixtures system at the U.S. Air Force’s Armstrong Labs using a full upper-body exoskeleton, enabling a physically realistic virtual reality in 3D. The system enabled the overlay of physically real 3D virtual objects registered with a user's direct view of the real world, producing the first true augmented reality experience enabling sight, sound, and touch.

Antonio Medina, an MIT graduate and NASA scientist, designed a virtual reality system to "drive" Mars rovers from Earth in apparent real time despite the substantial signal delay between Mars and Earth.

The 1990s saw the first widespread commercial releases of consumer headsets. In 1991, Sega announced the Sega VR headset for arcade games and the Mega Drive console. It used LCD screens in the visor, stereo headphones, and inertial sensors that allowed the system to track and react to the movements of the user's head. In the same year, Virtuality launched and went on to become the first mass-produced, networked, multiplayer VR entertainment system. It was released in many countries, including a dedicated VR arcade at Embarcadero Center in San Francisco. Costing up to $73,000 per multi-pod Virtuality system, they featured headsets and exoskeleton gloves that gave one of the first "immersive" VR experiences. Computer Gaming World predicted "Affordable VR by 1994".

In 1994, Sega released the Sega VR-1 motion simulator arcade attraction in SegaWorld amusement arcades. It was able to track head movement and featured 3D polygon graphics in stereoscopic 3D, powered by the Sega Model 1 arcade system board. Apple released QuickTime VR, which, despite using the term "VR", was unable to represent virtual reality, instead displaying 360-degree photographic panoramas.

Nintendo's Virtual Boy console was released in 1995. Also in 1995, a group in Seattle created public demonstrations of a "CAVE-like" 270-degree immersive projection room called the Virtual Environment Theater, produced by entrepreneurs Chet Dagit and Bob Jacobson. The same year, Forte released the VFX1, a PC-powered virtual reality headset.

In 1999, entrepreneur Philip Rosedale formed Linden Lab with an initial focus on the development of VR hardware. In its earliest form, the company struggled to produce a commercial version of "The Rig", which was realized in prototype form as a clunky steel contraption with several computer monitors that users could wear on their shoulders. The concept was later adapted into the personal computer-based, 3D virtual world Second Life.

2000–present

In 2001, the SAS Cube (SAS3) became the first PC-based cubic room, developed by Z-A Production (Maurice Benayoun, David Nahon), Barco, and Clarté. It was installed in Laval, France. The SAS library gave birth to Virtools VRPack.

In 2007, Google introduced Street View, a service that shows panoramic views of an increasing number of worldwide locations such as roads, building interiors, and rural areas. It also features a stereoscopic 3D mode, introduced in 2010.

In 2010, Palmer Luckey designed the first prototype of the Oculus Rift. Built on the shell of another virtual reality headset, the prototype was capable only of rotational tracking, but it boasted a 90-degree field of vision, previously unseen in the consumer market. This initial design would later serve as the basis for its later designs. In 2014, Facebook purchased Oculus VR for $2 billion. This purchase occurred after the first development kits ordered through Oculus' 2012 Kickstarter had shipped in 2013 but before the shipping of their second development kits in 2014.

In 2013, Valve Corporation discovered and freely shared the breakthrough of low-persistence displays, which make lag-free and smear-free display of VR content possible. This was adopted by Oculus and used in all their future headsets. In early 2014, Valve showed off their SteamSight prototype, the precursor to both consumer headsets released in 2016. It shared major features with the consumer headsets, including separate 1K displays per eye, low persistence, positional tracking over a large area, and Fresnel lenses.

The PlayStation VR, a 2016 virtual reality headset exclusively for the PlayStation 4 console

In 2014, Sony announced Project Morpheus (its code name for PlayStation VR), a virtual reality headset for the PlayStation 4 video game console. In 2015, HTC and Valve announced the virtual reality headset HTC Vive and controllers. The set included tracking technology called Lighthouse, which utilized wall-mounted "base stations" for positional tracking using infrared light.

In 2014, Google announced Cardboard, a do-it-yourself stereoscopic viewer for smartphones. The user places their smartphone in the cardboard holder, which they wear on their head. Michael Naimark was appointed Google's first-ever "resident artist" in their new VR division. The Kickstarter campaign for Gloveone, a pair of gloves providing motion tracking and haptic feedback, was successfully funded, with over $150,000 in contributions.

By 2016 there were at least 230 companies developing VR-related products. Facebook had 400 employees focused on VR development; Google, Apple, Amazon, Microsoft, Sony and Samsung all had dedicated AR and VR groups. Dynamic binaural audio was common to most headsets released that year. However, haptic interfaces were not well developed, and most hardware packages incorporated button-operated handsets for touch-based interactivity. Visually, displays were still of a low enough resolution and frame rate that images were still identifiable as virtual.

In 2016, HTC shipped its first units of the HTC Vive SteamVR headset. This marked the first major commercial release of sensor-based tracking, allowing for free movement of users within a defined space. In 2017, a patent filed by Sony showed they were developing a similar location tracking technology to the Vive for PlayStation VR, with the potential for the development of a wireless headset.

Applications

VR is most commonly used in entertainment applications such as gaming and 3D cinema. Consumer virtual reality headsets were first released by video game companies in the early-to-mid 1990s. Beginning in the 2010s, next-generation commercial tethered headsets were released by Oculus (Rift), HTC (Vive) and Sony (PlayStation VR), setting off a new wave of application development. 3D cinema has been used for sporting events, pornography, fine art, music videos and short films. Since 2015, roller coasters and theme parks have incorporated virtual reality to match visual effects with haptic feedback.

In robotics, virtual reality has been used to control robots in telepresence and telerobotic systems. The technology is useful in robotics development such as in experiments that investigate how robots—through virtual articulations—can be applied as an intuitive human interface. For instance, researchers can simulate how robots are remotely controlled in different environments such as in space. Here, virtual reality not only offers insights into the manipulation and locomotion of robotic technology but also shows opportunities for inspection.

"World Skin, A Photo Safari in the Land of War" – Maurice Benayoun, Jean-Baptiste Barrière, Virtual Reality Installation – 1997

In social sciences and psychology, virtual reality offers a cost-effective tool to study and replicate interactions in a controlled environment. It can also be used as a form of therapeutic intervention; for instance, virtual reality exposure therapy (VRET) is a form of exposure therapy for treating anxiety disorders such as post-traumatic stress disorder (PTSD) and phobias.

With the supervision of experts to provide feedback, simulated VR surgical environments provide effective and repeatable training at a low cost, allowing trainees to recognize and amend errors as they occur.

Virtual reality has been used in rehabilitation since the 2000s. Despite numerous studies, good-quality evidence of its efficacy, compared to other rehabilitation methods that do not require sophisticated and expensive equipment, is lacking for the treatment of Parkinson's disease. A 2018 review on the effectiveness of mirror therapy by virtual reality and robotics for any type of pathology reached a similar conclusion.

VR can simulate real workspaces for workplace occupational safety and health purposes, educational purposes, and training purposes. It can be used to provide learners with a virtual environment where they can develop their skills without the real-world consequences of failing. It has been used and studied in primary education, military training, astronaut training, flight simulation, miner training, driver training, and bridge inspection. Supplementing military training with virtual training environments has been claimed to offer avenues of realism in military and healthcare training while minimizing cost. It has also been claimed to reduce military training costs by minimizing the amount of ammunition expended during training periods.

Applications for VR are facilitated by technologies that go beyond the use of graphics and headsets. There are a multitude of techniques, technologies, and hardware solutions to enhance the immersive experience and derive data from the users’ responses to that experience. The use of haptic clothing, eye tracking technologies, and “Fourth Dimension” sensory stimulation are becoming especially popular and useful.

The first fine art virtual world was created in the 1970s. As the technology developed, more artistic programs were produced throughout the 1990s, including feature films. When commercially available technology became more widespread, VR festivals began to emerge in the mid-2010s. The first uses of VR in museum settings began in the 1990s, seeing a significant increase in the mid-2010s. Additionally, museums have begun making some of their content virtual reality accessible.

Immersive VR engineering systems enable engineers to see virtual prototypes prior to the availability of any physical prototypes.

Virtual reality's growing market presents an opportunity and an alternative channel for digital marketing. It is also seen as a new platform for e-commerce, particularly in the bid to challenge traditional brick-and-mortar retailers. However, a study revealed that the majority of goods are still purchased in physical stores. For this reason, the simulated store environment made possible by VR technology has the potential to attract more consumers, since it offers an experience similar to that of a physical store without the inconvenience of traveling there.

In fiction and popular culture

There have been many works of fiction that reference and describe forms of virtual reality. In the 1980s and 1990s, cyberpunks viewed the technology as a potential means for social change. The recreational drug subculture praised virtual reality not only as a new art form, but as an entirely new frontier.

Concerns and challenges

Health and safety

There are many health and safety considerations for virtual reality. Most virtual reality systems come with consumer warnings, covering seizures; developmental issues in children; trip-and-fall and collision hazards; discomfort; repetitive stress injury; and interference with medical devices. Some users may experience twitches, seizures or blackouts while using VR headsets, even if they do not have a history of epilepsy and have never had blackouts or seizures before. As many as one in 4,000 people may experience these symptoms. Since these symptoms are more common among people under the age of 20, children are advised against using VR headsets. Other problems may occur in physical interactions with one's environment: while wearing VR headsets, people quickly lose awareness of their real-world surroundings and may injure themselves by tripping over or colliding with real-world objects.

A number of unwanted symptoms have been caused by prolonged use of virtual reality, and these may have slowed proliferation of the technology. For example, in 1995, Nintendo released a gaming console known as the Virtual Boy. Worn as a headpiece and connected to a typical controller, the Virtual Boy received much criticism for its negative physical effects, including "dizziness, nausea, and headaches". VR headsets may regularly cause eye fatigue, as does all screen-based technology, because people tend to blink less when watching screens, causing their eyes to become more dried out. There have been some concerns about VR headsets contributing to myopia, but although VR headsets sit close to the eyes, they may not necessarily contribute to nearsightedness if the focal length of the image being displayed is sufficiently far away. Virtual reality sickness (also known as cybersickness) occurs when a person's exposure to a virtual environment causes symptoms similar to motion sickness symptoms. The most common symptoms are general discomfort, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy. These symptoms are caused by a disconnect between what is being seen and what the rest of the body perceives: when the vestibular system, the body's internal balancing system, does not experience the motion that it expects from visual input through the eyes, the user may experience VR sickness. This can also happen if the VR system does not have a high enough frame rate, or if there is a lag between the body's movement and the onscreen visual reaction to it. Because approximately 25–40% of people experience some kind of VR sickness when using VR machines, companies are actively looking for ways to reduce it.

Privacy

The persistent tracking required by all VR systems makes the technology particularly useful for, and vulnerable to, mass surveillance. The expansion of VR will increase the potential and reduce the costs for information gathering of personal actions, movements and responses.

Conceptual and philosophical concerns

In addition, there are conceptual and philosophical considerations and implications associated with the use of virtual reality. What the phrase "virtual reality" means or refers to can be ambiguous. Mychilo S. Cline argued in 2005 that, through virtual reality, techniques will be developed to influence human behavior, interpersonal communication, and cognition.

Internalism and externalism

From Wikipedia, the free encyclopedia

Internalism and externalism are two opposing ways of explaining various subjects in several areas of philosophy. These include human motivation, knowledge, justification, meaning, and truth. The distinction arises in many areas of debate with similar but distinct meanings.

Internalism is the thesis that no fact about the world can provide reasons for action independently of desires and beliefs. Externalism is the thesis that reasons are to be identified with objective features of the world.

Moral philosophy

Motivation

In contemporary moral philosophy, motivational internalism (or moral internalism) is the view that moral convictions (which are not necessarily beliefs, e.g. feelings of moral approval or disapproval) are intrinsically motivating. That is, the motivational internalist believes that there is an internal, necessary connection between one's conviction that X ought to be done and one's motivation to do X. Conversely, the motivational externalist (or moral externalist) claims that there is no necessary internal connection between moral convictions and moral motives. That is, there is no necessary connection between the conviction that X is wrong and the motivational drive not to do X. (The use of these terms has roots in W.D. Falk's (1947) paper "'Ought' and Motivation").

These views in moral psychology have various implications. In particular, if motivational internalism is true, then an amoralist is unintelligible (and metaphysically impossible). An amoralist is not simply someone who is immoral; rather, it is someone who knows what the moral things to do are yet is not motivated to do them. Such an agent is unintelligible to the motivational internalist, because moral judgments about the right thing to do have built into them corresponding motivations to do those things that are judged by the agent to be the moral things to do. On the other hand, an amoralist is entirely intelligible to the motivational externalist, because the motivational externalist thinks that moral judgments about the right thing to do do not necessitate any motivation to do those things that are judged to be the right thing to do; rather, an independent desire, such as the desire to do the right thing, is required (Brink, 2003; Rosati, 2006).

Reasons

There is also a distinction in ethics and action theory, largely made popular by Bernard Williams (1979, reprinted in 1981), concerning internal and external reasons for action. An internal reason is, roughly, something that one has in light of one's own "subjective motivational set": one's own commitments, desires (or wants), goals, and so on. On the other hand, an external reason is something that one has independent of one's subjective motivational set. For example, suppose that Sally is going to drink a glass of poison, because she wants to commit suicide and believes that she can do so by drinking the poison. Sally has an internal reason to drink the poison, because she wants to commit suicide. However, one might say that she has an external reason not to drink the poison because, even though she wants to die, one ought not to kill oneself no matter what, regardless of whether one wants to die.

Some philosophers embrace the existence of both kinds of reason, while others deny the existence of one or the other. For example, Bernard Williams (1981) argues that there are really only internal reasons for action. Such a view is called internalism about reasons (or reasons internalism). Externalism about reasons (or reasons externalism) is the denial of reasons internalism. It is the view that there are external reasons for action; that is, there are reasons for action that one can have even if the action is not part of one's subjective motivational set.

Consider the following situation. Suppose that it's against the moral law to steal from the poor, and Sasha knows this. However, Sasha doesn't desire to follow the moral law, and there is currently a poor person next to him. Is it intelligible to say that Sasha has a reason to follow the moral law right now (to not steal from the poor person next to him), even though he doesn't care to do so? The reasons externalist answers in the affirmative ("Yes, Sasha has a reason not to steal from that poor person."), since he believes that one can have reasons for action even if one does not have the relevant desire. Conversely, the reasons internalist answers the question in the negative ("No, Sasha does not have a reason not to steal from that poor person, though others might."). The reasons internalist claims that external reasons are unintelligible; one has a reason for action only if one has the relevant desire (that is, only internal reasons can be reasons for action). The reasons internalist claims the following: the moral facts are a reason for Sasha's action not to steal from the poor person next to him only if he currently wants to follow the moral law (or if not stealing from the poor person is a way to satisfy his other current goals—that is, part of what Williams calls his "subjective motivational set"). In short, the reasoning behind reasons internalism, according to Williams, is that reasons for action must be able to explain one's action; and only internal reasons can do this.

Epistemology

Justification

Internalism

Generally speaking, internalist conceptions of epistemic justification require that one’s justification for a belief be internal to the believer in some way. Two main varieties of epistemic internalism about justification are access internalism and ontological internalism. Access internalists require that a believer must have internal access to the justifier(s) of her belief p in order to be justified in believing p. For the access internalist, justification amounts to something like the believer being aware (or capable of being aware) of certain facts that make her belief in p rational, or her being able to give reasons for her belief in p. At minimum, access internalism requires that the believer have some kind of reflective access or awareness to whatever justifies her belief. Ontological internalism is the view that justification for a belief is established by one’s mental states. Ontological internalism can be distinct from access internalism, but the two are often thought to go together since we are generally considered to be capable of having reflective access to mental states.

One popular argument for internalism is known as the 'new evil demon problem'. The new evil demon problem indirectly supports internalism by challenging externalist views of justification, particularly reliabilism. The argument asks us to imagine a subject with beliefs and experiences identical to ours, but who is being systematically deceived by a malicious Cartesian demon so that all their beliefs turn out false. In spite of the subject's unfortunate deception, the argument goes, we do not think this subject ceases to be rational in taking things to be as they appear, just as we do. After all, it is possible that we could be radically deceived in the same way, yet we are still justified in holding most of our beliefs in spite of this possibility. Since reliabilism maintains that one's beliefs are justified via reliable belief-forming processes (where reliable means yielding true beliefs), the subject in the evil demon scenario would likely not have any justified beliefs according to reliabilism, because all of their beliefs would be false and the processes producing them would therefore count as unreliable. Since this result is supposed to clash with our intuition that the subject is justified in their beliefs in spite of being systematically deceived, some take the new evil demon problem as a reason for rejecting externalist views of justification.

Externalism

Externalist views of justification emerged in epistemology during the late 20th century. Externalist conceptions of justification assert that facts external to the believer can serve as the justification for a belief. According to the externalist, a believer need not have any internal access or cognitive grasp of any reasons or facts which make her belief justified. The externalist’s assessment of justification can be contrasted with access internalism, which demands that the believer have internal reflective access to reasons or facts which corroborate their belief in order to be justified in holding it. Externalism, on the other hand, maintains that the justification for someone’s belief can come from facts that are entirely external to the agent’s subjective awareness.

Alvin Goldman, one of the most well-known proponents of externalism in epistemology, is known for developing a popular form of externalism called reliabilism. In his paper "What is Justified Belief?", Goldman characterizes the reliabilist conception of justification as follows: "If S's believing p at t results from a reliable cognitive belief-forming process (or set of processes), then S's belief in p at t is justified."

Goldman notes that a reliable belief-forming process is one which generally produces true beliefs.
A unique consequence of reliabilism (and other forms of externalism) is that one can have a justified belief without knowing one is justified (this is not possible under most forms of epistemic internalism). In addition, we do not yet know which cognitive processes are in fact reliable, so anyone who embraces reliabilism must concede that we do not always know whether some of our beliefs are justified (even though there is a fact of the matter).

As a response to skepticism

In responding to skepticism, Hilary Putnam (1982) claims that semantic externalism yields "an argument we can give that shows we are not brains in a vat" (BIV) (see also DeRose, 1999). If semantic externalism is true, then the meaning of a word or sentence is not wholly determined by what individuals think those words mean. For example, semantic externalists maintain that the word "water" referred to the substance whose chemical composition is H2O even before scientists had discovered that chemical composition. The fact that the substance out in the world we were calling "water" actually had that composition at least partially determined the meaning of the word. One way to use this in a response to skepticism is to apply the same strategy to the terms used in a skeptical argument in the following way (DeRose, 1999):
1. Either I am a BIV, or I am not a BIV.
2. If I am not a BIV, then when I say "I am not a BIV", it is true.
3. If I am a BIV, then when I say "I am not a BIV", it is true (because "brain" and "vat" would only pick out the brains and vats being simulated, not real brains and real vats).
Therefore, my utterance of "I am not a BIV" is true.
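Schematically (a reconstruction for clarity, not DeRose's own notation), write $B$ for "I am a BIV" and $T$ for "my utterance of 'I am not a BIV' is true". The argument is then a simple case analysis:

$$B \lor \neg B, \qquad \neg B \rightarrow T, \qquad B \rightarrow T \;\;\therefore\;\; T$$

Whichever disjunct holds, $T$ follows, so the conclusion goes through by disjunction elimination.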

To clarify how this argument is supposed to work: Imagine that there is a brain in a vat, and a whole world is being simulated for it. Call the individual who is being deceived "Steve." When Steve is given an experience of walking through a park, semantic externalism allows for his thought, "I am walking through a park", to be true so long as the simulated reality is one in which he is walking through a park. Similarly, what it takes for his thought, "I am a brain in a vat", to be true is for the simulated reality to be one where he is a brain in a vat. But in the simulated reality, he is not a brain in a vat.

Apart from disputes over the success of the argument or the plausibility of the specific type of semantic externalism required for it to work, there is a question as to what is gained by defeating the skeptical worry with this strategy. Skeptics can give new skeptical cases that wouldn't be subject to the same response (e.g., one where the person was very recently turned into a brain in a vat, so that their words "brain" and "vat" still pick out real brains and vats, rather than simulated ones). Further, if even brains in vats can correctly believe "I am not a brain in a vat," then the skeptic can still press us on how we know we are not in that situation (though the externalist will point out that it may be difficult for the skeptic to describe that situation).

Another attempt to use externalism to refute skepticism is made by Brueckner and Warfield. It involves the claim that our thoughts are about things, unlike a BIV's thoughts, which cannot be about things (DeRose, 1999).

Semantics

Semantic externalism comes in two varieties, depending on whether meaning is construed cognitively or linguistically. On a cognitive construal, externalism is the thesis that what concepts (or contents) are available to a thinker is determined by their environment, or their relation to their environment. On a linguistic construal, externalism is the thesis that the meaning of a word is environmentally determined. Likewise, one can construe semantic internalism in two ways, as a denial of either of these two theses.

Externalism and internalism in semantics is closely tied to the distinction in philosophy of mind concerning mental content, since the contents of one's thoughts (specifically, intentional mental states) are usually taken to be semantic objects that are truth-evaluable.

Philosophy of mind

Within the context of the philosophy of mind, externalism is the theory that the contents of at least some of one's mental states are dependent in part on their relationship to the external world or one's environment.

The traditional discussion on externalism was centered around the semantic aspect of mental content. This is by no means the only meaning of externalism now. Externalism is now a broad collection of philosophical views considering all aspects of mental content and activity. There are various forms of externalism that consider either the content or the vehicles of the mind or both. Furthermore, externalism could be limited to cognition, or it could address broader issues of consciousness.

As to the traditional discussion on semantic externalism (often dubbed content externalism), some mental states, such as believing that water is wet, and fearing that the Queen has been insulted, have contents we can capture using 'that' clauses. Content externalists often appeal to observations found as early as Hilary Putnam's seminal essay, "The Meaning of 'Meaning'" (1975). Putnam stated that we can easily imagine pairs of individuals that are microphysical duplicates embedded in different surroundings who use the same words but mean different things when using them.
For example, suppose that Ike and Tina's mothers are identical twins and that Ike and Tina are raised in isolation from one another in indistinguishable environments. When Ike says, "I want my mommy," he expresses a want satisfied only if he is brought to his mommy. If we brought Tina's mommy, Ike might not notice the difference, but he doesn't get what he wants. It seems that what he wants and what he says when he says, "I want my mommy," will be different from what Tina wants and what she says she wants when she says, "I want my mommy."

Externalists say that if we assume competent speakers know what they think, and say what they think, the difference in what these two speakers mean corresponds to a difference in the thoughts of the two speakers that is not (necessarily) reflected by a difference in the internal make-up of the speakers or thinkers. They urge us to move from externalism about meaning of the sort Putnam defended to externalism about contentful states of mind. The example pertains to singular terms, but has been extended to cover kind terms as well, such as natural kinds (e.g., 'water') and kinds of artifacts (e.g., 'espresso maker'). There is no general agreement amongst content externalists as to the scope of the thesis.

Philosophers now tend to distinguish between wide content (externalist mental content) and narrow content (anti-externalist mental content). Some, then, align themselves as endorsing one view of content exclusively, or both. For example, Jerry Fodor (1980) argues for narrow content (although he came to reject that view in his 1995 work), while David Chalmers (2002) argues for a two-dimensional semantics according to which the contents of mental states can have both wide and narrow content.

Critics of the view have questioned the original thought experiments, saying that the lessons that Putnam and later writers such as Tyler Burge (1979, 1982) have urged us to draw can be resisted. Frank Jackson and John Searle, for example, have defended internalist accounts of thought content according to which the contents of our thoughts are fixed by descriptions that pick out the individuals and kinds that our thoughts intuitively pertain to, the sorts of things that we take them to be about. In the Ike/Tina example, one might agree that Ike's thoughts pertain to Ike's mother and that Tina's thoughts pertain to Tina's, but insist that this is because Ike thinks of that woman as his mother, and we can capture this by saying that he thinks of her as 'the mother of the speaker'. This descriptive phrase will pick out one unique woman. Externalists claim this is implausible, as we would have to ascribe to Ike knowledge he wouldn't need to successfully think about or refer to his mother.

Critics have also claimed that content externalists are committed to epistemological absurdities. Suppose that a speaker can have the concept of water we do only if the speaker lives in a world that contains H2O. It seems this speaker could know a priori that she thinks that water is wet. This is the thesis of privileged access. It also seems that she could know on the basis of simple thought experiments that she can only think that water is wet if she lives in a world that contains water. What would prevent her from putting these together and coming to know a priori that the world contains water? If we should say that no one could possibly know whether water exists a priori, it seems either we cannot know content externalism to be true on the basis of thought experiments or we cannot know what we are thinking without first looking into the world to see what it is like.

As mentioned, content externalism (limited to the semantic aspects) is only one among many other options offered by externalism by and large.

Historiography of science

Internalism in the historiography of science claims that science is completely distinct from social influences, and that pure natural science could exist in any society and at any time, given the intellectual capacity. Imre Lakatos is a notable proponent of historiographical internalism.

Externalism in the historiography of science is the view that the history of science is due to its social context – the socio-political climate and the surrounding economy determines scientific progress. Thomas Kuhn is a notable proponent of historiographical externalism.

How to Create a Mind

From Wikipedia, the free encyclopedia

How to Create a Mind: The Secret of Human Thought Revealed
Author: Ray Kurzweil
Country: United States
Language: English
Publisher: Viking Penguin
Publication date: 2012
Media type: Print
Pages: 352
ISBN: 978-0670025299
OCLC: 779263503
Dewey Decimal: 612.82
LC Class: QP385.K87
Preceded by: The Singularity Is Near

How to Create a Mind: The Secret of Human Thought Revealed is a non-fiction book about brains, both human and artificial, by the inventor and futurist Ray Kurzweil. First published in hardcover on November 13, 2012, by Viking Press, it became a New York Times Best Seller. It has received attention from The Washington Post, The New York Times, and The New Yorker.

Kurzweil describes a series of thought experiments which suggest to him that the brain contains a hierarchy of pattern recognizers. Based on this he introduces his Pattern Recognition Theory of Mind (PRTM). He says the neocortex contains 300 million very general pattern recognition circuits and argues that they are responsible for most aspects of human thought. He also suggests that the brain is a "recursive probabilistic fractal" whose line of code is represented within the 30-100 million bytes of compressed code in the genome.

Kurzweil then explains that a computer version of this design could be used to create an artificial intelligence more capable than the human brain. It would employ techniques such as hidden Markov models and genetic algorithms, strategies Kurzweil used successfully in his years as a commercial developer of speech recognition software. Artificial brains will require massive computational power, so Kurzweil reviews his law of accelerating returns which explains how the compounding effects of exponential growth will deliver the necessary hardware in only a few decades.

Critics felt the subtitle of the book, The Secret of Human Thought Revealed, overpromises. Some protested that pattern recognition does not explain the "depth and nuance" of mind including elements like emotion and imagination. Others felt Kurzweil's ideas might be right, but they are not original, pointing to existing work as far back as the 1980s. Yet critics admire Kurzweil's "impressive track record" and say that his writing is "refreshingly clear", containing "lucid discussions" of computing history.

Background

Kurzweil has written several futurology books including The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity is Near (2005). In his books he develops the law of accelerating returns. The law is similar to Moore's Law, the persistent doubling in capacity of computer chips, but extended to all "human technological advancement, the billions of years of terrestrial evolution" and even "the entire history of the universe".
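In its simplest reading (a standard formalization of steady doubling, not Kurzweil's exact wording), the law describes exponential growth: a capability $C$ that doubles every $T$ years satisfies

$$C(t) = C_0 \cdot 2^{(t - t_0)/T}$$

so a two-year doubling period, for example, multiplies capacity roughly a thousandfold over twenty years, since $2^{10} = 1024$.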

Ray Kurzweil in 2008

Due to the exponential growth in computing technologies predicted by the law, Kurzweil says that by "the end of the 2020s" computers will have "intelligence indistinguishable to biological humans". As computational power continues to grow, machine intelligence will represent an ever-larger percentage of total intelligence on the planet. Ultimately it will lead to the Singularity, a merger between biology and technology, which Kurzweil predicts will occur in 2045. He says "There will be no distinction, post-Singularity, between human and machine...".

Kurzweil himself plans to "stick around" for the Singularity. He has written two health and nutrition books aimed at living longer; the subtitle of one is "Live Long Enough to Live Forever". One month after How to Create a Mind was published, Google announced that it had hired Kurzweil to work as Director of Engineering "on new projects involving machine learning and language processing". Kurzweil said his goal at Google is to "create a truly useful AI [artificial intelligence] that will make all of us smarter".

Content

Thought experiments

Kurzweil opens the book by reminding us of the importance of thought experiments in the development of major theories, including evolution and relativity. It's worth noting that Kurzweil sees Darwin as "a good contender" for the leading scientist of the 19th century. He suggests his own thought experiments related to how the brain thinks and remembers things. For example, he asks the reader to recite the alphabet, and then to recite the alphabet backwards. The difficulty of going backwards suggests "our memories are sequential and in order". Later he asks the reader to visualize someone they have met only once or twice; the difficulty here suggests "there are no images, videos, or sound recordings stored in the brain", only sequences of patterns. Eventually he concludes the brain uses a hierarchy of pattern recognizers.

Pattern Recognition Theory of Mind

Kurzweil states that the neocortex contains about 300 million very general pattern recognizers, arranged in a hierarchy. For example, to recognize a written word there might be several pattern recognizers for each different letter stroke: diagonal, horizontal, vertical or curved. The output of these recognizers would feed into higher level pattern recognizers, which look for the pattern of strokes which form a letter. Finally a word-level recognizer uses the output of the letter recognizers. All the while signals feed both "forward" and "backward". For example, if a letter is obscured, but the remaining letters strongly indicate a certain word, the word-level recognizer might suggest to the letter-recognizer which letter to look for, and the letter-level would suggest which strokes to look for. Kurzweil also discusses how listening to speech requires similar hierarchical pattern recognizers.
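The letter-and-word hierarchy described above lends itself to a short sketch. The toy recognizer below is an illustration of the idea only, not Kurzweil's actual model: it scores candidate words bottom-up from per-position letter evidence, then feeds a word-level expectation back down to an obscured letter position.

```python
# Toy two-level recognizer: letter evidence feeds word recognizers, and a
# confident word hypothesis feeds an expectation back down the hierarchy.
WORDS = ["APPLE", "APPLY", "AMPLE"]

def word_scores(letter_probs):
    """Bottom-up pass: score each word from per-position letter evidence.
    letter_probs is a list of {letter: probability} dicts, one per position;
    an empty dict marks an obscured position."""
    scores = {}
    for word in WORDS:
        score = 1.0
        for letter, probs in zip(word, letter_probs):
            # An obscured position contributes a neutral 1/26 prior;
            # otherwise, letters without evidence score zero.
            score *= probs.get(letter, 1.0 / 26 if not probs else 0.0)
        scores[word] = score
    total = sum(scores.values()) or 1.0
    return {w: s / total for w, s in scores.items()}

def top_down_expectation(scores, position):
    """Top-down pass: which letter does the word level expect at `position`?"""
    expectation = {}
    for word, score in scores.items():
        letter = word[position]
        expectation[letter] = expectation.get(letter, 0.0) + score
    return expectation

# Clear evidence for A, P, P, L; the final letter is obscured.
evidence = [{"A": 0.9}, {"P": 0.9}, {"P": 0.9}, {"L": 0.9}, {}]
scores = word_scores(evidence)
print(scores)                           # AMPLE is ruled out; APPLE and APPLY tie
print(top_down_expectation(scores, 4))  # the word level expects E or Y here
```

In Kurzweil's account the same bidirectional flow repeats at every level, from strokes up to letters, words, and phrases.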

Kurzweil's main thesis is that these hierarchical pattern recognizers are used not just for sensing the world, but for nearly all aspects of thought. For example, Kurzweil says memory recall is based on the same patterns that were used when sensing the world in the first place. Kurzweil says that learning is critical to human intelligence. A computer version of the neocortex would initially be like a newborn baby, unable to do much. Only through repeated exposure to patterns would it eventually self-organize and become functional.

Kurzweil writes extensively about neuroanatomy, of both the neocortex and "the old brain". He cites recent evidence that interconnections in the neocortex form a grid structure, which suggests to him a common algorithm across "all neocortical functions".

Digital brain

Example of a hidden Markov model.

Kurzweil next writes about creating a digital brain inspired by the biological brain he has been describing. One existing effort he points to is Henry Markram's Blue Brain Project, which is attempting to create a full brain simulation by 2023. Kurzweil says the full molecular modeling they are attempting will be too slow, and that they will have to swap in simplified models to speed up initial self-organization.

Kurzweil believes these large-scale simulations are valuable, but says a more explicit "functional algorithmic model" will be required to achieve human levels of intelligence. Kurzweil is unimpressed with neural networks and their potential, while he is very bullish on vector quantization, hidden Markov models, and genetic algorithms, since he used all three successfully in his speech recognition work. Kurzweil equates pattern recognizers in the neocortex with statements in the LISP programming language, which is also hierarchical. He also says his approach is similar to Jeff Hawkins' hierarchical temporal memory, although he feels the hierarchical hidden Markov models have an advantage in pattern detection.
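As an illustration of the kind of sequence scoring a hidden Markov model performs, here is a minimal forward-algorithm sketch; the two-state model and its probabilities are invented for the example, not drawn from Kurzweil's systems.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence."""
    # alpha[s] = probability of the observations so far, ending in state s
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[prev] * trans_p[prev][s]
                                       for prev in states)
                 for s in states}
    return sum(alpha.values())

# Toy speech-like model: two hidden states emitting acoustic symbols.
states = ("S1", "S2")
start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"a": 0.5, "b": 0.5}, "S2": {"a": 0.1, "b": 0.9}}

print(forward(("a", "b", "b"), states, start_p, trans_p, emit_p))
```

A speech recognizer compares such scores across competing word models and picks the most probable; hierarchical variants stack these models, which is the flavor Kurzweil advocates.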

Kurzweil touches on some modern applications of advanced AI, including Google's self-driving cars, IBM's Watson, which beat the best human players at the game Jeopardy!, the Siri personal assistant in the Apple iPhone, and its competitor Google Voice Search. He contrasts the hand-coded knowledge of Douglas Lenat's Cyc project with the automated learning of systems like Google Translate, and suggests the best approach is to use a combination of both, which is how IBM's Watson was so effective. Kurzweil says that John Searle has leveled his "Chinese Room" objection at Watson, arguing that Watson only manipulates symbols without meaning. Kurzweil thinks the human brain is "just" doing hierarchical statistical analysis as well.

In a section entitled "A Strategy for Creating a Mind", Kurzweil summarizes how he would put together a digital mind. He would start with a pattern recognizer and arrange for a hierarchy to self-organize using a hierarchical hidden Markov model. All parameters of the system would be optimized using genetic algorithms. He would add a "critical thinking module" to scan existing patterns in the background for incompatibilities, to avoid holding inconsistent ideas. Kurzweil says the brain should have access to "open questions in every discipline" and have the ability to "master vast databases", something traditional computers are good at. He feels the final digital brain would be "as capable as biological ones of effecting changes in the world".
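The parameter-optimization step can be illustrated with a bare-bones genetic algorithm. The sketch below is generic, a minimal example of selection, crossover, and mutation rather than the specific procedure Kurzweil used; the objective function is an arbitrary stand-in for a system's performance measure.

```python
import random

def evolve(fitness, n_params, pop_size=30, generations=200, mutation_rate=0.1):
    """Minimal genetic algorithm: evolve parameter vectors to maximize fitness."""
    pop = [[random.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_params)     # single-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):               # random mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: find parameters close to a fixed target vector.
target = [0.3, -0.7, 0.5]
best = evolve(lambda p: -sum((x - t) ** 2 for x, t in zip(p, target)), n_params=3)
print([round(x, 2) for x in best])
```

In Kurzweil's proposal, the parameters would be the settings of the hierarchical hidden Markov model, and the fitness function would measure how well the resulting hierarchy recognizes and predicts patterns.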

Philosophy

A digital brain with human-level intelligence raises many philosophical questions, the first of which is whether it is conscious. Kurzweil feels that consciousness is "an emergent property of a complex physical system", such that a computer emulating a brain would have the same emergent consciousness as the real brain. This is in contrast to people like John Searle, Stuart Hameroff and Roger Penrose who believe there is something special about the physical brain that a computer version could not duplicate.

Another issue is that of free will, the degree to which people are responsible for their own choices. Free will relates to determinism: if everything is strictly determined by prior states, then some would say that no one can have free will. Kurzweil holds a pragmatic belief in free will because he feels society needs it to function. He also suggests that quantum mechanics may provide "a continual source of uncertainty at the most basic level of reality", such that determinism does not exist.

Exponential Growth of Computing

Finally Kurzweil addresses identity with futuristic scenarios involving cloning a nonbiological version of someone, or gradually turning that same person into a nonbiological entity one surgery at a time. In the first case it is tempting to say the clone is not the original person, because the original person still exists. Kurzweil instead concludes both versions are equally the same person. He explains that an advantage of nonbiological systems is "the ability to be copied, backed up, and re-created" and this is just something people will have to get used to. Kurzweil believes identity "is preserved through continuity of the pattern of information that makes us" and that humans are not bound to a specific "substrate" like biology.

Law of accelerating returns

The law of accelerating returns is the basis for all of these speculations about creating a digital brain. It explains why computational capacity will continue to increase unabated even after Moore's Law expires, which Kurzweil predicts will happen around 2020. Integrated circuits, the current method of creating computer chips, will fade from the limelight, while some new more advanced technology will pick up the slack. It is this new technology that will get us to the massive levels of computation needed to create an artificial brain.

As exponential progress continues into and beyond the Singularity, Kurzweil says "we will merge with the intelligent technology we are creating". From there intelligence will expand outward rapidly. Kurzweil even wonders whether the speed of light is really a firm limit to civilization's ability to colonize the universe.

Reception

Analysis

Simson Garfinkel thinks Kurzweil's "pattern recognition theory of mind" is not a theory.

Simson Garfinkel, an entrepreneur and professor of computer science at the Naval Postgraduate School, says Kurzweil's pattern recognition theory of mind (PRTM) is misnamed because of the word "theory"; he feels it is not a theory, since it cannot be tested. Garfinkel rejects Kurzweil's one-algorithm approach, saying instead that "the brain is likely to have many more secrets and algorithms than the one Kurzweil describes". Garfinkel caricatures Kurzweil's plan for artificial intelligence as "build something that can learn, then give it stuff to learn", which he thinks is hardly the "secret of human thought" promised by the subtitle of the book.

Gary Marcus, a research psychologist and professor at New York University, says only the name PRTM is new. He says the basic theory behind PRTM is "in the spirit of" a model of vision known as the neocognitron, introduced in 1980. He also says PRTM even more strongly resembles Hierarchical Temporal Memory promoted by Jeff Hawkins in recent years. Marcus feels any theory like this needs to be proven with an actual working computer model. And to that end he says that "a whole slew" of machines have been programmed with an approach similar to PRTM, and they have often performed poorly.

Colin McGinn, a philosophy professor at the University of Miami, asserted in The New York Review of Books that "pattern recognition pertains to perception specifically, not to all mental activity". While Kurzweil does say "memories are stored as sequences of patterns", McGinn asks about "emotion, imagination, reasoning, willing, intending, calculating, silently talking to oneself, feeling pain and pleasure, itches, and mood", insisting these have nothing to do with pattern recognition. McGinn is also critical of the "homunculus language" Kurzweil uses: the anthropomorphization of anatomical parts like neurons. Kurzweil will write that a neuron "shouts" when it "sees" a pattern, where McGinn would prefer he say a neuron "fires" when it receives certain stimuli. In McGinn's mind, only conscious entities can "recognize" anything; a bundle of neurons cannot. Finally, he objects to Kurzweil's "law" of accelerating change, insisting it is not a law but just a "fortunate historical fact about the twentieth century".

In 2015, Kurzweil's theory was extended to a Pattern Activation/Recognition Theory of Mind with a stochastic model of self-describing neural circuits.

Reviews

Garfinkel says Kurzweil is at his best with the thought experiments early in the book, but says the "warmth and humanitarianism" evident in Kurzweil's talks is missing. Marcus applauds Kurzweil for his "lucid discussion" of Alan Turing and John von Neumann and is impressed by his descriptions of computer algorithms and the detailed histories of Kurzweil's own companies.

Matthew Feeney, assistant editor for Reason, was disappointed in how briefly Kurzweil dealt with the philosophical aspects of the mind-body problem and the ethical implications of machines which appear to be conscious, though he does say Kurzweil's "optimism about an AI-assisted future is contagious." Drew DeSilver, business reporter at the Seattle Times, says the first half of the book "has all the pizazz and drive of an engineering manual", but finds Kurzweil's description of how the Jeopardy! computer champion Watson worked "eye-opening and refreshingly clear".

McGinn says the book is "interesting in places, fairly readable, moderately informative, but wildly overstated." He mocks the book's subtitle by writing "All is revealed!" after paraphrasing Kurzweil's pattern recognition theory of mind. Speaking as a philosopher, McGinn feels that Kurzweil is "way out of his depth" when discussing Wittgenstein.

Matt Ridley, journalist and author, wrote in The Wall Street Journal that Kurzweil "has a more impressive track record of predicting technological progress than most" and therefore he feels "it would be foolish, not wise, to bet against the emulation of the human brain in silicon within a couple of decades".

Translations

  • Spanish: "Cómo crear una mente. El secreto del pensamiento humano" (Lola Books, 2013).
  • German: "Das Geheimnis des menschlichen Denkens. Einblicke in das Reverse Engineering des Gehirns" (Lola Books, 2014).

Telepathy

From Wikipedia, the free encyclopedia

The Ganzfeld experiments that aimed to demonstrate telepathy have been criticized for lack of replication and poor controls.
 
Telepathy (from the Greek τῆλε, tele meaning "distant" and πάθος, pathos or -patheia meaning "feeling, perception, passion, affliction, experience") is the purported transmission of information from one person to another without using any known human sensory channels or physical interaction. The term was coined in 1882 by the classical scholar Frederic W. H. Myers, a founder of the Society for Psychical Research, and has remained more popular than the earlier expression thought-transference.

Telepathy experiments have historically been criticized for lack of proper controls and repeatability. There is no convincing evidence that telepathy exists, and the topic is generally considered by the scientific community to be pseudoscience.

Origins of the concept

According to historians such as Roger Luckhurst and Janet Oppenheim, the origin of the concept of telepathy in Western civilization can be traced to the late 19th century and the formation of the Society for Psychical Research. As the physical sciences made significant advances, scientific concepts were applied to mental phenomena (e.g., animal magnetism), with the hope that this would help in understanding paranormal phenomena. The modern concept of telepathy emerged in this context.

Psychical researcher Eric Dingwall criticized SPR founding members Frederic W. H. Myers and William F. Barrett for trying to "prove" telepathy rather than objectively analyze whether or not it existed.

Thought reading

In the late 19th century, the magician and mentalist Washington Irving Bishop would perform "thought reading" demonstrations. Bishop claimed no supernatural powers and ascribed his abilities to muscular sensitivity (reading thoughts from unconscious bodily cues). Bishop was investigated by a group of scientists including the editor of the British Medical Journal and the psychologist Francis Galton. Bishop performed several feats successfully, such as correctly identifying a selected spot on a table and locating a hidden object. During the experiment Bishop required physical contact with a subject who knew the correct answer: he would hold the hand or wrist of the helper. The scientists concluded that Bishop was not a genuine telepath but was using a highly trained skill to detect ideomotor movements.

Another famous thought reader was the magician Stuart Cumberland. He was famous for performing blindfolded feats, such as identifying a hidden object in a room that a person had picked out, or asking someone to imagine a murder scene and then attempting to read the subject's thoughts, identify the victim, and reenact the crime. Cumberland claimed to possess no genuine psychic ability, and his thought reading performances could be accomplished only by holding the hand of his subject to read their muscular movements. He came into dispute with psychical researchers associated with the Society for Psychical Research who were searching for genuine cases of telepathy. Cumberland argued that both telepathy and communication with the dead were impossible and that the mind of man cannot be read through telepathy, but only by muscle reading.

Case studies

Gilbert Murray conducted early telepathy experiments.

In the late 19th century the Creery Sisters (Mary, Alice, Maud, Kathleen, and Emily) were tested by the Society for Psychical Research and believed to have genuine psychic ability. However, during a later experiment they were caught utilizing signal codes and they confessed to fraud. George Albert Smith and Douglas Blackburn were claimed to be genuine psychics by the Society for Psychical Research but Blackburn confessed to fraud:
For nearly thirty years the telepathic experiments conducted by Mr. G. A. Smith and myself have been accepted and cited as the basic evidence of the truth of thought transference... ...the whole of those alleged experiments were bogus, and originated in the honest desire of two youths to show how easily men of scientific mind and training could be deceived when seeking for evidence in support of a theory they were wishful to establish.
Between 1916 and 1924, Gilbert Murray conducted 236 experiments into telepathy and reported 36% as successful; however, it was suggested that the results could be explained by hyperaesthesia, as he could hear what was being said by the sender. Psychologist Leonard T. Troland carried out experiments in telepathy at Harvard University which were reported in 1917. The subjects scored below chance expectation.

Arthur Conan Doyle and W. T. Stead were duped into believing Julius and Agnes Zancig had genuine psychic powers; both wrote that the Zancigs performed telepathy. In 1924, Julius and Agnes Zancig confessed that their mind reading act was a trick and published the secret code and all the details of the trick method they had used, under the title Our Secrets!!, in a London newspaper.

In 1924, Robert H. Gault of Northwestern University, with Gardner Murphy, conducted the first American radio test for telepathy. The results were entirely negative. One of their experiments involved the attempted thought transmission of a chosen number; of 2,010 replies, none were correct.

In February 1927, with the co-operation of the British Broadcasting Corporation (BBC), V. J. Woolley, who was at the time the Research Officer for the SPR, arranged a telepathy experiment in which radio listeners were asked to take part. The experiment involved 'agents' thinking about five selected objects in an office at Tavistock Square, whilst listeners on the radio were asked to identify the objects from the BBC studio at Savoy Hill. 24,659 answers were received. The results revealed no evidence for telepathy.

A famous experiment in telepathy was recorded by the American author Upton Sinclair in his book Mental Radio, which documents Sinclair's tests of the psychic abilities of Mary Craig Sinclair, his second wife. She attempted to duplicate 290 pictures which were drawn by her husband. Sinclair claimed Mary successfully duplicated 65 of them, with 155 "partial successes" and 70 failures. However, these experiments were not conducted in a controlled scientific laboratory environment. Science writer Martin Gardner suggested that the possibility of sensory leakage during the experiment had not been ruled out:
In the first place, an intuitive wife, who knows her husband intimately, may be able to guess with a fair degree of accuracy what he is likely to draw—particularly if the picture is related to some freshly recalled event the two experienced in common. At first, simple pictures like chairs and tables would likely predominate, but as these are exhausted, the field of choice narrows and pictures are more likely to be suggested by recent experiences. It is also possible that Sinclair may have given conversational hints during some of the tests—hints which in his strong will to believe, he would promptly forget about. Also, one must not rule out the possibility that in many tests, made across the width of a room, Mrs. Sinclair may have seen the wiggling of the top of a pencil, or arm movements, which would convey to her unconscious a rough notion of the drawing.
Frederick Marion was investigated by the Society for Psychical Research in the late 1930s and early 1940s.

The Turner-Ownbey long distance telepathy experiment was discovered to contain flaws. May Frances Turner positioned herself in the Duke Parapsychology Laboratory whilst Sara Ownbey claimed to receive transmissions 250 miles away. For the experiment Turner would think of a symbol and write it down, whilst Ownbey would write her guesses. The scores were highly successful, and both records were supposed to be sent to J. B. Rhine; however, Ownbey sent them to Turner. Critics pointed out this invalidated the results, as she could have simply written her own record to agree with the other. When the experiment was repeated and the records were sent to Rhine, the scores dropped to average.

Another example is the experiment carried out by the author Harold Sherman with the explorer Hubert Wilkins, who conducted their own telepathy experiment for five and a half months starting in October 1937, while Sherman was in New York and Wilkins was in the Arctic. At the end of each day, Sherman and Wilkins would relax and visualise a mental image or "thought impression" of the events or thoughts they had experienced during the day, and then record those images and thoughts on paper in a diary. When Sherman's and Wilkins' diaries were compared at the end, the rate of claimed correspondences was more than 60 percent.

The full results of the experiments were published in 1942 in a book by Sherman and Wilkins titled Thoughts Through Space. In the book both Sherman and Wilkins wrote that they believed they had demonstrated that it was possible to send and receive thought impressions from the mind of one person to another. The magician John Booth wrote the experiment was not an example of telepathy, as a high percentage of misses had occurred. Booth wrote it was more likely that the "hits" were the result of "coincidence, law of averages, subconscious expectancy, logical inference or a plain lucky guess". A review of their book in the American Journal of Orthopsychiatry cast doubt on their experiment, noting that the study was published five years after it was conducted, which "arouses suspicion on the validity of the conclusions".

In 1948, on BBC radio, Maurice Fogel made the claim that he could demonstrate telepathy. This intrigued the journalist Arthur Helliwell, who wanted to discover his methods. He found that Fogel's mind reading acts were all based on trickery; Fogel relied on information about members of his audience gathered before the show started. Helliwell exposed Fogel's methods in a newspaper article. Although Fogel managed to fool some people into believing he could perform genuine telepathy, the majority of his audience knew he was a showman.

In a series of experiments Samuel Soal and his assistant K. M. Goldney examined 160 subjects over 128,000 trials and obtained no evidence for the existence of telepathy. Soal tested Basil Shackleton and Gloria Stewart between 1941 and 1943 in over five hundred sittings and over twenty thousand guesses. Shackleton scored 2890 compared with a chance expectation of 2308, and Gloria scored 9410 compared with a chance level of 7420. It was later discovered the results had been tampered with. Gretl Albert, who was present during many of the experiments, said she had witnessed Soal altering the records during the sessions. Betty Markwick discovered Soal had not used the method of random selection of numbers as he had claimed. Markwick showed that there had been manipulation of the score sheets, and that "all the experiments reported by Soal had thereby been discredited."
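The reported excess over chance was enormous, which is why the later evidence of record tampering was decisive. As a rough, hypothetical illustration (not part of the original analyses): if each guess is treated as an independent five-choice trial (p = 0.2), the stated chance expectations imply the number of guesses (n = expectation / 0.2), and a normal approximation to the binomial gives z-scores in the double digits:

    # Rough, hypothetical check of how far Soal's reported scores sat above chance.
    # Assumptions (not from the original reports): each guess is an independent
    # five-choice trial (p = 0.2), and the number of guesses is inferred from the
    # stated chance expectation (expected hits = n * p, so n = expectation / p).
    from math import sqrt

    def z_score(hits: int, expected: float, p: float = 0.2) -> float:
        """Normal approximation to the binomial: z = (hits - n*p) / sqrt(n*p*(1-p))."""
        n = expected / p                 # inferred number of guesses
        sd = sqrt(n * p * (1 - p))       # binomial standard deviation
        return (hits - expected) / sd

    for name, hits, expected in [("Shackleton", 2890, 2308), ("Stewart", 9410, 7420)]:
        print(f"{name}: z = {z_score(hits, expected):.1f}")
    # Shackleton: z = 13.5; Stewart: z = 25.8 -- deviations this large do not
    # occur by chance, so either something paranormal was happening or the
    # records were manipulated; Markwick's analysis showed it was the latter.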

In 1979 the physicists John G. Taylor and Eduardo Balanovski wrote that the only scientifically feasible explanation for telepathy would be electromagnetism (EM), involving EM fields. In a series of experiments, the measured EM levels were many orders of magnitude lower than calculated, and no paranormal effects were observed. Both Taylor and Balanovski wrote that their results were a strong argument against the validity of telepathy.

Research in anomalistic psychology has found that some apparent cases of telepathy can be explained by a covariation bias. In one experiment (Schienle et al., 1996), 22 believers and 20 skeptics were asked to judge the covariation between transmitted symbols and the corresponding feedback given by a receiver. The believers overestimated the number of successful transmissions, whilst the skeptics made accurate hit judgments. The results of another telepathy experiment, involving 48 undergraduate college students (Rudski, 2002), were explained by hindsight and confirmation biases.

In parapsychology

Within the field of parapsychology, telepathy is considered to be a form of extrasensory perception (ESP) or anomalous cognition in which information is transferred through Psi. It is often categorized similarly to precognition and clairvoyance. Experiments have been used to test for telepathic abilities. Among the most well known are the use of Zener cards and the Ganzfeld experiment.

Types

Parapsychology describes several forms of telepathy:
  • Latent telepathy, formerly known as "deferred telepathy", is described as the transfer of information, through Psi, with an observable time-lag between transmission and reception;
  • Retrocognitive, precognitive, and intuitive telepathy is described as being the transfer of information, through Psi, about the past, future or present state of an individual's mind to another individual;
  • Emotive telepathy, also known as remote influence or emotional transfer, is the process of transferring kinesthetic sensations through altered states;
  • Superconscious telepathy involves tapping into the superconscious to access the collective wisdom of the human species for knowledge.

Zener cards

Zener cards

Zener cards are marked with five distinctive symbols. When using them, one individual is designated the "sender" and another the "receiver". The sender selects a random card and visualizes the symbol on it, while the receiver attempts to determine that symbol using Psi. Statistically, the receiver has a 20% chance of randomly guessing the correct symbol, so to demonstrate telepathy they must repeatedly score a success rate that is significantly higher than 20%. If not conducted properly, this method can be vulnerable to sensory leakage and card counting.
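To make the arithmetic concrete, here is a minimal Python sketch (illustrative only; the function names and session length are invented for the example, not drawn from any actual study protocol) that simulates a receiver guessing blindly through a run of Zener cards and computes the exact one-sided binomial probability of scoring at least that many hits by chance:

    import random
    from math import comb

    SYMBOLS = ["circle", "cross", "waves", "square", "star"]  # the five Zener symbols

    def run_session(n_trials: int = 100, seed: int = 0) -> int:
        """Simulate a receiver guessing randomly; returns the number of hits."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_trials):
            target = rng.choice(SYMBOLS)   # card drawn by the sender
            guess = rng.choice(SYMBOLS)    # receiver's blind guess
            hits += (guess == target)
        return hits

    def p_value(hits: int, n: int, p: float = 0.2) -> float:
        """Exact one-sided binomial tail: P(X >= hits) under pure chance."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits, n + 1))

    hits = run_session()
    print(f"{hits}/100 hits; P(at least {hits} by chance) = {p_value(hits, 100):.3f}")

A blind guesser hovers around 20 hits per 100 cards, and sensory leakage or card counting can inflate that rate without any telepathy, which is why the tail probability alone is not decisive.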

J. B. Rhine's experiments with Zener cards were discredited due to the discovery that sensory leakage or cheating could account for all his results, such as the subject being able to read the symbols from the back of the cards or to see and hear the experimenter and pick up subtle cues. Once Rhine took precautions in response to criticisms of his methods, he was unable to find any high-scoring subjects. Due to the methodological problems, parapsychologists no longer utilize card-guessing studies.

Dream telepathy

Parapsychological studies into dream telepathy were carried out at the Maimonides Medical Center in Brooklyn, New York, led by Stanley Krippner and Montague Ullman. They concluded that the results from some of their experiments supported dream telepathy. However, the results have not been independently replicated. The psychologist James Alcock has written that the dream telepathy experiments at Maimonides have failed to provide evidence for telepathy and that "lack of replication is rampant."

The picture target experiments that were conducted by Krippner and Ullman were criticized by C. E. M. Hansel. According to Hansel there were weaknesses in the design of the experiments in the way in which the agent became aware of their target picture. Only the agent should have known the target, and no other person, until the judging of targets had been completed; however, an experimenter was with the agent when the target envelope was opened. Hansel also wrote there had been poor controls in the experiment, as the main experimenter could communicate with the subject.

An attempt to replicate the experiments that used picture targets was carried out by Edward Belvedere and David Foulkes. The finding was that neither the subject nor the judges matched the targets with dreams above chance level. Results from other experiments by Belvedere and Foulkes were also negative.

Ganzfeld experiment

When using the Ganzfeld procedure to test for telepathy, one individual is designated the receiver and is placed inside a controlled environment where they are deprived of sensory input, while another is designated the sender and is placed in a separate location. The receiver is then asked to pick up information transmitted by the sender; the nature of the information may vary between experiments.

The ganzfeld studies examined by Ray Hyman and Charles Honorton had well-documented methodological problems. Honorton reported that only 36% of the studies used duplicate target sets of pictures to avoid handling cues. Hyman discovered flaws in all of the 42 ganzfeld experiments, and to assess each experiment he devised a set of 12 categories of flaws. Six of these concerned statistical defects; the other six covered procedural flaws such as inadequate documentation, randomization, and security, as well as possibilities of sensory leakage. Over half of the studies failed to safeguard against sensory leakage, and all of the studies contained at least one of the 12 flaws. Because of the flaws, Honorton agreed with Hyman that the 42 ganzfeld studies could not support the claim for the existence of psi.

Possibilities of sensory leakage in the ganzfeld experiments included the receivers hearing what was going on in the sender's room next door, as the rooms were not soundproof, and the sender's fingerprints being visible on the target object for the receiver to see.

Hyman also reviewed the autoganzfeld experiments and discovered a pattern in the data that implied a visual cue may have taken place:
The most suspicious pattern was the fact that the hit rate for a given target increased with the frequency of occurrence of that target in the experiment. The hit rate for the targets that occurred only once was right at the chance expectation of 25%. For targets that appeared twice the hit rate crept up to 28%. For those that occurred three times it was 38%, and for those targets that occurred six or more times, the hit rate was 52%. Each time a videotape is played its quality can degrade. It is plausible then, that when a frequently used clip is the target for a given session, it may be physically distinguishable from the other three decoy clips that are presented to the subject for judging. Surprisingly, the parapsychological community has not taken this finding seriously. They still include the autoganzfeld series in their meta-analyses and treat it as convincing evidence for the reality of psi.
Hyman wrote the autoganzfeld experiments were flawed because they did not preclude the possibility of sensory leakage. In 2010, Lance Storm, Patrizio Tressoldi, and Lorenzo Di Risio analyzed 29 ganzfeld studies from 1997 to 2008. Of the 1,498 trials, 483 produced hits, corresponding to a hit rate of 32.2%. This hit rate is statistically significant with p < .001. Participants selected for personality traits and personal characteristics thought to be psi-conducive were found to perform significantly better than unselected participants in the ganzfeld condition. Hyman (2010) published a rebuttal to Storm et al. According to Hyman, "reliance on meta-analysis as the sole basis for justifying the claim that an anomaly exists and that the evidence for it is consistent and replicable is fallacious. It distorts what scientists mean by confirmatory evidence." Hyman wrote that the ganzfeld studies have not been independently replicated and have failed to produce evidence for telepathy. Storm et al. published a response to Hyman claiming the ganzfeld experimental design has proved to be consistent and reliable, but that parapsychology is a struggling discipline that has not received much attention, so further research on the subject is necessary. Rouder et al. (2013) wrote that critical evaluation of Storm et al.'s meta-analysis reveals no evidence for telepathy and no plausible mechanism, and that it omitted replication failures.
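The headline figures quoted above are easy to verify. A quick sketch, assuming the standard four-alternative ganzfeld design (a 25% chance hit rate, as in Hyman's quote), reproduces the 32.2% hit rate and confirms that on a naive binomial reading it is far beyond p < .001:

    from math import sqrt, erfc

    n, hits, chance = 1498, 483, 0.25      # figures from Storm et al. (2010)
    rate = hits / n                        # 0.322, i.e. the quoted 32.2%
    z = (hits - n * chance) / sqrt(n * chance * (1 - chance))  # normal approximation
    p_one_sided = 0.5 * erfc(z / sqrt(2))  # P(X >= hits) under pure chance
    print(f"hit rate = {rate:.1%}, z = {z:.2f}, p = {p_one_sided:.1e}")
    # hit rate = 32.2%, z = 6.47, p = 5e-11 -- consistent with the reported
    # p < .001. As Hyman's rebuttal stresses, however, statistical significance
    # alone cannot rule out methodological artifacts such as sensory leakage.

This is exactly the dispute between the two camps: the arithmetic of the pooled hit rate is not in question, but whether flawed and heterogeneous studies can legitimately be pooled at all is.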

A 2016 paper examined questionable research practices in the ganzfeld experiments.

Twin telepathy

Twin telepathy is a belief that has been described as a myth in psychological literature. Psychologists Stephen Hupp and Jeremy Jewell have noted that all experiments on the subject have failed to provide any scientific evidence for telepathy between twins. According to Hupp and Jewell, various behavioral and genetic factors contribute to the twin telepathy myth: "identical twins typically spend a lot of time together and are usually exposed to very similar environments. Thus, it's not at all surprising that they act in similar ways and are adept at anticipating and forecasting each other's reactions to events."

A 1993 study by Susan Blackmore investigated the claims of twin telepathy. In an experiment with six sets of twins, one subject would act as the sender and the other as the receiver. The sender was given selected objects, photographs or numbers and would attempt to psychically send the information to the receiver. The results from the experiment were negative; no evidence of telepathy was observed.

The skeptical investigator Benjamin Radford has noted that "Despite decades of research trying to prove telepathy, there is no credible scientific evidence that psychic powers exist, either in the general population or among twins specifically. The idea that two people who shared their mother's womb — or even who share the same DNA — have a mysterious mental connection is an intriguing one not borne out in science."

Scientific reception

A variety of tests have been performed to demonstrate telepathy, but there is no scientific evidence that the power exists. A panel commissioned by the United States National Research Council to study paranormal claims concluded that "despite a 130-year record of scientific research on such matters, our committee could find no scientific justification for the existence of phenomena such as extrasensory perception, mental telepathy or 'mind over matter' exercises... Evaluation of a large body of the best available evidence simply does not support the contention that these phenomena exist." The scientific community considers parapsychology a pseudoscience. There is no known mechanism for telepathy. Philosopher and physicist Mario Bunge has written that telepathy would contradict laws of science and the claim that "signals can be transmitted across space without fading with distance is inconsistent with physics".

Physicist John Taylor has written that the experiments claimed by parapsychologists to provide evidence for the existence of telepathy are based on shaky statistical analysis and poor design, and that attempts by the scientific community to duplicate such experiments have failed. Taylor also wrote that the arguments used by parapsychologists for the feasibility of such phenomena are based on distortions of theoretical physics as well as "complete ignorance" of relevant areas of physics.

Psychologist Stuart Sutherland wrote that cases of telepathy can be explained by people underestimating the probability of coincidences. According to Sutherland, "most stories about this phenomenon concern people who are close to one another - husband and wife or brother and sister. Since such people have much in common, it is highly probable that they will sometimes think the same thought at the same time." Graham Reed, a specialist in anomalistic psychology, noted that experiments into telepathy often involve the subject relaxing and reporting the 'messages' to consist of colored geometric shapes. Reed wrote that these are a common type of hypnagogic image and not evidence for telepathic communication.

Outside of parapsychology, telepathy is generally explained as the result of fraud, self-delusion and/or self-deception and not as a paranormal power. Psychological research has also revealed other explanations such as confirmation bias, expectancy bias, sensory leakage, subjective validation and wishful thinking. Virtually all of the instances of more popular psychic phenomena, such as mediumship, can be attributed to non-paranormal techniques such as cold reading. Magicians such as Ian Rowland and Derren Brown have demonstrated techniques and results similar to those of popular psychics, without paranormal means. They have identified, described, and developed psychological techniques of cold reading and hot reading.

Psychiatry

The notion of telepathy is not dissimilar to two clinical concepts: delusions of thought insertion and thought removal. This similarity might explain how an individual might come to the conclusion that they were experiencing telepathy. Thought insertion/removal is a symptom of psychosis, particularly of schizophrenia, schizoaffective disorder or substance-induced psychosis. Psychiatric patients who experience this symptom falsely believe that some of their thoughts are not their own and that others (e.g., other people, aliens, demons or fallen angels, or conspiring intelligence agencies) are putting thoughts into their minds (thought insertion). Some patients feel as if thoughts are being taken out of their minds or deleted (thought removal). Along with other symptoms of psychosis, delusions of thought insertion may be reduced by antipsychotic medication. Psychiatrists and clinical psychologists believe, and empirical findings support the idea, that people with schizotypy and schizotypal personality disorder are particularly likely to believe in telepathy.

Use in fiction

Telepathy is a common theme in modern fiction and science fiction, with many extraterrestrials (such as the Protoss in the StarCraft franchise), superheroes, and supervillains having telepathic ability.

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)
In info...