
Monday, February 16, 2015

Blindsight


From Wikipedia, the free encyclopedia

Blindsight is the ability of people who are cortically blind due to lesions in their striate cortex, also known as the primary visual cortex or V1, to respond to visual stimuli that they do not consciously see.[1] The majority of studies on blindsight are conducted on patients who are "blind" on only one side of their visual field. Following the destruction of the striate cortex, patients are asked to detect, localize, and discriminate amongst visual stimuli presented to their blind side, often in a forced-response or guessing situation, even though they do not consciously recognise the stimulus. Research shows that such patients achieve higher accuracy than would be expected from chance alone. Type 1 blindsight is the term given to this ability to guess, at levels significantly above chance, aspects of a visual stimulus (such as location or type of movement) without any conscious awareness of any stimuli. Type 2 blindsight occurs when patients claim to have a feeling that there has been a change within their blind area, e.g. movement, but that it was not a visual percept.[2] Blindsight challenges the common belief that perceptions must enter consciousness to affect our behavior;[3] it shows that our behavior can be guided by sensory information of which we have no conscious awareness.[3] It may be thought of as a converse of the form of anosognosia known as Anton–Babinski syndrome, in which there is full cortical blindness along with the confabulation of visual experience.

History

We owe much of our current understanding of blindsight to early experiments on monkeys. One monkey in particular, Helen, could be considered the "star monkey in visual research" because she was the original blindsight subject. Helen was a macaque monkey that had been decorticated; specifically, her primary visual cortex (V1) was completely removed. As expected, Helen appeared blind by the standard tests for blindness. Nevertheless, in certain situations she exhibited sighted behavior. Her pupils would dilate and she would blink at stimuli that threatened her eyes. Furthermore, under certain experimental conditions she could detect a variety of visual stimuli, such as the presence and location of objects, as well as shape, pattern, orientation, motion, and color.[4][5][6] In many cases she was able to navigate her environment and interact with objects as if she were sighted.[7]

A similar phenomenon was also discovered in humans. Subjects who had suffered damage to their visual cortices due to accidents or strokes reported partial or total blindness. In spite of this, when prompted they could "guess" with above-chance accuracy about the presence and details of objects, much like the animal subjects, and they could even catch objects that were tossed at them. Notably, the subjects never developed any kind of confidence in their abilities. Even when told of their successes, they would not begin to make "guesses" about objects spontaneously, but instead still required prompting. Furthermore, blindsight subjects rarely express the amazement about their abilities that sighted people would expect them to express.[8]

Describing blindsight

The brain contains several mechanisms involved in vision. Consider two systems that evolved at different times. The first, older system is more primitive and resembles the visual system of animals such as fish and frogs; the second, more recent system is more complex and is possessed by mammals.
The second system seems to be responsible for our ability to perceive the world around us, while the first is devoted mainly to controlling eye movements and orienting our attention to sudden movements in our periphery. Patients with blindsight have damage to the second, "mammalian" visual system (the visual cortex of the brain and some of the nerve fibers that bring information to it from the eyes).[3] This phenomenon shows how, after the more complex visual system is damaged, people can use the primitive visual system of their brains to guide hand movements toward an object even though they cannot see what they are reaching for.[3] Hence, visual information can control behavior without producing a conscious sensation. This ability of those with blindsight to act on objects they are unconscious of suggests that consciousness is not a general property of all parts of the brain; rather, only certain parts of the brain play a special role in consciousness.[3]

Blindsight patients show awareness of single visual features, such as edges and motion, but cannot gain a holistic visual percept. This suggests that perceptual awareness is modular and that, in sighted individuals, there is a "binding process that unifies all information into a whole percept", which is interrupted in patients with conditions such as blindsight and visual agnosia.[1] Object identification and object recognition are therefore thought to be separate processes that occur in different areas of the brain, working independently of one another. The modular theory of object perception and integration would account for the "hidden perception" experienced in blindsight patients. Research has shown that visual stimuli with the single visual features of sharp borders, sharp onset/offset times,[9] motion,[10] and low spatial frequency[11] contribute to, but are not strictly necessary for, an object's salience in blindsight.

Theories of causation

There are three main theories to explain blindsight. The first states that after damage to area V1, other branches of the optic nerve deliver visual information to the superior colliculus and several other areas, including parts of the cerebral cortex. These areas might control the blindsight responses; however, many people with damage to area V1 do not show blindsight, or show it only in certain parts of the visual field.

Another explanation of the phenomenon is that even though the majority of a person's visual cortex may be damaged, tiny islands of healthy tissue remain. These islands are not large enough to provide conscious perception, but are nevertheless enough to support blindsight. (Kalat, 2009)

A third theory is that the information required to determine the distance to and velocity of an object in object space is extracted by the lateral geniculate nucleus before the information is projected to the cerebral cortex. In the normal subject, these signals are used to merge the information from the two eyes into a three-dimensional representation (which includes the position and velocity of individual objects relative to the organism), to extract a vergence signal for the precision (previously auxiliary) optical system (POS), and to extract a focus control signal for the lenses of the eyes. The stereoscopic information is attached to the object information passed to the cerebral cortex.[12]

Evidence of blindsight can be indirectly observed in children as young as two months, although there is difficulty in determining the type in a patient who is not old enough to answer questions.[13]

Blindsight and the lateral geniculate nucleus

Blindsight is a disorder in which an individual sustains damage to the primary visual cortex and, as a result, loses sight in the corresponding visual field.[14] Patients are nevertheless able to detect stimuli in that damaged visual field, which accounts for the "sight" portion of the term blindsight. Mosby's Dictionary of Medicine, Nursing & Health Professions defines the lateral geniculate nucleus (LGN) as "one of two elevations of the lateral posterior thalamus receiving visual impulses from the retina via the optic nerves and tracts and relaying the impulses to the calcarine (visual) cortex".[15] In particular, the magnocellular system of the LGN is less affected by the removal of V1, which suggests that it is because of this system in the LGN that blindsight occurs.[16] Although damage to the primary visual cortex (V1) is what causes blindsight, the still-functioning magnocellular system of the LGN is what produces the "sight" in blindsight. According to Dragoi of the UT Medical School at Houston, the LGN is made up of six layers, of which layers 1 and 2 are the magnocellular layers. The cells in these layers behave like M-type retinal ganglion cells,[17] are most sensitive to movement of visual stimuli,[17] and have large center-surround receptive fields.[17] M-type retinal ganglion cells project to the magnocellular layers.[17]

To understand this better, a more detailed look at the visual pathways is useful. What is seen in the left and right visual fields is taken in by each eye and carried back to the optic disk via the nerve fibers of the retina.[17] From the optic disk, the visual information travels along the optic nerve to the optic chiasm and then continues in the optic tract, terminating in four different areas of the brain: the LGN, the superior colliculus, the pretectum of the midbrain, and the suprachiasmatic nucleus of the hypothalamus. Most, but not all, axons from the LGN then terminate in the primary visual cortex.[17]

Brain scans of normal, healthy individuals were conducted to test whether visual motion information can bypass V1 via a connection from the LGN to the human middle temporal complex (hMT+).[18] The findings showed that there was indeed a connection carrying visual motion information directly from the LGN to hMT+, allowing the information to travel without passing through V1.[18] Alan Cowey also states that a direct pathway from the retina to the LGN survives a traumatic injury to V1, and that from there information is sent to the extrastriate visual areas.[19] The extrastriate visual areas include the areas of the occipital lobes that surround V1.[17] In non-human primates, these can include V2, V3 and V4.[17]

In a study done on primates, after removal of part of V1, the V2 and V3 regions of the brain were still excited by visual stimuli.[19] According to Lawrence Weiskrantz, "the LGN projections that survive V1 removal are relatively sparse in density, but are nevertheless widespread and probably encompass all extrastriate visual areas".[20] This was found through indirect testing methods, first measuring the influence of a stimulus presented in the blind hemifield against the same stimulus presented in the intact hemifield, and also using reflex measures such as electrical skin conductance.[20] These methods, used with MRI, produced the results that Weiskrantz recorded.[20] The finding also suggests that following the removal of V1, neurons from the LGN remain and send visual stimulus information to V2, V4, V5 and TEO.[20]

Injury to the primary visual cortex, including lesions and other trauma, leads to the loss of visual experience.[16] However, the residual vision that remains cannot be attributed to V1. According to the study by Schmid, Mrowka, Turchi, Saunders, Wilke, Peters, Ye & Leopold, the "thalamic lateral geniculate nucleus has a causal role in V1-independent processing of visual information". The authors established this through experiments using fMRI during activation and inactivation of the LGN, measuring the contribution the LGN makes to visual experience in monkeys with a V1 lesion. Once the LGN was inactivated, virtually all of the extrastriate areas of the brain no longer showed a response on the fMRI.[16] A qualitative assessment found that "scotoma stimulation, with the LGN intact had fMRI activation of ~20% of that under normal conditions".[16] This finding agrees with the information obtained from fMRI images of patients with blindsight.[16]

These findings parallel the theory that the LGN is what actually houses this residual vision after damage to V1. In the same study,[16] patient GY presented with blindsight. Research showed that the LGN is less affected by a V1 injury, as many of its neurons still react to lower-visual-field stimulation.[16] All residual vision was lost, however, when there was injury to the LGN itself.[16] This further substantiates the idea that the LGN is preserved upon injury to V1 and is what provides the "sight" portion of blindsight.

The LGN plays a major role in blindsight.[16][17][19][20] Although injury to V1 does create a loss of vision, the LGN is credited with the residual vision that remains, substantiating the word "sight" in blindsight. The LGN is able to bypass V1 and still communicate with the extrastriate areas of the brain, creating the response to visual stimuli that we see in blindsight patients.[16][17][19][20] Using techniques such as fMRI and other indirect methodologies, blindsight can be attributed to V1 damage together with a functioning LGN.

Evidence in animals

In 1995, Dr. Cowey published the paper "Blindsight in Monkeys". At the time, blindsight was a poorly documented phenomenon believed to be caused by damage due to stroke; patients were rare, and it was hard to separate the areas responsible for the condition from other damage sustained. In this experiment Cowey attempted to show that monkeys with lesioned or even wholly removed striate cortices also exhibit blindsight. To do this he had the monkeys complete a task similar to those commonly used on human patients with the disorder. The monkeys were placed in front of a monitor and taught to indicate, when a tone was played, whether an object was present in their visual field or nothing was present. Since blindsight patients do not consciously see anything in their blind field, if the monkeys registered blank trials, in which no object was presented, the same as trials in which something appeared in the affected field, then they would be responding the same way as a human with blindsight. Cowey hoped this would provide evidence that the striate cortex is key to the disorder, and he found that the monkeys did indeed perform very similarly to human participants.[21]

That same year, Cowey published a second paper, "Visual detection in monkeys with blindsight". In it he wanted to show that monkeys could also detect movement in their deficit visual field despite not being consciously aware of the presence of an object there. To do this, Cowey used another standard test for humans with the condition. The test was similar to the one he had previously used, except that the object was presented only in the deficit visual field and would move. Starting from the center of the deficit visual field, the object would move up, down, or to the right. The monkeys performed identically to humans on the test, responding correctly almost every time. This showed that the monkeys' ability to detect movement is separate from their ability to consciously detect an object in their deficit visual field, and gave further evidence for the claim that damage to the striate cortex plays a large role in causing the disorder.[22]

Several years later, Cowey published another paper comparing and contrasting the data collected from his monkeys with that of a specific human patient with blindsight, GY. GY's striate cortical region was damaged through trauma at the age of eight; though for the most part he retained full functionality, GY was not consciously aware of anything in his right visual field. By comparing brain scans of both GY and the monkeys he had worked with, as well as their test results, Cowey concluded that the effects of striate cortical damage are the same in both species. This finding provided strong validation for Cowey's previous work with monkeys and showed that monkeys can serve as accurate test subjects for blindsight.[23]

Case studies

Researchers first delved into what would become the study of blindsight when it was observed that monkeys that had had their primary visual cortex removed could still seemingly discern shape, spatial location, and movement to some extent.[24] Humans, however, appeared to lose their sense of sight entirely when the visual cortex was damaged. Yet researchers were able to show that human blindsight sufferers did exhibit some amount of unconscious visual recognition when they were tested using the same techniques that researchers studying blindsight in animals had used.[24]

Researchers applied the same type of tests used to study blindsight in animals to a patient referred to as DB. The normal techniques for assessing visual acuity in humans involved asking subjects to verbally describe some visually recognizable aspect of an object or objects. DB was instead given forced-choice tasks: even when he was not visually conscious of the presence, location, or shape of an object, he still had to guess. The results of DB's guesses, if one would even refer to them as such, showed that he was able to determine shape and detect movement at some unconscious level, despite not being visually aware of the stimuli. DB chalked the accuracy of these guesses up to mere coincidence.[24]

The discovery of blindsight raised questions about how different types of visual information, even unconscious information, may or may not be affected by damage to different areas of the visual cortex.[25] Previous studies had already demonstrated that, even without conscious awareness of visual stimuli, humans could still determine certain visual features such as presence in the visual field, shape, orientation and movement.[24] A newer study added evidence that if the damage to the visual cortex occurs in areas above the primary visual cortex, conscious awareness of visual stimuli itself is not damaged.[25] Blindsight thus shows that even when the primary visual cortex is damaged or removed, a person can still perform actions guided by unconscious visual information: even when damage occurs in the area necessary for conscious awareness, other functions for processing these visual percepts remain available to the individual.[24] The same goes for damage to other areas of the visual cortex: if an area responsible for a certain function is damaged, only that particular function or aspect is lost, while functions handled by other parts of the visual cortex remain intact.[25]

Alexander and Cowey investigated how the contrasting brightness of stimuli affects blindsight patients' ability to discern movement. Prior studies had already shown that blindsight patients are able to detect motion even though they claim they see no visual percepts in their blind fields.[24] The subjects of the study were two patients who suffered from hemianopsia, blindness in half of their visual field. Both subjects had previously displayed the ability to accurately determine the presence of visual stimuli in their blind hemifields without reporting an actual visual percept.[26]

To test the effect of brightness on the subjects' ability to detect motion, the researchers used a white background with a series of colored dots. On each trial they altered the brightness contrast between the dots and the white background, to see whether a larger brightness difference improved performance.[26] The participants viewed the display over two time intervals of equal length, with the dots moving during only one of them, and had to tell the researchers whether the movement occurred during the first or the second interval.[26]

When the contrast in brightness between the background and the dots was higher, both subjects could discern motion at rates significantly above what guessing alone would produce. However, one of the subjects could not accurately determine whether blue dots were moving, regardless of the brightness contrast, although that subject could do so with dots of every other color.[26] When the contrast was highest, the subjects could tell whether the dots were moving with very high accuracy. Even when the dots were white but differed in brightness from the background, the subjects could still determine whether they were moving. Regardless of the dots' color, however, the subjects could not tell whether the dots were in motion when the dots and the white background were of similar brightness.[26]
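To make the two-interval procedure concrete, here is a minimal simulation sketch in Python. The observer model is invented for illustration: a single parameter, p_detect, stands in for the subject's sensitivity at a given brightness contrast. It shows why above-chance accuracy in such a task implies residual motion processing: with no sensitivity the observer hovers at the 50% guessing baseline, and any genuine detection pushes accuracy above it.

```python
import random

def run_2ifc_block(n_trials: int, p_detect: float) -> float:
    """Fraction of trials in which a simulated observer picks the
    interval that actually contained the moving dots."""
    correct = 0
    for _ in range(n_trials):
        motion_interval = random.choice([1, 2])  # dots move in one interval
        if random.random() < p_detect:
            response = motion_interval           # motion was detected
        else:
            response = random.choice([1, 2])     # pure guess
        correct += response == motion_interval
    return correct / n_trials

# Higher contrast -> higher detection probability -> accuracy above the
# 50% guessing baseline, mirroring the pattern reported above.
for contrast, p in [("low", 0.0), ("medium", 0.4), ("high", 0.9)]:
    print(contrast, round(run_2ifc_block(10_000, p), 3))
```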

Kentridge, Heywood, and Weiskrantz used the phenomenon of blindsight to investigate the connection between visual attention and visual awareness. They wanted to see whether their subject, who had exhibited blindsight in other studies,[26] could react more quickly when attention was cued to a location, despite having no visual awareness of the cue. The researchers wanted to show that being conscious of a stimulus and paying attention to it are not the same thing.[27]

To test the relationship between attention and awareness, they had the participant try to determine where a target was and whether it was oriented horizontally or vertically on a computer screen.[27] The target line would appear at one of two locations and in one of two orientations. Before the target appeared, an arrow became visible on the screen; this arrow was the cue for the subject, and it usually, though not always, pointed to the correct position of the target line. The participant pressed a key to indicate whether the line was horizontal or vertical, and could then also indicate whether they had a feeling that any object had been there at all, even if they could not see anything. The participant was able to accurately determine the orientation of the line when the target was cued by a preceding arrow, even though these visual stimuli produced no awareness in a subject who had no vision in that area of their visual field. The study showed that even without the ability to become visually aware of a stimulus, the participant could still focus attention on it.[27]

In 2003, a patient known as TN lost use of his primary visual cortex, area V1. He had two successive strokes, which knocked out the region in both his left and right hemisphere. After his strokes, ordinary tests of TN's sight turned up nothing. He could not even detect large objects moving right in front of his eyes. Researchers eventually began to notice that TN exhibited signs of blindsight and in 2008 decided to test their theory. They took TN into a hallway and asked him to walk through it without using the cane he always carried after having the strokes. TN was not aware at the time, but the researchers had placed various obstacles in the hallway to test if he could avoid them without conscious use of his sight. To the researchers' delight, he moved around every obstacle with ease, at one point even pressing himself up against the wall to squeeze past a trashcan placed in his way. After navigating through the hallway, TN reported that he was just walking the way he wanted to, not because he knew anything was there. (de Gelder, 2008)

Another case study, written about in Carlson's "Physiology of Behavior, 11th edition", gives another insight into blindsight. In this case study, a girl had brought her grandfather in to see a neuropsychologist. The girl's grandfather, Mr. J., had had a stroke which had left him completely blind apart from a tiny spot in the middle of his visual field. The neuropsychologist, Dr. M., performed an exercise with him. The doctor helped Mr. J. to a chair, had him sit down, and then asked to borrow his cane. The doctor then asked, "Mr. J., please look straight ahead. Keep looking that way, and don't move your eyes or turn your head. I know that you can see a little bit straight ahead of you, and I don't want you to use that piece of vision for what I'm going to ask you to do. Fine. Now, I'd like you to reach out with your right hand [and] point to what I'm holding." Mr. J. then replied, "But I don't see anything—I'm blind!". The doctor then said, "I know, but please try, anyway." Mr. J. then shrugged and pointed, and was surprised when his finger encountered the end of the cane that the doctor was pointing toward him. After this, Mr. J. said that "it was just luck". The doctor then turned the cane around so that the handle side was pointing towards Mr. J., and asked him to grab hold of it. Mr. J. reached out with an open hand and grabbed hold of the cane. After this, the doctor said, "Good. Now put your hand down, please." The doctor then rotated the cane 90 degrees, so that the handle was oriented vertically, and asked Mr. J. to reach for it again. Mr. J. did so, turning his wrist so that his hand matched the orientation of the handle.
This case study shows that—although (on a conscious level) Mr. J. was completely unaware of any visual abilities that he may have had—he was able to orient his grabbing motions as if he had no visual impairments.[3]

Research

Lawrence Weiskrantz and colleagues showed in the early 1970s that if forced to guess about whether a stimulus is present in their blind field, some observers do better than chance.[28] This ability to detect stimuli that the observer is not conscious of can extend to discrimination of the type of stimulus (for example, whether an 'X' or 'O' has been presented in the blind field).
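A sketch of how "better than chance" is typically quantified in forced-choice work of this kind: an exact one-sided binomial test of the number of correct guesses against the 50% guessing baseline. The trial counts below are invented for illustration, not taken from Weiskrantz's data.

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """One-sided p-value: probability of k or more correct responses
    out of n if the observer were merely guessing with success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 70 correct detections in 100 blind-field trials:
print(p_at_least(70, 100))  # ~4e-5: far too unlikely to be mere guessing
```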

Electrophysiological evidence from the late 1970s (de Monasterio, 1978; Marrocco & Li, 1977; Schiller & Malpeli, 1977) showed that there is no direct retinal input from S-cones to the superior colliculus, implying that the perception of color information should be impaired. However, more recent evidence points to a pathway from S-cones to the superior colliculus, opposing de Monasterio's earlier research and supporting the idea that some chromatic processing mechanisms are intact in blindsight.[29][30]

Marco Tamietto and Beatrice de Gelder performed experiments linking emotion detection and blindsight. Patients shown images of people expressing emotions on their blind side correctly guessed the emotion most of the time. The patients' own facial muscles used in smiling and frowning were also measured, and they reacted in ways that matched the kind of emotion in the unseen image. The emotions were therefore recognized without involving conscious sight.

A recent study found that a young woman with a unilateral lesion of area V1 could scale her grasping movement as she reached out to pick up objects of different sizes placed in her blind field, even though she could not report the sizes of the objects.[31] Similarly, another patient with unilateral lesion of area V1 could avoid obstacles placed in his blind field when he reached toward a target that was visible in his intact visual field.[32] Even though he avoided the obstacles, he never reported seeing them.

Dr. Cowey wrote extensively about patient GY and the tests he underwent to provide further evidence that blindsight is functionally different from conscious vision. GY was asked not only to discriminate whether a stimulus was present, but also to state the opposite of its direction of travel: if the stimulus was traveling upward, he was to indicate it was moving downward, and vice versa. GY was able to do this with impressive accuracy in his intact left visual field; in his deficit visual field, however, he consistently stated the wrong direction of travel. This indicated that though GY was aware of the movement, he was not able to apply the researchers' reversal rule to his observations in the blind field.[33]

Brain regions involved

Visual processing in the brain occurs in a hierarchical series of stages (with much crosstalk and feedback between areas). Destruction of the primary visual cortex leads to blindness in the part of the visual field that corresponds to the damaged cortical representation. The area of blindness, known as a scotoma, lies in the visual field opposite the damaged hemisphere and can vary from a small area up to the entire hemifield. The route from the retina through V1 is not the only visual pathway into the cortex, though it is by far the largest; it is commonly thought that the residual performance of people exhibiting blindsight is due to preserved pathways into the extrastriate cortex that bypass V1. What is surprising is that activity in these extrastriate areas is apparently insufficient to support visual awareness in the absence of V1.

In more detail, recent physiological findings suggest that visual processing takes place along several independent, parallel pathways. One system processes information about shape, one about color, and one about movement, location and spatial organization. This information moves through an area of the brain called the lateral geniculate nucleus, located in the thalamus, and on to be processed in the primary visual cortex, area V1 (also known as the striate cortex because of its striped appearance). People with damage to V1 report no conscious vision, no visual imagery, and no visual images in their dreams. However, some of these people still experience the blindsight phenomenon. (Kalat, 2009)

The superior colliculus and prefrontal cortex also have a major role in awareness of a visual stimulus.[34]

Philosophical reception

Colin McGinn sees in the phenomenon of blindsight one reason for his thesis of a natural depth of consciousness.[35] Robert Nozick joins McGinn in supporting this thesis.[36]

Turing test



From Wikipedia, the free encyclopedia


The "standard interpretation" of the Turing Test, in which player C, the interrogator, is tasked with trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination. Image adapted from Saygin, 2000.[1]

The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine's ability to render words into audio.[2] All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely each answer resembles the answer a human would give.
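As a rough illustration of the test's structure, not of any real competition's protocol, the sketch below casts the setup as a text-only exchange: the judge sees only anonymized transcripts from two hidden respondents, one human and one machine, and must say which is which. Both respondents and the judge here are invented stubs.

```python
import random

def machine(question: str) -> str:
    # Stand-in for an entrant program's reply.
    return "I'd rather not say. Why do you ask?"

def human(question: str) -> str:
    # Stand-in for the hidden human's reply.
    return "Honestly, it depends on the day."

def run_test(questions, judge) -> bool:
    """Return True if the judge correctly identifies the machine."""
    # The judge sees only the labels "A" and "B", assigned at random,
    # never the respondents themselves: the channel is text only.
    respondents = {"A": machine, "B": human}
    if random.random() < 0.5:
        respondents = {"A": human, "B": machine}
    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, respond in respondents.items()}
    verdict = judge(transcripts)  # judge must answer "A" or "B"
    machine_label = next(l for l, r in respondents.items() if r is machine)
    return verdict == machine_label

# A judge who cannot tell the difference is reduced to guessing and is
# right only about half the time; "passing" means no judge can do
# reliably better than that.
random_judge = lambda transcripts: random.choice(["A", "B"])
print(run_test(["Do you enjoy poetry?", "What is 24 + 17?"], random_judge))
```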

The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[3] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[4] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[5]

In the years since 1950, the test has proven to be both highly influential and widely criticised, and it remains an essential concept in the philosophy of artificial intelligence.[1][6] The test has since come to be known by Turing's name.

History

Philosophical background

The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing Test in his 1637 Discourse on the Method when he writes:
[H]ow many different automata or moving machines can be made by the industry of man [...] For we can easily understand a machine's being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.[7]
Here Descartes notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing Test by identifying the insufficiency of appropriate linguistic response as that which separates the human from the automaton. Descartes fails to consider the possibility that this insufficiency might be overcome by future automata, and so does not propose the Turing Test as such, even if he prefigures its conceptual framework and criterion.

Denis Diderot formulates in his Pensées philosophiques a Turing-test criterion:

"If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation."[8]

This does not mean Diderot endorsed the criterion, but that it was already a common argument among materialists at that time.

According to dualism, the mind is non-physical (or, at the very least, has non-physical properties)[9] and, therefore, cannot be explained in purely physical terms. According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially.[10]

In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book, Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined."[11] (This suggestion is very similar to the Turing test, but is concerned with consciousness rather than intelligence. Moreover, it is not certain that Ayer's popular philosophical classic was familiar to Turing.) In other words, a thing is not conscious if it fails the consciousness test.

Alan Turing

Researchers in the United Kingdom had been exploring "machine intelligence" for up to ten years prior to the founding of the field of artificial intelligence (AI) research in 1956.[12] It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing, after whom the test is named.[13]

Turing, in particular, had been tackling the notion of machine intelligence since at least 1941[14] and one of the earliest-known mentions of "computer intelligence" was made by him in 1947.[15] In Turing's report, "Intelligent Machinery",[16] he investigated "the question of whether or not it is possible for machinery to show intelligent behaviour"[17] and, as part of that investigation, proposed what may be considered the forerunner to his later tests:
It is not difficult to devise a paper machine which will play a not very bad game of chess.[18] Now get three men as subjects for the experiment. A, B and C. A and C are to be rather poor chess players, B is the operator who works the paper machine. ... Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.[19]
"Computing Machinery and Intelligence" (1950) was the first published paper by Turing to focus exclusively on machine intelligence. Turing begins the 1950 paper with the claim, "I propose to consider the question 'Can machines think?'"[3] As he highlights, the traditional approach to such a question is to start with definitions, defining both the terms "machine" and "intelligence". Turing chooses not to do so; instead he replaces the question with a new one, "which is closely related to it and is expressed in relatively unambiguous words."[3] In essence he proposes to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?"[20] The advantage of the new question, Turing argues, is that it draws "a fairly sharp line between the physical and intellectual capacities of a man."[21]

To demonstrate this approach Turing proposes a test inspired by a party game, known as the "Imitation Game," in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game both the man and the woman aim to convince the guests that they are the other. (Huma Shah argues that this two-human version of the game was presented by Turing only to introduce the reader to the machine-human question-answer test.[22]) Turing described his new version of the game as follows:
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"[21]
Later in the paper Turing suggests an "equivalent" alternative formulation involving a judge conversing only with a computer and a man.[23] While neither of these formulations precisely matches the version of the Turing Test that is more generally known today, he proposed a third in 1952. In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer and the role of the computer is to make a significant proportion of the jury believe that it is really a man.[24]

Turing's paper considered nine putative objections, which include all the major arguments against artificial intelligence that have been raised in the years since the paper was published (see "Computing Machinery and Intelligence").[5]

ELIZA and PARRY

In 1966, Joseph Weizenbaum created a program which appeared to pass the Turing test. The program, known as ELIZA, worked by examining a user's typed comments for keywords. If a keyword was found, a rule that transforms the user's comment was applied, and the resulting sentence was returned. If no keyword was found, ELIZA responded either with a generic riposte or by repeating one of the earlier comments.[25] In addition, Weizenbaum developed ELIZA to replicate the behaviour of a Rogerian psychotherapist, allowing ELIZA to be "free to assume the pose of knowing almost nothing of the real world."[26] With these techniques, Weizenbaum's program was able to fool some people into believing that they were talking to a real person, with some subjects being "very hard to convince that ELIZA [...] is not human."[26] Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing Test,[26][27] even though this view is highly contentious (see below).
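The keyword-and-transformation mechanism described above is simple enough to illustrate with a toy responder. The rules below are invented for illustration; Weizenbaum's actual DOCTOR script was far richer.

```python
import random
import re

# (keyword pattern, response template using the captured text)
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\b(?:mother|father|family)\b", re.I),
     "Tell me more about your family."),
]
GENERIC = ["Please go on.", "I see.", "What does that suggest to you?"]

def reply(comment: str, earlier: list) -> str:
    # Scan for the first keyword rule that matches and apply its
    # transformation to the captured part of the user's comment.
    for pattern, template in RULES:
        match = pattern.search(comment)
        if match:
            earlier.append(comment)
            return template.format(*match.groups())
    # No keyword: fall back to a generic riposte or an earlier comment.
    if earlier and random.random() < 0.3:
        return "Earlier you said: " + earlier[-1] + ". Tell me more."
    return random.choice(GENERIC)

history: list = []
print(reply("I need a holiday", history))  # -> "Why do you need a holiday?"
print(reply("It rained today", history))   # -> generic riposte or recall
```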

Kenneth Colby created PARRY in 1972, a program described as "ELIZA with attitude".[28] It attempted to model the behaviour of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. To validate the work, PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs.[29] The psychiatrists were able to make the correct identification only 48 percent of the time – a figure consistent with random guessing.[30]

In the 21st century, versions of these programs (now known as "chatterbots") continue to fool people. "CyberLover", a malware program, preys on Internet users by convincing them to "reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers".[31] The program has emerged as a "Valentine-risk" flirting with people "seeking relationships online in order to collect their personal data".[32]

The Chinese room

John Searle's 1980 paper Minds, Brains, and Programs proposed the "Chinese room" thought experiment and argued that the Turing test could not be used to determine whether a machine can think. Searle noted that software (such as ELIZA) could pass the Turing Test simply by manipulating symbols of which it has no understanding. Without understanding, it could not be described as "thinking" in the same sense people do. Therefore, Searle concludes, the Turing Test cannot prove that a machine can think.[33] Much like the Turing test itself, Searle's argument has been both widely criticised[34] and highly endorsed.[35]
Arguments such as Searle's, and others' concerning the philosophy of mind, sparked a more intense debate about the nature of intelligence, the possibility of intelligent machines and the value of the Turing test that continued through the 1980s and 1990s.[36]

Loebner Prize

The Loebner Prize provides an annual platform for practical Turing Tests, with the first competition held in November 1991.[37] It is underwritten by Hugh Loebner. The Cambridge Center for Behavioral Studies in Massachusetts, United States, organized the prizes up to and including the 2003 contest. As Loebner described it, the competition was created, at least in part, to advance the state of AI research, because no one had taken steps to implement the Turing Test despite 40 years of discussing it.[38]
The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing Test and the value of pursuing it, in both the popular press[39] and academia.[40] The first contest was won by a mindless program with no identifiable intelligence that managed to fool naive interrogators into making the wrong identification. This highlighted several of the shortcomings of the Turing Test (discussed below): the winner won, at least in part, because it was able to "imitate human typing errors";[39] the unsophisticated interrogators were easily fooled;[40] and some AI researchers came to feel that the test is merely a distraction from more fruitful research.[41]

The silver (text only) and gold (audio and visual) prizes have never been won. However, the competition has awarded the bronze medal every year to the computer system that, in the judges' opinions, demonstrates the "most human" conversational behaviour among that year's entries. Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) won the bronze award on three occasions in recent times (2000, 2001, 2004). The learning AI Jabberwacky won in 2005 and 2006.[42]

The Loebner Prize tests conversational intelligence; winners are typically chatterbot programs, or Artificial Conversational Entities (ACEs). Early Loebner Prize rules restricted conversations: each entry and hidden human conversed on a single topic,[43] so interrogators were restricted to one line of questioning per entity interaction. The restricted-conversation rule was lifted for the 1995 Loebner Prize. Interaction duration between judge and entity has varied across Loebner Prizes. In Loebner 2003, at the University of Surrey, each interrogator was allowed five minutes to interact with an entity, machine or hidden human. Between 2004 and 2007, the interaction time allowed was more than twenty minutes. In 2008, the interrogation duration allowed was five minutes per pair, because the organiser, Kevin Warwick, and coordinator, Huma Shah, considered this to be the duration for any test, as Turing stated in his 1950 paper: " ... making the right identification after five minutes of questioning".[44] They felt Loebner's longer test, implemented in the 2006 and 2007 Loebner Prizes, was inappropriate for the state of artificial conversation technology.[45] Ironically, the 2008 winning entry, Elbot from Artificial Solutions, does not mimic a human; its personality is that of a robot, yet Elbot deceived three human judges into believing it was the human during human-parallel comparisons.[46]

During the 2009 competition, held in Brighton, UK, the communication program restricted judges to 10 minutes for each round: 5 minutes to converse with the human and 5 minutes to converse with the program. This was to test the alternative reading of Turing's prediction that the 5-minute interaction was to be with the computer. For the 2010 competition, the sponsor again increased the interaction time between interrogator and system, to 25 minutes.[47]

2014 University of Reading competition

On 7 June 2014, a Turing test competition, organized by Huma Shah and Kevin Warwick to mark the 60th anniversary of Turing's death, was held at the Royal Society in London and was won by the Russian chatterbot Eugene Goostman. The bot, during a series of five-minute-long text conversations, convinced 33% of the contest's judges that it was human. Judges included John Sharkey, a sponsor of the bill granting a government pardon to Turing, AI professor Aaron Sloman, and Red Dwarf actor Robert Llewellyn.[48][49][50][51]

The competition's organisers believed that the Turing test had been "passed for the first time" at the event, saying that "some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations."[49]

The contest has faced criticism.[52] First, only a third of the judges were fooled by the computer. Second, the program's character claimed to be a 13-year-old Ukrainian who had learned English as a second language. The contest required only 30% of judges to be fooled, a criterion based on Turing's prediction in his Computing Machinery and Intelligence paper. Joshua Tenenbaum, an expert in mathematical psychology at MIT, stated that, in his view, the result was unimpressive.[53]

Versions of the Turing test


The Imitation Game, as described by Alan Turing in "Computing Machinery and Intelligence." Player C, through a series of written questions, attempts to determine which of the other two players is a man, and which of the two is the woman. Player A, the man, tries to trick player C into making the wrong decision, while player B tries to help player C. Figure adapted from Saygin, 2000.[1]

Saul Traiger argues that there are at least three primary versions of the Turing test, two of which are offered in "Computing Machinery and Intelligence" and one that he describes as the "Standard Interpretation."[54] While there is some debate regarding whether the "Standard Interpretation" is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent,[54] and their strengths and weaknesses are distinct.[55]

Huma Shah points out that Turing himself was concerned with whether a machine could think, and was providing a simple method to examine this: through human-machine question-answer sessions.[56] Shah argues that the one imitation game Turing described could be put into practice in two different ways: a) a one-to-one interrogator-machine test, and b) a simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator.[22] Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalises naturally to all of human performance capacity, verbal as well as nonverbal (robotic).[57]

Imitation Game

Turing's original game described a simple party game involving three players. Player A is a man, player B is a woman and player C (who plays the role of the interrogator) is of either sex. In the Imitation Game, player C is unable to see either player A or player B, and can communicate with them only through written notes. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one.[1]

Sterrett referred to this as the "Original Imitation Game Test".[58] Turing proposed that the role of player A be filled by a computer, so that its task was to pretend to be a woman and attempt to trick the interrogator into making an incorrect evaluation. The success of the computer was determined by comparing the outcome of the game when player A is a computer against the outcome when player A is a man. Turing stated that if "the interrogator decide[s] wrongly as often when the game is played [with the computer] as he does when the game is played between a man and a woman",[21] it may be argued that the computer is intelligent.

The Original Imitation Game Test, in which the player A is replaced with a computer. The computer is now charged with the role of the man, while player B continues to attempt to assist the interrogator. Figure adapted from Saygin, 2000.[1]

The second version appeared later in Turing's 1950 paper. Similar to the Original Imitation Game Test, the role of player A is performed by a computer. However, the role of player B is performed by a man rather than a woman.
"Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?"[21]
In this version, both player A (the computer) and player B are trying to trick the interrogator into making an incorrect decision.

Standard interpretation

Common understanding has it that the purpose of the Turing Test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human.[1] While there is some dispute whether this interpretation was intended by Turing – Sterrett believes that it was[58] and thus conflates the second version with this one, while others, such as Traiger, do not[54] – this has nevertheless led to what can be viewed as the "standard interpretation." In this version, player A is a computer and player B a person of either sex.
The role of the interrogator is not to determine which is male and which is female, but which is a computer and which is a human.[59] The fundamental question in the standard interpretation is whether the interrogator can differentiate which responder is human and which is machine. There are questions about the test's duration, but the standard interpretation generally assumes it is set to some reasonable length.

Imitation Game vs. Standard Turing Test

Controversy has arisen over which of the alternative formulations of the test Turing intended.[58] Sterrett argues that two distinct tests can be extracted from his 1950 paper and that, pace Turing's remark, they are not equivalent. The test that employs the party game and compares frequencies of success is referred to as the "Original Imitation Game Test," whereas the test consisting of a human judge conversing with a human and a machine is referred to as the "Standard Turing Test," noting that Sterrett equates this with the "standard interpretation" rather than the second version of the imitation game. Sterrett agrees that the Standard Turing Test (STT) has the problems that its critics cite but feels that, in contrast, the Original Imitation Game Test (OIG Test) so defined is immune to many of them, due to a crucial difference: Unlike the STT, it does not make similarity to human performance the criterion, even though it employs human performance in setting a criterion for machine intelligence. A man can fail the OIG Test, but it is argued that it is a virtue of a test of intelligence that failure indicates a lack of resourcefulness: The OIG Test requires the resourcefulness associated with intelligence and not merely "simulation of human conversational behaviour." The general structure of the OIG Test could even be used with non-verbal versions of imitation games.[60]

Still other writers[61] have interpreted Turing as proposing that the imitation game itself is the test. They do not specify how to account for Turing's statement that the test he proposed using the party version of the imitation game is based upon a criterion of comparative frequency of success in that game, rather than the capacity to succeed at a single round of it.

Saygin has suggested that the original game may be a way of proposing a less biased experimental design, as it hides the participation of the computer.[62] The imitation game also includes a "social hack" not found in the standard interpretation, since in that game both the computer and the male human are required to pretend to be someone they are not.[63]

Should the interrogator know about the computer?

A crucial piece of any laboratory test is that there should be a control. Turing never makes clear whether the interrogator in his tests is aware that one of the participants is a computer. However, if there were a machine that had the potential to pass a Turing test, it would be safe to assume a double-blind control would be necessary.

To return to the Original Imitation Game, Turing states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement.[21] When Colby, FD Hilf, S Weber and AD Kramer tested PARRY, they did so by assuming that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation.[64] As Ayse Saygin, Peter Swirski,[65] and others have highlighted, this makes a big difference to the implementation and outcome of the test.[1] In an experimental study looking at Gricean maxim violations, using transcripts of Loebner's one-to-one (interrogator-hidden interlocutor) Prize for AI contests between 1994 and 1999, Saygin found significant differences between the responses of participants who knew and did not know about computers being involved.[66]

Huma Shah and Kevin Warwick, who organized the 2008 Loebner Prize at Reading University, which staged simultaneous comparison tests (one judge, two hidden interlocutors), showed that knowing or not knowing did not make a significant difference in some judges' determinations. Judges were not explicitly told about the nature of the pairs of hidden interlocutors they would interrogate. Judges were able to distinguish human from machine, including when they were faced with control pairs of two humans and two machines embedded among the machine-human set-ups. Spelling errors gave away the hidden humans; machines were identified by 'speed of response' and lengthier utterances.[46]

Strengths of the test

Tractability and simplicity

The power and appeal of the Turing test derives from its simplicity. The philosophy of mind, psychology, and modern neuroscience have been unable to provide definitions of "intelligence" and "thinking" that are sufficiently precise and general to be applied to machines. Without such definitions, the central questions of the philosophy of artificial intelligence cannot be answered. The Turing test, even if imperfect, at least provides something that can actually be measured. As such, it is a pragmatic solution to a difficult philosophical question.

Breadth of subject matter

The format of the test allows the interrogator to give the machine a wide variety of intellectual tasks. Turing wrote that "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include."[67] John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well."[68]

To pass a well-designed Turing test, the machine must use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed: this would force the machine to demonstrate the skill of vision and robotics as well. Together, these represent almost all of the major problems that artificial intelligence research would like to solve.[69]

The Feigenbaum test is designed to take advantage of the broad range of topics available to a Turing test. It is a limited form of Turing's question-answer game which compares the machine against the abilities of experts in specific fields such as literature or chemistry. IBM's Watson machine achieved success in Jeopardy!, a man-versus-machine television quiz show of human knowledge.[70]

Weaknesses of the test

Turing did not explicitly state that the Turing test could be used as a measure of intelligence, or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.

Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. It assumes that an interrogator can determine if a machine is "thinking" by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing only behaviour and the value of comparing the machine with a human. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.

Human intelligence vs intelligence in general


The Turing test does not directly test whether the computer behaves intelligently – it tests only whether the computer behaves like a human being. Since human behaviour and intelligent behaviour are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behaviour is unintelligent
The Turing test requires that the machine be able to execute all human behaviours, regardless of whether they are intelligent. It even tests for behaviours that we may not consider intelligent at all, such as the susceptibility to insults,[71] the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviours in detail it fails the test.
This objection was raised by The Economist in an article entitled "Artificial Stupidity," published shortly after the first Loebner Prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors."[39] Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game,[72] as sketched after this list.
Some intelligent behaviour is inhuman
The Turing test does not test for highly intelligent behaviours, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test.
Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate superintelligent systems have been proposed.[73]
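
To illustrate Turing's suggestion mentioned above, here is a minimal sketch (in Python; the swap-based error model and the 3% rate are illustrative assumptions, not anything Turing or a Loebner entrant specified) of how a program might inject human-like typing errors into its output:

    import random

    def add_typing_errors(text: str, rate: float = 0.03) -> str:
        """Randomly swap adjacent characters to mimic human typos.

        The 3% rate is an arbitrary illustrative choice.
        """
        chars = list(text)
        i = 0
        while i < len(chars) - 1:
            if random.random() < rate:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
                i += 2  # skip past the swapped pair
            else:
                i += 1
        return "".join(chars)

    # e.g. add_typing_errors("the quick brown fox") might return
    # "teh quick brown fox"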

Real intelligence vs simulated intelligence

The Turing test is concerned strictly with how the subject acts – the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of intelligence. 
The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.
John Searle has argued that external behaviour cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking."[33] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)
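
To make the point about mechanical rules concrete, here is a minimal ELIZA-style sketch (in Python; the patterns and canned responses are illustrative, not Weizenbaum's actual script): each rule simply pairs a keyword pattern with a "reflection" template, with no understanding anywhere in the loop.

    import re

    # Each rule pairs a keyword pattern with a canned reflection template.
    RULES = [
        (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # content-free default when no rule matches

    # e.g. respond("I am unhappy") -> "How long have you been unhappy?"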

Turing anticipated this line of criticism in his original paper,[74] writing:
I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[75]

Naivety of interrogators and the anthropomorphic fallacy

In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill or naivety of the questioner.

Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".[44]

Shah and Warwick (2009b) show that experts can be fooled, and that interrogator strategy ("power" versus "solidarity") affects correct identification, the solidarity strategy being more successful.

Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogator" is not even aware of the possibility that they are interacting with a computer. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.

Early Loebner prize competitions used "unsophisticated" interrogators who were easily fooled by the machines.[40] Since 2004, the Loebner Prize organizers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.[76]

Michael Shermer points out that human beings consistently choose to consider non-human objects as human whenever they are allowed the chance, a mistake called the anthropomorphic fallacy: they talk to their cars, ascribe desire and intentions to natural forces (e.g., "nature abhors a vacuum"), and worship the sun as a human-like being with intelligence. If the Turing test is applied to religious objects, Shermer argues, then inanimate statues, rocks, and places have consistently passed the test throughout history.[citation needed] This human tendency towards anthropomorphism effectively lowers the bar for the Turing test, unless interrogators are specifically trained to avoid it.

Human misidentification

One interesting feature of the Turing test is the frequency with which hidden human foils are misidentified by interrogators as machines.[77] It has been suggested that interrogators look for expected human responses rather than typical ones. As a result, some individuals are often categorized as machines, which can work in favor of a competing machine.

Impracticality and irrelevance: the Turing test and AI research

Mainstream AI researchers argue that trying to pass the Turing Test is merely a distraction from more fruitful research.[41] Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test."[78] There are several reasons.

First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as automated scheduling, object recognition, or logistics. To test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly. Russell and Norvig suggest an analogy with the history of flight: Planes are tested by how well they fly, not by comparing them to birds. "Aeronautical engineering texts," they write, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"[78]

Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence.

Turing wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence.[79] John McCarthy observes that the philosophy of AI is "unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science."[80]

Variations of the Turing test

Numerous other versions of the Turing test, including those expounded above, have been mooted through the years.

Reverse Turing test and CAPTCHA

A modification of the Turing test wherein the objective of one or more of the roles has been reversed between machines and humans is termed a reverse Turing test. An example is implied in the work of psychoanalyst Wilfred Bion,[81] who was particularly fascinated by the "storm" that resulted from the encounter of one mind by another. In his 2000 book,[65] among several other original points with regard to the Turing test, literary scholar Peter Swirski discussed in detail the idea of what he termed the Swirski test—essentially the reverse Turing test. He pointed out that it overcomes most if not all standard objections levelled at the standard version.
Carrying this idea forward, R. D. Hinshelwood[82] described the mind as a "mind recognizing apparatus." The challenge would be for the computer to be able to determine if it were interacting with a human or another computer. This is an extension of the original question that Turing attempted to answer but would, perhaps, offer a high enough standard to define a machine that could "think" in a way that we typically define as characteristically human.

CAPTCHA is a form of reverse Turing test. Before being allowed to perform some action on a website, the user is presented with alphanumerical characters in a distorted graphic image and asked to type them out. This is intended to prevent automated systems from being used to abuse the site. The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human.
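
A minimal sketch of the server-side logic (in Python; the function names are hypothetical, and the distorted-image rendering that a real scheme depends on is omitted):

    import random
    import string

    def make_challenge(length: int = 6) -> str:
        # Ground-truth string; a real CAPTCHA would render this as a
        # distorted image before sending it to the client.
        alphabet = string.ascii_uppercase + string.digits
        return "".join(random.choices(alphabet, k=length))

    def verify(expected: str, response: str) -> bool:
        # Many schemes compare case-insensitively; a mismatch suggests
        # either an automated system or a human who misread the image.
        return response.strip().upper() == expected

    # e.g. challenge = make_challenge(); render it, collect user input,
    # then call verify(challenge, user_input)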

Software that can defeat CAPTCHAs with some accuracy, by analysing patterns in the generating engine, is being actively developed.[83] Optical character recognition (OCR) is also under development as a workaround for the inaccessibility of several CAPTCHA schemes to humans with disabilities.

Subject matter expert Turing test

Another variation is described as the subject matter expert Turing test, where a machine's response cannot be distinguished from an expert in a given field. This is also known as a "Feigenbaum test" and was proposed by Edward Feigenbaum in a 2003 paper.[84]

Total Turing test

The "Total Turing test"[57] variation of the Turing test adds two further requirements to the traditional Turing test. The interrogator can also test the perceptual abilities of the subject (requiring computer vision) and the subject's ability to manipulate objects (requiring Robotics).[85]

Minimum Intelligent Signal Test

The Minimum Intelligent Signal Test was proposed by Chris McKinstry as "the maximum abstraction of the Turing test",[86] in which only binary responses (true/false or yes/no) are permitted, to focus only on the capacity for thought. It eliminates text chat problems like anthropomorphism bias, and does not require emulation of unintelligent human behaviour, allowing for systems that exceed human intelligence. The questions must each stand on their own, however, making it more like an IQ test than an interrogation. It is typically used to gather statistical data against which the performance of artificial intelligence programs may be measured.[87]
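
Since only yes/no answers are allowed, performance can be scored against chance directly. A minimal sketch (in Python; an exact one-sided binomial test is one standard way to score such data, not necessarily McKinstry's own procedure):

    from math import comb

    def p_value(correct: int, total: int, chance: float = 0.5) -> float:
        """One-sided probability of getting at least `correct` of
        `total` binary questions right by guessing alone."""
        return sum(
            comb(total, k) * chance**k * (1 - chance) ** (total - k)
            for k in range(correct, total + 1)
        )

    # e.g. p_value(60, 100) is roughly 0.028: 60 of 100 correct answers
    # is already unlikely to arise from pure guessing.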

Hutter Prize

The organizers of the Hutter Prize believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test.

The data compression test has some advantages over most versions and variations of a Turing test, including:
  • It gives a single number that can be directly used to compare which of two machines is "more intelligent."
  • It does not require the computer to lie to the judge.
The main disadvantages of using data compression as a test are:
  • It is not possible to test humans this way.
  • It is unknown what particular "score" on this test—if any—is equivalent to passing a human-level Turing test.
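
A minimal sketch of the single-number comparison in the first bullet above (in Python, using off-the-shelf compressors as stand-ins for competing "intelligences"; the actual Hutter Prize instead measures the size of a self-extracting archive of a fixed excerpt of Wikipedia, counting the decompressor itself):

    import bz2
    import zlib

    # Stand-in corpus; the real contest uses a fixed Wikipedia excerpt.
    corpus = ("Natural language is full of statistical regularities "
              "that a good model can exploit. " * 200).encode()

    # A lower compressed size means more regularity captured, which is
    # "more intelligent" under the prize's premise.
    for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
        print(f"{name}: {len(compress(corpus))} bytes")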

Other tests based on compression or Kolmogorov complexity

A related approach, which appeared much earlier than Hutter's prize, in the late 1990s, is the inclusion of compression problems in an extended Turing test,[88] as well as tests derived entirely from Kolmogorov complexity.[89] Other related tests in this line are presented by Hernandez-Orallo and Dowe.[90]

Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure from Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence.[91]
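
In outline (a sketch of the measure; see Legg and Hutter for the precise formulation), the universal intelligence Υ of an agent π is its expected reward across all computable environments μ, weighted by their simplicity:

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

    % E        : a set of computable, reward-generating environments
    % K(\mu)   : the Kolmogorov complexity of environment \mu
    % V_\mu^\pi: the expected cumulative reward agent \pi achieves in \mu

AIQ makes this computable in practice by replacing the uncomputable K(μ) with program length on a concrete reference machine and estimating the sum by sampling environments.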

Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

Ebert test

The Turing test inspired the Ebert test, proposed in 2011 by film critic Roger Ebert, which tests whether a computer-based synthesised voice has sufficient skill in terms of intonation, inflection, timing and so forth, to make people laugh.[92]

Predictions

Turing predicted that machines would eventually be able to pass the test; in fact, he estimated that by the year 2000, machines with around 100 MB of storage would be able to fool 30% of human judges in a five-minute test, and that people would no longer consider the phrase "thinking machine" contradictory.[3] (In practice, from 2009 to 2012, the Loebner Prize chatterbot contestants managed to fool a judge only once,[93] and that was only because the human contestant pretended to be a chatbot.[94]) He further predicted that machine learning would be an important part of building powerful machines, a claim considered plausible by contemporary researchers in artificial intelligence.[44]

In a 2008 paper submitted to the 19th Midwest Artificial Intelligence and Cognitive Science Conference, Shane T. Mueller predicted that a modified Turing test called a "Cognitive Decathlon" could be accomplished within five years.[95]

By extrapolating an exponential growth of technology over several decades, futurist Ray Kurzweil predicted that Turing test-capable computers would be manufactured in the near future. In 1990, he set the year around 2020.[96] By 2005, he had revised his estimate to 2029.[97]

The Long Bet Project Bet Nr. 1 is a wager of $20,000 between Mitch Kapor (pessimist) and Ray Kurzweil (optimist) about whether a computer will pass a lengthy Turing Test by the year 2029.
During the Long Now Turing Test, each of three Turing Test Judges will conduct online interviews of each of the four Turing Test Candidates (i.e., the computer and the three Turing Test Human Foils) for two hours each, for a total of eight hours of interviews per judge. The bet specifies the conditions in some detail.[98]

Conferences

Turing Colloquium

1990 marked the fortieth anniversary of the first publication of Turing's "Computing Machinery and Intelligence" paper, and, thus, saw renewed interest in the test. Two significant events occurred in that year: The first was the Turing Colloquium, which was held at the University of Sussex in April, and brought together academics and researchers from a wide variety of disciplines to discuss the Turing Test in terms of its past, present, and future; the second was the formation of the annual Loebner Prize competition.

Blay Whitby lists four major turning points in the history of the Turing Test – the publication of "Computing Machinery and Intelligence" in 1950, the announcement of Joseph Weizenbaum's ELIZA in 1966, Kenneth Colby's creation of PARRY, which was first described in 1972, and the Turing Colloquium in 1990.[99]

2005 Colloquium on Conversational Systems

In November 2005, the University of Surrey hosted an inaugural one-day meeting of artificial conversational entity developers,[100] attended by winners of practical Turing Tests in the Loebner Prize: Robby Garner, Richard Wallace and Rollo Carpenter. Invited speakers included David Hamill, Hugh Loebner (sponsor of the Loebner Prize) and Huma Shah.

2008 AISB Symposium on the Turing Test

In parallel to the 2008 Loebner Prize held at the University of Reading,[101] the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) hosted a one-day symposium to discuss the Turing Test, organized by John Barnden, Mark Bishop, Huma Shah and Kevin Warwick.[102] Speakers included the Royal Institution's director, Baroness Susan Greenfield, Selmer Bringsjord, Turing's biographer Andrew Hodges, and consciousness scientist Owen Holland. No agreement emerged on a canonical Turing Test, though Bringsjord suggested that a sizeable prize would result in the Turing Test being passed sooner.

2010 AISB symposium on the Turing Test

Sixty years after its introduction, continued argument over Turing's "Can machines think?" experiment led to its reconsideration for the 21st century at a symposium of the AISB Convention, held 29 March to 1 April 2010 at De Montfort University, UK. The AISB is the (British) Society for the Study of Artificial Intelligence and the Simulation of Behaviour.[103]

The Alan Turing Year, and Turing100 in 2012

Throughout 2012, a number of major events took place to celebrate Turing's life and scientific impact. The Turing100 group supported these events and also organized a special Turing test event at Bletchley Park on 23 June 2012 to celebrate the 100th anniversary of Turing's birth.

2012 AISB/IACAP symposium on the Turing Test

The latest discussions of the Turing Test took place at a symposium with 11 speakers, organized by Vincent C. Müller (ACT & Oxford) and Aladdin Ayesh (De Montfort), together with Mark Bishop, John Barnden, Alessio Plebe and Pietro Perconti.
