
Wednesday, February 17, 2021

Sign (semiotics)

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Sign_(semiotics) 

In semiotics, a sign is anything that communicates a meaning that is not the sign itself to the interpreter of the sign. The meaning can be intentional, such as a word uttered with a specific meaning, or unintentional, such as a symptom being a sign of a particular medical condition. Signs can communicate through any of the senses: visual, auditory, tactile, olfactory, or gustatory.

Two major theories describe the way signs acquire the ability to transfer information. Both theories understand the defining property of the sign as a relation between a number of elements. In the tradition of semiotics developed by Ferdinand de Saussure (referred to as semiology) the sign relation is dyadic, consisting only of a form of the sign (the signifier) and its meaning (the signified). Saussure saw this relation as being essentially arbitrary (the principle of semiotic arbitrariness), motivated only by social convention. Saussure's theory has been particularly influential in the study of linguistic signs. The other major semiotic theory, developed by C. S. Peirce, defines the sign as a triadic relation: "something that stands for something, to someone in some capacity". This means that a sign is a relation between the sign vehicle (the specific physical form of the sign), a sign object (the aspect of the world that the sign carries meaning about) and an interpretant (the meaning of the sign as understood by an interpreter). According to Peirce, signs can be divided by the type of relation that holds the sign relation together into icons, indices, or symbols. Icons are those signs that signify by means of similarity between sign vehicle and sign object (e.g. a portrait, or a map), indices are those that signify by means of a direct relation of contiguity or causality between sign vehicle and sign object (e.g. a symptom), and symbols are those that signify through a law or arbitrary social convention.

Dyadic signs

According to Saussure (1857–1913), a sign is composed of the signifier (signifiant), and the signified (signifié). These cannot be conceptualized as separate entities but rather as a mapping from significant differences in sound to potential (correct) differential denotation. The Saussurean sign exists only at the level of the synchronic system, in which signs are defined by their relative and hierarchical privileges of co-occurrence. It is thus a common misreading of Saussure to take signifiers to be anything one could speak, and signifieds as things in the world. In fact, the relationship of language to parole (or speech-in-context) is and always has been a theoretical problem for linguistics (cf. Roman Jakobson's famous essay "Closing Statement: Linguistics and Poetics" et al.).

A famous thesis by Saussure states that the relationship between a sign and the real-world thing it denotes is an arbitrary one. There is not a natural relationship between a word and the object it refers to, nor is there a causal relationship between the inherent properties of the object and the nature of the sign used to denote it. For example, there is nothing about the physical quality of paper that requires denotation by the phonological sequence ‘paper’. There is, however, what Saussure called ‘relative motivation’: the possibilities of signification of a signifier are constrained by the compositionality of elements in the linguistic system (cf. Emile Benveniste's paper on the arbitrariness of the sign in the first volume of his papers on general linguistics). In other words, a word is only available to acquire a new meaning if it is identifiably different from all the other words in the language and it has no existing meaning. Structuralism was later based on this idea that it is only within a given system that one can define the distinction between the levels of system and use, or the semantic "value" of a sign.

Triadic signs

Charles Sanders Peirce (1839–1914) proposed a different theory. Unlike Saussure who approached the conceptual question from a study of linguistics and phonology, Peirce, the so-called father of the Pragmatist school of philosophy, extended the concept of sign to embrace many other forms. He considered "word" to be only one particular kind of sign, and characterized sign as any mediational means to understanding. He covered not only artificial, linguistic, and symbolic signs, but also all semblances (such as kindred sensible qualities), and all indicators (such as mechanical reactions). He counted as symbols all terms, propositions, and arguments whose interpretation is based upon convention or habit, even apart from their expression in particular languages. He held that "all this universe is perfused with signs, if it is not composed exclusively of signs". The setting of Peirce's study of signs is philosophical logic, which he defined as formal semiotic, and characterized as a normative field following esthetics and ethics, as more basic than metaphysics, and as the art of devising methods of research. He argued that, since all thought takes time, all thought is in signs, that all thought has the form of inference (even when not conscious and deliberate), and that, as inference, "logic is rooted in the social principle", since inference depends on a standpoint that, in a sense, is unlimited. The result is a theory not of language in particular, but rather of the production of meaning, and it rejects the idea of a static relationship between a sign and what it represents: its object. Peirce believed that signs are meaningful through recursive relationships that arise in sets of three.

Even when a sign represents by a resemblance or factual connection independent of interpretation, the sign is a sign only insofar as it is at least potentially interpretable by a mind and insofar as the sign is a determination of a mind or at least a quasi-mind, something that functions as if it were a mind, for example in crystals and the work of bees—the focus here is on sign action in general, not on psychology, linguistics, or social studies (fields Peirce also pursued).

A sign depends on an object in a way that enables (and, in a sense, determines) an interpretation, an interpretant, to depend on the object as the sign depends on the object. The interpretant, then, is a further sign of the object, and thus enables and determines still further interpretations, further interpretant signs. The process, called semiosis, is irreducibly triadic, Peirce held, and is logically structured to perpetuate itself. It is what defines sign, object, and interpretant in general. As Jean-Jacques Nattiez (1990: 7) put it, "the process of referring effected by the sign is infinite." (Peirce used the word "determine" in the sense not of strict determinism, but of effectiveness that can vary like an influence.)

Peirce further characterized the three semiotic elements as follows:

  1. Sign (or representamen): that which represents the denoted object (cf. Saussure's "signifier").
  2. Object (or semiotic object): that which the sign represents (or as some put it, encodes). It can be anything thinkable, a law, a fact, or even a possibility (a semiotic object could even be fictional, such as Hamlet); those are partial objects; the total object is the universe of discourse, the totality of objects in that world to which one attributes the partial object. For example, perturbation of Pluto's orbit is a sign about Pluto, but not only about Pluto. The object may be
    1. immediate to the sign, the object as represented in the sign, or
    2. dynamic, the object as it really is, on which the immediate object is founded.
  3. Interpretant (or interpretant sign): a sign's meaning or ramification as formed into a further sign by interpreting (or, as some put it, decoding) the sign. The interpretant may be:
    1. immediate to the sign, a kind of possibility, all that the sign is suited to immediately express, for instance a word's usual meaning;
    2. dynamic, that is, the meaning as formed into an actual effect, for example an individual translation or a state of agitation, or
    3. final or normal, that is, the ultimate meaning that inquiry taken far enough would be destined to reach. It is a kind of norm or ideal end, with which an actual interpretant may, at most, coincide.

Peirce explained that signs mediate between their objects and their interpretants in semiosis, the triadic process of determination. In semiosis a first is determined or influenced to be a sign by a second, as its object. The object determines the sign to determine a third as an interpretant. Firstness itself is one of Peirce's three categories of all phenomena, and is quality of feeling. Firstness is associated with a vague state of mind as feeling and a sense of the possibilities, with neither compulsion nor reflection. In semiosis the mind discerns an appearance or phenomenon, a potential sign. Secondness is reaction or resistance, a category associated with moving from possibility to determinate actuality. Here, through experience outside of and collateral to the given sign or sign system, one recalls or discovers the object the sign refers to, for example when a sign consists in a chance semblance of an absent but remembered object. It is through one's collateral experience that the object determines the sign to determine an interpretant. Thirdness is representation or mediation, the category associated with signs, generality, rule, continuity, habit-taking, and purpose. Here one forms an interpretant expressing a meaning or ramification of the sign about the object. When a second sign is considered, the initial interpretant may be confirmed, or new possible meanings may be identified. As each new sign is addressed, more interpretants, themselves signs, emerge. It can involve a mind's reading of nature, people, mathematics, anything.

Peirce generalized the communicational idea of utterance and interpretation of a sign, to cover all signs:

Admitting that connected Signs must have a Quasi-mind, it may further be declared that there can be no isolated sign. Moreover, signs require at least two Quasi-minds; a Quasi-utterer and a Quasi-interpreter; and although these two are at one (i.e., are one mind) in the sign itself, they must nevertheless be distinct. In the Sign they are, so to say, welded. Accordingly, it is not merely a fact of human Psychology, but a necessity of Logic, that every logical evolution of thought should be dialogic.

According to Nattiez, writing with Jean Molino, the tripartite definition of sign, object, and interpretant is based on the "trace" or neutral level, Saussure's "sound-image" (or "signifier", thus Peirce's "representamen"). Thus, "a symbolic form...is not some 'intermediary' in a process of 'communication' that transmits the meaning intended by the author to the audience; it is instead the result of a complex process of creation (the poietic process) that has to do with the form as well as the content of the work; it is also the point of departure for a complex process of reception (the esthesic process that reconstructs a 'message'").

Molino's and Nattiez's diagram:

Poietic process:  "Producer" → Trace
Esthesic process: Trace ← Receiver
(Nattiez 1990, p. 17)

Peirce's theory of the sign therefore offered a powerful analysis of the signification system, its codes, and its processes of inference and learning—because the focus was often on natural or cultural context rather than on linguistics, which analyses usage only in slow time, whereas real-world human semiotic interaction often involves a chaotic blur of language and signal exchange. Nevertheless, the implication that triadic relations are structured to perpetuate themselves leads to a level of complexity not usually experienced in the routine of message creation and interpretation. Hence, different ways of expressing the idea have developed.

Classes of triadic signs

By 1903 Peirce came to classify signs by three universal trichotomies dependent on his three categories (quality, fact, habit). He classified any sign:

  1. by what stands as the sign — either (qualisign, also called a tone) a quality — or (sinsign, also called token) an individual fact — or (legisign, also called type) a rule, a habit;
  2. by how the sign stands for its object — either (icon) by its own quality, such that it resembles the object, regardless of factual connection and of interpretive rule of reference — or (index) by factual connection to its object, regardless of resemblance and of interpretive rule of reference — or (symbol) by rule or habit of interpreted reference to its object, regardless of resemblance and of factual connection; and
  3. by how the sign stands for its object to its interpretant — either (rheme, also called seme, such as a term) as regards quality or possibility, as if the sign were a qualisign, though it can be qualisign, sinsign, or legisign — or (dicisign, also called pheme, such as a proposition) as regards fact, as if the sign were an index, though it can be index or symbol — or (argument, also called delome) as regards rule or habit. This is the trichotomy of all signs as building blocks in an inference process.
  • Any qualisign is an icon. Sinsigns include some icons and some indices. Legisigns include some icons, some indices, and all symbols.
  • Any icon is a rheme. Indices (be they sinsigns or legisigns) include some rhemes and some dicisigns. Symbols include some rhemes, some dicisigns, and all arguments.

Because of those classificatory interdependences, the three trichotomies intersect to form ten (rather than 27) classes of signs. There are also various kinds of meaningful combination. Signs can be attached to one another. A photograph is an index with a meaningfully attached icon. Arguments are composed of dicisigns, and dicisigns are composed of rhemes. In order to be embodied, legisigns (types) need sinsigns (tokens) as their individual replicas or instances. A symbol depends as a sign on how it will be interpreted, regardless of resemblance or factual connection to its object; but the symbol's individual embodiment is an index to your experience of the object. A symbol is instanced by a specialized indexical sinsign. A symbol such as a sentence in a language prescribes qualities of appearance for its instances, and is itself a replica of a symbol such as a proposition apart from expression in a particular language. Peirce covered both semantic and syntactical issues in his theoretical grammar, as he sometimes called it. He regarded formal semiotic, as logic, as furthermore encompassing study of arguments (hypothetical, deductive, and inductive) and inquiry's methods including pragmatism; and as allied to but distinct from logic's pure mathematics.
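Those interdependences can be checked mechanically. The following sketch is only an illustration, not Peirce's notation: it assumes a simple rank ordering within each trichotomy, enumerates the combinations allowed by the two rules listed above, and confirms that exactly ten of the 27 possible triples survive.

    from itertools import product

    # Rank each trichotomy so that the two rules quoted above become
    # inequalities: the object-relation rank may not exceed the rank of what
    # stands as the sign, and the interpretant-relation rank may not exceed
    # the object-relation rank.
    SIGN = ["qualisign", "sinsign", "legisign"]        # what stands as the sign
    OBJECT = ["icon", "index", "symbol"]               # how it stands for its object
    INTERPRETANT = ["rheme", "dicisign", "argument"]   # how it stands to its interpretant

    classes = [(s, o, i)
               for s, o, i in product(range(3), repeat=3)
               if o <= s and i <= o]                   # the two interdependence rules

    for s, o, i in classes:
        print(SIGN[s], OBJECT[o], INTERPRETANT[i])
    print(len(classes))  # 10, rather than the 27 unconstrained combinations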

Peirce sometimes referred to the “ground” of a sign. The ground is the pure abstraction of a quality. A sign's ground is the respect in which the sign represents its object, e.g. as in literal and figurative language. For example, an icon presents a characteristic or quality attributed to an object, while a symbol imputes to an object a quality either presented by an icon or symbolized so as to evoke a mental icon.

Peirce called an icon apart from a label, legend, or other index attached to it, a "hypoicon", and divided the hypoicon into three classes: (a) the image, which depends on a simple quality; (b) the diagram, whose internal relations, mainly dyadic or so taken, represent by analogy the relations in something; and (c) the metaphor, which represents the representative character of a sign by representing a parallelism in something else. A diagram can be geometric, or can consist in an array of algebraic expressions, or even in the common form "All __ is ___" which is subjectable, like any diagram, to logical or mathematical transformations. Peirce held that mathematics is done by diagrammatic thinking — observation of, and experimentation on, diagrams. Peirce developed for deductive logic a system of visual existential graphs, which continue to be researched today.

20th-century theories

It is now agreed that the effectiveness of the acts that may convert the message into text (including speaking, writing, drawing, music and physical movements) depends upon the knowledge of the sender. If the sender is not familiar with the current language, its codes and its culture, then he or she will not be able to say anything at all, whether as a visitor in a different language area or because of a medical condition such as aphasia.

Modern theories deny the Saussurean distinction between signifier and signified, and look for meaning not in the individual signs, but in their context and the framework of potential meanings that could be applied. Such theories assert that language is a collective memory or cultural history of all the different ways in which meaning has been communicated, and may, to that extent, constitute all of life's experiences (see Louis Hjelmslev). Hjelmslev did not consider the sign to be the smallest semiotic unit, as he believed it possible to decompose it further; instead, he considered the "internal structure of language" to be a system of figurae, a concept somewhat related to that of figure of speech, which he considered to be the ultimate semiotic unit.

This position implies that speaking is simply one more form of behaviour and changes the focus of attention from the text as language, to the text as a representation of purpose, a functional version of the author's intention. But, once the message has been transmitted, the text exists independently.

Hence, although the writers who co-operated to produce this page exist, they can only be represented by the signs actually selected and presented here. The interpretation process in the receiver's mind may attribute meanings completely different from those intended by the senders. But, why might this happen? Neither the sender nor the receiver of a text has a perfect grasp of all language. Each individual's relatively small stock of knowledge is the product of personal experience and their attitude to learning. When the audience receives the message, there will always be an excess of connotational meanings available to be applied to the particular signs in their context (no matter how relatively complete or incomplete their knowledge, the cognitive process is the same).

The first stage in understanding the message is therefore to suspend or defer judgement until more information becomes available. At some point, the individual receiver decides which of all the possible meanings represents the best possible fit. Sometimes, uncertainty may not be resolved, so meaning is indefinitely deferred, or a provisional or approximate meaning is allocated. More often, the receiver's desire for closure leads to simple meanings being attributed out of prejudice and without reference to the sender's intentions.

Postmodern theory

In critical theory, the notion of sign is used variously.

Many postmodernist theorists postulate a complete disconnection of the signifier and the signified. An 'empty' or 'floating signifier' is variously defined as a signifier with a vague, highly variable, unspecifiable or non-existent signified. Such signifiers mean different things to different people: they may stand for many or even any signifieds; they may mean whatever their interpreters want them to mean.

Tuesday, February 16, 2021

Ideasthesia

From Wikipedia, the free encyclopedia
 
Example of associations between graphemes and colors that are described more accurately as ideasthesia than as synesthesia

Ideasthesia (alternative spelling ideaesthesia) is a neuroscientific phenomenon in which activations of concepts (inducers) evoke perception-like sensory experiences (concurrents). The name comes from the Ancient Greek ἰδέα (idéa) and αἴσθησις (aísthēsis), meaning "sensing concepts" or "sensing ideas". The notion was introduced by neuroscientist Danko Nikolić as an alternative explanation for a set of phenomena traditionally covered by synesthesia.

While "synesthesia" meaning "union of senses" implies the association of two sensory elements with little connection to the cognitive level, empirical evidence indicated that most phenomena linked to synesthesia are in fact induced by semantic representations. That is, the linguistic meaning of the stimulus is what is important rather than its sensory properties. In other words, while synesthesia presumes that both the trigger (inducer) and the resulting experience (concurrent) are of sensory nature, ideasthesia presumes that only the resulting experience is of sensory nature while the trigger is semantic.

Research has since extended the concept to topics other than synesthesia; as it turned out to be applicable to everyday perception, the concept has developed into a theory of how we perceive. For example, ideasthesia has been applied to the theory of art and could bear important implications for explaining human conscious experience, which, according to ideasthesia, is grounded in how we activate concepts.

Examples and evidence

A drawing by a synesthete which illustrates time unit-space synesthesia/ideasthesia. The months in a year are organized into a circle surrounding the synesthete's body, each month having a fixed location in space and a unique color.

A common example of synesthesia is the association between graphemes and colors, usually referred to as grapheme-color synesthesia. Here, letters of the alphabet are associated with vivid experiences of color. Studies have indicated that the perceived color is context-dependent and is determined by the extracted meaning of a stimulus. For example, an ambiguous glyph that can be read either as 'S' or '5' will evoke the color associated with 'S' or with '5', depending on the context in which it is presented. If presented among numbers, it will be interpreted as '5' and will evoke the corresponding color. If presented among letters, it will be interpreted as 'S' and will evoke the corresponding synesthetic color.
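A toy model may make the claim concrete: the synesthetic color is keyed to the interpreted concept, not to the raw glyph. The sketch below is purely illustrative; the association table and the context rule are invented for the example, not taken from any study.

    # Hypothetical associations: colors attach to concepts (the digit five,
    # the letter S), not to raw glyph shapes.
    CONCEPT_COLORS = {"DIGIT_5": "red", "LETTER_S": "green"}

    def interpret(glyph, context):
        """Resolve an ambiguous glyph from its surroundings."""
        if glyph in {"5", "S"}:  # ambiguous between digit and letter
            return "DIGIT_5" if all(c.isdigit() for c in context) else "LETTER_S"
        return "DIGIT_5" if glyph.isdigit() else "LETTER_S"

    def concurrent_color(glyph, context):
        # Ideasthesia: the concurrent follows the extracted meaning.
        return CONCEPT_COLORS[interpret(glyph, context)]

    print(concurrent_color("5", ["4", "6", "7"]))  # among numbers -> red
    print(concurrent_color("5", ["R", "T", "U"]))  # among letters -> green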

Further evidence that grapheme-color associations are semantically driven comes from the finding that colors can be flexibly associated with graphemes as new meanings become assigned to those graphemes. In one study, synesthetes were presented with Glagolitic letters that they had never seen before, and the meaning was acquired through a short writing exercise. The Glagolitic graphemes inherited the colors of the corresponding Latin graphemes as soon as they acquired the new meaning.

In another study, synesthetes were prompted to form novel synesthetic associations to graphemes they had never seen before. Synesthetes created those associations within minutes or seconds, too short a time to account for the creation of new physical connections between color-representation and grapheme-representation areas in the brain, pointing again towards ideasthesia. The time course is, however, consistent with postsynaptic AMPA receptor upregulation and/or NMDA receptor coactivation, which would imply that the real-time experience is invoked at the synaptic level of analysis prior to the establishment of novel wiring per se, an intuitively appealing model.

For lexical-gustatory synesthesia evidence also points towards ideasthesia: In lexical-gustatory synesthesia, verbalisation of the stimulus is not necessary for the experience of concurrents. Instead, it is sufficient to activate the concept.

Another case of synesthesia is swimming-style synesthesia in which each swimming style is associated with a vivid experience of a color. These synesthetes do not need to perform the actual movements of a corresponding swimming style. To activate the concurrent experiences, it is sufficient to activate the concept of a swimming style (e.g., by presenting a photograph of a swimmer or simply talking about swimming).

It has been argued that grapheme-color synesthesia for geminate consonants also provides evidence for ideasthesia.

In pitch-color synesthesia, the same tone will be associated with different colors depending on how it has been named: do-sharp (i.e. di) will have a color similar to that of do (e.g., a reddish color) and re-flat (i.e. ra) will have a color similar to that of re (e.g., a yellowish one), although the two names refer to the same tone. Similar semantic associations have been found between the acoustic characteristics of vowels and the notion of size.

One-shot synesthesia: There are synesthetic experiences that can occur just once in a lifetime, and are thus dubbed one-shot synesthesia. Investigation of such cases has indicated that these unique experiences typically occur when a synesthete is involved in intensive mental and emotional activity, such as making important plans for the future or reflecting on one's life. It has thus been concluded that this is also a form of ideasthesia.

In normal perception

Which one would be called Bouba and which Kiki? Responses are highly consistent among people. This is an example of ideasthesia as the conceptualization of the stimulus plays an important role.

Over the past decade, it has been suggested that the Bouba/Kiki phenomenon is a case of ideasthesia. Most people will agree that the star-shaped object on the left is named Kiki and the round one on the right Bouba. It has been assumed that these associations come from direct connections between the visual and auditory cortices: according to that hypothesis, representations of sharp inflections in the star-shaped object would be physically connected to representations of sharp inflections in the sound of Kiki. However, Gomez et al. have shown that the Kiki/Bouba associations are much richer: each word and each image is semantically associated with a number of concepts, such as white or black color, feminine vs. masculine, cold vs. hot, and others. These sound-shape associations seem to arise from a large overlap between the semantic networks of Kiki and the star shape on the one hand, and of Bouba and the round shape on the other. For example, both Kiki and the star shape are clever, small, thin and nervous. This indicates that a rich semantic network lies behind the Kiki-Bouba effect. In other words, our sensory experience is largely determined by the meaning that we assign to stimuli. Food description and wine tasting are other domains in which ideasthetic associations between flavor and other modalities, such as shape, may play an important role. Such semantic-like relations play a role in successful marketing; the name of a product should match its other characteristics.
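One way to picture the proposed mechanism is as overlap between semantic feature sets. The sketch below is a toy illustration: a few of the features are taken from the description above, the rest are invented, and a simple Jaccard overlap stands in for whatever measure Gomez et al. actually used.

    def jaccard(a, b):
        """Overlap between two semantic feature sets."""
        return len(a & b) / len(a | b)

    # Toy semantic networks (illustrative feature sets, not experimental data).
    kiki  = {"sharp", "clever", "small", "thin", "nervous", "cold"}
    star  = {"sharp", "clever", "small", "thin", "nervous", "bright"}
    bouba = {"round", "soft", "large", "warm", "calm"}
    blob  = {"round", "soft", "large", "warm", "heavy"}

    # The word is matched to the shape whose semantic network it overlaps most.
    print(jaccard(kiki, star), jaccard(kiki, blob))    # high vs. zero
    print(jaccard(bouba, blob), jaccard(bouba, star))  # high vs. zero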

Implications for development of synesthesia

The concept of ideasthesia bears implications for understanding how synesthesia develops in children. Synesthetic children may associate concrete sensory-like experiences primarily with the abstract concepts that they otherwise have difficulty dealing with. Synesthesia may thus be used as a cognitive tool to cope with the abstractness of the learning materials imposed by the educational system — also referred to as the "semantic vacuum hypothesis". This hypothesis explains why the most common inducers in synesthesia are graphemes and time units — both relating to the first truly abstract ideas that a child needs to master.

Implications for art theory

The concept of ideasthesia has often been discussed in relation to art, and has also been used to formulate a psychological theory of art. According to the theory, we consider something to be a piece of art when the experiences induced by the piece are accurately balanced with the semantics induced by the same piece. Thus, a piece of art makes us both strongly think and strongly experience. Moreover, the two must be perfectly balanced such that the most salient stimulus or event is both the one that evokes the strongest experiences (fear, joy, ...) and the strongest cognition (recall, memory, ...) — in other words, idea is well balanced with aesthesia.

The ideasthesia theory of art may be used for psychological studies of aesthetics. It may also help explain classificatory disputes about art, as its main tenet is that the experience of art can only be individual, depending on a person's unique knowledge, experiences and history. No general classification of art could be satisfactorily applicable to each and every individual.

Neurophysiology of ideasthesia

Ideasthesia is congruent with the theory of brain functioning known as practopoiesis. According to that theory, concepts are not an emergent property of highly developed, specialized neuronal networks in the brain, as is usually assumed; rather, concepts are proposed to be fundamental to the very adaptive principles by which living systems and the brain operate.

A study using magnetoencephalography has shown that the information on synesthetic colors is available in the brain signals only about 200 milliseconds after the stimulus, which is consistent with conceptual mediation. The study supports the idea that synesthesia is a semantic phenomenon—i.e., ideasthesia.

Explanatory gap

From Wikipedia, the free encyclopedia

In philosophy of mind and consciousness, the explanatory gap is the difficulty that physicalist theories have in explaining how physical properties give rise to the way things feel when they are experienced. It is a term introduced by philosopher Joseph Levine. In the 1983 paper in which he first used the term, he used as an example the sentence, "Pain is the firing of C fibers", pointing out that while it might be valid in a physiological sense, it does not help us to understand how pain feels.

The explanatory gap has vexed and intrigued philosophers and AI researchers alike for decades and caused considerable debate. Bridging this gap (that is, finding a satisfying mechanistic explanation for experience and qualia) is known as "the hard problem".

To take an example of a phenomenon in which there is no gap, a modern computer's behavior can be adequately explained by its physical components alone, such as its circuitry and software. In contrast, it is thought by many mind-body dualists (e.g. René Descartes, David Chalmers) that subjective conscious experience constitutes a separate effect that demands another cause that is either outside the physical world (dualism) or due to an as yet unknown physical phenomenon (see for instance quantum mind, indirect realism).

Proponents of dualism claim that the mind is substantially and qualitatively different from the brain and that the existence of something metaphysically extra-physical is required to "fill the gap". Similarly, some argue that there are further facts—facts that do not follow logically from the physical facts of the world—about conscious experience. For example, they argue that what it is like to experience seeing red does not follow logically from the physical facts of the world.

Implications

The nature of the explanatory gap has been the subject of some debate. For example, some consider it to simply be a limit on our current explanatory ability. They argue that future findings in neuroscience or future work from philosophers could close the gap. However, others have taken a stronger position and argued that the gap is a definite limit on our cognitive abilities as humans—no amount of further information will allow us to close it. There has also been no consensus regarding what metaphysical conclusions the existence of the gap provides. Those wishing to use its existence to support dualism have often taken the position that an epistemic gap—particularly if it is a definite limit on our cognitive abilities—necessarily entails a metaphysical gap.

Levine and others have wished to either remain silent on the matter or argue that no such metaphysical conclusion should be drawn. He agrees that conceivability (as used in the Zombie and inverted spectrum arguments) is flawed as a means of establishing metaphysical realities; but he points out that even if we come to the metaphysical conclusion that qualia are physical, they still present an explanatory problem.

While I think this materialist response is right in the end, it does not suffice to put the mind-body problem to rest. Even if conceivability considerations do not establish that the mind is in fact distinct from the body, or that mental properties are metaphysically irreducible to physical properties, still they do demonstrate that we lack an explanation of the mental in terms of the physical.

However, such an epistemological or explanatory problem might indicate an underlying metaphysical issue: the non-physicality of qualia, even if not proven by conceivability arguments, is far from ruled out.

In the end, we are right back where we started. The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature. Of course a plausible explanation for there being a gap in our understanding of nature is that there is a genuine gap in nature. But so long as we have countervailing reasons for doubting the latter, we have to look elsewhere for an explanation of the former.

At the core of the problem, according to Levine, is our lack of understanding of what it means for a qualitative experience to be fully comprehended. He emphasizes that we do not even know to what extent it is appropriate to inquire into the nature of this kind of experience. He uses the laws of gravity as an example: these laws seem to explain gravity completely, yet they do not account for the gravitational constant. Similarly to the way in which gravity appears to be an inexplicable brute fact of nature, the case of qualia may be one in which we are either lacking essential information or in which we are exploring a natural phenomenon that simply is not further apprehensible. Levine suggests that, as the qualitative experience of a physical or functional state may simply be such a brute fact, perhaps we should consider whether it is really necessary to find a more complete explanation of qualitative experience.

Levine points out that the solution to the problem of understanding how much there is to be known about qualitative experience seems even more difficult because we also lack a way to articulate what it means for actualities to be knowable in the manner that he has in mind. He does conclude that there are good reasons why we wish for a more complete explanation of qualitative experiences. One very significant reason is that consciousness appears to only manifest where mentality is demonstrated in physical systems that are quite highly organized. This, of course, may be indicative of a human capacity for reasoning that is no more than the result of organized functions. Levine expresses that it seems counterintuitive to accept this implication that the human brain, so highly organized as it is, could be no more than a routine executor. He notes that although, at minimum, Materialism appears to entail reducibility of anything that is not physically primary to an explanation of its dependence on a mechanism that can be described in terms of physical fundamentals, that kind of reductionism doesn't attempt to reduce psychology to physical science. However, it still entails that there are inexplicable classes of facts which are not treated as relevant to statements pertinent to psychology.

Monday, February 15, 2021

Chinese room

From Wikipedia, the free encyclopedia

The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.

The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research, because it does not show a limit in the amount of "intelligent" behavior a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.

Chinese room thought experiment

John Searle in December 2005

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that the "strong AI" hypothesis is false.

History

Gottfried Leibniz made a similar argument in 1714 against mechanism (the position that the mind is a machine and nothing more). Leibniz used the thought experiment of expanding the brain until it was the size of a mill. Leibniz found it difficult to imagine that a "mind" capable of "perception" could be constructed using only mechanical processes. In the 1961 short story "The Game" by Anatoly Dneprov, a stadium of people act as switches and memory cells implementing a program to translate a sentence of Portuguese, a language that none of them knows. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called the China brain, also the "Chinese Nation" or the "Chinese Gym".

The Chinese Room Argument was introduced in Searle's 1980 paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences. It eventually became the journal's "most influential target article", generating an enormous number of commentaries and responses in the ensuing decades, and Searle has continued to defend and refine the argument in many papers, popular articles and books. David Cole writes that "the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years".

Most of the discussion consists of attempts to refute it. "The overwhelming majority", notes BBS editor Stevan Harnad, "still think that the Chinese Room Argument is dead wrong". The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".

Searle's argument has become "something of a classic in cognitive science", according to Harnad. Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity".

Philosophy

Although the Chinese Room argument was originally presented in reaction to the statements of artificial intelligence researchers, philosophers have come to consider it as an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind, and is related to such questions as the mind–body problem, the problem of other minds, the symbol-grounding problem, and the hard problem of consciousness.

Strong AI

Searle identified a philosophical position he calls "strong AI":

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

The definition depends on the distinction between simulating a mind and actually having a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."

The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that "there are now in the world machines that think, that learn and create". Simon, together with Allen Newell and Cliff Shaw, after having completed the first "AI" program, the Logic Theorist, claimed that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind." John Haugeland wrote that "AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."

Searle also ascribes the following claims to advocates of strong AI:

  • AI systems can be used to explain the mind;
  • The study of the brain is irrelevant to the study of the mind; and
  • The Turing test is adequate for establishing the existence of mental states.

Strong AI as computationalism or functionalism

In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer functionalism" (a term he attributes to Daniel Dennett). Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.

Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting." Computationalism is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.

Each of the following, according to Harnad, is a "tenet" of computationalism:

  • Mental states are computational states (which is why computers can have mental states and help to explain the mind);
  • Computational states are implementation-independent—in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and that
  • Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.

Strong AI vs. biological naturalism

Searle holds a philosophical position he calls "biological naturalism": that consciousness and understanding require specific biological machinery that is found in brains. He writes "brains cause minds" and that "actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains". Searle argues that this machinery (known to neuroscience as the "neural correlates of consciousness") must have some causal powers that permit the human experience of consciousness. Searle's belief in the existence of these powers has been criticized.

Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines". Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using machinery that is non-computational. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur.

Biological naturalism implies that one cannot determine if the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including "computer functionalism" or "strong AI"). Biological naturalism is similar to identity theory (the position that mental states are "identical to" or "composed of" neurological events); however, Searle has specific technical objections to identity theory. Searle's biological naturalism and strong AI are both opposed to Cartesian dualism, the classical idea that the brain and mind are made of different "substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter."

Consciousness

Searle's original presentation emphasized "understanding"—that is, mental states with what philosophers call "intentionality"—and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations Searle has included consciousness as the real target of the argument.

Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.

— John R. Searle, Consciousness and Language, p. 16

David Chalmers writes "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.

Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.

Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese.

Applied ethics

Sitting in the combat information center aboard a warship – proposed as a real-life analog to the Chinese Room

Patrick Hew used the Chinese Room argument to deduce requirements on military command and control systems if they are to preserve a commander's moral agency. He drew an analogy between a commander in their command center and the person in the Chinese Room, and analyzed it under a reading of Aristotle's notions of "compulsory" and "ignorance". Information could be "down converted" from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate "up conversion" into meaning. Hew cited examples from the USS Vincennes incident.

Computer science

The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields. However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test.

Strong AI vs. AI research

Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis—as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence." The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is "merely" a simulation.

Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do.

Searle's "strong AI" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals or exceeds human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that even a superintelligent machine would not necessarily have a mind and consciousness.

Turing test

The "standard interpretation" of the Turing Test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination. Image adapted from Saygin, et al. 2000.

The Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.

Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues that he was addressing. He wrote:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.

Symbol processing

The Chinese room (like all modern computers) manipulates physical objects in order to carry out calculations and simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic.

Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax rules, without any knowledge of the symbols' semantics (that is, their meaning).
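A minimal sketch of purely syntactic processing may illustrate the point. The rule table below is invented (a program actually passing the Turing test would be incomparably larger), but its relationship to meaning would be the same: every step is a lookup over uninterpreted tokens.

    # A toy "rule book": input strings map to output strings. To the program
    # these are opaque tokens; nothing below represents what the characters
    # mean, only which shapes follow which.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def chinese_room(squiggle):
        """Return whatever output the rules prescribe for this input shape."""
        # Purely syntactic lookup: no step consults the symbols' semantics.
        return RULE_BOOK.get(squiggle, "对不起，我不明白。")

    print(chinese_room("你好吗？"))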

Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action." The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.

Chinese room and Turing completeness

The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a CPU which follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser). A machine with this design is known in theoretical computer science as "Turing complete", because it has the necessary machinery to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Alan Turing writes, "all digital computers are in a sense equivalent." The widely accepted Church–Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine.
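To make the architectural analogy concrete, here is a minimal sketch of the same ingredients in code: a transition table plays the rule book, a list of symbols plays the papers, and a step loop plays the man following the instructions. The example machine (which merely flips bits and halts on a blank) is invented for illustration and is not meant to be the room's program.

    # Rule book: (state, symbol) -> (symbol to write, head move, next state).
    RULES = {
        ("scan", "0"): ("1", 1, "scan"),
        ("scan", "1"): ("0", 1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),
    }

    def run(tape):
        """Follow the rule book step by step, like the man in the room."""
        state, head = "scan", 0
        while state != "halt":
            if head >= len(tape):
                tape.append("_")             # more paper on demand
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write               # pencil and eraser
            head += move
        return tape

    print(run(list("0110")))  # -> ['1', '0', '0', '1', '_']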

The Turing completeness of the Chinese room implies that it can do whatever any other digital computer can do (albeit much, much more slowly). Thus, if the Chinese room does not or can not contain a Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are "no refutation (but rather an affirmation)" of the Chinese room argument, because these arguments actually imply that no digital computers can have a mind.

There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all the abilities of a digital computer, such as being able to determine the current time.

Complete argument

Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990. The only part of the argument that should be controversial is A3, and it is this point that the Chinese room thought experiment is intended to prove.

He begins with three axioms:

(A1) "Programs are formal (syntactic)."
A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.
(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, no one and nothing in the room understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.

Searle posits that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.
This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.
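
One way to see that the inference is formally valid (a simplified reconstruction for clarity, not Searle's own notation) is to read the axioms as universally quantified conditionals:

    \begin{align*}
    \text{(A1)} &\quad \forall x\,(\mathrm{Program}(x) \rightarrow \mathrm{SyntaxOnly}(x))\\
    \text{(A3)} &\quad \forall x\,(\mathrm{SyntaxOnly}(x) \rightarrow \neg\,\mathrm{HasSemantics}(x))\\
    \text{(A2)} &\quad \forall x\,(\mathrm{Mind}(x) \rightarrow \mathrm{HasSemantics}(x))\\
    \text{(C1)} &\quad \therefore\ \forall x\,(\mathrm{Program}(x) \rightarrow \neg\,\mathrm{Mind}(x))
    \end{align*}

Chaining A1 and A3 gives that no program has semantics, and the contrapositive of A2 then rules out its being a mind. The controversial step remains A3, which only the thought experiment itself is meant to support.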

This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct? He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds:

(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially" that:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it "causal powers". "Causal powers" is whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have "equivalent causal powers". "Equivalent causal powers" is whatever else could be used to make a mind.

And from this he derives the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
This follows from C1 and C2: Since no program can produce a mind, and "equivalent causal powers" produce minds, it follows that programs do not have "equivalent causal powers."
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
Since programs do not have "equivalent causal powers", "equivalent causal powers" produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.

Replies

Replies to Searle's argument may be classified according to what they claim to show:

  • Those which identify who speaks Chinese
  • Those which demonstrate how meaningless symbols can become meaningful
  • Those which suggest that the Chinese room should be redesigned in some way
  • Those which contend that Searle's argument is misleading
  • Those which argue that the argument makes false assumptions about subjective conscious experience and therefore proves nothing

Some of the arguments (robot and brain simulation, for example) fall into multiple categories.

Systems and virtual mind replies: finding the mind

These replies attempt to answer the question: since the man in the room doesn't speak Chinese, where is the "mind" that does? These replies address the key ontological issues of mind vs. body and simulation vs. reality. All of the replies that identify the mind in the room are versions of "the system reply".

System reply

The basic version argues that it is the "whole system" that understands Chinese. While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part", Searle explains. The fact that the man does not understand Chinese is irrelevant, because it is only the system as a whole that matters.

Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of that person and bits of paper" without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are "under the grip of an ideology". In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing "system", and does not require anything resembling the actual biology of the brain.

Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head. Then the whole system consists of just one object: the man himself. Searle argues that if the man doesn't understand Chinese then the system doesn't understand Chinese either, because now "the system" and "the man" both describe exactly the same object.

Critics of Searle's response argue that the program has allowed the man to have two minds in one head. If we assume a "mind" is a form of information processing, then the theory of computation can account for two computations occurring at once, namely (1) the computation for universal programmability (which is the function instantiated by the person and note-taking materials independently from any particular program contents) and (2) the computation of the Turing machine that is described by the program (which is instantiated by everything including the specific program). The theory of computation thus formally explains the open possibility that the second computation in the Chinese Room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing machine rather than on the person's. However, from Searle's perspective, this argument is circular. The question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption.
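
A small sketch may make the "two computations" point clearer (the mini-language and names below are invented for illustration; they are not the critics' own example). The outer function plays the role of the man plus his note-taking materials, a fixed universal interpreter; the list of instructions it is handed is a second, distinct computation whose behavior depends on the program, not on the interpreter.

    # Level 1: a fixed, program-agnostic interpreter (the man plus scratch paper).
    # Level 2: whatever computation the interpreted program itself happens to be.
    def interpret(program, state):
        """Blindly execute (op, arg) pairs; the interpreter never 'knows'
        what the program as a whole is computing."""
        for op, arg in program:
            if op == "add":
                state += arg
            elif op == "mul":
                state *= arg
        return state

    # This particular program computes 3 * (x + 2); the interpreter is
    # unchanged no matter which program it is handed.
    program = [("add", 2), ("mul", 3)]
    print(interpret(program, 5))   # prints 21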

More sophisticated versions of the systems reply try to identify more precisely what "the system" is and they differ in exactly how they describe it. According to these replies, the "mind that speaks Chinese" could be such things as: the "software", a "program", a "running program", a simulation of the "neural correlates of consciousness", the "functional system", a "simulated mind", an "emergent property", or "a virtual mind" (Marvin Minsky's version of the systems reply, described below).

Virtual mind reply

The term "virtual" is used in computer science to describe an object that appears to exist "in" a computer (or computer network) only because software makes it appear to exist. The objects "inside" computers (including files, folders, and so on) are all "virtual", except for the computer's electronic components. Similarly, Minsky argues, a computer may contain a "mind" that is virtual in the same sense as virtual machines, virtual communities and virtual reality.
To clarify the distinction between the simple systems reply given above and the virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple "virtual minds"; thus the "system" cannot be the "mind".

Searle responds that such a mind is, at best, a simulation, and writes: "No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched." Nicholas Fearn responds that, for some things, simulation is as good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don't complain that 'it isn't really a calculator', because the physical attributes of the device do not matter." The question is, is the human mind like the pocket calculator, essentially composed of information? Or is the mind like the rainstorm, something other than a computer, and not realizable in full by a computer simulation? For decades, this question of simulation has led AI researchers and philosophers to consider whether the term "synthetic intelligence" is more appropriate than the common description of such intelligences as "artificial."

These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle can't argue that (1) the man doesn't understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle's argument fails to prove that "strong AI" is false.

These replies, by themselves, do not provide any evidence that strong AI exists or can exist, however. They do not show that the system (or the virtual mind) understands Chinese beyond the hypothetical premise that it passes the Turing test. Searle argues that, if we are to consider strong AI remotely plausible, the Chinese room is an example that requires explanation, and it is difficult or impossible to explain how consciousness might "emerge" from the room or how the system would have consciousness. As Searle writes, "the systems reply simply begs the question by insisting that the system must understand Chinese", and is thus dodging the question or hopelessly circular.

Robot and semantics replies: finding the meaning

As far as the person in the room is concerned, the symbols are just meaningless "squiggles." But if the Chinese room really "understands" what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.

Robot reply

Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a "causal connection" between the symbols and things they represent. Hans Moravec comments: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."

Derived meaning

Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is "talking" to and through the programmers who designed the knowledge base in his file cabinet. The symbols Searle manipulates are already meaningful; they are just not meaningful to him.
Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, like a book, has no understanding of its own.

Commonsense knowledge / contextualist reply

Some have argued that the meanings of the symbols would come from a vast "background" of commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context" that would give the symbols their meaning.
Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the "background" can be represented symbolically.

To each of these suggestions, Searle's response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes "syntax is insufficient for semantics."

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle; what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

Brain simulation and connectionist replies: redesigning the room

These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being important.)

Brain simulator reply

Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.
Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. Searle is adamant that "human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains." Moreover, he argues:

[I]magine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination.

Two variations on the brain simulator reply are the China brain and the brain-replacement scenario.
China brain
What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying. It is also obvious that this system would be functionally equivalent to a brain, so if consciousness is a function, this system would be conscious.
Brain replacement scenario
In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle's critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins. Searle predicts that, while going through the brain prosthesis, "you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely outside of your control, 'I see a red object in front of me.' [...] [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same."

Connectionist replies

Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.

Combination reply

This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body could have a mind.

Many mansions / wait till next year reply

This reply holds that better technology in the future will allow computers to understand. Searle agrees that there may be designs that would cause a machine to have conscious understanding, but he considers the point irrelevant to his argument.

These arguments (and the robot or commonsense knowledge replies) identify some special technology that would help create conscious understanding in a machine. They may be interpreted in two ways: either they claim (1) that this technology is required for consciousness, that the Chinese room does not or cannot implement this technology, and that therefore the Chinese room cannot pass the Turing test or, even if it did, would not have conscious understanding; or they claim (2) that it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it.

In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned. The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle's room can't pass the Turing test then there is no other digital technology that could pass the Turing test. If Searle's room could pass the Turing test, but still does not have a mind, then the Turing test is not sufficient to determine if the room has a "mind". Either way, it denies one or the other of the positions Searle thinks of as "strong AI", proving his argument.

The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the whole idea of strong AI was that we don't need to know how the brain works to know how the mind works." If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle.

Other critics hold that the room as Searle described it does, in fact, have a mind; however, they argue that it is difficult to see. Searle's description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section).

In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's Blockhead argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form "if the user writes S, reply with P and goto X". At least in principle, any program can be rewritten (or "refactored") into this form, even a brain simulation. In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single large number, yet this is exactly what "strong AI" claims. On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be extremely specific.
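
A hedged sketch of Block's rule format follows (the table entries are invented and trivially small; any table adequate to a real conversation would be astronomically large). Each entry has the shape "in state X, on input S, reply with P and go to state X′", so the system's entire "mental state" at any instant is nothing but the current state number.

    # Blockhead-style lookup table: (state, input) -> (reply, next_state).
    # The whole "mind" at any instant is just the integer held in `state`.
    TABLE = {
        (0, "hello"):        ("hi there", 1),
        (1, "how are you?"): ("fine, thanks", 2),
        (2, "bye"):          ("goodbye", 0),
    }

    def blockhead(state, user_input):
        reply, next_state = TABLE.get((state, user_input), ("...", state))
        return reply, next_state

    state = 0
    for line in ["hello", "how are you?", "bye"]:
        reply, state = blockhead(state, line)
        print(reply)   # hi there / fine, thanks / goodbye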

Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and don't speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program you like, but I still understand nothing."

Speed and complexity: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle's intuitions they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires.

Several critics believe that Searle's argument relies entirely on intuitions. Ned Block writes "Searle's argument depends for its force on intuitions that certain entities do not think." Daniel Dennett describes the Chinese room argument as a misleading "intuition pump" and writes "Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it."

Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be "an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and meta-knowledge and meta-meta-knowledge", as Daniel Dennett explains.

Speed and complexity replies

The speed at which human brains process information is (by some estimates) 100 billion operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
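
A rough back-of-the-envelope calculation shows the scale (taking the article's figure of 10^11 operations per second at face value, and assuming, purely for illustration, that the man can apply one rule per second):

    \[
    \frac{10^{11}\ \text{rule applications}}{1\ \text{rule application per second}}
    = 10^{11}\ \text{s} \approx 3{,}200\ \text{years}
    \]

That is, hand-simulating a single second of brain activity would take the man on the order of three millennia, which is why critics doubt that intuitions about the room transfer to a system operating at realistic speed and scale.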

An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They propose this analogous thought experiment:

Churchland's luminous room

"Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell's theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!" The problem is that he would have to wave the magnet up and down something like 450 trillion times per second in order to see anything.

Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of 'complexity.')"

Searle argues that his critics are also relying on intuitions; however, his opponents' intuitions have no empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person must be "under the grip of an ideology". The system reply only makes sense (to Searle) if one assumes that any "system" can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness.

Other minds and zombies: meaninglessness

Several replies argue that Searle's argument is irrelevant because his assumptions about the mind and consciousness are faulty. Searle believes that human beings directly experience their consciousness, intentionality and the nature of the mind every day, and that this experience of consciousness is not open to question. He writes that we must "presuppose the reality and knowability of the mental." These replies question whether Searle is justified in using his own experience of consciousness to determine that it is more than mechanical symbol processing. In particular, the other minds reply argues that we cannot use our experience of consciousness to answer questions about other minds (even the mind of a computer), and the epiphenomena reply argues that Searle's consciousness does not "exist" in the sense that Searle thinks it does.

Other minds reply
This reply points out that Searle's argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.

Nils Nilsson writes "If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I'm willing to credit him with real thought."

Alan Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in 1950 and makes the other minds reply. He noted that people never consider the problem of other minds when dealing with each other. He writes that "instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." The Turing test simply extends this "polite convention" to machines. He doesn't intend to solve the problem of other minds (for machines or people) and he doesn't think we need to.

Eliminative materialism reply
Several philosophers argue that consciousness, as Searle describes it, does not exist. This position is sometimes referred to as eliminative materialism: the view that consciousness is a property that can be reduced to a strictly mechanical description, and that our experience of consciousness is, as Daniel Dennett describes it, a "user illusion". Other mental properties, such as original intentionality (also called "meaning", "content", and "semantic character"), are also commonly regarded as something special about beliefs and other propositional attitudes. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, do not exist. If eliminative materialism is the correct scientific account of human cognition, then the assumption of the Chinese room argument that "minds have mental contents (semantics)" must be rejected.

Stuart Russell and Peter Norvig argue that, if we accept Searle's description of intentionality, consciousness and the mind, we are forced to accept that consciousness is epiphenomenal: that it "casts no shadow", that it is undetectable in the outside world. They argue that Searle must be mistaken about the "knowability of the mental", and in his belief that there are "causal properties" in our neurons that give rise to the mind. They point out that, by Searle's own description, these causal properties can't be detected by anyone outside the mind, otherwise the Chinese Room couldn't pass the Turing test—the people outside would be able to tell there wasn't a Chinese speaker in the room by detecting their causal properties. Since they can't detect causal properties, they can't detect the existence of the mental. In short, Searle's "causal properties" and consciousness itself are undetectable, and anything that cannot be detected either does not exist or does not matter.

Daniel Dennett provides this extension to the "epiphenomena" argument.

Dennett's reply from natural selection
Suppose that, by some mutation, a human being is born that does not have Searle's "causal properties" but nevertheless acts exactly like a human being. (This sort of animal is called a "zombie" in thought experiments in the philosophy of mind.) This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. Therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually "zombies", who nevertheless insist they are conscious. It is impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not.

Searle disagrees with this analysis and argues that "the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to know is what distinguishes the mind from thermostats and livers." He takes it as obvious that we can detect the presence of consciousness and dismisses these replies as being off the point.

Newton's flaming laser sword reply
Mike Alder argues that the entire argument is frivolous, because it is non-verificationist: not only is the distinction between simulating a mind and having a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed to distinguish between the two.

English reply

Margaret Boden provided this reply in her paper "Escaping from the Chinese Room." In it she suggests that, even if the person in the room does not understand Chinese, it does not follow that there is no understanding in the room. The person in the room at least understands the rule book used to provide output responses.

In popular culture

The Chinese room argument is a central concept in Peter Watts's novels Blindsight and (to a lesser extent) Echopraxia. It is also a central theme in the video game Virtue's Last Reward, and ties into the game's narrative. In Season 4 of the American crime drama Numb3rs there is a brief reference to the Chinese room.

The Chinese Room is also the name of a British independent video game development studio best known for working on experimental first-person games, such as Everybody's Gone to the Rapture and Dear Esther.

In the 2016 video game The Turing Test, the Chinese Room thought experiment is explained to the player by an AI.

Butane

From Wikipedia, the free encyclopedia ...