
Tuesday, May 22, 2018

Artificial consciousness

From Wikipedia, the free encyclopedia
Artificial consciousness[1] (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to "Define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).
Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC, though there are challenges to that perspective. Proponents of AC believe it is possible to construct systems (e.g., computer systems) that can emulate this NCC interoperation.[2]

Artificial consciousness concepts are also pondered in the philosophy of artificial intelligence through questions about mind, consciousness, and mental states.[3]

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of “raw feels”, “what it is like” or qualia (Block 1997).

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution (Block 1978; Bickle 2003).[4][5]

In his article "Artificial Consciousness: Utopia or Real Possibility", Giorgio Buttazzo says that despite our current technology's ability to simulate autonomy, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, emotions, or free will. A computer, like a washing machine, is a slave operated by its components."[6]

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam 1967).

Computational Foundation argument

One of the most explicit arguments for the plausibility of AC comes from David Chalmers. His proposal, found in his article Chalmers 2011, is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends the claim thus: computers perform computations; computations can capture the abstract causal organization of other systems; and mental properties are nothing over and above abstract causal organization, so a computer running the right computations would instantiate the associated mental properties.

The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role". He adverts to the work of Armstrong 1968 and Lewis 1972 in claiming that "[s]ystems with the same causal topology…will share their psychological properties".

Phenomenological properties are not prima facie definable in terms of their causal roles. Establishing that phenomenological properties are amenable to individuation by causal role therefore requires argument. Chalmers provides his Dancing Qualia Argument for this purpose.[7]

Chalmers begins by assuming that agents with identical causal organizations could have different experiences. He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. Ex hypothesi, the experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could "notice" the shift in experience.

Critics of AC object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer of a building or large machine presents a particular ambiguity. Should laws be made for such a case, consciousness would also require a legal definition (for example, a machine's ability to experience pleasure or pain, known as sentience). Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the topic has often been a theme in fiction (see below).

The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[8]

Research and implementation proposals

Aspects of consciousness

There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. A variety of functions in which consciousness plays a role were suggested by Bernard Baars (Baars 1988) and others. The functions of consciousness suggested by Bernard Baars are Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming Function, Metacognitive and Self-monitoring Function, and Autoprogramming and Self-maintenance Function. Igor Aleksander suggested 12 principles for artificial consciousness (Aleksander 1995) and these are: The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that processes, and not only states or objects, activate neurons. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined, and it is also useful for making predictions. Such modeling requires a great deal of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness:[9] agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.[10]

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval.[11] The IDA model[12] elucidates the role of consciousness in the updating of perceptual memory,[13] transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.[14] In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.[15]
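The sparse distributed memory idea can be illustrated with a short sketch. The Python fragment below is only a toy approximation of Kanerva's scheme (fixed random hard locations, per-bit counters, Hamming-radius activation); it is not IDA's actual Java implementation, and all names and parameter values are illustrative.

```python
import numpy as np

class SparseDistributedMemory:
    """Toy Kanerva-style sparse distributed memory (illustrative sketch only)."""

    def __init__(self, n_locations=1000, word_size=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        # Hard locations: fixed random binary addresses.
        self.addresses = rng.integers(0, 2, size=(n_locations, word_size))
        # Each location stores one signed counter per bit, not the bits themselves.
        self.counters = np.zeros((n_locations, word_size), dtype=int)
        self.radius = radius

    def _active(self, address):
        # A location participates if its address lies within the Hamming radius of the cue.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, word):
        active = self._active(address)
        # Increment counters where the stored bit is 1, decrement where it is 0.
        self.counters[active] += np.where(word == 1, 1, -1)

    def read(self, address):
        active = self._active(address)
        # Pool the counters of all active locations and threshold at zero.
        return (self.counters[active].sum(axis=0) > 0).astype(int)
```

Writing a pattern at its own address and reading back with a slightly corrupted cue typically recovers the stored pattern, which is the property that makes such distributed representations attractive as memory models.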

Learning

Learning is also considered necessary for AC. According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events (Baars 1988). Axel Cleeremans and Luis Jiménez define learning as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments" (Cleeremans 2001).

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander.[16] The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[16] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication here is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules such as a chessboard, but also in novel environments that may change, and it should execute those plans only when appropriate, in order to simulate and control the real world.
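As a rough illustration of anticipation as internal simulation, the sketch below assumes a forward model that maps a state and a candidate action to a predicted next state; the function names are hypothetical and are not drawn from any of the cited systems.

```python
# Hypothetical names; a sketch of anticipation as internal simulation, assuming a
# forward model that maps (state, action) to a predicted next state.
def anticipate_and_act(state, candidate_actions, forward_model, cost):
    """Pick the action whose predicted consequences carry the lowest cost."""
    best_action, best_cost = None, float("inf")
    for action in candidate_actions:
        predicted = forward_model(state, action)  # simulate internally, do not yet execute
        c = cost(predicted)                       # how undesirable that predicted world is
        if c < best_cost:
            best_action, best_cost = action, c
    return best_action
```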

Subjective experience

Subjective experiences, or qualia, are widely considered to be the hard problem of consciousness. Indeed, they are held to pose a challenge to physicalism, let alone computationalism. On the other hand, there are problems in other fields of science that limit what we can observe, such as the uncertainty principle in physics, and these have not made research in those fields impossible.

Role of cognitive architectures

The term "cognitive architecture" may refer to a theory about the structure of the human mind, or any portion or function thereof, including consciousness. In another context, a cognitive architecture implements the theory on computers. An example is QuBIC: Quantum and Bio-inspired Cognitive Architecture for Machine Consciousness. One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. However, the results need to be in a formalized form so they can be the basis of a computer program. Also, the role of cognitive architecture is for the A.I. to clearly structure, build, and implement it's thought process.

Symbolic or hybrid proposals

Franklin's Intelligent Distribution Agent

Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (Baars 1988, 1997). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and Franklin 2003 for details). While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task."
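The codelet idea can be sketched in a few lines. The toy below treats a codelet as a small thread that does one narrow job and posts any result to a shared workspace; it is a loose Python analogy, not IDA's Java code, and the example codelet names are invented.

```python
import threading, queue

class Codelet(threading.Thread):
    """Toy codelet: a small, special-purpose mini-agent running in its own thread."""

    def __init__(self, name, detect, workspace):
        super().__init__(daemon=True)
        self.name, self.detect, self.workspace = name, detect, workspace

    def run(self):
        result = self.detect()                        # one narrow piece of work
        if result is not None:
            self.workspace.put((self.name, result))   # post the finding to the workspace

workspace = queue.Queue()
codelets = [
    Codelet("find-skill-match", lambda: "billet 42 matches sonar skill", workspace),
    Codelet("check-policy", lambda: None, workspace),   # nothing to report this cycle
]
for c in codelets:
    c.start()
for c in codelets:
    c.join()
while not workspace.empty():
    print(workspace.get())
```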

Ron Sun's cognitive architecture CLARION

CLARION posits a two-level representation that explains the distinction between conscious and unconscious mental processes.

CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill learning tasks have been simulated using CLARION that span the spectrum ranging from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task (Sun 2002). Among them, SRT, AGL, and PC are typical implicit learning tasks, very much relevant to the issue of consciousness as they operationalized the notion of consciousness in the context of psychological experiments.
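A minimal sketch of CLARION's two-level idea follows, assuming the bottom level is a trained sub-symbolic associator and the top level is a set of explicit rules whose recommendations are mixed with the bottom level's. The weighting scheme and names here are simplifications for illustration, not Ron Sun's actual formulation.

```python
import numpy as np

def implicit_level(features, weights):
    """Bottom level: distributed, sub-symbolic association (stand-in for a trained network)."""
    return 1 / (1 + np.exp(-features @ weights))      # action propensity in (0, 1)

def explicit_level(features, rules):
    """Top level: localist, explicit rules of the form (condition, action value)."""
    for condition, value in rules:
        if condition(features):
            return value
    return None

def choose_action_value(features, weights, rules, mix=0.5):
    """Integrate the two levels: explicit knowledge is mixed with implicit knowledge."""
    implicit = implicit_level(features, weights)
    explicit = explicit_level(features, rules)
    return implicit if explicit is None else mix * explicit + (1 - mix) * implicit

features = np.array([1.0, 0.0, 1.0])
weights = np.array([0.8, -0.4, 0.3])
rules = [(lambda f: f[0] > 0.5 and f[2] > 0.5, 0.9)]  # an explicit, verbalizable rule
print(choose_action_value(features, weights, rules))
```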

Ben Goertzel's OpenCog

Ben Goertzel is pursuing embodied AGI through the open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, work being carried out at the Hong Kong Polytechnic University.

Connectionist proposals

Haikonen's cognitive architecture

Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2003) was reportedly not capable of AC, but did exhibit emotions as expected. See Doan (2009) for a comprehensive introduction to Haikonen's cognitive architecture. An updated account of Haikonen's architecture, along with a summary of his philosophical views, is given in Haikonen (2012).

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination") (Shanahan 2006). For discussions of Shanahan's architecture, see (Gamez 2008) and (Reggia 2013) and Chapter 20 of (Haikonen 2012).

Takeno's self-awareness research

Self-awareness in robots is being investigated by Junichi Takeno[17] at Meiji University in Japan. Takeno asserts that he has developed a robot capable of discriminating between its own image in a mirror and any other robot having an identical image,[18][19] and this claim has already been reviewed (Takeno, Inaba & Suzuki 2005). Takeno asserts that he first contrived the computational module called a MoNAD, which has a self-aware function, and that he then constructed the artificial consciousness system by formulating the relationships between emotions, feelings, and reason, connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror-image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, stating that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." He holds that the most important point in developing artificial consciousness, or in clarifying human consciousness, is the development of a function of self-awareness, and he claims that he has demonstrated physical and mathematical evidence for this in his thesis.[20] He also demonstrated that robots can study episodes in memory where the emotions were stimulated and use this experience to take predictive actions to prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).

Aleksander's impossible mind

Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language.[21] Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.[22]

Thaler's Creativity Machine Paradigm

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[23][24][25] or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.[26] He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attribute dubious significance to overall cortical activity.[27][28][29] Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.[28][30][31][32][33][34]
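The mechanism can be caricatured in code. The sketch below perturbs the weights of a stand-in "trained" network with noise and lets a stand-in critic keep the best-scoring confabulation; it illustrates only the noise-injection idea, not Thaler's patented system, and every function and parameter here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(x, W):
    """Stand-in for a trained network whose weights will be transiently perturbed."""
    return np.tanh(W @ x)

def critic(pattern, target):
    """Stand-in for a critic network that scores how useful a confabulation is."""
    return -np.linalg.norm(pattern - target)

def creativity_machine_step(x, W, target, noise=0.3, n_trials=50):
    """Inject synaptic noise into the generator and keep the best-scoring confabulation."""
    best_pattern, best_score = None, -np.inf
    for _ in range(n_trials):
        perturbed = W + noise * rng.standard_normal(W.shape)  # transient "degradation"
        pattern = generator(x, perturbed)                     # a candidate idea / false memory
        score = critic(pattern, target)
        if score > best_score:
            best_pattern, best_score = pattern, score
    return best_pattern, best_score

W = rng.standard_normal((4, 3))                # "trained" weights (random here for brevity)
target = np.array([0.5, -0.5, 0.5, -0.5])      # what the critic would like to see
print(creativity_machine_step(np.ones(3), W, target))
```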

Michael Graziano's attention schema

In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema.[35] Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain".[2] This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial configuration of a person's body.[2] This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself: the brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process and suggests that such awareness is a computed feature constructed by an expert system in the brain.
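A very loose sketch of the attention schema idea follows, assuming attention is a competition over sensory signals and the schema is a coarse, reportable model of that competition; the mapping to code is my own simplification, not Graziano's model.

```python
import numpy as np

def attention(saliences):
    """Actual attentional enhancement: a softmax competition among sensory signals."""
    e = np.exp(saliences - saliences.max())
    return e / e.sum()

def attention_schema(attn, resolution=0.25):
    """A coarse, simplified internal model of that attention: what the system can report."""
    return np.round(attn / resolution) * resolution

saliences = np.array([0.2, 1.5, 0.4])   # three competing stimuli
attn = attention(saliences)             # what the system actually does
schema = attention_schema(attn)         # the schematized model of what it is doing
print("attention:", attn)
print("schema (reportable awareness):", schema)
```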

Testing

The best-known method for testing machine intelligence is the Turing test. But when interpreted as only observational, this test contradicts the philosophy-of-science principle that observations are theory-laden. It has also been suggested that Alan Turing's recommendation of imitating not an adult human consciousness but a child's consciousness should be taken seriously.[36]

Other tests, such as ConsScale, test the presence of features inspired by biological systems, or measure the cognitive development of artificial systems.

Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Although various systems may display various signs of behavior correlated with functional consciousness, there is no conceivable way in which third-person tests can have access to first-person phenomenological features. Because of that, and because there is no empirical definition of consciousness,[37] a test of presence of consciousness in AC may be impossible.

In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments.[38] He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about those creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, the absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.

In fiction

Characters with artificial consciousness, or at least with personalities that imply they have consciousness, appear frequently in works of fiction.

Multiple drafts model

From Wikipedia, the free encyclopedia

Daniel Dennett's multiple drafts model of consciousness is a physicalist theory of consciousness based upon cognitivism, which views the mind in terms of information processing. The theory is described in depth in his book, Consciousness Explained, published in 1991. As the title states, the book proposes a high-level explanation of consciousness which is consistent with support for the possibility of strong AI.

Dennett describes the theory as first-person operationalism. As he states it:
The Multiple Drafts model makes [the procedure of] "writing it down" in memory criterial for consciousness: that is what it is for the "given" to be "taken" ... There is no reality of conscious experience independent of the effects of various vehicles of content on subsequent action (and hence, of course, on memory).[1]

The thesis of multiple drafts

Dennett's thesis is that our modern understanding of consciousness is unduly influenced by the ideas of René Descartes. To show why, he starts with a description of the phi illusion. In this experiment, two different coloured lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change colour as it moves across the visual field. A green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change colour before the second light is observed.

Dennett claims that conventional explanations of the colour change boil down to either Orwellian or Stalinesque hypotheses, which he says are the result of Descartes' continued influence on our vision of the mind. In an Orwellian hypothesis, the subject comes to one conclusion, then goes back and changes that memory in light of subsequent events. This is akin to George Orwell's Nineteen Eighty-Four, where records of the past are routinely altered. In a Stalinesque hypothesis, the two events would be reconciled prior to entering the subject's consciousness, with the final result presented as fully resolved. This is akin to Joseph Stalin's show trials, where the verdict has been decided in advance and the trial is just a rote presentation.
[W]e can suppose, both theorists have exactly the same theory of what happens in your brain; they agree about just where and when in the brain the mistaken content enters the causal pathways; they just disagree about whether that location is to be deemed pre-experiential or post-experiential. [...] [T]hey even agree about how it ought to "feel" to subjects: Subjects should be unable to tell the difference between misbegotten experiences and immediately misremembered experiences. [p. 125, original emphasis.]
Dennett argues that there is no principled basis for picking one of these theories over the other, because they share a common error in supposing that there is a special time and place where unconscious processing becomes consciously experienced, entering into what Dennett calls the "Cartesian theatre". Both theories require us to cleanly divide a sequence of perceptions and reactions into before and after the instant that they reach the seat of consciousness, but he denies that there is any such moment, as it would lead to infinite regress. Instead, he asserts that there is no privileged place in the brain where consciousness happens. Dennett states that, "[t]here does not exist [...] a process such as 'recruitment of consciousness' (into what?), nor any place where the 'vehicle's arrival' is recognized (by whom?)"[2]
Cartesian materialism is the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of "presentation" in experience because what happens there is what you are conscious of. ... Many theorists would insist that they have explicitly rejected such an obviously bad idea. But [...] the persuasive imagery of the Cartesian Theater keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized. [p. 107, original emphasis.]
With no theatre, there is no screen, hence no reason to re-present data after it has already been analysed. Dennett says that, "the Multiple Drafts model goes on to claim that the brain does not bother 'constructing' any representations that go to the trouble of 'filling in' the blanks. That would be a waste of time and (shall we say?) paint. The judgement is already in so we can get on with other tasks!"

According to the model, there are a variety of sensory inputs from a given event and also a variety of interpretations of these inputs. The sensory inputs arrive in the brain and are interpreted at different times, so a given event can give rise to a succession of discriminations, constituting the equivalent of multiple drafts of a story. As soon as each discrimination is accomplished, it becomes available for eliciting a behaviour; it does not have to wait to be presented at the theatre.
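As a loose illustration (my own construction, not Dennett's), one can picture several discriminations of the same event being produced at different times, each immediately able to drive behaviour, with whichever draft gains the most influence being the one that is "written down" and later reported.

```python
import random

def maybe_act(draft):
    # Any draft can elicit behaviour as soon as it exists; no central viewer approves it.
    if draft["influence"] > 0.8:
        print("acting on", draft["content"])

def process_event(sensory_inputs):
    drafts = []
    for t, cue in enumerate(sensory_inputs):
        draft = {"time": t,
                 "content": f"interpretation of {cue}",
                 "influence": random.random()}   # how much downstream effect this draft has
        drafts.append(draft)
        maybe_act(draft)
    # What gets reported later is simply whichever draft left the biggest trace.
    return max(drafts, key=lambda d: d["influence"])

print(process_event(["green flash", "red flash"]))
```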

Like a number of other theories, the Multiple Drafts model understands conscious experience as taking time to occur, such that percepts do not instantaneously arise in the mind in their full richness. The distinction is that Dennett's theory denies any clear and unambiguous boundary separating conscious experiences from all other processing. According to Dennett, consciousness is to be found in the actions and flows of information from place to place, rather than some singular view containing our experience. There is no central experiencer who confers a durable stamp of approval on any particular draft.

Different parts of the neural processing assert more or less control at different times. For something to reach consciousness is akin to becoming famous, in that it must leave behind consequences by which it is remembered. To put it another way, consciousness is the property of having enough influence to affect what the mouth will say and the hands will do. Which inputs are "edited" into our drafts is not an exogenous act of supervision, but part of the self-organizing functioning of the network, and at the same level as the circuitry that conveys information bottom-up.

The conscious self is taken to exist as an abstraction visible at the level of the intentional stance, akin to a body of mass having a "centre of gravity". Analogously, Dennett refers to the self as the "centre of narrative gravity", a story we tell ourselves about our experiences. Consciousness exists, but not independently of behaviour and behavioural disposition, which can be studied through heterophenomenology.

The origin of this operationalist approach can be found in Dennett's immediately preceding work. Dennett (1988) explains consciousness in terms of access consciousness alone, denying the independent existence of what Ned Block has labeled phenomenal consciousness.[3] He argues that "Everything real has properties, and since I don't deny the reality of conscious experience, I grant that conscious experience has properties". Having related all consciousness to properties, he concludes that they cannot be meaningfully distinguished from our judgements about them. He writes:
The infallibilist line on qualia treats them as properties of one's experience one cannot in principle misdiscover, and this is a mysterious doctrine (at least as mysterious as papal infallibility) unless we shift the emphasis a little and treat qualia as logical constructs out of subjects' qualia-judgments: a subject's experience has the quale F if and only if the subject judges his experience to have quale F. We can then treat such judgings as constitutive acts, in effect, bringing the quale into existence by the same sort of license as novelists have to determine the hair color of their characters by fiat. We do not ask how Dostoevski knows that Raskolnikov's hair is light brown.[4]
In other words, once we've explained a perception fully in terms of how it affects us, there is nothing left to explain. In particular, there is no such thing as a perception which may be considered in and of itself (a quale). Instead, the subject's honest reports of how things seem to them are inherently authoritative on how things seem to them, but not on the matter of how things actually are.
So when we look one last time at our original characterization of qualia, as ineffable, intrinsic, private, directly apprehensible properties of experience, we find that there is nothing to fill the bill. In their place are relatively or practically ineffable public properties we can refer to indirectly via reference to our private property-detectors — private only in the sense of idiosyncratic. And insofar as we wish to cling to our subjective authority about the occurrence within us of states of certain types or with certain properties, we can have some authority — not infallibility or incorrigibility, but something better than sheer guessing — but only if we restrict ourselves to relational, extrinsic properties like the power of certain internal states of ours to provoke acts of apparent re-identification. So contrary to what seems obvious at first blush, there simply are no qualia at all.[4]
The key to the multiple drafts model is that, after removing qualia, explaining consciousness boils down to explaining the behaviour we recognise as conscious. Consciousness is as consciousness does.

Critical responses to multiple drafts

Some of the criticism of Dennett's theory is due to the perceived tone of his presentation. As one grudging supporter admits, "there is much in this book that is disputable. And Dennett is at times aggravatingly smug and confident about the merits of his arguments [...] All in all Dennett's book is annoying, frustrating, insightful, provocative and above all annoying." (Korb 1993)

Bogen (1992) points out that the brain is bilaterally symmetrical. That being the case, if Cartesian materialism is true, there might be two Cartesian theatres, so arguments against only one are flawed.[5] Velmans (1992) argues that the phi effect and the cutaneous rabbit illusion demonstrate that there is a delay whilst modelling occurs and that this delay was discovered by Libet.[6]

It has also been claimed that the argument in the multiple drafts model does not support its conclusion.[7]

"Straw man"

Much of the criticism asserts that Dennett's theory attacks the wrong target, failing to explain what it claims to. Chalmers (1996) maintains that Dennett has produced no more than a theory of how subjects report events.[8] Some even parody the title of the book as "Consciousness Explained Away", accusing him of greedy reductionism.[9] Another line of criticism disputes the accuracy of Dennett's characterisations of existing theories:
The now standard response to Dennett's project is that he has picked a fight with a straw man. Cartesian materialism, it is alleged, is an impossibly naive account of phenomenal consciousness held by no one currently working in cognitive science or the philosophy of mind. Consequently, whatever the effectiveness of Dennett's demolition job, it is fundamentally misdirected (see, e.g., Block, 1993, 1995; Shoemaker, 1993; and Tye, 1993).[10]

Unoriginality

Multiple drafts is also attacked for making a claim to novelty. It may be the case, however, that such attacks mistake which features Dennett is claiming as novel. Korb states that, "I believe that the central thesis will be relatively uncontentious for most cognitive scientists, but that its use as a cleaning solvent for messy puzzles will be viewed less happily in most quarters." (Korb 1993) In this way, Dennett uses uncontroversial ideas towards more controversial ends, leaving him open to claims of unoriginality when uncontroversial parts are focused upon.

Even the notion of consciousness as drafts is not unique to Dennett. According to Hankins, Dieter Teichert suggests that Paul Ricoeur's theories agree with Dennett's on the notion that "the self is basically a narrative entity, and that any attempt to give it a free-floating independent status is misguided." [Hankins] Others see Derrida's (1982) representationalism as consistent with the notion of a mind that has perceptually changing content without a definitive present instant.[11]

To those who believe that consciousness entails something more than behaving in all ways conscious, Dennett's view is seen as eliminativist, since it denies the existence of qualia and the possibility of philosophical zombies. However, Dennett is not denying the existence of the mind or of consciousness, only what he considers a naive view of them. The point of contention is whether Dennett's own definitions are indeed more accurate, whether what we think of when we speak of perceptions and consciousness can be understood in terms of nothing more than their effect on behaviour.

Information processing and consciousness

The role of information processing in consciousness has been criticised by John Searle who, in his Chinese room argument,[12] states that he cannot find anything that could be recognised as conscious experience in a system that relies solely on motions of things from place to place. Dennett sees this argument as misleading, arguing that consciousness is not to be found in a specific part of the system, but in the actions of the whole. In essence, he denies that consciousness requires something in addition to capacity for behaviour, saying that philosophers such as Searle, "just can't imagine how understanding could be a property that emerges from lots of distributed quasi-understanding in a large system" (p. 439).

Global workspace theory

From Wikipedia, the free encyclopedia

Global workspace theory (GWT) is a simple cognitive architecture that has been developed to account qualitatively for a large set of matched pairs of conscious and unconscious processes. It was proposed by Bernard Baars (1988, 1997, 2002). Brain interpretations and computational simulations of GWT are the focus of current research.

GWT resembles the concept of working memory, and is proposed to correspond to a "momentarily active, subjectively experienced" event in working memory (WM)—the "inner domain in which we can rehearse telephone numbers to ourselves or in which we carry on the narrative of our lives. It is usually thought to include inner speech and visual imagery." (in Baars, 1997).

The theater metaphor

GWT can be explained in terms of a "theater metaphor". In the "theater of consciousness" a "spotlight of selective attention" shines a bright spot on stage. The bright spot reveals the contents of consciousness, actors moving in and out, making speeches or interacting with each other. The audience is not lit up—it is in the dark (i.e., unconscious) watching the play. Behind the scenes, also in the dark, are the director (executive processes), stage hands, script writers, scene designers and the like. They shape the visible activities in the bright spot, but are themselves invisible. Baars argues that this is distinct from the concept of the Cartesian theater, since it is not based on the implicit dualistic assumption of "someone" viewing the theater, and is not located in a single place in the mind (in Blackmore, 2005).

The model

GWT involves a fleeting memory with a duration of a few seconds (much shorter than the 10–30 seconds of classical working memory). GWT contents are proposed to correspond to what we are conscious of, and are broadcast to a multitude of unconscious cognitive brain processes, which may be called receiving processes. Other unconscious processes, operating in parallel with limited communication between them, can form coalitions which can act as input processes to the global workspace. Since globally broadcast messages can evoke actions in receiving processes throughout the brain, the global workspace may be used to exercise executive control to perform voluntary actions. Individual as well as allied processes compete for access to the global workspace, striving to disseminate their messages to all other processes in an effort to recruit more cohorts and thereby increase the likelihood of achieving their goals.
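The core cycle can be sketched in a few lines of Python: specialist processes compete for access to the workspace, and the winner's content is broadcast to every process. This is a toy rendering of the idea, not Baars's or Franklin's implementation, and the bidding rule here is invented.

```python
import random

class Process:
    """Unconscious specialist process; it competes to place its message in the workspace."""
    def __init__(self, name):
        self.name = name
        self.inbox = []
    def bid(self):
        return random.random()          # stand-in for activation / coalition strength
    def receive(self, message):
        self.inbox.append(message)      # the broadcast reaches every process

def global_workspace_cycle(processes):
    """One cycle: the strongest coalition wins access and its content is broadcast globally."""
    winner = max(processes, key=lambda p: p.bid())
    message = f"content from {winner.name}"
    for p in processes:                 # global broadcast to all receiving processes
        p.receive(message)
    return message                      # the momentarily 'conscious' content

procs = [Process("vision"), Process("audition"), Process("memory"), Process("planning")]
print(global_workspace_cycle(procs))
```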

Baars (1997) suggests that the global workspace "is closely related to conscious experience, though not identical to it." Conscious events may involve more necessary conditions, such as interacting with a "self" system, and an executive interpreter in the brain, such as has been suggested by a number of authors including Michael S. Gazzaniga.

Nevertheless, GWT can successfully model a number of characteristics of consciousness, such as its role in handling novel situations, its limited capacity, its sequential nature, and its ability to trigger a vast range of unconscious brain processes. Moreover, GWT lends itself well to computational modeling. Stan Franklin's IDA model is one such computational implementation of GWT. See also Dehaene et al. (2003) and Shanahan (2006).

GWT also specifies "behind the scenes" contextual systems, which shape conscious contents without ever becoming conscious, such as the dorsal cortical stream of the visual system. This architectural approach leads to specific neural hypotheses. Sensory events in different modalities may compete with each other for consciousness if their contents are incompatible. For example, the audio and video tracks of a movie will compete rather than fuse if the two tracks are out of sync by more than approximately 100 ms. The 100 ms time domain corresponds closely with the known brain physiology of consciousness, including brain rhythms in the alpha-theta-gamma domain and event-related potentials in the 200–300 ms domain.[1]

Global Neuronal Workspace

Stanislas Dehaene extended the global workspace with the "neuronal avalanche", showing how sensory information gets selected to be broadcast throughout the cortex.[2] Many brain regions, including the prefrontal cortex, the anterior temporal lobe, the inferior parietal lobe, and the precuneus, send and receive numerous projections to and from a broad variety of distant brain regions, allowing the neurons there to integrate information over space and time. Multiple sensory modules can therefore converge onto a single coherent interpretation, for example, a "red sports car zooming by". This global interpretation is broadcast back to the global workspace, creating the conditions for the emergence of a single state of consciousness, at once differentiated and integrated.

Criticism

Susan Blackmore challenged the concept of the stream of consciousness in several papers, stating that "When I say that consciousness is an illusion I do not mean that consciousness does not exist. I mean that consciousness is not what it appears to be. If it seems to be a continuous stream of rich and detailed experiences, happening one after the other to a conscious person, this is the illusion."[3] Blackmore also quotes William James: "The attempt at introspective analysis in these cases is in fact like seizing a spinning top to catch its motion, or trying to turn up the gas quickly enough to see how the darkness looks."

Baars is in agreement with these points. The continuity of the "stream of consciousness" may in fact be illusory, just as the continuity of a movie is illusory. Nevertheless, the seriality of mutually incompatible conscious events is well supported by objective research over some two centuries of experimental work. A simple illustration would be to try to be conscious of two interpretations of an ambiguous figure or word at the same time. When timing is precisely controlled, as in the case of the audio and video tracks of the same movie, seriality appears to be compulsory for potentially conscious events presented within the same 100 ms interval.

J. W. Dalton has criticized the global workspace theory on the grounds that it provides, at best, an account of the cognitive function of consciousness, and fails even to address the deeper problem of its nature, of what consciousness is, and of how any mental process whatsoever can be conscious: the so-called "hard problem of consciousness".[4] A. C. Elitzur has argued, however, "While this hypothesis does not address the 'hard problem', namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition."[5]

New work by Richard Robinson shows promise in establishing the brain functions involved in this model and may help shed light on how we understand signs or symbols and reference these to our semiotic registers.[6]

What Is Consciousness?

Scientists are beginning to unravel a mystery that has long vexed philosophers
Consciousness is everything you experience. It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.

The origin and nature of these experiences, sometimes referred to as qualia, have been a mystery from the earliest days of antiquity right up to the present. Many modern analytic philosophers of mind, most prominently perhaps Daniel Dennett of Tufts University, find the existence of consciousness such an intolerable affront to what they believe should be a meaningless universe of matter and the void that they declare it to be an illusion. That is, they either deny that qualia exist or argue that they can never be meaningfully studied by science.

If that assertion were true, this essay would be very short. All I would need to explain is why you, I, and most everybody else are so convinced that we have feelings at all. If I have a tooth abscess, however, a sophisticated argument to persuade me that my pain is delusional will not lessen its torment one iota. As I have very little sympathy for this desperate solution to the mind-body problem, I shall move on.

The majority of scholars accept consciousness as a given and seek to understand its relationship to the objective world described by science. More than a quarter of a century ago Francis Crick and I decided to set aside philosophical discussions on consciousness (which have engaged scholars since at least the time of Aristotle) and instead search for its physical footprints. What is it about a highly excitable piece of brain matter that gives rise to consciousness? Once we can understand that, we hope to get closer to solving the more fundamental problem.

We seek, in particular, the neuronal correlates of consciousness (NCC), defined as the minimal neuronal mechanisms jointly sufficient for any specific conscious experience. What must happen in your brain for you to experience a toothache, for example? Must some nerve cells vibrate at some magical frequency? Do some special “consciousness neurons” have to be activated? In which brain regions would these cells be located?

Neuronal Correlates of Consciousness

When defining the NCC, the qualifier “minimal” is important. The brain as a whole can be considered an NCC, after all: it generates experience, day in and day out. But the seat of consciousness can be further ring-fenced. Take the spinal cord, a foot-and-a-half-long flexible tube of nervous tissue inside the backbone with about a billion nerve cells. If the spinal cord is completely severed by trauma to the neck region, victims are paralyzed in legs, arms and torso, unable to control their bowel and bladder, and without bodily sensations. Yet these tetraplegics continue to experience life in all its variety—they see, hear, smell, feel emotions and remember as much as before the incident that radically changed their life.

Or consider the cerebellum, the “little brain” underneath the back of the brain. One of the most ancient brain circuits in evolutionary terms, it is involved in motor control, posture and gait and in the fluid execution of complex sequences of motor movements. Playing the piano, typing, ice dancing or climbing a rock wall—all these activities involve the cerebellum. It has the brain's most glorious neurons, called Purkinje cells, which possess tendrils that spread like a sea fan coral and harbor complex electrical dynamics. It also has by far the most neurons, about 69 billion (most of which are the star-shaped cerebellar granule cells), four times more than in the rest of the brain combined.

What happens to consciousness if parts of the cerebellum are lost to a stroke or to the surgeon's knife? Very little! Cerebellar patients complain of several deficits, such as the loss of fluidity of piano playing or keyboard typing but never of losing any aspect of their consciousness. They hear, see and feel fine, retain a sense of self, recall past events and continue to project themselves into the future. Even being born without a cerebellum does not appreciably affect the conscious experience of the individual.

All of the vast cerebellar apparatus is irrelevant to subjective experience. Why? Important hints can be found within its circuitry, which is exceedingly uniform and parallel (just as batteries may be connected in parallel). The cerebellum is almost exclusively a feed-forward circuit: one set of neurons feeds the next, which in turn influences a third set. There are no complex feedback loops that reverberate with electrical activity passing back and forth. (Given the time needed for a conscious perception to develop, most theoreticians infer that it must involve feedback loops within the brain's cavernous circuitry.) Moreover, the cerebellum is functionally divided into hundreds or more independent computational modules. Each one operates in parallel, with distinct, nonoverlapping inputs and output, controlling movements of different motor or cognitive systems. They scarcely interact—another feature held indispensable for consciousness.

One important lesson from the spinal cord and the cerebellum is that the genie of consciousness does not just appear when any neural tissue is excited. More is needed. This additional factor is found in the gray matter making up the celebrated cerebral cortex, the outer surface of the brain. It is a laminated sheet of intricately interconnected nervous tissue, the size and width of a 14-inch pizza. Two of these sheets, highly folded, along with their hundreds of millions of wires—the white matter—are crammed into the skull. All available evidence implicates neocortical tissue in generating feelings.

We can narrow down the seat of consciousness even further. Take, for example, experiments in which different stimuli are presented to the right and the left eyes. Suppose a picture of Donald Trump is visible only to your left eye and one of Hillary Clinton only to your right eye. We might imagine that you would see some weird superposition of Trump and Clinton. In reality, you will see Trump for a few seconds, after which he will disappear and Clinton will appear, after which she will go away and Trump will reappear. The two images will alternate in a never-ending dance because of what neuroscientists call binocular rivalry. Because your brain is getting an ambiguous input, it cannot decide: Is it Trump, or is it Clinton?

If, at the same time, you are lying inside a magnetic scanner that registers brain activity, experimenters will find that a broad set of cortical regions, collectively known as the posterior hot zone, is active. These are the parietal, occipital and temporal regions in the posterior part of cortex that play the most significant role in tracking what we see. Curiously, the primary visual cortex that receives and passes on the information streaming up from the eyes does not signal what the subject sees. A similar hierarchy of labor appears to be true of sound and touch: primary auditory and primary somatosensory cortices do not directly contribute to the content of auditory or somatosensory experience. Instead it is the next stages of processing—in the posterior hot zone—that give rise to conscious perception, including the image of Trump or Clinton.

More illuminating are two clinical sources of causal evidence: electrical stimulation of cortical tissue and the study of patients following the loss of specific regions caused by injury or disease. Before removing a brain tumor or the locus of a patient's epileptic seizures, for example, neurosurgeons map the functions of nearby cortical tissue by directly stimulating it with electrodes. Stimulating the posterior hot zone can trigger a diversity of distinct sensations and feelings. These could be flashes of light, geometric shapes, distortions of faces, auditory or visual hallucinations, a feeling of familiarity or unreality, the urge to move a specific limb, and so on. Stimulating the front of the cortex is a different matter: by and large, it elicits no direct experience.

A second source of insights are neurological patients from the first half of the 20th century. Surgeons sometimes had to excise a large belt of prefrontal cortex to remove tumors or to ameliorate epileptic seizures. What is remarkable is how unremarkable these patients appeared. The loss of a portion of the frontal lobe did have certain deleterious effects: the patients developed a lack of inhibition of inappropriate emotions or actions, motor deficits, or uncontrollable repetition of specific action or words. Following the operation, however, their personality and IQ improved, and they went on to live for many more years, with no evidence that the drastic removal of frontal tissue significantly affected their conscious experience. Conversely, removal of even small regions of the posterior cortex, where the hot zone resides, can lead to a loss of entire classes of conscious content: patients are unable to recognize faces or to see motion, color or space.

So it appears that the sights, sounds and other sensations of life as we experience it are generated by regions within the posterior cortex. As far as we can tell, almost all conscious experiences have their origin there. What is the crucial difference between these posterior regions and much of the prefrontal cortex, which does not directly contribute to subjective content? The truth is that we do not know. Even so—and excitingly—a recent finding indicates that neuroscientists may be getting closer.

The Consciousness Meter

An unmet clinical need exists for a device that reliably detects the presence or absence of consciousness in impaired or incapacitated individuals. During surgery, for example, patients are anesthetized to keep them immobile and their blood pressure stable and to eliminate pain and traumatic memories. Unfortunately, this goal is not always met: every year hundreds of patients have some awareness under anesthesia.

Another category of patients, who have severe brain injury because of accidents, infections or extreme intoxication, may live for years without being able to speak or respond to verbal requests. Establishing that they experience life is a grave challenge to the clinical arts. Think of an astronaut adrift in space, listening to mission control's attempts to contact him. His damaged radio does not relay his voice, and he appears lost to the world. This is the forlorn situation of patients whose damaged brain will not let them communicate to the world—an extreme form of solitary confinement.
In the early 2000s Giulio Tononi of the University of Wisconsin–Madison and Marcello Massimini, now at the University of Milan in Italy, pioneered a technique, called zap and zip, to probe whether someone is conscious or not. The scientists held a sheathed coil of wire against the scalp and “zapped” it—sent an intense pulse of magnetic energy into the skull—inducing a brief electric current in the neurons underneath. The perturbation, in turn, excited and inhibited the neurons' partner cells in connected regions, in a chain reverberating across the cortex, until the activity died out. A network of electroencephalogram (EEG) sensors, positioned outside the skull, recorded these electrical signals. As they unfolded over time, these traces, each corresponding to a specific location in the brain below the skull, yielded a movie.

These unfolding records neither sketched a stereotypical pattern, nor were they completely random. Remarkably, the more predictable these waxing and waning rhythms were, the more likely the brain was unconscious. The researchers quantified this intuition by compressing the data in the movie with an algorithm commonly used to "zip" computer files. The zipping yielded an estimate of the complexity of the brain's response. Volunteers who were awake turned out to have a "perturbational complexity index" of between 0.31 and 0.70, dropping to below 0.31 when deeply asleep or anesthetized. Massimini and Tononi tested this zap-and-zip measure on 48 patients who were brain-injured but responsive and awake, finding that in every case, the method confirmed the behavioral evidence for consciousness.
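The compression step can be illustrated with a toy analogue: binarize the evoked responses and measure how well they compress. The real perturbational complexity index uses the Lempel-Ziv complexity of statistically significant cortical source activity with careful normalization; the sketch below, including its crude thresholding and the zlib choice, is illustrative only.

```python
import zlib
import numpy as np

def zap_and_zip_complexity(responses):
    """Toy analogue of the perturbational complexity index (illustrative only)."""
    binary = (responses > responses.mean()).astype(np.uint8)   # crude binarization
    raw = np.packbits(binary.ravel()).tobytes()
    compressed = zlib.compress(raw, level=9)
    # Low ratio = highly compressible = stereotyped, 'unconscious-like' response.
    return len(compressed) / len(raw)

rng = np.random.default_rng(0)
stereotyped = np.tile(rng.standard_normal(50), (64, 1))    # same wave on all 64 channels
differentiated = rng.standard_normal((64, 50))             # spatially diverse response
print(zap_and_zip_complexity(stereotyped), zap_and_zip_complexity(differentiated))
```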

The team then applied zap and zip to 81 patients who were minimally conscious or in a vegetative state. For the former group, which showed some signs of nonreflexive behavior, the method correctly found 36 out of 38 patients to be conscious. It misdiagnosed two patients as unconscious. Of the 43 vegetative-state patients in which all bedside attempts to establish communication failed, 34 were labeled as unconscious, but nine were not. Their brains responded similarly to those of conscious controls—implying that they were conscious yet unable to communicate with their loved ones.

Ongoing studies seek to standardize and improve zap and zip for neurological patients and to extend it to psychiatric and pediatric patients. Sooner or later scientists will discover the specific set of neural mechanisms that give rise to any one experience. Although these findings will have important clinical implications and may give succor to families and friends, they will not answer some fundamental questions: Why these neurons and not those? Why this particular frequency and not that? Indeed, the abiding mystery is how and why any highly organized piece of active matter gives rise to conscious sensation. After all, the brain is like any other organ, subject to the same physical laws as the heart or the liver. What makes it different? What is it about the biophysics of a chunk of highly excitable brain matter that turns gray goo into the glorious surround sound and Technicolor that is the fabric of everyday experience?

Toward a Fundamental Theory

Ultimately what we need is a satisfying scientific theory of consciousness that predicts under which conditions any particular physical system—whether it is a complex circuit of neurons or silicon transistors—has experiences. Furthermore, why does the quality of these experiences differ? Why does a clear blue sky feel so different from the screech of a badly tuned violin? Do these differences in sensation have a function, and if so, what is it? Such a theory will allow us to infer which systems will experience anything. Absent a theory with testable predictions, any speculation about machine consciousness is based solely on our intuition, which the history of science has shown is not a reliable guide.

Fierce debates have arisen around the two most popular theories of consciousness. One is the global neuronal workspace (GNW) theory, developed by psychologist Bernard J. Baars and neuroscientists Stanislas Dehaene and Jean-Pierre Changeux. The theory begins with the observation that when you are conscious of something, many different parts of your brain have access to that information. If, on the other hand, you act unconsciously, that information is localized to the specific sensorimotor system involved. For example, when you type fast, you do so automatically. Asked how you do it, you would not know: you have little conscious access to that information, which also happens to be localized to the brain circuits linking your eyes to rapid finger movements.

GNW argues that consciousness arises from a particular type of information processing—familiar from the early days of artificial intelligence, when specialized programs would access a small, shared repository of information. Whatever data were written onto this “blackboard” became available to a host of subsidiary processes: working memory, language, the planning module, and so on. According to GNW, consciousness emerges when incoming sensory information, inscribed onto such a blackboard, is broadcast globally to multiple cognitive systems—which process these data to speak, store or call up a memory or execute an action.

Because the blackboard has limited space, we can only be aware of a little information at any given instant. The network of neurons that broadcast these messages is hypothesized to be located in the frontal and parietal lobes. Once these sparse data are broadcast on this network and are globally available, the information becomes conscious. That is, the subject becomes aware of it. Current machines do not yet rise to this level of cognitive sophistication, but on this view that is only a question of time: GNW posits that computers of the future will be conscious.
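The blackboard idea is easy to caricature in code. The sketch below is only a toy, with made-up module names and a made-up “salience” competition; it is not a neural model, just an illustration of a limited-capacity workspace whose winning content is broadcast to every subscribed process.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Message:
    source: str        # which specialist module produced this content
    content: str       # the information itself
    salience: float    # how strongly it competes for the workspace

@dataclass
class GlobalWorkspace:
    """Toy blackboard: holds at most one message at a time (the capacity
    limit) and broadcasts it to every subscribed module."""
    subscribers: List[Callable[[Message], None]] = field(default_factory=list)
    current: Optional[Message] = None

    def compete(self, candidates: List[Message]) -> None:
        # Only the most salient candidate gains access to the workspace;
        # the rest stay local to their producing modules ("unconscious").
        if not candidates:
            return
        winner = max(candidates, key=lambda m: m.salience)
        self.current = winner
        for receive in self.subscribers:
            receive(winner)   # global broadcast: memory, language, planning...

# Hypothetical subsidiary processes that consume whatever is broadcast.
def verbal_report(msg: Message) -> None:
    print(f"language module reports: '{msg.content}' (from {msg.source})")

def working_memory(msg: Message) -> None:
    print(f"working memory stores: '{msg.content}'")

workspace = GlobalWorkspace(subscribers=[verbal_report, working_memory])
workspace.compete([
    Message("visual cortex", "a red traffic light ahead", salience=0.9),
    Message("motor system", "fingers typing the letter k", salience=0.2),
])

In this toy, the typing message loses the competition and never reaches the other modules, loosely mirroring the point above that fast, automatic typing stays local to its sensorimotor circuits.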

Integrated information theory (IIT), developed by Tononi and his collaborators, including me, has a very different starting point: experience itself. Each experience has certain essential properties. It is intrinsic, existing only for the subject as its “owner”; it is structured (a yellow cab braking while a brown dog crosses the street); and it is specific—distinct from any other conscious experience, such as a particular frame in a movie. Furthermore, it is unified and definite. When you sit on a park bench on a warm, sunny day, watching children play, the different parts of the experience—the breeze playing in your hair or the joy of hearing your toddler laugh—cannot be separated into parts without the experience ceasing to be what it is.

Tononi postulates that any complex and interconnected mechanism whose structure encodes a set of cause-and-effect relationships will have these properties—and so will have some level of consciousness. It will feel like something from the inside. But if, like the cerebellum, the mechanism lacks integration and complexity, it will not be aware of anything. As IIT states it, consciousness is intrinsic causal power associated with complex mechanisms such as the human brain.

IIT also derives, from the complexity of the underlying interconnected structure, a single nonnegative number Φ (pronounced “fy”) that quantifies this consciousness. If Φ is zero, it does not feel like anything to be the system. Conversely, the bigger this number, the more intrinsic causal power the system possesses and the more conscious it is. The brain, which has enormous and highly specific connectivity, possesses very high Φ, which implies a high level of consciousness. IIT explains a number of observations, such as why the cerebellum does not contribute to consciousness and why the zap-and-zip meter works. (The quantity the meter measures is a very crude approximation of Φ.)
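Computing Φ as IIT actually defines it involves cause-and-effect repertoires, every possible partition of the system, and a distance measure over probability distributions, and it is intractable for anything brain-sized. Still, the underlying intuition (how much of a system's cause-and-effect structure is destroyed by cutting it in two) can be caricatured for a tiny network. The Python toy below is such a caricature, under loud assumptions: three binary nodes, a made-up deterministic update rule, a uniform distribution over states, and plain mutual information standing in for IIT's actual measure.

from itertools import product
from collections import Counter
from math import log2

# Toy deterministic dynamics on three binary nodes; each node's next state
# depends on the *other* nodes, so the parts genuinely constrain one another.
def step(state):
    a, b, c = state
    return (b & c, a ^ c, a | b)

STATES = list(product((0, 1), repeat=3))

def mutual_information(pairs):
    # I(X; Y) estimated from (x, y) samples, each assumed equally likely.
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def integration_proxy():
    # How much the whole system's present state says about its next state.
    i_whole = mutual_information([(s, step(s)) for s in STATES])

    # For every bipartition (for three nodes: one node versus the other two),
    # measure what the isolated parts know about their own futures, and find
    # the cut that loses the least information (the weakest link).
    nodes = (0, 1, 2)
    losses = []
    for node in nodes:
        part_a, part_b = (node,), tuple(i for i in nodes if i != node)
        i_parts = 0.0
        for part in (part_a, part_b):
            pairs = [(tuple(s[i] for i in part), tuple(step(s)[i] for i in part))
                     for s in STATES]
            i_parts += mutual_information(pairs)
        losses.append(i_whole - i_parts)
    return i_whole, min(losses)

whole, phi_like = integration_proxy()
print(f"whole-system information: {whole:.2f} bits")
print(f"crude integration across the weakest cut: {phi_like:.2f} bits")

A system whose parts ignored one another (three nodes each simply copying its own previous state) would score zero on this crude measure, loosely echoing the point above about the cerebellum's lack of integration.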

IIT also predicts that a sophisticated simulation of a human brain running on a digital computer cannot be conscious—even if it can speak in a manner indistinguishable from a human being. Just as simulating the massive gravitational attraction of a black hole does not actually deform spacetime around the computer implementing the astrophysical code, programming for consciousness will never create a conscious computer. Consciousness cannot be computed: it must be built into the structure of the system.

Two challenges lie ahead. One is to use the increasingly refined tools at our disposal to observe and probe the vast coalitions of highly heterogeneous neurons making up the brain to further delineate the neuronal footprints of consciousness. This effort will take decades, given the byzantine complexity of the central nervous system. The other is to verify or falsify the two currently dominant theories. Or, perhaps, to construct a better theory out of fragments of these two that will satisfactorily explain the central puzzle of our existence: how a three-pound organ with the consistency of tofu exudes the feeling of life.
