
Friday, June 22, 2018

Computational theory of mind

From Wikipedia, the free encyclopedia
In philosophy, the computational theory of mind (CTM) refers to a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition.[1] The theory was proposed in its modern form by Hilary Putnam in 1961, and developed by the MIT philosopher and cognitive scientist Jerry Fodor (who was Putnam's PhD student) in the 1960s, 1970s and 1980s.[2][3] Despite being vigorously disputed in analytic philosophy in the 1990s (due to work by Putnam himself, John Searle, and others), the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology; in the 2000s and 2010s the view has resurfaced in analytic philosophy (Scheutz 2003, Edelman 2008).

The computational theory of mind holds that the mind is a computational system that is realized (i.e., physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood. Computation is commonly understood in terms of Turing machines, which manipulate symbols according to a rule in combination with the internal state of the machine. The critical aspect of such a computational model is that we can abstract away from the particular physical details of the machine that is implementing the computation.[3] This is to say that computation can be implemented by silicon chips or neural networks, so long as there is a series of outputs based on manipulations of inputs and internal states, performed according to a rule. CTM therefore holds that the mind is not simply analogous to a computer program, but that it is literally a computational system.[3]
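
To make the abstraction concrete, here is a minimal sketch of such a machine. Everything in it is hypothetical: the bit-inverting rule table, the blank symbol "_", and the tape encoding are invented for illustration. The point is that any substrate realizing the same rule table realizes the same computation.

```python
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Step a one-tape machine until it enters the 'halt' state."""
    cells = dict(enumerate(tape))             # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")         # "_" stands for the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write                   # behavior depends only on symbol
        head += 1 if move == "R" else -1      # and internal state, not substrate
    return "".join(cells[i] for i in sorted(cells))

# A hypothetical rule table: flip every bit, halt on the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", rules))      # prints "0100_"
```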

Computational theories of mind are often said to require mental representation because the 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object; it must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However, the representational theory of mind shifts the focus to the symbols being manipulated. This approach better accounts for systematicity and productivity.[3] In Fodor's original views, the computational theory of mind is also related to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics (see 'Semantics of mental states' below).

Recent work has suggested that we make a distinction between the mind and cognition. Building from the tradition of McCulloch and Pitts, the Computational Theory of Cognition (CTC) states that neural computations explain cognition.[1] The Computational Theory of Mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. That is to say, CTM entails CTC. CTC, by contrast, leaves open the possibility that some aspects of the mind are non-computational; phenomenal consciousness, for instance, could fulfill some other functional role. CTC therefore provides an important explanatory framework for understanding neural networks, while avoiding counter-arguments that center around phenomenal consciousness.

"Computer metaphor"

Computational theory of mind is not the same as the computer metaphor, comparing the mind to a modern-day digital computer.[4] Computational theory just uses some of the same principles as those found in digital computing.[4] While the computer metaphor draws an analogy between the mind as software and the brain as hardware, CTM is the claim that the mind is a computational system.

'Computational system' is not meant to mean a modern-day electronic computer. Rather, a computational system is a symbol manipulator that follows step-by-step rules to transform inputs into outputs. Alan Turing described this type of device with his concept of a Turing machine.

Early proponents

One of the earliest proponents of the computational theory of mind was Thomas Hobbes, who said, "by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract."[5] Since Hobbes lived before the contemporary identification of computing with instantiating effective procedures, he cannot be interpreted as explicitly endorsing the computational theory of mind, in the contemporary sense.

Causal picture of thoughts

At the heart of the Computational Theory of Mind is the idea that thoughts are a form of computation, and a computation is by definition a systematic set of laws for the relations among representations. This means that a mental state represents something if and only if there is some causal correlation between the mental state and that particular thing. An example would be seeing dark clouds and thinking “clouds mean rain”: there is a correlation between the thought of the clouds and rain, since clouds cause rain. This is sometimes known as natural meaning. Conversely, there is another side to the causality of thoughts: the non-natural representation of thoughts. An example would be seeing a red traffic light and thinking “red means stop”. There is nothing about the color red that indicates it represents stopping; the association is an invented convention, similar to languages and their ability to form representations.

Semantics of mental states

The computational theory of mind states that the mind functions as a symbolic operator, and that mental representations are symbolic representations; just as the semantics of language are the features of words and sentences that relate to their meaning, the semantics of mental states are the meanings of representations, the definitions of the ‘words’ of the language of thought. If these basic mental states can have a particular meaning just as words in a language do, then more complex mental states (thoughts) can be created, even if they have never been encountered before, just as new sentences can be understood on first reading so long as their basic components are understood and they are syntactically correct. For example: “I have eaten plum pudding every day of this fortnight.” While it is doubtful many have seen this particular configuration of words, most readers can nonetheless glean an understanding of the sentence because it is syntactically correct and its constituent parts are understood.
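
A toy sketch can make this compositionality concrete. The lexicon and the two-word 'syntax' below are invented for illustration, not a claim about how the language of thought is actually structured; the point is only that the meanings of novel combinations fall out of the meanings of their parts.

```python
# A hypothetical lexicon: atomic symbols paired with fixed meanings.
meanings = {
    "pudding": "a boiled dessert",
    "rain":    "falling water",
    "eat":     lambda x: f"consume {x}",
    "expect":  lambda x: f"anticipate {x}",
}

def interpret(expr):
    """The meaning of a (verb, noun) pair is built from the meanings of its parts."""
    verb, noun = expr
    return meanings[verb](meanings[noun])

# Productivity: combinations never seen before are still interpretable.
print(interpret(("eat", "pudding")))   # consume a boiled dessert
print(interpret(("expect", "rain")))   # anticipate falling water
```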

Criticism

A range of arguments have been proposed against Computational Theories of Mind.

An early, though indirect, criticism of the Computational Theory of Mind comes from philosopher John Searle. In his thought experiment known as the Chinese room, Searle attempts to refute the claims that artificially intelligent systems can be said to have intentionality and understanding and that these systems, because they can be said to be minds themselves, are sufficient for the study of the human mind.[6] Searle asks us to imagine that there is a man in a room with no way of communicating to anyone or anything outside of the room except for a piece of paper with symbols written on it that is passed under the door. With the paper, the man is to use a series of provided rule books to return paper containing different symbols. Unknown to the man in the room, these symbols belong to the Chinese language, and this process generates a conversation that a Chinese speaker outside of the room can actually understand. Searle contends that the man in the room does not understand the Chinese conversation. This is essentially what the computational theory of mind presents us with: a model in which the mind simply decodes symbols and outputs more symbols. Searle argues that this is not real understanding or intentionality. Though originally written as a repudiation of the idea that computers work like minds, it is not a stretch to also argue from this position that minds do not work like computers.

Searle has further raised questions about what exactly constitutes a computation:
the wall behind my back is right now implementing the WordStar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of WordStar. But if the wall is implementing WordStar, if it is a big enough wall it is implementing any program, including any program implemented in the brain.[7]
Objections like Searle’s might be called insufficiency objections. They claim that computational theories of mind fail because computation is insufficient to account for some capacity of the mind. Arguments from qualia, such as Frank Jackson’s Knowledge argument, can be understood as objections to computational theories of mind in this way—though they take aim at physicalist conceptions of the mind in general, and not computational theories specifically.

There are also objections which are directly tailored for computational theories of mind.
Putnam himself (see in particular Representation and Reality and the first part of Renewing Philosophy) became a prominent critic of computationalism for a variety of reasons, including ones related to Searle's Chinese room arguments, questions of world-word reference relations, and thoughts about the mind-body relationship. Regarding functionalism in particular, Putnam has claimed, along lines similar to but more general than Searle's arguments, that the question of whether the human mind can implement computational states is not relevant to the question of the nature of mind, because "every ordinary open system realizes every abstract finite automaton."[8] Computationalists have responded by aiming to develop criteria describing what exactly counts as an implementation.[9][10][11]

Roger Penrose has proposed the idea that the human mind does not use a knowably sound calculation procedure to understand and discover mathematical intricacies. This would mean that a normal Turing complete computer would not be able to ascertain certain mathematical truths that human minds can.[12]

Prominent scholars

  • Daniel Dennett proposed the Multiple Drafts Model, in which consciousness seems linear but is actually blurry and gappy, distributed over space and time in the brain. Consciousness is the computation; there is no extra step or "Cartesian Theater" in which you become conscious of the computation.
  • Jerry Fodor argues that mental states, such as beliefs and desires, are relations between individuals and mental representations. He maintains that these representations can only be correctly explained in terms of a language of thought (LOT) in the mind. Further, this language of thought itself is codified in the brain, not just a useful explanatory tool. Fodor adheres to a species of functionalism, maintaining that thinking and other mental processes consist primarily of computations operating on the syntax of the representations that make up the language of thought. In later work (Concepts and The Elm and the Expert), Fodor has refined and even questioned some of his original computationalist views, and adopted a highly modified version of LOT (see LOT2).
  • David Marr proposed that cognitive processes have three levels of description: the computational level (which describes the computational problem, i.e., the input/output mapping, computed by the cognitive process); the algorithmic level (which presents the algorithm used for computing the problem postulated at the computational level); and the implementational level (which describes the physical implementation of the algorithm postulated at the algorithmic level in biological matter, e.g. the brain) (Marr 1981). A sketch illustrating the three levels follows this list.
  • Ulric Neisser coined the term 'cognitive psychology' in his book published in 1967 (Cognitive Psychology), wherein Neisser characterizes people as dynamic information-processing systems whose mental operations might be described in computational terms.
  • Steven Pinker described a "language instinct," an evolved, built-in capacity to learn language (if not writing).
  • Hilary Putnam proposed functionalism to describe consciousness, asserting that it is the computation that equates to consciousness, regardless of whether the computation is operating in a brain, in a computer, or in a "brain in a vat."
  • Georges Rey, professor at the University of Maryland, builds on Jerry Fodor's representational theory of mind to produce his own version of a Computational/Representational Theory of Thought.
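
The sketch below unpacks the Marr entry above, using sorting as a deliberately simple stand-in problem (our choice, not Marr's example): one computational-level specification admits several algorithmic-level realizations, while the physical implementation is abstracted away.

```python
# Computational level: WHAT is computed -- the input/output mapping.
#   Spec: given a list xs, return a permutation of xs in ascending order.
#
# Algorithmic level: HOW it is computed -- one of many algorithms
# satisfying the same specification.

def insertion_sort(xs):          # one algorithm realizing the spec
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):              # a different algorithm, same mapping
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Implementational level: either algorithm could run on silicon, on a
# different CPU architecture, or (in Marr's case) in neural tissue;
# the physical substrate is abstracted away at the levels above.
assert insertion_sort([3, 1, 2]) == merge_sort([3, 1, 2]) == [1, 2, 3]
```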


Artificial consciousness

From Wikipedia, the free encyclopedia

Artificial consciousness[1] (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to "Define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).
Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC, though there are challenges to that perspective. Proponents of AC believe it is possible to construct systems (e.g., computer systems) that can emulate this NCC interoperation.[2]

Artificial consciousness concepts are also pondered in the philosophy of artificial intelligence through questions about mind, consciousness, and mental states.[3]

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of “raw feels”, “what it is like” or qualia (Block 1997).

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution (Block 1978; Bickle 2003).[4][5]

In his article "Artificial Consciousness: Utopia or Real Possibility" Giorgio Buttazzo says that despite our current technology's ability to simulate autonomy, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, emotions, or free will. A computer, like a washing machine, is a slave operated by its components."[6]

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam 1967).

Computational Foundation argument

One of the most explicit arguments for the plausibility of AC comes from David Chalmers. His proposal, found in his article (Chalmers 2011), is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends his claim thus: computers perform computations; computations can capture other systems' abstract causal organization; mental properties are nothing over and above abstract causal organization; therefore, computers running the right kinds of computations will instantiate mental properties.

The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role". He adverts to the work of Armstrong 1968 and Lewis 1972 in claiming that "[s]ystems with the same causal topology…will share their psychological properties".

Phenomenological properties are not prima facie definable in terms of their causal roles. Establishing that phenomenological properties are amenable to individuation by causal role therefore requires argument. Chalmers provides his Dancing Qualia Argument for this purpose.[7]

Chalmers begins by assuming that agents with identical causal organizations could have different experiences. He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. Ex hypothesi, the experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could "notice" the shift in experience.

Critics of AC object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, the status of a conscious computer that was owned and used as a tool, or as the central computer of a building or large machine, would be particularly ambiguous. Should laws be made for such a case, consciousness would also require a legal definition (for example, a machine's ability to experience pleasure or pain, known as sentience). Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction (see below).

The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[8]

Research and implementation proposals

Aspects of consciousness

There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. Bernard Baars (Baars 1988) and others suggested a variety of functions in which consciousness plays a role: Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming Function, Metacognitive and Self-monitoring Function, and Autoprogramming and Self-maintenance Function. Igor Aleksander suggested 12 principles for artificial consciousness (Aleksander 1995): The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer; the list is not exhaustive.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagination, and it is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling of the physical world, modeling of one's own internal states and processes, and modeling of other conscious entities.

There are at least three types of awareness:[9] agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.[10]

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval.[11] The IDA model[12] elucidates the role of consciousness in the updating of perceptual memory,[13] transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.[14] In IDA, these two memories are implemented computationally using a modified version of Kanerva’s sparse distributed memory architecture.[15]
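
For readers unfamiliar with the architecture, the following is a minimal sketch of a Kanerva-style sparse distributed memory. The sizes, activation radius, and the autoassociative usage are toy choices made here for illustration; IDA's modified version differs in its details.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LOCATIONS, DIM, RADIUS = 1000, 256, 112     # toy sizes; RADIUS is a Hamming threshold

hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, DIM))  # fixed random addresses
counters = np.zeros((N_LOCATIONS, DIM), dtype=int)            # one counter row per location

def _active(address):
    """Locations whose hard address lies within Hamming distance RADIUS."""
    distances = np.sum(hard_addresses != address, axis=1)
    return distances <= RADIUS

def write(address, data):
    """Add +1/-1 votes for each bit of `data` at all active locations."""
    counters[_active(address)] += 2 * data - 1   # map bits {0,1} -> votes {-1,+1}

def read(address):
    """Pool counters over active locations and threshold at zero."""
    return (counters[_active(address)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=DIM)
write(pattern, pattern)                          # autoassociative store
noisy = pattern.copy()
noisy[:10] ^= 1                                  # corrupt 10 bits of the cue
recovered = read(noisy)
print(np.mean(recovered == pattern))             # typically close to 1.0
```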

Learning

Learning is also considered necessary for AC. According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events (Baars 1988). Axel Cleeremans and Luis Jiménez define learning as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments" (Cleeremans 2001).

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander.[16] The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[16] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, executing them only when appropriate in order to simulate and control the real world.

Subjective experience

Subjective experiences or qualia are widely considered to be the hard problem of consciousness. Indeed, the hard problem is held to pose a challenge to physicalism, let alone computationalism. On the other hand, other fields of science also face limits on what can be observed, such as the uncertainty principle in physics, and these limits have not made research in those fields impossible.

Role of cognitive architectures

The term "cognitive architecture" may refer to a theory about the structure of the human mind, or any portion or function thereof, including consciousness. In another context, a cognitive architecture implements the theory on computers. An example is QuBIC: Quantum and Bio-inspired Cognitive Architecture for Machine Consciousness. One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. However, the results need to be in a formalized form so they can be the basis of a computer program. Also, the role of cognitive architecture is for the A.I. to clearly structure, build, and implement it's thought process.

Symbolic or hybrid proposals

Franklin's Intelligent Distribution Agent

Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (Baars 1988, 1997). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and Franklin 2003 for details). While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task."
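
To make the quoted notion of a codelet concrete, here is a minimal sketch in Python (IDA itself is written in Java). The shared workspace queue, the codelet names, and the toy sailor record are invented for illustration and are not taken from IDA's actual implementation.

```python
import threading
import queue

workspace = queue.Queue()   # a stand-in for IDA's shared data structures

def make_codelet(name, work):
    """Wrap a small function as an independently running mini-agent."""
    def run():
        result = work()
        workspace.put((name, result))   # post the result for other codelets
    return threading.Thread(target=run, name=name)

# Two toy codelets, each attending to one aspect of a hypothetical sailor record.
record = {"skills": ["sonar", "navigation"], "preference": "San Diego"}
codelets = [
    make_codelet("skill-matcher", lambda: [s for s in record["skills"] if s == "sonar"]),
    make_codelet("preference-checker", lambda: record["preference"] == "San Diego"),
]
for c in codelets:
    c.start()                # each codelet runs as a separate thread
for c in codelets:
    c.join()
while not workspace.empty():
    print(workspace.get())   # e.g. ('skill-matcher', ['sonar'])
```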

Ron Sun's cognitive architecture CLARION

CLARION posits a two-level representation that explains the distinction between conscious and unconscious mental processes.
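
A heavily simplified sketch of the two-level idea follows: an implicit bottom level (a toy numeric mapping standing in for a trained network) and an explicit top level of articulable rules, with their outputs integrated into one decision. The weights, the rule, and the task are invented here and are not CLARION's actual mechanisms.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))           # implicit level: toy, untrained weights

def bottom_level(features):
    """Implicit processing: graded activations, no articulable rule."""
    return W @ features                # scores for actions 0 and 1

def top_level(features):
    """Explicit processing: articulable if-then rules."""
    brightness, size, motion = features
    if motion > 0.5:                   # an explicit, reportable rule
        return np.array([0.0, 1.0])    # prefer action 1 ("attend")
    return np.array([1.0, 0.0])        # otherwise action 0 ("ignore")

def decide(features, mix=0.5):
    """Integrate the two levels; `mix` balances implicit vs explicit."""
    scores = (1 - mix) * bottom_level(features) + mix * top_level(features)
    return int(np.argmax(scores))

print(decide(np.array([0.2, 0.9, 0.8])))   # rule fires -> likely action 1
```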

CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill learning tasks have been simulated using CLARION, spanning the spectrum from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task (Sun 2002). Among them, SRT, AGL, and PC are typical implicit learning tasks, highly relevant to the issue of consciousness because they operationalize the notion of consciousness in the context of psychological experiments.

Ben Goertzel's OpenCog

Ben Goertzel is pursuing an embodied AGI through the open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics being carried out at the Hong Kong Polytechnic University.

Connectionist proposals

Haikonen's cognitive architecture

Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2003) was reportedly not capable of AC, but did exhibit emotions as expected. See Doan (2009) for a comprehensive introduction to Haikonen's cognitive architecture. An updated account of Haikonen's architecture, along with a summary of his philosophical views, is given in Haikonen (2012).

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination") (Shanahan 2006). For discussions of Shanahan's architecture, see (Gamez 2008) and (Reggia 2013) and Chapter 20 of (Haikonen 2012).

Takeno's self-awareness research

Self-awareness in robots is being investigated by Junichi Takeno[17] at Meiji University in Japan. Takeno asserts that he has developed a robot capable of discriminating between its own image in a mirror and any other robot having an identical image,[18][19] and this claim has already been reviewed (Takeno, Inaba & Suzuki 2005). Takeno asserts that he first contrived the computational module called a MoNAD, which has a self-aware function, and then constructed the artificial consciousness system by formulating the relationships between emotions, feelings and reason by connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, stating that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." He holds that the most important point in developing artificial consciousness, or in clarifying human consciousness, is the development of a function of self-awareness, and claims that he has demonstrated physical and mathematical evidence for this in his thesis.[20] He also demonstrated that robots can study episodes in memory where the emotions were stimulated and use this experience to take predictive actions to prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).

Aleksander's impossible mind

Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language.[21] Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.[22]

Thaler's Creativity Machine Paradigm

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[23][24][25] or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.[26] He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity.[27][28][29] Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.[28][30][31][32][33][34]

Michael Graziano's attention schema

In 2011, Michael Graziano and Sabine Kastner published a paper named "Human consciousness and its relationship to social neuroscience: A novel hypothesis" proposing a theory of consciousness as an attention schema.[35] Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain".[2] This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial place of a person's body.[2] This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself. The brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
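
A toy sketch of that proposed mechanism, under simplifying assumptions of our own: attention is modeled as a normalized weighting over sensory signals, and the "schema" is a deliberately coarse, lossy model of that weighting. The signals, threshold, and representation are all invented for illustration.

```python
import numpy as np

signals = {"face": 0.9, "traffic": 0.4, "birdsong": 0.1}   # salience of inputs

def attention(signals):
    """Competitive enhancement: normalize salience into attention weights."""
    values = np.array(list(signals.values()))
    weights = values / values.sum()
    return dict(zip(signals, weights))

def attention_schema(weights, threshold=0.4):
    """A simplified, lossy self-model of attention: it reports only what is
    strongly attended, not the full mechanics of the enhancement."""
    return {item: "aware" for item, w in weights.items() if w >= threshold}

weights = attention(signals)
print(weights)                    # the full attentional state
print(attention_schema(weights))  # the schematic self-model: {'face': 'aware'}
```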

Testing

The most well-known method for testing machine intelligence is the Turing test. But when interpreted as only observational, this test contradicts the philosophy-of-science principle that observations are theory-dependent. It has also been suggested that Alan Turing's recommendation of imitating not an adult human consciousness but a human child's consciousness should be taken seriously.[36]

Other tests, such as ConsScale, test the presence of features inspired by biological systems, or measure the cognitive development of artificial systems.

Qualia, or phenomenological consciousness, are an inherently first-person phenomenon. Although various systems may display various signs of behavior correlated with functional consciousness, there is no conceivable way in which third-person tests can have access to first-person phenomenological features. Because of that, and because there is no empirical definition of consciousness,[37] a test of the presence of consciousness in AC may be impossible.

In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments.[38] He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures’ consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.

In fiction

Characters with artificial consciousness (or at least with personalities that imply they have consciousness) appear in many works of fiction.

Transcendentalism

From Wikipedia, the free encyclopedia

Transcendentalism is a philosophical movement that developed in the late 1820s and 1830s in the eastern United States.[1][2][3] It arose as a protest against the general state of intellectualism and spirituality at the time.[4] The doctrine of the Unitarian church as taught at Harvard Divinity School was of particular interest.

Transcendentalism emerged from "English and German Romanticism, the Biblical criticism of Johann Gottfried Herder and Friedrich Schleiermacher, the skepticism of David Hume",[1] and the transcendental philosophy of Immanuel Kant and German Idealism. Miller and Versluis regard Emanuel Swedenborg as a pervasive influence on transcendentalism.[5][6] It was also strongly influenced by Hindu texts on philosophy of the mind and spirituality, especially the Upanishads.

A core belief of transcendentalism is in the inherent goodness of people and nature. Adherents believe that society and its institutions have corrupted the purity of the individual, and they have faith that people are at their best when truly "self-reliant" and independent.

Transcendentalism emphasizes subjective intuition over objective empiricism. Adherents believe that individuals are capable of generating completely original insights with little attention and deference to past masters.

Origin

Transcendentalism is closely related to Unitarianism, the dominant religious movement in Boston in the early nineteenth century. It started to develop after Unitarianism took hold at Harvard University, following the elections of Henry Ware as the Hollis Professor of Divinity in 1805 and of John Thornton Kirkland as President in 1810. Transcendentalism was not a rejection of Unitarianism; rather, it developed as an organic consequence of the Unitarian emphasis on free conscience and the value of intellectual reason. The transcendentalists were not content with the sobriety, mildness, and calm rationalism of Unitarianism. Instead, they longed for a more intense spiritual experience. Thus, transcendentalism was not born as a counter-movement to Unitarianism, but as a parallel movement to the very ideas introduced by the Unitarians.[7]

Transcendental Club


Transcendentalism became a coherent movement with the founding of the Transcendental Club in Cambridge, Massachusetts, on September 8, 1836, by prominent New England intellectuals, including George Putnam (1807–78, the Unitarian minister in Roxbury),[8] Ralph Waldo Emerson, and Frederic Henry Hedge. From 1840, the group frequently published in their journal The Dial, along with other venues.

Second wave of transcendentalists

By the late 1840s, Emerson believed that the movement was dying out, and even more so after the death of Margaret Fuller in 1850. "All that can be said," Emerson wrote, "is that she represents an interesting hour and group in American cultivation."[9] There was, however, a second wave of transcendentalists, including Moncure Conway, Octavius Brooks Frothingham, Samuel Longfellow and Franklin Benjamin Sanborn.[10] Notably, the transgression of the spirit, most often evoked by the poet's prosaic voice, is said to instill in the reader a sense of purposefulness. This is the underlying theme in the majority of transcendentalist essays and papers, all of which are centered on subjects asserting a love for individual expression.[11] Though the group was mostly made up of struggling aesthetes, the wealthiest among them was Samuel Gray Ward, who, after a few contributions to The Dial, focused on his banking career.[12]

Beliefs

Transcendentalists are strong believers in the power of the individual, and the movement focuses primarily on personal freedom. Their beliefs are closely linked with those of the Romantics, but differ in an attempt to embrace, or at least not to oppose, the empiricism of science.

Transcendental knowledge

Transcendentalists desire to ground their religion and philosophy in principles based upon the German Romanticism of Herder and Schleiermacher. Transcendentalism merged "English and German Romanticism, the Biblical criticism of Herder and Schleiermacher, and the skepticism of Hume",[1] and the transcendental philosophy of Immanuel Kant (and of German Idealism more generally), interpreting Kant's a priori categories as a priori knowledge. Early transcendentalists were largely unacquainted with German philosophy in the original and relied primarily on the writings of Thomas Carlyle, Samuel Taylor Coleridge, Victor Cousin, Germaine de Staël, and other English and French commentators for their knowledge of it. The transcendental movement can be described as an American outgrowth of English Romanticism.

Individualism

Transcendentalists believe that society and its institutions—particularly organized religion and political parties—corrupt the purity of the individual. They have faith that people are at their best when truly "self-reliant" and independent. It is only from such real individuals that true community can form. Even with this necessary individuality, transcendentalists also believe that all people are outlets for the "Over-soul." Because the Over-soul is one, this unites all people as one being. Emerson alludes to this concept in the introduction of the American Scholar address, "that there is One Man, - present to all particular men only partially, or through one faculty; and that you must take the whole society to find the whole man."[14] Such an ideal is in harmony with Transcendentalist individualism, as each person is empowered to behold within him or herself a piece of the divine Over-soul.

Indian religions

Transcendentalism has been directly influenced by Indian religions.[15][16][note 1] Thoreau in Walden spoke of the Transcendentalists' debt to Indian religions directly:

In the morning I bathe my intellect in the stupendous and cosmogonal philosophy of the Bhagavat Geeta, since whose composition years of the gods have elapsed, and in comparison with which our modern world and its literature seem puny and trivial; and I doubt if that philosophy is not to be referred to a previous state of existence, so remote is its sublimity from our conceptions. I lay down the book and go to my well for water, and lo! there I meet the servant of the Brahmin, priest of Brahma, and Vishnu and Indra, who still sits in his temple on the Ganges reading the Vedas, or dwells at the root of a tree with his crust and water-jug. I meet his servant come to draw water for his master, and our buckets as it were grate together in the same well. The pure Walden water is mingled with the sacred water of the Ganges.[17]
In 1844, the first English translation of the Lotus Sutra was included in The Dial, a publication of the New England Transcendentalists, translated from French by Elizabeth Palmer Peabody.[18][19]

Idealism

Transcendentalists differ in their interpretations of the practical aims of will. Some adherents link it with utopian social change; Brownson, for example, connected it with early socialism, but others consider it an exclusively individualist and idealist project. Emerson believed the latter; in his 1842 lecture "The Transcendentalist", he suggested that the goal of a purely transcendental outlook on life was impossible to attain in practice:
You will see by this sketch that there is no such thing as a transcendental party; that there is no pure transcendentalist; that we know of no one but prophets and heralds of such a philosophy; that all who by strong bias of nature have leaned to the spiritual side in doctrine, have stopped short of their goal. We have had many harbingers and forerunners; but of a purely spiritual life, history has afforded no example. I mean, we have yet no man who has leaned entirely on his character, and eaten angels' food; who, trusting to his sentiments, found life made of miracles; who, working for universal aims, found himself fed, he knew not how; clothed, sheltered, and weaponed, he knew not how, and yet it was done by his own hands. ...Shall we say, then, that transcendentalism is the Saturnalia or excess of Faith; the presentiment of a faith proper to man in his integrity, excessive only when his imperfect obedience hinders the satisfaction of his wish.

Influence on other movements

Transcendentalism is, in many aspects, the first notable American intellectual movement. It has inspired succeeding generations of American intellectuals, as well as some literary movements.[20]
Transcendentalism influenced the growing movement of "Mental Sciences" of the mid-19th century, which would later become known as the New Thought movement. New Thought considers Emerson its intellectual father.[21] Emma Curtis Hopkins ("the teacher of teachers"), Ernest Holmes, founder of Religious Science, the Fillmores, founders of Unity, and Malinda Cramer and Nona L. Brooks, the founders of Divine Science, were all greatly influenced by Transcendentalism.[22]

Transcendentalism also influenced Hinduism. Ram Mohan Roy (1772–1833), the founder of the Brahmo Samaj, rejected Hindu mythology, but also the Christian trinity.[23] He found that Unitarianism came closest to true Christianity,[23] and had a strong sympathy for the Unitarians,[24] who were closely connected to the Transcendentalists.[15] Ram Mohan Roy founded a missionary committee in Calcutta, and in 1828 asked for support for missionary activities from the American Unitarians.[25] By 1829, Roy had abandoned the Unitarian Committee,[26] but after Roy's death, the Brahmo Samaj kept close ties to the Unitarian Church,[27] which strove towards a rational faith, social reform, and the joining of these two in a renewed religion.[24] Its theology was called "neo-Vedanta" by Christian commentators,[28][29] and has been highly influential in the modern popular understanding of Hinduism,[30] but also of modern western spirituality, which re-imported the Unitarian influences in the disguise of the seemingly age-old Neo-Vedanta.[30][31][32]

Major figures


Major figures in the transcendentalist movement were Ralph Waldo Emerson, Henry David Thoreau, Margaret Fuller, and Amos Bronson Alcott. Other prominent transcendentalists included Louisa May Alcott, Charles Timothy Brooks, Orestes Brownson, William Ellery Channing, William Henry Channing, James Freeman Clarke, Christopher Pearse Cranch, John Sullivan Dwight, Convers Francis, William Henry Furness, Frederic Henry Hedge, Sylvester Judd, Theodore Parker, Elizabeth Palmer Peabody, George Ripley, Thomas Treadwell Stone, Jones Very, and Walt Whitman.[33]

Criticism

Early in the movement's history, the term "Transcendentalists" was used as a pejorative by critics, who suggested that the movement's position was beyond sanity and reason.[34]

Nathaniel Hawthorne wrote a novel, The Blithedale Romance (1852), satirizing the movement, and based it on his experiences at Brook Farm, a short-lived utopian community founded on transcendental principles.[35]

Edgar Allan Poe wrote a story, "Never Bet the Devil Your Head" (1841), in which he embedded elements of deep dislike for transcendentalism, calling its followers "Frogpondians" after the pond on Boston Common.[36] The narrator ridiculed their writings by calling them "metaphor-run", lapsing into "mysticism for mysticism's sake",[37] and called the movement a "disease". The story specifically mentions the movement and its flagship journal The Dial, though Poe denied that he had any specific targets.[38] In Poe's essay "The Philosophy of Composition" (1846), he criticized "the excess of the suggested meaning... which turns into prose (and that of the very flattest kind) the so-called poetry of the so-called transcendentalists."[39]

Computer-aided software engineering

From Wikipedia, the free encyclopedia ...