
Wednesday, March 25, 2015

Cognitive science


From Wikipedia, the free encyclopedia


Figure illustrating the fields that contributed to the birth of cognitive science, including linguistics, neuroscience, artificial intelligence, philosophy, anthropology, and psychology.[1]

Cognitive science is the interdisciplinary scientific study of the mind and its processes.[2] It examines what cognition is, what it does and how it works. It includes research on intelligence and behaviour, especially focusing on how information is represented, processed, and transformed (in faculties such as perception, language, memory, reasoning, and emotion) within nervous systems (humans or other animals) and machines (e.g. computers). Cognitive science consists of multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology.[3] It spans many levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning; from neural circuitry to modular brain organization. The fundamental concept of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."[3]
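As a rough, hypothetical illustration of "representational structures" and "computational procedures" (not taken from any particular cognitive model), a structured set of facts can serve as the representation and a small function as the procedure that operates on it:

    # Hypothetical sketch: a "representation" is a set of
    # (subject, relation, object) triples, and a "computational procedure"
    # is a function that operates on that structure.

    facts = {
        ("canary", "is_a", "bird"),
        ("bird", "can", "fly"),
        ("bird", "has", "feathers"),
    }

    def can(entity, ability, kb):
        """Infer an ability by following is_a links -- a toy procedure
        operating on the representational structure above."""
        if (entity, "can", ability) in kb:
            return True
        parents = [o for (s, r, o) in kb if s == entity and r == "is_a"]
        return any(can(p, ability, kb) for p in parents)

    print(can("canary", "fly", facts))   # True: inferred via canary -> bird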

Principles

Levels of analysis

A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation: a person could be presented with a phone number and asked to recall it after some delay, and the accuracy of the response could then be measured. Another approach would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real time were available, and it were known when each neuron was firing, it would still be impossible to know how a particular pattern of neural firing translates into the observed behavior. Thus an understanding of how these two levels relate to each other is needed. As The Embodied Mind: Cognitive Science and Human Experience puts it, "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience."[4] This can be provided by a functional-level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr[5] gave a famous description of three levels of analysis:
  1. the computational theory, specifying the goals of the computation;
  2. representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and
  3. the hardware implementation, how algorithm and representation may be physically realized.
(See also the entry on functionalism.)
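As a loose illustration of Marr's first two levels (a sketch under simplifying assumptions, not a model from the literature), the phone-number example from above can be stated first as a goal and then as one possible algorithm; the third level, hardware implementation, corresponds to actual neural machinery and is not shown:

    # Hypothetical sketch of Marr's first two levels, using the
    # phone-number example from the text.
    #
    # Level 1 -- computational theory: the goal is simply
    #   recall(store(number)) == number
    #
    # Level 2 -- representation and algorithm: one possible algorithm is a
    # limited-capacity buffer that keeps only the most recent items.
    from collections import deque

    def store(number, capacity=7):
        buffer = deque(maxlen=capacity)       # digits beyond capacity are lost
        for digit in number:
            buffer.append(digit)
        return buffer

    def recall(buffer):
        return "".join(buffer)

    print(recall(store("5551234")))      # "5551234" -- within capacity
    print(recall(store("15305551234")))  # only the last 7 digits survive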

Interdisciplinary nature

Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology, sociology, and biology. Cognitive science tends to view the world outside the mind much as other sciences do: as having an objective, observer-independent existence. The field is usually seen as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human behavior. Some doubt whether there is a unified cognitive science and prefer to speak of the cognitive sciences in the plural.[6]
Many, but not all, who consider themselves cognitive scientists have a functionalist view of the mind—the view that mental states are classified functionally, such that any system that performs the proper function for some mental state is considered to be in that mental state. According to some versions of functionalism, even non-human systems, such as other animal species, alien life forms, or advanced computers can, in principle, have mental states.

Cognitive science: the term

The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth conditional semantics.

The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.[citation needed]

Scope

Cognitive science is a large field, and covers a wide array of topics on cognition. However, it is not equally concerned with every topic that might bear on the nature and operation of the mind or intelligence. Social and cultural factors, emotion, consciousness, animal cognition, and comparative and evolutionary approaches are frequently de-emphasized or excluded outright, often on the basis of key philosophical conflicts. Another important mind-related subject that the cognitive sciences tend to avoid is the existence of qualia, with discussions of this issue sometimes limited to mentioning qualia as a philosophically open matter. Some within the cognitive science community, however, consider these to be vital topics and advocate the importance of investigating them.[7]

Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list, but is meant to cover the wide range of intelligent behaviors. See List of cognitive science topics for a list of various aspects of the field.

Artificial intelligence

"... One major contribution of AI and cognitive science to psychology has been the information processing model of human thinking in which the metaphor of brain-as-computer is taken quite literally. ." AAAI Web pages.
Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured.[8] (See the section on computational modeling in the Research Methods section.)

There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemas, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic computations. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.
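A toy contrast between the two styles, purely illustrative and not drawn from the article, is sketched below: the same trivial decision is expressed once as an explicit symbolic rule and once as a single weighted unit whose behavior is carried only by numbers.

    # Hypothetical contrast between the two styles on the same toy task
    # (deciding whether both of two features are present).

    # Symbolic approach: an explicit, human-readable rule.
    def symbolic_and(a, b):
        return a and b

    # Connectionist approach: one "feeble" unit whose behavior is carried
    # entirely by numeric weights and a threshold, not by explicit rules.
    def connectionist_and(a, b, w1=0.6, w2=0.6, threshold=1.0):
        activation = w1 * a + w2 * b
        return activation >= threshold

    for a in (0, 1):
        for b in (0, 1):
            assert symbolic_and(a, b) == connectionist_and(a, b)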

Attention

Attention is the selection of important information. The human mind is bombarded with millions of stimuli and must have a way of deciding which of them to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are presented with two different messages, one in each ear, and told to focus on only one of them. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.

Knowledge and processing of language


A well-known example of a phrase structure tree. This is one way of representing human language that shows how different components are organized hierarchically.

The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in theoretical linguistics is discovering what properties language must have in the abstract in order to be learned in this fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned? (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language? and (3) How are humans able to understand novel sentences?

The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonology and phonetics, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction.
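As an informal illustration of the hierarchical organization shown in a phrase structure tree, the sketch below (hypothetical, with an invented toy sentence) represents a parse as nested tuples and reads the words back off the leaves:

    # Hypothetical sketch: a phrase structure tree represented as nested
    # tuples (category, children...), showing hierarchical organization.

    sentence = (
        "S",
        ("NP", ("Det", "the"), ("N", "dog")),
        ("VP", ("V", "chased"), ("NP", ("Det", "the"), ("N", "cat"))),
    )

    def leaves(tree):
        """Read the words back off the tree in left-to-right order."""
        if isinstance(tree, str):
            return [tree]
        category, *children = tree
        words = []
        for child in children:
            words.extend(leaves(child))
        return words

    print(" ".join(leaves(sentence)))   # "the dog chased the cat"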

The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of.[9] Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.

Learning and development

Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place.
A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker)[10] have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.
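The architecture-versus-experience point can be illustrated with a deliberately simple, hypothetical learning sketch: the architecture (one unit and its learning rule) is specified in advance, while the particular mapping it ends up representing comes only from the training examples.

    # Hypothetical illustration of the "architecture vs. experience" idea:
    # the architecture (one unit, two inputs, a learning rule) is fixed in
    # advance, but the specific "facts" (the weights) come only from data.

    def train(examples, epochs=20, lr=0.1):
        w = [0.0, 0.0]          # the architecture is fixed; the weights are not
        b = 0.0
        for _ in range(epochs):
            for (x1, x2), target in examples:
                out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
                err = target - out
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # "Experience": examples of the logical OR relation.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b = train(data)
    print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data])  # [0, 1, 1, 1]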

Memory

Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and a short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years); we do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).
Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Who was the first president of the U.S.A.?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory.

Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and on the interrelationship between cognition and memory. For example, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")?
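A toy, purely illustrative sketch of the recall/recognition distinction: recall must produce the stored item from a cue alone, while recognition only has to judge whether a presented item is familiar.

    # Hypothetical toy sketch of the recall/recognition distinction.

    memory = {
        "first U.S. president": "George Washington",
        "breakfast, four days ago": "oatmeal",
    }

    def recall(cue):
        """Fill-in-the-blank: produce the stored item from the cue."""
        return memory.get(cue)               # may fail (returns None)

    def recognize(candidate):
        """Seeing the item itself: just check whether it is familiar."""
        return candidate in memory.values()

    print(recall("first U.S. president"))    # "George Washington"
    print(recognize("George Washington"))    # True, even when recall fails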

Perception and action


The Necker cube, an example of an optical illusion

An optical illusion. The square A is exactly the same shade of gray as square B. See checker shadow illusion.

Perception is the ability to take in information via the senses and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects? (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is to look at how people process optical illusions. The Necker cube shown above is an example of a bistable percept; that is, the cube can be interpreted as being oriented in two different directions.

The study of haptic (tactile), olfactory, and gustatory stimuli also falls within the domain of perception.

Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.

Research methods

Cognitive science draws on many different methodologies. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science, and systems theory.

Behavioral experiments

In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski and Strohmetz (2009) review a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice.[11] Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant).
  • Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial rather than parallel processing (see the sketch after this list).
  • Psychophysical responses. Psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include:
    • sameness judgments for colors, tones, textures, etc.
    • threshold differences for colors, tones, textures, etc.
  • Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed.
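A minimal, hypothetical sketch of the reaction-time logic from the first bullet above: if mean reaction time grows roughly linearly with the number of display items, the slope (in milliseconds per item) is taken to suggest serial search, while a near-zero slope suggests parallel search. The data values are invented for illustration.

    # Hypothetical sketch: fit a slope to reaction time vs. set size.
    set_sizes = [2, 4, 8, 16]
    mean_rts_ms = [520, 610, 790, 1150]      # invented example data

    def slope(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
               sum((x - mx) ** 2 for x in xs)

    ms_per_item = slope(set_sizes, mean_rts_ms)
    print(f"{ms_per_item:.1f} ms per additional item")        # 45.0
    print("looks serial" if ms_per_item > 10 else "looks parallel")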

Brain imaging

Image of the human head with the brain. The arrow indicates the position of the hypothalamus.

Brain imaging involves analyzing activity within the brain while a subject performs various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.
  • Single photon emission computed tomography and Positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution.
  • Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution (see the sketch after this list).
  • Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution.
  • Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflected by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflect light to different degrees, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains.
  • Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields.
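As a hypothetical illustration of how EEG's high temporal resolution is commonly exploited, the sketch below averages many simulated, stimulus-locked trials so that random background activity cancels out and an event-related potential (ERP) emerges. The data are simulated, not real recordings.

    # Hypothetical sketch: averaging simulated, stimulus-locked EEG trials
    # to recover an event-related potential (ERP).
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_samples = 200, 300          # 300 samples ~ 300 ms at 1000 Hz
    t = np.arange(n_samples)

    # A small, fixed "brain response" buried in much larger random noise.
    true_erp = 5.0 * np.exp(-((t - 100) ** 2) / (2 * 15 ** 2))
    trials = true_erp + rng.normal(scale=10.0, size=(n_trials, n_samples))

    erp_estimate = trials.mean(axis=0)      # noise shrinks roughly as 1/sqrt(N)
    print("peak latency (ms):", int(erp_estimate.argmax()))  # close to 100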

Computational modeling


A neural network with two layers.

Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us to understand the functional organization of a particular cognitive phenomenon. There are two basic approaches to cognitive modeling. The first, symbolic modeling, focuses on abstract mental functions of an intelligent mind and operates on symbols; the second, subsymbolic modeling, follows the neural and associative properties of the human brain.
  • Symbolic modeling evolved from the knowledge-based systems paradigm in computer science, as well as from a philosophical perspective often called "Good Old-Fashioned Artificial Intelligence" (GOFAI). Such models were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s the approach has been generalized in systemics for the investigation of functional, human-like models of intelligence, such as personoids, and developed in parallel as the SOAR environment. More recently, especially in the context of cognitive decision making, symbolic cognitive modeling has been extended to socio-cognitive approaches that interrelate social and organizational cognition with a subsymbolic, non-conscious layer.
  • Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and that the power of the system comes primarily from the existence and manner of connections between those simple nodes. Neural nets are textbook implementations of this approach (a minimal sketch follows this list). Some critics of this approach feel that while these models approach biological reality as a representation of how the system works, they lack explanatory power, because complicated systems of connections, even with simple rules, are extremely complex and often less interpretable than the system they model.
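A minimal, hypothetical sketch of a two-layer (one hidden layer) network of the kind shown in the figure: simple nodes whose collective power lies in the weighted connections between them, trained here on XOR, a mapping that a single unit cannot represent.

    # Minimal hypothetical sketch (not from the article) of a two-layer
    # network trained by plain gradient descent on XOR.
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input  -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):                    # full-batch gradient descent
        h = sigmoid(X @ W1 + b1)             # hidden-layer activations
        out = sigmoid(h @ W2 + b2)           # network output
        d_out = (out - y) * out * (1 - out)  # error signal at the output
        d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated to hidden layer
        W2 -= 0.5 * (h.T @ d_out)
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))          # typically approaches [0, 1, 1, 0]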
Other approaches gaining in popularity include the use of dynamical systems theory and also techniques putting symbolic models and connectionist models into correspondence (Neural-symbolic integration). Bayesian models, often drawn from machine learning, are also gaining popularity.

All of the above approaches tend to be generalized into integrated computational models of a synthetic/abstract intelligence, so that they can be applied to explaining and improving individual and social/organizational decision making and reasoning.

Neurobiological methods

Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system.

Key findings

Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics, and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology - a modern revival of rationalism - as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits), ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affects cognition, and it has helped to uncover the root causes and results of specific dysfunctions, such as dyslexia, anopia, and hemispatial neglect.

History

Cognitive science has a pre-history traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima); and includes writers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke. However, although these early writers contributed greatly to the philosophical discovery of mind and this would ultimately lead to the development of psychology, they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.

The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks.

Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Alan Turing and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation.

The first instance of cognitive science experiments being carried out at an academic institution took place at the MIT Sloan School of Management, where J.C.R. Licklider, working within the social psychology department, conducted experiments using computer memory as a model for human cognition.[12]

In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated psychology. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.

The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of Artificial Intelligence research.[13] In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded.[14] In 1982, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science.[15]

In the 1970s and early 1980s, much cognitive science research focused on the possibility of artificial intelligence. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI".

Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 1980s and 1990s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation.[16]

Notable researchers

Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, familiar names include Daniel Dennett, who writes from a computational systems perspective; John Searle, known for his controversial Chinese room argument; Jerry Fodor, who advocates functionalism; David Chalmers, who advocates dualism and is also known for articulating the hard problem of consciousness; and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought. In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, Allen Newell, and Kevin Warwick are prominent. Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, John O'Keefe, and Steven Pinker. Anthropologists such as Dan Sperber, Edwin Hutchins, Scott Atran, Pascal Boyer, and Joseph Henrich, together with cognitive psychologists such as Michael Posner, have been involved in collaborative projects with social psychologists, political scientists, and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association.

Hard problem of consciousness


From Wikipedia, the free encyclopedia

The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes.[1] David Chalmers, who introduced the term "hard problem" of consciousness,[2] contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".[3]

The existence of a "hard problem" is controversial and has been disputed by some philosophers.[4][5] An answer to this question could lie in understanding the roles that physical processes play in creating consciousness and the extent to which these processes create our subjective qualities of experience.[3]

Several questions about consciousness must be resolved in order to acquire a full understanding of it. These questions include, but are not limited to, whether being conscious can be wholly described in physical terms, such as the aggregation of neural processes in the brain. If consciousness cannot be explained exclusively by physical events, it must transcend the capabilities of physical systems and require an explanation by nonphysical means. For philosophers who assert that consciousness is nonphysical in nature, there remains a question about what, outside of physical theory, is required to explain consciousness.

Formulation of the problem

Chalmers' formulation

Chalmers formulated the problem in Facing Up to the Problem of Consciousness.[3]

Easy problems

Chalmers contrasts the hard problem with a number of (relatively) easy problems that consciousness presents. He emphasizes that what the easy problems have in common is that they all concern some ability, or the performance of some function or behavior. Examples include:
  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

Other formulations

Various formulations of the "hard problem" include:
  • "How is it that some organisms are subjects of experience?"
  • "Why does awareness of sensory information exist at all?"
  • "Why do qualia exist?"
  • "Why is there a subjective component to experience?"
  • "Why aren't we philosophical zombies?"
James Trefil notes that "it is the only major question in the sciences that we don't even know how to ask."[6]

Historical predecessors

The hard problem has scholarly antecedents considerably earlier than Chalmers.

Gottfried Leibniz wrote, as an example also known as Leibniz's gap:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.[7]
Isaac Newton wrote in a letter to Henry Oldenburg:
to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.[8]
T.H. Huxley remarked:
how it is that any thing so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp.[9]

Responses

Scientific attempts

There have been scientific attempts to explain subjective aspects of consciousness, a question related to the binding problem in neuroscience. Many eminent theorists, including Francis Crick and Roger Penrose, have worked in this field. Nevertheless, even as sophisticated accounts are given, it is unclear whether such theories address the hard problem. Eliminative materialist philosopher Patricia Smith Churchland has famously remarked of Penrose's theories that "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."[10]

Consciousness is fundamental or elusive

Some philosophers, including David Chalmers and Alfred North Whitehead, argue that conscious experience is a fundamental constituent of the universe, a form of panpsychism sometimes referred to as panexperientialism.
Chalmers argues that a "rich inner life" is not logically reducible to the functional properties of physical processes. He states that consciousness must be described using nonphysical means. This description involves a fundamental ingredient capable of clarifying phenomena that have not been explained using physical means. Use of this fundamental property, Chalmers argues, is necessary to explain certain functions of the world, much like other fundamental features, such as mass and time, and to explain significant principles in nature.

Thomas Nagel has posited that experiences are essentially subjective (accessible only to the individual undergoing them), while physical states are essentially objective (accessible to multiple individuals). So at this stage, we have no idea what it could even mean to claim that an essentially subjective state just is an essentially non-subjective state. In other words, we have no idea of what reductivism really amounts to.[11]

New mysterianism, such as that of Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness.[12]

Deflationary accounts

Some philosophers, such as Daniel Dennett,[4] Stanislas Dehaene,[5] and Peter Hacker,[13] oppose the idea that there is a hard problem. These theorists argue that once we really come to understand what consciousness is, we will realize that the hard problem is unreal. For instance, Dennett asserts that the so-called hard problem will be solved in the process of answering the "easy" ones (which, as he has clarified, he does not consider "easy" at all).[4] In contrast with Chalmers, he argues that consciousness is not a fundamental feature of the universe and instead will eventually be fully explained by natural phenomena. Instead of involving the nonphysical, he says, consciousness merely plays tricks on people so that it appears nonphysical—in other words, it simply seems like it requires nonphysical features to account for its powers. In this way, Dennett compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[14]

To show how people might commonly be fooled into overstating the powers of consciousness, Dennett describes a normal phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images.[15] He uses this concept to argue that, because we overestimate the brain's visual processing, our conception of consciousness is likely not as pervasive as we make it out to be. He claims that this error of making consciousness more mysterious than it is could be a misstep in any development toward an effective explanatory theory. Critics such as Galen Strawson reply that, in the case of consciousness, even a mistaken experience retains the essential face of experience that needs to be explained, contra Dennett.

To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness.[4] He states that consciousness itself is driven simply by these functions, and to strip them away would wipe out any ability to identify thoughts, feelings, and consciousness altogether. So, unlike Chalmers and other dualists, Dennett says that the easy problems and the hard problem cannot be separated from each other. To him, the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.[14]

Dehaene's argument has similarities with those of Dennett. He says Chalmers' 'easy problems of consciousness' are actually the hard problems and the 'hard problems' are based only upon intuitions that, according to Dehaene, are continually shifting as understanding evolves. "Once our intuitions are educated ...Chalmers' hard problem will evaporate" and "qualia...will be viewed as a peculiar idea of the prescientific era, much like vitalism...[Just as science dispatched vitalism] the science of consciousness will eat away at the hard problem of consciousness until it vanishes."[5]

Like Dennett, Peter Hacker argues that the hard problem is fundamentally incoherent and that "consciousness studies," as it exists today, is "literally a total waste of time:"[13]
“The whole endeavour of the consciousness studies community is absurd – they are in pursuit of a chimera. They misunderstand the nature of consciousness. The conception of consciousness which they have is incoherent. The questions they are asking don’t make sense. They have to go back to the drawing board and start all over again.”
Critics of Dennett's approach, such as David Chalmers and Thomas Nagel, argue that Dennett's argument misses the point of the inquiry by merely re-defining consciousness as an external property and ignoring the subjective aspect completely. This has led detractors to refer to Dennett's book Consciousness Explained as Consciousness Ignored or Consciousness Explained Away.[4] Dennett discussed this at the end of his book with a section entitled Consciousness Explained or Explained Away?[15]

Glenn Carruthers and Elizabeth Schier argue that the main arguments for the existence of a hard problem -- philosophical zombies, Mary's room, and Nagel's bats -- are only persuasive if one already assumes that "consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem." Hence, the arguments beg the question. The authors suggest that "instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments."[16] Contrary to this line of argument, Chalmers says: "Some may be led to deny the possibility [of zombies] in order to make some theory come out right, but the justification of such theories should ride on the question of possibility, rather than the other way round".[17]:96

A notable deflationary account is the family of higher-order thought theories of consciousness.[18][19] Peter Carruthers discusses "recognitional concepts of experience", that is, "a capacity to recognize [a] type of experience when it occurs in one's own mental life", and suggests such a capacity does not depend upon qualia.[20] The most common argument against deflationary accounts and eliminative materialism is the argument from qualia: that conscious experiences are irreducible to physical states, or that current popular definitions of "physical" are incomplete. The deflationary reply is that one and the same reality can appear in different ways, and that the numerical difference of these ways is consistent with a unitary mode of existence of that reality. Critics of the deflationary approach object that qualia are a case where a single reality cannot have multiple appearances. As John Searle points out: "where consciousness is concerned, the existence of the appearance is the reality."[21]

Massimo Pigliucci distances himself from eliminativism, but he insists that the hard problem is still misguided, resulting from a "category mistake":[22]
Of course an explanation isn't the same as an experience, but that’s because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.

Sentience


From Wikipedia, the free encyclopedia

Sentience is the ability to feel, perceive, or experience subjectively.[1] Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern Western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia"). In Eastern philosophy, sentience is a metaphysical quality of all things that requires respect and care. The concept is central to the philosophy of animal rights, because sentience is necessary for the ability to suffer, and thus is held to confer certain rights.

Philosophy and sentience

In the philosophy of consciousness, sentience can refer to the ability of any entity to have subjective perceptual experiences, or as some philosophers refer to them, "qualia".[2] This is distinct from other aspects of the mind and consciousness, such as creativity, intelligence, sapience, self-awareness, and intentionality (the ability to have thoughts "about" something). Sentience is a minimalistic way of defining consciousness, which is otherwise commonly used to collectively describe sentience plus other characteristics of the mind.

Some philosophers, notably Colin McGinn, believe that sentience will never be understood, a position known as "new mysterianism". They do not deny that most other aspects of consciousness are subject to scientific investigation, but they argue that subjective experiences will never be explained; i.e., sentience is the only aspect of consciousness that cannot be explained. Other philosophers (such as Daniel Dennett, who also denies that animals have consciousness) disagree, arguing that all aspects of consciousness will eventually be explained by science.[3]

Ideasthesia

According to the theory of ideasthesia, a sentient system has to have the capability to categorize and to create concepts. Empirical evidence suggests that the experience of stimuli is closely related to the process of extracting their meaning: how we understand the stimuli determines how we will experience them.

Indian religions

Eastern religions including Hinduism, Buddhism, Sikhism, and Jainism recognize non-humans as sentient beings. In Jainism and Hinduism, this is closely related to the concept of ahimsa, nonviolence toward other beings. In Jainism, all matter is endowed with sentience, and there are five degrees of sentience.[citation needed] Water, for example, is a sentient being of the first order, as it is considered to possess only one sense, that of touch. Man is considered a sentient being of the fifth order. According to Buddhism, sentient beings made of pure consciousness are possible. In Mahayana Buddhism, which includes Zen and Tibetan Buddhism, the concept is related to the Bodhisattva, an enlightened being devoted to the liberation of others. The first vow of a Bodhisattva states:
"Sentient beings are numberless; I vow to free them."
Sentience in Buddhism is the state of having senses (sat + ta in Pali, or sat + tva in Sanskrit). In Buddhism, the senses are six in number, the sixth being the subjective experience of the mind. Sentience is simply awareness prior to the arising of Skandha. Thus, an animal qualifies as a sentient being.

Animal welfare, rights, and sentience

In the philosophies of animal welfare and rights, sentience implies the ability to experience pleasure and pain. Additionally, it has been argued, as in the documentary Earthlings:
“Granted, these animals do not have all the desires we humans have; granted, they do not comprehend everything we humans comprehend; nevertheless, we and they do have some of the same desires and do comprehend some of the same things. The desires for food and water, shelter and companionship, freedom of movement and avoidance of pain".[4]
Animal-welfare advocates typically argue that any sentient being is entitled, at a minimum, to protection from unnecessary suffering, though animal-rights advocates may differ on what rights (e.g., the right to life) may be entailed by simple sentience. Sentiocentrism describes the theory that sentient individuals are the center of moral concern.

The 18th-century philosopher Jeremy Bentham compiled enlightenment beliefs in Introduction to the Principles of Morals and Legislation, and he included his own reasoning in a comparison between slavery and sadism toward animals:
The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor [see Louis XIV's Code Noir]... What else is it that should trace the insuperable line? Is it the faculty of reason, or, perhaps, the faculty of discourse? But a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? The question is not Can they reason? nor, Can they talk? but, Can they suffer?[5]
In the 20th century, Princeton University professor Peter Singer argued that Bentham's conclusion is often dismissed by an appeal to some distinction that condemns human suffering but allows non-human suffering; such appeals are typically logical fallacies (unless the distinction is factual, in which case the appeal rests on just one logical fallacy, petitio principii). Because many of the suggested distinguishing features of humanity—extreme intelligence, highly complex language, and so on—are not present in marginal cases such as young or mentally disabled humans, it appears that the only remaining distinction is a prejudice based on species alone, which animal-rights supporters call speciesism—that is, differentiating humans from other animals purely on the grounds that they are human. His opponents accuse him of the same petitio principii.

Gary Francione also bases his abolitionist theory of animal rights, which differs significantly from Singer's, on sentience. He asserts that, "All sentient beings, humans or nonhuman, have one right: the basic right not to be treated as the property of others."[6]

Andrew Linzey, founder of the Oxford Centre for Animal Ethics in England, is known as a foremost international advocate for recognizing animals as sentient beings in biblically based faith traditions. The Interfaith Association of Animal Chaplains encourages animal ministry groups to adopt a policy of recognizing and valuing sentient beings.

In 1997 the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognizes that animals are "sentient beings", and requires the EU and its member states to "pay full regard to the welfare requirements of animals".

The laws of several states include certain invertebrates such as cephalopods (octopuses, squids) and decapod crustaceans (lobsters, crabs) in the scope of animal protection laws, implying that these animals are also judged capable of experiencing pain and suffering.[7]

David Pearce is a British philosopher of the negative utilitarian school of ethics. He is best known for his advocacy of the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient beings.[citation needed]

There are also some who reject the argument entirely, arguing that although suffering animals feel anguish, a suffering plant also struggles to stay alive (albeit in a less visible way). In fact, no living organism 'wants' to die for another organism's sustenance. In an article written for the New York Times, Carol Kaesuk Yoon argues that:
When a plant is wounded, its body immediately kicks into protection mode. It releases a bouquet of volatile chemicals, which in some cases have been shown to induce neighboring plants to pre-emptively step up their own chemical defenses and in other cases to lure in predators of the beasts that may be causing the damage to the plants. Inside the plant, repair systems are engaged and defenses are mounted, the molecular details of which scientists are still working out, but which involve signaling molecules coursing through the body to rally the cellular troops, even the enlisting of the genome itself, which begins churning out defense-related proteins ... If you think about it, though, why would we expect any organism to lie down and die for our dinner? Organisms have evolved to do everything in their power to avoid being extinguished. How long would any lineage be likely to last if its members effectively didn’t care if you killed them? [8]

Artificial intelligence

Although the term "sentience" is usually avoided by major artificial intelligence textbooks and researchers,[9] the term is sometimes used in popular accounts of AI to describe "human level or higher intelligence" (or artificial general intelligence). Many popular accounts of AI confuse sentience with sapience or simply conflate the two concepts. Such use of the term is common in science fiction.

Science fiction

In science fiction, an alien, android, robot, hologram, or computer described as "sentient" is usually treated as a fully human character, with similar rights, qualities, and capabilities as any other character. Foremost among these properties is human level intelligence (i.e. "sapience"), but sentient characters also typically display desire, will, consciousness, ethics, personality, insight, humor, ambition and many other human qualities. Sentience is being used in this context to describe an essential human property that brings all these other qualities with it. The words "sapience", "self-awareness", and "consciousness" are used in similar ways in science fiction.

This supports usage that is incorrect outside science fiction. For example, in one episode of Star Trek: The Next Generation a character describes his cat as "not sentient", whereas the term was originally used (by philosopher Jeremy Bentham and others) to emphasize the sentience of animals (certainly including cats).

Science fiction has explored several forms of consciousness besides that of the individual human mind, and how such forms might perceive and function. These include Group Sentience, where a single mind is composed of multiple non-sentient members (sometimes capable of reintegration, where members can be gained or lost, resulting in gradually shifting mentalities); Hive Sentience, the extreme form found in insect hives, with a single sentience extended over huge numbers of non-sentient bodies; and Transient Sentience, where a life form is sentient of its own transience.

Sentience quotient

The sentience quotient concept was introduced by Robert A. Freitas Jr. in the late 1970s.[10] It defines sentience as the relationship between the information processing rate of each individual processing unit (neuron), the weight/size of a single unit, and the total number of processing units (expressed as mass). It was proposed as a measure for the sentience of all living beings and computers from a single neuron up to a hypothetical being at the theoretical computational limit of the entire universe. On a logarithmic scale it runs from −70 up to +50.
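Freitas' measure is commonly stated as SQ = log10(I/M), where I is the information-processing rate in bits per second and M is the mass of the processing system in kilograms. The sketch below uses rough, illustrative orders of magnitude (not measured values) simply to show how the logarithmic scale compresses enormous differences.

    # Hypothetical sketch of the sentience quotient as commonly stated,
    # SQ = log10(I / M).  The figures below are rough, illustrative
    # orders of magnitude, not measured values.
    import math

    def sq(bits_per_second, mass_kg):
        return math.log10(bits_per_second / mass_kg)

    examples = {
        "human brain (rough illustrative figures)": (1e13, 1.4),
        "hypothetical 1 kg processor at 1e20 bits/s": (1e20, 1.0),
    }

    for name, (rate, mass) in examples.items():
        print(f"{name}: SQ ~ {sq(rate, mass):.0f}")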

Operator (computer programming)

From Wikipedia, the free encyclopedia