Saturday, June 19, 2021

Philosophy of artificial intelligence

The philosophy of artificial intelligence is a branch of the philosophy of technology that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, AI is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence as a field. Some scholars argue that the AI community's dismissal of philosophy is detrimental.

The philosophy of artificial intelligence attempts to answer questions such as the following:

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are?

Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include some of the following:

  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
  • Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
  • John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."

Can a machine display general intelligence?

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.

The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956:

  • "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."

Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal, essentially achieves the desired feature of intelligence without a precise design-time description of how it would work. The account of robot tacit knowledge eliminates the need for a precise description altogether.

The first step to answering the question is to clearly define "intelligence".

Intelligence

The "standard interpretation" of the Turing test.

Turing test

Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggested that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". Turing's test extends this polite convention to machines:

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
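The chat-room design described above can be sketched in code. This is a minimal, hypothetical harness (all function names are my own inventions, not from any standard library): two hidden respondents answer the same questions under anonymous labels, and a judge must guess which label belongs to the machine.

```python
import random

def imitation_game(questions, human, machine, judge, seed=0):
    """Minimal sketch of Turing's imitation game. `human` and `machine`
    are callables mapping a question to an answer; `judge` receives the
    anonymized transcript and returns the label it believes is the machine."""
    rng = random.Random(seed)
    labels = ["A", "B"]
    rng.shuffle(labels)                        # hide which label is which
    assignment = {labels[0]: human, labels[1]: machine}
    transcript = {label: [respond(q) for q in questions]
                  for label, respond in assignment.items()}
    guess = judge(transcript)                  # judge returns "A" or "B"
    return guess == labels[1]                  # True if the machine is unmasked
```

On this framing, the machine "passes" whenever the function returns False no more often than chance, i.e. the judge cannot reliably tell the participants apart.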

One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".

Intelligent agent definition

Simple reflex agent

Twenty-first century AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent. 

  • "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent."

Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes or the ability to be insulted. They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence. 
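The thermostat remark can be made concrete. Below is a hedged sketch of a simple reflex agent whose implicit performance measure is keeping temperature near a setpoint; the parameter names are illustrative, not standard. By the agent-based definition above, even this qualifies as rudimentary intelligence.

```python
def thermostat_agent(percept, setpoint=20.0, deadband=0.5):
    """A simple reflex agent: perceive the temperature, act to keep it
    near the setpoint. No memory, no model, no reasoning."""
    temperature = percept                      # perceive the environment
    if temperature < setpoint - deadband:
        return "heat_on"                       # act to raise temperature
    if temperature > setpoint + deadband:
        return "heat_off"                      # act to lower temperature
    return "no_op"                             # within tolerance: do nothing
```

The deadband illustrates why this maximizes the performance measure rather than chasing it: without the tolerance band, the agent would oscillate around the setpoint.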

Arguments that a machine can display general intelligence

The brain can be simulated

An MRI scan of a normal adult human brain

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988, is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain (10^11 neurons) was performed in 2005 and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
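A back-of-the-envelope calculation makes the scale of that 2005 result vivid: using only the figures quoted above (50 days of wall-clock time for 1 second of simulated brain dynamics), the simulation ran millions of times slower than real time.

```python
# Worked arithmetic from the 2005 thalamocortical simulation figures
# quoted above: 50 days of wall-clock time per 1 simulated second.
seconds_per_day = 24 * 60 * 60                 # 86,400
wall_clock_seconds = 50 * seconds_per_day      # 4,320,000 s
simulated_seconds = 1
slowdown = wall_clock_seconds / simulated_seconds
print(f"slowdown factor: {slowdown:,.0f}x")    # roughly 4.3 million times slower
```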

Few disagree that a brain simulation is possible in theory, even critics of AI such as Hubert Dreyfus and John Searle. However, Searle points out that, in principle, anything can be simulated by a computer; stretching the definition to its breaking point thus leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Thus, merely mimicking the functioning of a brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind.

Human thinking is symbol processing

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:

  • "A physical symbol system has the necessary and sufficient means of general intelligent action."

This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":

  • "The mind can be viewed as a device operating on bits of information according to formal rules."

The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high level — symbols that directly correspond with objects in the world, such as <dog> and <tail>. Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed.

Arguments against symbol processing

These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.

Gödelian anti-mechanist arguments

In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) cannot prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism. Philosophers John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument. Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is provably impossible for a Turing machine (and, by an informal extension, any known type of mechanical computer) to do; therefore, the Gödelian concludes that human reasoning is too powerful to be captured in a machine.

However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."

More pragmatically, Russell and Norvig note that Gödel's argument only applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to prove everything in order to be intelligent.

Less formally, Douglas Hofstadter, in his Pulitzer prize winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether they are machines or humans, even Lucas himself. Consider:

  • Lucas can't assert the truth of this statement.

This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.
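The self-reference that Gödel statements and the Epimenides paradox exploit has a direct programming analogue: a quine, a program whose output is exactly its own source code. The two-line sketch below (a standard quine construction, shown here only as an illustration of mechanical self-reference) achieves this with Python's printf-style formatting.

```python
# The two lines below print themselves exactly (this comment excluded):
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Just as the Gödel statement "talks about" the very system that generates it, the string `s` contains a template for the whole program, including the instruction that prints it.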

After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, existing quantum computers are not sufficient, so Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.

Dreyfus: the primacy of implicit skills

Hubert Dreyfus argued that human intelligence and expertise depended primarily on implicit skill rather than explicit symbolic manipulation, and argued that these skills would never be captured in formal rules.

Dreyfus's argument had been anticipated by Turing in his 1950 paper "Computing Machinery and Intelligence", where he had classified this as the "argument from the informality of behavior." Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high level symbol manipulation, towards new models that are intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."

Can a machine have a mind, consciousness, and mental states?

This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI":

  • A physical symbol system can have a mind and mental states.

Searle distinguished this position from what he called "weak AI":

  • A physical symbol system can act intelligently.

Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.

Neither of Searle's two positions are of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."

There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence."

Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".

Consciousness, minds, mental states, meaning

The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience," "self-awareness" or "ghost" - as in the Ghost in the Shell manga and anime series - to describe this essential human property). For others, the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.

For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we know something, or mean something or understand something. "It's not hard to give a commonsense definition of consciousness" observes philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem." A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

Arguments that a computer cannot have a mind and mental states

Searle's Chinese room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.

Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains." He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words, "brains cause minds."

Related arguments: Leibniz' mill, Davis's telephone exchange, Block's Chinese nation and Blockhead

Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym". Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.
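Block's "see this, do that" re-factoring can be sketched in a few lines. The toy program below (the table entries are my own invented examples, not from Block) answers Chinese questions by pure table lookup, which is the point of the Blockhead argument: all apparent conversational competence resides in the table, with no hidden mechanism at all.

```python
# A toy "Blockhead": every rule has the form "see this, do that".
RULES = {
    "你好":     "你好！",        # "hello"          -> "hello!"
    "你会思考吗": "当然会。",      # "can you think?" -> "of course."
}

def blockhead(utterance):
    """Reply by pure table lookup; fall back to a stock phrase."""
    return RULES.get(utterance, "请再说一遍。")   # "please say that again."
```

The intuition the argument trades on is that however large the table grows, nothing about lookup seems to add understanding.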

Responses to the Chinese room

Responses to the Chinese room emphasize several different points.

  • The systems reply and the virtual mind reply: This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor).
  • Speed, power and complexity replies: Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
  • Robot reply: To truly understand, some believe the Chinese Room needs eyes and hands. Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
  • Brain simulator reply: What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
  • Other minds reply and the epiphenomena reply: Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines.
A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness can't be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) can't be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by a suitably designed Turing test.

Is thinking a kind of computation?

The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.

This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

  • Reasoning is nothing but reckoning.

In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):

  • Mental states are just implementations of (the right) computer programs.

This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).

Other related questions

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people". Fear is a source of urgency. Empathy is a necessary component of good human-computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love." Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."

Can a machine be self-aware?

"Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger. Arguably, though, self-awareness often presumes rather more capability: a machine that can ascribe meaning not only to its own state but also to open-ended questions without solid answers, such as the contextual nature of its existence now, how it compares to past states or plans for the future, the limits and value of its work product, and how it perceives its performance to be valued by or compared to others.
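Turing's minimal, debugger-like sense of "being the subject of its own thought" is easy to demonstrate; the stronger sense discussed above is not. The sketch below (class and attribute names are illustrative inventions) shows only the weak sense: an object that inspects and reports on its own state and methods.

```python
import inspect

class SelfReporting:
    """A program that can report on its own internal state, much as a
    debugger does. This is Turing's weak sense of self-awareness only."""
    def __init__(self):
        self.steps_taken = 0

    def act(self):
        self.steps_taken += 1

    def report(self):
        # The object describes its own attributes and its own methods.
        methods = [name for name, _ in inspect.getmembers(self, inspect.ismethod)]
        return {"state": vars(self).copy(), "methods": methods}
```

Nothing here ascribes meaning to the reported state; that gap is exactly the distinction the paragraph above draws.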

Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest. He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways. It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned.

In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings. Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit input data, such as finding the laws of motion from a pendulum's motion.
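Eureqa's actual symbolic-regression algorithm is far more general, but the core idea of recovering a law's form from data can be hinted at with a toy fit. The sketch below (my own simplification, nothing like Eureqa's method) recovers the pendulum relation T = k * L^p from simulated data by linear regression in log-log space, finding the exponent p = 1/2.

```python
import math

def fit_power_law(lengths, periods):
    """Fit T = k * L**p by least squares on (log L, log T)."""
    xs = [math.log(l) for l in lengths]
    ys = [math.log(t) for t in periods]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    p = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - p * mx)
    return k, p

# Simulated pendulum data: T = 2*pi*sqrt(L/g), with g = 9.81 m/s^2
g = 9.81
lengths = [0.25, 0.5, 1.0, 2.0]
periods = [2 * math.pi * math.sqrt(L / g) for L in lengths]
k, p = fit_power_law(lengths, periods)
# Recovers p = 0.5 and k = 2*pi/sqrt(g), the square-root law of the pendulum
```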

Can a machine be benevolent or hostile?

This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.

The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Machine Intelligence Research Institute). The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind; see Artificial intelligence in fiction.

One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity," and suggests that it may prove somewhat, or even extremely, dangerous for humans. This possibility is discussed by a philosophy called Singularitarianism.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might acquire autonomy, and the degree to which such abilities could pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. The study points to programs like the Language Acquisition Device, which can emulate human interaction.

Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.

Can a machine imitate all human characteristics?

Turing said "It is customary... to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. ... I cannot offer any such comfort, for I believe that no such bounds can be set."

Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression." All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have a soul?

Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes:

In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

Views on the role of philosophy

Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated. Physicist David Deutsch argues that without an understanding of philosophy and its concepts, AI development would lack genuine progress.

Mind–body dualism

 
René Descartes's illustration of dualism. Inputs are passed on by the sensory organs to the epiphysis in the brain and from there to the immaterial spirit.

In the philosophy of mind, mind–body dualism denotes either the view that mental phenomena are non-physical, or that the mind and body are distinct and separable. Thus, it encompasses a set of views about the relationship between mind and matter, as well as between subject and object, and is contrasted with other positions, such as physicalism and enactivism, in the mind–body problem.

Aristotle shared Plato's view of multiple souls and further elaborated a hierarchical arrangement corresponding to the distinctive functions of plants, animals, and people: a nutritive soul of growth and metabolism that all three share; a perceptive soul of pain, pleasure, and desire that only people and other animals share; and the faculty of reason that is unique to people. In this view, a soul is the hylomorphic form of a viable organism, wherein each level of the hierarchy formally supervenes upon the substance of the preceding level. For Aristotle, the first two souls, based on the body, perish when the living organism dies, whereas the intellective part of the mind remains immortal and perpetual. For Plato, however, the soul was not dependent on the physical body; he believed in metempsychosis, the migration of the soul to a new physical body. Dualism has been considered a form of reductionism by some philosophers, since it invites ignoring large groups of variables on the basis of their assumed association with the mind or the body, rather than for their actual value in explaining or predicting a studied phenomenon.

Dualism is closely associated with the thought of René Descartes (1641), which holds that the mind is a nonphysical—and therefore, non-spatial—substance. Descartes clearly identified the mind with consciousness and self-awareness and distinguished this from the brain as the seat of intelligence. Hence, he was the first to formulate the mind–body problem in the form in which it exists today. Dualism is contrasted with various kinds of monism. Substance dualism is contrasted with all forms of materialism, but property dualism may be considered a form of emergent materialism or non-reductive physicalism in some sense.

Types

Ontological dualism makes dual commitments about the nature of existence as it relates to mind and matter, and can be divided into three different types:

  1. Substance dualism asserts that mind and matter are fundamentally distinct kinds of foundations.
  2. Property dualism suggests that the ontological distinction lies in the differences between properties of mind and matter (as in emergentism).
  3. Predicate dualism claims the irreducibility of mental predicates to physical predicates.

Substance or Cartesian dualism

Substance dualism, or Cartesian dualism, most famously defended by René Descartes, argues that there are two kinds of foundation: mental and physical. This philosophy states that the mental can exist outside of the body, and the body cannot think. Substance dualism is important historically for having given rise to much thought regarding the famous mind–body problem.

The Copernican Revolution and the scientific discoveries of the 17th century reinforced the belief that the scientific method was the only path to knowledge. Bodies were seen as biological organisms to be studied in their constituent parts (materialism) by means of anatomy, physiology, biochemistry and physics (reductionism). Mind–body dualism remained the biomedical paradigm and model for the following three centuries.

Substance dualism is a philosophical position compatible with most theologies which claim that immortal souls occupy an independent realm of existence distinct from that of the physical world. In contemporary discussions of substance dualism, philosophers propose dualist positions that are significantly less radical than Descartes's: for instance, a position defended by William Hasker called Emergent Dualism seems, to some philosophers, more intuitively attractive than the substance dualism of Descartes in virtue of its being in line with (inter alia) evolutionary biology.

Property dualism

Property dualism asserts that an ontological distinction lies in the differences between properties of mind and matter, and that consciousness is ontologically irreducible to neurobiology and physics. It asserts that when matter is organized in the appropriate way (i.e., in the way that living human bodies are organized), mental properties emerge. Hence, it is a sub-branch of emergent materialism. What views properly fall under the property dualism rubric is itself a matter of dispute. There are different versions of property dualism, some of which claim independent categorisation.

Non-reductive physicalism is a form of property dualism in which it is asserted that all mental states are causally reducible to physical states. One argument for this has been made in the form of anomalous monism, expressed by Donald Davidson, where it is argued that mental events are identical to physical events, but that relations among mental events cannot be described by strict law-governed causal relationships. Another argument has been expressed by John Searle, who advocates a distinctive form of physicalism he calls biological naturalism. His view is that although mental states are ontologically irreducible to physical states, they are causally reducible. He has acknowledged that "to many people" his views and those of property dualists look a lot alike, but he thinks the comparison is misleading.

Epiphenomenalism

Epiphenomenalism is a form of property dualism in which it is asserted that one or more mental states do not have any influence on physical states (being both ontologically and causally irreducible). It asserts that while material causes give rise to sensations, volitions, ideas, etc., such mental phenomena themselves cause nothing further: they are causal dead-ends. This contrasts with interactionism, in which mental causes can produce material effects, and vice versa.

Predicate dualism

Predicate dualism is a view espoused by such non-reductive physicalists as Donald Davidson and Jerry Fodor, who maintain that while there is only one ontological category of substances and properties of substances (usually physical), the predicates that we use to describe mental events cannot be redescribed in terms of (or reduced to) physical predicates of natural languages.

Predicate dualism is most easily defined as the negation of predicate monism. Predicate monism can be characterized as the view subscribed to by eliminative materialists, who maintain that such intentional predicates as believe, desire, think, feel, etc., will eventually be eliminated from both the language of science and from ordinary language because the entities to which they refer do not exist. Predicate dualists believe that so-called "folk psychology," with all of its propositional attitude ascriptions, is an ineliminable part of the enterprise of describing, explaining, and understanding human mental states and behavior.

For example, Davidson subscribes to anomalous monism, according to which there can be no strict psychophysical laws which connect mental and physical events under their descriptions as mental and physical events. However, all mental events also have physical descriptions. It is in terms of the latter that such events can be connected in law-like relations with other physical events. Mental predicates are irreducibly different in character (rational, holistic, and necessary) from physical predicates (contingent, atomic, and causal).

Dualist views of mental causation

Four varieties of dualist causal interaction. The arrows indicate the direction of causations. Mental and physical states are shown in red and blue, respectively.

This part is about causation between properties and states of the thing under study, not its substances or predicates. Here a state is the set of all properties of what's being studied. Thus each state describes only one point in time.

Interactionism

Interactionism is the view that mental states, such as beliefs and desires, causally interact with physical states. The position appeals strongly to common-sense intuitions, notwithstanding the fact that it is very difficult to establish its validity or correctness by way of logical argumentation or empirical proof. It appeals to common sense because we are surrounded by everyday occurrences such as a child's touching a hot stove (physical event), which causes him to feel pain (mental event) and then yell and scream (physical event), which causes his parents to experience a sensation of fear and protectiveness (mental event), and so on.

Non-reductive physicalism

Non-reductive physicalism is the idea that while mental states are physical they are not reducible to physical properties, in that an ontological distinction lies in the differences between the properties of mind and matter. According to non-reductive physicalism all mental states are causally reducible to physical states where mental properties map to physical properties and vice versa. A prominent form of non-reductive physicalism, called anomalous monism, was first proposed by Donald Davidson in his 1970 paper "Mental events", in which he claims that mental events are identical with physical events, and that the mental is anomalous, i.e. under their mental descriptions these mental events are not regulated by strict physical laws.

Epiphenomenalism

Epiphenomenalism states that all mental events are caused by a physical event and have no physical consequences, and that one or more mental states do not have any influence on physical states. So, the mental event of deciding to pick up a rock ("M1") is caused by the firing of specific neurons in the brain ("P1"). When the arm and hand move to pick up the rock ("P2") this is not caused by the preceding mental event M1, nor by M1 and P1 together, but only by P1. The physical causes are in principle reducible to fundamental physics, and therefore mental causes are eliminated using this reductionist explanation. If P1 causes both M1 and P2, there is no overdetermination in the explanation for P2.
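The causal structure of the rock-picking example can be sketched as a tiny directed graph. This is purely illustrative (not from the source); the node names M1, P1, and P2 follow the labels used in the text.

```python
# Epiphenomenalist causal structure of the example above: the physical
# event P1 (neurons firing) causes both the mental event M1 (deciding
# to pick up the rock) and the physical event P2 (the arm moving).
# Mental events are causal dead-ends: no edge leaves M1.
edges = {
    ("P1", "M1"),  # physical cause -> mental effect
    ("P1", "P2"),  # physical cause -> physical effect
}

def causes(graph, node):
    """Return the set of events that a given event directly causes."""
    return {dst for src, dst in graph if src == node}

print(causes(edges, "M1"))  # -> set(): the hallmark of epiphenomenalism
print(causes(edges, "P1"))  # P1 alone explains P2, with no overdetermination
```

Note that because P1 is the sole cause of P2, removing M1 from the picture changes nothing about the physical explanation, which is exactly the epiphenomenalist claim.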

The idea that even if the animal were conscious nothing would be added to the production of behavior, even in animals of the human type, was first voiced by La Mettrie (1745), and then by Cabanis (1802), and was further explicated by Hodgson (1870) and Huxley (1874). Jackson gave a subjective argument for epiphenomenalism, but later rejected it and embraced physicalism.

Parallelism

Psychophysical parallelism is a very unusual view about the interaction between mental and physical events which was most prominently, and perhaps only truly, advocated by Gottfried Wilhelm von Leibniz. Like Malebranche and others before him, Leibniz recognized the weaknesses of Descartes' account of causal interaction taking place in a physical location in the brain. Malebranche decided that such a material basis of interaction between material and immaterial was impossible and therefore formulated his doctrine of occasionalism, stating that the interactions were really caused by the intervention of God on each individual occasion. Leibniz's idea is that God has created a pre-established harmony such that it only seems as if physical and mental events cause, and are caused by, one another. In reality, mental causes only have mental effects and physical causes only have physical effects. Hence, the term parallelism is used to describe this view.

Occasionalism

Occasionalism is a philosophical doctrine about causation which says that created substances cannot be efficient causes of events. Instead, all events are taken to be caused directly by God himself. The theory states that the illusion of efficient causation between mundane events arises out of a constant conjunction that God had instituted, such that every instance where the cause is present will constitute an "occasion" for the effect to occur as an expression of the aforementioned power. This "occasioning" relation, however, falls short of efficient causation. In this view, it is not the case that the first event causes God to cause the second event: rather, God first caused one and then caused the other, but chose to regulate such behaviour in accordance with general laws of nature. Some of its most prominent historical exponents have been Al-Ghazali, Louis de la Forge, Arnold Geulincx, and Nicolas Malebranche.

Kantianism

According to the philosophy of Immanuel Kant, there is a distinction between actions done by desire and those performed by liberty (categorical imperative). Thus, not all physical actions are caused by either matter or freedom. Some actions are purely animal in nature, while others are the result of mental action on matter.

History

Plato and Aristotle

In the dialogue Phaedo, Plato formulated his famous Theory of Forms as distinct and immaterial substances of which the objects and other phenomena that we perceive in the world are nothing more than mere shadows.

In the Phaedo, Plato makes it clear that the Forms are the universalia ante res, i.e. they are ideal universals, by which we are able to understand the world. In his allegory of the cave, Plato likens the achievement of philosophical understanding to emerging into the sun from a dark cave, where only vague shadows of what lies beyond that prison are cast dimly upon the wall. Plato's forms are non-physical and non-mental. They exist nowhere in time or space, but neither do they exist in the mind, nor in the pleroma of matter; rather, matter is said to "participate" in form (μεθεξις, methexis). It remained unclear however, even to Aristotle, exactly what Plato intended by that.

Aristotle argued at length against many aspects of Plato's forms, creating his own doctrine of hylomorphism wherein form and matter coexist. Ultimately however, Aristotle's aim was to perfect a theory of forms, rather than to reject it. Although Aristotle strongly rejected the independent existence Plato attributed to forms, his metaphysics do agree with Plato's a priori considerations quite often. For example, Aristotle argues that changeless, eternal substantial form is necessarily immaterial. Because matter provides a stable substratum for a change in form, matter always has the potential to change. Thus, if given an eternity in which to do so, it will, necessarily, exercise that potential.

Part of Aristotle's psychology, the study of the soul, is his account of the ability of humans to reason and the ability of animals to perceive. In both cases, perfect copies of forms are acquired, either by direct impression of environmental forms, in the case of perception, or else by virtue of contemplation, understanding and recollection. He believed the mind can literally assume any form being contemplated or experienced, and it was unique in its ability to become a blank slate, having no essential form. As thoughts of earth are not heavy, any more than thoughts of fire are causally efficient, they provide an immaterial complement for the formless mind.

From Neoplatonism to scholasticism

The philosophical school of Neoplatonism, most active in Late Antiquity, claimed that the physical and the spiritual are both emanations of the One. Neoplatonism exerted a considerable influence on Christianity, as did the philosophy of Aristotle via scholasticism.

In the scholastic tradition of Saint Thomas Aquinas, a number of whose doctrines have been incorporated into Roman Catholic dogma, the soul is the substantial form of a human being. Aquinas held the Quaestiones disputatae de anima, or 'Disputed questions on the soul', at the Roman studium provinciale of the Dominican Order at Santa Sabina, the forerunner of the Pontifical University of Saint Thomas Aquinas, Angelicum, during the academic year 1265–66. By 1268 Aquinas had written at least the first book of the Sententia Libri De anima, Aquinas' commentary on Aristotle's De anima, the translation of which from the Greek was completed by Aquinas' Dominican associate at Viterbo, William of Moerbeke, in 1267. Like Aristotle, Aquinas held that the human being was a unified composite substance of two substantial principles: form and matter. The soul is the substantial form and so the first actuality of a material organic body with the potentiality for life.

While Aquinas defended the unity of human nature as a composite substance constituted by these two inextricable principles of form and matter, he also argued for the incorruptibility of the intellectual soul, in contrast to the corruptibility of the vegetative and sensitive animation of plants and animals. His argument for the subsistence and incorruptibility of the intellectual soul takes its point of departure from the metaphysical principle that operation follows upon being (agere sequitur esse), i.e., the activity of a thing reveals the mode of being and existence it depends upon. Since the intellectual soul exercises its own per se intellectual operations without employing material faculties, i.e. intellectual operations are immaterial, the intellect itself, and thus the intellectual soul, must likewise be immaterial and so incorruptible. Even though the intellectual soul of man is able to subsist upon the death of the human being, Aquinas does not hold that the human person remains integrated at death. The separated intellectual soul is neither a man nor a human person. The intellectual soul by itself is not a human person (i.e., an individual supposit of a rational nature). Hence, Aquinas held that "the soul of St. Peter, pray for us" would be more appropriate than "St. Peter, pray for us", because all things connected with his person, including memories, ended with his corporeal life.

The Catholic doctrine of the resurrection of the body does not subscribe to that view; it sees body and soul as forming a whole and states that at the second coming, the souls of the departed will be reunited with their bodies as whole persons (substances) and witness the apocalypse. The thorough consistency between dogma and contemporary science was maintained here in part through serious attention to the principle that there can be only one truth. Consistency with science, logic, philosophy, and faith remained a high priority for centuries, and a university doctorate in theology generally included the entire science curriculum as a prerequisite. This doctrine is not universally accepted by Christians today; many believe that one's immortal soul goes directly to Heaven upon death of the body.

Descartes and his disciples

In his Meditations on First Philosophy, René Descartes embarked upon a quest in which he called all his previous beliefs into doubt, in order to find out what he could be certain of. In so doing, he discovered that he could doubt whether he had a body (it could be that he was dreaming of it or that it was an illusion created by an evil demon), but he could not doubt whether he had a mind. This gave Descartes his first inkling that the mind and body were different things. The mind, according to Descartes, was a "thinking thing" (Latin: res cogitans), and an immaterial substance. This "thing" was the essence of himself, that which doubts, believes, hopes, and thinks. The body, an "extended thing" (res extensa), regulates normal bodily functions (such as the heart and liver). According to Descartes, animals only had a body and not a soul (which distinguishes humans from animals). The distinction between mind and body is argued in Meditation VI as follows: I have a clear and distinct idea of myself as a thinking, non-extended thing, and a clear and distinct idea of body as an extended and non-thinking thing; whatever I can conceive clearly and distinctly, God can so create. Hence the mind, a thinking thing, can exist apart from its extended body, and is therefore a substance distinct from it.

The central claim of what is often called Cartesian dualism, in honor of Descartes, is that the immaterial mind and the material body, while being ontologically distinct substances, causally interact. This is an idea that continues to feature prominently in many non-European philosophies. Mental events cause physical events, and vice versa. But this leads to a substantial problem for Cartesian dualism: How can an immaterial mind cause anything in a material body, and vice versa? This has often been called the "problem of interactionism."

Descartes himself struggled to come up with a feasible answer to this problem. In his letter to Elisabeth of Bohemia, Princess Palatine, he suggested that spirits interacted with the body through the pineal gland, a small gland in the centre of the brain, between the two hemispheres. The term Cartesian dualism is also often associated with this more specific notion of causal interaction through the pineal gland. However, this explanation was not satisfactory: how can an immaterial mind interact with the physical pineal gland? Because Descartes's theory was so difficult to defend, some of his disciples, such as Arnold Geulincx and Nicolas Malebranche, proposed a different explanation: that all mind–body interactions required the direct intervention of God. According to these philosophers, the appropriate states of mind and body were only the occasions for such intervention, not real causes. These occasionalists maintained the strong thesis that all causation was directly dependent on God, instead of holding that all causation was natural except for that between mind and body.

Recent formulations

In addition to already discussed theories of dualism (particularly the Christian and Cartesian models) there are new theories in the defense of dualism. Naturalistic dualism comes from Australian philosopher, David Chalmers (born 1966) who argues there is an explanatory gap between objective and subjective experience that cannot be bridged by reductionism because consciousness is, at least, logically autonomous of the physical properties upon which it supervenes. According to Chalmers, a naturalistic account of property dualism requires a new fundamental category of properties described by new laws of supervenience; the challenge being analogous to that of understanding electricity based on the mechanistic and Newtonian models of materialism prior to Maxwell's equations.

A similar defense comes from Australian philosopher Frank Jackson (born 1943), who revived the theory of epiphenomenalism, which argues that mental states play no causal role in producing physical states. Jackson argues that there are two kinds of dualism:

  1. substance dualism, which assumes there is a second, non-corporeal form of reality. In this form, body and soul are two different substances.
  2. property dualism that says that body and soul are different properties of the same body.

He claims that functions of the mind/soul are internal, very private experiences that are not accessible to observation by others, and therefore not accessible by science (at least not yet). We can know everything, for example, about a bat's facility for echolocation, but we will never know how the bat experiences that phenomenon.

Arguments for dualism

Another one of Descartes' illustrations. The fire displaces the skin, which pulls a tiny thread, which opens a pore in the ventricle (F) allowing the "animal spirit" to flow through a hollow tube, which inflates the muscle of the leg, causing the foot to withdraw.

The subjective argument

An important fact is that minds perceive intra-mental states differently from sensory phenomena, and this cognitive difference results in mental and physical phenomena having seemingly disparate properties. The subjective argument holds that these properties are irreconcilable under a physical mind.

Mental events have a certain subjective quality to them, whereas physical ones seem not to. So, for example, one may ask what a burned finger feels like, or what the blueness of the sky looks like, or what nice music sounds like. Philosophers of mind call the subjective aspects of mental events qualia. There is something that it's like to feel pain, to see a familiar shade of blue, and so on. There are qualia involved in these mental events. And the claim is that qualia cannot be reduced to anything physical.

Thomas Nagel first characterized the problem of qualia for physicalistic monism in his article, "What Is It Like to Be a Bat?". Nagel argued that even if we knew everything there was to know from a third-person, scientific perspective about a bat's sonar system, we still wouldn't know what it is like to be a bat. However, others argue that qualia are consequent of the same neurological processes that engender the bat's mind, and will be fully understood as the science develops.

Frank Jackson formulated his well-known knowledge argument based upon similar considerations. In this thought experiment, known as Mary's room, he asks us to consider a neuroscientist, Mary, who was born, and has lived all of her life, in a black and white room with a black and white television and computer monitor where she collects all the scientific data she possibly can on the nature of colours. Jackson asserts that as soon as Mary leaves the room, she will come to have new knowledge which she did not possess before: the knowledge of the experience of colours (i.e., what they are like). Although Mary knows everything there is to know about colours from an objective, third-person perspective, she has never known, according to Jackson, what it was like to see red, orange, or green. If Mary really learns something new, it must be knowledge of something non-physical, since she already knew everything about the physical aspects of colour.

However, Jackson later rejected his argument and embraced physicalism. He notes that Mary obtains knowledge not of color, but of a new intramental state, seeing color. Also, he notes that Mary might say "wow"; as a mental state affecting the physical, this clashes with his former view of epiphenomenalism. David Lewis' response to this argument, now known as the ability argument, is that what Mary really came to know was simply the ability to recognize and identify color sensations to which she had previously not been exposed. Daniel Dennett and others also provide arguments against this notion.

The zombie argument

The zombie argument is based on a thought experiment proposed by David Chalmers. The basic idea is that one can imagine, and, therefore, conceive the existence of, an apparently functioning human being/body without any conscious states being associated with it.

Chalmers' argument is that it seems plausible that such a being could exist because all that is needed is that all and only the things that the physical sciences describe and observe about a human being must be true of the zombie. None of the concepts involved in these sciences make reference to consciousness or other mental phenomena, and any physical entity can be described scientifically via physics whether it is conscious or not. The mere logical possibility of a p-zombie demonstrates that consciousness is a natural phenomenon beyond the current unsatisfactory explanations. Chalmers states that one probably could not build a living p-zombie because living things seem to require a level of consciousness. However (unconscious?) robots built to simulate humans may become the first real p-zombies. Hence Chalmers half-jokingly calls for the need to build a "consciousness meter" to ascertain if any given entity, human or robot, is conscious or not.

Others such as Dennett have argued that the notion of a philosophical zombie is an incoherent, or unlikely, concept. In particular, nothing proves that an entity (e.g., a computer or robot) which would perfectly mimic human beings, and especially perfectly mimic expressions of feelings (like joy, fear, anger, ...), would not indeed experience them, thus having similar states of consciousness to what a real human would have. It is argued that under physicalism, one must either believe that anyone including oneself might be a zombie, or that no one can be a zombie—following from the assertion that one's own conviction about being (or not being) a zombie is a product of the physical world and is therefore no different from anyone else's.

Special sciences argument

Howard Robinson argues that, if predicate dualism is correct, then there are "special sciences" that are irreducible to physics. These allegedly irreducible subjects, which contain irreducible predicates, differ from hard sciences in that they are interest-relative. Here, interest-relative fields depend on the existence of minds that can have interested perspectives. Psychology is one such science; it completely depends on and presupposes the existence of the mind.

Physics is the general analysis of nature, conducted in order to understand how the universe behaves. On the other hand, the study of meteorological weather patterns or human behavior is only of interest to humans themselves. The point is that having a perspective on the world is a psychological state. Therefore, the special sciences presuppose the existence of minds which can have these states. If one is to avoid ontological dualism, then the mind that has a perspective must be part of the physical reality to which it applies its perspective. If this is the case, then in order to perceive the physical world as psychological, the mind must have a perspective on the physical. This, in turn, presupposes the existence of mind.

However, cognitive science and psychology do not require the mind to be irreducible, and operate on the assumption that it has a physical basis. In fact, it is common in science to presuppose a complex system; while fields such as chemistry, biology, or geology could be verbosely expressed in terms of quantum field theory, it is convenient to use levels of abstraction like molecules, cells, or the mantle. It is often difficult to decompose these levels without heavy analysis and computation. Sober has also advanced philosophical arguments against the notion of irreducibility.

Argument from personal identity

This argument concerns the differences between the applicability of counterfactual conditionals to physical objects, on the one hand, and to conscious, personal agents on the other. In the case of any material object, e.g. a printer, we can formulate a series of counterfactuals in the following manner:

  1. This printer could have been made of straw.
  2. This printer could have been made of some other kind of plastics and vacuum-tube transistors.
  3. This printer could have been made of 95% of what it is actually made of and 5% vacuum-tube transistors, etc.

Somewhere along the way from the printer's being made up exactly of the parts and materials which actually constitute it to the printer's being made up of some different matter at, say, 20%, the question of whether this printer is the same printer becomes a matter of arbitrary convention.

Imagine the case of a person, Frederick, who has a counterpart born from the same egg and a slightly genetically modified sperm. Imagine a series of counterfactual cases corresponding to the examples applied to the printer. Somewhere along the way, one is no longer sure about the identity of Frederick. In this latter case, it has been claimed, overlap of constitution cannot be applied to the identity of mind. As Madell puts it:

But while my present body can thus have its partial counterpart in some possible world, my present consciousness cannot. Any present state of consciousness that I can imagine either is or is not mine. There is no question of degree here.

If the counterpart of Frederick, Frederickus, is 70% constituted of the same physical substance as Frederick, does this mean that it is also 70% mentally identical with Frederick? Does it make sense to say that something is mentally 70% Frederick? A possible solution to this dilemma is that of open individualism.

Richard Swinburne, in his book The Existence of God, put forward an argument for mind-body dualism based upon personal identity. He states that the brain is composed of two hemispheres and a cord linking the two and that, as modern science has shown, either of these can be removed without the person losing any memories or mental capacities.

He then poses a thought-experiment for the reader, asking what would happen if each of the two hemispheres of one person were placed inside two different people. Either, Swinburne claims, one of the two is me or neither is, and there is no way of telling which, as each will have similar memories and mental capacities to the other. In fact, Swinburne claims, even if one's mental capacities and memories are far more similar to the original person's than the other's are, one still may not be him.

From here, he deduces that even if we know what has happened to every single atom inside a person's brain, we still do not know what has happened to 'them' as an identity. From here it follows that a part of our mind, or our soul, is immaterial, and, as a consequence, that mind-body dualism is true.

Argument from reason

Philosophers and scientists such as Victor Reppert, William Hasker, and Alvin Plantinga have developed an argument for dualism dubbed the "argument from reason". They credit C. S. Lewis with first bringing the argument to light in his book Miracles; Lewis called the argument "The Cardinal Difficulty of Naturalism", which was the title of chapter three of Miracles.

The argument postulates that if, as naturalism entails, all of our thoughts are the effect of a physical cause, then we have no reason for assuming that they are also the consequent of a reasonable ground. However, knowledge is apprehended by reasoning from ground to consequent. Therefore, if naturalism were true, there would be no way of knowing it (or anything else), except by a fluke.

Through this logic, the statement "I have reason to believe naturalism is valid" is inconsistent in the same manner as "I never tell the truth." That is, to conclude its truth would eliminate the grounds from which to reach it. To summarize the argument in the book, Lewis quotes J. B. S. Haldane, who appeals to a similar line of reasoning:

If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true...and hence I have no reason for supposing my brain to be composed of atoms.

— J. B. S. Haldane, Possible Worlds, p. 209

In his essay "Is Theology Poetry?", Lewis himself summarises the argument in a similar fashion when he writes:

If minds are wholly dependent on brains, and brains on biochemistry, and biochemistry (in the long run) on the meaningless flux of the atoms, I cannot understand how the thought of those minds should have any more significance than the sound of the wind in the trees.

— C. S. Lewis, The Weight of Glory and Other Addresses, p. 139

But Lewis later agreed with Elizabeth Anscombe's response to his Miracles argument. She showed that an argument could be valid and ground-consequent even if its propositions were generated via physical cause and effect by non-rational factors. Similar to Anscombe, Richard Carrier and John Beversluis have written extensive objections to the argument from reason on the untenability of its first postulate.

Cartesian arguments

Descartes puts forward two main arguments for dualism in Meditations: firstly, the "modal argument," or the "clear and distinct perception argument," and secondly the "indivisibility" or "divisibility" argument.

Summary of the 'modal argument':

  1. It is imaginable that one's mind might exist without one's body.
  2. Therefore, it is conceivable that one's mind might exist without one's body.
  3. Therefore, it is possible that one's mind might exist without one's body.
  4. Therefore, one's mind is a different entity from one's body.
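The inference can be made explicit in modal logic. What follows is a minimal sketch, assuming an S5-style system; the labels $m$ and $b$ (for one's mind and body) and the existence predicate $E$ are my notation, not Descartes' own:

```latex
% P1 (from conceivability): possibly, the mind exists without the body.
P1:\quad \Diamond\,\bigl(E(m) \wedge \neg E(b)\bigr) \\
% P2 (necessity of identity): if m = b, then necessarily the two
% coexist in every possible world, since they are one thing.
P2:\quad m = b \;\rightarrow\; \Box\,\bigl(E(m) \leftrightarrow E(b)\bigr) \\
% If m = b held, P2 would contradict P1; therefore:
\therefore\quad m \neq b
```

On this reconstruction the contested moves are steps 1 to 3 of the summary: critics grant that the separation is imaginable but deny that imaginability establishes the genuine metaphysical possibility asserted in P1.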

The argument is distinguished from the zombie argument, as it establishes that the mind could continue to exist without the body, rather than that the unaltered body could exist without the mind. Alvin Plantinga, J. P. Moreland, and Edward Feser have all supported the argument, although Feser and Moreland think that it must be carefully reformulated in order to be effective.

The indivisibility argument for dualism was phrased by Descartes as follows:

[T]here is a great difference between a mind and a body, because the body, by its very nature, is something divisible, whereas the mind is plainly indivisible…insofar as I am only a thing that thinks, I cannot distinguish any parts in me.… Although the whole mind seems to be united to the whole body, nevertheless, were a foot or an arm or any other bodily part amputated, I know that nothing would be taken away from the mind…

The argument relies upon Leibniz' principle of the identity of indiscernibles, which states that two things are the same if and only if they share all their properties. A counterargument is the idea that matter is not infinitely divisible, and thus that the mind could be identified with material things that cannot be divided, or potentially Leibnizian monads.
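The principle can be stated in second-order notation. This is a standard textbook formalization, not Leibniz's or Descartes' own wording:

```latex
% Identity of indiscernibles, together with its converse (the
% indiscernibility of identicals), giving the full biconditional:
\forall x\,\forall y\,\bigl[\,x = y \;\leftrightarrow\; \forall F\,(Fx \leftrightarrow Fy)\,\bigr]
% Descartes' application uses the contrapositive of the
% left-to-right direction: the body has the property of being
% divisible and the mind lacks it, so, differing in a property,
% mind and body are not identical.
```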

Arguments against dualism

Arguments from causal interaction

Cartesian dualism compared to three forms of monism.

One argument against dualism is with regard to causal interaction. If consciousness (the mind) can exist independently of physical reality (the brain), one must explain how physical memories are created concerning consciousness. Dualism must therefore explain how consciousness affects physical reality. One of the main objections to dualistic interactionism is lack of explanation of how the material and immaterial are able to interact. Varieties of dualism according to which an immaterial mind causally affects the material body and vice versa have come under strenuous attack from different quarters, especially in the 20th century. Critics of dualism have often asked how something totally immaterial can affect something totally material—this is the basic problem of causal interaction.

First, it is not clear where the interaction would take place. For example, burning one's finger causes pain. Apparently there is some chain of events, leading from the burning of skin, to the stimulation of nerve endings, to something happening in the peripheral nerves of one's body that lead to one's brain, to something happening in a particular part of one's brain, and finally resulting in the sensation of pain. But pain is not supposed to be spatially locatable. It might be responded that the pain "takes place in the brain." But evidently, the pain is in the finger. This may not be a devastating criticism.

However, there is a second problem about the interaction: namely, the question of how the interaction takes place, given that in dualism "the mind" is assumed to be non-physical and by definition outside the realm of science. The mechanism which explains the connection between the mental and the physical would therefore be a philosophical proposition rather than a scientific theory. For comparison, consider a physical mechanism that is well understood. Take a very simple causal relation, such as when a cue ball strikes an eight ball and causes it to go into the pocket. What happens in this case is that the cue ball has a certain amount of momentum as its mass moves across the pool table with a certain velocity, and that momentum is then transferred to the eight ball, which heads toward the pocket. Compare this to the situation in the brain, where one wants to say that a decision causes some neurons to fire and thus causes a body to move across the room. The intention to "cross the room now" is a mental event and, as such, does not have physical properties such as force. If it has no force, then it would seem that it could not possibly cause any neuron to fire. Dualism therefore requires an explanation of how something without any physical properties can have physical effects.
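The billiard-ball mechanism invoked here is just conservation of momentum. As a textbook illustration (the idealization is mine, not part of the original text), an idealized head-on elastic collision between a cue ball of mass $m_1$ moving at speed $v_1$ and a stationary eight ball of mass $m_2$ obeys:

```latex
% Conservation of momentum:
m_1 v_1 = m_1 v_1' + m_2 v_2' \\
% Conservation of kinetic energy (elastic collision):
\tfrac{1}{2} m_1 v_1^2 = \tfrac{1}{2} m_1 {v_1'}^2 + \tfrac{1}{2} m_2 {v_2'}^2 \\
% For equal masses (m_1 = m_2) the solution is:
v_1' = 0, \qquad v_2' = v_1
% i.e. the cue ball stops and the eight ball moves off with the
% cue ball's original velocity.
```

The dualist's difficulty is that no analogous conserved-quantity bookkeeping is available for an intention, which carries neither force nor momentum.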

Replies

Alfred North Whitehead and, later, David Ray Griffin framed a new ontology (process philosophy) seeking precisely to avoid the pitfalls of ontological dualism.

The explanation provided by Arnold Geulincx and Nicolas Malebranche is that of occasionalism, where all mind–body interactions require the direct intervention of God.

At the time C. S. Lewis wrote Miracles, quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, but still Lewis stated the logical possibility that, if the physical world was proved to be indeterministic, this would provide an entry (interaction) point into the traditionally viewed closed system, where a scientifically described physically probable/improbable event could be philosophically described as an action of a non-physical entity on physical reality. He states, however, that none of the arguments in his book will rely on this. Although some interpretations of quantum mechanics consider wave function collapse to be indeterminate, in others this event is defined and deterministic.

Argument from physics

The argument from physics is closely related to the argument from causal interaction. Many physicists and consciousness researchers have argued that any action of a nonphysical mind on the brain would entail the violation of physical laws, such as the conservation of energy.

By assuming a deterministic physical universe, the objection can be formulated more precisely. When a person decides to walk across a room, it is generally understood that the decision to do so, a mental event, immediately causes a group of neurons in that person's brain to fire, a physical event, which ultimately results in his walking across the room. The problem is that if something totally non-physical causes a bunch of neurons to fire, then there is no physical event which causes the firing. This means that some physical energy would have to be generated against the physical laws of the deterministic universe; this is by definition a miracle, and there can be no scientific explanation of where the physical energy for the firing came from, since no repeatable experiment could be performed regarding it. Such interactions would violate the fundamental laws of physics. In particular, if some external source of energy is responsible for the interactions, then this would violate the law of the conservation of energy. Dualistic interactionism has therefore been criticized for violating a general heuristic principle of science: the causal closure of the physical world.
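The conservation claim at issue can be stated compactly; this is a sketch, and the symbols are mine rather than the objection's original wording:

```latex
% For a causally closed physical system, total energy is constant:
\frac{dE_{\mathrm{total}}}{dt} = 0
% If a non-physical mind injected energy \varepsilon > 0 into the
% brain to trigger neural firing, E_{\mathrm{total}} would change
% by \varepsilon \neq 0 over the interaction, contradicting
% closure. The dualist must therefore deny either that the brain
% is a closed system or that mental causation alters E at all.
```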

Replies

The Stanford Encyclopedia of Philosophy and the New Catholic Encyclopedia provide two possible replies to the above objections. The first reply is that the mind may influence the distribution of energy without altering its quantity. The second possibility is to deny that the human body is causally closed, as the conservation of energy applies only to closed systems. However, physicalists object that no evidence exists for the causal non-closure of the human body. Robin Collins responds that energy-conservation objections misunderstand the role of energy conservation in physics: well-understood scenarios in general relativity violate energy conservation, and quantum mechanics provides precedent for causal interactions, or correlation, without energy or momentum exchange. However, this does not mean that the mind expends energy, and it still does not exclude the supernatural.

Another reply is akin to parallelism—Mills holds that behavioral events are causally overdetermined, and can be explained by either physical or mental causes alone. An overdetermined event is fully accounted for by multiple causes at once. However, J. J. C. Smart and Paul Churchland have pointed out that if physical phenomena fully determine behavioral events, then by Occam's razor an unphysical mind is unnecessary.

Robinson suggests that the interaction may involve dark energy, dark matter or some other currently unknown scientific process. However, such processes would necessarily be physical, and in this case dualism is replaced with physicalism, or the interaction point is left for study at a later time when these physical processes are understood.

Another reply is that the interaction taking place in the human body may not be described by "billiard ball" classical mechanics. If a nondeterministic interpretation of quantum mechanics is correct then microscopic events are indeterminate, where the degree of determinism increases with the scale of the system. Philosophers Karl Popper and John Eccles and physicist Henry Stapp have theorized that such indeterminacy may apply at the macroscopic scale. However, Max Tegmark has argued that classical and quantum calculations show that quantum decoherence effects do not play a role in brain activity. Indeed, macroscopic quantum states have only ever been observed in superconductors near absolute zero.

Yet another reply to the interaction problem is to note that it doesn't seem that there is an interaction problem for all forms of substance dualism. For instance, Thomistic dualism doesn't obviously face any issue with regards to interaction.

Argument from brain damage

This argument has been formulated by Paul Churchland, among others. The point is that, in instances of some sort of brain damage (e.g. caused by automobile accidents, drug abuse, pathological diseases, etc.), it is always the case that the mental substance and/or properties of the person are significantly changed or compromised. If the mind were a completely separate substance from the brain, how could it be possible that every single time the brain is injured, the mind is also injured? Indeed, it is very frequently the case that one can even predict and explain the kind of mental or psychological deterioration or change that human beings will undergo when specific parts of their brains are damaged. So the question for the dualist to try to confront is how can all of this be explained if the mind is a separate and immaterial substance from, or if its properties are ontologically independent of, the brain.

Property dualism and William Hasker's "emergent dualism" seek to avoid this problem. They assert that the mind is a property or substance that emerges from the appropriate arrangement of physical matter, and therefore could be affected by any rearrangement of matter.

Phineas Gage, who suffered destruction of one or both frontal lobes by a projectile iron rod, is often cited as an example illustrating that the brain causes mind. Gage certainly exhibited some mental changes after his accident. This physical event, the destruction of part of his brain, therefore caused some kind of change in his mind, suggesting a correlation between brain states and mental states. Similar examples abound; neuroscientist David Eagleman describes the case of another individual who exhibited escalating pedophilic tendencies at two different times, and in each case was found to have tumors growing in a particular part of his brain.

Case studies aside, modern experiments have demonstrated that the relation between brain and mind is much more than simple correlation. By damaging, or manipulating, specific areas of the brain repeatedly under controlled conditions (e.g. in monkeys) and reliably obtaining the same results in measures of mental state and abilities, neuroscientists have shown that the relation between damage to the brain and mental deterioration is likely causal. This conclusion is further supported by data from the effects of neuro-active chemicals (e.g., those affecting neurotransmitters) on mental functions, but also from research on neurostimulation (direct electrical stimulation of the brain, including transcranial magnetic stimulation).

Argument from biological development

Another common argument against dualism consists in the idea that since human beings (both phylogenetically and ontogenetically) begin their existence as entirely physical or material entities and since nothing outside of the domain of the physical is added later on in the course of development, then we must necessarily end up being fully developed material beings. There is nothing non-material or mentalistic involved in conception, the formation of the blastula, the gastrula, and so on. The postulation of a non-physical mind would seem superfluous.

Argument from neuroscience

In some contexts, the decisions that a person makes can be detected up to 10 seconds in advance by means of scanning their brain activity. Subjective experiences and covert attitudes can be detected, as can mental imagery. This is strong empirical evidence that cognitive processes have a physical basis in the brain.

Argument from simplicity

The argument from simplicity is probably the simplest and also the most common form of argument against dualism of the mental. The dualist is always faced with the question of why anyone should find it necessary to believe in the existence of two ontologically distinct entities (mind and brain), when it seems possible, and would make for a simpler thesis to test against scientific evidence, to explain the same events and properties in terms of one. It is a heuristic principle in science and philosophy not to assume the existence of more entities than is necessary for clear explanation and prediction.

This argument was criticized by Peter Glassen in a debate with J. J. C. Smart in the pages of Philosophy in the late 1970s and early 1980s. Glassen argued that, because it is not a physical entity, Occam's razor cannot consistently be appealed to by a physicalist or materialist as a justification of mental states or events, such as the belief that dualism is false. The idea is that Occam's razor may not be as "unrestricted" as it is normally described (applying to all qualitative postulates, even abstract ones) but instead concrete (applying only to physical objects). If one applies Occam's razor unrestrictedly, then it recommends monism until pluralism either receives more support or is disproved. If one applies Occam's razor only concretely, then it may not be used on abstract concepts (this route, however, has serious consequences for selecting between hypotheses about the abstract).
