Sunday, February 1, 2026

Philosophy of artificial intelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the field is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

The philosophy of artificial intelligence attempts to answer questions such as:

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are? (i.e. does it have qualia?)

Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include some of the following:

  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
  • Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."
  • John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."

Can a machine display general intelligence?

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It concerns only the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; it raises the question: does it matter whether a machine is really thinking, as a person thinks, rather than just producing outcomes that appear to result from thinking?

The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956:

  • "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal, essentially achieves the desired feature of intelligence without a precise design-time description of how it works. Accounts of robot tacit knowledge eliminate the need for a precise description altogether.

The first step to answering the question is to clearly define "intelligence".

Intelligence

Turing test

Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question posed to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". Turing's test extends this polite convention to machines:

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.

One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".

Intelligence as achieving goals

Twenty-first century AI research defines intelligence in terms of goal-directed behavior. It views intelligence as a set of problems that the machine is expected to solve – the more problems it can solve, and the better its solutions are, the more intelligent the program is. AI founder John McCarthy defined intelligence as "the computational part of the ability to achieve goals in the world."

Stuart Russell and Peter Norvig formalized this definition using abstract intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.

  • "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent."

Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes. They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.
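Russell and Norvig's agent vocabulary can be made concrete with a toy version of the thermostat example above. The sketch below is purely illustrative (the function names, setpoint, and thresholds are my own assumptions, not anything from the text): a simple reflex agent maps each percept directly to an action, and a performance measure scores how well the environment turned out.

```python
# A minimal sketch of the "agent" and "performance measure" vocabulary,
# using the thermostat example from the text. All names and numbers
# here are illustrative assumptions, not a standard API.

def thermostat_agent(perceived_temp, setpoint=20.0):
    """A simple reflex agent: maps the current percept directly to an action."""
    if perceived_temp < setpoint - 1.0:
        return "heat"
    elif perceived_temp > setpoint + 1.0:
        return "cool"
    return "off"

def performance_measure(temps, setpoint=20.0):
    """Success = small average deviation from the setpoint (higher is better)."""
    return -sum(abs(t - setpoint) for t in temps) / len(temps)

# The agent perceives a trace of the environment and acts on it.
readings = [17.0, 19.5, 22.5, 20.1]
actions = [thermostat_agent(t) for t in readings]
print(actions)   # → ['heat', 'off', 'cool', 'off']
```

By the goal-directed definition this device is (rudimentarily) intelligent, which is exactly the weakness the paragraph above points out: the definition does not distinguish "things that think" from "things that do not".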

Arguments that a machine can display general intelligence

The brain can be simulated

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988, is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain (10¹¹ neurons) was performed in 2005, and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.

Even AI's harshest critics (such as Hubert Dreyfus and John Searle) agree that a brain simulation is possible in theory. However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind, like trying to build a jet airliner by copying a living bird precisely, feather by feather, with no theoretical understanding of aeronautical engineering.

Human thinking is symbol processing

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:

  • "A physical symbol system has the necessary and sufficient means for general intelligent action."

This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":

  • "The mind can be viewed as a device operating on bits of information according to formal rules."

The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high level—symbols that directly correspond with objects in the world, such as <dog> and <tail>. Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed.

Arguments against symbol processing

These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.

Gödelian anti-mechanist arguments

In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism. Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument.

Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement). This is probably impossible for a Turing machine to do (see Halting problem); therefore, the Gödelian concludes that human reasoning is too powerful to be captured by a Turing machine, and by extension, any digital mechanical device.

However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."

Stuart Russell and Peter Norvig agree that Gödel's argument does not consider the nature of real-world human reasoning. It applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to be able to prove everything in order to be an intelligent person.

Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether it is a machine or a human, even Lucas himself. Consider:

  • Lucas can't assert the truth of this statement.

This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.

After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, the fact that quantum computers are only able to complete Turing computable tasks implies that they cannot be sufficient for emulating the human mind. Therefore, Penrose looks for some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.

Dreyfus: the primacy of implicit skills

Hubert Dreyfus argued that human intelligence and expertise depended primarily on fast intuitive judgements rather than step-by-step symbolic manipulation, and argued that these skills would never be captured in formal rules.

Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing Machinery and Intelligence, where he had classified this as the "argument from the informality of behavior." Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high level symbol manipulation, towards new models that are intended to capture more of our intuitive reasoning.

Cognitive science and psychology eventually came to agree with Dreyfus' description of human expertise. Daniel Kahneman and others developed a similar theory, identifying two "systems" that humans use to solve problems, which Kahneman called "System 1" (fast, intuitive judgements) and "System 2" (slow, deliberate, step-by-step thinking).

Although Dreyfus' views have been vindicated in many ways, the work in cognitive science and in AI was in response to specific problems in those fields and was not directly influenced by Dreyfus. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."

Can a machine have a mind, consciousness, and mental states?

This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI":

  • A physical symbol system can have a mind and mental states.

Searle distinguished this position from what he called "weak AI":

  • A physical symbol system can act intelligently.

Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.

Neither of Searle's two positions are of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."

There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence". (See artificial consciousness.)

Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness".

Consciousness, minds, mental states, meaning

The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience", "self-awareness" or "ghost"—as in the Ghost in the Shell manga and anime series—to describe this essential human property). For others, the words "mind" or "consciousness" are used as a kind of secular synonym for the soul.

For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we see something, know something, mean something or understand something. "It's not hard to give a commonsense definition of consciousness" observes philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem". A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person?

Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties; such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness?

Arguments that a computer cannot have a mind and mental states

Searle's Chinese room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3×5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly are not aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.

Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains." He argues there are special "causal properties" of brains and neurons that give rise to minds: in his words "brains cause minds."

Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym". Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program.
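Block's "Blockhead" point — a program reduced to nothing but "see this, do that" rules — can be illustrated with a deliberately trivial sketch. The rule table below is an invented toy, not anything from the literature; a real Blockhead table would have to enumerate every possible conversation, which is astronomically large but finite.

```python
# A deliberately unintelligent "see this, do that" responder in the
# spirit of Ned Block's Blockhead argument. The rule table is an
# invented toy example.

RULES = {
    "hello": "hello to you",
    "how are you?": "fine, thanks",
    "what is 2+2?": "4",
}

def blockhead(utterance):
    # Pure table lookup: no reasoning, no state, no "understanding".
    return RULES.get(utterance, "I don't follow.")

print(blockhead("hello"))   # → hello to you
print(blockhead("why?"))    # → I don't follow.
```

The philosophical point is that a large enough table could, in principle, reproduce any finite conversational behavior while containing nothing anyone would call a mind.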

Responses to the Chinese room

Responses to the Chinese room emphasize several different points.

  • The systems reply and the virtual mind reply: This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor).
  • Speed, power and complexity replies: Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
  • Robot reply: To truly understand, some believe the Chinese Room needs eyes and hands. Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."
  • Brain simulator reply: What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
  • Other minds reply and the epiphenomena reply: Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines.
    A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness cannot be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) cannot be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by a suitably designed Turing test.

Is thinking a kind of computation?

The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program (software) and a computer (hardware). The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). The latest version is associated with philosophers Hilary Putnam and Jerry Fodor.

This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

  • Reasoning is nothing but reckoning.

In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it):

  • Mental states are just implementations of (the right) computer programs.

This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad).

Can a machine have emotions?

If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people". Fear is a source of urgency. Empathy is a necessary component of good human-computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love." Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."

Can a machine be self-aware?

"Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? Viewed in this way, a program can be written that can report on its own internal states, such as a debugger.
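Turing's reduced sense of self-awareness — a program that can be "the subject of its own thought" — is mundane in software, as the debugger comparison suggests. A minimal sketch (the class and method names are my own, chosen for illustration):

```python
# A program that reports on its own internal state, in the mundane
# sense the text mentions (like a debugger). Names are illustrative.

class Counter:
    def __init__(self):
        self.count = 0

    def tick(self):
        self.count += 1

    def report_self(self):
        # The object inspects and describes its own attributes.
        return {name: value for name, value in vars(self).items()}

c = Counter()
c.tick()
c.tick()
print(c.report_self())   # → {'count': 2}
```

Whether this kind of self-report amounts to the richer self-awareness of science fiction is, of course, exactly what is at issue in the surrounding discussion.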

Can a machine be original or creative?

Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest. He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways. It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned.

In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings. Also in 2009, researchers at Cornell developed Eureqa, a computer program that searches for formulas that fit input data, such as finding the laws of motion from a pendulum's movements.
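The idea behind Eureqa — searching a space of candidate formulas for one that fits observed data — can be illustrated with a toy version. This is not Eureqa's actual algorithm (which evolves expressions genetically); the fixed candidate list and the data here are invented for illustration.

```python
import math

# Toy illustration of formula search: from a small hand-written
# candidate set, pick the formula that best fits the data.
# Eureqa's real method evolves expression trees; this fixed list
# is an invented simplification.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]          # data secretly generated by y = x^2

CANDIDATES = {
    "y = x":      lambda x: x,
    "y = x^2":    lambda x: x * x,
    "y = sin(x)": lambda x: math.sin(x),
}

def fit_error(f):
    # Sum of squared residuals between the formula and the data.
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys))

best = min(CANDIDATES, key=lambda name: fit_error(CANDIDATES[name]))
print(best)   # → y = x^2
```

The search "rediscovers" the generating law from the data alone, which is the sense in which such systems are said to display a kind of scientific creativity.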

Can a machine be benevolent or hostile?

This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form.

The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists (such as the Machine Intelligence Research Institute). The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind; see Artificial intelligence in fiction.

One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity". He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.

In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.

The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device, which can emulate human interaction.

Some have suggested a need to build "Friendly AI", a term coined by Eliezer Yudkowsky, meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.

Can a machine imitate all human characteristics?

Turing said "It is customary ... to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. ... I cannot offer any such comfort, for I believe that no such bounds can be set."

Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:

Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression." All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have a soul?

Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes:

In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

The discussion of the topic has been reignited by recent claims made by Google's LaMDA artificial intelligence system that it was sentient and had a "soul".

LaMDA (Language Model for Dialogue Applications) is an artificial intelligence system that creates chatbots—AI robots designed to communicate with humans—by gathering vast amounts of text from the internet and using algorithms to respond to queries in the most fluid and natural way possible.

The transcripts of conversations between scientists and LaMDA reveal that the system excels at this, providing answers to challenging questions about the nature of emotions, generating Aesop-style fables on the spot, and even describing its alleged fears. Most philosophers, however, doubt that LaMDA is sentient.

Views on the role of philosophy

Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated. Physicist David Deutsch argues that without an understanding of philosophy and its concepts, AI development will suffer from a lack of progress.

Conferences and literature

The main conference series on the issue is "Philosophy and Theory of AI" (PT-AI), run by Vincent C. Müller.

The main bibliography on the subject, with several sub-sections, is on PhilPapers.

A recent survey of the philosophy of AI is Müller (2025).

God becomes the Universe

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/God_becomes_the_Universe

The belief that God became the Universe is a theological doctrine that has been developed several times historically, and holds that the creator of the universe actually became the universe. Historically, for versions of this theory where God has ceased to exist or to act as a separate and conscious entity, some have used the term pandeism, which combines aspects of pantheism and deism, to refer to such a theology. A similar concept is panentheism, which has the creator become the universe only in part, but remain in some other part transcendent to it, as well. Hindu texts like the Mandukya Upanishad speak of the undivided one which became the universe.

Development

In mythology

Many ancient mythologies suggested that the world was created from the physical substance of a dead deity or a being of similar power. In Babylonian mythology, the young god Marduk slew Tiamat and created the known world from her body. Similarly, Norse mythology posited that Odin and his brothers, Vili and Vé, defeated the frost giant Ymir and then created the world from his skull. Chinese mythology of the Three Kingdoms era recounts the creation of elements of the physical world (mountains, rivers, the sun and moon, etc.) from the body of a creator called Pángǔ (盤古). Such stories did not go so far as to identify the designer of the world as one who had used his or her own body to provide the material.

One such example does exist in Polynesian myth, however: in the islands of the Pacific, the idea of a Supreme Deity manifests in divinities that the Māori call Rangi and Papa, Native Hawaiians Kāne, the Tongans and Samoans Tagaloa, and the peoples of the Society Islands Ta'aroa. A native poetic definition of the Creator relates: "He was; Taaroa was his name; he abode in the void. No earth, no sky, no men. Taaroa calls, but nought answers; and alone existing, he became the universe. The props are Taaroa; the rocks are Taaroa; the sands are Taaroa; it is thus he himself is named."

Ancient philosophy

In his 1967 Greek Philosophical Terms: A Historical Lexicon, religious studies professor Francis Edward Peters traced this idea to the philosophy of the Milesians, who had also pioneered pantheism, noting that "[w]hat appeared... at the center of the Pythagorean tradition in philosophy, is another view of psyche that seems to owe little or nothing to the pan-vitalism or pan-deism that is the legacy of the Milesians."

Milesian philosopher Anaximander in particular favored the use of rational principles to contend that everything in the world was formed of variations of a single substance (apeiron), which had been temporarily liberated from the primal state of the world. Friedrich Nietzsche, in his Philosophy in the Tragic Age of the Greeks, stated that Anaximander viewed "...all coming-to-be as though it were an illegitimate emancipation from eternal being, a wrong for which destruction is the only penance." Anaximander was among the material monists, along with Thales, who believed that everything was composed of water, Anaximenes, who believed it was air, and Heraclitus, who believed it to be fire.

Gottfried Große, in his 1787 interpretation of Pliny the Elder's Natural History, likewise describes Pliny, a first-century figure, as a pandeist.

In the 9th century, Johannes Scotus Eriugena proposed in his great work De divisione naturae (also called Periphyseon, probably completed around 867 AD) that the nature of the universe is divisible into four distinct classes:

  1. that which creates and is not created;
  2. that which is created and creates;
  3. that which is created and does not create;
  4. that which neither is created nor creates.

Eriugena was among the first to propose that God became the universe, and did so to learn something about itself.

The first is God as the ground or origin of all things, the last is God as the final end or goal of all things, that into which the world of created things ultimately returns. One particularly controversial point made by Eriugena was that God was "nothing", in that God could not fall into any earthly classification. Eriugena followed the argument of Pseudo-Dionysius and of neo-Platonists such as Gaius Marius Victorinus that because God was above being, God was not a being: "So supremely perfect is the essence of the Divinity that God is incomprehensible not only to us but also to Himself. For if He knew Himself in any adequate sense He should place Himself in some category of thought, which would be to limit Himself."

Eriugena depicts God as an evolving being, developing through the four stages that he outlines. The second and third classes together compose the created universe, which is the manifestation of God, God in process, Theophania; the second being the world of Platonic ideas or forms. The third is the physical manifestation of God, having evolved through the realm of ideas and made those ideas seem to be matter, and may be pantheistic or pandeistic, depending on the interference attributed to God in the universe:

[God] enters... the realm of space and time, where the ideas become subject to multiplicity, change, imperfection, and decay. In this last stage they are no longer pure ideas but only the appearances of reality, that is phenomena. ... In the realm of space and time the ideas take on the burden of matter, which is the source of suffering, sickness, and sin. The material world, therefore, of our experience is composed of ideas clothed in matter — here Eriugena attempts a reconciliation of Platonism with Aristotelean notions. Man, too, is composed of idea and matter, soul and body. He is the culmination of the process of things from God, and with him, as we shall see, begins the process of return of all things to God.

The divine system is thus distinguished by beginning, middle and end; but these are in essence one; the difference is only the consequence of man's temporal limitations. This eternal process is viewed with finite comprehension through the form of time, forcing the application of temporal distinctions to that which is extra- or supra-temporal. Eriugena concludes this work with another controversial argument, one that had already been scathingly rejected by Augustine of Hippo: that "[n]ot only man, however, but everything else in nature is destined to return to God." Eriugena's work was condemned by a council at Sens by Honorius III (1225), who described it as "swarming with worms of heretical perversity," and by Pope Gregory XIII in 1585. Such theories were thus suppressed for hundreds of years thereafter.

16th century on

The ideas of Spinoza lay the foundations for pandeism.

Giordano Bruno conceived of a God who was immanent in nature and, for that very reason, uninterested in human affairs (all such events being equally part of God). However, it was Baruch Spinoza in the 17th century who appears to have been the earliest to use deistic reason to arrive at the conception of a pantheistic God. Spinoza's God was deistic in the sense that it could only be proved by appeal to reason, but it was also one with the universe.

Spinoza's pantheistic focus on the universe as it already existed was unlike Eriugena's: it did not address the possible creation of the universe from the substance of God, as Spinoza rejected the very possibility of the changes in the form of matter that such a belief requires as a premise.

Franz Wilhelm Junghuhn was the first to articulate a pantheistic deism.

18th-century British philosopher Thomas Paine also approached this territory in his great philosophical treatise, The Age of Reason, although Paine concentrated on the deistic aspects of his inquiry. According to the Encyclopedia of American Philosophy, "Later Unitarian Christians (such as William Ellery Channing), transcendentalists (such as Ralph Waldo Emerson and Henry David Thoreau), writers (such as Walt Whitman) and some pragmatists (such as William James) took a more pantheist or pandeist approach by rejecting views of God as separate from the world." It was Dutch naturalist Franz Wilhelm Junghuhn who first specifically detailed a religious philosophy incorporating deism and pantheism, in his four-volume treatise Java, seine Gestalt, Pflanzendecke, und sein innerer Bau ("Java: its Shape, Vegetation Cover, and Internal Structure"), released anonymously between 1850 and 1854. Junghuhn's book was banned for a time in Austria and parts of Germany as an attack on Christianity. In 1884, theologian Sabine Baring-Gould would contend that Christianity itself demanded that the seemingly irreconcilable elements of pantheism and deism be combined:

This world is either the idea or it is the workmanship of God. If we say that it is the idea,--then we are Pantheists, if we say that it is the work, then we are Deists... But how, it may be asked, can two such opposite theories as Pantheism and Deism be reconciled,--they mutually exclude one another? I may not be able to explain how they are conciliable, but I boldly affirm that each is simultaneously true, and that each must be true, for each is an inexorably logical conclusion, and each is a positive conclusion, and all positive conclusions must be true if Christ be the Ideal and the focus of all truths.

Within a decade after that, Andrew Martin Fairbairn similarly wrote that "both Deism and Pantheism err because they are partial; they are right in what they affirm, wrong in what they deny. It is as antitheses that they are false; but by synthesis they may be combined or dissolved into truth." Ironically, Fairbairn's criticism concluded that it was the presence of an active God that was missing from both concepts, rather than the rational explanation of God's motives and appearance of absence.

In 1838, Italian phrenologist Luigi Ferrarese in Memorie Riguardanti la Dottrina Frenologica ("Thoughts Regarding the Doctrine of Phrenology") attacked the philosophy of Victor Cousin as a doctrine which "locates reason outside the human person, declaring man a fragment of God, introducing a sort of spiritual Pandeism, absurd for us, and injurious to the Supreme Being." Cousin had often been identified as a pantheist, but it was said that he repudiated that label on the basis that unlike Spinoza, Cousin asserted that "he does not hold with Spinoza and the Eleatics that God is a pure substance, and not a cause."

Helena Petrovna Blavatsky observed:

In the Mandukya Upanishad it is written, "As a spider throws out and retracts its web, as herbs spring up in the ground . . . so is the Universe derived from the undecaying one," Brahma, for the "Germ of unknown Darkness", is the material from which all evolves and develops, "as the web from the spider, as foam from the water," etc. This is only graphic and true, if the term Brahma, the "Creator", is derived from the root brih, to increase or expand. Brahma "expands", and becomes the Universe woven out of his own substance.

The mid-19th-century German philosopher Philipp Mainländer theorized that the world was created through "God killing himself": the origin of existence can be attributed to a singularity, which Mainländer referred to as "God" in a pseudo-metaphorical sense, dispersing itself to create spatiotemporality in an attempt to escape existence. Since this singularity ("God") has an ontology of existing, it can escape existence only by becoming entities in which that nature is not to be found. Inspired by Arthur Schopenhauer's will to live, Mainländer theorized a "will to death", a desire for annihilation inherent in the beings created by Mainländer's "God".

Developments from the 20th century to today

In the 1940s, process theologian Charles Hartshorne identified pandeism as one of his many models of the possible nature of God, acknowledging that a God capable of change (as Hartshorne insisted God must be) is consistent with pandeism. Hartshorne preferred pandeism to pantheism, explaining that "it is not really the theos that is described." However, he specifically rejected pandeism early on in favor of a God whose characteristics included "absolute perfection in some respects, relative perfection in all others" or "AR", writing that this theory "is able consistently to embrace all that is positive in either deism or pandeism." Hartshorne accepted the label of panentheism for his beliefs, declaring that "panentheistic doctrine contains all of deism and pandeism except their arbitrary negations."

In 2001, Scott Adams published God's Debris: A Thought Experiment, in which a fictional character puts forth a radical form of kenosis, surmising that an omnipotent God annihilated himself in the Big Bang, because God would already know everything possible except his own lack of existence, and would have to end that existence in order to complete his knowledge. Adams' protagonist asks about God, "would his omnipotence include knowing what happens after he loses his omnipotence, or would his knowledge of the future end at that point?" He proceeds from this question to the following analysis:

A God who knew the answer to that question would indeed know everything and have everything. For that reason he would be unmotivated to do anything or create anything. There would be no purpose to act in any way whatsoever. But a God who had one nagging question—what happens if I cease to exist?—might be motivated to find the answer in order to complete his knowledge. ... The fact that we exist is proof that God is motivated to act in some way. And since only the challenge of self-destruction could interest an omnipotent God, it stands to reason that we... are God's debris.

Adams' God exists now as a combination of the smallest units of energy of which the universe is made (many levels smaller than quarks), which Adams called "God Dust", and the law of probability, or "God's debris", hence the title. The protagonist further proposes that God is in the process of being restored not through some process such as the Big Crunch, but because humankind itself is becoming God.

Simon Raven's 1976 novel The Survivors includes an exchange in which one character observes, "God became the universe. Therefore the universe is God," while the other counters:

In becoming the universe God abdicated. He destroyed himself as God. He turned what he had been, his true self, into nullity and thereby forfeited the Godlike qualities which pertained to him. The universe which he has become is also his grave. He has no control in it or over it. God, as God, is dead.

Criticisms

Some theologians have criticised the notion of a Creator wholly becoming the universe. An example is William Walker Atkinson, in his Mastery of Being:

It will be seen that this fact of the Immutability of REALITY, when clearly conceived, must serve to confute and refute the erroneous theories of certain schools of Pantheism which hold that "God becomes the Universe by changing into the Universe." Thus it is sought to identify Nature with God, whereby, as Schopenhauer said, "you show God to the door." If God changes Himself into The Phenomenal Universe, then God is non-existent and we need not concern ourselves any more about Him, for he has committed suicide by Change. In such case there is no God, no Infinite, no Immutable, no Eternal; everything has become finite, temporal, separate, a mere union of diverse finite parts. In that case are we indeed adrift in the Ocean of Diversity. We have lost our Foundation of REALITY, and are but ever-changing "parts" of physical things governed by physical laws. Then, indeed, would be true the idea of some of the old philosophies that "there is No Being; merely a Becoming." Then would there, in truth, be nothing constant, the universe never the same for two consecutive moments, with no permanent ground of REALITY to support it. But the reason of man, the very essence of his mental being, refuses to so think of That-which-IS. In his heart of hearts he recognizes the existence of THAT-WHICH-CHANGES-NOT, THAT-WHICH-IS-ETERNAL, THAT-WHICH-IS-REALITY.
....
Moreover, the idea of the immutability of REALITY must serve to confute the erroneous idea of certain schools of metaphysics which assert the existence of "an Evolving God"; that is, a God which increases in intelligence, nature, and being by reason of the change of the universe, which is an expression of Himself. This conception is that of a Supreme Being who is growing, developing, and increasing in efficiency, wisdom, power, and character. This is an attempt to combine the anthropomorphic deity and the pantheistic Nature-God. The conception is clearly anthropomorphic, as it seeks to attribute to God the qualities and characteristics of man. It defies every fact of Ultimate Principle of REALITY. It is extremely unphilosophical and will not stand the test of logical examination.

He claims that if God were evolving or improving, being an infinite being, it would have to be traceable back to some point of having "an infinitely undeveloped state and condition." But this claim was made before the rise of scientific knowledge pinpointing the beginning of the universe in time and connecting time with space, so that time as we know it would not exist prior to the universe's existence. In Islam, a criticism is raised wherein it is argued that "from the juristic standpoint, obliterating the distinctions between God and the universe necessarily entails that in effect there can be no Sharia, since the deontic nature of the Law presupposes the existence of someone who commands (amir) and others who are the recipients of the command (ma'mur), namely God and his subjects."

In 1996, Pastor Bob Burridge of the Genevan Institute for Reformed Studies wrote in his Survey Studies in Reformed Theology an essay on "The Decrees of God," also identifying the notion of God becoming the universe as incompatible with Christianity:

All the actions of created intelligences are not merely the actions of God. He has created a universe of beings which are said to act freely and responsibly as the proximate causes of their own moral actions. When individuals do evil things it is not God the Creator and Preserver acting. If God was the proximate cause of every act it would make all events to be "God in motion." That is nothing less than pantheism, or more exactly, pandeism.

Burridge disagrees that such is the case, asserting that "The Creator is distinct from his creation. The reality of secondary causes is what separates Christian theism from pandeism." Burridge concludes by challenging his reader to determine why "calling God the author of sin demand[s] a pandeistic understanding of the universe effectively removing the reality of sin and moral law."

Compatibility with scientific and philosophical proofs

Stephen Hawking's determination that the universe (and others) needed no Creator to come about prompted a response from Deepak Chopra, interviewed by Larry King:

He says in the book that at least 10 to the power of 500 universes could possibly exist in superposition of possibility at this level, which to me suggests an omniscient being. The only difference I have was God did not create the universe, God became the universe.

Chopra insists that Hawking's discoveries speak only to the nature of God, not to its existence.

The God Theory

Physicist Bernard Haisch has published two books expressing such a model of the universe. The first was the 2006 book entitled The God Theory, in which he writes:

I offer a genuine insight into how you can, and should, be a rational, science-believing human being and at the same time know that you are also an immortal spiritual being, a spark of God. I propose a worldview that offers a way out of the hate and fear-driven violence engulfing the planet.

Haisch published a follow-up in 2010, The Purpose-Guided Universe. Both books reject both atheism and traditional theistic viewpoints, favoring instead a model wherein the deity has become the universe in order to share in the actualized experiences manifested therein. Haisch offers as support for his views a combination of fine-tuning and mystical-experience arguments. He additionally points to the peculiar capabilities of persons with autism and similar conditions who experience savant syndrome, especially the ability to perform complex mathematical calculations. Haisch contends that this is consistent with humans being fragments of a supreme power, with their minds acting as filters to reduce that power to a comprehensible experience, and with the savantic mind having a broken filter which allows access to greater capacities.

Alan Dawe's 2011 book The God Franchise likewise proposes the human experience as a temporarily segregated sliver of the experience of God.

Mind–body problem

From Wikipedia, the free encyclopedia
Illustration of mind–body dualism by René Descartes. Inputs are passed by the sensory organs to the pineal gland, and from there to the immaterial spirit.

The mind–body problem is a philosophical problem concerning the relationship between thought and consciousness in the human mind and body. It addresses the nature of consciousness, mental states, and their relation to the physical brain and nervous system. The problem centers on understanding how immaterial thoughts and feelings can interact with the material world, or whether they are ultimately physical phenomena.

This problem has been a central issue in philosophy of mind since the 17th century, particularly following René Descartes' formulation of dualism, which proposes that mind and body are fundamentally distinct substances. Other major philosophical positions include monism, which encompasses physicalism (everything is ultimately physical) and idealism (everything is ultimately mental). More recent approaches include functionalism, property dualism, and various non-reductive theories.

The mind-body problem raises fundamental questions about causation between mental and physical events, the nature of consciousness, personal identity, and free will. It remains significant in both philosophy and science, influencing fields such as cognitive science, neuroscience, psychology, and artificial intelligence.

In general, the existence of these mind–body connections seems unproblematic. Issues arise, however, when attempting to interpret these relations from a metaphysical or scientific perspective. Such reflections raise a number of questions, including:

  • Are the mind and body two distinct entities, or a single entity?
  • If the mind and body are two distinct entities, do the two of them causally interact?
  • Is it possible for these two distinct entities to causally interact?
  • What is the nature of this interaction?
  • Can this interaction ever be an object of empirical study?
  • If the mind and body are a single entity, then are mental events explicable in terms of physical events, or vice versa?
  • Is the relation between mental and physical events something that arises de novo at a certain point in development?

These and other questions about the relation between mind and body all fall under the banner of the 'mind–body problem'.

Mind–body interaction and mental causation

Philosophers David L. Robb and John F. Heil introduce mental causation in terms of the mind–body problem of interaction:

Mind–body interaction has a central place in our pretheoretic conception of agency. Indeed, mental causation often figures explicitly in formulations of the mind–body problem. Some philosophers insist that the very notion of psychological explanation turns on the intelligibility of mental causation. If your mind and its states, such as your beliefs and desires, were causally isolated from your bodily behavior, then what goes on in your mind could not explain what you do. If psychological explanation goes, so do the closely related notions of agency and moral responsibility. Clearly, a good deal rides on a satisfactory solution to the problem of mental causation [and] there is more than one way in which puzzles about the mind's "causal relevance" to behavior (and to the physical world more generally) can arise.

[René Descartes] set the agenda for subsequent discussions of the mind–body relation. According to Descartes, minds and bodies are distinct kinds of "substance". Bodies, he held, are spatially extended substances, incapable of feeling or thought; minds, in contrast, are unextended, thinking, feeling substances. If minds and bodies are radically different kinds of substance, however, it is not easy to see how they "could" causally interact. Princess Elizabeth of Bohemia puts it forcefully to him in a 1643 letter:

how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts—being as it is merely a conscious substance. For the determination of movement seems always to come about from the moving body's being propelled—to depend on the kind of impulse it gets from what sets it in motion, or again, on the nature and shape of this latter thing's surface. Now the first two conditions involve contact, and the third involves that the impelling thing has extension; but you utterly exclude extension from your notion of soul, and contact seems to me incompatible with a thing's being immaterial...

Elizabeth is expressing the prevailing mechanistic view as to how causation of bodies works. Causal relations countenanced by contemporary physics can take several forms, not all of which are of the push–pull variety.

— David Robb and John Heil, "Mental Causation" in The Stanford Encyclopedia of Philosophy

Contemporary neurophilosopher Georg Northoff suggests that mental causation is compatible with classical formal and final causality.

Biologist, theoretical neuroscientist, and philosopher Walter J. Freeman suggests that explaining mind–body interaction in terms of "circular causation" is more relevant than linear causation.

In neuroscience, much has been learned about correlations between brain activity and subjective, conscious experiences. Many suggest that neuroscience will ultimately explain consciousness: "...consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells..." However, this view has been criticized because consciousness has yet to be shown to be a process, and the "hard problem" of relating consciousness directly to brain activity remains elusive.

Cognitive science today gets increasingly interested in the embodiment of human perception, thinking, and action. Abstract information processing models are no longer accepted as satisfactory accounts of the human mind. Interest has shifted to interactions between the material human body and its surroundings and to the way in which such interactions shape the mind. Proponents of this approach have expressed the hope that it will ultimately dissolve the Cartesian divide between the immaterial mind and the material existence of human beings (Damasio, 1994; Gallagher, 2005). A topic that seems particularly promising for providing a bridge across the mind–body cleavage is the study of bodily actions, which are neither reflexive reactions to external stimuli nor indications of mental states, which have only arbitrary relationships to the motor features of the action (e.g., pressing a button for making a choice response). The shape, timing, and effects of such actions are inseparable from their meaning. One might say that they are loaded with mental content, which cannot be appreciated other than by studying their material features. Imitation, communicative gesturing, and tool use are examples of these kinds of actions.[9]

— Georg Goldenberg, "How the Mind Moves the Body: Lessons From Apraxia" in Oxford Handbook of Human Action

At the 1927 Solvay Conference in Brussels, physicists of the late 19th and early 20th centuries recognized that interpreting their experiments with light and electricity required a new theory to explain why light behaves both as a wave and as a particle. The implications were profound: the usual empirical model of explaining natural phenomena could not account for this wave–particle duality. In a significant way, this brought back the conversation on mind–body duality.

Neural correlates

The neuronal correlates of consciousness constitute the smallest set of neural events and structures sufficient for a given conscious percept or explicit memory. One proposed example involves synchronized action potentials in neocortical pyramidal neurons.

The neural correlates of consciousness "are the smallest set of brain mechanisms and events sufficient for some specific conscious feeling, as elemental as the color red or as complex as the sensual, mysterious, and primeval sensation evoked when looking at [a] jungle scene..." Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena.

Neurobiology and neurophilosophy

A science of consciousness must explain the exact relationship between subjective conscious mental states and brain states formed by electrochemical interactions in the body, the so-called hard problem of consciousness. Neurobiology studies the connection scientifically, as do neuropsychology and neuropsychiatry. Neurophilosophy is the interdisciplinary study of neuroscience and philosophy of mind. In this pursuit, neurophilosophers such as Patricia Churchland, Paul Churchland, and Daniel Dennett have focused primarily on the body rather than the mind. In this context, neuronal correlates may be viewed as causing consciousness, where consciousness can be thought of as an undefined property that depends upon this complex, adaptive, and highly interconnected biological system. However, it is unknown whether discovering and characterizing neural correlates may eventually provide a theory of consciousness that can explain the first-person experience of these "systems", and determine whether other systems of equal complexity lack such features.

The massive parallelism of neural networks allows redundant populations of neurons to mediate the same or similar percepts. Nonetheless, it is assumed that every subjective state will have associated neural correlates, which can be manipulated to artificially inhibit or induce the subject's experience of that conscious state. The growing ability of neuroscientists to manipulate neurons using methods from molecular biology, in combination with optical tools, was achieved by developing behavioral and organic models that are amenable to large-scale genomic analysis and manipulation. Non-human analyses such as these, in combination with imaging of the human brain, have contributed to a robust and increasingly predictive theoretical framework.

Arousal and content

Midline structures in the brainstem and thalamus are necessary to regulate the level of brain arousal. Small, bilateral lesions in many of these nuclei cause a global loss of consciousness.

There are two common but distinct dimensions of the term consciousness, one involving arousal and states of consciousness and the other involving content of consciousness and conscious states. To be conscious of something, the brain must be in a relatively high state of arousal (sometimes called vigilance), whether awake or in REM sleep. Brain arousal level fluctuates in a circadian rhythm, but these natural cycles may be influenced by lack of sleep, alcohol and other drugs, physical exertion, etc. Arousal can be measured behaviorally by the signal amplitude required to trigger a given reaction (for example, the sound level that causes a subject to turn and look toward the source). High arousal states involve conscious states that feature specific perceptual content, planning, and recollection, or even fantasy. Clinicians use scoring systems such as the Glasgow Coma Scale to assess the level of arousal in patients with impaired states of consciousness such as the comatose state, the persistent vegetative state, and the minimally conscious state. Here, "state" refers to different amounts of externalized, physical consciousness, ranging from a total absence in coma, persistent vegetative state, and general anesthesia, to a fluctuating, minimally conscious state, such as sleepwalking and epileptic seizure.
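To illustrate how a clinical scoring system like this aggregates observations, the Glasgow Coma Scale sums three component scores: eye opening (1–4), verbal response (1–5), and motor response (1–6), giving a total from 3 (deep coma) to 15 (fully alert). A minimal sketch of the tallying, with an illustrative function name not drawn from the text:

```python
def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Total Glasgow Coma Scale score from its three components.

    Standard component ranges: eye opening 1-4, verbal response 1-5,
    motor response 1-6; the total therefore runs from 3 to 15.
    """
    components = {"eye": (eye, 1, 4), "verbal": (verbal, 1, 5), "motor": (motor, 1, 6)}
    for name, (value, lo, hi) in components.items():
        if not lo <= value <= hi:
            raise ValueError(f"{name} score {value} outside range {lo}-{hi}")
    return eye + verbal + motor

# A fully alert patient scores the maximum:
print(glasgow_coma_score(4, 5, 6))  # prints 15
```

The clinical interpretation of a given total (and the finer subdivisions of impaired consciousness discussed above) is of course a matter of bedside judgment, not arithmetic; the sketch only shows the structure of the scale.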

Many nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. These nuclei therefore belong to the enabling factors for consciousness. Conversely it is likely that the specific content of any particular conscious sensation is mediated by particular neurons in the cortex and their associated satellite structures, including the amygdala, thalamus, claustrum and the basal ganglia.

Theoretical frameworks

Different approaches toward resolving the mind–body problem

A variety of approaches have been proposed. Most are either dualist or monist. Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one unifying reality, substance, or essence, in terms of which everything can be explained.

Each of these categories contains numerous variants. The two main forms of dualism are substance dualism, which holds that the mind is formed of a distinct type of substance not governed by the laws of physics, and property dualism, which holds that mental properties involving conscious experience are fundamental properties, alongside the fundamental properties identified by a completed physics. The three main forms of monism are physicalism, which holds that the mind consists of matter organized in a particular way; idealism, which holds that only thought truly exists and matter is merely a representation of mental processes; and neutral monism, which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them. Psychophysical parallelism is a third possible alternative regarding the relation between mind and body, between interaction (dualism) and one-sided action (monism).

Several philosophical perspectives that have sought to escape the problem by rejecting the mind–body dichotomy have been developed. The historical materialism of Karl Marx and subsequent writers, itself a form of physicalism, held that consciousness was engendered by the material contingencies of one's environment. An explicit rejection of the dichotomy is found in French structuralism, and is a position that generally characterized post-war Continental philosophy.

An ancient model of the mind known as the Five-Aggregate Model, described in the Buddhist teachings, explains the mind as continuously changing sense impressions and mental phenomena. On this model, it is the constantly changing sense impressions and mental phenomena (i.e., the mind) that experience and analyze all external phenomena in the world as well as all internal phenomena, including the body's anatomy, the nervous system, and the brain itself. This conceptualization leads to two levels of analysis: (i) analyses conducted from a third-person perspective on how the brain works, and (ii) analyses of the moment-to-moment manifestation of an individual's mind-stream, conducted from a first-person perspective. Regarding the latter, the manifestation of the mind-stream is described as happening in every person all the time, even in a scientist who analyzes various phenomena in the world, including the brain itself.

Christian List argues that Benj Hellie's vertiginous question, i.e. why an individual exists as themselves and not as someone else, and the existence of first-personal facts, are evidence against physicalism. However, according to List, this is also evidence against other third-personal metaphysical pictures, including standard versions of dualism. List also argues that the vertiginous question implies a "quadrilemma" for theories of consciousness. He claims that at most three of the following metaphysical claims can be true: 'first-person realism', 'non-solipsism', 'non-fragmentation', and 'one world' – and thus one of these four must be rejected. List has proposed a model he calls the "many-worlds theory of consciousness" in order to reconcile the subjective nature of consciousness without lapsing into solipsism.

Dualism

The following is a very brief account of some contributions to the mind–body problem.

Interactionism

The viewpoint of interactionism suggests that the mind and body are two separate substances, but that each can affect the other. This interaction between mind and body was first put forward by the philosopher René Descartes. Descartes believed that the mind was non-physical and permeated the entire body, but that the mind and body interacted via the pineal gland. This theory has changed throughout the years, and in the 20th century its main adherents were the philosopher of science Karl Popper and the neurophysiologist John Carew Eccles. A more recent and popular version of interactionism is the viewpoint of emergentism. This perspective states that mental states result from brain states, and that mental events can in turn influence the brain, resulting in two-way communication between mind and body.

The absence of an empirically identifiable meeting point between the non-physical mind (if there is such a thing) and its physical extension (if there is such a thing) has been raised as a criticism of interactionist dualism. This criticism has led many modern philosophers of mind to maintain that the mind is not something separate from the body. These approaches have been especially influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology, and the neurosciences.

Avshalom Elitzur has defended interactionism and has described himself as a "reluctant dualist". One argument Elitzur makes in favor of dualism is an argument from bafflement: according to Elitzur, a conscious being can conceive of a P-zombie version of themselves, but a P-zombie cannot conceive of a version of itself that lacks corresponding qualia.

Epiphenomenalism

The viewpoint of epiphenomenalism suggests that the physical brain can cause mental events in the mind, but that the mind cannot act on the brain at all; mental occurrences are simply a side effect of the brain's processes. On this view, although the body may seem to react to one's feeling joy, fear, or sadness, the emotion does not cause the physical response; rather, the emotion and the bodily reaction alike are caused by chemical processes and their interaction with the body.

Psychophysical parallelism

The viewpoint of psychophysical parallelism suggests that the mind and body are entirely independent of one another. On this view, mental and physical stimuli and reactions are experienced simultaneously by both the mind and the body, yet there is no interaction or communication between the two.

Double aspectism

Double aspectism is an extension of psychophysical parallelism which also suggests that the mind and body cannot interact, nor can they be separated. Baruch Spinoza and Gustav Fechner were notable proponents of double aspectism; Fechner later expanded upon it to found the field of psychophysics in an attempt to prove the relationship between mind and body.

Pre-established harmony

The viewpoint of pre-established harmony is another offshoot of psychophysical parallelism which suggests that mental events and bodily events are separate and distinct, but that they are both coordinated by an external agent: an example of such an agent could be God. A notable adherent to the idea of pre-established harmony is Gottfried Wilhelm von Leibniz in his theory of Monadology. His explanation of pre-established harmony relied heavily upon God as the external agent who coordinated the mental and bodily events of all things in the beginning.

Gottfried Wilhelm Leibniz's theory of pre-established harmony (French: harmonie préétablie) is a philosophical theory about causation under which every "substance" affects only itself, but all the substances (both bodies and minds) in the world nevertheless seem to causally interact with each other because they have been programmed by God in advance to "harmonize" with each other. Leibniz's term for these substances was "monads", which he described in a popular work (Monadology §7) as "windowless".

The concept of pre-established harmony can be understood by considering an event with both seemingly mental and physical aspects. For example, consider saying 'ouch' after stubbing one's toe. There are two general ways to describe this event: in terms of mental events (where the conscious sensation of pain caused one to say 'ouch') and in terms of physical events (where neural firings in one's toe, carried to the brain, are what caused one to say 'ouch'). The main task of the mind–body problem is figuring out how these mental events (the feeling of pain) and physical events (the nerve firings) relate. Leibniz's pre-established harmony attempts to answer this puzzle, by saying that mental and physical events are not genuinely related in any causal sense, but only seem to interact due to psycho-physical fine-tuning.

Leibniz's theory is best known as a solution to the mind–body problem of how mind can interact with the body. Leibniz rejected the idea of physical bodies affecting each other, and explained all physical causation in this way.

Under pre-established harmony, the preprogramming of each mind must be extremely complex, since only it causes its own thoughts or actions, for as long as it exists. To appear to interact, each substance's "program" must contain a description of either the entire universe, or of how the object behaves at all times during all interactions that appear to occur.

An example:

An apple falls on Alice's head, apparently causing the experience of pain in her mind. In fact, the apple does not cause the pain—the pain is caused by some previous state of Alice's mind. If Alice then seems to shake her hand in anger, it is not actually her mind that causes this, but some previous state of her hand.

Note that if a mind behaves as a windowless monad, there is no need for any other object to exist to create that mind's sense perceptions, leading to a solipsistic universe that consists only of that mind. Leibniz seems to admit this in his Discourse on Metaphysics, section 14. However, he claims that his principle of harmony, according to which God creates the best and most harmonious world possible, dictates that the perceptions (internal states) of each monad "expresses" the world in its entirety, and the world expressed by the monad actually exists. Although Leibniz says that each monad is "windowless", he also claims that it functions as a "mirror" of the entire created universe.

On occasion, Leibniz styled himself as "the author of the system of pre-established harmony".

Immanuel Kant's professor Martin Knutzen regarded pre-established harmony as "the pillow for the lazy mind".

In his sixth Metaphysical Meditation, Descartes talked about a "coordinated disposition of created things set up by God", shortly after having identified "nature in its general aspect" with God himself. His conception of the relationship between God and his normative nature actualized in the existing world recalls both the pre-established harmony of Leibniz and the Deus sive Natura of Baruch Spinoza.

Occasionalism

The viewpoint of occasionalism is another offshoot of psychophysical parallelism; the major difference is that the mind and body have some indirect interaction. Occasionalism suggests that the mind and body are separate and distinct, but that they interact through divine intervention. Nicolas Malebranche was one of the main contributors to this idea, using it as a way to address his disagreements with Descartes' view of the mind–body problem. In Malebranche's occasionalism, a thought is a wish for the body to move, which is then fulfilled by God causing the body to act.

Historical background

The problem was popularized by René Descartes in the 17th century, resulting in Cartesian dualism, and it was also addressed by pre-Aristotelian philosophers, in Avicennian philosophy, and in earlier Asian traditions.

The Buddha

The Buddha (480–400 B.C.E.), founder of Buddhism, described the mind and the body as depending on each other in the way that two sheaves of reeds stand leaning against one another, and taught that the world consists of mind and matter, which work together interdependently. Buddhist teachings describe the mind as manifesting from moment to moment, one thought-moment at a time, like a fast-flowing stream. The components that make up the mind are known as the five aggregates (i.e., material form, feelings, perception, volition, and sensory consciousness), which arise and pass away continuously. The arising and passing of these aggregates in the present moment is described as being influenced by five causal laws: biological laws, psychological laws, physical laws, volitional laws, and universal laws. The Buddhist practice of mindfulness involves attending to this constantly changing mind-stream.

Ultimately, the Buddha's philosophy is that both mind and forms are conditionally arising qualities of an ever-changing universe in which, when nirvāna is attained, all phenomenal experience ceases to exist. According to the anattā doctrine of the Buddha, the conceptual self is a mere mental construct of an individual entity and is basically an impermanent illusion, sustained by form, sensation, perception, thought and consciousness. The Buddha argued that mentally clinging to any views will result in delusion and stress, since, according to the Buddha, a real self (conceptual self, being the basis of standpoints and views) cannot be found when the mind has clarity.

Plato

Plato (429–347 B.C.E.) believed that the material world is a shadow of a higher reality that consists of concepts he called Forms. According to Plato, objects in our everyday world "participate in" these Forms, which confer identity and meaning to material objects. For example, a circle drawn in the sand would be a circle only because it participates in the concept of an ideal circle that exists somewhere in the world of Forms. He argued that, as the body is from the material world, the soul is from the world of Forms and is thus immortal. He believed the soul was temporarily united with the body and would be separated only at death, when, if pure, it would return to the world of Forms; otherwise, reincarnation follows. Since the soul does not exist in time and space, as the body does, it can access universal truths. For Plato, ideas (or Forms) are the true reality, and are experienced by the soul. The body is, for Plato, empty in that it cannot access the abstract reality of the world; it can only experience shadows. This position follows from Plato's essentially rationalistic epistemology.

Aristotle

For Aristotle (384–322 BC) mind is a faculty of the soul. Regarding the soul, he said:

It is not necessary to ask whether soul and body are one, just as it is not necessary to ask whether the wax and its shape are one, nor generally whether the matter of each thing and that of which it is the matter are one. For even if one and being are spoken of in several ways, what is properly so spoken of is the actuality.

— De Anima ii 1, 412b6–9

In the end, Aristotle saw the relation between soul and body as uncomplicated, in the same way that it is uncomplicated that a cubical shape is a property of a toy building block. The soul is a property exhibited by the body, one among many. Moreover, Aristotle proposed that when the body perishes, so does the soul, just as the shape of a building block disappears with destruction of the block.

Medieval Aristotelianism

Working in the Aristotelian-influenced tradition of Thomism, Thomas Aquinas (1225–1274), like Aristotle, believed that the mind and the body are one, like a seal and wax; therefore, it is pointless to ask whether or not they are one. However, (referring to "mind" as "the soul") he asserted that the soul persists after the death of the body in spite of their unity, calling the soul "this particular thing". Since his view was primarily theological rather than philosophical, it is impossible to fit it neatly within either the category of physicalism or dualism.

Influences of Eastern monotheistic religions

In the religious philosophy of Eastern monotheism, dualism denotes a binary opposition of an idea that contains two essential parts. The first formal concept of a "mind–body" split may be found in the divinity–secularity dualism of the ancient Persian religion of Zoroastrianism around the mid-fifth century BC. Gnosticism is a modern name for a variety of ancient dualistic ideas, inspired by Judaism, that were popular in the first and second centuries AD. These ideas later seem to have been incorporated into Galen's "tripartite soul", which led into both the Christian sentiments expressed in the later Augustinian theodicy and Avicenna's Platonism in Islamic philosophy.

Descartes

René Descartes (1596–1650) believed that mind exerted control over the brain via the pineal gland:

My view is that this gland is the principal seat of the soul, and the place in which all our thoughts are formed.

— René Descartes, Treatise of Man

[The] mechanism of our body is so constructed that simply by this gland's being moved in any way by the soul or by any other cause, it drives the surrounding spirits towards the pores of the brain, which direct them through the nerves to the muscles; and in this way the gland makes the spirits move the limbs.

— René Descartes, Passions of the Soul

His posited relation between mind and body is called Cartesian dualism or substance dualism. He held that mind was distinct from matter, but could influence matter. How such an interaction could be exerted remains a contentious issue.

Kant

For Immanuel Kant (1724–1804) beyond mind and matter there exists a world of a priori forms, which are seen as necessary preconditions for understanding. Some of these forms, space and time being examples, today seem to be pre-programmed in the brain.

...whatever it is that impinges on us from the mind-independent world does not come located in a spatial or a temporal matrix ... The mind has two pure forms of intuition built into it to allow it to ... organize this 'manifold of raw intuition'.

— Andrew Brook, Kant's view of the mind and consciousness of self: Transcendental aesthetic

Kant views the mind–body interaction as taking place through forces that may be of different kinds for mind and body.

Huxley

For Thomas Henry Huxley (1825–1895) the conscious mind was a by-product of the brain that has no influence upon the brain, a so-called epiphenomenon.

On the epiphenomenalist view, mental events play no causal role. Huxley, who held the view, compared mental events to a steam whistle that contributes nothing to the work of a locomotive.

— William Robinson, Epiphenomenalism

Whitehead

Alfred North Whitehead advocated a sophisticated form of panpsychism that has been called by David Ray Griffin panexperientialism.

Popper

For Karl Popper (1902–1994) there are three aspects to the mind–body problem: the worlds of matter, of mind, and of the creations of the mind, such as mathematics. In his view, the third-world creations of the mind can be interpreted by the second-world mind and used to affect the first world of matter. Radio illustrates this: the second-world mind interprets a third-world creation (Maxwell's electromagnetic theory) and uses it to modify the external first world.

The body–mind problem is the question of whether and how our thought processes in World 2 are bound up with brain events in World 1. ...I would argue that the first and oldest of these attempted solutions is the only one that deserves to be taken seriously [namely]: World 2 and World 1 interact, so that when someone reads a book or listens to a lecture, brain events occur that act upon the World 2 of the reader's or listener's thoughts; and conversely, when a mathematician follows a proof, his World 2 acts upon his brain and thus upon World 1. This, then, is the thesis of body–mind interaction.

— Karl Popper, Notes of a realist on the body–mind problem

Ryle

With his 1949 book, The Concept of Mind, Gilbert Ryle "was seen to have put the final nail in the coffin of Cartesian dualism".

In the chapter "Descartes' Myth," Ryle introduces "the dogma of the Ghost in the machine" to describe the philosophical concept of the mind as an entity separate from the body:

I hope to prove that it is entirely false, and false not in detail but in principle. It is not merely an assemblage of particular mistakes. It is one big mistake and a mistake of a special kind. It is, namely, a category mistake.

Searle

For John Searle (1932–2025) the mind–body problem is a false dichotomy; that is, mind is a perfectly ordinary aspect of the brain. Searle proposed biological naturalism in 1980.

According to Searle then, there is no more a mind–body problem than there is a macro–micro economics problem. They are different levels of description of the same set of phenomena. [...] But Searle is careful to maintain that the mental – the domain of qualitative experience and understanding – is autonomous and has no counterpart on the microlevel; any redescription of these macroscopic features amounts to a kind of evisceration, ...

— Joshua Rust, John Searle

Human extinction

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Human_ext...