
Saturday, June 19, 2021

Mind–body dualism

From Wikipedia, the free encyclopedia
 
René Descartes's illustration of dualism. Inputs are passed on by the sensory organs to the epiphysis (the pineal gland) in the brain and from there to the immaterial spirit.

In the philosophy of mind, mind–body dualism denotes either the view that mental phenomena are non-physical, or that the mind and body are distinct and separable. Thus, it encompasses a set of views about the relationship between mind and matter, as well as between subject and object, and is contrasted with other positions, such as physicalism and enactivism, in the mind–body problem.

Aristotle shared Plato's view of multiple souls and further elaborated a hierarchical arrangement, corresponding to the distinctive functions of plants, animals, and people: a nutritive soul of growth and metabolism that all three share; a perceptive soul of pain, pleasure, and desire that only animals and people share; and the faculty of reason that is unique to people. In this view, a soul is the hylomorphic form of a viable organism, wherein each level of the hierarchy formally supervenes upon the substance of the preceding level. For Aristotle, the first two souls, based on the body, perish when the living organism dies, whereas the intellective part of the mind remains immortal and perpetual. For Plato, however, the soul was not dependent on the physical body; he believed in metempsychosis, the migration of the soul to a new physical body. Some philosophers have considered mind–body dualism itself a form of reductionism, since it encourages the tendency to ignore large groups of variables because of their assumed association with the mind or the body, rather than for their real value in explaining or predicting the phenomenon under study.

Dualism is closely associated with the thought of René Descartes (1641), which holds that the mind is a nonphysical—and therefore, non-spatial—substance. Descartes clearly identified the mind with consciousness and self-awareness and distinguished this from the brain as the seat of intelligence. Hence, he was the first to formulate the mind–body problem in the form in which it exists today. Dualism is contrasted with various kinds of monism. Substance dualism is contrasted with all forms of materialism, but property dualism may be considered a form of emergent materialism or non-reductive physicalism in some sense.

Types

Ontological dualism makes dual commitments about the nature of existence as it relates to mind and matter, and can be divided into three different types:

  1. Substance dualism asserts that mind and matter are fundamentally distinct kinds of substances.
  2. Property dualism suggests that the ontological distinction lies in the differences between properties of mind and matter (as in emergentism).
  3. Predicate dualism claims the irreducibility of mental predicates to physical predicates.

Substance or Cartesian dualism

Substance dualism, or Cartesian dualism, most famously defended by René Descartes, argues that there are two kinds of substance: mental and physical. This philosophy states that the mental can exist outside of the body, and the body cannot think. Substance dualism is important historically for having given rise to much thought regarding the famous mind–body problem.

The Copernican Revolution and the scientific discoveries of the 17th century reinforced the belief that the scientific method was the only path to knowledge. Bodies were seen as biological organisms to be studied in their constituent parts (materialism) by means of anatomy, physiology, biochemistry, and physics (reductionism). Mind–body dualism remained the biomedical paradigm and model for the following three centuries.

Substance dualism is a philosophical position compatible with most theologies which claim that immortal souls occupy an independent realm of existence distinct from that of the physical world. In contemporary discussions of substance dualism, philosophers propose dualist positions that are significantly less radical than Descartes's: for instance, a position defended by William Hasker called Emergent Dualism seems, to some philosophers, more intuitively attractive than the substance dualism of Descartes in virtue of its being in line with (inter alia) evolutionary biology.

Property dualism

Property dualism asserts that an ontological distinction lies in the differences between properties of mind and matter, and that consciousness is ontologically irreducible to neurobiology and physics. It asserts that when matter is organized in the appropriate way (i.e., in the way that living human bodies are organized), mental properties emerge. Hence, it is a sub-branch of emergent materialism. What views properly fall under the property dualism rubric is itself a matter of dispute. There are different versions of property dualism, some of which claim independent categorisation.

Non-reductive physicalism is a form of property dualism in which it is asserted that all mental states are causally reducible to physical states. One argument for this has been made in the form of anomalous monism, expressed by Donald Davidson, where it is argued that mental events are identical to physical events, but that relations among mental events cannot be described by strict law-governed causal relationships. Another argument for this has been expressed by John Searle, an advocate of a distinctive form of physicalism he calls biological naturalism. His view is that although mental states are ontologically irreducible to physical states, they are causally reducible. He has acknowledged that "to many people" his views and those of property dualists look a lot alike, but he thinks the comparison is misleading.

Epiphenomenalism

Epiphenomenalism is a form of property dualism in which it is asserted that one or more mental states, while both ontologically and causally irreducible, have no influence on physical states. It asserts that while material causes give rise to sensations, volitions, ideas, etc., such mental phenomena themselves cause nothing further: they are causal dead ends. This can be contrasted with interactionism, in which mental causes can produce material effects, and vice versa.

Predicate dualism

Predicate dualism is a view espoused by such non-reductive physicalists as Donald Davidson and Jerry Fodor, who maintain that while there is only one ontological category of substances and properties of substances (usually physical), the predicates that we use to describe mental events cannot be redescribed in terms of (or reduced to) physical predicates of natural languages.

Predicate dualism is most easily defined as the negation of predicate monism. Predicate monism can be characterized as the view subscribed to by eliminative materialists, who maintain that such intentional predicates as believe, desire, think, feel, etc., will eventually be eliminated from both the language of science and ordinary language because the entities to which they refer do not exist. Predicate dualists believe that so-called "folk psychology," with all of its propositional attitude ascriptions, is an ineliminable part of the enterprise of describing, explaining, and understanding human mental states and behavior.

For example, Davidson subscribes to anomalous monism, according to which there can be no strict psychophysical laws which connect mental and physical events under their descriptions as mental and physical events. However, all mental events also have physical descriptions. It is in terms of the latter that such events can be connected in law-like relations with other physical events. Mental predicates are irreducibly different in character (rational, holistic, and necessary) from physical predicates (contingent, atomic, and causal).

Dualist views of mental causation

Four varieties of dualist causal interaction. The arrows indicate the direction of causation. Mental and physical states are shown in red and blue, respectively.

This section concerns causation between properties and states of the thing under study, not between its substances or predicates. Here a state is the set of all properties of what is being studied; each state therefore describes only one point in time.

Interactionism

Interactionism is the view that mental states, such as beliefs and desires, causally interact with physical states. This is a position which is very appealing to common-sense intuitions, notwithstanding the fact that it is very difficult to establish its validity or correctness by way of logical argumentation or empirical proof. It seems to appeal to common sense because we are surrounded by such everyday occurrences as a child's touching a hot stove (physical event), which causes him to feel pain (mental event) and then yell and scream (physical event), which causes his parents to experience a sensation of fear and protectiveness (mental event), and so on.

Non-reductive physicalism

Non-reductive physicalism is the idea that while mental states are physical, they are not reducible to physical properties, in that an ontological distinction lies in the differences between the properties of mind and matter. According to non-reductive physicalism, all mental states are causally reducible to physical states, where mental properties map to physical properties and vice versa. A prominent form of non-reductive physicalism, called anomalous monism, was first proposed by Donald Davidson in his 1970 paper "Mental Events", in which he claims that mental events are identical with physical events, and that the mental is anomalous, i.e. under their mental descriptions these mental events are not regulated by strict physical laws.

Epiphenomenalism

Epiphenomenalism states that all mental events are caused by a physical event and have no physical consequences, and that one or more mental states do not have any influence on physical states. So, the mental event of deciding to pick up a rock ("M1") is caused by the firing of specific neurons in the brain ("P1"). When the arm and hand move to pick up the rock ("P2") this is not caused by the preceding mental event M1, nor by M1 and P1 together, but only by P1. The physical causes are in principle reducible to fundamental physics, and therefore mental causes are eliminated using this reductionist explanation. If P1 causes both M1 and P2, there is no overdetermination in the explanation for P2.
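
Schematically, the epiphenomenalist causal structure of the rock example can be written as follows (notation added here for illustration, not part of the original article):

\[
P_1 \rightarrow M_1, \qquad P_1 \rightarrow P_2, \qquad M_1 \nrightarrow P_2
\]

where the plain arrows denote causation and the slashed arrow its absence: the neural firing P_1 causes both the decision M_1 and the bodily movement P_2, while M_1 itself causes nothing further.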

The idea that even if the animal were conscious nothing would be added to the production of behavior, even in animals of the human type, was first voiced by La Mettrie (1745), and then by Cabanis (1802), and was further explicated by Hodgson (1870) and Huxley (1874). Jackson gave a subjective argument for epiphenomenalism, but later rejected it and embraced physicalism.

Parallelism

Psychophysical parallelism is a very unusual view about the interaction between mental and physical events which was most prominently, and perhaps only truly, advocated by Gottfried Wilhelm von Leibniz. Like Malebranche and others before him, Leibniz recognized the weaknesses of Descartes' account of causal interaction taking place in a physical location in the brain. Malebranche decided that such a material basis of interaction between material and immaterial was impossible and therefore formulated his doctrine of occasionalism, stating that the interactions were really caused by the intervention of God on each individual occasion. Leibniz's idea is that God has created a pre-established harmony such that it only seems as if physical and mental events cause, and are caused by, one another. In reality, mental causes only have mental effects and physical causes only have physical effects. Hence, the term parallelism is used to describe this view.

Occasionalism

Occasionalism is a philosophical doctrine about causation which says that created substances cannot be efficient causes of events. Instead, all events are taken to be caused directly by God himself. The theory states that the illusion of efficient causation between mundane events arises out of a constant conjunction that God had instituted, such that every instance where the cause is present will constitute an "occasion" for the effect to occur as an expression of the aforementioned power. This "occasioning" relation, however, falls short of efficient causation. In this view, it is not the case that the first event causes God to cause the second event: rather, God first caused one and then caused the other, but chose to regulate such behaviour in accordance with general laws of nature. Some of its most prominent historical exponents have been Al-Ghazali, Louis de la Forge, Arnold Geulincx, and Nicolas Malebranche.

Kantianism

According to the philosophy of Immanuel Kant, there is a distinction between actions done by desire and those performed by liberty (the categorical imperative). Thus, not all physical actions are caused by matter alone, nor are all caused by freedom alone. Some actions are purely animal in nature, while others are the result of mental action on matter.

History

Plato and Aristotle

In the dialogue Phaedo, Plato formulated his famous Theory of Forms, which holds that the Forms are distinct and immaterial substances of which the objects and other phenomena that we perceive in the world are nothing more than mere shadows.

In the Phaedo, Plato makes it clear that the Forms are the universalia ante res, i.e. they are ideal universals, by which we are able to understand the world. In his allegory of the cave, Plato likens the achievement of philosophical understanding to emerging into the sun from a dark cave, where only vague shadows of what lies beyond that prison are cast dimly upon the wall. Plato's forms are non-physical and non-mental. They exist nowhere in time or space, but neither do they exist in the mind, nor in the pleroma of matter; rather, matter is said to "participate" in form (μεθεξις, methexis). It remained unclear however, even to Aristotle, exactly what Plato intended by that.

Aristotle argued at length against many aspects of Plato's forms, creating his own doctrine of hylomorphism wherein form and matter coexist. Ultimately however, Aristotle's aim was to perfect a theory of forms, rather than to reject it. Although Aristotle strongly rejected the independent existence Plato attributed to forms, his metaphysics often agrees with Plato's a priori considerations. For example, Aristotle argues that changeless, eternal substantial form is necessarily immaterial. Because matter provides a stable substratum for a change in form, matter always has the potential to change. Thus, if given an eternity in which to do so, it will, necessarily, exercise that potential.

Part of Aristotle's psychology, the study of the soul, is his account of the ability of humans to reason and the ability of animals to perceive. In both cases, perfect copies of forms are acquired, either by direct impression of environmental forms, in the case of perception, or else by virtue of contemplation, understanding and recollection. He believed the mind can literally assume any form being contemplated or experienced, and it was unique in its ability to become a blank slate, having no essential form. As thoughts of earth are not heavy, any more than thoughts of fire are causally efficient, they provide an immaterial complement for the formless mind.

From Neoplatonism to scholasticism

The philosophical school of Neoplatonism, most active in Late Antiquity, claimed that the physical and the spiritual are both emanations of the One. Neoplatonism exerted a considerable influence on Christianity, as did the philosophy of Aristotle via scholasticism.

In the scholastic tradition of Saint Thomas Aquinas, a number of whose doctrines have been incorporated into Roman Catholic dogma, the soul is the substantial form of a human being. Aquinas held the Quaestiones disputatae de anima, or 'Disputed questions on the soul', at the Roman studium provinciale of the Dominican Order at Santa Sabina, the forerunner of the Pontifical University of Saint Thomas Aquinas, Angelicum, during the academic year 1265–66. By 1268 Aquinas had written at least the first book of the Sententia Libri De anima, his commentary on Aristotle's De anima, the translation of which from the Greek was completed by Aquinas' Dominican associate at Viterbo, William of Moerbeke, in 1267. Like Aristotle, Aquinas held that the human being was a unified composite substance of two substantial principles: form and matter. The soul is the substantial form and so the first actuality of a material organic body with the potentiality for life.

While Aquinas defended the unity of human nature as a composite substance constituted by these two inextricable principles of form and matter, he also argued for the incorruptibility of the intellectual soul, in contrast to the corruptibility of the vegetative and sensitive animation of plants and animals. His argument for the subsistence and incorruptibility of the intellectual soul takes its point of departure from the metaphysical principle that operation follows upon being (agere sequitur esse), i.e., the activity of a thing reveals the mode of being and existence it depends upon. Since the intellectual soul exercises its own per se intellectual operations without employing material faculties, i.e. intellectual operations are immaterial, the intellect itself and the intellectual soul must likewise be immaterial and so incorruptible. Even though the intellectual soul of man is able to subsist upon the death of the human being, Aquinas does not hold that the human person is able to remain integrated at death. The separated intellectual soul is neither a man nor a human person. The intellectual soul by itself is not a human person (i.e., an individual supposit of a rational nature). Hence, Aquinas held that "soul of St. Peter pray for us" would be more appropriate than "St. Peter pray for us", because all things connected with his person, including memories, ended with his corporeal life.

The Catholic doctrine of the resurrection of the body does not subscribe to this view; it sees body and soul as forming a whole, and states that at the second coming the souls of the departed will be reunited with their bodies as whole persons (substances) and witness the apocalypse. The thorough consistency between dogma and contemporary science was maintained here in part from a serious attendance to the principle that there can be only one truth. Consistency with science, logic, philosophy, and faith remained a high priority for centuries, and a university doctorate in theology generally included the entire science curriculum as a prerequisite. This doctrine is not universally accepted by Christians today. Many believe that one's immortal soul goes directly to Heaven upon death of the body.

Descartes and his disciples

In his Meditations on First Philosophy, René Descartes embarked upon a quest in which he called all his previous beliefs into doubt, in order to find out what he could be certain of. In so doing, he discovered that he could doubt whether he had a body (it could be that he was dreaming of it or that it was an illusion created by an evil demon), but he could not doubt whether he had a mind. This gave Descartes his first inkling that the mind and body were different things. The mind, according to Descartes, was a "thinking thing" (Latin: res cogitans) and an immaterial substance. This "thing" was the essence of himself, that which doubts, believes, hopes, and thinks. The body, an "extended thing" (Latin: res extensa), regulates normal bodily functions (such as those of the heart and liver). According to Descartes, animals only had a body and not a soul (which distinguishes humans from animals). The distinction between mind and body is argued in Meditation VI as follows: I have a clear and distinct idea of myself as a thinking, non-extended thing, and a clear and distinct idea of body as an extended and non-thinking thing. Whatever I can conceive clearly and distinctly, God can so create.

The central claim of what is often called Cartesian dualism, in honor of Descartes, is that the immaterial mind and the material body, while being ontologically distinct substances, causally interact: mental events cause physical events, and vice versa. This is an idea that continues to feature prominently in many non-European philosophies. But it leads to a substantial problem for Cartesian dualism: How can an immaterial mind cause anything in a material body, and vice versa? This has often been called the "problem of interactionism."

Descartes himself struggled to come up with a feasible answer to this problem. In his letter to Elisabeth of Bohemia, Princess Palatine, he suggested that spirits interacted with the body through the pineal gland, a small gland in the centre of the brain, between the two hemispheres. The term Cartesian dualism is also often associated with this more specific notion of causal interaction through the pineal gland. However, this explanation was not satisfactory: how can an immaterial mind interact with the physical pineal gland? Because Descartes's theory was so difficult to defend, some of his disciples, such as Arnold Geulincx and Nicolas Malebranche, proposed a different explanation: that all mind–body interactions required the direct intervention of God. According to these philosophers, the appropriate states of mind and body were only the occasions for such intervention, not real causes. These occasionalists maintained the strong thesis that all causation was directly dependent on God, instead of holding that all causation was natural except for that between mind and body.

Recent formulations

In addition to the already discussed theories of dualism (particularly the Christian and Cartesian models), there are new theories in the defense of dualism. Naturalistic dualism comes from the Australian philosopher David Chalmers (born 1966), who argues there is an explanatory gap between objective and subjective experience that cannot be bridged by reductionism, because consciousness is, at least, logically autonomous of the physical properties upon which it supervenes. According to Chalmers, a naturalistic account of property dualism requires a new fundamental category of properties described by new laws of supervenience; the challenge being analogous to that of understanding electricity based on the mechanistic and Newtonian models of materialism prior to Maxwell's equations.

A similar defense comes from the Australian philosopher Frank Jackson (born 1943), who revived the theory of epiphenomenalism, which argues that mental states play no causal role in bringing about physical states. Jackson argues that there are two kinds of dualism:

  1. substance dualism, which assumes there is a second, non-corporeal form of reality. In this form, body and soul are two different substances.
  2. property dualism, which says that body and soul are different properties of the same body.

He claims that functions of the mind/soul are internal, very private experiences that are not accessible to observation by others, and therefore not accessible by science (at least not yet). We can know everything, for example, about a bat's facility for echolocation, but we will never know how the bat experiences that phenomenon.

Arguments for dualism

Another one of Descartes' illustrations. The fire displaces the skin, which pulls a tiny thread, which opens a pore in the ventricle (F) allowing the "animal spirit" to flow through a hollow tube, which inflates the muscle of the leg, causing the foot to withdraw.

The subjective argument

An important fact is that minds perceive intra-mental states differently from sensory phenomena, and this cognitive difference results in mental and physical phenomena having seemingly disparate properties. The subjective argument holds that these properties are irreconcilable under a physical mind.

Mental events have a certain subjective quality to them, whereas physical ones seem not to. So, for example, one may ask what a burned finger feels like, or what the blueness of the sky looks like, or what nice music sounds like. Philosophers of mind call the subjective aspects of mental events qualia. There is something that it's like to feel pain, to see a familiar shade of blue, and so on. There are qualia involved in these mental events. And the claim is that qualia cannot be reduced to anything physical.

Thomas Nagel first characterized the problem of qualia for physicalistic monism in his article, "What Is It Like to Be a Bat?". Nagel argued that even if we knew everything there was to know from a third-person, scientific perspective about a bat's sonar system, we still wouldn't know what it is like to be a bat. However, others argue that qualia are consequent of the same neurological processes that engender the bat's mind, and will be fully understood as the science develops.

Frank Jackson formulated his well-known knowledge argument based upon similar considerations. In this thought experiment, known as Mary's room, he asks us to consider a neuroscientist, Mary, who was born, and has lived all of her life, in a black and white room with a black and white television and computer monitor where she collects all the scientific data she possibly can on the nature of colours. Jackson asserts that as soon as Mary leaves the room, she will come to have new knowledge which she did not possess before: the knowledge of the experience of colours (i.e., what they are like). Although Mary knows everything there is to know about colours from an objective, third-person perspective, she has never known, according to Jackson, what it was like to see red, orange, or green. If Mary really learns something new, it must be knowledge of something non-physical, since she already knew everything about the physical aspects of colour.
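
The structure of the knowledge argument can be summarized in premise–conclusion form (a standard reconstruction, not Jackson's exact wording):

\[
\begin{aligned}
&1.\ \text{Before her release, Mary knows every physical fact about colour vision.}\\
&2.\ \text{Upon release, Mary learns a new fact: what seeing colour is like.}\\
&3.\ \therefore\ \text{Some facts about colour vision are not physical facts.}
\end{aligned}
\]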

However, Jackson later rejected his argument and embraced physicalism. He notes that Mary obtains knowledge not of color, but of a new intramental state, seeing color. Also, he notes that Mary might say "wow," and as a mental state affecting the physical, this clashes with his former view of epiphenomenalism. David Lewis's response to this argument, now known as the ability argument, is that what Mary really came to acquire was simply the ability to recognize and identify color sensations to which she had previously not been exposed. Daniel Dennett and others have also provided arguments against this notion.

The zombie argument

The zombie argument is based on a thought experiment proposed by David Chalmers. The basic idea is that one can imagine, and, therefore, conceive the existence of, an apparently functioning human being/body without any conscious states being associated with it.

Chalmers' argument is that it seems plausible that such a being could exist because all that is needed is that all and only the things that the physical sciences describe and observe about a human being must be true of the zombie. None of the concepts involved in these sciences make reference to consciousness or other mental phenomena, and any physical entity can be described scientifically via physics whether it is conscious or not. The mere logical possibility of a p-zombie demonstrates that consciousness is a natural phenomenon beyond the reach of current, unsatisfactory explanations. Chalmers states that one probably could not build a living p-zombie because living things seem to require a level of consciousness. However, (unconscious?) robots built to simulate humans may become the first real p-zombies. Hence Chalmers half-jokingly calls for the need to build a "consciousness meter" to ascertain if any given entity, human or robot, is conscious or not.

Others such as Dennett have argued that the notion of a philosophical zombie is an incoherent, or unlikely, concept. In particular, nothing proves that an entity (e.g., a computer or robot) which would perfectly mimic human beings, and especially perfectly mimic expressions of feelings (like joy, fear, anger, ...), would not indeed experience them, thus having similar states of consciousness to what a real human would have. It is argued that under physicalism, one must either believe that anyone including oneself might be a zombie, or that no one can be a zombie—following from the assertion that one's own conviction about being (or not being) a zombie is a product of the physical world and is therefore no different from anyone else's.

Special sciences argument

Howard Robinson argues that, if predicate dualism is correct, then there are "special sciences" that are irreducible to physics. These allegedly irreducible subjects, which contain irreducible predicates, differ from hard sciences in that they are interest-relative. Here, interest-relative fields depend on the existence of minds that can have interested perspectives. Psychology is one such science; it completely depends on and presupposes the existence of the mind.

Physics is the general analysis of nature, conducted in order to understand how the universe behaves. On the other hand, the study of meteorological weather patterns or human behavior is only of interest to humans themselves. The point is that having a perspective on the world is a psychological state. Therefore, the special sciences presuppose the existence of minds which can have these states. If one is to avoid ontological dualism, then the mind that has a perspective must be part of the physical reality to which it applies its perspective. If this is the case, then in order to perceive the physical world as psychological, the mind must have a perspective on the physical. This, in turn, presupposes the existence of mind.

However, cognitive science and psychology do not require the mind to be irreducible, and operate on the assumption that it has a physical basis. In fact, it is common in science to presuppose a complex system; while fields such as chemistry, biology, or geology could be verbosely expressed in terms of quantum field theory, it is convenient to use levels of abstraction like molecules, cells, or the mantle. It is often difficult to decompose these levels without heavy analysis and computation. Elliott Sober has also advanced philosophical arguments against the notion of irreducibility.

Argument from personal identity

This argument concerns the differences between the applicability of counterfactual conditionals to physical objects, on the one hand, and to conscious, personal agents on the other. In the case of any material object, e.g. a printer, we can formulate a series of counterfactuals in the following manner:

  1. This printer could have been made of straw.
  2. This printer could have been made of some other kind of plastics and vacuum-tube transistors.
  3. This printer could have been made of 95% of what it is actually made of and 5% vacuum-tube transistors, etc.

Somewhere along the way from the printer's being made up exactly of the parts and materials which actually constitute it to the printer's being made up of some different matter at, say, 20%, the question of whether this printer is the same printer becomes a matter of arbitrary convention.

Imagine the case of a person, Frederick, who has a counterpart born from the same egg and a slightly genetically modified sperm. Imagine a series of counterfactual cases corresponding to the examples applied to the printer. Somewhere along the way, one is no longer sure about the identity of Frederick. In this latter case, it has been claimed, overlap of constitution cannot be applied to the identity of mind. As Madell puts it:

But while my present body can thus have its partial counterpart in some possible world, my present consciousness cannot. Any present state of consciousness that I can imagine either is or is not mine. There is no question of degree here.

If the counterpart of Frederick, Frederickus, is 70% constituted of the same physical substance as Frederick, does this mean that it is also 70% mentally identical with Frederick? Does it make sense to say that something is mentally 70% Frederick? A possible solution to this dilemma is that of open individualism.

Richard Swinburne, in his book The Existence of God, put forward an argument for mind-body dualism based upon personal identity. He states that the brain is composed of two hemispheres and a cord linking the two and that, as modern science has shown, either of these can be removed without the person losing any memories or mental capacities.

He then cites a thought-experiment for the reader, asking what would happen if each of the two hemispheres of one person were placed inside two different people. Either, Swinburne claims, one of the two is me or neither is, and there is no way of telling which, as each will have similar memories and mental capacities to the other. In fact, Swinburne claims, even if one's mental capacities and memories are far more similar to the original person than the other's are, they still may not be him.

From here, he deduces that even if we know what has happened to every single atom inside a person's brain, we still do not know what has happened to 'them' as an identity. From here it follows that a part of our mind, or our soul, is immaterial, and, as a consequence, that mind-body dualism is true.

Argument from reason

Philosophers and scientists such as Victor Reppert, William Hasker, and Alvin Plantinga have developed an argument for dualism dubbed the "argument from reason". They credit C. S. Lewis with first bringing the argument to light in his book Miracles; Lewis called the argument "The Cardinal Difficulty of Naturalism", which was the title of chapter three of Miracles.

The argument postulates that if, as naturalism entails, all of our thoughts are the effect of a physical cause, then we have no reason for assuming that they are also the consequent of a reasonable ground. However, knowledge is apprehended by reasoning from ground to consequent. Therefore, if naturalism were true, there would be no way of knowing it (or anything else), except by a fluke.
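
In outline, the argument can be compressed as follows (a reconstruction for illustration, not Lewis's own wording):

\[
\begin{aligned}
&1.\ \text{If naturalism is true, every belief is fully produced by non-rational physical causes.}\\
&2.\ \text{A belief produced solely by non-rational causes is not reached by reasoning from ground to consequent.}\\
&3.\ \text{Knowledge requires reasoning from ground to consequent.}\\
&4.\ \therefore\ \text{If naturalism is true, no belief, including the belief in naturalism, counts as knowledge.}
\end{aligned}
\]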

Through this logic, the statement "I have reason to believe naturalism is valid" is inconsistent in the same manner as "I never tell the truth." That is, to conclude its truth would eliminate the grounds from which to reach it. To summarize the argument in the book, Lewis quotes J. B. S. Haldane, who appeals to a similar line of reasoning:

If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true...and hence I have no reason for supposing my brain to be composed of atoms.

— J. B. S. Haldane, Possible Worlds, p. 209

In his essay "Is Theology Poetry?", Lewis himself summarises the argument in a similar fashion when he writes:

If minds are wholly dependent on brains, and brains on biochemistry, and biochemistry (in the long run) on the meaningless flux of the atoms, I cannot understand how the thought of those minds should have any more significance than the sound of the wind in the trees.

— C. S. Lewis, The Weight of Glory and Other Addresses, p. 139

But Lewis later agreed with Elizabeth Anscombe's response to his Miracles argument. She showed that an argument could be valid and ground-consequent even if its propositions were generated via physical cause and effect by non-rational factors. Similar to Anscombe, Richard Carrier and John Beversluis have written extensive objections to the argument from reason on the untenability of its first postulate.

Cartesian arguments

Descartes puts forward two main arguments for dualism in Meditations: firstly, the "modal argument," or the "clear and distinct perception argument," and secondly the "indivisibility" or "divisibility" argument.

Summary of the 'modal argument'
It is imaginable that one's mind might exist without one's body.
therefore
It is conceivable that one's mind might exist without one's body.
therefore
It is possible one's mind might exist without one's body.
therefore
One's mind is a different entity from one's body.
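
In modal-logic notation, the chain of inference can be sketched as follows (a standard reconstruction, not Descartes's own symbolism; Imag and Conc abbreviate imaginability and conceivability, and the diamond marks possibility):

\[
\mathrm{Imag}(M \wedge \neg B) \;\Rightarrow\; \mathrm{Conc}(M \wedge \neg B) \;\Rightarrow\; \Diamond(M \wedge \neg B) \;\Rightarrow\; M \neq B
\]

The final step relies on the necessity of identity: if mind and body were one and the same thing, it would not even be possible for the mind to exist without the body.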

The argument is distinguished from the zombie argument as it establishes that the mind could continue to exist without the body, rather than that the unaltered body could exist without the mind. Alvin Plantinga, J. P. Moreland, and Edward Feser have all supported the argument, although Feser and Moreland think that it must be carefully reformulated in order to be effective.

The indivisibility argument for dualism was phrased by Descartes as follows:

[T]here is a great difference between a mind and a body, because the body, by its very nature, is something divisible, whereas the mind is plainly indivisible…insofar as I am only a thing that thinks, I cannot distinguish any parts in me.… Although the whole mind seems to be united to the whole body, nevertheless, were a foot or an arm or any other bodily part amputated, I know that nothing would be taken away from the mind…

The argument relies upon Leibniz's principle of the identity of indiscernibles, which states that two things are the same if and only if they share all their properties. A counterargument is the idea that matter is not infinitely divisible, and thus that the mind could be identified with material things that cannot be divided, or potentially Leibnizian monads.
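
In symbols, the principle reads (a standard formalization added here for illustration):

\[
\forall F\,\bigl(F(x) \leftrightarrow F(y)\bigr) \;\leftrightarrow\; x = y
\]

Descartes's argument uses the right-to-left direction contrapositively: the body has the property of divisibility and the mind lacks it, so mind and body do not share all their properties and therefore cannot be identical.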

Arguments against dualism

Arguments from causal interaction

Cartesian dualism compared to three forms of monism.

One argument against dualism concerns causal interaction. If consciousness (the mind) can exist independently of physical reality (the brain), one must explain how physical memories of conscious experience are created. Dualism must therefore explain how consciousness affects physical reality. One of the main objections to dualistic interactionism is the lack of any explanation of how the material and immaterial are able to interact. Varieties of dualism according to which an immaterial mind causally affects the material body and vice versa have come under strenuous attack from different quarters, especially in the 20th century. Critics of dualism have often asked how something totally immaterial can affect something totally material—this is the basic problem of causal interaction.

First, it is not clear where the interaction would take place. For example, burning one's finger causes pain. Apparently there is some chain of events, leading from the burning of skin, to the stimulation of nerve endings, to something happening in the peripheral nerves of one's body that lead to one's brain, to something happening in a particular part of one's brain, and finally resulting in the sensation of pain. But pain is not supposed to be spatially locatable. It might be responded that the pain "takes place in the brain." But evidently, the pain is in the finger. This may not be a devastating criticism.

However, there is a second problem about the interaction. Namely, the question of how the interaction takes place, where in dualism "the mind" is assumed to be non-physical and by definition outside of the realm of science. The mechanism which explains the connection between the mental and the physical would therefore be a philosophical proposition rather than a scientific theory. For example, compare such a mechanism to a physical mechanism that is well understood. Take a very simple causal relation, such as when a cue ball strikes an eight ball and causes it to go into the pocket. What happens in this case is that the cue ball has a certain amount of momentum as its mass moves across the pool table with a certain velocity, and then that momentum is transferred to the eight ball, which then heads toward the pocket. Compare this to the situation in the brain, where one wants to say that a decision causes some neurons to fire and thus causes a body to move across the room. The intention to "cross the room now" is a mental event and, as such, it does not have physical properties such as force. If it has no force, then it would seem that it could not possibly cause any neuron to fire. Dualism therefore owes an explanation of how something without any physical properties can have physical effects.
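
For the billiard-ball case, the well-understood mechanism is simply conservation of momentum (an elementary illustration added here, not from the original article):

\[
m_{c}\,\vec{v}_{c} = m_{c}\,\vec{v}_{c}^{\,\prime} + m_{8}\,\vec{v}_{8}^{\,\prime}
\]

where m_c and v_c are the cue ball's mass and velocity before impact, and the primed velocities describe the balls after impact. A mental intention, having neither mass nor velocity, contributes no term to any such balance equation, which is precisely the explanatory burden the objection presses on the dualist.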

Replies

Alfred North Whitehead and, later, David Ray Griffin framed a new ontology (process philosophy) seeking precisely to avoid the pitfalls of ontological dualism.

The explanation provided by Arnold Geulincx and Nicolas Malebranche is that of occasionalism, where all mind–body interactions require the direct intervention of God.

At the time C. S. Lewis wrote Miracles, quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, but Lewis nonetheless noted the logical possibility that, if the physical world were proved to be indeterministic, this would provide an entry (interaction) point into what is traditionally viewed as a closed system, where a scientifically described physically probable/improbable event could be philosophically described as an action of a non-physical entity on physical reality. He states, however, that none of the arguments in his book will rely on this. Although some interpretations of quantum mechanics consider wave function collapse to be indeterminate, in others this event is defined and deterministic.

Argument from physics

The argument from physics is closely related to the argument from causal interaction. Many physicists and consciousness researchers have argued that any action of a nonphysical mind on the brain would entail the violation of physical laws, such as the conservation of energy.

By assuming a deterministic physical universe, the objection can be formulated more precisely. When a person decides to walk across a room, it is generally understood that the decision to do so, a mental event, immediately causes a group of neurons in that person's brain to fire, a physical event, which ultimately results in his walking across the room. The problem is that if there is something totally non-physical causing a bunch of neurons to fire, then there is no physical event which causes the firing. This means that some physical energy would have to be generated against the physical laws of the deterministic universe—this is by definition a miracle, and there can be no scientific explanation of (no repeatable experiment concerning) where the physical energy for the firing came from. Such interactions would violate the fundamental laws of physics. In particular, if some external source of energy is responsible for the interactions, then this would violate the law of the conservation of energy. Dualistic interactionism has therefore been criticized for violating a general heuristic principle of science: the causal closure of the physical world.
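
Stated compactly, for a causally closed physical system the conservation law requires (standard physics, added here as an illustration):

\[
\frac{dE_{\mathrm{total}}}{dt} = 0
\]

so a non-physical mind that injected the energy needed to trigger a neural firing would make dE_total/dt nonzero at that moment, which is exactly the violation the objection describes.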

Replies

The Stanford Encyclopedia of Philosophy and the New Catholic Encyclopedia provide two possible replies to the above objections. The first reply is that the mind may influence the distribution of energy, without altering its quantity. The second possibility is to deny that the human body is causally closed, as the conservation of energy applies only to closed systems. However, physicalists object that no evidence exists for the causal non-closure of the human body. Robin Collins responds that energy conservation objections misunderstand the role of energy conservation in physics. Well-understood scenarios in general relativity violate energy conservation, and quantum mechanics provides precedent for causal interactions, or correlation, without energy or momentum exchange. However, this does not mean that the mind spends energy, and despite this it still does not exclude the supernatural.

Another reply is akin to parallelism—Mills holds that behavioral events are causally overdetermined, and can be explained by either physical or mental causes alone. An overdetermined event is fully accounted for by multiple causes at once. However, J. J. C. Smart and Paul Churchland have pointed out that if physical phenomena fully determine behavioral events, then by Occam's razor an unphysical mind is unnecessary.

Robinson suggests that the interaction may involve dark energy, dark matter or some other currently unknown scientific process. However, such processes would necessarily be physical, and in this case dualism is replaced with physicalism, or the interaction point is left for study at a later time when these physical processes are understood.

Another reply is that the interaction taking place in the human body may not be described by "billiard ball" classical mechanics. If a nondeterministic interpretation of quantum mechanics is correct, then microscopic events are indeterminate, and the degree of determinism increases with the scale of the system. Philosophers Karl Popper and John Eccles and physicist Henry Stapp have theorized that such indeterminacy may apply at the macroscopic scale. However, Max Tegmark has argued that classical and quantum calculations show that quantum decoherence effects do not play a role in brain activity. Indeed, macroscopic quantum states have only ever been observed in superconductors near absolute zero.

Yet another reply to the interaction problem is to note that it doesn't seem that there is an interaction problem for all forms of substance dualism. For instance, Thomistic dualism doesn't obviously face any issue with regards to interaction.

Argument from brain damage

This argument has been formulated by Paul Churchland, among others. The point is that, in instances of some sort of brain damage (e.g. caused by automobile accidents, drug abuse, pathological diseases, etc.), it is always the case that the mental substance and/or properties of the person are significantly changed or compromised. If the mind were a completely separate substance from the brain, how could it be possible that every single time the brain is injured, the mind is also injured? Indeed, it is very frequently the case that one can even predict and explain the kind of mental or psychological deterioration or change that human beings will undergo when specific parts of their brains are damaged. So the question for the dualist to try to confront is how can all of this be explained if the mind is a separate and immaterial substance from, or if its properties are ontologically independent of, the brain.

Property dualism and William Hasker's "emergent dualism" seek to avoid this problem. They assert that the mind is a property or substance that emerges from the appropriate arrangement of physical matter, and therefore could be affected by any rearrangement of matter.

Phineas Gage, who suffered destruction of one or both frontal lobes by a projectile iron rod, is often cited as an example illustrating that the brain causes mind. Gage certainly exhibited some mental changes after his accident. This physical event, the destruction of part of his brain, therefore caused some kind of change in his mind, suggesting a correlation between brain states and mental states. Similar examples abound; neuroscientist David Eagleman describes the case of another individual who exhibited escalating pedophilic tendencies at two different times, and in each case was found to have tumors growing in a particular part of his brain.

Case studies aside, modern experiments have demonstrated that the relation between brain and mind is much more than simple correlation. By damaging, or manipulating, specific areas of the brain repeatedly under controlled conditions (e.g. in monkeys) and reliably obtaining the same results in measures of mental state and abilities, neuroscientists have shown that the relation between damage to the brain and mental deterioration is likely causal. This conclusion is further supported by data from the effects of neuro-active chemicals (e.g., those affecting neurotransmitters) on mental functions, but also from research on neurostimulation (direct electrical stimulation of the brain, including transcranial magnetic stimulation).

Argument from biological development

Another common argument against dualism consists in the idea that since human beings (both phylogenetically and ontogenetically) begin their existence as entirely physical or material entities and since nothing outside of the domain of the physical is added later on in the course of development, then we must necessarily end up being fully developed material beings. There is nothing non-material or mentalistic involved in conception, the formation of the blastula, the gastrula, and so on. The postulation of a non-physical mind would seem superfluous.

Argument from neuroscience

In some contexts, the decisions that a person makes can be detected up to 10 seconds in advance by means of scanning their brain activity. Subjective experiences and covert attitudes can be detected, as can mental imagery. This is strong empirical evidence that cognitive processes have a physical basis in the brain.

Argument from simplicity

The argument from simplicity is probably the simplest and also the most common form of argument against dualism of the mental. The dualist is always faced with the question of why anyone should find it necessary to believe in the existence of two, ontologically distinct, entities (mind and brain), when it seems possible, and would make for a simpler thesis to test against scientific evidence, to explain the same events and properties in terms of one. It is a heuristic principle in science and philosophy not to assume the existence of more entities than is necessary for clear explanation and prediction.

This argument was criticized by Peter Glassen in a debate with J. J. C. Smart in the pages of Philosophy in the late 1970s and early 1980s. Glassen argued that, because Occam's razor is not itself a physical entity, it cannot consistently be appealed to by a physicalist or materialist as a justification of mental states or events, such as the belief that dualism is false. The idea is that Occam's razor may not be as "unrestricted" as it is normally described (applying to all qualitative postulates, even abstract ones) but instead concrete (applying only to physical objects). If one applies Occam's razor unrestrictedly, then it recommends monism until pluralism either receives more support or is disproved. If one applies Occam's razor only concretely, then it may not be used on abstract concepts (this route, however, has serious consequences for selecting between hypotheses about the abstract).

Brain in a vat

From Wikipedia, the free encyclopedia

A brain in a vat that believes it is walking

In philosophy, the brain in a vat (BIV) is a scenario used in a variety of thought experiments intended to draw out certain features of human conceptions of knowledge, reality, truth, mind, consciousness, and meaning. It is a modern incarnation, originated by Gilbert Harman, of René Descartes's evil demon thought experiment. Common to many science fiction stories, it outlines a scenario in which a mad scientist, machine, or other entity might remove a person's brain from the body, suspend it in a vat of life-sustaining liquid, and connect its neurons by wires to a supercomputer which would provide it with electrical impulses identical to those the brain normally receives. According to such stories, the computer would then be simulating reality (including appropriate responses to the brain's own output) and the "disembodied" brain would continue to have perfectly normal conscious experiences, just like those of a person with an embodied brain, without these being related to objects or events in the real world.

Uses

The simplest use of brain-in-a-vat scenarios is as an argument for philosophical skepticism and solipsism. A simple version of this runs as follows: Since the brain in a vat gives and receives exactly the same impulses as it would if it were in a skull, and since these are its only way of interacting with its environment, then it is not possible to tell, from the perspective of that brain, whether it is in a skull or a vat. Yet in the first case, most of the person's beliefs may be true (if they believe, say, that they are walking down the street, or eating ice-cream); in the latter case, their beliefs are false. Since one cannot know whether one is a brain in a vat, it follows that one cannot know whether most of one's beliefs might be completely false. Since, in principle, it is impossible to rule out oneself being a brain in a vat, there cannot be good grounds for believing any of the things one believes; a skeptical argument would contend that one certainly cannot know them, raising issues with the definition of knowledge. Other philosophers have drawn upon sensation and its relationship to meaning in order to question whether brains in vats are really deceived at all, thus raising wider questions concerning perception, metaphysics, and the philosophy of language.
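
The skeptical reasoning is often compressed into a closure-style schema (a standard reconstruction, where K abbreviates "I know that" and p is any ordinary belief, such as "I am walking down the street"):

\[
\begin{aligned}
&1.\ K(p) \rightarrow K(\neg \mathrm{BIV})\\
&2.\ \neg K(\neg \mathrm{BIV})\\
&3.\ \therefore\ \neg K(p)
\end{aligned}
\]

Premise 1 says that knowing any ordinary fact would require knowing that one is not envatted; premise 2 says that, because the impulses are identical in either case, one cannot know this; the skeptical conclusion follows by modus tollens.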

The brain-in-a-vat is a contemporary version of the argument given in Hindu Maya illusion, Plato's Allegory of the Cave, Zhuangzi's "Zhuangzi dreamed he was a butterfly", and the evil demon in René Descartes' Meditations on First Philosophy.

Recently, many contemporary philosophers have come to believe that virtual reality, as a form of brain in a vat, will seriously affect human autonomy. But another view is that VR will not destroy our cognitive structure or take away our connection with reality. On the contrary, VR will allow us to entertain new propositions, new insights, and new perspectives on the world.

Philosophical debates

While the disembodied brain (the brain in a vat) can be seen as a helpful thought experiment, there are several philosophical debates surrounding the plausibility of the thought experiment. If these debates conclude that the thought experiment is implausible, a possible consequence would be that we are no closer to knowledge, truth, consciousness, representation, etc. than we were prior to the experiment.

Argument from biology

One argument against the BIV thought experiment derives from the idea that the BIV is not – and cannot be – biologically similar to an embodied brain (that is, a brain found in a person). Since the BIV is disembodied, it follows that it does not have a biology similar to that of an embodied brain. That is, the BIV lacks the connections from the body to the brain, which renders the BIV neither neuroanatomically nor neurophysiologically similar to an embodied brain. If this is the case, we cannot say that it is even possible for the BIV to have experiences similar to the embodied brain's, since the brains are not alike. However, it could be counter-argued that the hypothetical machine could be made to also replicate those types of inputs.

Argument from externalism

A second argument deals directly with the stimuli coming into the brain. This is often referred to as the account from externalism or ultra-externalism. In the BIV, the brain receives stimuli from a machine. In an embodied brain, however, the brain receives the stimuli from the sensors found in the body (via touching, tasting, smelling, etc.), which receive their input from the external environment. This argument oftentimes leads to the conclusion that there is a difference between what the BIV is representing and what the embodied brain is representing. This debate has been pursued, but remains unresolved, by several philosophers, including Uriah Kriegel, Colin McGinn, and Robert D. Rupert, and it has ramifications for philosophy-of-mind discussions on (but not limited to) representation, consciousness, content, cognition, and embodied cognition.

Argument from incoherence

A third argument, from the philosopher Hilary Putnam, attempts to demonstrate the thought experiment's incoherence on the basis that it is self-refuting. To do this, Putnam first argued in favor of a theory of reference that would later become known as semantic externalism. He offers the "Twin Earth" example to demonstrate that two identical individuals, one on our earth and another on a "twin earth", may possess the exact same mental state and thoughts, yet refer to two different things. For instance, when we think of cats, the referent of our thoughts would be the cats that we find here on earth. However, our twins on twin earth, though possessing the same thoughts, would instead be referring not to our cats, but to twin earth's cats. Bearing this in mind, he writes that a "pure" brain in a vat, i.e., one that has never existed outside of the simulation, could not even truthfully say that it was a brain in a vat. This is because the BIV, when it says "brain" and "vat", can only refer to objects within the simulation, not to things outside the simulation with which it has no causal relationship. Therefore, what it says is demonstrably false. Alternatively, if the speaker is not actually a BIV, then the statement is also false. He concludes, then, that the statement "I'm a BIV" is necessarily false and self-refuting. This argument has been explored at length in philosophical literature since its publication. One counter-argument says that, even assuming Putnam's reference theory, a brain on our earth that is "kidnapped", placed into a vat, and subjected to a simulation could still refer to "real" brains and vats, and thus correctly say it is a brain in a vat. However, the claim that the "pure" BIV's assertion is false, and the theory of reference underpinning it, remain influential in the philosophy of mind, language, and metaphysics.
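
Putnam's self-refutation point, described above, is often compressed into a simple disjunction (a common reconstruction, not Putnam's own wording):

\[
\begin{aligned}
&1.\ \text{If I am a (pure) BIV, my word ``vat'' refers only to simulated vats, so ``I am a BIV'' is false.}\\
&2.\ \text{If I am not a BIV, then ``I am a BIV'' is false.}\\
&3.\ \therefore\ \text{Whenever I assert ``I am a BIV'', the assertion is false.}
\end{aligned}
\]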

Boltzmann brain

From Wikipedia, the free encyclopedia
 
Ludwig Boltzmann, after whom Boltzmann brains are named

The Boltzmann brain argument suggests that it is more likely for a single brain to spontaneously and briefly form in a void (complete with a false memory of having existed in our universe) than it is for the universe to have come about in the way modern science thinks it actually did. It was first proposed as a reductio ad absurdum response to Ludwig Boltzmann's early explanation for the low-entropy state of our universe.

In this physics thought experiment, a Boltzmann brain is a fully formed brain, complete with memories of a full human life in our universe, that arises due to extremely rare random fluctuations out of a state of thermodynamic equilibrium. Theoretically, over an extremely large but not infinite amount of time, by sheer chance atoms in a void could spontaneously come together in such a way as to assemble a functioning human brain. Like any brain in such circumstances, it would almost immediately stop functioning and begin to deteriorate.

The idea is named after the Austrian physicist Ludwig Boltzmann (1844–1906), who in 1896 published a theory that tried to account for the fact that humans find themselves in a universe that is not as chaotic as the budding field of thermodynamics seemed to predict. He offered several explanations, one of them being that even a universe that is fully random (or at thermal equilibrium) would spontaneously fluctuate to a more ordered (or lower-entropy) state.

Boltzmann brains gained new relevance around 2002, when some cosmologists started to become concerned that, in many theories about the Universe, human brains in the current universe appear to be vastly outnumbered by the Boltzmann brains that will appear in the future; this leads to the conclusion that, statistically, humans are likely to be Boltzmann brains. Such a reductio ad absurdum argument is sometimes used to argue against certain theories of the Universe. When applied to more recent theories about the multiverse, Boltzmann brain arguments are part of the unsolved measure problem of cosmology. Physics, being an experimental science, uses the Boltzmann brain thought experiment as a tool for evaluating competing scientific theories.

Boltzmann universe

In 1896, the mathematician Ernst Zermelo advanced a theory that the second law of thermodynamics was absolute rather than statistical. Zermelo bolstered his theory by pointing out that the Poincaré recurrence theorem shows statistical entropy in a closed system must eventually be a periodic function; therefore, the Second Law, which is always observed to increase entropy, is unlikely to be statistical. To counter Zermelo's argument, the Austrian physicist Ludwig Boltzmann advanced two theories. The first theory, now believed to be the correct one, is that the Universe started for some unknown reason in a low-entropy state. The second and alternative theory, published in 1896 but attributed in 1895 to Boltzmann's assistant Ignaz Schütz, is the "Boltzmann universe" scenario. In this scenario, the Universe spends the vast majority of eternity in a featureless state of heat death; however, over enough eons, eventually a very rare thermal fluctuation will occur where atoms bounce off each other in exactly such a way as to form a substructure equivalent to our entire observable universe. Boltzmann argues that, while most of the universe is featureless, humans do not see those regions because they are devoid of intelligent life; to Boltzmann, it is unremarkable that humanity views solely the interior of its Boltzmann universe, as that is the only place where intelligent life lives. (This may be the first use in modern science of the anthropic principle).

In 1931, astronomer Arthur Eddington pointed out that, because a large fluctuation is exponentially less probable than a small fluctuation, observers in Boltzmann universes will be vastly outnumbered by observers in smaller fluctuations. Physicist Richard Feynman published a similar counterargument in his widely read 1964 Feynman Lectures on Physics. By 2004, physicists had pushed Eddington's observation to its logical conclusion: the most numerous observers in an eternity of thermal fluctuations would be minimal "Boltzmann brains" popping up in an otherwise featureless universe.

Spontaneous formation

In the universe's eventual state of ergodic "heat death", given enough time, every possible structure (including every possible brain) gets formed via random fluctuation. The timescale for this is related to the Poincaré recurrence time. Boltzmann-style thought experiments focus on structures like human brains that are presumably self-aware observers. Given any arbitrary criteria for what constitutes a Boltzmann brain (or planet, or universe), smaller structures that minimally and barely meet the criteria are vastly and exponentially more common than larger structures; a rough analogy is how the odds of a real English word showing up when one shakes a box of Scrabble letters are greater than the odds that a whole English sentence or paragraph will form. The average timescale required for the formation of a Boltzmann brain is vastly greater than the current age of the Universe. In modern physics, Boltzmann brains can be formed either by quantum fluctuation, or by a thermal fluctuation generally involving nucleation.
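The Scrabble analogy can be made quantitative. As a minimal sketch (the 26-letter alphabet and the target strings below are illustrative assumptions, not part of the physics), the probability that a run of random letters matches a given structure falls off exponentially with the structure's size:

```python
# Probability that n uniformly random letters from a 26-letter alphabet
# spell one specific target string: (1/26)**n.
def p_exact(n: int) -> float:
    return (1 / 26) ** n

word = "cat"                    # a minimal "structure"
sentence = "thecatsatonthemat"  # a larger one

print(p_exact(len(word)))       # ~5.7e-05
print(p_exact(len(sentence)))   # ~8.8e-25
print(p_exact(len(word)) / p_exact(len(sentence)))  # ~6.5e+19
```

The ratio in the last line is the point: a structure only 14 letters longer is rarer by some twenty orders of magnitude, which is why minimal Boltzmann brains dominate over brains with bodies, planets, or whole universes around them.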

Via quantum fluctuation

By one calculation, a Boltzmann brain would appear as a quantum fluctuation in the vacuum after a time interval of about 10^(10^50) years. This fluctuation can occur even in a true Minkowski vacuum (a flat spacetime vacuum lacking vacuum energy). Quantum mechanics heavily favors smaller fluctuations that "borrow" the least amount of energy from the vacuum. Typically, a quantum Boltzmann brain would suddenly appear from the vacuum (alongside an equivalent amount of virtual antimatter), remain only long enough to have a single coherent thought or observation, and then disappear into the vacuum as suddenly as it appeared. Such a brain is completely self-contained, and can never radiate energy out to infinity.

Via nucleation

Current evidence suggests that the vacuum permeating the observable Universe is not a Minkowski space, but rather a de Sitter space with a positive cosmological constant. In a de Sitter vacuum (but not in a Minkowski vacuum), a Boltzmann brain can form via nucleation of non-virtual particles gradually assembled by chance from the Hawking radiation emitted from the de Sitter space's bounded cosmological horizon. One estimate for the average time required until nucleation is around 10^(10^69) years. A typical nucleated Boltzmann brain will, after it finishes its activity, cool off to absolute zero and eventually completely decay, as any isolated object would in the vacuum of space. Unlike the quantum fluctuation case, the Boltzmann brain will radiate energy out to infinity. In nucleation, the most common fluctuations are as close to thermal equilibrium overall as possible given whatever arbitrary criteria are provided for labeling a fluctuation a "Boltzmann brain".

Theoretically a Boltzmann brain can also form, albeit again with a tiny probability, at any time during the matter-dominated early universe.

Modern reactions to the Boltzmann brain problem

The consensus amongst cosmologists is that the surprising calculation that Boltzmann brains should vastly outnumber normal human brains hints at some yet-to-be-revealed error. Sean Carroll states "We're not arguing that Boltzmann Brains exist—we're trying to avoid them." Carroll has stated that the hypothesis of being a Boltzmann brain results in "cognitive instability": because such a brain would take longer than the current age of the universe to form, yet thinks it observes itself existing in a younger universe, its memories and reasoning processes would be untrustworthy if it were indeed a Boltzmann brain. Seth Lloyd has stated "they fail the Monty Python test: Stop that! That's too silly!" A New Scientist journalist summarizes that "the starting point for our understanding of the universe and its behavior is that humans, not disembodied brains, are typical observers."

Some argue that brains produced via quantum fluctuation, and maybe even brains produced via nucleation in the de Sitter vacuum, do not count as observers. Quantum fluctuations are easier to exclude than nucleated brains, as quantum fluctuations can more easily be targeted by straightforward criteria (such as their lack of interaction with the environment at infinity).

Some cosmologists believe that a better understanding of the degrees of freedom in the quantum vacuum of holographic string theory can solve the Boltzmann brain problem.

Brian Greene states: "I am confident that I am not a Boltzmann brain. However, we want our theories to similarly concur that we are not Boltzmann brains, but so far it has proved surprisingly difficult for them to do so."

In single-Universe scenarios

In a single de Sitter Universe with a cosmological constant, and starting from any finite spatial slice, the number of "normal" observers is finite and bounded by the heat death of the Universe. If the Universe lasts forever, the number of nucleated Boltzmann brains is, in most models, infinite; cosmologists such as Alan Guth worry that this would make it seem "infinitely unlikely for us to be normal brains". One caveat is that if the Universe is a false vacuum that locally decays into a Minkowski or a Big Crunch-bound anti-de Sitter space in less than 20 billion years, then infinite Boltzmann nucleation is avoided. (If the local false vacuum's average decay timescale is longer than 20 billion years, Boltzmann brain nucleation is still infinite, as the Universe increases in size faster than local vacuum collapses destroy the portions of the Universe within the collapses' future light cones.) Proposed hypothetical mechanisms to destroy the universe within that timeframe range from superheavy gravitinos to a heavier-than-observed top quark triggering "death by Higgs".

If no cosmological constant exists, and if the presently observed vacuum energy is from quintessence that will eventually completely dissipate, then infinite Boltzmann nucleation is also avoided.

In eternal inflation

One class of solutions to the Boltzmann brain problem makes use of differing approaches to the measure problem in cosmology: in infinite multiverse theories, the ratio of normal observers to Boltzmann brains depends on how infinite limits are taken. Measures might be chosen to avoid appreciable fractions of Boltzmann brains. Unlike the single-universe case, one challenge in finding a global solution in eternal inflation is that all possible string landscapes must be summed over; in some measures, having even a small fraction of universes infested with Boltzmann brains causes the measure of the multiverse as a whole to be dominated by Boltzmann brains.

The measure problem in cosmology also grapples with the ratio of normal observers to abnormally early observers. In measures such as the proper time measure that suffer from an extreme "youngness" problem, the typical observer is a "Boltzmann baby" formed by rare fluctuation in an extremely hot, early universe.

Identifying whether oneself is a Boltzmann observer

In Boltzmann brain scenarios, the ratio of Boltzmann brains to "normal observers" is astronomically large. Almost any relevant subset of Boltzmann brains, such as "brains embedded within functioning bodies", "observers who believe they are perceiving 3 K microwave background radiation through telescopes", "observers who have a memory of coherent experiences", or "observers who have the same series of experiences as me", also vastly outnumber "normal observers". Therefore, under most models of consciousness, it is unclear that one can reliably conclude that oneself is not such a "Boltzmann observer", in a case where Boltzmann brains dominate the Universe. Even under "content externalism" models of consciousness, Boltzmann observers living in a consistent Earth-sized fluctuation over the course of the past several years outnumber the "normal observers" spawned before a Universe's "heat death".

As stated earlier, most Boltzmann brains have "abnormal" experiences; Feynman has pointed out that, if one knows oneself to be a typical Boltzmann brain, one does not expect "normal" observations to continue in the future. In other words, in a Boltzmann-dominated Universe, most Boltzmann brains have "abnormal" experiences, but most observers with only "normal" experiences are Boltzmann brains, due to the overwhelming vastness of the population of Boltzmann brains in such a Universe.

Law of truly large numbers

From Wikipedia, the free encyclopedia

The law of truly large numbers (a statistical adage), attributed to Persi Diaconis and Frederick Mosteller, states that with a large enough number of samples, any outrageous (i.e. unlikely in any single sample) thing is likely to be observed. Because we never find it notable when likely events occur, we highlight unlikely events and notice them more. The law is often used to debunk pseudo-scientific claims; as such, the law and its use are sometimes criticized by fringe scientists.

The law is meant to make a statement about probabilities and statistical significance: in large enough masses of statistical data, even minuscule fluctuations attain statistical significance. Thus, in truly large numbers of observations it is paradoxically easy to find significant correlations that nevertheless do not support causal theories (see: spurious correlation), and whose sheer number can itself become a source of obfuscation.

The law can be rephrased as "large numbers also deceive", something which is counter-intuitive to a descriptive statistician. More concretely, skeptic Penn Jillette has said, "Million-to-one odds happen eight times a day in New York" (population about 8,000,000).

Example

For a simplified example of the law, assume that a given event happens with probability 0.1% in a single trial. Then the probability that this so-called unlikely event does not happen in a single trial is 99.9% (0.999).

For a sample of only 1000 independent trials, however, the probability that the event does not happen in any of them, even once, is 0.999^1000 ≈ 0.3677 (36.77%). Then, the probability that the event does happen at least once in 1000 trials is 1 − 0.999^1000 ≈ 0.6323 (63.23%). This means that this "unlikely event" has a probability of 63.23% of happening if 1000 independent trials are conducted, and over 99.9% for 10,000 trials.

The probability that it happens at least once in 10,000 trials is 1 − 0.999^10000 ≈ 0.99995 (99.995%). In other words, given enough trials, a highly unlikely event becomes ever more likely to occur.

This calculation can be generalized and formalized as the statement that the probability c that an unlikely event X happens in N independent trials can become arbitrarily close to 1, no matter how small the probability a of the event X in a single trial is, provided that N is truly large.
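The arithmetic of the example is easy to reproduce; here is a minimal check in Python (the 0.1% per-trial probability is the example's own assumption):

```python
def p_at_least_once(a: float, n: int) -> float:
    """Probability that an event with per-trial probability a
    occurs at least once in n independent trials."""
    return 1 - (1 - a) ** n

a = 0.001  # the "unlikely" event from the example above
print(p_at_least_once(a, 1_000))    # ~0.6323
print(p_at_least_once(a, 10_000))   # ~0.99995
print(p_at_least_once(a, 100_000))  # ~1.0 to machine precision
```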

In criticism of pseudoscience

The law comes up in criticism of pseudoscience and is sometimes called the Jeane Dixon effect (see also Postdiction). It holds that the more predictions a psychic makes, the better the odds that one of them will "hit". Thus, if one comes true, the psychic expects us to forget the vast majority that did not happen (confirmation bias). Humans can be susceptible to this fallacy.

Another, somewhat similar manifestation of the law can be found in gambling, where gamblers tend to remember their wins and forget their losses, even if the latter far outnumber the former (though depending on the person, the opposite may also be true, when they feel they need to analyze their losses more closely to fine-tune their playing system). Mikal Aasved links it with "selective memory bias", which allows gamblers to mentally distance themselves from the consequences of their gambling by holding an inflated view of their real winnings (or losses in the opposite case: "selective memory bias in either direction").

 

Infinite monkey theorem

From Wikipedia, the free encyclopedia
 
Chimpanzee seated at a typewriter

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare. In fact, the monkey would almost surely type every possible finite text an infinite number of times. However, the probability that monkeys filling the entire observable universe would type a single complete work, such as Shakespeare's Hamlet, is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero). The theorem can be generalized to state that any sequence of events which has a non-zero probability of happening, at least as long as it hasn't occurred, will almost certainly eventually occur.

In this context, "almost surely" is a mathematical term with a precise meaning, and the "monkey" is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913, but the first instance may have been even earlier.

Variants of the theorem include multiple and even infinitely many typists, and the target text varies between an entire library and a single sentence. Jorge Luis Borges traced the history of this idea from Aristotle's On Generation and Corruption and Cicero's De Natura Deorum (On the Nature of the Gods), through Blaise Pascal and Jonathan Swift, up to modern statements with their iconic simians and typewriters. In the early 20th century, Borel and Arthur Eddington used the theorem to illustrate the timescales implicit in the foundations of statistical mechanics.

Solution

Direct proof

There is a straightforward proof of this theorem. As an introduction, recall that if two events are statistically independent, then the probability of both happening equals the product of the probabilities of each one happening independently. For example, if the chance of rain in Moscow on a particular day in the future is 0.4 and the chance of an earthquake in San Francisco on any particular day is 0.00003, then the chance of both happening on the same day is 0.4 × 0.00003 = 0.000012, assuming that they are indeed independent.

Suppose the typewriter has 50 keys, and the word to be typed is banana. If the keys are pressed randomly and independently, it means that each key has an equal chance of being pressed. Then, the chance that the first letter typed is 'b' is 1/50, and the chance that the second letter typed is 'a' is also 1/50, and so on. Therefore, the chance of the first six letters spelling banana is

(1/50) × (1/50) × (1/50) × (1/50) × (1/50) × (1/50) = (1/50)^6 = 1/15,625,000,000.

Less than one in 15 billion, but not zero.

From the above, the chance of not typing banana in a given block of 6 letters is 1 − (1/50)^6. Because each block is typed independently, the chance Xn of not typing banana in any of the first n blocks of 6 letters is

Xn = (1 − (1/50)^6)^n.

As n grows, Xn gets smaller. For n = 1 million, Xn is roughly 0.9999, but for n = 10 billion Xn is roughly 0.53 and for n = 100 billion it is roughly 0.0017. As n approaches infinity, the probability Xn approaches zero; that is, by making n large enough, Xn can be made as small as is desired, and the chance of typing banana approaches 100%.

The same argument shows why at least one of infinitely many monkeys will produce a text as quickly as it would be produced by a perfectly accurate human typist copying it from the original. In this case Xn = (1 − (1/50)^6)^n, where Xn represents the probability that none of the first n monkeys types banana correctly on their first try. When we consider 100 billion monkeys, the probability falls to 0.17%, and as the number of monkeys n increases, the value of Xn – the probability of the monkeys failing to reproduce the given text – approaches zero arbitrarily closely. The limit, for n going to infinity, is zero. So the probability of the word banana appearing at some point in an infinite sequence of keystrokes is equal to one.
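A few lines of Python reproduce the banana arithmetic above (the 50-key typewriter and 6-letter blocks are the section's own assumptions):

```python
p_banana = (1 / 50) ** 6  # chance one 6-letter block spells "banana"
print(p_banana)           # 6.4e-11, i.e. one in 15,625,000,000

def x_n(n: int) -> float:
    """Chance that none of the first n independent blocks spells banana."""
    return (1 - p_banana) ** n

for n in (10**6, 10**10, 10**11):
    print(n, x_n(n))
# 1000000       ~0.99994
# 10000000000   ~0.53
# 100000000000  ~0.0017
```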

Infinite strings

This can be stated more generally and compactly in terms of strings, which are sequences of characters chosen from some finite alphabet:

  • Given an infinite string where each character is chosen uniformly at random, any given finite string almost surely occurs as a substring at some position.
  • Given an infinite sequence of infinite strings, where each character of each string is chosen uniformly at random, any given finite string almost surely occurs as a prefix of one of these strings.

Both follow easily from the second Borel–Cantelli lemma. For the second theorem, let Ek be the event that the kth string begins with the given text. Because this has some fixed nonzero probability p of occurring, the Ek are independent, and the sum

∑k P(Ek) = p + p + p + ⋯ = ∞

diverges, so the probability that infinitely many of the Ek occur is 1. The first theorem is shown similarly; one can divide the random string into nonoverlapping blocks matching the size of the desired text, and make Ek the event where the kth block equals the desired string.
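A quick simulation illustrates why the divergence matters: among the first K random strings, the number beginning with the given text grows without bound, in proportion to pK (the two-letter alphabet and short prefix here are illustrative choices):

```python
import random

rng = random.Random(7)
prefix = "0110"          # the "given text"
p = 0.5 ** len(prefix)   # P(Ek) = 1/16 for a uniform binary string

def starts_with_prefix() -> bool:
    # Only the first len(prefix) characters of each infinite string matter.
    return all(rng.choice("01") == c for c in prefix)

for k in (1_000, 10_000, 100_000):
    hits = sum(starts_with_prefix() for _ in range(k))
    print(k, hits, "expected ~", p * k)
# The count keeps growing in proportion to k, so infinitely many of the
# events Ek occur: the Borel-Cantelli conclusion.
```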

Probabilities

However, for physically meaningful numbers of monkeys typing for physically meaningful lengths of time the results are reversed. If there were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even a single page of Shakespeare is unfathomably small.

Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter of Hamlet. It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 = 19,928,148,895,209,409,152,340,197,376 (almost 2 × 10^28). In the case of the entire text of Hamlet, the probabilities are so vanishingly small as to be inconceivable. The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.
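Numbers like 3.4 × 10^183,946 overflow ordinary floating point, but the exponent is easy to verify with logarithms; a quick sketch (the 130,000-letter count is the section's own approximation):

```python
import math

letters = 130_000
log10_odds = letters * math.log10(26)  # log10 of 26**130000
print(log10_odds)                      # ~183946.5
print(10 ** (log10_odds % 1))          # ~3.4, the leading factor
# Together: about 3.4 x 10**183946, matching the figure quoted above.
```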

Even if every proton in the observable universe were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys. As Kittel and Kroemer put it in their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys, "The probability of Hamlet is therefore zero in any operational sense of an event ...", and the statement that the monkeys must eventually succeed "gives a misleading conclusion about very, very large numbers."

In fact there is less than a one in a trillion chance of success that such a universe made of monkeys could type any particular document a mere 79 characters long.

Almost surely

The probability that an infinite randomly generated string of text will contain a particular finite substring is 1. However, this does not mean the substring's absence is "impossible", despite the absence having a prior probability of 0. For example, the immortal monkey could randomly type G as its first letter, G as its second, and G as every single letter thereafter, producing an infinite string of Gs; at no point must the monkey be "compelled" to type anything else. (To assume otherwise implies the gambler's fallacy.) However long a randomly generated finite string is, there is a small but nonzero chance that it will turn out to consist of the same character repeated throughout; this chance approaches zero as the string's length approaches infinity. There is nothing special about such a monotonous sequence except that it is easy to describe; the same fact applies to any nameable specific sequence, such as "RGRGRG" repeated forever, or "a-b-aa-bb-aaa-bbb-...", or "Three, Six, Nine, Twelve…".

If the hypothetical monkey has a typewriter with 90 equally likely keys that include numerals and punctuation, then the first typed keys might be "3.14" (the first three digits of pi) with a probability of (1/90)^4, which is 1/65,610,000. Equally probable is any other string of four characters allowed by the typewriter, such as "GGGG", "mATh", or "q%8e". The probability that 100 randomly typed keys will consist of the first 99 digits of pi (including the separator key), or any other particular sequence of that length, is much lower: (1/90)^100. If the monkey's allotted length of text is infinite, the chance of typing only the digits of pi is 0, which is just as possible (mathematically probable) as typing nothing but Gs (also probability 0).

The same applies to the event of typing a particular version of Hamlet followed by endless copies of itself; or Hamlet immediately followed by all the digits of pi; these specific strings are equally infinite in length, they are not prohibited by the terms of the thought problem, and they each have a prior probability of 0. In fact, any particular infinite sequence the immortal monkey types will have had a prior probability of 0, even though the monkey must type something.

This is an extension of the principle that a finite string of random text has a lower and lower probability of being a particular string the longer it is (though all specific strings are equally unlikely). This probability approaches 0 as the string approaches infinity. Thus, the probability of the monkey typing an endlessly long string, such as all of the digits of pi in order, on a 90-key keyboard is (1/90)^∞, which equals (1/∞), which is essentially 0. At the same time, the probability that the sequence contains a particular subsequence (such as the word MONKEY, or the 12th through 999th digits of pi, or a version of the King James Bible) increases as the total string increases. This probability approaches 1 as the total string approaches infinity, and thus the original theorem is correct.

Correspondence between strings and numbers

In a simplification of the thought experiment, the monkey could have a typewriter with just two keys: 1 and 0. The infinitely long string thus produced would correspond to the binary digits of a particular real number between 0 and 1. A countably infinite set of possible strings end in infinite repetitions, which means the corresponding real number is rational. Examples include the strings corresponding to one-third (010101...), five-sixths (11010101...) and five-eighths (1010000...). Only a subset of such real number strings (albeit a countably infinite subset) contains the entirety of Hamlet (assuming that the text is subjected to a numerical encoding, such as ASCII).
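The repeating-expansion examples can be checked with exact rational arithmetic. A small helper (the function and its interface are ours, for illustration) evaluates an eventually periodic binary expansion as a fraction:

```python
from fractions import Fraction

def binary_value(prefix: str, repeat: str) -> Fraction:
    """Value in [0, 1) of the binary expansion 0.(prefix)(repeat)(repeat)..."""
    p, r = len(prefix), len(repeat)
    head = Fraction(int(prefix, 2), 2**p) if prefix else Fraction(0)
    # The repeating block sums to a geometric series, int(repeat, 2)/(2**r - 1),
    # shifted p binary places past the prefix.
    tail = Fraction(int(repeat, 2), 2**r - 1) / 2**p
    return head + tail

print(binary_value("", "01"))    # 1/3  (010101...)
print(binary_value("11", "01"))  # 5/6  (11010101...)
print(binary_value("101", "0"))  # 5/8  (1010000...)
```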

Meanwhile, there is an uncountably infinite set of strings which do not end in such repetition; these correspond to the irrational numbers. These can be sorted into two uncountably infinite subsets: those which contain Hamlet and those which do not. However, the "largest" subset of all the real numbers are those which not only contain Hamlet, but which contain every other possible string of any length, and with equal distribution of such strings. These irrational numbers are called normal. Because almost all numbers are normal, almost all possible strings contain all possible finite substrings. Hence, the probability of the monkey typing a normal number is 1. The same principles apply regardless of the number of keys from which the monkey can choose; a 90-key keyboard can be seen as a generator of numbers written in base 90.

History

Statistical mechanics

One of the forms in which probabilists now know this theorem, with its "dactylographic" [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both monkeys and apes), appeared in Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" (Statistical mechanics and irreversibility), and in his book "Le Hasard" in 1914. His "monkeys" are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

The physicist Arthur Eddington drew on Borel's image further in The Nature of the Physical World (1928), writing:

If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys' success is effectively impossible, and it may safely be said that such a process will never happen. It is clear from the context that Eddington is not suggesting that the probability of this happening is worthy of serious consideration. On the contrary, it was a rhetorical illustration of the fact that below certain levels of probability, the term improbable is functionally equivalent to impossible.

Origins and "The Total Library"

In a 1939 essay entitled "The Total Library", Argentine writer Jorge Luis Borges traced the infinite-monkey concept back to Aristotle's Metaphysics. Explaining the views of Leucippus, who held that the world arose through the random combination of atoms, Aristotle notes that the atoms themselves are homogeneous and their possible arrangements only differ in shape, position and ordering. In On Generation and Corruption, the Greek philosopher compares this to the way that a tragedy and a comedy consist of the same "atoms", i.e., alphabetic characters. Three centuries later, Cicero's De natura deorum (On the Nature of the Gods) argued against the atomist worldview:

He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them.

Borges follows the history of this argument through Blaise Pascal and Jonathan Swift, then observes that in his own time, the vocabulary had changed. By 1939, the idiom was "that a half-dozen monkeys provided with typewriters would, in a few eternities, produce all the books in the British Museum." (To which Borges adds, "Strictly speaking, one immortal monkey would suffice.") Borges then imagines the contents of the Total Library which this enterprise would produce if carried to its fullest extreme:

Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true nature of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.

Borges' total library concept was the main theme of his widely read 1941 short story "The Library of Babel", which describes an unimaginably vast library consisting of interlocking hexagonal chambers, together containing every possible volume that could be composed from the letters of the alphabet and some punctuation characters.

Actual monkeys

In 2002, lecturers and students from the University of Plymouth MediaLab Arts course used a £2,000 grant from the Arts Council to study the literary output of real monkeys. They left a computer keyboard in the enclosure of six Celebes crested macaques in Paignton Zoo in Devon, England for a month, with a radio link to broadcast the results on a website.

Not only did the monkeys produce nothing but five total pages largely consisting of the letter 'S', but the lead male began striking the keyboard with a stone, and other monkeys followed by soiling it. Mike Phillips, director of the university's Institute of Digital Arts and Technology (i-DAT), said that the artist-funded project was primarily performance art, and they had learned "an awful lot" from it. He concluded that monkeys "are not random generators. They're more complex than that. ... They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."

The full text created by the monkeys was published online as a PDF (archived from the original on 2009-03-18).

Applications and criticisms

Evolution

Thomas Huxley is sometimes wrongly credited with proposing a variant of the theory in his debates with Samuel Wilberforce.

In his 1931 book The Mysterious Universe, Eddington's rival James Jeans attributed the monkey parable to a "Huxley", presumably meaning Thomas Henry Huxley. This attribution is incorrect. Today, it is sometimes further reported that Huxley applied the example in a now-legendary debate over Charles Darwin's On the Origin of Species with the Anglican Bishop of Oxford, Samuel Wilberforce, held at a meeting of the British Association for the Advancement of Science at Oxford on 30 June 1860. This story suffers not only from a lack of evidence, but also from the fact that in 1860 the typewriter itself had yet to emerge.

Despite the original mix-up, monkey-and-typewriter arguments are now common in debates over evolution. As an example of Christian apologetics, Doug Powell argued that even if a monkey accidentally types the letters of Hamlet, it has failed to produce Hamlet because it lacked the intention to communicate. His parallel implication is that natural laws could not produce the information content in DNA. A more common argument is represented by Reverend John F. MacArthur, who claimed that the genetic mutations necessary to produce a tapeworm from an amoeba are as unlikely as a monkey typing Hamlet's soliloquy, and hence that the odds against the evolution of all life are impossible to overcome.

Evolutionary biologist Richard Dawkins employs the typing monkey concept in his book The Blind Watchmaker to demonstrate the ability of natural selection to produce biological complexity out of random mutations. In a simulation experiment Dawkins has his weasel program produce the Hamlet phrase METHINKS IT IS LIKE A WEASEL, starting from a randomly typed parent, by "breeding" subsequent generations and always choosing the closest match from progeny that are copies of the parent, with random mutations. The chance of the target phrase appearing in a single step is extremely small, yet Dawkins showed that it could be produced rapidly (in about 40 generations) using cumulative selection of phrases. The random choices furnish raw material, while cumulative selection imparts information. As Dawkins acknowledges, however, the weasel program is an imperfect analogy for evolution, as "offspring" phrases were selected "according to the criterion of resemblance to a distant ideal target." In contrast, Dawkins affirms, evolution has no long-term plans and does not progress toward some distant goal (such as humans). The weasel program is instead meant to illustrate the difference between non-random cumulative selection, and random single-step selection. In terms of the typing monkey analogy, this means that Romeo and Juliet could be produced relatively quickly if placed under the constraints of a nonrandom, Darwinian-type selection because the fitness function will tend to preserve in place any letters that happen to match the target text, improving each successive generation of typing monkeys.
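Dawkins described the weasel program's behavior but not its exact code; the following is a minimal sketch of the same cumulative-selection idea, with the mutation rate and brood size as illustrative guesses:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    """Number of positions at which s matches the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Copy s, changing each character to a random one with probability rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Breed 100 mutant copies and keep the one closest to the target.
    parent = max((mutate(parent) for _ in range(100)), key=score)
print(generation)  # on the order of 10^2 generations with these settings
```

Compare this with single-step selection: drawing fresh 28-character strings at random until one matches would take on the order of 27^28 attempts, which is exactly the difference the weasel program is meant to dramatize.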

A different avenue for exploring the analogy between evolution and an unconstrained monkey lies in the problem that the monkey types only one letter at a time, independently of the other letters. Hugh Petrie argues that a more sophisticated setup is required, in his case not for biological evolution but the evolution of ideas:

In order to get the proper analogy, we would have to equip the monkey with a more complex typewriter. It would have to include whole Elizabethan sentences and thoughts. It would have to include Elizabethan beliefs about human action patterns and the causes, Elizabethan morality and science, and linguistic patterns for expressing these. It would probably even have to include an account of the sorts of experiences which shaped Shakespeare's belief structure as a particular example of an Elizabethan. Then, perhaps, we might allow the monkey to play with such a typewriter and produce variants, but the impossibility of obtaining a Shakespearean play is no longer obvious. What is varied really does encapsulate a great deal of already-achieved knowledge.

James W. Valentine, while admitting that the classic monkey's task is impossible, finds that there is a worthwhile analogy between written English and the metazoan genome in this other sense: both have "combinatorial, hierarchical structures" that greatly constrain the immense number of combinations at the alphabet level.

Literary theory

R. G. Collingwood argued in 1938 that art cannot be produced by accident, and wrote as a sarcastic aside to his critics,

... some ... have denied this proposition, pointing out that if a monkey played with a typewriter ... he would produce ... the complete text of Shakespeare. Any reader who has nothing to do can amuse himself by calculating how long it would take for the probability to be worth betting on. But the interest of the suggestion lies in the revelation of the mental state of a person who can identify the 'works' of Shakespeare with the series of letters printed on the pages of a book ...

Nelson Goodman took the contrary position, illustrating his point along with Catherine Elgin by the example of Borges' "Pierre Menard, Author of the Quixote",

What Menard wrote is simply another inscription of the text. Any of us can do the same, as can printing presses and photocopiers. Indeed, we are told, if infinitely many monkeys ... one would eventually produce a replica of the text. That replica, we maintain, would be as much an instance of the work, Don Quixote, as Cervantes' manuscript, Menard's manuscript, and each copy of the book that ever has been or will be printed.

In another writing, Goodman elaborates, "That the monkey may be supposed to have produced his copy randomly makes no difference. It is the same text, and it is open to all the same interpretations. ..." Gérard Genette dismisses Goodman's argument as begging the question.

For Jorge J. E. Gracia, the question of the identity of texts leads to a different question, that of author. If a monkey is capable of typing Hamlet, despite having no intention of meaning and therefore disqualifying itself as an author, then it appears that texts do not require authors. Possible solutions include saying that whoever finds the text and identifies it as Hamlet is the author; or that Shakespeare is the author, the monkey his agent, and the finder merely a user of the text. These solutions have their own difficulties, in that the text appears to have a meaning separate from the other agents: What if the monkey operates before Shakespeare is born, or if Shakespeare is never born, or if no one ever finds the monkey's typescript?

Random document generation

The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation.

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on 4 August 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, 

"VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".

A website entitled The Monkey Shakespeare Simulator, launched on 1 July 2003, contained a Java applet that simulated a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d

Due to processing power limitations, the program used a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detected a match" (that is, the RNG generated a certain value or a value within a certain range), the simulator simulated the match by generating matched text.
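That shortcut can be sketched directly. Rather than simulating every keystroke, one samples how many attempts elapse before a match from a geometric distribution; the parameters below are illustrative, not the simulator's actual ones:

```python
import math
import random

def attempts_until_match(p: float, rng: random.Random) -> int:
    """Sample the number of attempts until the first success, where each
    attempt independently succeeds with probability p (a geometric law),
    without simulating the attempts one by one."""
    u = rng.random()  # uniform in [0, 1)
    # Invert the geometric CDF P(N <= n) = 1 - (1 - p)**n;
    # log1p keeps precision when p is astronomically small.
    return max(1, math.ceil(math.log(1 - u) / math.log1p(-p)))

rng = random.Random(0)
p = (1 / 27) ** 24  # say, a 24-character match on a 27-key layout
print(attempts_until_match(p, rng))  # an astronomically large count
```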

More sophisticated methods are used in practice for natural language generation. If, instead of simply generating random characters, one restricts the generator to a meaningful vocabulary and conservatively follows grammar rules, for example using a context-free grammar, then a random document generated this way can even fool some humans (at least on a cursory reading), as shown in the experiments with SCIgen, snarXiv, and the Postmodernism Generator.
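As a toy illustration of grammar-constrained generation (the miniature grammar below is our own invention, far simpler than SCIgen's real rules):

```python
import random

# A miniature context-free grammar: nonterminals map to possible
# expansions; any symbol not in the table is a terminal word.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["a", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["monkey"], ["typewriter"], ["sonnet"]],
    "V":  [["types"], ["produces"], ["ignores"]],
}

def expand(symbol: str, rng: random.Random) -> list:
    if symbol not in GRAMMAR:
        return [symbol]  # terminal
    production = rng.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part, rng)]

rng = random.Random(42)
print(" ".join(expand("S", rng)))  # e.g. "a monkey produces the sonnet"
```

Every output is grammatical by construction, which is exactly what separates such generators from the uniform keystrokes of the theorem's monkey.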

In February 2019, the OpenAI group published the Generative Pre-trained Transformer 2 (GPT-2) artificial intelligence to GitHub, which is able to produce a fully plausible news article given a two-sentence input from a human hand. The AI was so effective that, instead of publishing the full code, the group chose to publish a scaled-back version and released a statement regarding "concerns about large language models being used to generate deceptive, biased, or abusive language at scale."

Testing of random-number generators

Questions about the statistics describing how often an ideal monkey is expected to type certain strings translate into practical tests for random-number generators; these range from the simple to the "quite sophisticated". Computer-science professors George Marsaglia and Arif Zaman report that they used to call one such category of tests "overlapping m-tuple tests" in lectures, since they concern overlapping m-tuples of successive elements in a random sequence. But they found that calling them "monkey tests" helped to motivate the idea with students. They published a report on the class of tests and their results for various RNGs in 1993.
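In that spirit, a toy "monkey test" can be written in a few lines: count how often a target string appears in a generator's output and compare with the exact expectation (this is a deliberately simple occurrence count, far cruder than Marsaglia and Zaman's overlapping m-tuple tests):

```python
import random

def count_occurrences(text: str, word: str) -> int:
    """Count possibly-overlapping occurrences of word in text."""
    return sum(text[i:i + len(word)] == word
               for i in range(len(text) - len(word) + 1))

rng = random.Random(123)
n = 1_000_000
text = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(n))

word = "cat"
expected = (n - len(word) + 1) / 26 ** len(word)  # ~56.9
print(expected, count_occurrences(text, word))
# A biased generator would show a systematic gap between the two numbers.
```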

In popular culture

The infinite monkey theorem and its associated imagery are considered a popular and proverbial illustration of the mathematics of probability, widely known to the general public because of its transmission through popular culture rather than through formal education. This popularity is helped by the innate humor in the image of literal monkeys rattling away on a set of typewriters, which is a popular visual gag.

A quotation attributed to a 1996 speech by Robert Wilensky stated, "We've heard that a million monkeys at a million keyboards could produce the complete works of Shakespeare; now, thanks to the Internet, we know that is not true."

The enduring, widespread popularity of the theorem was noted in the introduction to a 2001 paper, "Monkeys, Typewriters and Networks: The Internet in the Light of the Theory of Accidental Excellence". In 2002, an article in The Washington Post said, "Plenty of people have had fun with the famous notion that an infinite number of monkeys with an infinite number of typewriters and an infinite amount of time could eventually write the works of Shakespeare". In 2003, the previously mentioned Arts Council funded experiment involving real monkeys and a computer keyboard received widespread press coverage. In 2007, the theorem was listed by Wired magazine in a list of eight classic thought experiments.

American playwright David Ives' short one-act play Words, Words, Words, from the collection All in the Timing, pokes fun at the concept of the infinite monkey theorem.

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...