Sunday, September 21, 2014

Functionalism (philosophy of mind)

From Wikipedia, the free encyclopedia
Functionalism is a theory of the mind in contemporary philosophy, developed largely as an alternative to both the identity theory of mind and behaviorism. Its core idea is that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they are causal relations to other mental states, sensory inputs, and behavioral outputs.[1] Functionalism is a theoretical level between the physical implementation and behavioral output.[2] Therefore, it is different from its predecessors of Cartesian dualism (advocating independent mental and physical substances) and Skinnerian behaviorism and physicalism (declaring only physical substances) because it is only concerned with the effective functions of the brain, through its organization or its "software programs".

Since mental states are identified by a functional role, they are said to be multiply realizable; in other words, they can be manifested in various systems, perhaps even computers, so long as the system performs the appropriate functions. Just as computers are physical devices with an electronic substrate that perform computations on inputs to give outputs, so brains are physical devices with a neural substrate that perform computations on inputs to produce behaviors.
While functionalism has its advantages, there have been several arguments against it, claiming that it is an insufficient account of the mind.

Multiple realizability

An important part of some accounts of functionalism is the idea of multiple realizability. Since, according to standard functionalist theories, mental states just are their corresponding functional roles, mental states can be sufficiently explained without taking into account the underlying physical medium (e.g. the brain, neurons, etc.) that realizes such states; one need only take into account the higher-level functions in the cognitive system. Since mental states are not limited to a particular medium, they can be realized in multiple ways, including, theoretically, within non-biological systems, such as computers. In other words, a silicon-based machine could, in principle, have the same sort of mental life that a human being has, provided that its cognitive system realized the proper functional roles. Thus, mental states are individuated much as a valve is; a valve can be made of plastic or metal or any other material, as long as it performs the proper function (say, controlling the flow of liquid through a tube by blocking and unblocking its pathway).
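
The valve analogy lends itself to a small illustration in code. The sketch below is hypothetical (the class and method names are invented here, not taken from any source): two "valves" with entirely different internals share one input-output profile, and an observer limited to that profile cannot tell them apart.

```python
# Two realizations of the same functional role: controlling flow through
# a tube. The internals differ; only the causal role is shared.

class PlasticValve:
    def __init__(self):
        self._gate = "flap down"           # a polymer flap blocks the tube

    def set_open(self, is_open):
        self._gate = "flap up" if is_open else "flap down"

    def flow(self, inflow):
        return inflow if self._gate == "flap up" else 0


class MetalValve:
    def __init__(self):
        self._angle = 0                    # a steel ball valve, in degrees

    def set_open(self, is_open):
        self._angle = 90 if is_open else 0

    def flow(self, inflow):
        return inflow if self._angle == 90 else 0


def functional_profile(valve):
    """Observe a valve only through its causal role: what comes out
    given what goes in, when closed and when open."""
    valve.set_open(False)
    closed = valve.flow(10)
    valve.set_open(True)
    opened = valve.flow(10)
    return (closed, opened)


# Functionally identical despite entirely different "substrates":
assert functional_profile(PlasticValve()) == functional_profile(MetalValve())
```

On the functionalist picture, being a valve is exhausted by passing this kind of test; the material only matters to how the role is realized, not to what the role is.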
However, some functionalist theories combine with the identity theory of mind and deny multiple realizability. Such Functional Specification Theories (FSTs) (Levin, § 3.4), as they are called, were most notably developed by David Lewis[3] and David Malet Armstrong.[4]
According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential. What often drives this view is the belief that if we were to encounter an alien race whose cognitive system was composed of significantly different material from humans' (e.g., silicon-based) but performed the same functions as human mental states (e.g., they tend to yell "Yowzas!" when poked with sharp objects), then we would say that their type of mental state is perhaps similar to ours, but too different to say it is the same. For some, this may be a disadvantage of FSTs. Indeed, one of Hilary Putnam's[5][6] arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.

Types of functionalism

Machine-state functionalism

Artistic representation of a Turing machine.

The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam.[5][6] This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical computing machines developed by Alan Turing (called universal Turing machines), which are capable of computing any given algorithm.

In non-technical terms, a Turing machine can be visualized as an indefinitely long tape divided into squares (the memory) with a box-shaped scanning device that sits over and scans one square of the memory at a time. Each square is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:
  • Halt: Do nothing.
  • R: move one square to the right.
  • L: move one square to the left.
  • B: erase whatever is on the square.
  • 1: erase whatever is on the square and print a '1'.
An extremely simple example of a Turing machine is one which writes out the sequence '111' after scanning three blank squares and then stops, as specified by the following machine table:

     State One                 State Two                 State Three
B    write 1; stay in state 1  write 1; stay in state 2  write 1; stay in state 3
1    go right; go to state 2   go right; go to state 3   [halt]

This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, it halts.
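
The machine table above can be run as a short simulation. The sketch below is an illustrative encoding (the table representation and function name are invented here): each entry maps a (state, scanned symbol) pair to an action and a next state, exactly mirroring the rows and columns of the table.

```python
# The three-state machine table, encoded as (state, symbol) -> (action, next state).
# Symbols: 'B' (blank) and '1'. The 'halt' action ends the run.
TABLE = {
    (1, 'B'): ('write 1', 1),
    (1, '1'): ('right', 2),
    (2, 'B'): ('write 1', 2),
    (2, '1'): ('right', 3),
    (3, 'B'): ('write 1', 3),
    (3, '1'): ('halt', 3),
}

def run(max_steps=100):
    tape = {}                      # sparse tape: position -> symbol, blank by default
    pos, state = 0, 1              # start at the leftmost square, in state one
    for _ in range(max_steps):
        symbol = tape.get(pos, 'B')
        action, state = TABLE[(state, symbol)]
        if action == 'write 1':
            tape[pos] = '1'
        elif action == 'right':
            pos += 1
        elif action == 'halt':
            break
    return ''.join(tape.get(i, 'B') for i in range(min(tape), max(tape) + 1))

print(run())  # the machine writes out '111' and halts
```

Note that nothing in the table refers to how a state is physically built; each state is identified entirely by what it does given each input, which is the point the next paragraph develops.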

The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.

According to machine-state functionalism, the nature of a mental state is just like the nature of the automaton states described above. Just as state one simply is the state in which, given an input B, such and such happens, so being in pain is the state which disposes one to cry "ouch", become distracted, wonder what the cause is, and so forth.


Psychofunctionalism

A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psychofunctionalism.

The fundamental idea of psychofunctionalism is that psychology is an irreducibly complex science and that the terms that we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions, and further, that such a redefinition would not be desirable or salient were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of the kidney is to filter it and to maintain certain chemical balances, and so on—this is what accounts for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of the mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role that is designated for them within our best scientific psychological theory. If some mental state which is postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist. On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for explanation of human behavior but which are not foreseen by ordinary folk psychological language, then these entities or states exist.

Analytic functionalism

A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur and not by intrinsic properties of the phonemes they comprise. In the case of ordinary language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:
Mental state M is the state that is caused by P and causes Q.
For example, the state of pain is caused by sitting on a tack and causes loud cries, and higher order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the submental states and the (largely fictitious) propositional attitudes they describe.
Hence, its proponents are known as analytic or conceptual functionalists. The essential difference between analytic and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in the determination of which mental state terms and concepts are genuine and which functional identifications may be considered to be genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.

Homuncular functionalism

Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics.
Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher-level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher-level M1 and the set of mental facts that occur at the lower-level M2. Given the transitivity of supervenience, if M1 supervenes on M2, and M2 supervenes on P (physical base), then M1 and M2 both supervene on P, even though they are (allegedly) totally different sets of mental facts.

Since mind-mind supervenience seemed to have become acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.

Functionalism and physicalism

There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.

In the case of David Lewis, there is a distinction in the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's Mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.


Criticism

China brain

Ned Block[7] argues against the functionalist proposal of multiple realizability, where hardware implementation is irrelevant because only the functional level is important. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron (forming what has come to be called a "Blockhead"). According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. However, Block argues, this is patently absurd, so there must be something wrong with the thesis of functionalism since it would allow this to be a legitimate description of a mind.
Some functionalists believe the Chinese nation would have qualia, but that due to its size it is impossible to imagine it being conscious.[8] Indeed, it may be that we are constrained by our theory of mind[9] and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will exist across all types of hardware, or they will not exist at all and are illusory.[10]

The Chinese room

The Chinese room argument by John Searle[11] is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding through the use of a purely functional system. In short, Searle describes a person who only speaks English who is in a room with only Chinese symbols in baskets and a rule book in English for moving the symbols around. The person is then ordered by people outside of the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside of the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply based on these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules; that is, functionalism is an insufficient theory of the mind.
As noted above, in connection with Block's Chinese nation, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "system reply"); that is, the system does know Chinese. Of course, Searle responds that there is nothing more than syntax going on at the higher-level as well, so this reply is subject to the same initial problems. Furthermore, Searle suggests the man in the room could simply memorize the rules and symbol relations. Again, though he would convincingly mimic communication, he would be aware only of the symbols and rules, not of the meaning behind them.

Inverted spectrum

Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, most specifically proposed as an objection to functionalism by Ned Block.[7][12] This thought experiment involves supposing that there is a person, call her Jane, who is born with a condition under which she sees the spectrum of light inverted relative to normal perception. Unlike "normal" people, Jane sees the color violet as yellow, orange as blue, and so forth. So, suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, one can see that all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signs just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical, yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.[13]
David Chalmers tries to show[14] that even though mental content cannot be fully accounted for in functional terms, there is nevertheless a nomological correlation between mental states and functional states in this world. A silicon-based robot, for example, whose functional profile matched our own, would have to be fully conscious. His argument for this claim takes the form of a reductio ad absurdum. The general idea is that since it would be very unlikely for a conscious human being to experience a change in his qualia which he utterly fails to notice, mental content and functional profile appear to be inextricably bound together, at least in the human case. If the subject's qualia were to change, we would expect the subject to notice, and therefore his functional profile to follow suit. A similar argument is applied to the notion of absent qualia. In this case, Chalmers argues that it would be very unlikely for a subject to experience a fading of his qualia which he fails to notice and respond to. This, coupled with the independent assertion that a conscious being's functional profile could be maintained irrespective of its experiential state, leads to the conclusion that the subject of these experiments would remain fully conscious. The problem with this argument, however, as Brian G. Crabb (2005) has observed, is that it begs the central question: how could Chalmers know that the functional profile can be preserved, for example while the conscious subject's brain is being supplanted with a silicon substitute, unless he already assumes that the subject's possibly changing qualia would not be a determining factor? And while changing or fading qualia in a conscious subject might force changes in its functional profile, this tells us nothing about the case of a permanently inverted or unconscious robot. A subject with inverted qualia from birth would have nothing to notice or adjust to.
Similarly, an unconscious functional simulacrum of ourselves (a zombie) would have no experiential changes to notice or adjust to. Consequently, Crabb argues, Chalmers' "fading qualia" and "dancing qualia" arguments fail to establish that cases of permanently inverted or absent qualia are nomologically impossible.

A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so. (Functionalism says that mental states are produced by the functional relations in the brain.) This same type of problem—that there is no argument, just an antithetical assumption at their base—can also be said of both the Chinese room and the Chinese nation arguments. Notice, however, that Crabb's response to Chalmers does not commit this fallacy: His point is the more restricted observation that even if inverted or absent qualia turn out to be nomologically impossible, and it is perfectly possible that we might subsequently discover this fact by other means, Chalmers' argument fails to demonstrate that they are impossible.

Twin Earth

The Twin Earth thought experiment, introduced by Hilary Putnam,[15] is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H₂O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H₂O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H₂O, while your duplicate's is of XYZ.
Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.

Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of input and outputs to include the objects that are the causes of mental representations in the external world.

The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, and thereby the Twin Earth argument does not constitute a genuine argument against it: the assumption entails a flat denial of functionalism itself (which holds that the two waters would not produce different mental states, because the functional relationships would remain unchanged).

Meaning holism

Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor[12] referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counter-intuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.

Another possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic-synthetic distinction, as many seek to avoid).

Triviality arguments

Hilary Putnam,[16] John Searle,[17] and others[18][19] have offered arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that either functionalism reduces to behaviorism, or to complete triviality and therefore a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the states of a physical system are always at least slightly different from one another, such a mapping will always exist, so any system is a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input-output behavior is still required.
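
The mapping move can be made concrete in a few lines (a hypothetical sketch; the state names are invented for illustration): take any physical system that runs through pairwise-distinct states and any desired sequence of "mental" states, and a realizing mapping is guaranteed to exist.

```python
# A warming rock runs through distinct physical states over time:
physical_trace = ['rock@20.0C', 'rock@20.1C', 'rock@20.2C', 'rock@20.3C']

# Any sequence of "mental" states we care to propose:
mental_trace = ['hungry', 'deliberating', 'deciding', 'eating']

# Because the physical states are pairwise distinct, zipping them with the
# mental states yields a well-defined function, i.e. a "realization":
assert len(set(physical_trace)) == len(physical_trace)
realization = dict(zip(physical_trace, mental_trace))

# Under the unconstrained mapping criterion, the rock "realizes" the mind:
assert [realization[p] for p in physical_trace] == mental_trace
```

The triviality objection is precisely that nothing in the unconstrained criterion rules this construction out, so every sufficiently complex object counts as realizing every mind.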

Peter Godfrey-Smith has argued further[20] that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in unique states at each moment and at each possible input, such a mapping will always exist so there will be a transducer layer which will produce whatever physical behavior is desired.
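
Godfrey-Smith's transducer layer can be sketched in the same spirit (hypothetically; the class and function names are invented here): because the wrapped system occupies a unique state at each moment, a lookup table from (state, input) pairs to outputs always exists, so the wrapped whole can be made to show any desired input-output behavior.

```python
class UniqueStateSystem:
    """Stand-in for any physical object whose state never repeats."""
    def __init__(self):
        self.t = 0

    def step(self):
        self.t += 1
        return self.t


def transducer_layer(desired_io):
    """Wrap a unique-state system in a lookup table so that the wrapped
    whole exhibits the desired input -> output behavior at each step."""
    system = UniqueStateSystem()
    # Because the system's state at each step is unique, the keys never
    # collide and the table is a well-defined function.
    table = {(t, inp): out
             for t, (inp, out) in enumerate(desired_io, start=1)}

    def respond(inp):
        return table[(system.step(), inp)]

    return respond


respond = transducer_layer([('poke', 'ouch'), ('greet', 'hello')])
assert respond('poke') == 'ouch'     # step 1
assert respond('greet') == 'hello'   # step 2
```

The innocent-seeming assumption is that bolting on such a simple input-output wrapper should not change whether the underlying object has mental states; granting it lets the desired behavior be engineered for any object at all.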

Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.[20]

Hard problem of consciousness

From Wikipedia, the free encyclopedia
The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes.[1] David Chalmers, who introduced the term "hard problem" of consciousness,[2] contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".[3]

The existence of a "hard problem" is controversial and has been disputed by some philosophers.[4][5] An answer to this question could lie in understanding the roles that physical processes play in creating consciousness and the extent to which these processes create our subjective qualities of experience.[3]

Several questions about consciousness must be resolved in order to acquire a full understanding of it. These include, but are not limited to, whether being conscious can be wholly described in physical terms, such as the aggregation of neural processes in the brain. If consciousness cannot be explained exclusively by physical events, it must transcend the capabilities of physical systems and require an explanation by nonphysical means. For philosophers who assert that consciousness is nonphysical in nature, there remains the question of what, outside of physical theory, is required to explain it.

Formulation of the problem

Chalmers' formulation

In "Facing Up to the Problem of Consciousness", Chalmers motivated the distinction between the hard problem and the easy problems of consciousness.[3]

Easy problems

Chalmers contrasts the Hard Problem with a number of (relatively) Easy Problems that consciousness presents. He emphasizes that what the easy problems have in common is that they all represent some ability, or the performance of some function or behavior:
  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

Other formulations

Various formulations of the "hard problem":
  • "How is it that some organisms are subjects of experience?"
  • "Why does awareness of sensory information exist at all?"
  • "Why do qualia exist?"
  • "Why is there a subjective component to experience?"
  • "Why aren't we philosophical zombies?"
James Trefil notes that "it is the only major question in the sciences that we don't even know how to ask."[6]

Historical predecessors

The hard problem has scholarly antecedents considerably earlier than Chalmers.

Gottfried Leibniz wrote, in a passage now known as Leibniz's gap:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.[7]
Isaac Newton wrote in a letter to Henry Oldenburg:
to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.[8]
T.H. Huxley remarked:
how it is that any thing so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp.[9]


Scientific attempts

There have been scientific attempts to explain the subjective aspects of consciousness, an effort related to the binding problem in neuroscience. Many eminent theorists, including Francis Crick and Roger Penrose, have worked in this field. Nevertheless, even as sophisticated accounts are given, it is unclear whether such theories address the hard problem. Eliminative materialist philosopher Patricia Smith Churchland has famously remarked of Penrose's theories that "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."[10]

Consciousness is fundamental or elusive

Some philosophers, including David Chalmers and Alfred North Whitehead, argue that conscious experience is a fundamental constituent of the universe, a form of panpsychism sometimes referred to as panexperientialism. Chalmers argues that a "rich inner life" is not logically reducible to the functional properties of physical processes. He states that consciousness must be described using nonphysical means: a fundamental ingredient capable of clarifying phenomena that have not been explained using physical means. Positing such a fundamental property, Chalmers argues, is necessary to explain certain functions of the world, much like other fundamental features such as mass and time, and to explain significant principles in nature.

Thomas Nagel has posited that experiences are essentially subjective (accessible only to the individual undergoing them), while physical states are essentially objective (accessible to multiple individuals). So at this stage, we have no idea what it could even mean to claim that an essentially subjective state just is an essentially non-subjective state. In other words, we have no idea of what reductivism really amounts to.[11]

New mysterianism, such as that of Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness.[12]

Deflationary accounts

Some philosophers, such as Daniel Dennett,[4] Stanislas Dehaene,[5] and Peter Hacker,[13] oppose the idea that there is a hard problem. These theorists argue that once we really come to understand what consciousness is, we will realize that the hard problem is unreal. For instance, Dennett asserts that the so-called hard problem will be solved in the process of answering the easy ones.[4] In contrast with Chalmers, he argues that consciousness is not a fundamental feature of the universe and instead will eventually be fully explained by natural phenomena. Instead of involving the nonphysical, he says, consciousness merely plays tricks on people so that it appears nonphysical—in other words, it simply seems like it requires nonphysical features to account for its powers. In this way, Dennett compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[14]

To show how people might commonly be fooled into overstating the powers of consciousness, Dennett describes a normal phenomenon called change blindness, the failure to detect changes of scenery in a series of alternating images.[15] He uses this concept to argue that our overestimation of the brain's visual processing suggests that consciousness is not as pervasive as we take it to be, and that this error of making consciousness more mysterious than it is could derail any developments toward an effective explanatory theory. Critics such as Galen Strawson reply that, in the case of consciousness, even a mistaken experience is itself an experience, and so retains exactly the feature that needs to be explained, contra Dennett.

To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness.[4] He states that consciousness itself is driven simply by these functions, and to strip them away would wipe out any ability to identify thoughts, feelings, and consciousness altogether. So, unlike Chalmers and other dualists, Dennett says that the easy problems and the hard problem cannot be separated from each other. To him, the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.[14]

Dehaene's argument has similarities with those of Dennett. He says Chalmers' 'easy problems of consciousness' are actually the hard problems and the 'hard problems' are based only upon intuitions that, according to Dehaene, are continually shifting as understanding evolves. "Once our intuitions are educated ...Chalmers' hard problem will evaporate" and "qualia...will be viewed as a peculiar idea of the prescientific era, much like vitalism...[Just as science dispatched vitalism] the science of consciousness will eat away at the hard problem of consciousness until it vanishes."[5]

Like Dennett, Peter Hacker argues that the hard problem is fundamentally incoherent and that "consciousness studies," as it exists today, is "literally a total waste of time:"[13]
“The whole endeavour of the consciousness studies community is absurd – they are in pursuit of a chimera. They misunderstand the nature of consciousness. The conception of consciousness which they have is incoherent. The questions they are asking don’t make sense. They have to go back to the drawing board and start all over again.”
Critics of Dennett's approach, such as David Chalmers and Thomas Nagel, argue that Dennett's argument misses the point of the inquiry by merely re-defining consciousness as an external property and ignoring the subjective aspect completely. This has led detractors to refer to Dennett's book Consciousness Explained as Consciousness Ignored or Consciousness Explained Away.[4] Dennett discussed this at the end of his book with a section entitled Consciousness Explained or Explained Away?[15]

Glenn Carruthers and Elizabeth Schier argue that the main arguments for the existence of a hard problem (philosophical zombies, Mary's room, and Nagel's bats) are only persuasive if one already assumes that "consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem". Hence, the arguments beg the question. The authors suggest that "instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments."[16] Against this line of argument, Chalmers says: "Some may be led to deny the possibility [of zombies] in order to make some theory come out right, but the justification of such theories should ride on the question of possibility, rather than the other way round".[17]:96

A notable family of deflationary accounts is the higher-order thought theories of consciousness.[18][19] Peter Carruthers discusses "recognitional concepts of experience", that is, "a capacity to recognize [a] type of experience when it occurs in one's own mental life", and suggests that such a capacity does not depend upon qualia.[20] The most common argument against deflationary accounts and eliminative materialism is the argument from qualia: that conscious experiences are irreducible to physical states, or that current definitions of "physical" are incomplete. Deflationists reply that one and the same reality can appear in different ways, and that the numerical difference of these ways is consistent with a unitary mode of existence of the reality. Critics of the deflationary approach object that qualia are a case in which a single reality cannot have multiple appearances. As John Searle points out: "where consciousness is concerned, the existence of the appearance is the reality."[21]

Massimo Pigliucci distances himself from eliminativism, but he insists that the hard problem is still misguided, resulting from a "category mistake":[22]
Of course an explanation isn't the same as an experience, but that’s because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.

Quantum mind

From Wikipedia, the free encyclopedia
The quantum mind or quantum consciousness[1] hypothesis proposes that classical mechanics cannot explain consciousness, while quantum mechanical phenomena, such as quantum entanglement and superposition, may play an important part in the brain's function, and could form the basis of an explanation of consciousness. It is not a single theory, but rather a collection of distinct ideas.

A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, whereas quantum mechanics can. The idea that quantum theory has something to do with the workings of the mind goes back to Eugene Wigner, who assumed that the wave function collapses due to its interaction with consciousness. However, most contemporary physicists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.[2] Physicist Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons."[3]

The philosopher David Chalmers has argued against quantum consciousness. He has discussed how quantum mechanics may relate to dualistic consciousness.[4] Indeed, Chalmers is skeptical of the ability of any new physics to resolve the hard problem of consciousness.[5][6]

Description of main quantum mind approaches

David Bohm

David Bohm took the view that quantum theory and relativity contradicted one another, and that this contradiction implied that there existed a more fundamental level in the physical universe.[7] He claimed that both quantum theory and relativity pointed towards this deeper theory, which he formulated in terms of a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed implicate order applies both to matter and consciousness, and he suggests that it could explain the relationship between them. Mind and matter are here seen as projections into our explicate order from the underlying reality of the implicate order. Bohm claims that when we look at matter in space, we can see nothing in these concepts that helps us to understand consciousness.
In trying to describe the nature of consciousness, Bohm discusses the experience of listening to music. He believed that the feeling of movement and change that makes up our experience of music derives from the immediate past and the present being held in the brain together, with the notes from the past seen as transformations rather than memories. The notes that were implicate in the immediate past are seen as becoming explicate in the present. Bohm views this as consciousness emerging from the implicate order.

Bohm sees the movement, change or flow and also the coherence of experiences, such as listening to music as a manifestation of the implicate order. He claims to derive evidence for this from the work of Jean Piaget[8] in studying infants. He states that these studies show that young children have to learn about time and space, because they are part of the explicate order, but have a "hard-wired" understanding of movement, because it is part of the implicate order. He compares this "hard-wiring" to Chomsky's theory that grammar is "hard-wired" into young human brains.

In his writings, Bohm never proposed any specific brain mechanism by which his implicate order could emerge in a way that was relevant to consciousness, nor any means by which the propositions could be tested or falsified.[citation needed]

Roger Penrose and Stuart Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as Orchestrated Objective Reduction (Orch-OR). Penrose and Hameroff initially developed their ideas separately, and only later collaborated to produce Orch-OR in the early 1990s. The theory was reviewed and updated by the original authors in late 2013.[9][10]
Penrose's controversial argument began from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a consistent formal proof system cannot prove its own consistency, Gödel-unprovable results are provable by human mathematicians.[citation needed] He took this disparity to mean that human mathematicians are not describable as formal proof systems, and are therefore not running a computable algorithm.[citation needed]

Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, Penrose proposed a new form of wave function collapse that occurred in isolation, called objective reduction. He suggested that each quantum superposition has its own piece of spacetime curvature, and when these become separated by more than one Planck length, they become unstable and collapse.[citation needed] Penrose suggested that objective reduction represented neither randomness nor algorithmic processing, but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derived.[citation needed]
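Penrose's objective-reduction criterion is commonly summarised by a collapse timescale (the standard gloss of his proposal, added here for concreteness; it is not stated in the text above):

```latex
\tau \approx \frac{\hbar}{E_G}
```

where \(E_G\) is the gravitational self-energy of the difference between the mass distributions of the superposed states: the greater the superposed separation, the shorter the lifetime of the superposition.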

Originally, Penrose lacked a detailed proposal for how quantum processing could be implemented in the brain. However, Hameroff read Penrose's work, and suggested that microtubules would be suitable candidates.[citation needed]

Microtubules are composed of tubulin protein dimer subunits. The tubulin dimers each have hydrophobic pockets that are 8 nm apart and which may contain delocalised pi electrons. Tubulins have other smaller non-polar regions that contain pi-electron-rich indole rings separated by only about 2 nm. Hameroff proposes that these electrons are close enough to become quantum entangled.[11] Hameroff originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited.[12] He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too has been experimentally discredited.[13]

Furthermore, he proposed that condensates in one neuron could extend to many others via gap junctions between neurons, thus forming a macroscopic quantum feature across an extended area of the brain. When the wave function of this extended condensate collapsed, it was suggested to non-computationally access mathematical understanding and ultimately conscious experience, that are hypothetically embedded in the geometry of spacetime.[citation needed]

However, Orch-OR made numerous false biological predictions, and is considered to be an extremely poor model of brain physiology. The proposed predominance of 'A' lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al.,[14][15] who showed that all in vivo microtubules have a 'B' lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified.[16] Orch-OR predicted that microtubule coherence reaches the synapses via dendritic lamellar bodies (DLBs); however, De Zeeuw et al. showed this to be impossible[17] by demonstrating that DLBs are located micrometers away from gap junctions.[18]

In January 2014, Hameroff and Penrose announced that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013[19] confirms the hypothesis of the Orch-OR theory.[10][20]

Umezawa, Vitiello, Freeman, Kak

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman have proposed a dialog model of the mind, where this dialog takes place between the classical and the quantum parts of the brain.[21][22] Quantum field theory models of brain dynamics are fundamentally different from the Penrose-Hameroff theory. Subhash Kak has proposed that the physical substratum to neural networks has a quantum basis,[23] but he also points out that the quantum mind will still have machine-like limitations.[24] He points to a role for quantum theory in the distinction between machine intelligence and biological intelligence.[25][26]

Henry Stapp

Henry Stapp favors the idea that quantum waves are reduced only when they interact with consciousness. He argues from the orthodox quantum mechanics of John von Neumann that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse therefore takes place in the mind of the observer associated with the state.[citation needed]

His theory of how mind may interact with matter via quantum processes in the brain differs from that of Penrose and Hameroff.[27]

Criticism by Max Tegmark

The main argument against the quantum mind proposition is that quantum states in the brain would decohere before they reached a spatial or temporal scale at which they could be useful for neural processing. This argument was elaborated by the physicist Max Tegmark. Based on his calculations, Tegmark concluded that quantum systems in the brain decohere at sub-picosecond timescales, commonly assumed to be too short to control brain function.[29][30] (In photosynthetic organisms, by contrast, quantum coherence is involved in the efficient transfer of energy within the timescales Tegmark calculated.[28])

Roger Penrose

From Wikipedia, the free encyclopedia

Sir Roger Penrose
Roger Penrose, 2005
Born: 8 August 1931 (age 83), Colchester, Essex, England
Residence: United Kingdom
Nationality: British
Fields: Mathematical physics
Doctoral advisor: John A. Todd
Other academic advisors: W. V. D. Hodge
He is the brother of Jonathan Penrose, Oliver Penrose and Shirley Hodgson; son of Lionel Penrose; nephew of Roland Penrose.

Sir Roger Penrose OM FRS (born 8 August 1931) is an English mathematical physicist, mathematician and philosopher of science. He is the Emeritus Rouse Ball Professor of Mathematics at the Mathematical Institute of the University of Oxford, as well as an Emeritus Fellow of Wadham College.

Penrose is known for his work in mathematical physics, in particular for his contributions to general relativity and cosmology. He has received a number of prizes and awards, including the 1988 Wolf Prize for physics, which he shared with Stephen Hawking for their contribution to our understanding of the universe.[1]

Early life and academia

Born in Colchester, Essex, England, Roger Penrose is a son of psychiatrist and mathematician Lionel Penrose and Margaret Leathes,[2] and the grandson of the physiologist John Beresford Leathes. His uncle was the artist Roland Penrose, whose son with photographer Lee Miller is Antony Penrose. Penrose is the brother of mathematician Oliver Penrose and of chess Grandmaster Jonathan Penrose. He attended University College School and University College, London, where he graduated with a first-class degree in mathematics.

In 1955, while still a student, Penrose reintroduced the E. H. Moore generalised matrix inverse, also known as the Moore–Penrose inverse,[3] after it had been reinvented by Arne Bjerhammar (1951). Penrose earned his PhD at Cambridge (St John's College) in 1958, writing a thesis on "tensor methods in algebraic geometry" under the algebraist and geometer John A. Todd.

He devised and popularised the Penrose triangle in the 1950s, describing it as "impossibility in its purest form", and exchanged material with the artist M. C. Escher, whose earlier depictions of impossible objects partly inspired it. Escher's Waterfall and Ascending and Descending were in turn inspired by Penrose. As reviewer Manjit Kumar puts it:
As a student in 1954, Penrose was attending a conference in Amsterdam when by chance he came across an exhibition of Escher's work. Soon he was trying to conjure up impossible figures of his own and discovered the tri-bar – a triangle that looks like a real, solid three-dimensional object, but isn't. Together with his father, a physicist and mathematician, Penrose went on to design a staircase that simultaneously loops up and down. An article followed and a copy was sent to Escher. Completing a cyclical flow of creativity, the Dutch master of geometrical illusions was inspired to produce his two masterpieces.[4]
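The Moore–Penrose inverse mentioned above is now a standard linear-algebra tool. As a minimal sketch (using NumPy's `pinv`, with made-up example data), it yields least-squares solutions to overdetermined systems:

```python
import numpy as np

# The Moore-Penrose pseudoinverse generalises the matrix inverse to
# non-square (or singular) matrices.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3x2 system: more equations than unknowns
b = np.array([1.0, 2.0, 3.0])

A_pinv = np.linalg.pinv(A)          # 2x3 pseudoinverse
x = A_pinv @ b                      # least-squares solution of A x = b

# One of the four defining Penrose conditions: A A+ A = A.
assert np.allclose(A @ A_pinv @ A, A)
print(x)                            # approximately [1., 2.]
```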
In 1965, at Cambridge, Penrose proved that singularities (such as black holes) could be formed from the gravitational collapse of immense, dying stars.[5] This work was extended by Hawking to prove the Penrose–Hawking singularity theorems.
Oil painting by Urs Schmid (1995) of a Penrose tiling using fat and thin rhombi.

In 1967, Penrose invented twistor theory, which maps geometric objects in Minkowski space into the 4-dimensional complex space with the metric signature (2,2). In 1969, he conjectured the cosmic censorship hypothesis. This proposes (rather informally) that the universe protects us from the inherent unpredictability of singularities (such as the one at the centre of a black hole) by hiding them from our view behind an event horizon. This form is now known as the "weak censorship hypothesis"; in 1979, Penrose formulated a stronger version called the "strong censorship hypothesis". Together with the BKL conjecture and issues of nonlinear stability, settling the censorship conjectures is one of the most important outstanding problems in general relativity. Also from 1979 dates Penrose's influential Weyl curvature hypothesis on the initial conditions of the observable part of the universe and the origin of the second law of thermodynamics.[6] Penrose and James Terrell independently realised that objects travelling near the speed of light will appear to undergo a peculiar skewing or rotation. This effect has come to be called the Terrell rotation or Penrose–Terrell rotation.[7][8]
A Penrose tiling

Penrose is well known for his 1974 discovery of Penrose tilings, which are formed from two tiles that can only tile the plane nonperiodically, and are the first tilings to exhibit fivefold rotational symmetry. Penrose developed these ideas based on the article Deux types fondamentaux de distribution statistique[9] (1938; an English translation Two Basic Types of Statistical Distribution) by Czech geographer, demographer and statistician Jaromír Korčák. In 1984, such patterns were observed in the arrangement of atoms in quasicrystals.[10] Another noteworthy contribution is his 1971 invention of spin networks, which later came to form the geometry of spacetime in loop quantum gravity. He was influential in popularising what are commonly known as Penrose diagrams (causal diagrams).

In 1983, Penrose was invited to teach at Rice University in Houston by the then provost Bill Gordon. He worked at Rice University from 1983 to 1987.[11]

Later activity

In 2004 Penrose released The Road to Reality: A Complete Guide to the Laws of the Universe, a 1,099-page book aimed at giving a comprehensive guide to the laws of physics. He has proposed a novel interpretation of quantum mechanics.[12]

Penrose is the Francis and Helen Pentz Distinguished (visiting) Professor of Physics and Mathematics at Pennsylvania State University.[13] He is also a member of the Astronomical Review Editorial Board.

An earlier universe

WMAP image of the (extremely tiny) anisotropies in the cosmic background radiation

In 2010, Penrose reported possible evidence, based on concentric circles found in WMAP data of the CMB sky, of an earlier universe existing before the Big Bang of our own present universe.[14] He mentions this evidence in the epilogue of his 2010 book Cycles of Time,[15] in which he presents his reasons, involving Einstein's field equations, the Weyl curvature C, and the Weyl curvature hypothesis (WCH), for thinking that the transition at the Big Bang could have been smooth enough for a previous universe to survive it. He made several conjectures about C and the WCH, some of which were subsequently proved by others.[citation needed] In simple terms, he believes that the singularity in Einstein's field equations at the Big Bang is only an apparent singularity, similar to the well-known apparent singularity at the event horizon of a black hole. The latter can be removed by a change of coordinate system, and Penrose proposes a different change of coordinate system that would likewise remove the singularity at the Big Bang. One implication of this is that the major events at the Big Bang can be understood without unifying general relativity and quantum mechanics, and therefore we are not necessarily constrained by the Wheeler–DeWitt equation, which disrupts time.

Physics and consciousness

Prof. Penrose at a conference.

Penrose has written books on the connection between fundamental physics and human (or animal) consciousness. In The Emperor's New Mind (1989), he argues that known laws of physics are inadequate to explain the phenomenon of consciousness. Penrose proposes the characteristics this new physics may have and specifies the requirements for a bridge between classical and quantum mechanics (what he calls correct quantum gravity). Penrose uses a variant of Turing's halting theorem to demonstrate that a system can be deterministic without being algorithmic. (For example, imagine a system with only two states, ON and OFF. If the system's state is ON when a given Turing machine halts and OFF when the Turing machine does not halt, then the system's state is completely determined by the machine; nevertheless, there is no algorithmic way to determine whether the Turing machine stops.)
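The ON/OFF example can be caricatured in code. The sketch below is only an illustration (a Python generator stands in for a Turing machine, and both machines are my own toy examples): the system's state is fully determined by whether the machine halts, yet a bounded simulation can certify "ON" but can never, in general, certify "OFF".

```python
# Toy illustration: a state determined by halting is deterministic but
# not algorithmically decidable in general.

def runs_forever():
    while True:
        yield          # never exhausts: the machine "does not halt"

def halts_quickly():
    yield              # exhausts after one step: the machine "halts"

def observed_state(machine, step_bound=1000):
    """Approximate the system's state by bounded simulation.
    Returns "ON" if the machine halts within step_bound steps,
    "UNKNOWN" otherwise (we can never certify "OFF" in general)."""
    g = machine()
    for _ in range(step_bound):
        try:
            next(g)
        except StopIteration:
            return "ON"
    return "UNKNOWN"

print(observed_state(halts_quickly))   # ON
print(observed_state(runs_forever))    # UNKNOWN
```

No step bound rescues the general case: for an arbitrary machine, "UNKNOWN" can never be upgraded to a definite "OFF", even though the underlying state is completely determined.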

Penrose believes that such deterministic yet non-algorithmic processes may come into play in quantum mechanical wave function reduction, and may be harnessed by the brain. He argues that present-day computers are unable to have intelligence because they are algorithmically deterministic systems. He argues against the viewpoint that the rational processes of the mind are completely algorithmic and can thus be duplicated by a sufficiently complex computer. This contrasts with supporters of strong artificial intelligence, who contend that thought can be simulated algorithmically. He bases this on claims that consciousness transcends formal logic because things such as the insolubility of the halting problem and Gödel's incompleteness theorem prevent an algorithmically based system of logic from reproducing such traits of human intelligence as mathematical insight. These claims were originally espoused by the philosopher John Lucas of Merton College, Oxford.

The Penrose/Lucas argument about the implications of Gödel's incompleteness theorem for computational theories of human intelligence has been widely criticised by mathematicians, computer scientists and philosophers, and the consensus among experts in these fields seems to be that the argument fails, though different authors may choose different aspects of the argument to attack.[16] Marvin Minsky, a leading proponent of artificial intelligence, was particularly critical, stating that Penrose "tries to show, in chapter after chapter, that human thought cannot be based on any known scientific principle." Minsky's position is exactly the opposite – he believes that humans are, in fact, machines, whose functioning, although complex, is fully explainable by current physics. Minsky maintains that "one can carry that quest [for scientific explanation] too far by only seeking new basic principles instead of attacking the real detail. This is what I see in Penrose's quest for a new basic principle of physics that will account for consciousness."[17]

Penrose responded to criticism of The Emperor's New Mind with his 1994 follow-up Shadows of the Mind, and in 1997 with The Large, the Small and the Human Mind. In those works, he also combined his observations with those of the anaesthesiologist Stuart Hameroff.

Penrose and Hameroff have argued that consciousness is the result of quantum gravity effects in microtubules, which they dubbed Orch-OR (orchestrated objective reduction). Max Tegmark, in a paper in Physical Review E,[18] calculated that the time scale of neuron firing and excitations in microtubules is slower than the decoherence time by a factor of at least 10,000,000,000. The reception of the paper is summed up by this statement in Tegmark's support: "Physicists outside the fray, such as IBM's John A. Smolin, say the calculations confirm what they had suspected all along. 'We're not working with a brain that's near absolute zero. It's reasonably unlikely that the brain evolved quantum behavior'".[19] Tegmark's paper has been widely cited by critics of the Penrose–Hameroff position.

In their reply to Tegmark's paper, also published in Physical Review E, the physicists Scott Hagan, Jack Tuszynski and Hameroff[20][21] claimed that Tegmark did not address the Orch-OR model, but instead a model of his own construction. This involved superpositions of quanta separated by 24 nm rather than the much smaller separations stipulated for Orch-OR. As a result, Hameroff's group claimed a decoherence time seven orders of magnitude greater than Tegmark's, but still well short of the 25 ms required if the quantum processing in the theory were to be linked to the 40 Hz gamma synchrony, as Orch-OR suggested. To bridge this gap, the group made a series of proposals. It was supposed that the interiors of neurons could alternate between liquid and gel states. In the gel state, it was further hypothesized that the water electrical dipoles are oriented in the same direction, along the outer edge of the microtubule tubulin subunits. Hameroff et al. proposed that this ordered water could screen any quantum coherence within the tubulin of the microtubules from the environment of the rest of the brain. Each tubulin also has a tail extending out from the microtubules, which is negatively charged and therefore attracts positively charged ions. It was suggested that this could provide further screening. Finally, it was suggested that the microtubules could be pumped into a coherent state by biochemical energy.
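The arithmetic behind this dispute can be sketched briefly. The 25 ms figure is simply the period of one 40 Hz gamma cycle, and the "seven orders of magnitude" revision still leaves a large gap. In the sketch below, Tegmark's decoherence estimate is taken as roughly 10⁻¹³ s, the longer end of the range reported in his paper; that figure is an assumption for illustration, not a value quoted in this article.

```python
# Rough arithmetic behind the decoherence-time debate.
# Assumption (not from the article): Tegmark's microtubule decoherence
# estimate is taken as ~1e-13 s for illustration.

gamma_frequency_hz = 40                     # gamma synchrony invoked by Orch-OR
required_time_s = 1 / gamma_frequency_hz    # period of one gamma cycle
# This is the 25 ms figure cited in the text:
assert abs(required_time_s - 0.025) < 1e-12

tegmark_decoherence_s = 1e-13               # assumed order of magnitude
# Hagan, Tuszynski and Hameroff claimed seven orders of magnitude more:
revised_decoherence_s = tegmark_decoherence_s * 1e7

# Even the revised estimate falls well short of the 25 ms target:
shortfall_factor = required_time_s / revised_decoherence_s
print(f"revised decoherence time: {revised_decoherence_s:.0e} s")
print(f"shortfall relative to 25 ms: factor of ~{shortfall_factor:.1e}")
```

Under these assumed numbers the revised estimate is on the order of microseconds, still several orders of magnitude below the 25 ms the theory requires, which is why the group advanced the additional screening proposals described above.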

Finally, it was suggested that the configuration of the microtubule lattice might be suitable for quantum error correction, a means of holding together quantum coherence in the face of environmental interaction. In the last decade, some researchers who are sympathetic to Penrose's ideas have proposed an alternative scheme for quantum processing in microtubules based on the interaction of tubulin tails with microtubule-associated proteins, motor proteins and presynaptic scaffold proteins. These proposed alternative processes have the advantage of taking place within Tegmark's time to decoherence.

Hameroff, in a lecture given as part of a Google Tech Talks series exploring quantum biology, gave an overview of current research in the area and responded to subsequent criticisms of the Orch-OR model.[22] In addition, a 2011 paper by Roger Penrose and Stuart Hameroff gives an updated model of their Orch-OR theory in light of those criticisms, and discusses the place of consciousness within the universe.[23]

Phillip Tetlow, although himself supportive of Penrose's views, acknowledges that Penrose's ideas about the human thought process are at present a minority view in scientific circles, citing Minsky's criticisms and quoting science journalist Charles Seife's description of Penrose as "one of a handful of scientists" who believe that the nature of consciousness suggests a quantum process.[19]

In January 2014 Hameroff and Penrose announced that a discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan[24] confirms the hypothesis of Orch-OR theory.[25] A reviewed and updated version of the theory was published along with critical commentary and debate in the March 2014 issue of Physics of Life Reviews.[26]

Personal life

Family life

Penrose is married to Vanessa Thomas, head of mathematics at Abingdon School,[27][28] with whom he has one son.[27] He has three sons from a previous marriage to American Joan Isabel Wedge, whom he married in 1959. He is the elder brother of Jonathan Penrose, the chess player.

Religious views

Penrose does not hold to any religious doctrine,[29] and refers to himself as an atheist.[30] In the film A Brief History of Time, he said, "I think I would say that the universe has a purpose, it's not somehow just there by chance ... some people, I think, take the view that the universe is just there and it runs along – it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it."[31] Penrose is a Distinguished Supporter of the British Humanist Association.

Awards and honours


Penrose has been awarded many prizes for his contributions to science. He was elected a Fellow of the Royal Society of London in 1972. In 1975, Stephen Hawking and Penrose were jointly awarded the Eddington Medal of the Royal Astronomical Society. In 1985, he was awarded the Royal Society Royal Medal. Along with Stephen Hawking, he was awarded the prestigious Wolf Foundation Prize for Physics in 1988. In 1989 he was awarded the Dirac Medal and Prize of the British Institute of Physics. In 1990 Penrose was awarded the Albert Einstein Medal for outstanding work related to the work of Albert Einstein by the Albert Einstein Society. In 1991, he was awarded the Naylor Prize of the London Mathematical Society. From 1992 to 1995 he served as President of the International Society on General Relativity and Gravitation. In 1994, Penrose was knighted for services to science.[32] In the same year he was also awarded an Honorary Degree (Doctor of Science) by the University of Bath.[33] In 1998, he was elected Foreign Associate of the United States National Academy of Sciences. In 2000 he was appointed to the Order of Merit. In 2004 he was awarded the De Morgan Medal for his wide and original contributions to mathematical physics. To quote the citation from the London Mathematical Society:
His deep work on General Relativity has been a major factor in our understanding of black holes. His development of Twistor Theory has produced a beautiful and productive approach to the classical equations of mathematical physics. His tilings of the plane underlie the newly discovered quasi-crystals.[34]
In 2005 Penrose was awarded an honorary doctorate by Warsaw University and Katholieke Universiteit Leuven (Belgium), and in 2006 by the University of York. In 2008 he was awarded the Copley Medal. He is one of the patrons of the Oxford University Scientific Society. In 2011, Penrose was awarded the Fonseca Prize by the University of Santiago de Compostela. In 2012, he was awarded the Richard R. Ernst Medal by ETH Zürich for his contributions to science and for strengthening the connection between science and society.


Penrose wrote the foreword to Beating the Odds: The Life and Times of E. A. Milne by Meg Weston Smith, published by World Scientific Publishing Co in June 2013. He also wrote forewords to Quantum Aspects of Life and to Anthony Zee's book Fearful Symmetry.