
Wednesday, May 12, 2021

Quantum mind

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quantum_mind

The quantum mind or quantum consciousness is a group of hypotheses proposing that classical mechanics cannot explain consciousness. It posits that quantum-mechanical phenomena, such as entanglement and superposition, may play an important part in the brain's function and could explain consciousness.

Assertions that consciousness is somehow quantum-mechanical can overlap with quantum mysticism, a pseudoscientific movement that assigns supernatural characteristics to various quantum phenomena such as nonlocality and the observer effect.

History

Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron."

Other contemporary physicists and philosophers considered these arguments unconvincing. Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons."

David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness.

Approaches

Bohm

David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe. He claimed both quantum theory and relativity pointed to this deeper theory, which he formulated as a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed implicate order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.

Bohm discussed the experience of listening to music. He believed that the sense of movement and change that makes up our experience of music derives from holding the immediate past and the present together in the brain. The musical notes of the immediate past are transformations rather than memories: notes that were implicate in the immediate past become explicate in the present. Bohm viewed this as an example of consciousness emerging from the implicate order.

Bohm saw the movement, change or flow, and the coherence of experiences, such as listening to music, as a manifestation of the implicate order. He claimed to derive evidence for this from Jean Piaget's work on infants. He held these studies to show that young children learn about time and space because they have a "hard-wired" understanding of movement as part of the implicate order. He compared this hard-wiring to Chomsky's theory that grammar is hard-wired into human brains.

Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness. He later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.

According to philosopher Paavo Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level."

Penrose and Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as Orchestrated Objective Reduction (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. They reviewed and updated their theory in 2013.

Penrose's argument stemmed from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal system cannot prove its own consistency, truths that are unprovable within such a system can nevertheless be seen to be true by human mathematicians. Penrose took this to mean that human mathematicians are not formal proof systems and are not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on a fallacious equivocation on the meaning of computation. In the same book, Penrose wrote, "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."

Penrose argued that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, he proposed a new form of wave function collapse that occurs in isolation, which he called objective reduction. He suggested that each quantum superposition has its own piece of spacetime curvature and that when superposed states become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry, from which mathematical understanding and, by later extension, consciousness derive.
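The collapse criterion Penrose later attached to this idea is usually summarized as a timescale tau ≈ ħ/E_G, where E_G is the gravitational self-energy of the difference between the superposed mass distributions. A rough sketch of the arithmetic, with the caveat that the uniform-sphere estimate E_G ≈ Gm²/r and the sample masses below are illustrative assumptions rather than figures from Penrose's papers:

```python
# Order-of-magnitude sketch of Penrose's proposed objective-reduction
# timescale, tau ~ hbar / E_G, where E_G is the gravitational self-energy
# of the difference between the two superposed mass distributions.
# The estimate E_G ~ G * m^2 / r and the sample masses below are
# illustrative assumptions, not figures from Penrose's papers.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

def collapse_time(mass_kg: float, radius_m: float) -> float:
    """Estimate tau = hbar / E_G for two fully displaced copies of a sphere."""
    e_g = G * mass_kg**2 / radius_m
    return HBAR / e_g

# A dust speck collapses within a fraction of a second, while a single
# proton would stay superposed for millions of years.
print(f"dust speck: {collapse_time(1e-14, 1e-6):.2e} s")
print(f"proton:     {collapse_time(1.67e-27, 1e-15):.2e} s")
```

The steep m²/r dependence is the point of the proposal: macroscopic superpositions self-reduce quickly, microscopic ones persist.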

Hameroff hypothesized that microtubules are suitable hosts for quantum behavior. Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and may contain delocalized pi electrons. Tubulins have other smaller non-polar regions that contain pi electron-rich indole rings separated by about 2 nm. Hameroff proposed that these electrons are close enough to become entangled. He originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited. He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules, but this too was experimentally discredited.

Orch-OR has made numerous false biological predictions, and is not an accepted model of brain physiology. In other words, there is a missing link between physics and neuroscience. For instance, the proposed predominance of 'A' lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al., who showed that all in vivo microtubules have a 'B' lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified. Orch-OR predicted that microtubule coherence reaches the synapses via dendritic lamellar bodies (DLBs), but De Zeeuw et al. proved this impossible by showing that DLBs are micrometers away from gap junctions.

In 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013 corroborates Orch-OR theory.

Although these theories are stated in a scientific framework, it is difficult to separate them from scientists' personal opinions. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote,

my own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, 'Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot.' Other people would say, 'No, you can't say it feels something merely because it behaves as though it feels something.' My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was—which I say it couldn't be, if it's entirely computationally controlled.

Penrose continues,

A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either—although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior—that is, where quantum measurement comes in.

W. Daniel Hillis responded, "Penrose has committed the classical mistake of putting humans at the center of the universe. His argument is essentially that he can't imagine how the mind could be as complicated as it is without having some magic elixir brought in from some new principle of physics, so therefore it must involve that. It's a failure of Penrose's imagination.... It's true that there are unexplainable, uncomputable things, but there's no reason whatsoever to believe that the complex behavior we see in humans is in any way related to uncomputable, unexplainable things."

Lawrence Krauss is also blunt in criticizing Penrose's ideas. He has said, "Roger Penrose has given lots of new-age crackpots ammunition by suggesting that at some fundamental scale, quantum mechanics might be relevant for consciousness. When you hear the term 'quantum consciousness,' you should be suspicious.... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system."

Umezawa, Vitiello, Freeman

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain. Their quantum field theory models of brain dynamics are fundamentally different from the Penrose-Hameroff theory.

Pribram, Bohm, Kak

Karl Pribram's holonomic brain theory (quantum holography) invoked quantum mechanics to explain higher order processing by the mind. He argued that his holonomic model solved the binding problem. Pribram collaborated with Bohm in his work on quantum approaches to mind and he provided evidence on how much of the processing in the brain was done in wholes. He proposed that ordered water at dendritic membrane surfaces might operate by structuring Bose-Einstein condensation supporting quantum dynamics.

Stapp

Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. Drawing on John von Neumann's orthodox formulation of quantum mechanics, he argues that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action, so the collapse is tied to the expectations of the observer associated with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev. Georgiev criticized Stapp's model in two respects:

  • Stapp's mind does not have its own wavefunction or density matrix, but nevertheless can act upon the brain using projection operators. Such usage is not compatible with standard quantum mechanics because one can attach any number of ghostly minds to any point in space that act upon physical quantum systems with any projection operators. Stapp's model therefore negates "the prevailing principles of physics".
  • Stapp's claim that the quantum Zeno effect is robust against environmental decoherence directly contradicts a basic theorem of quantum information theory: acting with projection operators upon the density matrix of a quantum system can only increase the system's von Neumann entropy.

Stapp has responded to both of Georgiev's objections.

David Pearce

British philosopher David Pearce defends what he calls physicalistic idealism ("the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions"), and has conjectured that unitary conscious minds are physical states of quantum coherence (neuronal superpositions). This conjecture is, according to Pearce, amenable to falsification, unlike most theories of consciousness, and Pearce has outlined an experimental protocol describing how the hypothesis could be tested using matter-wave interferometry to detect nonclassical interference patterns of neuronal superpositions at the onset of thermal decoherence. Pearce admits that his ideas are "highly speculative," "counterintuitive," and "incredible."

Yu Feng

In a recent paper Yu Feng argues that panpsychism (or panprotopsychism) is compatible with Everett’s relative-state interpretation of quantum mechanics. With the help of quantum Darwinism Feng proposes a hierarchy of co-consciousness relations and claims it may solve the combination problem. Making a comparison with the emergent theory of physical space, Feng suggests that the phenomenal space may emerge from quantum information by the same mechanism, and argues that quantum mechanics resolves any structural mismatch between the mind and the physical brain.

Criticism

These quantum mind hypotheses remain speculative, as Penrose and Pearce acknowledge in their discussions. Until they yield predictions that are tested by experiment, the hypotheses are not grounded in empirical evidence. According to Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact we can make weird quantum phenomena happen. But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it."

The process of testing the hypotheses with experiments is fraught with conceptual/theoretical, practical, and ethical problems.

Conceptual problems

The idea that a quantum effect is necessary for consciousness to function remains in the realm of philosophy. Penrose proposes that quantum effects are necessary, but other theories of consciousness do not require them. For example, the multiple drafts model that Daniel Dennett proposed in his 1991 book Consciousness Explained does not invoke quantum effects. A philosophical argument on either side is not scientific proof, although philosophical analysis can point out key differences between types of models and suggest what experimental differences might be observed. But since there is no clear consensus among philosophers, the philosophical debate offers no conceptual support for the claim that a quantum mind theory is needed.

There are computers that are specifically designed to compute using quantum-mechanical effects. Quantum computing is computing that uses quantum-mechanical phenomena, such as superposition and entanglement, and it differs from binary digital electronic computing based on transistors. Whereas common digital computing requires that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits (qubits), which can be in superpositions of states. One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, since interactions with the external world cause the system to decohere. Some quantum computers require their qubits to be cooled to 20 millikelvin to prevent significant decoherence. As a result, maintaining qubit states for long enough is difficult, and decoherence can render time-consuming quantum algorithms inoperable by corrupting the superpositions. There are no obvious analogies between the functioning of quantum computers and the human brain. Some hypothetical models of the quantum mind have proposed mechanisms for maintaining quantum coherence in the brain, but these mechanisms have not been shown to operate.
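The bit-versus-qubit distinction above can be made concrete in a few lines. A minimal sketch, assuming nothing about the brain: a qubit's state is a normalized complex 2-vector, and measurement yields 0 or 1 with probability given by the squared amplitudes (the Born rule). The equal superposition and the random seed are illustrative choices.

```python
# Minimal illustration of the bit-versus-qubit distinction: a classical
# bit is always 0 or 1; a qubit's state is a normalized complex 2-vector,
# and measuring it yields 0 or 1 with probability |amplitude|^2.
import numpy as np

qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

probs = np.abs(qubit) ** 2            # Born rule: [0.5, 0.5]
assert np.isclose(probs.sum(), 1.0)   # the state is normalized

# Simulate repeated measurements of identically prepared qubits.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10_000, p=probs)
print("P(0) ≈", (samples == 0).mean())  # close to 0.5
```

Each individual measurement destroys the superposition, which is why decoherence from any uncontrolled interaction with the environment is so damaging to a quantum computation.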

Quantum entanglement is a physical phenomenon often invoked for quantum mind models. This effect occurs when pairs or groups of particles interact so that the quantum state of each particle cannot be described independently of the other(s), even when the particles are separated by a large distance. Instead, a quantum state has to be described for the whole system. Measurements of physical properties such as position, momentum, spin, and polarization, performed on entangled particles are found to be correlated. If one particle is measured, the same property of the other particle immediately adjusts to maintain the conservation of the physical phenomenon. According to the formalism of quantum theory, the effect of measurement happens instantly, no matter how far apart the particles are. It is not possible to use this effect to transmit classical information at faster-than-light speeds. Entanglement is broken when the entangled particles decohere through interaction with the environment—for example, when a measurement is made or the particles undergo random collisions or interactions. According to Pearce, "In neuronal networks, ion-ion scattering, ion-water collisions, and long-range Coulomb interactions from nearby ions all contribute to rapid decoherence times; but thermally-induced decoherence is even harder experimentally to control than collisional decoherence." He anticipated that quantum effects would have to be measured in femtoseconds, a trillion times faster than the rate at which neurons function (milliseconds).
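The correlations described above can be sketched with the textbook two-qubit Bell state: neither qubit has a definite state of its own, yet joint measurements always agree. The state choice, seed, and sample size below are illustrative.

```python
# Sketch of entanglement correlations using the Bell state
# (|00> + |11>)/sqrt(2): the four amplitudes describe the whole
# two-qubit system, and measuring both qubits always yields equal bits.
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # amplitudes for 00, 01, 10, 11
probs = np.abs(bell) ** 2                                  # [0.5, 0, 0, 0.5]

rng = np.random.default_rng(1)
outcomes = rng.choice(["00", "01", "10", "11"], size=1_000, p=probs)

# Perfect correlation: only "00" and "11" ever occur.
assert set(outcomes) == {"00", "11"}
```

The correlations appear only when the two measurement records are compared, which is why entanglement cannot be used to transmit classical information faster than light.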

Another possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study, such as consciousness, without expecting that the laws of quantum physics will literally apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's survival depended on the state of a radioactive atom: whether it had decayed and emitted radiation. According to Schrödinger, the Copenhagen interpretation implies that the cat is both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; he intended the example to illustrate the absurdity of the existing view of quantum mechanics. But since Schrödinger's time, physicists have given other interpretations of the mathematics of quantum mechanics, some of which regard the "alive and dead" cat superposition as quite real.

Schrödinger's famous thought experiment poses the question, "when does a quantum system stop existing as a superposition of states and become one or the other?" In the same way, one can ask whether the act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. This analogy about decision-making uses a formalism derived from quantum mechanics, but it does not indicate the actual mechanism by which the decision is made. In this way, the idea is similar to quantum cognition, a field that clearly distinguishes itself from the quantum mind in that it does not rely on the hypothesis that anything quantum mechanical is happening in the brain at the microphysical level. Quantum cognition is based on the quantum-like paradigm (also called the generalized quantum paradigm or quantum structure paradigm): the idea that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which the brain operates. For example, quantum cognition proposes that some decisions can be analyzed as if there were interference between two alternatives, but this is not a physical quantum interference effect.
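The "interference between two alternatives" that quantum cognition invokes is plain arithmetic on complex amplitudes: adding amplitudes before squaring produces a cross term that the classical law of total probability lacks. The amplitudes below are hypothetical illustration values, not fitted to any experiment.

```python
# Interference in the quantum-like formalism: the cross term appears
# when amplitudes are added before squaring. The amplitude values are
# hypothetical illustrations, not data from any study.

amp_a = 0.6 + 0.0j      # amplitude for reaching decision B via path A
amp_not_a = 0.4 + 0.3j  # amplitude for reaching decision B via path not-A

# Classical law of total probability: resolve the path first, then add.
classical = abs(amp_a) ** 2 + abs(amp_not_a) ** 2  # 0.61

# Quantum-like rule: add the amplitudes first, then square.
quantum = abs(amp_a + amp_not_a) ** 2              # 1.09

interference = quantum - classical                 # cross term, 0.48
print(classical, quantum, interference)
```

The cross term can be positive or negative depending on the relative phase of the two amplitudes, which is how the formalism accommodates decision data that violate the classical sum rule.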

Practical problems

A demonstration of a quantum mind effect by experiment is necessary. Is there a way to show that consciousness is impossible without a quantum effect? Can a sufficiently complex digital, non-quantum computer be shown to be incapable of consciousness? Perhaps a quantum computer will show that quantum effects are needed. In any case, increasingly complex digital and quantum computers may be built, and these could demonstrate which type of computer is capable of conscious, intentional thought. But they don't exist yet, and no experimental test has been demonstrated.

Quantum mechanics is a mathematical model that can provide some extremely accurate numerical predictions. Richard Feynman called quantum electrodynamics, based on the quantum mechanics formalism, "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. So it is not impossible that the model could provide an accurate prediction about consciousness that would confirm that a quantum effect is involved. If the mind depends on quantum mechanical effects, the true proof is to find an experiment that provides a calculation that can be compared to an experimental measurement. It has to show a measurable difference between a classical computation result in a brain and one that involves quantum effects.

The main theoretical argument against the quantum mind hypothesis is the assertion that quantum states in the brain would lose coherence before they reached a scale at which they could be useful for neural processing. This supposition was elaborated by Tegmark, whose calculations indicate that quantum systems in the brain decohere at sub-picosecond timescales. No brain response has shown computational results or reactions on such a fast timescale; typical reactions are on the order of milliseconds, many orders of magnitude longer than sub-picosecond timescales.
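The timescale gap behind Tegmark's argument can be stated in plain numbers. The decoherence figures below are order-of-magnitude values commonly attributed to Tegmark's 2000 estimates and should be treated as assumptions for illustration.

```python
# The timescale gap in Tegmark's decoherence argument, in plain numbers.
# The decoherence figures are order-of-magnitude values attributed to
# Tegmark's 2000 estimates; treat them as assumptions.

decoherence_s = {
    "superposed ion": 1e-20,  # estimated decoherence time for neural ion states
    "microtubule": 1e-13,     # estimated decoherence time for microtubule states
}
neural_firing_s = 1e-3        # typical action-potential timescale

for system, t in decoherence_s.items():
    ratio = neural_firing_s / t
    print(f"{system}: decoheres ~{ratio:.0e} times faster than neurons fire")
```

Even the most favorable estimate leaves a gap of ten orders of magnitude between the lifetime of a brain superposition and the speed of neural signaling.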

In support of his multiple drafts model, Daniel Dennett cites an experimental result concerning an optical illusion that unfolds on a time scale of a second or less. In this experiment, two different colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change color as it moves across the visual field: a green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change color before the second light is observed. Velmans argues that the cutaneous rabbit illusion, another illusion that unfolds in about a second, demonstrates that there is a delay while modelling occurs in the brain, a delay first identified by Libet. These slow illusions, which play out over times approaching a second, do not support a proposal that the brain functions on the picosecond time scale.

According to David Pearce, a demonstration of picosecond effects is "the fiendishly hard part – feasible in principle, but an experimental challenge still beyond the reach of contemporary molecular matter-wave interferometry. ...The conjecture predicts that we'll discover the interference signature of sub-femtosecond macro-superpositions."

Penrose says,

The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.

A demonstration of a quantum effect in the brain has to explain this problem or explain why it is not relevant, or that the brain somehow circumvents the problem of the loss of quantum coherency at body temperature. As Penrose proposes, it may require a new type of physical theory.

Ethical problems

According to Lawrence Krauss, "You should be wary whenever you hear something like, 'Quantum mechanics connects you with the universe' ... or 'quantum mechanics unifies you with everything else.' You can begin to be skeptical that the speaker is somehow trying to use quantum mechanics to argue fundamentally that you can change the world by thinking about it." A subjective feeling is not sufficient to make this determination. Humans don't have a reliable subjective feeling for how we do a lot of functions. According to Daniel Dennett, "On this topic, Everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable."

Since humans are the only animals that can verbally communicate their conscious experience, performing experiments to demonstrate quantum effects in consciousness requires experimentation on a living human brain. This is not automatically excluded or impossible, but it seriously limits the kinds of experiments that can be done. Studies of the ethics of brain studies are being actively solicited by the BRAIN Initiative, a U.S. Federal Government funded effort to document the connections of neurons in the brain.

An ethically objectionable practice among some proponents of quantum mind theories is the use of quantum-mechanical terms to make an argument sound more impressive, even when those terms are known to be irrelevant. Dale DeBakcsy notes that "trendy parapsychologists, academic relativists, and even the Dalai Lama have all taken their turn at robbing modern physics of a few well-sounding phrases and stretching them far beyond their original scope in order to add scientific weight to various pet theories." At the very least, these proponents must state clearly whether quantum formalism is being used as an analogy or as an actual physical mechanism, and what evidence they are using for support. An ethical statement by a researcher should specify what kind of relationship their hypothesis has to the physical laws.

Misleading statements of this type have been given by, for example, Deepak Chopra. Chopra has commonly referred to topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself," as determined by one's state of mind. Robert Carroll states Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are literally based on the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Chopra said, "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there’s a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics." On the other hand, he also claims "[Quantum effects are] just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness." In his book Quantum Healing, Chopra stated the conclusion that quantum entanglement links everything in the Universe, and therefore it must create consciousness. In either case, the references to the word "quantum" don't mean what a physicist would claim, and arguments that use the word "quantum" shouldn't be taken as scientifically proven.

Chris Carter includes statements in his book, Science and Psychic Phenomena, of quotes from quantum physicists in support of psychic phenomena. In a review of the book, Benjamin Radford wrote that Carter used such references to "quantum physics, which he knows nothing about and which he (and people like Deepak Chopra) love to cite and reference because it sounds mysterious and paranormal.... Real, actual physicists I've spoken to break out laughing at this crap.... If Carter wishes to posit that quantum physics provides a plausible mechanism for psi, then it is his responsibility to show that, and he clearly fails to do so." Sharon Hill has studied amateur paranormal research groups, and these groups like to use "vague and confusing language: ghosts 'use energy,' are made up of 'magnetic fields', or are associated with a 'quantum state.'"

Statements like these about quantum mechanics indicate a temptation to misinterpret technical, mathematical terms like entanglement in terms of mystical feelings. This approach can be interpreted as a kind of scientism, using the language and authority of science when the scientific concepts don't apply.

Perhaps the final question is, what difference does it make if quantum effects are involved in computations in the brain? It is already known that quantum mechanics plays a role in the brain, since quantum mechanics determines the shapes and properties of molecules like neurotransmitters and proteins, and these molecules affect how the brain works. This is the reason that drugs such as morphine affect consciousness. As Daniel Dennett said, "quantum effects are there in your car, your watch, and your computer. But most things — most macroscopic objects — are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them." Lawrence Krauss said, "We're also connected to the universe by gravity, and we're connected to the planets by gravity. But that doesn't mean that astrology is true.... Often, people who are trying to sell whatever it is they're trying to sell try to justify it on the basis of science. Everyone knows quantum mechanics is weird, so why not use that to justify it? ... I don't know how many times I've heard people say, 'Oh, I love quantum mechanics because I'm really into meditation, or I love the spiritual benefits that it brings me.' But quantum mechanics, for better or worse, doesn't bring any more spiritual benefits than gravity does."

Neural correlates of consciousness

From Wikipedia, the free encyclopedia
 
Figure: The neuronal correlates of consciousness (NCC) constitute the smallest set of neural events and structures sufficient for a given conscious percept or explicit memory; the case illustrated involves synchronized action potentials in neocortical pyramidal neurons.

The neural correlates of consciousness (NCC) constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept. Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena; that is, neural changes which necessarily and regularly correlate with a specific experience. The set should be minimal because, under the assumption that the brain is sufficient to give rise to any given conscious experience, the question is which of its components is necessary to produce it.

Neurobiological approach to consciousness

A science of consciousness must explain the exact relationship between subjective mental states and brain states, that is, the nature of the relationship between the conscious mind and the electro-chemical interactions in the body (the mind–body problem). Progress in neuropsychology and neurophilosophy has come from focusing on the body rather than the mind. In this context the neuronal correlates of consciousness may be viewed as its causes, and consciousness may be thought of as a state-dependent property of some as-yet-undefined complex, adaptive, and highly interconnected biological system.

Discovering and characterizing neural correlates does not offer a theory of consciousness that can explain how particular systems experience anything at all, or how and why they are associated with consciousness, the so-called hard problem of consciousness, but understanding the NCC may be a step toward such a theory. Most neurobiologists assume that the variables giving rise to consciousness are to be found at the neuronal level, governed by classical physics, though a few scholars have proposed theories of quantum consciousness based on quantum mechanics.

There is great apparent redundancy and parallelism in neural networks so, while activity in one group of neurons may correlate with a percept in one case, a different population might mediate a related percept if the former population is lost or inactivated. It may be that every phenomenal, subjective state has a neural correlate. Where the NCC can be induced artificially the subject will experience the associated percept, while perturbing or inactivating the region of correlation for a specific percept will affect the percept or cause it to disappear, giving a cause-effect relationship from the neural region to the nature of the percept.

What characterizes the NCC? What are the commonalities between the NCC for seeing and for hearing? Will the NCC involve all the pyramidal neurons in the cortex at any given point in time? Or only a subset of long-range projection cells in the frontal lobes that project to the sensory cortices in the back? Neurons that fire in a rhythmic manner? Neurons that fire in a synchronous manner? These are some of the proposals that have been advanced over the years.

The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools (e.g., Adamantidis et al. 2007) depends on the simultaneous development of appropriate behavioral assays and model organisms amenable to large-scale genomic analysis and manipulation. It is the combination of such fine-grained neuronal analysis in animals with ever more sensitive psychophysical and brain imaging techniques in humans, complemented by the development of a robust theoretical predictive framework, that will hopefully lead to a rational understanding of consciousness, one of the central mysteries of life.

Level of arousal and content of consciousness

The term consciousness has two common but distinct dimensions: one involving arousal and states of consciousness, the other involving the content of consciousness and conscious states. To be conscious of anything the brain must be in a relatively high state of arousal (sometimes called vigilance), whether in wakefulness or in REM sleep, when dreams are vividly experienced though usually not remembered afterward. Brain arousal level fluctuates in a circadian rhythm but may be influenced by lack of sleep, drugs and alcohol, physical exertion, etc. Arousal can be measured behaviorally by the signal amplitude required to trigger some criterion reaction (for instance, the sound level necessary to evoke an eye movement or a head turn toward the sound source). Clinicians use scoring systems such as the Glasgow Coma Scale to assess the level of arousal in patients.
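The Glasgow Coma Scale mentioned above sums three graded behavioral responses. As a minimal sketch, the scoring can be expressed as follows; the component ranges follow the standard scale (eye 1–4, verbal 1–5, motor 1–6), but the function name and interface are hypothetical, for illustration only.

```python
def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Return the total Glasgow Coma Scale score (3-15) after
    validating that each component lies within its standard range."""
    ranges = {"eye": (eye, 1, 4), "verbal": (verbal, 1, 5), "motor": (motor, 1, 6)}
    for name, (value, lo, hi) in ranges.items():
        if not lo <= value <= hi:
            raise ValueError(f"{name} response must be between {lo} and {hi}")
    return eye + verbal + motor

# A fully alert patient scores the maximum:
print(glasgow_coma_score(4, 5, 6))  # -> 15
```

Lower totals indicate more severely impaired arousal, with 3 (no eye, verbal, or motor response) the minimum possible score.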

High arousal states are associated with conscious states that have specific content, seeing, hearing, remembering, planning or fantasizing about something. Different levels or states of consciousness are associated with different kinds of conscious experiences. The "awake" state is quite different from the "dreaming" state (for instance, the latter has little or no self-reflection) and from the state of deep sleep. In all three cases the basic physiology of the brain is affected, as it also is in altered states of consciousness, for instance after taking drugs or during meditation when conscious perception and insight may be enhanced compared to the normal waking state.

Clinicians talk about impaired states of consciousness as in "the comatose state", "the persistent vegetative state" (PVS), and "the minimally conscious state" (MCS). Here, "state" refers to different "amounts" of external/physical consciousness, from a total absence in coma, persistent vegetative state and general anesthesia, to a fluctuating and limited form of conscious sensation in a minimally conscious state such as sleep walking or during a complex partial epileptic seizure. The repertoire of conscious states or experiences accessible to a patient in a minimally conscious state is comparatively limited. In brain death there is no arousal, but it is unknown whether the subjectivity of experience has been interrupted, rather than merely its observable link with the organism. Functional neuroimaging has shown that parts of the cortex are still active in vegetative patients who are presumed to be unconscious; however, these areas appear to be functionally disconnected from the associative cortical areas whose activity is needed for awareness.

The potential richness of conscious experience appears to increase from deep sleep to drowsiness to full wakefulness, and might be quantified using notions from complexity theory that incorporate both the dimensionality and the granularity of conscious experience to give an integrated-information-theoretical account of consciousness. As behavioral arousal increases, so do the range and complexity of possible behavior. Yet REM sleep is characterized by atonia, low motor arousal and difficulty in waking the sleeper, together with high metabolic and electric brain activity and vivid perception.

Many nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. These nuclei therefore belong to the enabling factors for consciousness. Conversely it is likely that the specific content of any particular conscious sensation is mediated by particular neurons in cortex and their associated satellite structures, including the amygdala, thalamus, claustrum and the basal ganglia.

The neuronal basis of perception

The possibility of precisely manipulating visual percepts in time and space has made vision a preferred modality in the quest for the NCC. Psychologists have perfected a number of techniques – masking, binocular rivalry, continuous flash suppression, motion induced blindness, change blindness, inattentional blindness – in which the seemingly simple and unambiguous relationship between a physical stimulus in the world and its associated percept in the privacy of the subject's mind is disrupted. In particular a stimulus can be perceptually suppressed for seconds or even minutes at a time: the image is projected into one of the observer's eyes but is invisible, not seen. In this manner the neural mechanisms that respond to the subjective percept rather than the physical stimulus can be isolated, permitting visual consciousness to be tracked in the brain. In a perceptual illusion, the physical stimulus remains fixed while the percept fluctuates. The best known example is the Necker cube whose 12 lines can be perceived in one of two different ways in depth.

The Necker Cube: The left line drawing can be perceived in one of two distinct depth configurations shown on the right. Without any other cue, the visual system flips back and forth between these two interpretations.

A perceptual illusion that can be precisely controlled is binocular rivalry. Here, a small image, e.g., a horizontal grating, is presented to the left eye, and another image, e.g., a vertical grating, is shown to the corresponding location in the right eye. In spite of the constant visual stimulus, observers consciously see the horizontal grating alternate every few seconds with the vertical one. The brain does not allow for the simultaneous perception of both images.

Logothetis and colleagues recorded from a variety of visual cortical areas in awake macaque monkeys performing a binocular rivalry task. Macaque monkeys can be trained to report whether they see the left or the right image. The distribution of the switching times, and the way in which changing the contrast in one eye affects them, leave little doubt that monkeys and humans experience the same basic phenomenon. In the primary visual cortex (V1) only a small fraction of cells weakly modulated their response as a function of the monkey's percept, while most cells responded to one or the other retinal stimulus with little regard to what the animal perceived at the time. But in a high-level cortical area such as the inferior temporal cortex along the ventral stream, almost all neurons responded only to the perceptually dominant stimulus, so that a "face" cell fired only when the animal indicated that it saw the face and not the pattern presented to the other eye. This implies that the NCC involve neurons active in the inferior temporal cortex: it is likely that specific reciprocal interactions between neurons in the inferior temporal cortex and parts of the prefrontal cortex are necessary.

A number of fMRI experiments that have exploited binocular rivalry and related illusions to identify the hemodynamic activity underlying visual consciousness in humans demonstrate quite conclusively that activity in the upper stages of the ventral pathway (e.g., the fusiform face area and the parahippocampal place area) as well as in early regions, including V1 and the lateral geniculate nucleus (LGN), follow the percept and not the retinal stimulus. Further, a number of fMRI and DTI experiments suggest V1 is necessary but not sufficient for visual consciousness.

In a related perceptual phenomenon, flash suppression, the percept associated with an image projected into one eye is suppressed by flashing another image into the other eye while the original image remains. Its methodological advantage over binocular rivalry is that the timing of the perceptual transition is determined by an external trigger rather than by an internal event. The majority of cells in the inferior temporal cortex and the superior temporal sulcus of monkeys trained to report their percept during flash suppression follow the animal's percept: when the cell's preferred stimulus is perceived, the cell responds. If the picture is still present on the retina but is perceptually suppressed, the cell falls silent, even though primary visual cortex neurons fire. Single-neuron recordings in the medial temporal lobe of epilepsy patients during flash suppression likewise demonstrate abolishment of response when the preferred stimulus is present but perceptually masked.

Global disorders of consciousness

Given the absence of any accepted criterion for the minimal neuronal correlates necessary for consciousness, it is often difficult to distinguish a persistently vegetative patient, who shows regular sleep-wake transitions and may be able to move or smile, from a minimally conscious patient, who can communicate (on occasion) in a meaningful manner (for instance, by differential eye movements) and who shows some signs of consciousness. In general anesthesia the patient should not experience psychological trauma, but the level of arousal should be compatible with clinical exigencies.

Figure: Midline structures in the brainstem and thalamus are necessary to regulate the level of brain arousal; small, bilateral lesions in many of these nuclei cause a global loss of consciousness.

Blood-oxygen-level-dependent fMRI has demonstrated normal patterns of brain activity in a patient in a vegetative state following a severe traumatic brain injury who was asked to imagine playing tennis or visiting the rooms of her house. Differential brain imaging of patients with such global disturbances of consciousness (including akinetic mutism) reveals that dysfunction in a widespread cortical network, including medial and lateral prefrontal and parietal associative areas, is associated with a global loss of awareness. Impaired consciousness in temporal-lobe epileptic seizures was likewise accompanied by a decrease in cerebral blood flow in frontal and parietal association cortex and an increase in midline structures such as the mediodorsal thalamus.

Relatively local bilateral injuries to midline (paramedian) subcortical structures can also cause a complete loss of awareness. These structures therefore enable and control brain arousal (as determined by metabolic or electrical activity) and are necessary neural correlates. One such example is the heterogeneous collection of more than two dozen nuclei on each side of the upper brainstem (pons, midbrain and posterior hypothalamus), collectively referred to as the reticular activating system (RAS). Their axons project widely throughout the brain. These nuclei – three-dimensional collections of neurons with their own cyto-architecture and neurochemical identity – release distinct neuromodulators such as acetylcholine, noradrenaline/norepinephrine, serotonin, histamine and orexin/hypocretin to control the excitability of the thalamus and forebrain, mediating the alternation between wakefulness and sleep as well as the general level of behavioral and brain arousal. After such trauma, however, the excitability of the thalamus and forebrain can eventually recover and consciousness can return. Another enabling factor for consciousness is the set of five or more intralaminar nuclei (ILN) of the thalamus. These receive input from many brainstem nuclei and project strongly and directly to the basal ganglia and, in a more distributed manner, into layer I of much of the neocortex. Comparatively small (1 cm3 or less) bilateral lesions in the thalamic ILN completely knock out all awareness.

Forward versus feedback projections

Many actions in response to sensory inputs are rapid, transient, stereotyped, and unconscious. They can be thought of as cortical reflexes, characterized by rapid and somewhat stereotyped responses that can take the form of rather complex automated behavior, as seen, e.g., in complex partial epileptic seizures. These automated responses, sometimes called zombie behaviors, can be contrasted with a slower, all-purpose conscious mode that deals with broader, less stereotyped aspects of the sensory inputs (or a reflection of these, as in imagery) and takes time to decide on appropriate thoughts and responses. Without such a conscious mode, a vast number of different zombie modes would be required to react to unusual events.

A feature that distinguishes humans from most animals is that we are not born with an extensive repertoire of behavioral programs that would enable us to survive on our own ("physiological prematurity"). To compensate for this, we have an unmatched ability to learn, i.e., to consciously acquire such programs by imitation or exploration. Once consciously acquired and sufficiently exercised, these programs can become automated to the extent that their execution happens beyond the realms of our awareness. Take, as an example, the incredible fine motor skills exerted in playing a Beethoven piano sonata or the sensorimotor coordination required to ride a motorcycle along a curvy mountain road. Such complex behaviors are possible only because a sufficient number of the subprograms involved can be executed with minimal or even suspended conscious control. In fact, the conscious system may actually interfere somewhat with these automated programs.

From an evolutionary standpoint it clearly makes sense to have both automated behavioral programs that can be executed rapidly in a stereotyped manner, and a slightly slower system that allows time for thinking and planning more complex behavior. This latter aspect may be one of the principal functions of consciousness. Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists, who posit that consciousness did not evolve as an adaptation but as an exaptation, arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina, which is not an adaptation of the retina but merely a by-product of the way the retinal axons were wired. Several scholars, including Pinker, Chomsky, Edelman, and Luria, have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness.

It seems possible that visual zombie modes in the cortex mainly use the dorsal stream in the parietal region. However, parietal activity can affect consciousness by producing attentional effects on the ventral stream, at least under some circumstances. The conscious mode for vision depends largely on the early visual areas (beyond V1) and especially on the ventral stream.

Seemingly complex visual processing (such as detecting animals in natural, cluttered scenes) can be accomplished by the human cortex within 130–150 ms, far too brief for eye movements and conscious perception to occur. Furthermore, reflexes such as the oculovestibular reflex take place at even more rapid time-scales. It is quite plausible that such behaviors are mediated by a purely feed-forward moving wave of spiking activity that passes from the retina through V1, into V4, IT and prefrontal cortex, until it affects motor neurons in the spinal cord that control the finger press (as in a typical laboratory experiment). The hypothesis that the basic processing of information is feedforward is supported most directly by the short times (approx. 100 ms) required for a selective response to appear in IT cells.

Conversely, conscious perception is believed to require more sustained, reverberatory neural activity, most likely via global feedback from frontal regions of neocortex back to sensory cortical areas, which builds up over time until it exceeds a critical threshold. At this point, the sustained neural activity rapidly propagates to parietal, prefrontal and anterior cingulate cortical regions, thalamus, claustrum and related structures that support short-term memory, multi-modality integration, planning, speech, and other processes intimately related to consciousness. Competition prevents more than one, or a very small number, of percepts from being simultaneously and actively represented. This is the core hypothesis of the global workspace theory of consciousness.

In brief, while rapid but transient neural activity in the thalamo-cortical system can mediate complex behavior without conscious sensation, it is surmised that consciousness requires sustained but well-organized neural activity dependent on long-range cortico-cortical feedback.

History

The neurobiologist Christfried Jakob (1866-1956) argued that the only conditions which must have neural correlates are direct sensations and reactions; these are called "intonations".

Neurophysiological studies in animals have provided some insights on the neural correlates of conscious behavior. Vernon Mountcastle, in the early 1960s, set out to study this set of problems, which he termed "the Mind/Brain problem", by studying the neural basis of perception in the somatic sensory system. His labs at Johns Hopkins were among the first, along with Edward V. Evarts at NIH, to record neural activity from behaving monkeys. Struck by the elegance of S. S. Stevens' approach of magnitude estimation, Mountcastle's group discovered that three different modalities of somatic sensation shared one cognitive attribute: in all cases the firing rate of peripheral neurons was linearly related to the strength of the percept elicited. More recently, Ken H. Britten, William T. Newsome, and C. Daniel Salzman have shown that neurons in area MT of monkeys respond with a variability suggesting that they form the basis of decisions about the direction of motion. They first showed, using signal detection theory, that neuronal firing rates are predictive of decisions, and then that stimulation of these neurons could predictably bias the decision. Such studies were followed by Ranulfo Romo in the somatic sensory system, confirming, with a different percept and brain area, that a small number of neurons in one brain area underlie perceptual decisions.
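The signal-detection analysis described above is often summarized as a "choice probability": the area under the ROC curve comparing a neuron's firing rates on trials grouped by the animal's decision. A minimal sketch follows; the function name and the firing-rate values are hypothetical, for illustration only. Values near 0.5 mean the rates carry no information about the choice, while values near 1.0 mean they predict it almost perfectly.

```python
def choice_probability(rates_choice_a, rates_choice_b):
    """ROC area: probability that the firing rate on a randomly drawn
    A-choice trial exceeds that on a random B-choice trial (ties count
    as one half)."""
    wins = 0.0
    for a in rates_choice_a:
        for b in rates_choice_b:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(rates_choice_a) * len(rates_choice_b))

# Hypothetical firing rates (spikes/s) on trials sorted by reported direction:
preferred = [42, 38, 51, 45, 47]
null = [30, 33, 28, 36, 31]
print(choice_probability(preferred, null))  # -> 1.0 (the two rate sets fully separate)
```

The brute-force pairwise comparison shown here is O(n*m); for large trial counts the same quantity is usually computed from the Mann-Whitney U statistic.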

Other lab groups have followed Mountcastle's seminal work relating cognitive variables to neuronal activity with more complex cognitive tasks. Although monkeys cannot talk about their perceptions, behavioral tasks have been created in which animals made nonverbal reports, for example by producing hand movements. Many of these studies employ perceptual illusions as a way to dissociate sensations (i.e., the sensory information that the brain receives) from perceptions (i.e., how the consciousness interprets them). Neuronal patterns that represent perceptions rather than merely sensory input are interpreted as reflecting the neuronal correlate of consciousness.

Using such designs, Nikos Logothetis and colleagues discovered perception-reflecting neurons in the temporal lobe. They created an experimental situation in which conflicting images were presented to different eyes (i.e., binocular rivalry). Under such conditions, human subjects report bistable percepts: they perceive alternately one image or the other. Logothetis and colleagues trained monkeys to report with their arm movements which image they perceived. Temporal lobe neurons in Logothetis's experiments often reflected what the monkeys perceived. Neurons with such properties were less frequently observed in the primary visual cortex, which corresponds to relatively early stages of visual processing. Another set of experiments using binocular rivalry in humans showed that certain layers of the cortex can be excluded as candidates for the neural correlate of consciousness. Logothetis and colleagues switched the images between eyes during the percept of one of the images. Surprisingly, the percept stayed stable even though the primary input to layer 4, the input layer of the visual cortex, had changed. Therefore layer 4 cannot be a part of the neural correlate of consciousness. Mikhail Lebedev and colleagues observed a similar phenomenon in monkey prefrontal cortex. In their experiments monkeys reported the perceived direction of visual stimulus movement (which could be an illusion) by making eye movements. Some prefrontal cortex neurons represented actual, and some represented perceived, displacements of the stimulus. The observation of perception-related neurons in prefrontal cortex is consistent with the theory of Christof Koch and Francis Crick, who postulated that the neural correlate of consciousness resides in prefrontal cortex. Proponents of distributed neuronal processing are likely to dispute the view that consciousness has a precise localization in the brain.

Francis Crick wrote a popular book, "The Astonishing Hypothesis," whose thesis is that the neural correlate for consciousness lies in our nerve cells and their associated molecules. Crick and his collaborator Christof Koch have sought to avoid philosophical debates that are associated with the study of consciousness, by emphasizing the search for "correlation" and not "causation".

There is much room for disagreement about the nature of this correlate (e.g., does it require synchronous spikes of neurons in different regions of the brain? Is the co-activation of frontal or parietal areas necessary?). The philosopher David Chalmers maintains that a neural correlate of consciousness, unlike other correlates such as for memory, will fail to offer a satisfactory explanation of the phenomenon; he calls this the hard problem of consciousness.

Taxon

From Wikipedia, the free encyclopedia

African elephants form the genus Loxodonta, a widely accepted taxon.

In biology, a taxon (back-formation from taxonomy; plural taxa) is a group of one or more populations of an organism or organisms seen by taxonomists to form a unit. Although neither is required, a taxon is usually known by a particular name and given a particular ranking, especially if and when it is accepted or becomes established. It is very common, however, for taxonomists to remain at odds over what belongs to a taxon and the criteria used for inclusion. If a taxon is given a formal scientific name, its use is then governed by one of the nomenclature codes specifying which scientific name is correct for a particular grouping.

Initial attempts at classifying and ordering organisms (plants and animals) were set forth in Linnaeus's system in Systema Naturae, 10th edition (1758), as well as in an unpublished work by Bernard and Antoine Laurent de Jussieu. The idea of a unit-based system of biological classification was first made widely available in 1805 in Augustin Pyramus de Candolle's Principes élémentaires de botanique, printed as the introduction to Jean-Baptiste Lamarck's Flore françoise. Lamarck set out a system for the "natural classification" of plants. Since then, systematists have continued to construct accurate classifications encompassing the diversity of life; today, a "good" or "useful" taxon is commonly taken to be one that reflects evolutionary relationships.

Many modern systematists, such as advocates of phylogenetic nomenclature, use cladistic methods that require taxa to be monophyletic (all descendants of some ancestor). Their basic unit, therefore, is the clade rather than the taxon. Similarly, among those contemporary taxonomists working with the traditional Linnean (binomial) nomenclature, few propose taxa they know to be paraphyletic. An example of a well-established taxon that is not also a clade is the class Reptilia, the reptiles; birds are descendants of reptiles but are not included in the Reptilia (birds are included in the Aves).
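The monophyly requirement above has a simple operational form: a taxon is monophyletic exactly when it coincides with the complete set of descendants of some single ancestor in the tree. A minimal sketch, using a hypothetical, highly simplified amniote tree (internal node names and the tree encoding are assumptions for illustration, not a complete classification):

```python
# Children of each internal node; names absent from this dict are leaves.
children = {
    "Amniota": ["Mammalia", "Sauropsida"],
    "Sauropsida": ["Lepidosauria", "Archosauria"],
    "Archosauria": ["Crocodylia", "Aves"],
}

def leaves(node):
    """Set of all leaf taxa descended from (or equal to) a node."""
    kids = children.get(node, [])
    if not kids:
        return {node}
    out = set()
    for k in kids:
        out |= leaves(k)
    return out

def is_monophyletic(taxon, root="Amniota"):
    """True if the taxon equals the full leaf set of some node,
    i.e. it contains all descendants of a common ancestor."""
    stack = [root]
    while stack:
        node = stack.pop()
        if leaves(node) == set(taxon):
            return True
        stack.extend(children.get(node, []))
    return False

# All sauropsids together form a clade:
print(is_monophyletic({"Lepidosauria", "Crocodylia", "Aves"}))  # -> True
# "Reptiles" without the birds do not (paraphyletic, as in the Reptilia example):
print(is_monophyletic({"Lepidosauria", "Crocodylia"}))          # -> False
```

The second call illustrates the text's Reptilia example: removing the birds from the sauropsid clade leaves a group that matches no single ancestor's full set of descendants.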

History

The term taxon was first used in 1926 by Adolf Meyer-Abich for animal groups, as a back-formation from the word taxonomy; taxonomy itself had been coined a century earlier from the Greek components τάξις (taxis, meaning arrangement) and -νομία (-nomia, meaning method). For plants, the term was proposed by Herman Johannes Lam in 1948, and it was adopted at the VII International Botanical Congress, held in 1950.

Definition

The Glossary of the International Code of Zoological Nomenclature (1999) defines a

  • "taxon, (pl. taxa), n.
A taxonomic unit, whether named or not: i.e. a population, or group of populations of organisms which are usually inferred to be phylogenetically related and which have characters in common which differentiate (q.v.) the unit (e.g. a geographic population, a genus, a family, an order) from other such units. A taxon encompasses all included taxa of lower rank (q.v.) and individual organisms. [...]"

Ranks

Life → Domain → Kingdom → Phylum → Class → Order → Family → Genus → Species
The hierarchy of biological classification's eight major taxonomic ranks. Intermediate minor rankings are not shown.

A taxon can be assigned a taxonomic rank, usually (but not necessarily) when it is given a formal name.

"Phylum" applies formally to any biological domain, but traditionally it was always used for animals, whereas "Division" was traditionally often used for plants, fungi, etc.

A prefix is used to indicate a ranking of lesser importance. The prefix super- indicates a rank above, the prefix sub- indicates a rank below. In zoology the prefix infra- indicates a rank below sub-. For instance, among the additional ranks of class are superclass, subclass and infraclass.

Rank is relative, and restricted to a particular systematic schema. For example, liverworts have been grouped, in various systems of classification, as a family, order, class, or division (phylum). The use of a narrow set of ranks is challenged by users of cladistics; for example, the mere 10 ranks traditionally used between animal families (governed by the ICZN) and animal phyla (usually the highest relevant rank in taxonomic work) often cannot adequately represent the evolutionary history as more about a lineage's phylogeny becomes known.

In addition, the class rank is quite often not an evolutionary but a phenetic or paraphyletic group and, unlike the ranks governed by the ICZN (family-level, genus-level and species-level taxa), usually cannot be made monophyletic by exchanging the taxa it contains. This has given rise to phylogenetic taxonomy and the ongoing development of the PhyloCode, which has been proposed as a new alternative to Linnaean classification, governing the application of names to clades. Many cladists, however, see no need to depart from traditional nomenclature as governed by the ICZN, ICN, etc.

Pain in animals

From Wikipedia, the free encyclopedia

A Galapagos shark hooked by a fishing boat
 
Map legend (recognition of nonhuman animal sentience and suffering, by country): national recognition of animal sentience; partial recognition of animal sentience (certain animals are excluded, only mental health is acknowledged, and/or the laws vary internally); national recognition of animal suffering; partial recognition of animal suffering (only includes domestic animals); no official recognition of animal sentience or suffering; unknown.

Pain negatively affects the health and welfare of animals. "Pain" is defined by the International Association for the Study of Pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage." Only the animal experiencing the pain can know its quality and intensity, and the degree of suffering. It is harder, if possible at all, for an observer to know whether an emotional experience has occurred, especially if the sufferer cannot communicate. Therefore, this concept is often excluded from definitions of pain in animals, such as that provided by Zimmerman: "an aversive sensory experience caused by actual or potential injury that elicits protective motor and vegetative reactions, results in learned avoidance and may modify species-specific behaviour, including social behaviour." Nonhuman animals cannot report their feelings to language-using humans, but observation of their behaviour provides a reasonable indication of the extent of their pain. Just as doctors and medics can sometimes recognize pain in patients with whom they share no common language, indicators of pain in animals can still be understood.

According to the U.S. National Research Council Committee on Recognition and Alleviation of Pain in Laboratory Animals, pain is experienced by many animal species, including mammals and possibly all vertebrates.

The experience of pain

Although there are numerous definitions of pain, almost all involve two key components. First, nociception is required. This is the ability to detect noxious stimuli which evoke a reflex response that rapidly moves the entire animal, or the affected part of its body, away from the source of the stimulus. The concept of nociception does not imply any adverse, subjective "feeling" – it is a reflex action. An example in humans would be the rapid withdrawal of a finger that has touched something hot – the withdrawal occurs before any sensation of pain is actually experienced.

The second component is the experience of "pain" itself, or suffering – the internal, emotional interpretation of the nociceptive experience. Again in humans, this is when the withdrawn finger begins to hurt, moments after the withdrawal. Pain is therefore a private, emotional experience. Pain cannot be directly measured in other animals, including other humans; responses to putatively painful stimuli can be measured, but not the experience itself. To address this problem when assessing the capacity of other species to experience pain, argument-by-analogy is used. This is based on the principle that if an animal responds to a stimulus in a similar way to ourselves, it is likely to have had an analogous experience.

Reflex response to painful stimuli

Reflex arc of a dog when its paw is stuck with a pin. The spinal cord responds to signals from receptors in the paw, producing a reflex withdrawal of the paw. This localized response does not involve brain processes that might mediate a consciousness of pain, though these might also occur.

Nociception usually involves the transmission of a signal along nerve fibers from the site of a noxious stimulus at the periphery to the spinal cord. Although this signal is also transmitted on to the brain, a reflex response, such as flinching or withdrawal of a limb, is produced by return signals originating in the spinal cord. Thus, both physiological and behavioral responses to nociception can be detected, and no reference need be made to a conscious experience of pain. Based on such criteria, nociception has been observed in all major animal taxa.

Awareness of pain

Nerve impulses from nociceptors may reach the brain, where information about the stimulus (e.g. quality, location, and intensity) and its affect (unpleasantness) is registered. Though the brain activity involved has been studied, the brain processes underlying conscious awareness are not well known.

Adaptive value

The adaptive value of nociception is obvious; an organism detecting a noxious stimulus immediately withdraws the limb, appendage or entire body from the noxious stimulus and thereby avoids further (potential) injury. However, a characteristic of pain (in mammals at least) is that pain can result in hyperalgesia (a heightened sensitivity to noxious stimuli) and allodynia (a heightened sensitivity to non-noxious stimuli). When this heightened sensitisation occurs, the adaptive value is less clear. First, the pain arising from the heightened sensitisation can be disproportionate to the actual tissue damage caused. Second, the heightened sensitisation may also become chronic, persisting long after the tissue has healed. This can mean that rather than the actual tissue damage causing pain, it is the pain due to the heightened sensitisation that becomes the concern. For this reason, the sensitisation process is sometimes termed maladaptive. It is often suggested that hyperalgesia and allodynia assist organisms in protecting themselves during healing, but experimental evidence to support this has been lacking.

In 2014, the adaptive value of sensitisation due to injury was tested using the predatory interactions between longfin inshore squid (Doryteuthis pealeii) and black sea bass (Centropristis striata), which are natural predators of this squid. If injured squid are targeted by a bass, they begin their defensive behaviours sooner (indicated by greater alert distances and longer flight initiation distances) than uninjured squid. If an anaesthetic (1% ethanol and MgCl2) is administered prior to the injury, the sensitisation is prevented and the behavioural effect is blocked. The authors claim this study is the first experimental evidence to support the argument that nociceptive sensitisation is actually an adaptive response to injuries.

Argument-by-analogy

To assess the capacity of other species to consciously suffer pain, we resort to argument-by-analogy: if an animal responds to a stimulus the way a human does, it is likely to have had an analogous experience. If we stick a pin in a chimpanzee's finger and she rapidly withdraws her hand, we use argument-by-analogy and infer that, like us, she felt pain. It might be argued that consistency requires us to infer, also, that a cockroach experiences conscious pain when it writhes after being stuck with a pin. The usual counter-argument is that although the physiology of consciousness is not understood, it clearly involves complex brain processes not present in relatively simple organisms. Other analogies have been pointed out. For example, when given a choice of foods, rats and chickens with clinical symptoms of pain will consume more of an analgesic-containing food than animals not in pain. Additionally, the consumption of the analgesic carprofen in lame chickens was positively correlated with the severity of lameness, and consumption resulted in an improved gait. Such anthropomorphic arguments face the criticism that physical reactions indicating pain may be neither the cause nor the result of conscious states, and the approach is subject to criticism of anthropomorphic interpretation. For example, a single-celled organism such as an amoeba may writhe after being exposed to noxious stimuli despite lacking any nervous system.

History

The idea that animals might not experience pain or suffering as humans do traces back at least to the 17th-century French philosopher, René Descartes, who argued that animals lack consciousness. Researchers remained unsure into the 1980s as to whether animals experience pain, and veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. In his interactions with scientists and other veterinarians, Bernard Rollin was regularly asked to "prove" that animals are conscious, and to provide "scientifically acceptable" grounds for claiming that they feel pain. Some authors say that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal, noting that, although it is likely that some animals have at least simple conscious thoughts and feelings, some authors continue to question how reliably animal mental states can be determined.

In different species

The ability to experience pain in an animal, or another human for that matter, cannot be determined directly but it may be inferred through analogous physiological and behavioral reactions. Although many animals share similar mechanisms of pain detection to those of humans, have similar areas of the brain involved in processing pain, and show similar pain behaviours, it is notoriously difficult to assess how animals actually experience pain.

Nociception

Nociceptive nerves, which preferentially detect (potentially) injury-causing stimuli, have been identified in a variety of animals, including invertebrates. The medicinal leech, Hirudo medicinalis, and the sea slug Aplysia are classic model systems for studying nociception. Many other vertebrate and invertebrate animals also show nociceptive reflex responses similar to our own.

Pain

Many animals also exhibit more complex behavioural and physiological changes indicative of the ability to experience pain: they eat less food, their normal behaviour is disrupted, their social behaviour is suppressed, they may adopt unusual behaviour patterns, they may emit characteristic distress calls, and they may experience respiratory and cardiovascular changes, as well as inflammation and the release of stress hormones.

Some criteria that may indicate the potential of another species to feel pain include:

  1. Has a suitable nervous system and sensory receptors
  2. Shows physiological changes in response to noxious stimuli
  3. Displays protective motor reactions that might include reduced use of an affected area, such as limping, rubbing, holding or autotomy
  4. Has opioid receptors and shows reduced responses to noxious stimuli when given analgesics and local anaesthetics
  5. Shows trade-offs between stimulus avoidance and other motivational requirements
  6. Shows avoidance learning
  7. Has high cognitive ability and sentience

Vertebrates

Fish

A typical human cutaneous nerve contains 83% C-type trauma receptors (the type responsible for transmitting signals described by humans as excruciating pain); the same nerves in humans with congenital insensitivity to pain have only 24–28% C-type receptors. The rainbow trout has about 5% C-type fibres, while sharks and rays have none. Nevertheless, fish have been shown to have sensory neurons that are sensitive to damaging stimuli and are physiologically identical to human nociceptors. Behavioural and physiological responses to a painful event appear comparable to those seen in amphibians, birds, and mammals, and administration of an analgesic drug reduces these responses in fish.

Animal welfare advocates have raised concerns about the possible suffering of fish caused by angling. Some countries, e.g. Germany, have banned specific types of fishing, and the British RSPCA now formally prosecutes individuals who are cruel to fish.

Invertebrates

Though it has been argued that most invertebrates do not feel pain, there is some evidence that invertebrates, especially the decapod crustaceans (e.g. crabs and lobsters) and cephalopods (e.g. octopuses), exhibit behavioural and physiological reactions indicating they may have the capacity for this experience. Nociceptors have been found in nematodes, annelids and mollusks. Most insects do not possess nociceptors, one known exception being the fruit fly. In vertebrates, endogenous opioids are neurochemicals that moderate pain by interacting with opiate receptors. Opioid peptides and opiate receptors occur naturally in nematodes, mollusks, insects and crustaceans. The presence of opioids in crustaceans has been interpreted as an indication that lobsters may be able to experience pain, although it has been claimed "at present no certain conclusion can be drawn".

One suggested reason for rejecting a pain experience in invertebrates is that invertebrate brains are too small. However, brain size does not necessarily equate to complexity of function. Moreover, weight for body-weight, the cephalopod brain is in the same size bracket as the vertebrate brain, smaller than that of birds and mammals, but as big as or bigger than most fish brains.

Since September 2010, all cephalopods used for scientific purposes in the EU have been protected by EU Directive 2010/63/EU, which states "...there is scientific evidence of their [cephalopods'] ability to experience pain, suffering, distress and lasting harm." In the UK, animal protection legislation means that cephalopods used for scientific purposes must be killed humanely, according to prescribed methods (known as "Schedule 1 methods of euthanasia") known to minimise suffering.

In medicine and research

Veterinary medicine

For actual or potential animal pain, veterinary medicine uses the same analgesics and anesthetics as are used in humans.

Dolorimetry

Dolorimetry (dolor: Latin: pain, grief) is the measurement of the pain response in animals, including humans. It is practiced occasionally in medicine, as a diagnostic tool, and is regularly used in research into the basic science of pain, and in testing the efficacy of analgesics. Nonhuman animal pain measurement techniques include the paw pressure test, tail flick test, hot plate test and grimace scales.

Grimace scales are used to assess post-operative and disease pain in mammals. Scales have been developed for ten mammalian species, including mice, rats, and rabbits. Dale Langford established and published the Mouse Grimace Scale in 2010, and Susana Sotocinal introduced the Rat Grimace Scale a year later, in 2011. Using video stills from recorders, researchers can track changes in the positioning of an animal's ears and whiskers, orbital tightening, and bulging or flattening of the nose area, and match these images against the images in the grimace scale. Laboratory researchers and veterinarians may use the grimace scales to evaluate when to administer analgesia to an animal, or whether the severity of pain warrants a humane endpoint (euthanasia) for the animal in a study.
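The scoring procedure described above can be sketched as a short calculation. This is an illustrative sketch only: the five facial action units follow the Mouse Grimace Scale, but the 0–2 scoring convention applied here as a helper function, and the example still, are simplified illustrations rather than the published protocol.

```python
# Hypothetical sketch of grimace-scale scoring. Each facial action unit
# is scored 0 (not present), 1 (moderately present), or 2 (obviously
# present) from a single video still; the image's score is the mean.

ACTION_UNITS = ["orbital_tightening", "nose_bulge", "cheek_bulge",
                "ear_position", "whisker_change"]

def grimace_score(ratings: dict) -> float:
    """Mean action-unit score for one image (range 0-2)."""
    scores = [ratings[au] for au in ACTION_UNITS]
    if any(s not in (0, 1, 2) for s in scores):
        raise ValueError("each action unit is scored 0, 1 or 2")
    return sum(scores) / len(scores)

# One hypothetical still: strong orbital tightening, mild nose/cheek
# bulge and whisker change, ears in a normal position.
still = {"orbital_tightening": 2, "nose_bulge": 1, "cheek_bulge": 1,
         "ear_position": 0, "whisker_change": 1}
score = grimace_score(still)
print(score)  # (2+1+1+0+1)/5 = 1.0
```

In practice, scores from several stills of the same animal are averaged, and any threshold for administering analgesia or declaring a humane endpoint is set by the facility's protocol, not by the scale itself.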

Laboratory animals

Animals are kept in laboratories for a wide range of reasons, some of which may involve pain, suffering or distress, whilst others (e.g. many of those involved in breeding) will not. The extent to which animal testing causes pain and suffering in laboratory animals is the subject of much debate. Marian Stamp Dawkins defines "suffering" in laboratory animals as the experience of one of "a wide range of extremely unpleasant subjective (mental) states." The U.S. National Research Council has published guidelines on the care and use of laboratory animals, as well as a report on recognizing and alleviating pain in vertebrates. The United States Department of Agriculture defines a "painful procedure" in an animal study as one that would "reasonably be expected to cause more than slight or momentary pain or distress in a human being to which that procedure was applied." Some critics argue that, paradoxically, researchers raised in the era of increased awareness of animal welfare may be inclined to deny that animals are in pain simply because they do not want to see themselves as people who inflict it. PETA, however, argues that there is no doubt that animals in laboratories are subjected to pain. In the UK, animal research likely to cause "pain, suffering, distress or lasting harm" is regulated by the Animals (Scientific Procedures) Act 1986; in the US, research with the potential to cause pain is regulated by the Animal Welfare Act of 1966.

In the U.S., researchers are not required to provide laboratory animals with pain relief if the administration of such drugs would interfere with their experiment. Laboratory animal veterinarian Larry Carbone writes, "Without question, present public policy allows humans to cause laboratory animals unalleviated pain. The AWA, the Guide for the Care and Use of Laboratory Animals, and current Public Health Service policy all allow for the conduct of what are often called "Category E" studies – experiments in which animals are expected to undergo significant pain or distress that will be left untreated because treatments for pain would be expected to interfere with the experiment."

Severity scales

Eleven countries have national classification systems of pain and suffering experienced by animals used in research: Australia, Canada, Finland, Germany, the Republic of Ireland, the Netherlands, New Zealand, Poland, Sweden, Switzerland, and the UK. The US also has a mandated national scientific animal-use classification system, but it is markedly different from other countries in that it reports on whether pain-relieving drugs were required and/or used. The first severity scales were implemented in 1986 by Finland and the UK. The number of severity categories ranges between 3 (Sweden and Finland) and 9 (Australia). In the UK, research projects are classified as "mild", "moderate", and "substantial" in terms of the suffering the researchers conducting the study say they may cause; a fourth category of "unclassified" means the animal was anesthetized and killed without recovering consciousness. It should be remembered that in the UK system, many research projects (e.g. transgenic breeding, feeding distasteful food) will require a license under the Animals (Scientific Procedures) Act 1986, but may cause little or no pain or suffering. In December 2001, 39 percent (1,296) of project licenses in force were classified as "mild", 55 percent (1,811) as "moderate", 2 percent (63) as "substantial", and 4 percent (139) as "unclassified". In 2009, of the project licenses issued, 35 percent (187) were classified as "mild", 61 percent (330) as "moderate", 2 percent (13) as "severe", and 2 percent (11) as "unclassified".
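The severity-band percentages quoted above are simple proportions of the total licences; a minimal sketch of the calculation, using the December 2001 counts from the text:

```python
# Share of UK project licences by severity band, December 2001
# (counts taken from the figures quoted above).
counts = {"mild": 1296, "moderate": 1811, "substantial": 63, "unclassified": 139}
total = sum(counts.values())  # 3,309 licences in force

# Each band's share of the total, rounded to the nearest whole percent.
shares = {band: round(100 * n / total) for band, n in counts.items()}
print(shares)  # {'mild': 39, 'moderate': 55, 'substantial': 2, 'unclassified': 4}
```

The rounded shares reproduce the published figures, which is a quick consistency check on reported statistics of this kind.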

In the US, the Guide for the Care and Use of Laboratory Animals defines the parameters for animal testing regulations. It states, "The ability to experience and respond to pain is widespread in the animal kingdom...Pain is a stressor and, if not relieved, can lead to unacceptable levels of stress and distress in animals." The Guide states that the ability to recognize the symptoms of pain in different species is essential for the people caring for and using animals. Accordingly, all issues of animal pain and distress, and their potential treatment with analgesia and anesthesia, are required regulatory issues for animal protocol approval.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...