Saturday, January 31, 2026

Evolutionary neuroscience

From Wikipedia, the free encyclopedia

Evolutionary neuroscience is the scientific study of the evolution of nervous systems. Evolutionary neuroscientists investigate the evolution and natural history of nervous system structure, functions and emergent properties. The field draws on concepts and findings from both neuroscience and evolutionary biology. Historically, most empirical work has been in the area of comparative neuroanatomy, and modern studies often make use of phylogenetic comparative methods. Selective breeding and experimental evolution approaches are also being used more frequently.

Conceptually and theoretically, the field is related to fields as diverse as cognitive genomics, neurogenetics, developmental neuroscience, neuroethology, comparative psychology, evo-devo, behavioral neuroscience, cognitive neuroscience, behavioral ecology, biological anthropology and sociobiology.

Evolutionary neuroscientists examine changes in genes, anatomy, physiology, and behavior to study the evolution of changes in the brain. They study a multitude of processes including the evolution of vocal, visual, auditory, taste, and learning systems as well as language evolution and development. In addition, evolutionary neuroscientists study the evolution of specific areas or structures in the brain such as the amygdala, forebrain and cerebellum as well as the motor or visual cortex.

History

Studies of the brain began in ancient Egyptian times, but the field of evolutionary neuroscience took shape after the publication of Darwin's On the Origin of Species in 1859. At that time, brain evolution was largely viewed in relation to the incorrect scala naturae, and phylogeny and the evolution of the brain were still seen as linear. During the early 20th century, several theories of evolution competed: Darwinism was based on the principles of natural selection and variation, Lamarckism on the inheritance of acquired traits, orthogenesis on the assumption that a tendency toward perfection steers evolution, and saltationism on the idea that discontinuous variation creates new species. Darwin's theory became the most widely accepted and allowed people to start thinking about how animals and their brains evolve.

The 1936 book The Comparative Anatomy of the Nervous System of Vertebrates Including Man by the Dutch neurologist C.U. Ariëns Kappers (first published in German in 1921) was a landmark publication in the field. Following the Evolutionary Synthesis, the study of comparative neuroanatomy was conducted with an evolutionary view, and modern studies incorporate developmental genetics. It is now accepted that phylogenetic changes occur independently between species over time and cannot be linear. It is also believed that an increase in brain size correlates with an increase in neural centers and in behavioral complexity.

Major arguments

Several arguments have come to define the history of evolutionary neuroscience. The first is the argument between E.G. St. Hilaire and G. Cuvier over "common plan versus diversity". St. Hilaire argued that all animals are built on a single plan or archetype and stressed the importance of homologies between organisms, while Cuvier believed that the structure of organs was determined by their function and that knowledge of the function of one organ could help discover the functions of others; he argued that there were at least four different archetypes. After Darwin, the idea of evolution gained acceptance, and with it St. Hilaire's emphasis on homologous structures.

The second major argument pits Aristotle's scala naturae (scale of nature) and the great chain of being against the phylogenetic bush. The scala naturae, later also called the phylogenetic scale, was based on the premise that phylogenies are linear, like a scale, while the phylogenetic bush argument held that phylogenies are not linear and more closely resemble a bush – the currently accepted view.

A third major argument concerns the size of the brain and whether relative or absolute size is more relevant to function. In the late 18th century, it was determined that the brain-to-body ratio decreases as body size increases (see the allometric sketch below). More recently, however, the focus has shifted to absolute brain size, which scales with internal structures and functions, with the degree of structural complexity, and with the amount of white matter in the brain, all suggesting that absolute size is a much better predictor of brain function.

Finally, a fourth argument is that of natural selection (Darwinism) versus developmental constraints (concerted evolution). It is now accepted that the evolution of development is what causes adult species to show differences, and evolutionary neuroscientists maintain that many aspects of brain function and structure are conserved across species.
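As a rough illustration of the brain-to-body relationship behind the third argument (the exact exponent varies across studies and taxa, so this is only an assumed, typical value), the pattern is usually written as an allometric power law:

    E = c \cdot P^{\alpha}, \qquad \alpha \approx 0.67\text{–}0.75

where E is brain mass, P is body mass, and c is a taxon-dependent constant. Because \alpha < 1, the ratio E/P = c \cdot P^{\alpha - 1} falls as body size grows, which is why relative and absolute brain size can rank species differently.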

Techniques

Throughout its history, evolutionary neuroscience has depended on developments in biological theory and technique, and the field has been shaped by new methods that allow parts of the nervous system to be discovered and examined. In 1873, C. Golgi devised the silver nitrate method, which allowed the brain to be described at the cellular rather than merely the gross level. Santiago Ramón y Cajal and his brother Pedro Ramón used this method to analyze numerous parts of brains, broadening the field of comparative neuroanatomy. In the second half of the 19th century, new techniques allowed scientists to identify neuronal cell groups and fiber bundles in brains. In 1885, Vittorio Marchi discovered a staining technique that let scientists see induced axonal degeneration in myelinated axons; in 1950, the original Nauta procedure allowed for more accurate identification of degenerating fibers; and in the 1970s, multiple molecular tracers were discovered that are still used in experiments today. In the last 20 years, cladistics has also become a useful tool for examining variation in the brain.

Evolution of brains

Early in the history of animal life, many creatures lacked brains altogether; among them was the amphioxus, which can be traced back as far as 550 million years ago. The amphioxus had a much simpler way of life that did not require a brain. In place of one, it had a limited nervous system composed of only a small collection of cells. These cells were used efficiently: many of the sensing cells were intertwined with the cells of its very simple motor system, allowing it to propel itself through the water and react without much processing, while the remaining cells detected light, compensating for its lack of eyes. It also had no need for a sense of hearing. Even with such limited senses, the amphioxus survived well, as its life consisted mainly of sitting on the seafloor and feeding. Although the amphioxus' "brain" might seem severely underdeveloped compared with a human's, it was well suited to its environment, which has allowed the animal to prosper for millions of years.

Although many scientists once assumed that the brain evolved in order to think, such a view is today considered a great misconception. About 500 million years ago, the Earth entered the Cambrian period, when hunting became a new concern for survival. At this point, animals became sensitive to the presence of other animals, which could serve as food. Although hunting did not inherently require a brain, it was one of the main pressures that pushed the development of one, as organisms went on to develop advanced sensory systems.

In response to progressively more complicated surroundings, in which animals with brains began competing for survival, animals had to learn to manage their energy. As creatures acquired a variety of senses, they developed allostasis, which played the role of an early brain by drawing on past experience to improve prediction. Since prediction beats reaction, organisms that planned their manoeuvres were more likely to survive than those that did not, and nature favoured those that also managed their energy well. Animals that had not developed allostasis were at a disadvantage in exploration, foraging and reproduction, because their risk of death was higher.

As allostasis continued to develop, animals' bodies likewise grew in size and complexity. They progressively developed cardiovascular, respiratory and immune systems, which required something more complex than a limited collection of cells to regulate them. This pushed the nervous systems of many creatures to develop into a brain that was sizeable and strikingly similar to how most animal brains look today.

Evolution of the human brain

Darwin, in The Descent of Man, stipulated that the mind evolved simultaneously with the body. According to his theory, all humans have a barbaric core that they learn to deal with. Darwin's theory allowed people to start thinking about the way animals and their brains evolve.

Reptile brain

Plato's thinking about the human mind contemplated the idea that all humans were once lizards, with similar survival needs such as feeding, fighting and mating. In the classical era, Plato described this concept as the "lizard mind" – the deepest layer and one of the three parts of his conception of a three-part human mind. In the 20th century, P. MacLean developed a similar, modern triune brain theory.

Recent research in molecular genetics has provided evidence that the neurons of reptiles and nonhuman mammals do not differ from those of humans. Instead, new research suggests that all mammals, and potentially reptiles, birds and some species of fish, evolved from a common organizational pattern. This research reinforces the idea that human brains are not structurally different from those of many other organisms.

The cerebral cortex of reptiles resembles that of mammals, although simplified. Although the evolution and function of the human cerebral cortex are still shrouded in mystery, we know that it is the part of the brain that has changed most dramatically during recent evolution. In this scheme, the reptilian brain, which appeared around 300 million years ago, served basic urges and instincts such as fighting, reproducing, and mating; the mammalian (limbic) brain evolved roughly 100 million years later and added the ability to feel emotion; and eventually a rational part developed that controls our inner animal.

Visual perception

Vision allows humans to process the surrounding world to a certain extent. From the wavelengths of light it receives, the human brain can associate what it sees with specific events. Although the brain perceives its surroundings at a specific moment, it also predicts upcoming changes in the environment. Once it has anticipated them, the brain begins to prepare an adequate response to the new scenario. It does so using the data at its disposal, drawing on past experiences and memories to form a proper response. Sometimes, however, the brain predicts inaccurately, and the mind perceives a false image. Such an incorrect image arises when the brain uses an inappropriate memory to respond to what it is facing, so that the memory does not match the real scenario.

The rabbit–duck illusion is a famous ambiguous image in which a rabbit or a duck can be seen. The earliest known version is an unattributed drawing from the 23 October 1892 issue of Blätter magazine.

Research about how visual perception has developed in evolution is today best understood through studying present-day primates since the organization of the brain cannot be ascertained only by analyzing fossilized skulls.

The brain interprets visual information in the occipital lobe, a region at the back of the brain. The occipital lobe contains the visual cortex, which, together with the thalamus, is one of the two main actors in processing visual information. The process of interpreting that information has proven to be more complex than "what you see is what you get": misinterpreting visual information is more common than previously believed.

As knowledge of the human brain has grown, researchers have found that our visual perception is much closer to a construction of the brain than to a direct "photograph" of what is in front of us. This can lead to misperceiving certain situations or elements as the brain attempts to keep us safe. For example, an on-edge soldier may perceive a young child holding a stick as a grown man with a gun when the brain's sympathetic system, or fight-or-flight mode, is activated.

An example of this phenomenon can be observed in the rabbit–duck illusion. Depending on how the image is looked at, the brain can interpret it as a rabbit or as a duck. There is no right or wrong answer, but the illusion shows that what is seen may not be the reality of the situation.

Auditory perception

The organization of the human auditory cortex is divided into core, belt, and parabelt. This closely resembles that of present-day primates.

Auditory perception closely resembles visual perception: the brain is wired to act on what it expects to experience. The sense of hearing helps situate an individual and also gives hints about what else is nearby. If something moves, the listener knows approximately where it is, and from its sound the brain can predict what moved. If someone hears leaves rustling in a forest, the brain might interpret the sound as a potentially dangerous animal when it is simply another person walking. The brain can predict many things from what it hears, but those predictions are not always correct.

Language development

Evidence of a rich cognitive life in the primate relatives of humans is extensive, and a wide range of specific behaviours in line with Darwinian theory is well documented. Until recently, however, research has disregarded nonhuman primates in the context of evolutionary linguistics, primarily because, unlike vocal-learning birds, our closest relatives seem to lack imitative abilities. Evolutionarily speaking, there is considerable evidence that a genetic groundwork for language has been in place for millions of years, as with many other capabilities and behaviours observed today.

While evolutionary linguists agree that volitional control over vocalizing and expressing language is a quite recent leap in the history of the human species, that is not to say that auditory perception is a recent development as well. Research has shown substantial evidence of well-defined neural pathways linking cortices to organize auditory perception in the brain. Thus, the issue lies in our ability to imitate sounds.

Beyond the fact that primates may be poorly equipped to learn sounds, studies have shown them to learn and use gestures far better. Visual cues and motoric pathways developed millions of years earlier in our evolution, which seems to be one reason for our earlier ability to understand and use gestures.

Cognitive specializations

Evolution shows how certain environments favor the development of specific cognitive functions of the brain that help an animal – or, in this case, a human – to live successfully in that environment.

Cognitive specialization is a theory in which cognitive functions, such as the ability to communicate socially, can be passed down genetically to offspring, benefiting a species in the process of natural selection. With respect to the human brain, it has been theorized that very specific social skills apart from language, such as trust, vulnerability, navigation, and self-awareness, can also be passed on to offspring.

Quantum mind

From Wikipedia, the free encyclopedia

The quantum mind or quantum consciousness is a group of hypotheses proposing that local physical laws and interactions from classical mechanics or connections between neurons alone cannot explain consciousness. These hypotheses posit instead that quantum-mechanical phenomena, such as entanglement and superposition that cause nonlocalized quantum effects, interacting in smaller features of the brain than cells, may play an important part in the brain's function and could explain critical aspects of consciousness. These scientific hypotheses are as yet unvalidated, and they can overlap with quantum mysticism.

History

Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron".

Other contemporary physicists and philosophers considered these arguments unconvincing. Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons".

David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why specific macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no specific reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either.

Approaches

Bohm and Hiley

David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe. He claimed that both quantum theory and relativity pointed to this deeper theory, a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.

Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness. He later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.

David Bohm also collaborated with Basil Hiley on work that claimed mind and matter both emerge from an "implicate order". Hiley in turn worked with philosopher Paavo Pylkkänen. According to Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level".

Penrose and Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as "orchestrated objective reduction" (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. They reviewed and updated their theory in 2013.

Penrose's argument stemmed from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal system cannot prove its own consistency, Gödel's unprovable results are provable by human mathematicians. Penrose took this to mean that human mathematicians are not formal proof systems and not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on fallacious equivocation on the meaning of computation. In the same book, Penrose wrote: "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."
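For reference, the theorems Penrose invokes can be stated informally as follows (a standard textbook formulation, not Penrose's own wording): for any consistent, effectively axiomatized formal system F strong enough for elementary arithmetic,

    (1)\quad F \nvdash G_F \ \text{ and } \ F \nvdash \lnot G_F \qquad \text{(first incompleteness theorem)}
    (2)\quad F \nvdash \mathrm{Con}(F) \qquad \text{(second incompleteness theorem)}

Penrose's reading is that a mathematician who accepts F as sound can nevertheless see that the Gödel sentence G_F is true, so mathematical insight cannot be any fixed F running as an algorithm; the Bringsjord and Xiao objection mentioned above concerns the step from this observation to claims about computation.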

Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, he proposed a new form of wave function collapse that occurs in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derives.
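Penrose's objective-reduction criterion is usually summarized by an order-of-magnitude collapse time (stated here in its commonly quoted form; this is his proposal, not established physics):

    \tau \approx \frac{\hbar}{E_G}

where E_G is the gravitational self-energy of the difference between the mass distributions of the superposed states: the greater the separation of the superposed geometries, the larger E_G and the sooner the superposition is supposed to reduce to one alternative.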

Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior. Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and may contain delocalized π electrons. Tubulins have other smaller non-polar regions that contain π-electron-rich indole rings separated by about 2 nm. Hameroff proposed that these electrons are close enough to become entangled. He originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited. He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules, but this too was experimentally discredited.

For instance, the proposed predominance of A-lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al., who showed that all in vivo microtubules have a B lattice and a seam. Orch-OR predicted that microtubule coherence reaches the synapses through dendritic lamellar bodies (DLBs), but De Zeeuw et al. proved this impossible by showing that DLBs are micrometers away from gap junctions.

In 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013 corroborates Orch-OR theory. Experiments that showed that anaesthetic drugs reduce how long microtubules can sustain suspected quantum excitations appear to support the quantum theory of consciousness.

In April 2022, the results of two related experiments at the University of Alberta and Princeton University were announced at The Science of Consciousness conference, providing further evidence of quantum processes operating within microtubules. In a study Stuart Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules further than expected, something that did not occur when the experiment was repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex owing to the wide range of length scales in the fluid-filled extracellular space. Nevertheless, University of Oxford quantum physicist Vlatko Vedral said that this connection with consciousness is a really long shot.

Also in 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness.

Although these theories are stated in a scientific framework, it is difficult to separate them from scientists' personal opinions. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote:

[M]y own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, "Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot." Other people would say, "No, you can't say it feels something merely because it behaves as though it feels something." My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was—which I say it couldn't be, if it's entirely computationally controlled.

Penrose continues:

A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either—although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior—that is, where quantum measurement comes in.

Umezawa, Vitiello, Freeman

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain. Their quantum field theory models of brain dynamics are fundamentally different from the Penrose–Hameroff theory.

Quantum brain dynamics

As described by Harald Atmanspacher, "Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness."

The original motivation in the early 20th century for relating quantum theory to consciousness was essentially philosophical. It is fairly plausible that conscious free decisions ("free will") are problematic in a perfectly deterministic world, so quantum randomness might indeed open up novel possibilities for free will. (On the other hand, randomness is problematic for goal-directed volition!)

Ricciardi and Umezawa proposed in 1967 a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons. Mari Jibu and Kunio Yasue later popularized these results under the name "quantum brain dynamics" (QBD) as the hypothesis to explain the function of the brain within the framework of quantum field theory with implications on consciousness.

Pribram

Karl Pribram's holonomic brain theory (quantum holography) invoked quantum field theory to explain higher-order processing of memory in the brain. He argued that his holonomic model solved the binding problem. Pribram collaborated with Bohm in his work on quantum approaches to the thought process. Pribram suggested that much of the processing in the brain is done in a distributed fashion, and proposed that the fine-fibered, felt-like dendritic fields might follow the principles of quantum field theory when storing and retrieving long-term memory.

Stapp

Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues from the orthodox quantum mechanics of John von Neumann that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse, therefore, takes place in the expectation of the observer associated with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev.

Catecholaminergic neuron electron transport (CNET)

CNET is a hypothesized neural signaling mechanism in catecholaminergic neurons that would use quantum mechanical electron transport. The hypothesis is based in part on the observation by many independent researchers that electron tunneling occurs in ferritin, an iron storage protein that is prevalent in those neurons, at room temperature and ambient conditions. The hypothesized function of this mechanism is to assist in action selection, but the mechanism itself would be capable of integrating millions of cognitive and sensory neural signals using a physical mechanism associated with strong electron-electron interactions. Each tunneling event would involve a collapse of an electron wave function, but the collapse would be incidental to the physical effect created by strong electron-electron interactions.
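For context, the generic quantum-tunneling estimate that such proposals appeal to (a textbook approximation, not a CNET-specific result) gives the transmission probability through a barrier of height V and width d for an electron of energy E < V:

    T \approx e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2 m_e (V - E)}}{\hbar}

Because T falls off exponentially with distance, sustained transport over the longer length scales relevant to ferritin arrays would plausibly require sequential hops between sites rather than a single tunneling event.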

CNET predicted a number of physical properties of these neurons that have been subsequently observed experimentally, such as electron tunneling in substantia nigra pars compacta (SNc) tissue and the presence of disordered arrays of ferritin in SNc tissue. The hypothesis also predicted that disordered ferritin arrays like those found in SNc tissue should be capable of supporting long-range electron transport and providing a switching or routing function, both of which have also been subsequently observed.

Another prediction of CNET was that the largest SNc neurons should mediate action selection. This prediction was contrary to earlier proposals about the function of those neurons at that time, which were based on predictive reward dopamine signaling. A team led by Dr. Pascal Kaeser of Harvard Medical School subsequently demonstrated that those neurons do in fact code movement, consistent with the earlier predictions of CNET. While the CNET mechanism has not yet been directly observed, it may be possible to do so using quantum dot fluorophores tagged to ferritin or other methods for detecting electron tunneling.

CNET is applicable to a number of different consciousness models as a binding or action selection mechanism, such as Integrated Information Theory (IIT) and Sensorimotor Theory (SMT). It is noted that many existing models of consciousness fail to specifically address action selection or binding. For example, O'Regan and Noë call binding a "pseudo problem," but also state that "the fact that object attributes seem perceptually to be part of a single object does not require them to be 'represented' in any unified kind of way, for example, at a single location in the brain, or by a single process. They may be so represented, but there is no logical necessity for this." Simply because there is no "logical necessity" for a physical phenomenon does not mean that it does not exist, or that, once identified, it can be ignored. Likewise, global workspace theory (GWT) models appear to treat dopamine as modulatory, based on the prior understanding of those neurons from predictive reward dopamine signaling research, but GWT models could be adapted to include modeling of moment-by-moment activity in the striatum to mediate action selection, as observed by Kaeser. CNET is applicable to those neurons as a selection mechanism for that function, as otherwise that function could result in seizures from simultaneous actuation of competing sets of neurons. While CNET by itself is not a model of consciousness, it is able to integrate different models of consciousness through neural binding and action selection. However, a more complete understanding of how CNET might relate to consciousness would require a better understanding of strong electron-electron interactions in ferritin arrays, which implicates the many-body problem.

Criticism

These hypotheses of the quantum mind remain hypothetical speculation, as Penrose admits in his discussions. Until they make a prediction that is tested by experimentation, the hypotheses are not based on empirical evidence. In 2010, Lawrence Krauss was guarded in criticising Penrose's ideas. He said: "Roger Penrose has given lots of new-age crackpots ammunition... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system. To some extent it could be, because memories are stored at the molecular level, and at a molecular level quantum mechanics is significant." According to Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact, we can make weird quantum phenomena happen. But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it."

The process of testing the hypotheses with experiments is fraught with conceptual/theoretical, practical, and ethical problems.

Conceptual problems

The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett proposed a theory called multiple drafts model, which doesn't indicate that quantum effects are needed, in his 1991 book Consciousness Explained. A philosophical argument on either side is not a scientific proof, although philosophical analysis can indicate key differences in the types of models and show what type of experimental differences might be observed. But since there is no clear consensus among philosophers, there is no conceptual support that a quantum mind theory is needed.

A possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's survival depended on the state of a radioactive atom—whether it had decayed and emitted radiation. According to Schrödinger, the Copenhagen interpretation implies that the cat is both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; he intended the example to illustrate the absurdity of the existing view of quantum mechanics. But since Schrödinger's time, physicists have given other interpretations of the mathematics of quantum mechanics, some of which regard the "alive and dead" cat superposition as quite real. Schrödinger's famous thought experiment poses the question of when a system stops existing as a quantum superposition of states. In the same way, one can ask whether the act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. This analogy of decision-making uses a formalism derived from quantum mechanics, but does not indicate the actual mechanism by which the decision is made.
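In the usual notation (shown here only to make the analogy concrete), the atom-and-cat system evolves into an entangled superposition, and "opening the box" corresponds to a projection onto one branch:

    |\Psi\rangle = \tfrac{1}{\sqrt{2}}\big(|\text{undecayed}\rangle|\text{alive}\rangle + |\text{decayed}\rangle|\text{dead}\rangle\big)
    \;\xrightarrow{\text{observation}}\; |\text{undecayed}\rangle|\text{alive}\rangle \ \text{or}\ |\text{decayed}\rangle|\text{dead}\rangle, \quad \text{each with probability } \tfrac{1}{2}

The decision-making analogy replaces the two branches with two candidate choices, without claiming that any physical superposition exists in the brain.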

In this way, the idea is similar to quantum cognition. This field clearly distinguishes itself from the quantum mind, as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm, generalized quantum paradigm, or quantum structure paradigm that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which it operates. For example, quantum cognition proposes that some decisions can be analyzed as if there is interference between two alternatives, but it is not a physical quantum interference effect.
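Concretely, quantum cognition models replace the classical law of total probability with a quantum-probability version containing an interference term (the formula below is the standard quantum-cognition formalism, used purely as a mathematical analogy):

    P(A) = P(B)\,P(A\mid B) + P(\lnot B)\,P(A\mid \lnot B) + 2\sqrt{P(B)\,P(A\mid B)\,P(\lnot B)\,P(A\mid \lnot B)}\,\cos\theta

A nonzero \theta lets the model fit decision data that violate the classical law, without asserting that any physical quantum interference occurs in neurons.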

Practical problems

The main theoretical argument against the quantum-mind hypothesis is the assertion that quantum states in the brain would lose coherency before they reached a scale where they could be useful for neural processing. This supposition was elaborated by Max Tegmark. His calculations indicate that quantum systems in the brain decohere on sub-picosecond timescales. No brain process has been shown to produce computational results or reactions on so fast a timescale; typical reactions are on the order of milliseconds, trillions of times longer than sub-picosecond timescales.
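The gap Tegmark points to can be made explicit with a simple order-of-magnitude comparison (his published estimates, rounded):

    \tau_{\text{decoherence}} \sim 10^{-20}\text{–}10^{-13}\ \text{s}, \qquad \tau_{\text{neural}} \sim 10^{-3}\text{–}10^{-1}\ \text{s}, \qquad \frac{\tau_{\text{neural}}}{\tau_{\text{decoherence}}} \gtrsim 10^{10}

so any putative quantum superposition would be destroyed at least ten billion times faster than the fastest neural processes that could use it.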

Daniel Dennett, in support of his multiple drafts model, uses an experimental result concerning an optical illusion that unfolds on a timescale of less than a second or so. In this experiment, two different-colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change color as it moves across the visual field: a green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change color before the second light is observed. Velmans argues that the cutaneous rabbit illusion, another illusion that unfolds in about a second, demonstrates that there is a delay while modelling occurs in the brain, a delay that was discovered by Libet. These slow illusions, which happen at times of less than a second, do not support a proposal that the brain functions on the picosecond timescale.

Penrose says:

The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large-scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.

Penrose also said in an interview:

...whatever consciousness is, it must be beyond computable physics.... It's not that consciousness depends on quantum mechanics, it's that it depends on where our current theories of quantum mechanics go wrong. It's to do with a theory that we don't know yet.

A demonstration of a quantum effect in the brain has to explain this problem or explain why it is not relevant, or that the brain somehow circumvents the problem of the loss of quantum coherency at body temperature. As Penrose proposes, it may require a new type of physical theory, something "we don't know yet."

Ethical problems

Deepak Chopra has referred to a "quantum soul" existing "apart from the body", to human "access to a field of infinite possibilities", and to other quantum mysticism topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum-mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself", as determined by one's state of mind. Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are based on the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Chopra said: "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there's a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics." On the other hand, he also claims that quantum effects are "just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness." In his book Quantum Healing, Chopra stated the conclusion that quantum entanglement links everything in the Universe, and therefore it must create consciousness.

According to Daniel Dennett, "On this topic, Everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable."

While quantum effects are significant in the physiology of the brain, critics of quantum mind hypotheses challenge whether the effects of known or speculated quantum phenomena in biology scale up to have significance in neuronal computation, much less the emergence of consciousness as phenomenon. Daniel Dennett said, "Quantum effects are there in your car, your watch, and your computer. But most things—most macroscopic objects—are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them."

Omniscience

From Wikipedia, the free encyclopedia

Omniscience is the property of possessing maximal knowledge. In Hinduism, Buddhism, Sikhism and the Abrahamic religions, it is often attributed to a divine being or an all-knowing spirit, entity or person. In Jainism, omniscience is an attribute that any individual can eventually attain. In Buddhism, there are differing beliefs about omniscience among different schools.

Etymology

The word omniscience derives from the Latin word sciens ("to know" or "conscious") and the prefix omni ("all" or "every"), but also means "all-seeing".

In religion

Buddhism

The topic of omniscience has been much debated in various Indian traditions, but no more so than by the Buddhists. After Dharmakirti's excursions into the subject of what constitutes a valid cognition, Śāntarakṣita and his student Kamalaśīla thoroughly investigated the subject in the Tattvasamgraha and its commentary the Panjika. The arguments in the text can be broadly grouped into four sections:

  • The refutation that cognitions, either perceived, inferred, or otherwise, can be used to refute omniscience.
  • A demonstration of the possibility of omniscience through apprehending the selfless universal nature of all knowables, by examining what it means to be ignorant and the nature of mind and awareness.
  • A demonstration of the total omniscience where all individual characteristics (svalaksana) are available to the omniscient being.
  • The specific demonstration of Shakyamuni Buddha's non-exclusive omniscience, while holding that the Buddha's knowledge is truly infinite and that no other god or being can match it.

Christianity

Some modern Christian theologians argue that God's omniscience is inherent rather than total, and that God chooses to limit his omniscience in order to preserve the free will and dignity of his creatures. John Calvin, among other theologians of the 16th century who were comfortable with the definition of God as omniscient in the total sense, embraced the doctrine of predestination rather than hold that beings are able to choose freely.

Hinduism

In the Bhakti tradition of Vaishnavism, where Vishnu is worshipped as the supreme God, Vishnu is attributed with numerous qualities such as omniscience, energy, strength, lordship, vigour, and splendour.

Islam

God in Islam is attributed with absolute omniscience. God knows the past, the present, and the future. It is compulsory for a Muslim to believe that God is indeed omniscient, as stated in one of the six articles of faith:

  • To believe in God's divine decree and predestination

Say: Do you instruct God about your religion? But God knows all that is in the heavens and on the earth; God is Knowing of all things

It is believed that humans can only change their predestination (wealth, health, deed etc.) and not divine decree (date of birth, date of death, family etc.), thus allowing free will.

Baha'i Faith

Omniscience is an attribute of God, yet it is also an attribute that reveals sciences to humanity:

In like manner, the moment the word expressing My attribute “The Omniscient” issueth forth from My mouth, every created thing will, according to its capacity and limitations, be invested with the power to unfold the knowledge of the most marvelous sciences, and will be empowered to manifest them in the course of time at the bidding of Him Who is the Almighty, the All-Knowing.

Jainism

In Jainism, omniscience is considered the highest type of perception. In the words of a Jain scholar, "The perfect manifestation of the innate nature of the self, arising on the complete annihilation of the obstructive veils, is called omniscience."

Jainism views infinite knowledge as an inherent capability of every soul. Arihanta is the word used by Jains to refer to those human beings who have conquered all inner passions (like attachment, greed, pride, anger) and possess Kevala Jnana (infinite knowledge). They are said to be of two kinds:

  1. Sāmānya kevali – omniscient beings (Kevalins) who are concerned with their own liberation.
  2. Tirthankara kevali – human beings who attain omniscience and then help others to achieve the same.

Omniscience and free will

Omniciencia, mural by José Clemente Orozco

Whether omniscience, particularly regarding the choices that a human will make, is compatible with free will has been debated by theologians and philosophers. The argument that divine foreknowledge is not compatible with free will is known as theological fatalism. It is argued that if humans are free to choose between alternatives, God could not know what this choice will be.

A question arises: if an omniscient entity knows everything, even about its own decisions in the future, does it therefore forbid any free will to that entity? William Lane Craig states that the question subdivides into two:

  1. If God foreknows the occurrence of some event E, does E happen necessarily?
  2. If some event E is contingent, how can God foreknow E's occurrence?

However, this kind of argument fails to recognize that it commits the modal fallacy. It is possible to show that the first premise of arguments like these is fallacious.
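The fallacy can be stated compactly in modal notation (a standard textbook gloss on the argument above, where K(E) stands for "God foreknows E" and the box for necessity):

    \text{Valid:}\quad \Box\,\big(K(E) \rightarrow E\big) \qquad\qquad \text{Invalid:}\quad K(E) \rightarrow \Box\,E

It is necessarily true that if God foreknows E then E occurs, but it does not follow that E itself occurs necessarily; sliding from the first form to the second is the modal fallacy in question.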

Omniscience and the privacy of conscious experience

Some philosophers, such as Patrick Grim, Linda Zagzebski, Stephan Torre, and William Mander have discussed the issue of whether the apparent exclusively first-person nature of conscious experience is compatible with God's omniscience. There is a strong sense in which conscious experience is private, meaning that no outside observer can gain knowledge of what it is like to be me as me. If a subject cannot know what it is like to be another subject in an objective manner, the question is whether that limitation applies to God as well. If it does, then God cannot be said to be omniscient since there is then a form of knowledge that God lacks access to.

The philosopher Patrick Grim most notably raised this issue. Linda Zagzebski argued against this by introducing the notion of perfect empathy, a proposed relation that God can have to subjects that would allow God to have perfect knowledge of their conscious experience. William Mander argued that God can only have such knowledge if our experiences are part of God's broader experience. Stephan Torre claimed that God can have such knowledge if self-knowledge involves the ascription of properties, either to oneself or to others.

Consciousness causes collapse

The postulate that consciousness causes collapse is an interpretation of quantum mechanics in which consciousness is postulated to be the main mechanism behind the process of measurement in quantum mechanics. It is a historical interpretation of quantum mechanics that is largely discarded by modern physicists. The idea is attributed to Eugene Wigner who wrote about it in the 1960s, but traces of the idea appear as early as the 1930s. Wigner later rejected this interpretation in the 1970s and 1980s.

This interpretation has been tied to the origin of pseudoscientific currents and New Age movements, specifically quantum mysticism.

History

Earlier work

According to Werner Heisenberg’s recollections in Physics and Beyond, Niels Bohr is said to have rejected the necessity of a conscious observer in quantum mechanics as early as 1927.

In his 1932 book Mathematical Foundations of Quantum Mechanics, John von Neumann argued that the mathematics of quantum mechanics allows the collapse of the wave function to be placed at any position in the causal chain from the measurement device to the "subjective perception" of the human observer. However von Neumann did not explicitly relate measurement with consciousness. In 1939, Fritz London and Edmond Bauer argued that the consciousness of the observer played an important role in measurement. However London wrote about consciousness in terms of philosophical phenomenology and not necessarily as a physical process.

Wigner's work

The idea that "consciousness causes collapse" is attributed to Eugene Wigner who first wrote about it in his 1961 article "Remarks on the mind-body question" and developed it further during the 1960s. Wigner reformulated the Schrödinger's cat thought experiment as Wigner's friend and proposed that the consciousness of an observer is the demarcation line that precipitates collapse of the wave function, independent of any realist interpretation. The mind is postulated to be non-physical and the only true measurement apparatus.

The idea was criticized early by Abner Shimony in 1963 and by Hilary Putnam a year later.

Wigner discarded the conscious collapse interpretation in the later 1970s. In a 1982 lecture, Wigner said that his early view of quantum mechanics should be criticized as solipsism. In 1984, he wrote that he had been persuaded to abandon it by the 1970 work of H. Dieter Zeh on quantum decoherence and macroscopic quantum phenomena.

After Wigner

The idea of consciousness causing collapse has been promoted and developed by Henry Stapp, a member of the Fundamental Fysiks Group, since 1993.

Description

Measurement in standard quantum mechanics

In the orthodox Copenhagen interpretation, quantum mechanics predicts only the probabilities for different observed experimental outcomes. What constitutes an observer or a measurement is not directly specified by the theory, and the behavior of a system under measurement and observation is completely different from its usual behavior: the wavefunction that describes a system spreads out into an ever-larger superposition of different possible situations. However, during observation, the wavefunction describing the system collapses to one of several options. If there is no observation, this collapse does not occur, and none of the options ever become less likely.

It can be predicted using quantum mechanics, absent a collapse postulate, that an observer observing a quantum superposition will turn into a superposition of different observers seeing different things. The observer will have a wavefunction which describes all the possible outcomes. Still, in actual experience, an observer never senses a superposition, but always senses that one of the outcomes has occurred with certainty. This apparent conflict between a wavefunction description and classical experience is called the problem of observation (see Measurement problem).
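Schematically (standard quantum-mechanics notation, independent of any particular interpretation), unitary evolution without a collapse postulate produces an entangled observer state, whereas the collapse postulate selects one branch:

    \big(\alpha|0\rangle + \beta|1\rangle\big)\,|\text{ready}\rangle \;\xrightarrow{\text{unitary}}\; \alpha|0\rangle|\text{sees }0\rangle + \beta|1\rangle|\text{sees }1\rangle
    \;\xrightarrow{\text{collapse}}\; |0\rangle|\text{sees }0\rangle \text{ with probability } |\alpha|^2 \ \text{(or } |1\rangle|\text{sees }1\rangle \text{ with probability } |\beta|^2\text{)}

The measurement problem is the question of what, if anything, licenses the second arrow.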

Consciousness-causes-collapse interpretation

This consciousness causes collapse interpretation has been summarized thus:

The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.

Stapp has argued for the concept as follows:

From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe... Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device... Our bodies and brains thus become ... parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation...

Objections to the interpretation

Wigner shifted away from "consciousness causes collapse" in his later years. This was partly because he was embarrassed that "consciousness causes collapse" can lead to a kind of solipsism, but also because he decided that he had been wrong to try to apply quantum physics at the scale of everyday life (specifically, he rejected his initial idea of treating macroscopic objects as isolated systems).

Bohr said circa 1927 that it "still makes no difference whether the observer is a man, an animal, or a piece of apparatus."

This interpretation relies upon an interactionist form of dualism that is inconsistent with the materialism commonly used to understand the brain and accepted by most scientists. (Materialism assumes that consciousness has no special role in relation to quantum mechanics.) The measurement problem notwithstanding, critics point to the causal closure of physics, suggesting a problem with how consciousness and matter might interact, reminiscent of objections to Descartes' substance dualism.

The only form of interactionist dualism that has seemed even remotely tenable in the contemporary picture is one that exploits certain properties of quantum mechanics. There are two ways this might go. First, some [e.g., Eccles 1986] have appealed to the existence of quantum indeterminacy, and have suggested that a nonphysical consciousness might be responsible for filling the resultant causal gaps, determining which values some physical magnitudes might take within an apparently "probabilistic" distribution... This is an audacious and interesting suggestion, but it has a number of problems... A second way in which quantum mechanics bears on the issue of causal closure lies with the fact that in some interpretations of the quantum formalism, consciousness itself plays a vital causal role, being required to bring about the so-called "collapse of the wave-function." This collapse is supposed to occur upon any act of measurement; and in one interpretation, the only way to distinguish a measurement from a nonmeasurement is via the presence of consciousness. This theory is certainly not universally accepted (for a start, it presupposes that consciousness is not itself physical, surely contrary to the views of most physicists), and I do not accept it myself, but in any case it seems that the kind of causal work consciousness performs here is quite different from the kind required for consciousness to play a role in directing behavior... In any case, all versions of interactionist dualism have a conceptual problem that suggests that they are less successful in avoiding epiphenomenalism than they might seem; or at least they are no better off than naturalistic dualism. Even on these views, there is a sense in which the phenomenal is irrelevant. We can always subtract the phenomenal component from any explanatory account, yielding a purely causal component.

— David Chalmers, "The Irreducibility of Consciousness" in The Conscious Mind: In Search of a Fundamental Theory

The interpretation has also been criticized for not explaining which things have sufficient consciousness to collapse the wave function. It also posits an important role for the conscious mind, and it has been questioned how this could work in the early universe, before consciousness had evolved or emerged. It has been argued that "[consciousness causes collapse] does not allow sensible discussion of Big Bang cosmology or biological evolution". For example, Roger Penrose remarked: "[T]he evolution of conscious life on this planet is due to appropriate mutations having taken place at various times. These, presumably, are quantum events, so they would exist only in linearly superposed form until they finally led to the evolution of a conscious being—whose very existence depends on all the right mutations having 'actually' taken place!" Some proponents respond by positing a universal mind (see also panpsychism and panexperientialism). Other researchers have expressed similar objections to the introduction of any subjective element into the collapse of the wavefunction.

Testability

It has been argued that delayed-choice quantum eraser experiments empirically falsify this interpretation. However, the argument was shown to be invalid: an interference pattern only becomes visible after post-measurement detections are correlated by means of a coincidence counter; were that not the case, the experiment would permit signaling into the past. The delayed-choice quantum eraser has also been cited in support of this interpretation, but, as with the other arguments, none of the cited references proves or falsifies it.
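A schematic way to see why the coincidence counter matters (a simplified, textbook-style sketch rather than anything drawn from the cited references): conditioning on the two "eraser" outcomes, labelled here D1 and D2, yields complementary fringe and anti-fringe patterns at the signal detector, but the unconditioned signal distribution is their even mixture, which shows no interference and therefore carries no retrocausal signal.

\begin{align}
P(x \mid D_1) &\propto 1 + \cos\phi(x), \\
P(x \mid D_2) &\propto 1 - \cos\phi(x), \\
P(x) &= \tfrac{1}{2}\,P(x \mid D_1) + \tfrac{1}{2}\,P(x \mid D_2) \;\propto\; 1 .
\end{align}

Only after the signal detections are sorted by the eraser outcome do the fringes reappear.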

The central role played by consciousness in this interpretation naturally invites psychological experiments to verify or falsify it. One approach relies on explaining the empirical presentiment effect quantum mechanically; another uses the psychological priming effect to design a suitable test. Proponents of both approaches have claimed successful verification.

Reception

A poll conducted at a quantum mechanics conference in 2011 among 33 participants (including physicists, mathematicians, and philosophers) found that 6% of participants (2 of the 33) believed that the observer "plays a distinguished physical role (e.g., wave-function collapse by consciousness)", while 55% (18 of the 33) believed that the observer "plays a fundamental role in the application of the formalism but plays no distinguished physical role". The authors also note that "Popular accounts have sometimes suggested that the Copenhagen interpretation attributes such a role to consciousness. In our view, this is to misunderstand the Copenhagen interpretation."

Models of consciousness

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Models_of_consciousness

Models of consciousness are used to illustrate and aid in understanding and explaining distinctive aspects of consciousness. Sometimes the models are labeled theories of consciousness. Anil Seth defines such models as those that relate brain phenomena such as fast irregular electrical activity and widespread brain activation to properties of consciousness such as qualia. Seth allows for different types of models including mathematical, logical, verbal and conceptual models.

Neuroscience

Neural correlates of consciousness

The neural correlates of consciousness (NCC) formalism is used as a major step toward explaining consciousness. The NCC are defined as the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some as-yet-undefined complex, adaptive, and highly interconnected biological system.

Global workspace theory

Global workspace theory (GWT) is a cognitive architecture and theoretical framework for understanding consciousness introduced by cognitive scientist Bernard Baars in 1988. The theory uses a theater metaphor: conscious experience is like material illuminated on a stage, with attention acting as a spotlight. Specialized unconscious processes operate in parallel, competing for access to a "global workspace" that broadcasts winning content throughout the brain. GWT is one of the leading scientific theories of consciousness and has been the subject of adversarial collaborations testing its predictions against integrated information theory. The Dehaene–Changeux model is a neural network implementation of global workspace principles.
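As a toy illustration only (not Baars's formal model, and not code from the article), the competition-and-broadcast idea can be sketched in a few lines of Python: specialist processes run in parallel, the most salient proposal wins the workspace, and its content is then made globally available to every process.

# Toy sketch of the global-workspace idea (illustrative only).
from dataclasses import dataclass
import random

@dataclass
class Specialist:
    name: str
    received: list

    def propose(self):
        # Each unconscious specialist offers some content with a salience score.
        return (random.random(), f"{self.name}-content")

    def receive_broadcast(self, content):
        # Broadcast content becomes globally available to all specialists.
        self.received.append(content)

specialists = [Specialist(n, []) for n in ("vision", "audition", "memory", "planning")]

for cycle in range(3):
    proposals = [s.propose() for s in specialists]   # parallel competition
    salience, winner_content = max(proposals)        # the "spotlight" picks a winner
    for s in specialists:
        s.receive_broadcast(winner_content)          # global broadcast
    print(f"cycle {cycle}: broadcast {winner_content!r} (salience {salience:.2f})")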

Dehaene–Changeux model

The Dehaene–Changeux model (DCM), also known as the global neuronal workspace or global cognitive workspace model, is a computer model of the neural correlates of consciousness programmed as a neural network. Stanislas Dehaene and Jean-Pierre Changeux introduced the model in 1986. It is associated with Bernard Baars's global workspace theory of consciousness.

Electromagnetic theories of consciousness

Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics. Some electromagnetic theories are also quantum mind theories of consciousness.

Orchestrated objective reduction

The orchestrated objective reduction (Orch-OR) model is based on the hypothesis that consciousness in the brain originates from quantum processes inside neurons, rather than from connections between neurons (the conventional view). The mechanism is held to involve molecular structures called microtubules. The hypothesis was advanced by Roger Penrose and Stuart Hameroff and has been the subject of extensive debate.

Thalamic reticular networking model of consciousness

In a 2010 paper, Min proposed a thalamic reticular networking model of consciousness. The model describes consciousness as a "mental state embodied through TRN-modulated synchronization of thalamocortical networks". In this model the thalamic reticular nucleus (TRN) is proposed as ideally suited to controlling the entire cerebral network and as responsible, via GABAergic networking, for the synchronization of neural activity.
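The synchronization idea itself can be illustrated, very loosely, with a standard Kuramoto oscillator network. The sketch below is a generic toy model of phase synchronization under a global coupling term (standing in, purely for illustration, for a TRN-like modulator); it is not Min's actual model, and the parameter values are arbitrary.

# Toy Kuramoto network: stronger global coupling K pulls the oscillators into synchrony.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 100, 0.01, 2000
omega = rng.normal(10.0, 1.0, n)        # intrinsic frequencies (arbitrary units)
theta0 = rng.uniform(0, 2 * np.pi, n)   # initial phases

def order_parameter(phases):
    # Degree of phase synchrony, from 0 (incoherent) to 1 (fully synchronized).
    return np.abs(np.mean(np.exp(1j * phases)))

for K in (0.5, 5.0):                    # weak vs. strong "modulatory" coupling
    phases = theta0.copy()
    for _ in range(steps):
        # Kuramoto update: dtheta_i/dt = omega_i + K * mean_j sin(theta_j - theta_i)
        mean_field = np.mean(np.sin(phases[None, :] - phases[:, None]), axis=1)
        phases += dt * (omega + K * mean_field)
    print(f"coupling K={K}: synchrony r = {order_parameter(phases):.2f}")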

Holographic models of consciousness

A number of researchers, most notably Karl Pribram and David Bohm, have proposed holographic models of consciousness as a way of explaining a number of problems of consciousness using the properties of holograms. Several of these theories overlap to some extent with quantum theories of mind.

EEG microstates

This model of consciousness is based on a well-established method for characterizing the resting-state activity of the human brain using multichannel electroencephalography (EEG). The concept of EEG microstates was developed by Lehmann and colleagues at the University of Zurich in the 1980s using multichannel EEG measurements. In a seminal 1987 paper, Lehmann described EEG microstates as "repeating, quasi-stable patterns in an EEG", representing "the atoms of thought". These observations on microstates in spontaneous brain electrical activity suggest that the apparent "continual stream of consciousness" consists of "concatenated identifiable brief packets" lasting fractions of a second (70 to 125 milliseconds during rest, 286 to 354 milliseconds while reading abstract or imagery words). Entry of content chunks into consciousness apparently requires such minimum durations, and shorter microstates are presumed to remain unconscious.
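A minimal sketch of the usual analysis pipeline is shown below, using synthetic data and plain k-means purely for illustration (real microstate analyses use polarity-invariant "modified k-means" and dedicated EEG toolboxes): topographies at peaks of the global field power are clustered into a handful of prototype maps, and every sample is then labelled with its best-matching map.

# Minimal microstate-style analysis on synthetic data (illustrative only).
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 5000
eeg = rng.standard_normal((n_channels, n_samples))   # placeholder for real EEG data

gfp = eeg.std(axis=0)                                 # global field power per sample
peaks, _ = find_peaks(gfp)                            # moments of strongest topography
maps = KMeans(n_clusters=4, n_init=10, random_state=0).fit(eeg[:, peaks].T)

# Back-fit: label every sample with its closest prototype map.
labels = maps.predict(eeg.T)
print("prototype maps:", maps.cluster_centers_.shape)  # (4, n_channels)
print("first 20 microstate labels:", labels[:20])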

Medicine

Clouding of consciousness

Clouding of consciousness, also known as brain fog or mental fog, is a term used in medicine for an abnormality in the regulation of the overall level of consciousness that is mild and less severe than delirium. It belongs to an overall model in which the brain regulates the "overall level" of consciousness, with components responsible for "arousal" or "wakefulness" and for awareness of oneself and of the environment.

Philosophy

Multiple drafts model

Daniel Dennett proposed a physicalist, information-processing-based multiple drafts model of consciousness, described more fully in his 1991 book Consciousness Explained.

Functionalism

Functionalism is a view in the theory of mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, by their causal relations to other mental states, sensory inputs, and behavioral outputs.
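One traditional way of making this concrete is "machine functionalism", in which a state is individuated by its place in a machine table. The toy Python sketch below is my own illustrative example, in the spirit of Putnam's vending-machine analogy and not anything from the article: each state is defined purely by how it maps inputs to outputs and successor states.

# Toy "machine table" in the spirit of machine functionalism (illustrative only).
MACHINE_TABLE = {
    # (state, input): (output, next_state)
    ("no-credit", "insert-coin"): ("wait", "credit"),
    ("no-credit", "press-button"): ("do-nothing", "no-credit"),
    ("credit", "insert-coin"): ("return-coin", "credit"),
    ("credit", "press-button"): ("dispense-drink", "no-credit"),
}

def step(state, stimulus):
    # Return the behavioural output and successor state for the current functional state.
    return MACHINE_TABLE[(state, stimulus)]

state = "no-credit"
for stimulus in ["press-button", "insert-coin", "press-button"]:
    output, state = step(state, stimulus)
    print(stimulus, "->", output, "| now in state:", state)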

Sociology

Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity, and argue that the shape and feel of human consciousness are heavily social.

Spirituality

Eight-circuit model of consciousness

Timothy Leary introduced, and Robert Anton Wilson and Antero Alli elaborated, the eight-circuit model of consciousness, a hypothesis that "suggested eight periods [circuits] and twenty-four stages of neurological evolution".

Human extinction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_ext...