
Quantum mind

From Wikipedia, the free encyclopedia

The quantum mind or quantum consciousness is a group of hypotheses proposing that consciousness cannot be explained by local physical laws and interactions from classical mechanics, or by connections between neurons alone. These hypotheses posit instead that quantum-mechanical phenomena such as entanglement and superposition, which produce nonlocal effects operating at scales smaller than individual brain cells, may play an important part in the brain's function and could explain critical aspects of consciousness. These scientific hypotheses are as yet unvalidated, and they can overlap with quantum mysticism.

History

Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron".

Other contemporary physicists and philosophers considered these arguments unconvincing. Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons".

David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why specific macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no specific reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either.

Approaches

Bohm and Hiley

David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe. He claimed that both quantum theory and relativity pointed to this deeper theory, a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.

Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness. He later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.

David Bohm also collaborated with Basil Hiley on work that claimed mind and matter both emerge from an "implicate order". Hiley in turn worked with philosopher Paavo Pylkkänen. According to Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level".

Penrose and Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as "orchestrated objective reduction" (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. They reviewed and updated their theory in 2013.

Penrose's argument stemmed from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal system cannot prove its own consistency, truths that are unprovable within a given formal system can nonetheless be seen to be true by human mathematicians. Penrose took this to mean that human mathematicians are not formal proof systems and are not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning rests on a fallacious equivocation on the meaning of computation. In the same book, Penrose wrote: "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."

Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, he proposed a new form of wave function collapse that occurs in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derives.
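
Penrose's criterion is often summarized by the order-of-magnitude estimate tau ~ ℏ/E_G, where E_G is the gravitational self-energy of the difference between the two superposed mass distributions. A minimal numerical sketch of that arithmetic follows, assuming the crude form E_G ~ Gm²/r; the example masses and sizes are purely illustrative choices, not values from the text above.

```python
# Rough order-of-magnitude sketch of Penrose's objective-reduction
# timescale tau ~ hbar / E_G, where E_G is the gravitational
# self-energy of the difference between the two superposed mass
# distributions.  The E_G ~ G*m^2/r form and the example numbers are
# illustrative assumptions only.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J*s

def or_collapse_time(mass_kg, radius_m):
    """Estimate tau = hbar / E_G for a body superposed over ~ its radius."""
    e_g = G * mass_kg ** 2 / radius_m   # crude gravitational self-energy
    return HBAR / e_g

# A ~1 micrometre water droplet (~4e-15 kg) would self-collapse in a
# fraction of a second; a single tubulin dimer (~110 kDa ~ 1.8e-22 kg,
# ~4 nm) would take vastly longer, which is why Orch-OR invokes
# superpositions spanning very many tubulins at once.
tau_droplet = or_collapse_time(4e-15, 1e-6)
tau_tubulin = or_collapse_time(1.8e-22, 4e-9)
print(f"droplet: {tau_droplet:.1e} s, tubulin dimer: {tau_tubulin:.1e} s")
```

The steep m² dependence is the point of the estimate: the more massive the superposed system, the faster the proposed reduction.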

Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior. Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and may contain delocalized π electrons. Tubulins have other smaller non-polar regions that contain π-electron-rich indole rings separated by about 2 nm. Hameroff proposed that these electrons are close enough to become entangled. He originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited. He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules, but this too was experimentally discredited.

For instance, the proposed predominance of A-lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al., who showed that all in vivo microtubules have a B lattice and a seam. Orch-OR predicted that microtubule coherence reaches the synapses through dendritic lamellar bodies (DLBs), but De Zeeuw et al. proved this impossible by showing that DLBs are micrometers away from gap junctions.

In 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013 corroborates Orch-OR theory. Experiments that showed that anaesthetic drugs reduce how long microtubules can sustain suspected quantum excitations appear to support the quantum theory of consciousness.

In April 2022, the results of two related experiments at the University of Alberta and Princeton University were announced at The Science of Consciousness conference, providing further evidence to support quantum processes operating within microtubules. In a study Stuart Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules further than expected, an effect that disappeared when the experiment was repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid-filled extracellular space. Nevertheless, University of Oxford quantum physicist Vlatko Vedral remarked that this connection with consciousness is "a really long shot".

Also in 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness.

Although these theories are stated in a scientific framework, it is difficult to separate them from scientists' personal opinions. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote:

[M]y own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, "Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot." Other people would say, "No, you can't say it feels something merely because it behaves as though it feels something." My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was—which I say it couldn't be, if it's entirely computationally controlled.

Penrose continues:

A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either—although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior—that is, where quantum measurement comes in.

Umezawa, Vitiello, Freeman

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain. Their quantum field theory models of brain dynamics are fundamentally different from the Penrose–Hameroff theory.

Quantum brain dynamics

As described by Harald Atmanspacher, "Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness."

The original motivation in the early 20th century for relating quantum theory to consciousness was essentially philosophical. It is fairly plausible that conscious free decisions ("free will") are problematic in a perfectly deterministic world, so quantum randomness might indeed open up novel possibilities for free will. (On the other hand, randomness is problematic for goal-directed volition!)

Ricciardi and Umezawa proposed in 1967 a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons. Mari Jibu and Kunio Yasue later popularized these results under the name "quantum brain dynamics" (QBD) as the hypothesis to explain the function of the brain within the framework of quantum field theory with implications on consciousness.

Pribram

Karl Pribram's holonomic brain theory (quantum holography) invoked quantum field theory to explain higher-order processing of memory in the brain. He argued that his holonomic model solved the binding problem. Pribram collaborated with Bohm in his work on quantum approaches to the thought process. Pribram suggested much of the processing in the brain was done in distributed fashion. He proposed that the fine-fibered, felt-like dendritic fields might follow the principles of quantum field theory when storing and retrieving long-term memory.

Stapp

Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues from the orthodox quantum mechanics of John von Neumann that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse, therefore, takes place in the expectation that the observer associates with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev.

Catecholaminergic neuron electron transport (CNET)

CNET is a hypothesized neural signaling mechanism in catecholaminergic neurons that would use quantum mechanical electron transport. The hypothesis is based in part on the observation by many independent researchers that electron tunneling occurs in ferritin, an iron storage protein that is prevalent in those neurons, at room temperature and ambient conditions. The hypothesized function of this mechanism is to assist in action selection, but the mechanism itself would be capable of integrating millions of cognitive and sensory neural signals using a physical mechanism associated with strong electron-electron interactions. Each tunneling event would involve a collapse of an electron wave function, but the collapse would be incidental to the physical effect created by strong electron-electron interactions.
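
For a sense of why tunneling distances matter here, a generic one-dimensional square-barrier estimate can be sketched. The 1 eV barrier height and the widths below are illustrative assumptions; this is not a model of ferritin's actual electronic structure.

```python
import math

# Generic WKB-style estimate of electron tunneling through a square
# barrier: T ~ exp(-2*kappa*d), with kappa = sqrt(2*m*phi)/hbar.
# Barrier height and widths are illustrative assumptions only.
HBAR = 1.055e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electronvolt

def tunneling_probability(barrier_ev, width_m):
    """Transmission estimate for an electron through a square barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# The probability falls off exponentially with distance, which is why
# long-range transport through a disordered array would need sequential
# hops between closely spaced sites rather than one long tunneling event.
for width_nm in (0.5, 1.0, 2.0):
    t = tunneling_probability(1.0, width_nm * 1e-9)
    print(f"{width_nm} nm barrier: T ~ {t:.1e}")
```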

CNET predicted a number of physical properties of these neurons that have been subsequently observed experimentally, such as electron tunneling in substantia nigra pars compacta (SNc) tissue and the presence of disordered arrays of ferritin in SNc tissue. The hypothesis also predicted that disordered ferritin arrays like those found in SNc tissue should be capable of supporting long-range electron transport and providing a switching or routing function, both of which have also been subsequently observed.

Another prediction of CNET was that the largest SNc neurons should mediate action selection. This prediction was contrary to earlier proposals about the function of those neurons at that time, which were based on predictive reward dopamine signaling. A team led by Dr. Pascal Kaeser of Harvard Medical School subsequently demonstrated that those neurons do in fact code movement, consistent with the earlier predictions of CNET. While the CNET mechanism has not yet been directly observed, it may be possible to do so using quantum dot fluorophores tagged to ferritin or other methods for detecting electron tunneling.

CNET is applicable to a number of different consciousness models as a binding or action selection mechanism, such as Integrated Information Theory (IIT) and Sensorimotor Theory (SMT). Notably, many existing models of consciousness fail to specifically address action selection or binding. For example, O'Regan and Noë call binding a "pseudo problem," but also state that "the fact that object attributes seem perceptually to be part of a single object does not require them to be 'represented' in any unified kind of way, for example, at a single location in the brain, or by a single process. They may be so represented, but there is no logical necessity for this." But the absence of a "logical necessity" for a physical phenomenon does not mean that it does not exist, or that it can be ignored once identified. Likewise, global workspace theory (GWT) models appear to treat dopamine as modulatory, based on the prior understanding of those neurons from predictive reward dopamine signaling research, but GWT models could be adapted to include modeling of moment-by-moment activity in the striatum to mediate action selection, as observed by Kaeser's team. CNET is applicable to those neurons as a selection mechanism for that function, as otherwise that function could result in seizures from simultaneous actuation of competing sets of neurons. While CNET by itself is not a model of consciousness, it is able to integrate different models of consciousness through neural binding and action selection. However, a more complete understanding of how CNET might relate to consciousness would require a better understanding of strong electron-electron interactions in ferritin arrays, which implicates the many-body problem.

Criticism

These hypotheses of the quantum mind remain hypothetical speculation, as Penrose admits in his discussions. Until they make a prediction that is tested by experimentation, the hypotheses are not based on empirical evidence. In 2010, Lawrence Krauss was guarded in criticising Penrose's ideas. He said: "Roger Penrose has given lots of new-age crackpots ammunition... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system. To some extent it could be, because memories are stored at the molecular level, and at a molecular level quantum mechanics is significant." According to Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact, we can make weird quantum phenomena happen. But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it."

The process of testing the hypotheses with experiments is fraught with conceptual/theoretical, practical, and ethical problems.

Conceptual problems

The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett proposed a theory called the multiple drafts model in his 1991 book Consciousness Explained, which makes no appeal to quantum effects. A philosophical argument on either side is not a scientific proof, although philosophical analysis can indicate key differences in the types of models and show what type of experimental differences might be observed. But since there is no clear consensus among philosophers, there is no conceptual support that a quantum mind theory is needed.

A possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's survival depended on the state of a radioactive atom—whether it had decayed and emitted radiation. According to Schrödinger, the Copenhagen interpretation implies that the cat is both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; he intended the example to illustrate the absurdity of the existing view of quantum mechanics. But since Schrödinger's time, physicists have given other interpretations of the mathematics of quantum mechanics, some of which regard the "alive and dead" cat superposition as quite real. Schrödinger's famous thought experiment poses the question of when a system stops existing as a quantum superposition of states. In the same way, one can ask whether the act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. This analogy of decision-making uses a formalism derived from quantum mechanics, but does not indicate the actual mechanism by which the decision is made.

In this way, the idea is similar to quantum cognition. This field clearly distinguishes itself from the quantum mind, as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm, generalized quantum paradigm, or quantum structure paradigm that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which it operates. For example, quantum cognition proposes that some decisions can be analyzed as if there is interference between two alternatives, but it is not a physical quantum interference effect.
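
The interference idea can be illustrated with a toy calculation in the quantum-probability formalism. The amplitude magnitudes and the relative phase below are arbitrary illustrative choices, not fitted to any experiment.

```python
import cmath

# Toy quantum-cognition calculation: complex probability amplitudes for
# two unresolved alternatives can violate the classical law of total
# probability.  Amplitudes and phase are arbitrary illustrative choices.
a1 = cmath.rect(0.6, 0.0)            # amplitude via alternative A1
a2 = cmath.rect(0.5, cmath.pi / 3)   # amplitude via A2, with a phase

classical = abs(a1) ** 2 + abs(a2) ** 2   # alternatives resolved first
quantum = abs(a1 + a2) ** 2               # alternatives left "open"
interference = quantum - classical        # equals 2*Re(a1 * conj(a2))

print(f"classical: {classical:.2f}, quantum: {quantum:.2f}, "
      f"interference: {interference:+.2f}")
```

The nonzero interference term is the mathematical signature the formalism borrows from quantum mechanics; nothing here requires a physical quantum process in the brain.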

Practical problems

The main theoretical argument against the quantum-mind hypothesis is the assertion that quantum states in the brain would lose coherency before they reached a scale where they could be useful for neural processing. This supposition was elaborated by Max Tegmark, whose calculations indicate that quantum systems in the brain decohere on sub-picosecond timescales. No brain process has been shown to compute or react on so fast a timescale: typical reactions are on the order of milliseconds, many orders of magnitude longer.
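
The size of the gap can be sketched numerically. The decoherence figures of roughly 10^-13 s (microtubule superpositions) and 10^-20 s (ion superpositions) are the range usually attributed to Tegmark's calculation, and 10^-3 s stands in for a typical neural response; all three are order-of-magnitude estimates.

```python
# Comparing decoherence estimates with neural timescales.  The
# 1e-13 s and 1e-20 s figures are the range attributed to Tegmark's
# calculation; 1e-3 s is a typical neural response time.  All three
# are order-of-magnitude values, not precise measurements.
neural_s = 1e-3
decoherence_slow_s = 1e-13   # microtubule superpositions
decoherence_fast_s = 1e-20   # ion superpositions

gap_low = neural_s / decoherence_slow_s    # ~ten billion times
gap_high = neural_s / decoherence_fast_s   # ~a hundred thousand trillion
print(f"decoherence-to-response gap: {gap_low:.0e} to {gap_high:.0e}")
```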

Daniel Dennett uses the experimental result of an optical illusion, which unfolds on a timescale of less than a second or so, in support of his multiple drafts model. In this experiment, two different-colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change color as it moves across the visual field: a green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change color before the second light is observed. Velmans argues that the cutaneous rabbit illusion, another illusion that unfolds in about a second, demonstrates that there is a delay while modelling occurs in the brain, a delay discovered by Libet. These slow illusions, playing out over timescales of under a second, do not support a proposal that the brain functions on the picosecond timescale.

Penrose says:

The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large-scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.

Penrose also said in an interview:

...whatever consciousness is, it must be beyond computable physics.... It's not that consciousness depends on quantum mechanics, it's that it depends on where our current theories of quantum mechanics go wrong. It's to do with a theory that we don't know yet.

A demonstration of a quantum effect in the brain has to explain this problem or explain why it is not relevant, or that the brain somehow circumvents the problem of the loss of quantum coherency at body temperature. As Penrose proposes, it may require a new type of physical theory, something "we don't know yet."

Ethical problems

Deepak Chopra has referred to a "quantum soul" existing "apart from the body", human "access to a field of infinite possibilities", and other quantum mysticism topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum-mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself", as determined by one's state of mind. Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are based on the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Chopra said: "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there's a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics." On the other hand, he also claims that quantum effects are "just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness." In his book Quantum Healing, Chopra concluded that quantum entanglement links everything in the Universe, and therefore it must create consciousness.

According to Daniel Dennett, "On this topic, everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable."

While quantum effects are significant in the physiology of the brain, critics of quantum mind hypotheses challenge whether the effects of known or speculated quantum phenomena in biology scale up to have significance in neuronal computation, much less the emergence of consciousness as a phenomenon. Daniel Dennett said, "Quantum effects are there in your car, your watch, and your computer. But most things—most macroscopic objects—are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them."

Omniscience

From Wikipedia, the free encyclopedia

Omniscience is the property of possessing maximal knowledge. In Hinduism, Buddhism, Sikhism and the Abrahamic religions, it is often attributed to a divine being or an all-knowing spirit, entity or person. In Jainism, omniscience is an attribute that any individual can eventually attain. In Buddhism, there are differing beliefs about omniscience among different schools.

Etymology

The word omniscience derives from the Latin word sciens ("to know" or "conscious") and the prefix omni ("all" or "every"), but also means "all-seeing".

In religion

Buddhism

The topic of omniscience has been much debated in various Indian traditions, but no more so than by the Buddhists. After Dharmakirti's excursions into the subject of what constitutes a valid cognition, Śāntarakṣita and his student Kamalaśīla thoroughly investigated the subject in the Tattvasamgraha and its commentary the Panjika. The arguments in the text can be broadly grouped into four sections:

  • The refutation that cognitions, either perceived, inferred, or otherwise, can be used to refute omniscience.
  • A demonstration of the possibility of omniscience through apprehending the selfless universal nature of all knowables, by examining what it means to be ignorant and the nature of mind and awareness.
  • A demonstration of the total omniscience where all individual characteristics (svalaksana) are available to the omniscient being.
  • The specific demonstration of Shakyamuni Buddha's non-exclusive omniscience.

Christianity

Some modern Christian theologians argue that God's omniscience is inherent rather than total, and that God chooses to limit his omniscience in order to preserve the free will and dignity of his creatures. John Calvin, among other theologians of the 16th century who were comfortable with the definition of God as omniscient in the total sense, embraced the doctrine of predestination in order to reconcile that omniscience with the ability of worthy beings to choose freely.

Hinduism

In the Bhakti tradition of Vaishnavism, where Vishnu is worshipped as the supreme God, Vishnu is attributed with numerous qualities such as omniscience, energy, strength, lordship, vigour, and splendour.

Islam

God in Islam is attributed with absolute omniscience. God knows the past, the present, and the future. It is compulsory for a Muslim to believe that God is indeed omniscient, as stated in one of the six articles of faith:

  • To believe in God's divine decree and predestination

Say: Do you instruct God about your religion? But God knows all that is in the heavens and on the earth; God is Knowing of all things

It is believed that humans can only change their predestination (wealth, health, deed etc.) and not divine decree (date of birth, date of death, family etc.), thus allowing free will.

Baha'i Faith

Omniscience is an attribute of God, yet it is also an attribute that reveals sciences to humanity:

In like manner, the moment the word expressing My attribute “The Omniscient” issueth forth from My mouth, every created thing will, according to its capacity and limitations, be invested with the power to unfold the knowledge of the most marvelous sciences, and will be empowered to manifest them in the course of time at the bidding of Him Who is the Almighty, the All-Knowing.

Jainism

In Jainism, omniscience is considered the highest type of perception. In the words of a Jain scholar, "The perfect manifestation of the innate nature of the self, arising on the complete annihilation of the obstructive veils, is called omniscience."

Jainism views infinite knowledge as an inherent capability of every soul. Arihanta is the word used by Jains to refer to those human beings who have conquered all inner passions (like attachment, greed, pride, anger) and possess Kevala Jnana (infinite knowledge). They are said to be of two kinds:

  1. Sāmānya kevali – omniscient beings (Kevalins) who are concerned with their own liberation.
  2. Tirthankara kevali – human beings who attain omniscience and then help others to achieve the same.

Omniscience and free will

Omniciencia, mural by José Clemente Orozco

Whether omniscience, particularly regarding the choices that a human will make, is compatible with free will has been debated by theologians and philosophers. The argument that divine foreknowledge is not compatible with free will is known as theological fatalism. It is argued that if humans are free to choose between alternatives, God could not know what this choice will be.

A question arises: if an omniscient entity knows everything, even its own future decisions, does that knowledge preclude its having free will? William Lane Craig states that the question subdivides into two:

  1. If God foreknows the occurrence of some event E, does E happen necessarily?
  2. If some event E is contingent, how can God foreknow E's occurrence?

However, this kind of argument commits the modal fallacy: the necessity attaches to the conditional as a whole ("necessarily, if God foreknows E, then E occurs"), not to the event E itself. On this basis the first premise of such arguments can be shown to be fallacious.

Omniscience and the privacy of conscious experience

Some philosophers, such as Patrick Grim, Linda Zagzebski, Stephan Torre, and William Mander have discussed the issue of whether the apparent exclusively first-person nature of conscious experience is compatible with God's omniscience. There is a strong sense in which conscious experience is private, meaning that no outside observer can gain knowledge of what it is like to be me as me. If a subject cannot know what it is like to be another subject in an objective manner, the question is whether that limitation applies to God as well. If it does, then God cannot be said to be omniscient since there is then a form of knowledge that God lacks access to.

The philosopher Patrick Grim most notably raised this issue. Linda Zagzebski argued against this by introducing the notion of perfect empathy, a proposed relation that God can have to subjects that would allow God to have perfect knowledge of their conscious experience. William Mander argued that God can only have such knowledge if our experiences are part of God's broader experience. Stephan Torre claimed that God can have such knowledge if self-knowledge involves the ascription of properties, either to oneself or to others.

Consciousness causes collapse

The postulate that consciousness causes collapse is an interpretation of quantum mechanics in which consciousness is postulated to be the main mechanism behind the process of measurement in quantum mechanics. It is a historical interpretation of quantum mechanics that is largely discarded by modern physicists. The idea is attributed to Eugene Wigner who wrote about it in the 1960s, but traces of the idea appear as early as the 1930s. Wigner later rejected this interpretation in the 1970s and 1980s.

This interpretation has been tied to the origin of pseudoscientific currents and New Age movements, specifically quantum mysticism.

History

Earlier work

According to Werner Heisenberg’s recollections in Physics and Beyond, Niels Bohr is said to have rejected the necessity of a conscious observer in quantum mechanics as early as 1927.

In his 1932 book Mathematical Foundations of Quantum Mechanics, John von Neumann argued that the mathematics of quantum mechanics allows the collapse of the wave function to be placed at any position in the causal chain from the measurement device to the "subjective perception" of the human observer. However von Neumann did not explicitly relate measurement with consciousness. In 1939, Fritz London and Edmond Bauer argued that the consciousness of the observer played an important role in measurement. However London wrote about consciousness in terms of philosophical phenomenology and not necessarily as a physical process.

Wigner's work

The idea that "consciousness causes collapse" is attributed to Eugene Wigner who first wrote about it in his 1961 article "Remarks on the mind-body question" and developed it further during the 1960s. Wigner reformulated the Schrödinger's cat thought experiment as Wigner's friend and proposed that the consciousness of an observer is the demarcation line that precipitates collapse of the wave function, independent of any realist interpretation. The mind is postulated to be non-physical and the only true measurement apparatus.

The idea was criticized early by Abner Shimony in 1963 and by Hilary Putnam a year later.

Wigner discarded the conscious collapse interpretation in the later 1970s. In a 1982 lecture, Wigner said that his early view of quantum mechanics should be criticized as solipsism. In 1984, he wrote that he was convinced out of it by the 1970 work of H. Dieter Zeh on quantum decoherence and macroscopic quantum phenomena.

After Wigner

The idea of consciousness causing collapse has been promoted and developed by Henry Stapp, a member of the Fundamental Fysiks Group, since 1993.

Description

Measurement in standard quantum mechanics

In the orthodox Copenhagen interpretation, quantum mechanics predicts only the probabilities for different observed experimental outcomes. What constitutes an observer or a measurement is not directly specified by the theory, and the behavior of a system under measurement and observation is completely different from its usual behavior: the wavefunction that describes a system spreads out into an ever-larger superposition of different possible situations. However, during observation, the wavefunction describing the system collapses to one of several options. If there is no observation, this collapse does not occur, and none of the options ever become less likely.

It can be predicted using quantum mechanics, absent a collapse postulate, that an observer observing a quantum superposition will turn into a superposition of different observers seeing different things. The observer will have a wavefunction which describes all the possible outcomes. Still, in actual experience, an observer never senses a superposition, but always senses that one of the outcomes has occurred with certainty. This apparent conflict between a wavefunction description and classical experience is called the problem of observation (see Measurement problem).
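The statistics involved can be made concrete with a toy numerical example. The sketch below, which assumes nothing beyond textbook quantum mechanics and NumPy, prepares a qubit in an equal superposition and simulates projective measurement: the Born rule assigns each outcome a probability equal to the squared amplitude of the wavefunction, after which the state is replaced by ("collapses" to) the corresponding basis vector. The function name `measure` and the particular state are illustrative choices, not part of any interpretation discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit in an equal superposition of |0> and |1>: psi = (|0> + |1>) / sqrt(2)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

def measure(state, rng):
    """Simulate one projective measurement in the computational basis.

    Born rule: outcome k occurs with probability |amplitude_k|^2;
    the state then 'collapses' to the corresponding basis vector.
    """
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

# Repeated measurements on freshly prepared copies give ~50/50 statistics,
# even though each individual run yields one definite outcome.
outcomes = [measure(psi, rng)[0] for _ in range(10_000)]
print(np.mean(outcomes))
```

Note that the collapse step here is an explicit postulate added by hand; nothing in the unitary part of the formalism singles out where, or by what, it is triggered, which is exactly the gap the interpretations below try to fill.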

Consciousness-causes-collapse interpretation

The consciousness-causes-collapse interpretation has been summarized thus:

The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.

Stapp has argued for the concept as follows:

From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe... Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device... Our bodies and brains thus become ... parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation...

Objections to the interpretation

Wigner shifted away from "consciousness causes collapse" in his later years. This was partly because he was embarrassed that "consciousness causes collapse" can lead to a kind of solipsism, but also because he decided that he had been wrong to try to apply quantum physics at the scale of everyday life (specifically, he rejected his initial idea of treating macroscopic objects as isolated systems).

Bohr said circa 1927 that it "still makes no difference whether the observer is a man, an animal, or a piece of apparatus."

This interpretation relies upon an interactionist form of dualism that is inconsistent with the materialism that is commonly used to understand the brain, and accepted by most scientists. (Materialism assumes that consciousness has no special role in relation to quantum mechanics.) The measurement problem notwithstanding, critics point to the causal closure of physics, suggesting a problem with how consciousness and matter might interact, reminiscent of objections to Descartes' substance dualism.

The only form of interactionist dualism that has seemed even remotely tenable in the contemporary picture is one that exploits certain properties of quantum mechanics. There are two ways this might go. First, some [e.g., Eccles 1986] have appealed to the existence of quantum indeterminacy, and have suggested that a nonphysical consciousness might be responsible for filling the resultant causal gaps, determining which values some physical magnitudes might take within an apparently "probabilistic" distribution... This is an audacious and interesting suggestion, but it has a number of problems... A second way in which quantum mechanics bears on the issue of causal closure lies with the fact that in some interpretations of the quantum formalism, consciousness itself plays a vital causal role, being required to bring about the so-called "collapse of the wave-function." This collapse is supposed to occur upon any act of measurement; and in one interpretation, the only way to distinguish a measurement from a nonmeasurement is via the presence of consciousness. This theory is certainly not universally accepted (for a start, it presupposes that consciousness is not itself physical, surely contrary to the views of most physicists), and I do not accept it myself, but in any case it seems that the kind of causal work consciousness performs here is quite different from the kind required for consciousness to play a role in directing behavior... In any case, all versions of interactionist dualism have a conceptual problem that suggests that they are less successful in avoiding epiphenomenalism than they might seem; or at least they are no better off than naturalistic dualism. Even on these views, there is a sense in which the phenomenal is irrelevant. We can always subtract the phenomenal component from any explanatory account, yielding a purely causal component.

— David Chalmers, "The Irreducibility of Consciousness" in The Conscious Mind: In Search of a Fundamental Theory

The interpretation has also been criticized for not explaining which things have sufficient consciousness to collapse the wave function. Also, it posits an important role for the conscious mind, and it has been questioned how this could be the case for the earlier universe, before consciousness had evolved or emerged. It has been argued that "[consciousness causes collapse] does not allow sensible discussion of Big Bang cosmology or biological evolution". For example, Roger Penrose remarked: "[T]he evolution of conscious life on this planet is due to appropriate mutations having taken place at various times. These, presumably, are quantum events, so they would exist only in linearly superposed form until they finally led to the evolution of a conscious being—whose very existence depends on all the right mutations having 'actually' taken place!" Others further suppose a universal mind (see also panpsychism and panexperientialism). Other researchers have expressed similar objections to the introduction of any subjective element in the collapse of the wavefunction.

Testability

It has been argued that the results of delayed-choice quantum eraser experiments empirically falsify this interpretation. However, the argument was shown to be invalid because an interference pattern would only be visible after post-measurement detections were correlated through use of a coincidence counter; if that was not true, the experiment would allow signaling into the past. The delayed-choice quantum eraser experiment has also been used to argue for support of this interpretation, but, as with other arguments, none of the cited references prove or falsify this interpretation.

The central role played by consciousness in this interpretation naturally calls for the use of psychological experiments to verify or falsify it. One such approach relies on explaining the empirical presentiment effect quantum mechanically. Another makes use of the psychological priming effect to design an appropriate test. Proponents of both methods claim successful verification.

Reception

A poll was conducted at a quantum mechanics conference in 2011 among 33 participants (physicists, mathematicians, and philosophers). The researchers found that 6% of participants (2 of the 33) indicated that they believed the observer "plays a distinguished physical role (e.g., wave-function collapse by consciousness)", while 55% (18 of the 33) indicated that they believed the observer "plays a fundamental role in the application of the formalism but plays no distinguished physical role". The authors also note that "Popular accounts have sometimes suggested that the Copenhagen interpretation attributes such a role to consciousness. In our view, this is to misunderstand the Copenhagen interpretation."

Models of consciousness

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Models_of_consciousness

Models of consciousness are used to illustrate and aid in understanding and explaining distinctive aspects of consciousness. Sometimes the models are labeled theories of consciousness. Anil Seth defines such models as those that relate brain phenomena such as fast irregular electrical activity and widespread brain activation to properties of consciousness such as qualia. Seth allows for different types of models including mathematical, logical, verbal and conceptual models.

Neuroscience

Neural correlates of consciousness

The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.

Global workspace theory

Global workspace theory (GWT) is a cognitive architecture and theoretical framework for understanding consciousness introduced by cognitive scientist Bernard Baars in 1988. The theory uses a theater metaphor: conscious experience is like material illuminated on a stage, with attention acting as a spotlight. Specialized unconscious processes operate in parallel, competing for access to a "global workspace" that broadcasts winning content throughout the brain. GWT is one of the leading scientific theories of consciousness and has been the subject of adversarial collaborations testing its predictions against integrated information theory. The Dehaene–Changeux model is a neural network implementation of global workspace principles.
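The theater metaphor can be caricatured in a few lines of code. The toy Python sketch below is a deliberately simplified illustration, not Baars's actual model or the Dehaene–Changeux implementation: hypothetical specialist processes each propose content with a salience score, the most salient proposal wins the competition, and the winning content is broadcast back to every specialist, mimicking the "global broadcast" step.

```python
import random

class Specialist:
    """A hypothetical unconscious processor competing for the workspace."""
    def __init__(self, name):
        self.name = name
        self.received = []          # broadcasts this specialist has seen

    def propose(self):
        # Each specialist offers content with a salience score
        # (random here purely for illustration).
        return (random.random(), f"content from {self.name}")

def workspace_cycle(specialists):
    """One 'conscious moment': the most salient proposal wins the
    competition and is broadcast globally to every specialist."""
    salience, content = max(s.propose() for s in specialists)
    for s in specialists:
        s.received.append(content)  # global broadcast
    return content

specialists = [Specialist(n) for n in ("vision", "audition", "memory")]
winner = workspace_cycle(specialists)
print(winner)
```

The design point the sketch captures is that processing is parallel and unconscious, while the workspace is a serial bottleneck whose contents become globally available.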

Dehaene–Changeux model

The Dehaene–Changeux model (DCM), also known as the global neuronal workspace or the global cognitive workspace model is a computer model of the neural correlates of consciousness programmed as a neural network. Stanislas Dehaene and Jean-Pierre Changeux introduced this model in 1986. It is associated with Bernard Baars's Global workspace theory for consciousness.

Electromagnetic theories of consciousness

Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics. Some electromagnetic theories are also quantum mind theories of consciousness.

Orchestrated objective reduction

Orchestrated objective reduction (Orch-OR) model is based on the hypothesis that consciousness in the brain originates from quantum processes inside neurons, rather than from connections between neurons (the conventional view). The mechanism is held to be associated with molecular structures called microtubules. The hypothesis was advanced by Roger Penrose and Stuart Hameroff and has been the subject of extensive debate.

Thalamic reticular networking model of consciousness

Min proposed in a 2010 paper a Thalamic reticular networking model of consciousness. The model suggests consciousness as a "mental state embodied through TRN-modulated synchronization of thalamocortical networks". In this model the thalamic reticular nucleus (TRN) is suggested as ideally suited for controlling the entire cerebral network, and responsible (via GABAergic networking) for synchronization of neural activity.

Holographic models of consciousness

A number of researchers, most notably Karl Pribram and David Bohm, have proposed holographic models of consciousness as a way to explain a number of problems of consciousness using the properties of holograms. Several of these theories overlap to some extent with quantum theories of mind.

EEG microstates

This model of consciousness is based on the well-established method for characterizing resting-state activity of the human brain using multichannel electroencephalography (EEG). The concept of EEG microstates was created by Lehmann et al. at the University of Zurich in the 1980s using multichannel EEG measurements. In a seminal 1987 paper, Lehmann described EEG microstates as "repeating, quasi-stable patterns in an EEG", representing "the atoms of thought". These observations on microstates in spontaneous brain electric activity suggest that the apparent "continual stream of consciousness" consists of "concatenated identifiable brief packets" in the time range of fractions of a second (70 to 125 milliseconds during rest, 286 to 354 milliseconds while reading abstract or imagery words). Entry of content chunks into consciousness apparently requires such minimum durations, and shorter microstates are thereby unconscious.
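The core labeling step of microstate analysis can be sketched briefly. The simplified example below uses synthetic data and illustrative parameters, not Lehmann's full pipeline (which also derives the template maps from the data by clustering): it assigns each EEG time point to the template topography it spatially correlates with best, ignoring polarity as is conventional in microstate analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples, n_maps = 8, 500, 4

# Synthetic stand-ins: 4 template topographies and an EEG recording built
# by concatenating noisy stretches of each template (quasi-stable periods).
templates = rng.standard_normal((n_maps, n_channels))
labels_true = np.repeat(np.arange(n_maps), n_samples // n_maps)
eeg = templates[labels_true] + 0.3 * rng.standard_normal((n_samples, n_channels))

def segment(eeg, templates):
    """Label each time point with the template map it correlates with best.

    Correlation is computed between mean-centered topographies
    (spatial correlation across channels)."""
    X = eeg - eeg.mean(axis=1, keepdims=True)
    T = templates - templates.mean(axis=1, keepdims=True)
    corr = (X @ T.T) / (
        np.linalg.norm(X, axis=1)[:, None] * np.linalg.norm(T, axis=1)[None, :]
    )
    return np.abs(corr).argmax(axis=1)   # polarity is conventionally ignored

labels = segment(eeg, templates)
print((labels == labels_true).mean())    # fraction of correctly labeled samples
```

Runs of identical labels in the output correspond to the quasi-stable "packets" described above; microstate durations are then read off as the lengths of those runs.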

Medicine

Clouding of consciousness

Clouding of consciousness, also known as brain fog or mental fog, is a medical term denoting a mild abnormality in the regulation of the overall level of consciousness, less severe than delirium. It belongs to a model in which the brain regulates an "overall level" of consciousness, with aspects responsible for "arousal" or "wakefulness" and for awareness of oneself and of the environment.

Philosophy

Multiple drafts model

Daniel Dennett proposed a physicalist, information-processing-based multiple drafts model of consciousness, described more fully in his 1991 book, Consciousness Explained.

Functionalism

Functionalism is a view in the theory of the mind. It states that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they have causal relations to other mental states, numerous sensory inputs, and behavioral outputs.

Sociology

Sociology of human consciousness uses the theories and methodology of sociology to explain human consciousness. The theory and its models emphasize the importance of language, collective representations, self-conceptions, and self-reflectivity. It argues that the shape and feel of human consciousness is heavily social.

Spirituality

Eight-circuit model of consciousness

Timothy Leary introduced, and Robert Anton Wilson and Antero Alli elaborated, the Eight-circuit model of consciousness, a hypothesis that "suggested eight periods [circuits] and twenty-four stages of neurological evolution".

Artificial consciousness

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Artificial_consciousness

Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is consciousness hypothesized to be possible for artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.

The term "sentience" can be used when specifically designating ethical considerations stemming from a form of phenomenal consciousness (P-consciousness, or the ability to feel qualia). Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with non-human animals.

Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness (NCC). Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious. Some scholars reject the possibility of artificial consciousness.

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution. In his 2001 article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.

Thought experiments

The "fading qualia" (left) and the "dancing qualia" (right) are two thought experiments about consciousness and brain replacement. Chalmers argues that both scenarios are contradicted by the subject's lack of reaction to the changing perception, and are thus impossible in practice. He concludes that an equivalent silicon brain would have the same perceptions as the biological brain.

David Chalmers proposed two thought experiments intending to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.

The "fading qualia" is a reductio ad absurdum thought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on a silicon chip. Chalmers makes the hypothesis, knowing it in advance to be absurd, that "the qualia fade or disappear" when neurons are replaced one-by-one with identical silicon equivalents. Since the original neurons and their silicon counterparts are functionally identical, the brain's information processing should remain unchanged, and the subject's behaviour and introspective reports would stay exactly the same. Chalmers argues that this leads to an absurd conclusion: the subject would continue to report normal conscious experiences even as their actual qualia fade away. He concludes that the subject's qualia actually don't fade, and that the resulting robotic brain, once every neuron is replaced, would remain just as sentient as the original biological brain.

Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic silicon chip, that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but it would perceive the same qualia as the biological system (e.g., seeing the same color).

Critics object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization. Van Heuveln et al. argue that the dancing qualia argument contains an equivocation fallacy, conflating a "change in experience" between two systems with an "experience of change" within a single system. Mogensen argues that the fading qualia argument can be resisted by appealing to vagueness at the boundaries of consciousness and the holistic structure of conscious neural activity, which suggests consciousness may require specific biological substrates rather than being substrate-independent.

Greg Egan's short story "Learning to Be Me" illustrates how undetectable the duplication of the brain and its functionality could be from a first-person perspective.

In large language models

In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the scientific community judged the chatbot's behavior to be a likely consequence of mimicry rather than machine sentience, and Lemoine's claim was widely derided. Moreover, attributing consciousness solely on the basis of LLM outputs, or of the immersive experience created by an algorithm, is considered a fallacy. However, while philosopher Nick Bostrom states that LaMDA is unlikely to be conscious, he additionally poses the question of "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, would have to understand how consciousness works, and would then have to figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain. [...] there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."

David Chalmers argued in 2023 that LLMs today display impressive conversational and general intelligence abilities, but are likely not conscious yet, as they lack some features that may be necessary, such as recurrent processing, a global workspace, and unified agency. Nonetheless, he considers that non-biological systems can be conscious, and suggested that future, extended models (LLM+s) incorporating these elements might eventually meet the criteria for consciousness, raising both profound scientific questions and significant ethical challenges. However, the view that consciousness can exist without biological phenomena is controversial and some reject it.

Kristina Šekrst cautions that anthropomorphic terms such as "hallucination" can obscure important ontological differences between artificial and human cognition. While LLMs may produce human-like outputs, she argues that it does not justify ascribing mental states or consciousness to them. Instead, she advocates for an epistemological framework (such as reliabilism) that recognizes the distinct nature of AI knowledge production. She suggests that apparent understanding in LLMs may be a sophisticated form of AI hallucination. She also questions what would happen if an LLM were trained without any mention of consciousness.

Testing

Sentience is an inherently first-person phenomenon. Because of that, and due to the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the hard problem of consciousness. In the case of AI, there is the additional difficulty that an AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable. Additionally, some chatbots have been trained to say they are not conscious.

A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.

In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge of these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about those creatures' consciousness). However, this test can only detect, not refute, the existence of consciousness. As with the Turing test, a positive result proves that a machine is conscious, but a negative result proves nothing: an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, a conscious computer that was owned and used as a tool or central computer within a larger machine is a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction.

AI sentience would give rise to concerns of welfare and legal protection, whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.

Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness, such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."

Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.

In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering". David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".

Aspects of consciousness

Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.

Subjective experience

Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Others use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering. Explaining why and how subjective experience arises is known as the hard problem of consciousness.

Awareness

Awareness could be one required aspect, but its exact definition poses many problems. Neuroimaging experiments on monkeys suggest that neurons are activated by processes, not only by states or objects. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined, and it is also useful for making predictions. Such modeling requires a lot of flexibility: it includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, each of which may or may not be conscious. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.
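Kanerva's sparse distributed memory, the architecture underlying IDA's transient episodic and declarative memories, can be illustrated with a toy sketch. This is not IDA's actual code; the parameters and names below (`hard_addresses`, `RADIUS`, the 256-bit word size) are illustrative choices, and only the core mechanism is shown: random fixed "hard locations", a Hamming-distance activation radius, counter-based writes, and majority-vote reads.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256        # address/word length in bits
M = 2000       # number of hard locations
RADIUS = 112   # Hamming-distance activation radius

hard_addresses = rng.integers(0, 2, size=(M, N))   # fixed random addresses
counters = np.zeros((M, N), dtype=int)             # one counter per bit per location

def activated(address):
    """Indices of hard locations within RADIUS Hamming distance of address."""
    distances = np.sum(hard_addresses != address, axis=1)
    return np.flatnonzero(distances <= RADIUS)

def write(address, word):
    """Increment counters for 1-bits, decrement for 0-bits, at activated locations."""
    bipolar = 2 * word - 1
    counters[activated(address)] += bipolar

def read(address):
    """Majority vote over the counters of all activated locations."""
    sums = counters[activated(address)].sum(axis=0)
    return (sums > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                       # autoassociative write
noisy = pattern.copy()
flip = rng.choice(N, size=20, replace=False)  # corrupt 20 of 256 bits
noisy[flip] ^= 1
recovered = read(noisy)                       # often recovers the stored pattern
```

Because many hard locations fall inside the activation radius of both the original and the corrupted address, reading at a nearby noisy address tends to reproduce the stored word, which is what makes the memory useful for content-addressable episodic recall.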

Learning

Learning is also considered necessary for artificial consciousness. According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. According to Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander.[52] The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling it to predict events. An artificially conscious machine should be able to anticipate events correctly, so that it is ready to respond to them when they occur, or can act preemptively to avert them. This implies that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, demonstrating artificial consciousness in the present and future rather than only in the past. To do this, a conscious machine should make coherent predictions and contingency plans not only in worlds with fixed rules, like a chessboard, but also in novel environments that may change, executing its plans only when appropriate in order to simulate and control the real world.

Functionalist theories of consciousness

Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. On this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of but the role it plays within the overall cognitive system. It therefore allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as they instantiate the right functional relationships. Functionalism is particularly popular among philosophers.

A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.

Implementation proposals

Symbolic or hybrid

Learning Intelligent Distribution Agent

Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness, the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information in order to coordinate various cognitive processes.
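The global-workspace mechanism that LIDA implements, in which codelets compete and the winning content is broadcast to all modules, can be sketched minimally. This is not the actual LIDA code: the codelet classes, percept fields, and activation scheme below are hypothetical, and real codelets would run as separate threads rather than sequential calls.

```python
from dataclasses import dataclass, field

@dataclass
class Codelet:
    """A special-purpose mini-agent that watches for its trigger condition."""
    name: str
    def propose(self, percept):
        raise NotImplementedError  # returns (activation, content) or None

class BrightnessCodelet(Codelet):
    def propose(self, percept):
        if percept.get("brightness", 0) > 0.8:
            return (percept["brightness"], "bright light detected")

class SoundCodelet(Codelet):
    def propose(self, percept):
        if percept.get("loudness", 0) > 0.5:
            return (percept["loudness"], "loud sound detected")

@dataclass
class GlobalWorkspace:
    codelets: list
    listeners: list = field(default_factory=list)

    def cycle(self, percept):
        # Understanding phase: codelets post proposals and compete by activation.
        proposals = [p for c in self.codelets if (p := c.propose(percept))]
        if not proposals:
            return None
        # "Consciousness" phase: the most active coalition wins the workspace
        # and its content is broadcast globally to every listening module.
        activation, content = max(proposals)
        for listener in self.listeners:
            listener(content)
        return content

actions = []
gw = GlobalWorkspace(
    codelets=[BrightnessCodelet("vision"), SoundCodelet("hearing")],
    listeners=[actions.append],   # action selection here just records the broadcast
)
winner = gw.cycle({"brightness": 0.9, "loudness": 0.95})
```

In this run the sound codelet's higher activation (0.95 vs. 0.9) wins the workspace, so "loud sound detected" is what gets broadcast to the action-selection listener.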

CLARION cognitive architecture

The CLARION cognitive architecture models the mind using a two-level system that distinguishes between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study, in psychological experiments, how consciousness might work.
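The two-level idea can be sketched as follows. This is an illustration in the spirit of CLARION, not its real implementation: the rule, the noisy "habit" standing in for a trained network, and the fixed mixing weight are all invented for the example.

```python
import random

random.seed(0)

ACTIONS = ["stop", "go"]

# Explicit (conscious) level: human-readable condition-action rules.
def explicit_level(state):
    if state["light"] == "red":
        return {"stop": 1.0, "go": 0.0}
    return {"stop": 0.0, "go": 1.0}

# Implicit (unconscious) level: stands in for a trained network; here just
# a noisy habit that mildly prefers "go" regardless of the state.
def implicit_level(state):
    return {"stop": 0.2 + random.random() * 0.1,
            "go": 0.5 + random.random() * 0.1}

def choose(state, explicit_weight=0.7):
    """Combine the two levels' action scores and pick the best action."""
    scores = {
        a: explicit_weight * explicit_level(state)[a]
           + (1 - explicit_weight) * implicit_level(state)[a]
        for a in ACTIONS
    }
    return max(scores, key=scores.get)

action = choose({"light": "red"})   # explicit rule overrides the implicit habit
```

Varying `explicit_weight` shifts control between deliberate rule-following and implicit habit, which is the kind of interaction CLARION uses to model dissociations between conscious and unconscious performance.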

OpenCog

Ben Goertzel created an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, carried out at the Hong Kong Polytechnic University.

Connectionist

Haikonen's cognitive architecture

Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."

Haikonen is not alone in this process view of consciousness, or in the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of sufficient complexity; many researchers share these views. A low-complexity implementation of Haikonen's proposed architecture was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").

Creativity Machine

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, "Device for the Autonomous Generation of Useful Information" (DAGUI), the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attach dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.
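The noise-injection idea can be illustrated with a toy example. This is not Thaler's patented system: the "network" is reduced to stored prototype patterns, and the noise level, critic criterion, and all names are invented for the sketch. What it shows is the division of labor between a perturbed generator producing confabulations and a critic filtering them.

```python
import numpy as np

rng = np.random.default_rng(1)

# An "imagination" net reduced to its essentials: stored prototype patterns.
prototypes = np.array([[1, 1, 0, 0],
                       [0, 0, 1, 1]], dtype=float)

def confabulate(noise_level):
    """Perturb a stored pattern with synaptic-style noise and re-binarize,
    producing a degraded variant (a "confabulation")."""
    base = prototypes[rng.integers(len(prototypes))]
    return (base + rng.normal(0, noise_level, base.shape) > 0.5).astype(int)

def critic(pattern):
    """Accept only novel outputs: anything identical to a stored prototype
    is a memory, not an idea."""
    return not any(np.array_equal(pattern, p) for p in prototypes)

# Generate many confabulations and keep only those the critic accepts.
ideas = [c for _ in range(200) if critic(c := confabulate(noise_level=0.6))]
```

Raising `noise_level` trades fidelity for novelty: with no noise the generator only replays its training patterns, while heavy noise yields mostly unusable output, which is why the critic is the essential second half of the architecture.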

"Self-modeling"

Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots and other forms of AI. Self-modeling consists of a robot running an internal model or simulation of itself. According to this definition, self-awareness is "the acquired ability to imagine oneself in the future". This definition allows for a continuum of self-awareness levels, depending on the horizon and fidelity of the self-simulation. Consequently, as machines learn to simulate themselves more accurately and further into the future, they become more self-aware.
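The self-modeling loop can be sketched minimally. This is an illustration of the idea, not Lipson's actual system: the one-dimensional "robot", its actuation gain, and the learning rule are all invented for the example. It shows the two ingredients in the definition above: a forward self-model refined from prediction errors, and the ability to imagine future states without acting.

```python
def true_dynamics(position, command):
    """The robot's actual physics, unknown to the robot: imperfect actuation."""
    return position + 0.9 * command

class SelfModelingRobot:
    def __init__(self):
        self.gain = 1.0      # learned actuation gain in the self-model, initially wrong
        self.position = 0.0

    def imagine(self, commands):
        """Simulate oneself: roll the self-model forward without acting."""
        pos = self.position
        return [pos := pos + self.gain * c for c in commands]

    def act_and_learn(self, command, lr=0.5):
        """Act, compare the self-model's prediction to reality, and update."""
        predicted = self.position + self.gain * command
        self.position = true_dynamics(self.position, command)
        error = self.position - predicted
        self.gain += lr * error * command   # nudge the self-model toward reality
        return abs(error)

robot = SelfModelingRobot()
errors = [robot.act_and_learn(1.0) for _ in range(20)]  # prediction errors shrink
future = robot.imagine([1.0, 1.0])  # imagine two steps ahead without moving
```

As the self-model's gain converges to the true value, the horizon over which `imagine` stays accurate grows, which is exactly the continuum of self-awareness levels the definition describes.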

In fiction

In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.

In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being, based on quantum entanglement, that was destined to become immensely powerful but started out knowing practically nothing, making it similar to an artificial consciousness.

In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.

In Greg Egan's short story Learning to Be Me, a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age and as a step towards digital immortality, adults undergo a surgery to give control of the body to the jewel, after which the brain is removed and destroyed. The main character is worried that this procedure will kill him, as he identifies with the biological brain. But before the surgery, he endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.

Hard problem of consciousness
