
Tuesday, November 25, 2025

Emergence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Emergence
The formation of complex symmetrical and fractal patterns in snowflakes exemplifies emergence in a physical system.
A termite "cathedral" mound produced by a termite colony offers a classic example of emergence in nature.

In philosophy, systems theory, science, and art, emergence occurs when a complex entity has properties or behaviors that its parts do not have on their own, and that emerge only when those parts interact in a wider whole.

Emergence plays a central role in theories of integrative levels and of complex systems. For instance, the phenomenon of life as studied in biology is an emergent property of chemistry and physics.

In philosophy, theories that emphasize emergent properties have been called emergentism.

In philosophy

Philosophers often understand emergence as a claim about the etiology of a system's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole. Nicolai Hartmann (1882–1950), one of the first modern philosophers to write on emergence, termed this a categorial novum (new category).

Definitions

This concept of emergence dates from at least the time of Aristotle. In Heideggerian thought, the notion of emergence is derived from the Greek word poiein, meaning "to make", and refers to a bringing-forth that encompasses not just a process of crafting (techne) but also the broader sense of something coming into being or revealing itself. Heidegger used emerging blossoms and butterflies as examples to illustrate poiêsis as a threshold event where something moves from one state to another. Many scientists and philosophers have written on the concept, including John Stuart Mill (Composition of Causes, 1843) and Julian Huxley (1887–1975).

The philosopher G. H. Lewes coined the term "emergent" in 1875, distinguishing it from the merely "resultant":

Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same – their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference.

Strong and weak emergence

Usage of the notion "emergence" may generally be subdivided into two perspectives, that of "weak emergence" and "strong emergence". One paper discussing this division is Weak Emergence, by philosopher Mark Bedau. In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation or similar forms of after-the-fact analysis (for example, the formation of a traffic jam, the structure of a flock of starlings in flight or a school of fish, or the formation of galaxies). Crucial in these simulations is that the interacting members retain their independence. If not, a new entity is formed with new, emergent properties: this is called strong emergence, which it is argued cannot be simulated, analysed or reduced.
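The traffic-jam example lends itself to a concrete illustration. Below is a minimal sketch of the Nagel-Schreckenberg cellular-automaton traffic model, a standard model of this kind (it is not named in the text); jams emerge even though no per-car rule mentions a jam. All parameter values are illustrative.

```python
import random

# Minimal Nagel-Schreckenberg traffic model on a circular road:
# jams emerge from three simple per-car rules plus random slowdowns.
random.seed(42)
ROAD_LEN, N_CARS, V_MAX, P_SLOW, STEPS = 100, 30, 5, 0.3, 50

road = {pos: 0 for pos in random.sample(range(ROAD_LEN), N_CARS)}  # position -> speed

for _ in range(STEPS):
    new_road = {}
    positions = sorted(road)
    for i, pos in enumerate(positions):
        v = min(road[pos] + 1, V_MAX)                           # rule 1: accelerate
        gap = (positions[(i + 1) % N_CARS] - pos) % ROAD_LEN or ROAD_LEN
        v = min(v, gap - 1)                                     # rule 2: avoid collision
        if v > 0 and random.random() < P_SLOW:                  # rule 3: random slowdown
            v -= 1
        new_road[(pos + v) % ROAD_LEN] = v
    road = new_road

# Stopped cars mark emergent jams that no individual rule describes.
print("stopped cars after", STEPS, "steps:", sum(1 for v in road.values() if v == 0))
```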

David Chalmers writes that emergence often causes confusion in philosophy and science due to a failure to demarcate strong and weak emergence, which are "quite different concepts".

Common to both notions is that emergence concerns new properties produced as the system grows, which is to say properties not shared with its components or prior states. It is also assumed that the properties are supervenient rather than metaphysically primitive.

Weak emergence describes new properties arising in systems as a result of interactions at a fundamental level. However, Bedau stipulates that the properties can be determined only by observing or simulating the system, and not by any process of reductionist analysis. As a consequence, the emergent properties are scale-dependent: they are only observable if the system is large enough to exhibit the phenomenon. Chaotic, unpredictable behaviour can be seen as an emergent phenomenon, while at a microscopic scale the behaviour of the constituent parts can be fully deterministic.

Bedau notes that weak emergence is not a universal metaphysical solvent, as the hypothesis that consciousness is weakly emergent would not resolve the traditional philosophical questions about the physicality of consciousness. However, Bedau concludes that adopting this view would provide a precise notion of how emergence is involved in consciousness, and that weak emergence is metaphysically benign.

Strong emergence describes the direct causal action of a high-level system on its components; qualities produced this way are irreducible to the system's constituent parts. The whole is other than the sum of its parts. It is argued then that no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts. Physics lacks well-established examples of strong emergence, unless it is interpreted as the impossibility in practice to explain the whole in terms of the parts. Practical impossibility may be a more useful distinction than one in principle, since it is easier to determine and quantify, and does not imply the use of mysterious forces, but simply reflects the limits of our capability.

Viability of strong emergence

One reason the distinction between these two concepts matters concerns the relationship of purported emergent properties to science. Some thinkers question the plausibility of strong emergence as contravening our usual understanding of physics. Mark A. Bedau observes:

Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.

The concern is that strong emergence, if it does entail getting something from nothing, would be incompatible with metaphysical principles such as the principle of sufficient reason or the Latin dictum ex nihilo nihil fit, often translated as "nothing comes from nothing".

Strong emergence can be criticized for leading to causal overdetermination. The canonical example concerns emergent mental states (M and M∗) that supervene on physical states (P and P∗) respectively. Let M and M∗ be emergent properties. Let M∗ supervene on base property P∗. What happens when M causes M∗? Jaegwon Kim says:

In our schematic example above, we concluded that M causes M∗ by causing P∗. So M causes P∗. Now, M, as an emergent, must itself have an emergence base property, say P. Now we face a critical question: if an emergent, M, emerges from basal condition P, why cannot P displace M as a cause of any putative effect of M? Why cannot P do all the work in explaining why any alleged effect of M occurred? If causation is understood as nomological (law-based) sufficiency, P, as M's emergence base, is nomologically sufficient for it, and M, as P∗'s cause, is nomologically sufficient for P∗. It follows that P is nomologically sufficient for P∗ and hence qualifies as its cause...If M is somehow retained as a cause, we are faced with the highly implausible consequence that every case of downward causation involves overdetermination (since P remains a cause of P∗ as well). Moreover, this goes against the spirit of emergentism in any case: emergents are supposed to make distinctive and novel causal contributions.

If M is the cause of M∗, then M∗ is overdetermined because M∗ can also be thought of as being determined by P. One escape-route that a strong emergentist could take would be to deny downward causation. However, this would remove the proposed reason that emergent mental states must supervene on physical states, which in turn would call physicalism into question, and thus be unpalatable for some philosophers and physicists.

Carroll and Parola propose a taxonomy that classifies emergent phenomena by how the macro-description relates to the underlying micro-dynamics.

Type‑0 (Featureless) Emergence

A coarse-graining map Φ from a micro state space A to a macro state space B that commutes with time evolution, without requiring any further decomposition into subsystems (see the numerical sketch after this taxonomy).
Type‑1 (Local) Emergence

Emergence where the macro theory is defined in terms of localized collections of micro-subsystems. This category is subdivided into:
Type‑1a (Direct) Emergence: When the emergence map Φ is algorithmically simple (i.e. compressible), so that the macro behavior is easily deduced from the micro-states.
Type‑1b (Incompressible) Emergence: When Φ is algorithmically complex (i.e. incompressible), making the macro behavior appear more novel despite being determined by the micro-dynamics.
Type‑2 (Nonlocal) Emergence

Cases in which both the micro and macro theories admit subsystem decompositions, yet the macro entities are defined nonlocally with respect to the micro-structure, meaning that macro behavior depends on widely distributed micro information.
Type‑3 (Augmented) Emergence

A form of strong emergence in which the macro theory introduces additional ontological variables that do not supervene on the micro-states, thereby positing genuinely novel macro-level entities.
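The Type-0 condition, that the coarse-graining map Φ commutes with time evolution, can be written Φ(T_micro(x)) = T_macro(Φ(x)) for every micro state x. The toy sketch below checks this numerically; the dynamics are invented purely for illustration and are not taken from Carroll and Parola's paper.

```python
# Toy check of the Type-0 condition: the coarse-graining map commutes with
# time evolution, i.e. phi(micro_step(x)) == macro_step(phi(x)) for all x.
# Both dynamics are invented for illustration, not from Carroll and Parola.

def micro_step(x):
    """Micro dynamics on the 100 micro states 0..99."""
    return (x + 1) % 100

def phi(x):
    """Coarse-graining: keep only the last digit of the micro state."""
    return x % 10

def macro_step(y):
    """Induced macro dynamics on the 10 macro states 0..9."""
    return (y + 1) % 10

# Commutation holds for every micro state, so the macro theory is autonomous:
# one can propagate the coarse description without consulting the micro one.
assert all(phi(micro_step(x)) == macro_step(phi(x)) for x in range(100))
print("phi commutes with time evolution")
```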

Objective or subjective quality

Crutchfield regards the properties of complexity and organization of any system as subjective qualities determined by the observer.

Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analysed in terms of how model-building observers infer from measurements the computational capabilities embedded in non-linear processes. An observer's notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer's chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data.

The low entropy of an ordered system can be viewed as an example of subjective emergence: the observer sees an ordered system by ignoring the underlying microstructure (i.e. movement of molecules or elementary particles) and concludes that the system has a low entropy. On the other hand, chaotic, unpredictable behaviour can also be seen as subjectively emergent, even though at a microscopic scale the movement of the constituent parts can be fully deterministic.
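Crutchfield's point can be made concrete: the entropy an observer assigns to the very same microstates depends on the resolution of the coarse-graining used to describe them. The sketch below is illustrative only and is not drawn from Crutchfield's paper.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.random(10_000)  # toy microstates: particle positions in [0, 1)

def coarse_grained_entropy(samples, n_cells):
    """Shannon entropy (bits) of the description using n_cells equal cells."""
    counts, _ = np.histogram(samples, bins=n_cells, range=(0.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# The same microstates look more or less "ordered" depending on the
# observer's resolution: the entropy belongs to the description.
for n_cells in (2, 16, 256):
    print(f"{n_cells:4d} cells -> {coarse_grained_entropy(positions, n_cells):.2f} bits")
```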

In science

In physics, weak emergence is used to describe a property, law, or phenomenon which occurs at macroscopic scales (in space or time) but not at microscopic scales, despite the fact that a macroscopic system can be viewed as a very large ensemble of microscopic systems.

An emergent behavior of a physical system is a qualitative property that can only occur in the limit that the number of microscopic constituents tends to infinity.
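As a caricature of this infinite-size limit, consider the simplest possible macroscopic property: whether the majority of N slightly biased micro components point one way. For any finite N the property is statistical; it becomes exact only as N grows without bound, a toy analogue of the limit described above. The numbers below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.55  # slight microscopic bias toward "heads"

# For each system size N, estimate how often the macro property
# "the majority is heads" holds across 200 independent systems.
for n in (10, 100, 10_000, 1_000_000):
    heads = rng.binomial(n, p, size=200)
    majority = (heads > n / 2).mean()
    print(f"N={n:>9,}: majority-heads in {majority:.0%} of runs")
```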

According to Robert Laughlin, for many-particle systems, nothing can be calculated exactly from the microscopic equations, and macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology, and have properties that do not depend on many microscopic details.

Novelist Arthur Koestler used the metaphor of Janus (a symbol of the unity underlying complements like open/shut, peace/war) to illustrate how the two perspectives (strong vs. weak or holistic vs. reductionistic) should be treated as non-exclusive, and should work together to address the issues of emergence. Theoretical physicist Philip W. Anderson states it this way:

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts.

Meanwhile, others have worked towards developing analytical evidence of strong emergence. Renormalization methods in theoretical physics enable physicists to study critical phenomena that are not tractable as the combination of their parts. In 2009, Gu et al. presented a class of infinite physical systems that exhibits non-computable macroscopic properties. More precisely, if one could compute certain macroscopic properties of these systems from the microscopic description of these systems, then one would be able to solve computational problems known to be undecidable in computer science. These results concern infinite systems, finite systems being considered computable. However, macroscopic concepts which only apply in the limit of infinite systems, such as phase transitions and the renormalization group, are important for understanding and modeling real, finite physical systems.

Recent developments in theoretical physics have explored strong emergence through intrinsic mechanisms for the quantum-to-classical transition. In the Theory of Emergent Motion, Gheorghe (2025) proposes that classical directional motion emerges as a probabilistic resolution beyond a discrete temporal threshold T0, where quantum path uncertainty transitions to deterministic trajectories via a switching function F(Δt) = 1 − e^{-Δt/T0}, reinterpreting the Feynman path integral over finite histories without relying on decoherence or measurement collapse. Similarly, Prakash's Vibrational Dynamics framework (2025) describes the emergence of classical spacetime curvature from standing-wave patterns in vibrational fields generated by quantum fluctuations interacting with a foam-like spacetime structure, modulated by a curvature-dependent logarithmic suppression function S(R) = 1 / log(1 + 1/(R L_p^2)) that governs coherence and leads to the Quantum Equivalence Principle, unifying quantum and classical behaviors geometrically. Recent work by Gheorghe et al. (2025) synthesizes entropic stochastic resonance in Brownian transport with foundational quantum models such as ToEMEDFPM and EBM, alongside objective collapse theories such as Spontaneous Unitarity Violation and Continuous Spontaneous Localisation, deriving extensions to colored noise and non-Markovian fluctuation-dissipation relations to integrate a stochastic Schrödinger equation for joint position-momentum measurement, suggesting that entropic mechanisms drive quantum state transitions in stochastic geometries. These approaches suggest that macroscopic laws may involve non-computable elements from microscopic quantum descriptions, complementing earlier work on undecidability in physical systems.

As Gu et al. conclude:

Although macroscopic concepts are essential for understanding our world, much of fundamental physics has been devoted to the search for a 'theory of everything', a set of equations that perfectly describe the behavior of all fundamental particles. The view that this is the goal of science rests in part on the rationale that such a theory would allow us to derive the behavior of all macroscopic concepts, at least in principle. The evidence we have presented suggests that this view may be overly optimistic. A 'theory of everything' is one of many components necessary for complete understanding of the universe, but is not necessarily the only one. The development of macroscopic laws from first principles may involve more than just systematic logic, and could require conjectures suggested by experiments, simulations or insight.

In humanity

Human beings are the basic elements of social systems, which perpetually interact and create, maintain, or untangle mutual social bonds. Social bonds in social systems are perpetually changing in the sense of the ongoing reconfiguration of their structure. An early argument (1904–05) for the emergence of social formations can be found in Max Weber's most famous work, The Protestant Ethic and the Spirit of Capitalism. More recently, the emergence of a new social system has been linked with the emergence of order from nonlinear relationships among multiple interacting units, where the units are individual thoughts, consciousness, and actions. In the case of the global economic system, under capitalism, growth, accumulation and innovation can be considered emergent processes: not only do technological processes sustain growth, but growth becomes the source of further innovations in a recursive, self-expanding spiral. In this sense, the exponential trend of the growth curve reveals the presence of a long-term positive feedback among growth, accumulation, and innovation, and the emergence of new structures and institutions connected to the multi-scale process of growth.

This is reflected in the work of Karl Polanyi, who traces the process by which labor and nature are converted into commodities in the passage from an economic system based on agriculture to one based on industry. This shift, along with the idea of the self-regulating market, set the stage not only for another economy but also for another society. The principle of emergence is also invoked when thinking about alternatives to the current growth-based economic system as it confronts social and ecological limits. Both degrowth and social ecological economics have argued in favor of a co-evolutionary perspective for theorizing about transformations that overcome the dependence of human wellbeing on economic growth.

Economic trends and patterns which emerge are studied intensively by economists. Within the field of group facilitation and organization development, there have been a number of new group processes that are designed to maximize emergence and self-organization, by offering a minimal set of effective initial conditions. Examples of these processes include SEED-SCALE, appreciative inquiry, Future Search, the world cafe or knowledge cafe, Open Space Technology, and others (Holman, 2010). In international development, concepts of emergence have been used within a theory of social change termed SEED-SCALE to show how standard principles interact to bring forward socio-economic development fitted to cultural values, community economics, and natural environment (local solutions emerging from the larger socio-econo-biosphere). These principles can be implemented utilizing a sequence of standardized tasks that self-assemble in individually specific ways utilizing recursive evaluative criteria.

Looking at emergence in the context of social and systems change invites us to reframe our thinking on parts and wholes and their interrelation. Unlike machines, living systems at all levels of recursion - be it a sentient body, a tree, a family, an organisation, the education system, the economy, the health system, the political system, etc. - are continuously creating themselves. They are continually growing and changing along with their surrounding elements, and therefore are more than the sum of their parts. As Peter Senge and co-authors put forward in the book Presence: Exploring Profound Change in People, Organizations and Society, "as long as our thinking is governed by habit - notably industrial, "machine age" concepts such as control, predictability, standardization, and "faster is better" - we will continue to recreate institutions as they have been, despite their disharmony with the larger world, and the need for all living systems to evolve." While change is predictably constant, it is unpredictable in direction and often occurs at second and nth orders of systemic relationality. Understanding emergence, and what creates the conditions for different forms of emergence to occur, whether insidious or vitality-nourishing, is essential in the search for deep transformations.

The works of Nora Bateson and her colleagues at the International Bateson Institute delve into this. Since 2012, they have been researching questions such as: what makes a living system ready to change? Can unforeseen readiness for change be nourished? Here being ready is not thought of as being prepared, but rather as nourishing the flexibility we do not yet know will be needed. These inquiries challenge the common view that a theory of change is produced from an identified preferred goal or outcome. As explained in their paper An essay on ready-ing: Tending the prelude to change: "While linear managing or controlling of the direction of change may appear desirable, tending to how the system becomes ready allows for pathways of possibility previously unimagined." This brings a new lens to the field of emergence in social and systems change, as it looks to tending the pre-emergent process. Warm Data Labs are the fruit of their praxis: spaces for transcontextual mutual learning in which aphanipoetic phenomena unfold. Having hosted hundreds of Warm Data processes with thousands of participants, they have found that these spaces of shared poly-learning across contexts lead to a realm of potential change, a necessarily obscured zone of wild interaction of unseen, unsaid, unknown flexibility. It is such flexibility that nourishes the readying living systems require to respond to complex situations in new ways and to change. In other words, this readying process preludes what will emerge. When exploring questions of social change, it is important to ask what is submerging in the current social imaginary and perhaps, rather than focusing all our resources and energy on driving direct responses, to nourish flexibility within ourselves and the systems we are a part of.

Another approach that engages with the concept of emergence for social change is Theory U, where "deep emergence" is the result of self-transcending knowledge after a successful journey along the U through layers of awareness. This practice nourishes transformation at the inner-being level, which enables new ways of being, seeing and relating to emerge. The concept of emergence has also been employed in the field of facilitation. In Emergent Strategy, adrienne maree brown defines emergent strategies as "ways for humans to practice complexity and grow the future through relatively simple interactions".

In linguistics, the concept of emergence has been applied in the domain of stylometry to explain the interrelation between the syntactical structures of a text and the author's style (Slautina, Marusenko, 2014). It has also been argued that the structure and regularity of language grammar, or at least language change, is an emergent phenomenon. While each speaker merely tries to reach their own communicative goals, they use language in a particular way. If enough speakers behave in that way, language is changed. In a wider sense, the norms of a language, i.e. the linguistic conventions of its speech society, can be seen as a system emerging from long-term participation in communicative problem-solving in various social circumstances.
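Convention formation of this kind is often modeled with the "naming game" introduced by Luc Steels, which the paragraph above does not cite but which illustrates the claim directly: a shared name emerges from purely local pairwise interactions, with no global coordinator. A minimal sketch, with all details invented:

```python
import random

# Minimal naming game (after Luc Steels): agents negotiate a name for one
# object through random pairwise encounters; a global convention emerges.
random.seed(0)
N_AGENTS, ROUNDS = 50, 20_000
vocab = [set() for _ in range(N_AGENTS)]  # each agent's candidate names

for t in range(ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not vocab[speaker]:
        vocab[speaker].add(f"word{t}")        # speaker invents a new name
    word = random.choice(sorted(vocab[speaker]))
    if word in vocab[hearer]:                 # success: both drop rival names
        vocab[speaker] = {word}
        vocab[hearer] = {word}
    else:                                     # failure: hearer learns the name
        vocab[hearer].add(word)

# Typically a single name survives: a convention no speaker chose globally.
print("distinct surviving names:", len(set().union(*vocab)))
```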

In technology

The bulk conductive response of binary (RC) electrical networks with random arrangements, known as the universal dielectric response (UDR), can be seen as an emergent property of such physical systems. Such arrangements can be used as simple physical prototypes for deriving mathematical formulae for the emergent responses of complex systems. Internet traffic can also exhibit some seemingly emergent properties. In the congestion control mechanism, TCP flows can become globally synchronized at bottlenecks, simultaneously increasing and then decreasing throughput in coordination. Congestion, widely regarded as a nuisance, is possibly an emergent property of the spreading of bottlenecks across a network in high traffic flows, which can be considered as a phase transition. Some artificially intelligent (AI) computer applications simulate emergent behavior. One example is Boids, which mimics the swarming behavior of birds.
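Since Boids is named here, a compact sketch of Reynolds' three local rules (separation, alignment, cohesion) follows; the weights, radii and arena size are illustrative choices, not Reynolds' original values.

```python
import numpy as np

# Compact Boids sketch: flock-level alignment emerges from three local rules.
rng = np.random.default_rng(2)
N = 50
pos = rng.random((N, 2)) * 100                        # positions in a 100x100 arena
vel = rng.standard_normal((N, 2))                     # random initial headings

for _ in range(200):
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise offsets
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    near = (dist < 15) & (dist > 1e-6)                # each boid's neighbours
    n_near = near.sum(axis=1, keepdims=True).clip(min=1)

    cohesion = (near[:, :, None] * pos[None, :, :]).sum(axis=1) / n_near - pos
    alignment = (near[:, :, None] * vel[None, :, :]).sum(axis=1) / n_near - vel
    separation = (diff / dist[:, :, None] ** 2 * near[:, :, None]).sum(axis=1)

    vel += 0.01 * cohesion + 0.05 * alignment + 0.5 * separation
    vel *= 3 / np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 3)  # speed cap
    pos = (pos + vel) % 100                           # wrap around the arena

# Polarization of headings is a flock-level property no single rule mentions.
polarization = np.linalg.norm(vel.sum(axis=0)) / np.linalg.norm(vel, axis=1).sum()
print(f"polarization (0 = disordered, 1 = fully aligned): {polarization:.2f}")
```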

In religion and art

In religion, emergence grounds expressions of religious naturalism and syntheism in which a sense of the sacred is perceived in the workings of entirely naturalistic processes by which more complex forms arise or evolve from simpler forms. Examples are detailed in The Sacred Emergence of Nature by Ursula Goodenough & Terrence Deacon and Beyond Reductionism: Reinventing the Sacred by Stuart Kauffman, both from 2006, as well as Syntheism – Creating God in The Internet Age by Alexander Bard & Jan Söderqvist from 2014 and Emergentism: A Religion of Complexity for the Metamodern World by Brendan Graham Dempsey (2022).

Michael J. Pearce has used emergence to describe the experience of works of art in relation to contemporary neuroscience. Practicing artist Leonel Moura, in turn, attributes to his "artbots" a real, if nonetheless rudimentary, creativity based on emergent principles.

In daily life and nature

Objects consist of components with properties differing from those of the object itself. We call these properties emergent because they did not exist at the component level. The same applies to artifacts (structures, devices, tools, and even works of art). They are created for a specific purpose and are therefore subjectively emergent: someone who does not understand the purpose cannot use them.

The artifact is the result of an invention: through a clever combination of components, something new is created with emergent properties and functionalities. Such an invention is often difficult to predict and is therefore usually based on a chance discovery. An invention based on discovery is often improved through a feedback loop, making it more applicable. This is an example of downward causation.

Example 1: A hammer is a combination of a head and a handle, each with different properties. By cleverly connecting them, the hammer becomes an artifact with new, emergent functionalities. Through downward causation, the head and handle components can be improved in such a way that the hammer's functionality increases.

Example 2: A mixture of tin and copper produces the alloy bronze, with new, emergent properties (hardness, lower melting temperature). Finding the correct ratio of tin to copper is an example of downward causation.

Example 3: Finding the right combination of chemicals to create a superconductor at high temperatures (i.e. room temperature) is a great challenge for many scientists, and chance plays a significant role. Conversely, however, the properties of all these invented artifacts can be readily explained reductionistically.

Something similar occurs in nature: random mutations in genes occasionally, if rarely, create a creature with new, emergent properties that increase its chances of survival in a changing ecosystem. This is how evolution works. Here too, through downward causation, new creatures can sometimes manipulate their ecosystem in such a way that their chances of survival are further increased.

In both artifacts and living beings, certain components can be crucial to the emergent end result: the end result supervenes on these components. Examples include: a construction error, a bug in a software program, an error in the genetic code, or the absence of a particular gene.

Both aspects, supervenience and the unpredictability of the emergent result, are characteristic of strong emergence (see above). (This definition, however, differs significantly from the definition in the philosophical literature.)

Artificial consciousness

From Wikipedia, the free encyclopedia

Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is consciousness hypothesized to be possible for artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.

The term "sentience" can be used when specifically designating ethical considerations stemming from a form of phenomenal consciousness (P-consciousness, or the ability to feel qualia). Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with non-human animals.

Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness (NCC). Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious. Some scholars reject the possibility of artificial consciousness.

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution. In his 2001 article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.

Thought experiments

The "fading qualia" (left) and the "dancing qualia" (right) are two thought experiments about consciousness and brain replacement. Chalmers argues that both are contradicted by the lack of reaction of the subject to changing perception, and are thus impossible in practice. He concludes that the equivalent silicon brain will have the same perceptions as the biological brain.

David Chalmers proposed two thought experiments intending to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.

The "fading qualia" is a reductio ad absurdum thought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on a silicon chip. Chalmers makes the hypothesis, knowing it in advance to be absurd, that "the qualia fade or disappear" when neurons are replaced one-by-one with identical silicon equivalents. Since the original neurons and their silicon counterparts are functionally identical, the brain's information processing should remain unchanged, and the subject's behaviour and introspective reports would stay exactly the same. Chalmers argues that this leads to an absurd conclusion: the subject would continue to report normal conscious experiences even as their actual qualia fade away. He concludes that the subject's qualia actually don't fade, and that the resulting robotic brain, once every neuron is replaced, would remain just as sentient as the original biological brain.

Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic silicon chip, that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but it would perceive the same qualia as the biological system (e.g., seeing the same color).

Critics object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization. Van Heuveln et al. argue that the dancing qualia argument contains an equivocation fallacy, conflating a "change in experience" between two systems with an "experience of change" within a single system. Mogensen argues that the fading qualia argument can be resisted by appealing to vagueness at the boundaries of consciousness and the holistic structure of conscious neural activity, which suggests consciousness may require specific biological substrates rather than being substrate-independent.

Greg Egan's short story Learning To Be Me (discussed in §In fiction) illustrates how undetectable the duplication of the brain and its functionality could be from a first-person perspective.

In large language models

In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the scientific community judged the chatbot's behavior to be a likely consequence of mimicry rather than machine sentience, and Lemoine's claim was widely derided. Moreover, attributing consciousness solely on the basis of LLM outputs, or of the immersive experience created by an algorithm, is considered a fallacy. While philosopher Nick Bostrom states that LaMDA is unlikely to be conscious, he additionally poses the question of "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, would have to understand how consciousness works, and would then have to figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain. [...] there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."

David Chalmers argued in 2023 that LLMs today display impressive conversational and general intelligence abilities, but are likely not conscious yet, as they lack some features that may be necessary, such as recurrent processing, a global workspace, and unified agency. Nonetheless, he considers that non-biological systems can be conscious, and suggested that future, extended models (LLM+s) incorporating these elements might eventually meet the criteria for consciousness, raising both profound scientific questions and significant ethical challenges. However, the view that consciousness can exist without biological phenomena is controversial and some reject it.

Kristina Šekrst cautions that anthropomorphic terms such as "hallucination" can obscure important ontological differences between artificial and human cognition. While LLMs may produce human-like outputs, she argues that it does not justify ascribing mental states or consciousness to them. Instead, she advocates for an epistemological framework (such as reliabilism) that recognizes the distinct nature of AI knowledge production. She suggests that apparent understanding in LLMs may be a sophisticated form of AI hallucination. She also questions what would happen if an LLM were trained without any mention of consciousness.

Testing

Phenomenologically, consciousness is an inherently first-person phenomenon. Because of that, and because of the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the hard problem of consciousness. In the case of AI, there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable. Additionally, some chatbots have been trained to say they are not conscious.

A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.

In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. As with the Turing test, a positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer within a larger machine presents a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction.

AI sentience would give rise to concerns of welfare and legal protection, whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.

Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness, such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."

Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.

In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering". David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".

Aspects of consciousness

Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.

Subjective experience

Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Others use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering. Explaining why and how subjective experience arises is known as the hard problem of consciousness.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined, and is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.
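Kanerva's sparse distributed memory is concrete enough to sketch. The toy below implements the textbook architecture rather than IDA's modified version: a datum is written into every randomly placed "hard location" whose address lies within a Hamming radius of the write address, and reading takes a majority vote over the same neighborhood, which is what gives the memory its noise tolerance. All sizes are illustrative.

```python
import numpy as np

# Bare-bones Kanerva sparse distributed memory (not IDA's modified version).
rng = np.random.default_rng(3)
DIM, N_LOCATIONS, RADIUS = 256, 2000, 112            # illustrative sizes

addresses = rng.integers(0, 2, (N_LOCATIONS, DIM))   # fixed random hard locations
counters = np.zeros((N_LOCATIONS, DIM), dtype=int)   # bit counters per location

def write(addr, data):
    """Add the datum to every hard location within RADIUS of addr."""
    hit = (addresses != addr).sum(axis=1) <= RADIUS   # Hamming-ball activation
    counters[hit] += np.where(data == 1, 1, -1)

def read(addr):
    """Majority vote over the counters of all activated locations."""
    hit = (addresses != addr).sum(axis=1) <= RADIUS
    return (counters[hit].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, DIM)
write(pattern, pattern)                               # autoassociative store

noisy = pattern.copy()
noisy[rng.choice(DIM, 20, replace=False)] ^= 1        # corrupt 20 of 256 bits
print("bits recovered:", int((read(noisy) == pattern).sum()), "/", DIM)
```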

Learning

Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur or to take preemptive action to avert anticipated events. The implication here is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. In order to do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chess board, but also for novel environments that may change, to be executed only when appropriate to simulate and control the real world.
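In its simplest possible form, such a predictive component is a learned forward model: estimate the world's transition map from observed state pairs, then roll the map forward to anticipate what comes next. The sketch below uses an invented linear toy world and is meant only to make the idea concrete.

```python
import numpy as np

# Learn a forward model x[t+1] ~ A @ x[t] from experience, then anticipate.
rng = np.random.default_rng(4)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])        # hidden world dynamics

states = [rng.standard_normal(2)]
for _ in range(200):                                  # observe 200 transitions
    states.append(A_true @ states[-1] + 0.01 * rng.standard_normal(2))
X = np.array(states)

# Least-squares fit of the transition map from the observed state pairs.
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T

predicted = A_hat @ X[-1]                             # anticipated next state
actual = A_true @ X[-1]                               # what the world would do
print(f"prediction error: {np.linalg.norm(predicted - actual):.4f}")
```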

Functionalist theories of consciousness

Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships. Functionalism is particularly popular among philosophers.

A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.

Implementation proposals

Symbolic or hybrid

Learning Intelligent Distribution Agent

Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness called the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.
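The three-phase cycle can be caricatured in a few lines: codelets post content with activation levels, the most active coalition wins the workspace and is "broadcast", and action selection reacts to the broadcast. Everything below, names and numbers alike, is hypothetical and unrelated to the actual LIDA codebase.

```python
import random

# Caricature of a LIDA-style cognitive cycle: understand -> broadcast -> act.
random.seed(5)

def noise_codelet(percepts):   return ("loud noise", 0.9 if "bang" in percepts else 0.1)
def hunger_codelet(percepts):  return ("hunger", random.uniform(0.1, 0.6))
def idle_codelet(percepts):    return ("nothing new", 0.2)

CODELETS = [noise_codelet, hunger_codelet, idle_codelet]
ACTIONS = {"loud noise": "orient to sound", "hunger": "seek food",
           "nothing new": "continue task"}

def cognitive_cycle(percepts):
    coalitions = [c(percepts) for c in CODELETS]        # understanding phase
    content, _ = max(coalitions, key=lambda kv: kv[1])  # "consciousness": winner broadcast
    return ACTIONS[content]                             # action selection phase

print(cognitive_cycle({"bang"}))  # -> "orient to sound"
print(cognitive_cycle(set()))     # "seek food" or "continue task", per activation
```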

CLARION cognitive architecture

The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers use psychological experiments to study how consciousness might work.

OpenCog

Ben Goertzel made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.

Connectionist

Haikonen's cognitive architecture

Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."

Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many. A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").

Creativity Machine

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI), or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.
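Thaler's noise-injection idea can be gestured at with a toy associative memory: store patterns Hebbian-style in a Hopfield-type network, then degrade the weights with synaptic noise, so that recall drifts from exact memories into near-miss "confabulations". This is only an analogy, not Thaler's patented DAGUI architecture.

```python
import numpy as np

# Hopfield-style toy: noise injected into the weights turns exact recall
# into novel near-miss patterns, a cartoon of Thaler's "confabulations".
rng = np.random.default_rng(6)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))           # three stored "memories"
W = sum(np.outer(p, p) for p in patterns) / N         # Hebbian weight matrix
np.fill_diagonal(W, 0)

def recall(w, cue, steps=20):
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(w @ state + 1e-9)             # synchronous update
    return state

cue = patterns[0]
clean = recall(W, cue)                                # intact network
noisy = recall(W + 0.15 * rng.standard_normal((N, N)), cue)  # synaptic noise

print("clean recall matches the stored memory:", bool(np.array_equal(clean, cue)))
print("noisy recall flips", int((noisy != cue).sum()), "bits: a 'confabulation'")
```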

"Self-modeling"

Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots and other forms of AI. Self-modeling consists of a robot running an internal model or simulation of itself. According to this definition, self-awareness is "the acquired ability to imagine oneself in the future". This definition allows for a continuum of self-awareness levels, depending on the horizon and fidelity of the self-simulation. Consequently, as machines learn to simulate themselves more accurately and further into the future, they become more self-aware.

In fiction

In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.

In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness.

In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.

In Greg Egan's short story Learning to be me, a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age and as a step towards digital immortality, adults undergo a surgery to give control of the body to the jewel, after which the brain is removed and destroyed. The main character is worried that this procedure will kill him, as he identifies with the biological brain. But before the surgery, he endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.
