Friday, September 25, 2020

Self-organized criticality

From Wikipedia, the free encyclopedia

In physics, self-organized criticality (SOC) is a property of dynamical systems that have a critical point as an attractor. Their macroscopic behavior thus displays the spatial or temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to a precise value, because the system, effectively, tunes itself as it evolves towards criticality.

The concept was put forward by Per Bak, Chao Tang and Kurt Wiesenfeld ("BTW") in a paper published in 1987 in Physical Review Letters, and is considered to be one of the mechanisms by which complexity arises in nature. Its concepts have been applied across fields as diverse as geophysics, physical cosmology, evolutionary biology and ecology, bio-inspired computing and optimization (mathematics), economics, quantum gravity, sociology, solar physics, plasma physics, neurobiology and others.

SOC is typically observed in slowly driven non-equilibrium systems with many degrees of freedom and strongly nonlinear dynamics. Many individual examples have been identified since BTW's original paper, but to date there is no known set of general characteristics that guarantee a system will display SOC.

Overview

Self-organized criticality is one of a number of important discoveries made in statistical physics and related fields over the latter half of the 20th century, discoveries which relate particularly to the study of complexity in nature. For example, the study of cellular automata, from the early discoveries of Stanislaw Ulam and John von Neumann through to John Conway's Game of Life and the extensive work of Stephen Wolfram, made it clear that complexity could be generated as an emergent feature of extended systems with simple local interactions. Over a similar period of time, Benoît Mandelbrot's large body of work on fractals showed that much complexity in nature could be described by certain ubiquitous mathematical laws, while the extensive study of phase transitions carried out in the 1960s and 1970s showed how scale invariant phenomena such as fractals and power laws emerged at the critical point between phases.

The term self-organized criticality was first introduced in Bak, Tang and Wiesenfeld's 1987 paper, which clearly linked together these factors: a simple cellular automaton was shown to produce several characteristic features observed in natural complexity (fractal geometry, pink (1/f) noise and power laws) in a way that could be linked to critical-point phenomena. Crucially, however, the paper emphasized that the complexity observed emerged in a robust manner that did not depend on finely tuned details of the system: variable parameters in the model could be changed widely without affecting the emergence of critical behavior: hence, self-organized criticality. Thus, the key result of BTW's paper was its discovery of a mechanism by which the emergence of complexity from simple local interactions could be spontaneous—and therefore plausible as a source of natural complexity—rather than something that was only possible in artificial situations in which control parameters are tuned to precise critical values. The publication of this research sparked considerable interest from both theoreticians and experimentalists, producing some of the most cited papers in the scientific literature.

Due to BTW's metaphorical visualization of their model as a "sandpile" on which new sand grains were being slowly sprinkled to cause "avalanches", much of the initial experimental work tended to focus on examining real avalanches in granular matter, the most famous and extensive such study probably being the Oslo ricepile experiment. Other experiments include those carried out on magnetic-domain patterns, the Barkhausen effect and vortices in superconductors.
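The BTW sandpile itself takes only a few lines to simulate. The sketch below is a minimal Python illustration (the grid size, number of grains, and function name are arbitrary choices, not taken from the original paper): grains are dropped one at a time, any site holding four or more grains topples one grain onto each of its neighbors, and the number of topplings per drop is recorded as the avalanche size.

```python
import random

def sandpile_avalanches(size=20, grains=5000, seed=0):
    """Drop grains one at a time onto a finite grid; any site holding
    4 or more grains topples, sending one grain to each neighbor
    (grains falling off the edge are lost). Returns the avalanche
    size (number of topplings) triggered by each drop."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[x][y] += 1
        topplings = 0
        unstable = [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topplings += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
        sizes.append(topplings)
    return sizes
```

Because the model is Abelian, the order in which unstable sites topple does not affect the final configuration. Once the pile has been driven into its critical state, a log-log histogram of the returned avalanche sizes displays the power-law tail characteristic of SOC.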

Early theoretical work included the development of a variety of alternative SOC-generating dynamics distinct from the BTW model, attempts to prove model properties analytically (including calculating the critical exponents), and examination of the conditions necessary for SOC to emerge. One of the important issues for the latter investigation was whether conservation of energy was required in the local dynamical exchanges of models: the answer in general is no, but with (minor) reservations, as some exchange dynamics (such as those of BTW) do require local conservation at least on average. In the long term, key theoretical issues yet to be resolved include the calculation of the possible universality classes of SOC behavior and the question of whether it is possible to derive a general rule for determining if an arbitrary algorithm displays SOC.

Alongside these largely lab-based approaches, many other investigations have centered around large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including: earthquakes (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake size, and the Omori law describing the frequency of aftershocks); solar flares; fluctuations in economic systems such as financial markets (references to SOC are common in econophysics); landscape formation; forest fires; landslides; epidemics; neuronal avalanches in the cortex; 1/f noise in the amplitude of electrophysiological signals; and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). These "applied" investigations of SOC have included both modelling (either developing new models or adapting existing ones to the specifics of a given natural system) and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.

In addition, SOC has been applied to computational algorithms. Recently, it has been found that the avalanches from an SOC process, like the BTW model, make effective patterns in a random search for optimal solutions on graphs. An example of such an optimization problem is graph coloring. The SOC process apparently keeps the optimization from getting stuck in a local optimum without the use of any annealing scheme, as suggested by previous work on extremal optimization.
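As a rough sketch of this idea (not a reproduction of the published algorithms), the following Python fragment applies an extremal-optimization-style heuristic to graph coloring: the most conflicted vertices are preferentially re-colored, chosen through a power-law rank distribution, with no annealing schedule. The function name and parameter values are illustrative assumptions.

```python
import random

def extremal_coloring(edges, n_vertices, n_colors, steps=20000, tau=1.4, seed=0):
    """Extremal-optimization-style search for a proper graph coloring:
    repeatedly pick a highly conflicted vertex via a power-law rank
    distribution (exponent tau) and re-color it at random. There is
    no temperature or annealing schedule."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n_vertices)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [rng.randrange(n_colors) for _ in range(n_vertices)]

    def conflicts(v):
        return sum(color[v] == color[u] for u in adj[v])

    def total_conflicts():
        return sum(color[u] == color[v] for u, v in edges)

    best, best_cost = list(color), total_conflicts()
    for _ in range(steps):
        if best_cost == 0:
            break
        # Rank vertices worst-first, then pick rank k with P(k) ~ k^-tau.
        ranked = sorted(range(n_vertices), key=conflicts, reverse=True)
        k = min(int((1.0 - rng.random()) ** (-1.0 / (tau - 1.0))), n_vertices)
        v = ranked[k - 1]
        color[v] = rng.randrange(n_colors)
        cost = total_conflicts()
        if cost < best_cost:
            best, best_cost = list(color), cost
    return best, best_cost
```

The power-law choice of rank mirrors the scale-free avalanche statistics of SOC: usually the worst vertex is updated, but occasionally a much lower-ranked one is, which is what allows the search to escape local optima without an annealing schedule.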

The recent excitement generated by scale-free networks has raised some interesting new questions for SOC-related research: a number of different SOC models have been shown to generate such networks as an emergent phenomenon, as opposed to the simpler models proposed by network researchers where the network tends to be assumed to exist independently of any physical space or dynamics. While many single phenomena have been shown to exhibit scale-free properties over narrow ranges, a phenomenon offering a far larger amount of data is solvent-accessible surface areas in globular proteins. These studies quantify the differential geometry of proteins, and resolve many evolutionary puzzles regarding the biological emergence of complexity.

Despite the considerable interest and research output generated by the SOC hypothesis, there remains no general agreement with regard to its mechanisms in abstract mathematical form. Bak, Tang and Wiesenfeld based their hypothesis on the behavior of their sandpile model. However, it has been argued that this model would actually generate 1/f² noise rather than 1/f noise. This claim was based on untested scaling assumptions, and a more rigorous analysis showed that sandpile models generally produce 1/f^α spectra, with α < 2. Other simulation models were proposed later that could produce true 1/f noise, and experimental sandpile models were observed to yield 1/f noise. In addition to the nonconservative theoretical model mentioned above, other theoretical models for SOC have been based upon information theory, mean field theory, the convergence of random variables, and cluster formation. A continuous model of self-organized criticality has also been proposed using tropical geometry.


Complex system

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Complex_system 

A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grids, transportation or communication systems, social and economic organizations (like cities), an ecosystem, a living cell, and ultimately the entire universe.

Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of an independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and the links represent their interactions.

Overview

The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.

As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the study of self-organization from physics, that of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.

Key concepts

Systems

Open systems have input and output flows, representing exchanges of matter, energy or information with their surroundings.

Complex systems are chiefly concerned with the behaviors and properties of systems. A system, broadly defined, is a set of entities that, through their interactions, relationships, or dependencies, form a unified whole. It is always defined in terms of its boundary, which determines the entities that are or are not part of the system. Entities lying outside the system then become part of the system's environment.

A system can exhibit properties that produce behaviors which are distinct from the properties and behaviors of its parts; these system-wide or global properties and behaviors are characteristics of how the system interacts with or appears to its environment, or of how its parts behave (say, in response to external stimuli) by virtue of being within the system. The notion of behavior implies that the study of systems is also concerned with processes that take place over time (or, in mathematics, some other phase space parameterization). Because of their broad, interdisciplinary applicability, systems concepts play a central role in complex systems.

As a field of study, complex systems is a subset of systems theory. General systems theory focuses similarly on the collective behaviors of interacting entities, but it studies a much broader class of systems, including non-complex systems where traditional reductionist approaches may remain viable. Indeed, systems theory seeks to explore and describe all classes of systems, and the invention of categories that are useful to researchers across widely varying fields is one of systems theory's main objectives.

As it relates to complex systems, systems theory contributes an emphasis on the way relationships and dependencies between a system's parts can determine system-wide properties. It also contributes to the interdisciplinary perspective of the study of complex systems: the notion that shared properties link systems across disciplines, justifying the pursuit of modeling approaches applicable to complex systems wherever they appear. Specific concepts important to complex systems, such as emergence, feedback loops, and adaptation, also originate in systems theory.

Complexity

"Systems exhibit complexity" means that their behaviors cannot be easily inferred from their properties. Any modeling approach that ignores such difficulties or characterizes them as noise, then, will necessarily produce models that are neither accurate nor useful. As yet no fully general theory of complex systems has emerged for addressing these problems, so researchers must solve them in domain-specific contexts. Researchers in complex systems address these problems by viewing the chief task of modeling to be capturing, rather than reducing, the complexity of their respective systems of interest.

While no generally accepted exact definition of complexity exists yet, there are many archetypal examples of complexity. Systems can be complex if, for instance, they have chaotic behavior (behavior that exhibits extreme sensitivity to initial conditions), or if they have emergent properties (properties that are not apparent from their components in isolation but which result from the relationships and dependencies they form when placed together in a system), or if they are computationally intractable to model (if they depend on a number of parameters that grows too rapidly with respect to the size of the system).

Networks

The interacting components of a complex system form a network, which is a collection of discrete objects and relationships between them, usually depicted as a graph of vertices connected by edges. Networks can describe the relationships between individuals within an organization, between logic gates in a circuit, between genes in gene regulatory networks, or between any other set of related entities.

Networks often describe the sources of complexity in complex systems. Studying complex systems as networks, therefore, enables many useful applications of graph theory and network science. Some complex systems, for example, are also complex networks, which have properties such as phase transitions and power-law degree distributions that readily lend themselves to emergent or chaotic behavior. The fact that the number of edges in a complete graph grows quadratically in the number of vertices sheds additional light on the source of complexity in large networks: as a network grows, the number of relationships between entities quickly dwarfs the number of entities in the network.
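The quadratic growth is elementary to verify: a complete graph on n vertices has n(n − 1)/2 edges, so the ratio of relationships to entities grows linearly with n. A short check in Python:

```python
def complete_graph_edges(n):
    """Number of edges in a complete graph on n vertices: n*(n-1)/2."""
    return n * (n - 1) // 2

# The ratio of relationships to entities grows linearly with n:
# 10 vertices -> 45 edges (4.5 per vertex),
# 100 vertices -> 4950 edges (49.5 per vertex).
```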

Nonlinearity

A sample solution in the Lorenz attractor when ρ = 28, σ = 10, and β = 8/3

Complex systems often have nonlinear behavior, meaning they may respond in different ways to the same input depending on their state or context. In mathematics and physics, nonlinearity describes systems in which a change in the size of the input does not produce a proportional change in the size of the output. For a given change in input, such systems may yield significantly greater than or less than proportional changes in output, or even no output at all, depending on the current state of the system or its parameter values.

Of particular interest to complex systems are nonlinear dynamical systems, which are systems of differential equations that have one or more nonlinear terms. Some nonlinear dynamical systems, such as the Lorenz system, can produce a mathematical phenomenon known as chaos. Chaos, as it applies to complex systems, refers to the sensitive dependence on initial conditions, or "butterfly effect", that a complex system can exhibit. In such a system, small changes to initial conditions can lead to dramatically different outcomes. Chaotic behavior can, therefore, be extremely hard to model numerically, because small rounding errors at an intermediate stage of computation can cause the model to generate completely inaccurate output. Furthermore, if a complex system returns to a state similar to one it held previously, it may behave completely differently in response to the same stimuli, so chaos also poses challenges for extrapolating from experience.
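This sensitivity is easy to demonstrate numerically. The sketch below uses a crude forward-Euler integration (adequate for illustration, not for serious numerical work, and with an arbitrarily chosen step size): it follows two Lorenz trajectories that start 10⁻⁸ apart and measures how far they have separated.

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def divergence_after(steps=20000, perturbation=1e-8):
    """Euclidean distance between two trajectories started
    `perturbation` apart, after `steps` Euler steps."""
    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.0 + perturbation)
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

After roughly 20 time units the initially negligible separation has grown by many orders of magnitude, while both trajectories remain bounded on the attractor, which is exactly the "butterfly effect" described above.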

Emergence

Gosper's Glider Gun creating "gliders" in the cellular automaton Conway's Game of Life

Another common feature of complex systems is the presence of emergent behaviors and properties: these are traits of a system that are not apparent from its components in isolation but which result from the interactions, dependencies, or relationships they form when placed together in a system. Emergence broadly describes the appearance of such behaviors and properties, and has applications to systems studied in both the social and physical sciences. While emergence is often used to refer only to the appearance of unplanned organized behavior in a complex system, emergence can also refer to the breakdown of an organization; it describes any phenomena which are difficult or even impossible to predict from the smaller entities that make up the system.

One example of a complex system whose emergent properties have been studied extensively is cellular automata. In a cellular automaton, a grid of cells, each in one of finitely many states, evolves according to a simple set of rules. These rules guide the "interactions" of each cell with its neighbors. Although the rules are only defined locally, they have been shown capable of producing globally interesting behavior, for example in Conway's Game of Life.
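Conway's rules fit in a few lines. The sketch below uses a common set-of-live-cells idiom in Python (one of many possible representations, not a canonical implementation): a dead cell with exactly three live neighbors is born, and a live cell with two or three live neighbors survives.

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of
    (x, y) coordinates of live cells on an unbounded grid."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with 3 neighbors, or 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

For example, a "blinker" (three live cells in a row) oscillates with period two under repeated application of `life_step`.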

Spontaneous order and self-organization

When emergence describes the appearance of unplanned order, it is spontaneous order (in the social sciences) or self-organization (in physical sciences). Spontaneous order can be seen in herd behavior, whereby a group of individuals coordinates their actions without centralized planning. Self-organization can be seen in the global symmetry of certain crystals, for instance the apparent radial symmetry of snowflakes, which arises from purely local attractive and repulsive forces both between water molecules and their surrounding environment.

Adaptation

Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the stock market, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities.

Features

Complex systems may have the following features:

Cascading failures
Due to the strong coupling between components in complex systems, a failure in one or more components can lead to cascading failures which may have catastrophic consequences for the functioning of the system. A localized attack may lead to cascading failures and abrupt collapse in spatial networks.
Complex systems may be open
Complex systems are usually open systems — that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium, but despite this flux there may be pattern stability (see synergetics).
Complex systems may exhibit critical transitions
Graphical representation of alternative stable states and the direction of critical slowing down prior to a critical transition (taken from Lever et al. 2020). Top panels (a) indicate stability landscapes at different conditions. Middle panels (b) indicate the rates of change akin to the slope of the stability landscapes, and bottom panels (c) indicate a recovery from a perturbation towards the system's future state (c.I) and in another direction (c.II).
Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak.
Complex systems may have a memory
Recovery from a critical transition may require more than a simple return to the conditions at which a transition occurred, a phenomenon called hysteresis. The history of a complex system may thus be important. Because complex systems are dynamical systems they change over time, and prior states may have an influence on present states. Interacting systems may have complex hysteresis of many transitions.
Complex systems may be nested
The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells, all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances, as well as the potential for large-scale systemic regime shifts.
Dynamic network of multiplicity
As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks which have many local interactions and a smaller number of inter-area connections are often employed. Natural complex systems often exhibit such topologies. In the human cortex for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions.
May produce emergent phenomena
Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, the termites in a mound have physiology, biochemistry and biological development that are at one level of analysis, but their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level.
Relationships are non-linear
In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to the cause. See nonlinearity.
Relationships contain feedback loops
Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered.

History

A perspective on the development of complexity science

Although humans have arguably been studying complex systems for thousands of years, the modern scientific study of complex systems is relatively young in comparison to established fields of science such as physics and chemistry. The history of the scientific study of these systems follows several different research trends.

In the area of mathematics, arguably the largest contribution to the study of complex systems was the discovery of chaos in deterministic systems, a feature of certain dynamical systems that is strongly related to nonlinearity. The study of neural networks was also integral in advancing the mathematics needed to study complex systems.

The notion of self-organizing systems is tied to work in nonequilibrium thermodynamics, including that pioneered by chemist and Nobel laureate Ilya Prigogine in his study of dissipative structures. Even older is the Hartree–Fock work on quantum chemistry equations and the later calculations of the structure of molecules, which can be regarded as among the earliest examples of emergence and emergent wholes in science.

One complex system containing humans is the classical political economy of the Scottish Enlightenment, later developed by the Austrian school of economics, which argues that order in market systems is spontaneous (or emergent) in that it is the result of human action, but not the execution of any human design.

Building on this, the Austrian school developed, from the 19th to the early 20th century, the economic calculation problem, along with the concept of dispersed knowledge, which were to fuel debates against the then-dominant Keynesian economics. This debate would notably lead economists, politicians, and other parties to explore the question of computational complexity.

A pioneer in the field, and inspired by Karl Popper's and Warren Weaver's works, Nobel prize economist and philosopher Friedrich Hayek dedicated much of his work, from the early to the late 20th century, to the study of complex phenomena, not constraining his work to human economies but venturing into other fields such as psychology, biology and cybernetics. Gregory Bateson played a key role in establishing the connection between anthropology and systems theory; he recognized that the interactive parts of cultures function much like ecosystems.

While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems.

Applications

Complexity in practice

The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.

Complexity management

As projects and acquisitions become increasingly complex, companies and governments are challenged to find effective ways to manage mega-acquisitions such as the Army Future Combat Systems. Acquisitions such as the FCS rely on a web of interrelated parts which interact unpredictably. As acquisitions become more network-centric and complex, businesses will be forced to find ways to manage complexity while governments will be challenged to provide effective governance to ensure flexibility and resiliency.

Complexity economics

Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann. Based on the ECI, Hausmann, Hidalgo and their team at The Observatory of Economic Complexity have produced GDP forecasts for the year 2020.

Complexity and education

Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".

Complexity and modeling

One of Friedrich Hayek's main contributions to early complexity theory is his distinction between the human capacity to predict the behavior of simple systems and the capacity to predict the behavior of complex systems through modeling. He believed that economics and the sciences of complex phenomena in general, which in his view included biology, psychology, and so on, could not be modeled after the sciences that deal with essentially simple phenomena like physics. Hayek would notably explain that complex phenomena, through modeling, can only allow pattern predictions, compared with the precise predictions that can be made out of non-complex phenomena.

Complexity and chaos theory

Complexity theory is rooted in chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and of the relevant equations describing the chaotic system's behavior, one could in principle make perfectly accurate predictions of the system, but in practice this is impossible to do with arbitrary accuracy. Ilya Prigogine argued that complexity is non-deterministic and gives no way whatsoever to precisely predict the future.

The emergence of complexity theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos".

A plot of the Lorenz attractor.

When one analyzes complex systems, sensitivity to initial conditions, for example, is not as important an issue as it is within chaos theory, where it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions.

Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which is to say, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense, chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change while retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.

Complexity and network science

A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Its resilience to failures was studied using percolation theory. Other examples are social networks, airline networks, biological networks and climate networks. Networks can also fail and recover spontaneously; for modeling of this phenomenon see Majdandzic et al. Interacting complex systems can be modeled as networks of networks; for their breakdown and recovery properties see Gao et al. Traffic in a city can be represented as a network whose weighted links represent the velocity between two junctions (nodes). This approach was found useful for characterizing the global traffic efficiency in a city, and quantitative definitions of resilience have also been proposed for traffic and other infrastructure systems. The complex pattern of exposures between financial institutions has been shown to trigger financial instability.

General form of complexity computation

The computational law of reachable optimality is established as a general form of computation for ordered systems.

The computational law of reachable optimality has four key components as described below.

1. Reachability of Optimality: any intended optimality must be reachable; an unreachable optimality has no meaning for a member of the ordered system, or even for the ordered system itself.

2. Prevailing and Consistency: maximizing reachability in order to explore the best available optimality is the prevailing computational logic for all members of the ordered system, and is accommodated by the ordered system.

3. Conditionality: the realizable tradeoff between reachability and optimality depends primarily upon the initial bet capacity and how the bet capacity evolves along with the payoff table update path triggered by bet behavior and empowered by the underlying law of reward and punishment. Precisely, it is a sequence of conditional events in which each next event occurs only once the status quo has been reached along the experience path.

4. Robustness: the more challenges a reachable optimality can accommodate, the more robust it is in terms of path integrity.

There are also four computation features in the law of reachable optimality.

1. Optimal Choice: computation in realizing Optimal Choice can be very simple or very complex. A simple rule in Optimal Choice is to accept whatever is reached, known as Reward As You Go (RAYG). A Reachable Optimality computation reduces to optimizing reachability when RAYG is adopted. The Optimal Choice computation can be more complex when multiple Nash equilibrium (NE) strategies are present in a reached game.

2. Initial Status: computation is assumed to start at an interesting beginning, even though the absolute beginning of an ordered system in nature may not, and need not, be present. An assumed neutral Initial Status facilitates an artificial or simulated computation and is not expected to change the prevalence of any findings.

3. Territory: an ordered system shall have a territory within which the universal computation sponsored by the system produces an optimal solution that remains inside that territory.

4. Reaching Pattern: the forms of Reaching Pattern in the computation space, or the Optimality-Driven Reaching Pattern in the computation space, depend primarily upon the nature and dimensions of the measure space underlying a computation space and upon the law of punishment and reward underlying the realized experience path of reaching. There are five basic forms of experience path of interest: the persistently positive reinforcement experience path, the persistently negative reinforcement experience path, the mixed persistent pattern experience path, the decaying scale experience path, and the selection experience path.

The compound computation in the selection experience path includes current and lagging interaction, dynamic topological transformation and implies both invariance and variance characteristics in an ordered system's experience path.

Also, the computational law of reachable optimality delineates the boundary between the complexity model, the chaotic model, and the deterministic model. When RAYG is the Optimal Choice computation, and the reaching pattern is a persistently positive, persistently negative, or mixed persistent experience path, the underlying computation is a simple system computation adopting deterministic rules. If no persistent pattern is experienced in the RAYG regime, the underlying computation hints at a chaotic system. When the optimal choice computation involves non-RAYG computation, it is a complexity computation driving the compound effect.
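
One loose, purely illustrative reading of the RAYG regime (a toy sketch of our own; the source gives no formal definition): under RAYG the Optimal Choice step degenerates to accepting whatever is reached first, so the whole computation reduces to optimizing reachability alone, whereas a non-RAYG Optimal Choice also searches the reached set for the best payoff:

```python
import random

def rayg(reached):
    """Reward As You Go: accept the first option that is reached."""
    return reached[0]

def optimal_choice(reached):
    """Non-RAYG computation: search the whole reachable set."""
    return max(reached)

rng = random.Random(0)
reached = [rng.random() for _ in range(10)]   # payoffs of reached options
assert optimal_choice(reached) >= rayg(reached)
```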

Akathisia

Akathisia
Other names: Acathisia
(Image caption: Common sign of akathisia)
Specialty: Neurology, psychiatry
Symptoms: Feelings of restlessness, inability to stay still, uneasiness
Complications: Violence or suicidal thoughts
Duration: Short- or long-term
Causes: Antipsychotics, selective serotonin reuptake inhibitors, metoclopramide, reserpine, Parkinson's disease, untreated schizophrenia
Diagnostic method: Based on symptoms
Differential diagnosis: Anxiety, tic disorders, tardive dyskinesia, dystonia, medication-induced parkinsonism, restless leg syndrome
Treatment: Reduce or switch antipsychotics, correct iron deficiency
Medication: Diphenhydramine, trazodone, benzodiazepines, benztropine, mirtazapine, beta blockers
Frequency: Relatively common

Akathisia is a movement disorder characterized by a subjective feeling of inner restlessness accompanied by mental distress and an inability to sit still. Usually, the legs are most prominently affected. Those affected may fidget, rock back and forth, or pace, while some may just have an uneasy feeling in their body. The most severe cases may result in aggression, violence or suicidal thoughts.

Antipsychotics, particularly the first generation antipsychotics, are a leading cause. Other causes may include selective serotonin reuptake inhibitors, metoclopramide, reserpine, Parkinson’s disease, and untreated schizophrenia. It may also occur upon stopping antipsychotics. The underlying mechanism is believed to involve dopamine. Diagnosis is based on the symptoms. It differs from restless leg syndrome in that akathisia is not associated with sleeping.

Treatment may include switching to an antipsychotic with a lower risk of the condition. The antidepressant mirtazapine has demonstrated benefit, and there is tentative evidence of benefit for diphenhydramine, trazodone, benztropine and beta blockers.

The term was first used by Czech neuropsychiatrist Ladislav Haškovec, who described the phenomenon in 1901. It is from Greek a-, meaning "not", and καθίζειν kathízein, meaning "to sit", or in other words an "inability to sit".

Classification

Akathisia is usually grouped as a medication-induced movement disorder, but it is also seen as a neuropsychiatric concern, since it can be experienced purely subjectively, with no apparent movement abnormalities. Akathisia is generally associated with antipsychotics, but it had already been described in Parkinson's disease and other neuropsychiatric disorders. It also presents with the use of non-psychiatric medications, including calcium channel blockers, antibiotics, and anti-nausea and anti-vertigo drugs.

Signs and symptoms

Symptoms of akathisia are often described in vague terms such as feeling nervous, uneasy, tense, twitchy, restless, and unable to relax. Reported symptoms also include insomnia, a sense of discomfort, motor restlessness, marked anxiety, and panic. Symptoms have also been said to resemble symptoms of neuropathic pain similar to those of fibromyalgia and restless legs syndrome. When due to psychiatric drugs, the symptoms are side effects that usually disappear quickly and remarkably when the medication is reduced or stopped. However, tardive akathisia, which has a late onset, may persist for months and sometimes years after the medication is discontinued.

When antipsychotic-induced akathisia is misdiagnosed, more antipsychotic may be prescribed, potentially worsening the symptoms. If the symptoms are not recognised and identified, akathisia can increase in severity and lead to suicidal thoughts, aggression and violence.

Visible signs of akathisia include repetitive movements such as crossing and uncrossing the legs, and constant shifting from one foot to the other. Other noted signs are rocking back and forth, fidgeting and pacing. However, not all observable restless motion is akathisia. For example, mania, agitated depression, and attention deficit hyperactivity disorder may look like akathisia, but the movements feel voluntary and not due to restlessness.

Jack Henry Abbott, who was diagnosed with akathisia, described the sensation in 1981: "You ache with restlessness, so you feel you have to walk, to pace. And then as soon as you start pacing, the opposite occurs to you; you must sit and rest. Back and forth, up and down you go … you cannot get relief …"

Causes

Medication-induced

Medication-related causes of akathisia:
Antipsychotics: haloperidol, amisulpride, risperidone, aripiprazole, lurasidone, ziprasidone
SSRIs: fluoxetine, paroxetine, citalopram, sertraline
Other antidepressants: venlafaxine, tricyclics, trazodone, mirtazapine
Antiemetics: metoclopramide, prochlorperazine, promethazine
Drug withdrawal: antipsychotic withdrawal
Serotonin syndrome: harmful combinations of psychotropic drugs

Medication-induced akathisia is termed acute akathisia and is frequently associated with the use of antipsychotics. Antipsychotics block dopamine receptors, but the pathophysiology is poorly understood. Additionally, drugs with successful therapeutic effects in the treatment of medication-induced akathisia have provided insight into the involvement of other transmitter systems; these include benzodiazepines, β-adrenergic blockers, and serotonin antagonists. Another major cause of the syndrome is the withdrawal observed in drug-dependent individuals. Since dopamine deficiency (or disruption in dopamine signalling) appears to play an important role in the development of RLS, a form of akathisia focused in the legs, the sudden withdrawal or rapidly decreased dosage of drugs which increase dopamine signalling may create deficits of the chemical that mimic dopamine antagonism and can thus precipitate RLS. This is why sudden cessation of opioids, cocaine, serotonergics, and other euphoria-inducing substances commonly produces RLS as a side effect.

Akathisia involves increased levels of the neurotransmitter norepinephrine, which is associated with mechanisms that regulate aggression, alertness, and arousal. It has been correlated with Parkinson's disease and related syndromes, and descriptions of akathisia predate the existence of pharmacologic agents.

Akathisia can be miscoded in side effect reports from antidepressant clinical trials as "agitation, emotional lability, and hyperkinesis (overactivity)"; akathisia has also been misdiagnosed as simple motor restlessness, though the latter is more properly classed as dyskinesia.

Diagnosis

The presence and severity of akathisia can be measured using the Barnes Akathisia Scale, which assesses both objective and subjective criteria. Precise assessment of akathisia is problematic, as there are various types making it difficult to differentiate from disorders with similar symptoms.

The primary distinguishing features of akathisia in comparison with other syndromes are primarily subjective characteristics, such as the feeling of inner restlessness and tension. Akathisia can commonly be mistaken for agitation secondary to psychotic symptoms or mood disorder, antipsychotic dysphoria, restless legs syndrome (RLS), anxiety, insomnia, drug withdrawal states, tardive dyskinesia, or other neurological and medical conditions.

The controversial diagnosis of "pseudoakathisia" is sometimes given.

Treatment

Acute akathisia induced by medication, often antipsychotics, is treated by reducing or discontinuing the medication. Low doses of the antidepressant mirtazapine may be of help. Benzodiazepines, such as lorazepam, beta blockers such as propranolol, anticholinergics such as benztropine, and serotonin antagonists such as cyproheptadine may also be of help in treating acute akathisia but are much less effective for treating chronic akathisia. Vitamin B, and iron supplementation if deficient, may be of help.

Epidemiology

As of 2007, published epidemiological data for akathisia was mostly limited to studies before the availability of second-generation antipsychotics. Prevalence rates may be lower for modern treatment as second-generation antipsychotics carry a lower risk of akathisia.

Approximately one out of four individuals treated with first-generation antipsychotics have akathisia.

History

The term was first used by Czech neuropsychiatrist Ladislav Haškovec, who described the phenomenon in a non-medication induced presentation in 1901.

Reports of medication-induced akathisia from chlorpromazine appeared in 1954. In 1960 there were further reports of akathisia in response to phenothiazines, the class of drugs to which chlorpromazine belongs. Akathisia is classified as an extrapyramidal side effect, along with other movement disorders that can be caused by antipsychotics.

Artificial consciousness


Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to "Define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995).

Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC, though there are challenges to that perspective. Proponents of AC believe it is possible to construct systems (e.g., computer systems) that can emulate this NCC interoperation.

Artificial consciousness concepts are also pondered in the philosophy of artificial intelligence through questions about mind, consciousness, and mental states.

Philosophical views

As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of “raw feels”, “what it is like” or qualia (Block 1997).

Plausibility debate

Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution (Block 1978; Bickle 2003).

In his article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that "Working in a fully automated mode, they [the computers] cannot exhibit creativity, emotions, or free will. A computer, like a washing machine, is a slave operated by its components."

For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam 1967).

Computational Foundation argument

One of the most explicit arguments for the plausibility of AC comes from David Chalmers. His proposal, found within his article Chalmers 2011, is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends his claim thus: computers perform computations; computations can capture other systems' abstract causal organization; and mental properties are nothing over and above abstract causal organization.

The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role". He adverts to the work of Armstrong 1968 and Lewis 1972 in claiming that "[s]ystems with the same causal topology…will share their psychological properties".

Phenomenological properties are not prima facie definable in terms of their causal roles. Establishing that phenomenological properties are amenable to individuation by causal role therefore requires argument. Chalmers provides his Dancing Qualia Argument for this purpose.

Chalmers begins by assuming that agents with identical causal organizations could have different experiences. He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. Ex hypothesi, the experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could "notice" the shift in experience.

Critics of AC object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.

Ethics

If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g., what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer of a building or of a larger machine is a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction (see below).

The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:

61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.

Research and implementation proposals

Aspects of consciousness

There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. A variety of functions in which consciousness plays a role were suggested by Bernard Baars (Baars 1988) and others. The functions of consciousness suggested by Bernard Baars are Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming Function, Metacognitive and Self-monitoring Function, and Autoprogramming and Self-maintenance Function. Igor Aleksander suggested 12 principles for artificial consciousness (Aleksander 1995) and these are: The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.

Awareness

Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that neurons are activated by processes, not only by states or objects. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined, and is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.

There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.

Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.

Memory

Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.
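
Kanerva's sparse distributed memory can be sketched in a few lines. The toy below (an illustration of the general SDM idea, not IDA's actual implementation) stores a pattern at every randomly chosen "hard location" within a Hamming radius of the write address, and recovers it by a majority vote over those locations, even from a corrupted cue:

```python
import random

class SparseDistributedMemory:
    """Toy binary Kanerva SDM: patterns are written to, and read back
    from, every hard location within Hamming radius r of the address."""
    def __init__(self, n_bits, n_locations, radius, rng):
        self.n, self.r = n_bits, radius
        self.addresses = [[rng.randint(0, 1) for _ in range(n_bits)]
                          for _ in range(n_locations)]
        self.counters = [[0] * n_bits for _ in range(n_locations)]

    def _near(self, addr):
        # indices of hard locations within Hamming radius of addr
        for i, a in enumerate(self.addresses):
            if sum(x != y for x, y in zip(a, addr)) <= self.r:
                yield i

    def write(self, addr, data):
        for i in self._near(addr):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, addr):
        sums = [0] * self.n
        for i in self._near(addr):
            for j, c in enumerate(self.counters[i]):
                sums[j] += c
        return [1 if s > 0 else 0 for s in sums]

rng = random.Random(1)
sdm = SparseDistributedMemory(n_bits=64, n_locations=800, radius=28, rng=rng)
pattern = [rng.randint(0, 1) for _ in range(64)]
sdm.write(pattern, pattern)            # autoassociative store
noisy = list(pattern)
for i in rng.sample(range(64), 4):     # corrupt 4 bits of the cue
    noisy[i] ^= 1
recalled = sdm.read(noisy)
```

Because the pattern is smeared across many hard locations, a read from a slightly corrupted address still activates enough of the same locations for the vote to reconstruct the stored pattern.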

Learning

Learning is also considered necessary for AC. According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events (Baars 1988). According to Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments" (Cleeremans 2001).

Anticipation

The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.

Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication here is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. In order to do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chess board, but also in novel environments that may change, to be executed only when appropriate to simulate and control the real world.

Subjective experience

Subjective experiences, or qualia, are widely considered to be the hard problem of consciousness. Indeed, it is held to pose a challenge to physicalism, let alone computationalism. On the other hand, other fields of science face limits on what can be observed, such as the uncertainty principle in physics, and these limits have not made research in those fields impossible.

Role of cognitive architectures

The term "cognitive architecture" may refer to a theory about the structure of the human mind, or any portion or function thereof, including consciousness. In another context, a cognitive architecture implements the theory on computers. An example is QuBIC: Quantum and Bio-inspired Cognitive Architecture for Machine Consciousness. One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model; the results need to be in a formalized form so that they can be the basis of a computer program. A cognitive architecture also gives an A.I. a clear structure within which to build and implement its thought process.

Symbolic or hybrid proposals

Franklin's Intelligent Distribution Agent

Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (Baars 1988, 1997). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and Franklin 2003 for details). While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task." IDA has been extended to LIDA (Learning Intelligent Distribution Agent).
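
The codelet idea can be sketched directly from that description. The toy below is our own illustration, not IDA's Java code, and the codelet names and findings are invented: small special-purpose codelets run on separate threads, each posting what it notices to a shared workspace that stands in for the global workspace:

```python
import queue
import threading

workspace = queue.Queue()   # stand-in for the shared global workspace

def make_codelet(name, finding):
    """A codelet: a narrow, special-purpose job on its own thread
    that posts whatever it notices to the workspace."""
    def run():
        workspace.put((name, finding))
    return threading.Thread(target=run)

codelets = [make_codelet("skill-matcher", "sailor prefers sea duty"),
            make_codelet("policy-checker", "rotation policy satisfied")]
for c in codelets:
    c.start()
for c in codelets:
    c.join()

# Drain the broadcasts the codelets posted to the workspace.
findings = {}
while not workspace.empty():
    name, finding = workspace.get()
    findings[name] = finding
```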

Ron Sun's cognitive architecture CLARION

CLARION posits a two-level representation that explains the distinction between conscious and unconscious mental processes.

CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill learning tasks have been simulated using CLARION that span the spectrum ranging from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task (Sun 2002). Among them, SRT, AGL, and PC are typical implicit learning tasks, very much relevant to the issue of consciousness as they operationalized the notion of consciousness in the context of psychological experiments.

Ben Goertzel's OpenCog

Ben Goertzel is pursuing an embodied AGI through the open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, being done at the Hong Kong Polytechnic University.

Connectionist proposals

Haikonen's cognitive architecture

Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2003) was reportedly not capable of AC, but did exhibit emotions as expected. See Doan (2009) for a comprehensive introduction to Haikonen's cognitive architecture. An updated account of Haikonen's architecture, along with a summary of his philosophical views, is given in Haikonen (2012), Haikonen (2019).

Shanahan's cognitive architecture

Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination") (Shanahan 2006). For discussions of Shanahan's architecture, see (Gamez 2008) and (Reggia 2013) and Chapter 20 of (Haikonen 2012).

Takeno's self-awareness research

Self-awareness in robots is being investigated by Junichi Takeno at Meiji University in Japan. Takeno asserts that he has developed a robot capable of discriminating between its self-image in a mirror and any other robot with an identical image, a claim that has been reviewed (Takeno, Inaba & Suzuki 2005). Takeno asserts that he first contrived the computational module called a MoNAD, which has a self-aware function, and then constructed the artificial consciousness system by formulating the relationships between emotions, feelings and reason by connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, stating that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." He argues that the most important point in developing artificial consciousness, or in clarifying human consciousness, is the development of a function of self-awareness, and he claims that he has demonstrated physical and mathematical evidence for this in his thesis. He also demonstrated that robots can study episodes in memory where the emotions were stimulated and use this experience to take predictive actions to prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).

Aleksander's impossible mind

Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language. Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.

Thaler's Creativity Machine Paradigm

Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI), or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.

Michael Graziano's attention schema

In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis" proposing a theory of consciousness as an attention schema. Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain". This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial location of a person's body. This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself: the brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
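The distinction between attention itself and a schema of attention can be made concrete with a toy sketch. This is a loose illustration of the concept, not drawn from Graziano's models: the "real" attentional state is a full weighting over sensory inputs, while the schema is a deliberately coarse, simplified summary of it (which item is attended, and roughly how strongly).

```python
import numpy as np

def attention(salience, temperature=0.5):
    """The actual attentional state: a normalized weighting over inputs."""
    z = np.asarray(salience, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_schema(weights, labels):
    """A coarse internal model of that state: which item is attended and
    how strongly -- most of the detail is deliberately discarded."""
    i = int(np.argmax(weights))
    return {"focus": labels[i], "strength": float(weights[i])}

labels = ["face", "sound", "touch"]
w = attention([2.0, 0.5, 0.1])
schema = attention_schema(w, labels)
print(schema)  # the schema reports 'face' as the attended item
```

The schema is cheaper to maintain and to report than the full attentional state, which is the sense in which the theory treats self-reported awareness as a simplified model rather than a direct readout.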

"Self-modeling"

In order to be "self-aware," robots may use internal models to simulate their own actions.
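A minimal, hypothetical sketch of this idea (not any specific published system): a simulated robot carries a forward model of its own dynamics and "imagines" the outcome of candidate action sequences before committing to one.

```python
def forward_model(state, action):
    """Internal self-model: predicts the robot's own next state
    (dynamics assumed known here for simplicity)."""
    position, velocity = state
    velocity = 0.8 * velocity + action   # friction plus motor command
    return (position + velocity, velocity)

def plan(state, goal, actions=(-1.0, 0.0, 1.0), horizon=3):
    """Mentally roll out every action sequence with the self-model and
    return the first action of the rollout ending closest to the goal."""
    best, best_cost = None, float("inf")

    def rollout(s, seq):
        nonlocal best, best_cost
        if len(seq) == horizon:
            cost = abs(s[0] - goal)
            if cost < best_cost:
                best, best_cost = seq[0], cost
            return
        for a in actions:
            rollout(forward_model(s, a), seq + [a])

    rollout(state, [])
    return best

print(plan((0.0, 0.0), goal=5.0))  # → 1.0: the robot 'imagines' moving right
```

The simulation replaces trial-and-error in the world with trial-and-error inside the self-model, which is the sense in which internal models support a rudimentary form of self-awareness.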

Testing

The best-known method for testing machine intelligence is the Turing test. But when interpreted as purely observational, this test contradicts the philosophy-of-science principle of the theory-dependence of observations. It has also been suggested that Alan Turing's recommendation of imitating not an adult human consciousness but a child's consciousness should be taken seriously.

Other tests, such as ConsScale, test the presence of features inspired by biological systems, or measure the cognitive development of artificial systems.

Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Although various systems may display various signs of behavior correlated with functional consciousness, there is no conceivable way in which third-person tests can have access to first-person phenomenological features. Because of that, and because there is no empirical definition of consciousness, a test of presence of consciousness in AC may be impossible.

In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness: a positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
