Thursday, September 30, 2021

Complex system

From Wikipedia, the free encyclopedia

A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grids, transportation or communication systems, social and economic organizations (like cities), an ecosystem, a living cell, and ultimately the entire universe.

Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of an independent area of research. In many cases, it is useful to represent such a system as a network, where the nodes represent the components and the links their interactions.

The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.

As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the study of self-organization and critical phenomena from physics, that of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.

Key concepts

Systems

Open systems have input and output flows, representing exchanges of matter, energy or information with their surroundings.

Complex systems are chiefly concerned with the behaviors and properties of systems. A system, broadly defined, is a set of entities that, through their interactions, relationships, or dependencies, form a unified whole. It is always defined in terms of its boundary, which determines the entities that are or are not part of the system. Entities lying outside the system then become part of the system's environment.

A system can exhibit properties that produce behaviors which are distinct from the properties and behaviors of its parts; these system-wide or global properties and behaviors are characteristics of how the system interacts with or appears to its environment, or of how its parts behave (say, in response to external stimuli) by virtue of being within the system. The notion of behavior implies that the study of systems is also concerned with processes that take place over time (or, in mathematics, some other phase space parameterization). Because of their broad, interdisciplinary applicability, systems concepts play a central role in complex systems.

As a field of study, a complex system is a subset of systems theory. General systems theory focuses similarly on the collective behaviors of interacting entities, but it studies a much broader class of systems, including non-complex systems where traditional reductionist approaches may remain viable. Indeed, systems theory seeks to explore and describe all classes of systems, and the invention of categories that are useful to researchers across widely varying fields is one of systems theory's main objectives.

As it relates to complex systems, systems theory contributes an emphasis on the way relationships and dependencies between a system's parts can determine system-wide properties. It also contributes to the interdisciplinary perspective of the study of complex systems: the notion that shared properties link systems across disciplines, justifying the pursuit of modeling approaches applicable to complex systems wherever they appear. Specific concepts important to complex systems, such as emergence, feedback loops, and adaptation, also originate in systems theory.

Complexity

"Systems exhibit complexity" means that their behaviors cannot be easily inferred from their properties. Any modeling approach that ignores such difficulties or characterizes them as noise, then, will necessarily produce models that are neither accurate nor useful. As yet no fully general theory of complex systems has emerged for addressing these problems, so researchers must solve them in domain-specific contexts. Researchers in complex systems address these problems by viewing the chief task of modeling to be capturing, rather than reducing, the complexity of their respective systems of interest.

While no generally accepted exact definition of complexity exists yet, there are many archetypal examples of complexity. Systems can be complex if, for instance, they have chaotic behavior (behavior that exhibits extreme sensitivity to initial conditions, among other properties), or if they have emergent properties (properties that are not apparent from their components in isolation but which result from the relationships and dependencies they form when placed together in a system), or if they are computationally intractable to model (if they depend on a number of parameters that grows too rapidly with respect to the size of the system).

Networks

The interacting components of a complex system form a network, which is a collection of discrete objects and relationships between them, usually depicted as a graph of vertices connected by edges. Networks can describe the relationships between individuals within an organization, between logic gates in a circuit, between genes in gene regulatory networks, or between any other set of related entities.

Networks often describe the sources of complexity in complex systems. Studying complex systems as networks, therefore, enables many useful applications of graph theory and network science. Many complex systems, for example, are also complex networks, which have properties such as phase transitions and power-law degree distributions that readily lend themselves to emergent or chaotic behavior. The fact that the number of edges in a complete graph grows quadratically in the number of vertices sheds additional light on the source of complexity in large networks: as a network grows, the number of relationships between entities quickly dwarfs the number of entities in the network.
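
The quadratic growth mentioned above is easy to verify directly. The following minimal Python sketch (illustrative only, not part of the original article) counts the edges of a complete graph, n(n-1)/2, for increasing n:

def complete_graph_edges(n: int) -> int:
    """Number of edges in a complete (fully connected) graph on n vertices: n(n-1)/2."""
    return n * (n - 1) // 2

# Relationships quickly dwarf the entities themselves as the network grows.
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} entities -> {complete_graph_edges(n):>12,} possible pairwise relationships")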

Nonlinearity

A sample solution in the Lorenz attractor when ρ = 28, σ = 10, and β = 8/3

Complex systems often have nonlinear behavior, meaning they may respond in different ways to the same input depending on their state or context. In mathematics and physics, nonlinearity describes systems in which a change in the size of the input does not produce a proportional change in the size of the output. For a given change in input, such systems may yield significantly greater than or less than proportional changes in output, or even no output at all, depending on the current state of the system or its parameter values.

Of particular interest to complex systems are nonlinear dynamical systems, which are systems of differential equations that have one or more nonlinear terms. Some nonlinear dynamical systems, such as the Lorenz system, can produce a mathematical phenomenon known as chaos. Chaos, as it applies to complex systems, refers to the sensitive dependence on initial conditions, or "butterfly effect", that a complex system can exhibit. In such a system, small changes to initial conditions can lead to dramatically different outcomes. Chaotic behavior can, therefore, be extremely hard to model numerically, because small rounding errors at an intermediate stage of computation can cause the model to generate completely inaccurate output. Furthermore, if a complex system returns to a state similar to one it held previously, it may behave completely differently in response to the same stimuli, so chaos also poses challenges for extrapolating from experience.
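
As a concrete illustration of this sensitivity, the short Python sketch below integrates the Lorenz system with the parameter values from the caption above (σ = 10, ρ = 28, β = 8/3). It is only a rough numerical sketch: it uses a plain fixed-step Euler integrator, and the step size, run length, and perturbation size are arbitrary illustrative choices.

import numpy as np

# Lorenz system with sigma = 10, rho = 28, beta = 8/3 (the values in the caption above).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(state):
    x, y, z = state
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def integrate(state, dt=0.001, steps=40000):
    # Plain fixed-step Euler integration; crude, but enough to show the divergence.
    for _ in range(steps):
        state = state + dt * lorenz(state)
    return state

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1e-8]))  # perturb the initial z by one part in 10^8
print("final separation:", np.linalg.norm(a - b))  # grows to the size of the attractor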

Emergence

Gosper's Glider Gun creating "gliders" in the cellular automaton Conway's Game of Life

Another common feature of complex systems is the presence of emergent behaviors and properties: these are traits of a system that are not apparent from its components in isolation but which result from the interactions, dependencies, or relationships they form when placed together in a system. Emergence broadly describes the appearance of such behaviors and properties, and has applications to systems studied in both the social and physical sciences. While emergence is often used to refer only to the appearance of unplanned organized behavior in a complex system, emergence can also refer to the breakdown of an organization; it describes any phenomena which are difficult or even impossible to predict from the smaller entities that make up the system.

One example of a complex system whose emergent properties have been studied extensively is cellular automata. In a cellular automaton, a grid of cells, each in one of finitely many states, evolves according to a simple set of rules. These rules guide the "interactions" of each cell with its neighbors. Although the rules are only defined locally, they have been shown capable of producing globally interesting behavior, for example in Conway's Game of Life.
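
A minimal Python sketch of such a cellular automaton is given below. It implements the standard Game of Life rules on a small toroidal grid and is purely illustrative; the grid size and the chosen "glider" pattern are arbitrary.

import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Apply one generation of the Life rules on a toroidal (wrap-around) grid."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider": a local pattern that translates itself diagonally every four generations.
grid = np.zeros((20, 20), dtype=int)
grid[1:4, 1:4] = [[0, 1, 0], [0, 0, 1], [1, 1, 1]]
for _ in range(4):
    grid = step(grid)
print(grid.sum(), "live cells after four generations")  # still 5: the glider has moved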

Spontaneous order and self-organization

When emergence describes the appearance of unplanned order, it is called spontaneous order (in the social sciences) or self-organization (in the physical sciences). Spontaneous order can be seen in herd behavior, whereby a group of individuals coordinates their actions without centralized planning. Self-organization can be seen in the global symmetry of certain crystals, for instance the apparent radial symmetry of snowflakes, which arises from purely local attractive and repulsive forces between water molecules and between the molecules and their surrounding environment.

Adaptation

Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the stock market, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses, and any human social group-based endeavor in a cultural and social system, such as political parties or communities.

Features

Complex systems may have the following features:

Cascading failures
Due to the strong coupling between components in complex systems, a failure in one or more components can lead to cascading failures which may have catastrophic consequences for the functioning of the system. A localized attack may lead to cascading failures and abrupt collapse in spatial networks.
Complex systems may be open
Complex systems are usually open systems — that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium; but despite this flux, there may be pattern stability (see synergetics).
Complex systems may exhibit critical transitions
Graphical representation of alternative stable states and the direction of critical slowing down prior to a critical transition (taken from Lever et al. 2020), showing stability landscapes under different conditions, their rates of change, and recovery trajectories from perturbations.
Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak.
Complex systems may have a memory
Recovery from a critical transition may require more than a simple return to the conditions at which a transition occurred, a phenomenon called hysteresis. The history of a complex system may thus be important. Because complex systems are dynamical systems they change over time, and prior states may have an influence on present states. Interacting systems may have complex hysteresis of many transitions. An example of hysteresis has been observed in urban traffic. 
Complex systems may be nested
The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells, all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances, as well as the potential for large-scale systemic regime shifts.
Dynamic network of multiplicity
As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks, which have many local interactions and a smaller number of inter-area connections, are often employed. Natural complex systems often exhibit such topologies. In the human cortex, for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions.
May produce emergent phenomena
Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, the termites in a mound have physiology, biochemistry and biological development that are at one level of analysis, but their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level.
Relationships are non-linear
In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to the cause. See nonlinearity; a minimal illustrative sketch follows this list.
Relationships contain feedback loops
Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered.
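
The logistic map is a standard minimal example (not taken from this article) that combines the last two features, non-linearity and feedback, in a single line of arithmetic: growth proportional to x acts as amplifying feedback, the (1 - x) crowding term acts as damping feedback, and in the chaotic regime a tiny perturbation produces a large effect.

def logistic_orbit(x0: float, r: float = 3.9, steps: int = 50) -> float:
    """Iterate the logistic map x -> r * x * (1 - x) and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# In the chaotic regime (r = 3.9), a perturbation of one part in a million
# to the starting value produces a completely different result after 50 steps.
print(logistic_orbit(0.200000))
print(logistic_orbit(0.200001))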

History

A perspective on the development of complexity science

Although humans have arguably been studying complex systems for thousands of years, the modern scientific study of complex systems is relatively young compared with established fields of science such as physics and chemistry. The history of the scientific study of these systems follows several different research trends.

In the area of mathematics, arguably the largest contribution to the study of complex systems was the discovery of chaos in deterministic systems, a feature of certain dynamical systems that is strongly related to nonlinearity. The study of neural networks was also integral in advancing the mathematics needed to study complex systems.

The notion of self-organizing systems is tied to work in nonequilibrium thermodynamics, including that pioneered by chemist and Nobel laureate Ilya Prigogine in his study of dissipative structures. Even older is the Hartree–Fock work on the quantum chemistry equations and the later calculations of the structure of molecules, which can be regarded as one of the earliest examples of emergence and emergent wholes in science.

An early conception of a complex system containing humans is found in the classical political economy of the Scottish Enlightenment, later developed by the Austrian school of economics, which argues that order in market systems is spontaneous (or emergent) in that it is the result of human action, but not the execution of any human design.

Building on this, the Austrian school developed, from the 19th to the early 20th century, the economic calculation problem, along with the concept of dispersed knowledge, which were to fuel debates against the then-dominant Keynesian economics. This debate would notably lead economists, politicians, and other parties to explore the question of computational complexity.

A pioneer in the field, and inspired by Karl Popper's and Warren Weaver's works, the Nobel Prize-winning economist and philosopher Friedrich Hayek dedicated much of his work, from the early to the late 20th century, to the study of complex phenomena, not constraining his work to human economies but venturing into other fields such as psychology, biology and cybernetics. Cybernetician Gregory Bateson played a key role in establishing the connection between anthropology and systems theory; he recognized that the interactive parts of cultures function much like ecosystems.

While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems.

Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research applying solutions that originated in the epistemology of physics has brought about a gradual paradigm shift in the theoretical articulations and methodological approaches of economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely "econophysics," broadly defined as a cross-discipline that applies statistical physics methodologies, mostly based on complex systems theory and chaos theory, to economic analysis.

Applications

Complexity in practice

The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.

Complexity management

As projects and acquisitions become increasingly complex, companies and governments are challenged to find effective ways to manage mega-acquisitions such as the Army Future Combat Systems. Acquisitions such as the FCS rely on a web of interrelated parts which interact unpredictably. As acquisitions become more network-centric and complex, businesses will be forced to find ways to manage complexity while governments will be challenged to provide effective governance to ensure flexibility and resiliency.

Complexity economics

Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann. Based on the ECI, Hausmann, Hidalgo and their team at The Observatory of Economic Complexity have produced GDP forecasts for the year 2020.

Complexity and education

Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".

Complexity and modeling

One of Friedrich Hayek's main contributions to early complexity theory is his distinction between the human capacity to predict the behavior of simple systems and the capacity to predict the behavior of complex systems through modeling. He believed that economics and the sciences of complex phenomena in general, which in his view included biology, psychology, and so on, could not be modeled after the sciences that deal with essentially simple phenomena like physics. Hayek would notably explain that complex phenomena, through modeling, can only allow pattern predictions, in contrast to the precise predictions that can be made about non-complex phenomena.

Complexity and chaos theory

Complexity theory is rooted in chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy. Ilya Prigogine argued that complexity is non-deterministic and gives no way whatsoever to precisely predict the future.

The emergence of complexity theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos".

A plot of the Lorenz attractor.

When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, in which it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions.

Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.

Complexity and network science

A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers), and the resilience of the Internet to failures has been studied using percolation theory, a form of complex systems analysis. The failure and recovery of these networks is an open area of research. Other examples of complex networks include social networks, financial institution interdependencies, traffic systems, airline networks, biological networks, and climate networks. Finally, entire networks often interact in a complex manner; if an individual complex system can be represented as a network, then interacting complex systems can be modeled as networks of networks with dynamic properties.
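
A rough sketch of the percolation-style resilience analysis mentioned above is given below. It assumes the networkx package and uses a scale-free Barabási–Albert graph as a stand-in for an Internet-like network, removing a random fraction of nodes and measuring the surviving giant component; the graph parameters are arbitrary illustrative choices.

import random
import networkx as nx

def giant_component_fraction(graph, fraction_removed):
    """Remove a random fraction of nodes and return the surviving giant-component share."""
    g = graph.copy()
    removed = random.sample(list(g.nodes), int(fraction_removed * g.number_of_nodes()))
    g.remove_nodes_from(removed)
    if g.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(g), key=len)
    return len(largest) / graph.number_of_nodes()

internet_like = nx.barabasi_albert_graph(n=10000, m=2, seed=1)  # scale-free toy network
for f in (0.0, 0.2, 0.5, 0.8):
    print(f"remove {f:.0%} of nodes -> giant component fraction {giant_component_fraction(internet_like, f):.2f}")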

One of the main reasons for the high vulnerability of a network is its central control; i.e., a node which is disconnected from the cluster is usually regarded as failed. A percolation approach to generating and studying decentralized systems is to use reinforced nodes that have their own support and redundancy links. Network science has also been found useful for better understanding the complexity of earth systems.

Self-organization

From Wikipedia, the free encyclopedia
 
Self-organization in micron-sized Nb3O7(OH) cubes during a hydrothermal treatment at 200 °C. Initially amorphous cubes gradually transform into ordered 3D meshes of crystalline nanowires.

Self-organization, also called (in the social sciences) spontaneous order, is a process where some form of overall order arises from local interactions between parts of an initially disordered system. The process can be spontaneous when sufficient energy is available, not needing control by any external agent. It is often triggered by seemingly random fluctuations, amplified by positive feedback. The resulting organization is wholly decentralized, distributed over all the components of the system. As such, the organization is typically robust and able to survive or self-repair substantial perturbation. Chaos theory discusses self-organization in terms of islands of predictability in a sea of chaotic unpredictability.

Self-organization occurs in many physical, chemical, biological, robotic, and cognitive systems. Examples of self-organization include crystallization, thermal convection of fluids, chemical oscillation, animal swarming, neural circuits, and black markets.

Overview

Self-organization is realized in the physics of non-equilibrium processes, and in chemical reactions, where it is often described as self-assembly. The concept has proven useful in biology, from molecular to ecosystem level.[3] Cited examples of self-organizing behaviour also appear in the literature of many other disciplines, both in the natural sciences and in the social sciences such as economics or anthropology. Self-organization has also been observed in mathematical systems such as cellular automata.[4] Self-organization is an example of the related concept of emergence.

Self-organization relies on four basic ingredients:

  1. strong dynamical non-linearity, often though not necessarily involving positive and negative feedback
  2. balance of exploitation and exploration
  3. multiple interactions
  4. availability of energy (to overcome natural tendency toward entropy, or loss of free energy)

Principles

The cybernetician William Ross Ashby formulated the original principle of self-organization in 1947. It states that any deterministic dynamic system automatically evolves towards a state of equilibrium that can be described in terms of an attractor in a basin of surrounding states. Once there, the further evolution of the system is constrained to remain in the attractor. This constraint implies a form of mutual dependency or coordination between its constituent components or subsystems. In Ashby's terms, each subsystem has adapted to the environment formed by all other subsystems.

The cybernetician Heinz von Foerster formulated the principle of "order from noise" in 1960. It notes that self-organization is facilitated by random perturbations ("noise") that let the system explore a variety of states in its state space. This increases the chance that the system will arrive into the basin of a "strong" or "deep" attractor, from which it then quickly enters the attractor itself. The biophysicist Henri Atlan developed this concept by proposing the principle of "complexity from noise" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fumée. The physicist and chemist Ilya Prigogine formulated a similar principle as "order through fluctuations" or "order out of chaos". It is applied in the method of simulated annealing for problem solving and machine learning.
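
Simulated annealing, mentioned at the end of the paragraph above, makes the "order from noise" idea concrete: random perturbations let the search escape shallow minima while a falling temperature gradually locks it into a deep basin. The Python sketch below is a generic illustration with an arbitrary one-dimensional objective function, not a reference implementation of any particular system.

import math
import random

def objective(x: float) -> float:
    # A bumpy one-dimensional landscape: many local minima, global minimum near x = -0.5.
    return x * x + 10 * math.sin(3 * x)

def anneal(x=8.0, temperature=5.0, cooling=0.999, steps=20000):
    best = x
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)          # a random perturbation ("noise")
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with a probability that
        # shrinks as the temperature falls, so the noise is gradually "frozen out".
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        if objective(x) < objective(best):
            best = x
        temperature *= cooling
    return best

print(anneal())  # typically lands near the global minimum around x = -0.5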

History

The idea that the dynamics of a system can lead to an increase in its organization has a long history. The ancient atomists such as Democritus and Lucretius believed that a designing intelligence is unnecessary to create order in nature, arguing that given enough time and space and matter, order emerges by itself.

The philosopher René Descartes presents self-organization hypothetically in the fifth part of his 1637 Discourse on Method. He elaborated on the idea in his unpublished work The World.

Immanuel Kant used the term "self-organizing" in his 1790 Critique of Judgment, where he argued that teleology is a meaningful concept only if there exists such an entity whose parts or "organs" are simultaneously ends and means. Such a system of organs must be able to behave as if it has a mind of its own, that is, it is capable of governing itself.

In such a natural product as this every part is thought as owing its presence to the agency of all the remaining parts, and also as existing for the sake of the others and of the whole, that is as an instrument, or organ... The part must be an organ producing the other parts—each, consequently, reciprocally producing the others... Only under these conditions and upon these terms can such a product be an organized and self-organized being, and, as such, be called a physical end.

Sadi Carnot (1796–1832) and Rudolf Clausius (1822–1888) discovered the second law of thermodynamics in the 19th century. It states that total entropy, sometimes understood as disorder, will always increase over time in an isolated system. This means that a system cannot spontaneously increase its order without an external relationship that decreases order elsewhere (e.g. through consuming the low-entropy energy of a battery and diffusing high-entropy heat).

18th-century thinkers had sought to understand the "universal laws of form" to explain the observed forms of living organisms. This idea became associated with Lamarckism and fell into disrepute until the early 20th century, when D'Arcy Wentworth Thompson (1860–1948) attempted to revive it.

The psychiatrist and engineer W. Ross Ashby introduced the term "self-organizing" to contemporary science in 1947. It was taken up by the cyberneticians Heinz von Foerster, Gordon Pask, and Stafford Beer; von Foerster organized a conference on "The Principles of Self-Organization" at the University of Illinois' Allerton Park in June 1960, which led to a series of conferences on Self-Organizing Systems. Norbert Wiener took up the idea in the second edition of his Cybernetics: or Control and Communication in the Animal and the Machine (1961).

Self-organization was associated with general systems theory in the 1960s, but did not become commonplace in the scientific literature until it was adopted by physicists such as Hermann Haken and by complex systems researchers across a wider range of fields: in cosmology by Erich Jantsch, in chemistry through dissipative systems, and in biology and sociology as autopoiesis. It then fed into systems thinking in the 1980s (Santa Fe Institute) and 1990s (complex adaptive systems), and continues to the present day in rhizomatic network theories of disruptive emerging technologies.

Around 2008-2009, a concept of guided self-organization started to take shape. This approach aims to regulate self-organization for specific purposes, so that a dynamical system may reach specific attractors or outcomes. The regulation constrains a self-organizing process within a complex system by restricting local interactions between the system components, rather than following an explicit control mechanism or a global design blueprint. The desired outcomes, such as increases in the resultant internal structure and/or functionality, are achieved by combining task-independent global objectives with task-dependent constraints on local interactions.

By field

Convection cells in a gravity field

Physics

The many self-organizing phenomena in physics include phase transitions and spontaneous symmetry breaking such as spontaneous magnetization and crystal growth in classical physics, and the laser, superconductivity and Bose–Einstein condensation in quantum physics. It is also found in self-organized criticality in dynamical systems, in tribology, in spin foam systems and loop quantum gravity, in river basins and deltas, in dendritic solidification (snowflakes), in capillary imbibition, and in turbulent structures.

Chemistry

The DNA structure shown schematically at left self-assembles into the structure at right.

Self-organization in chemistry includes molecular self-assembly, reaction–diffusion systems and oscillating reactions, autocatalytic networks, liquid crystals, grid complexes, colloidal crystals, self-assembled monolayers, micelles, microphase separation of block copolymers, and Langmuir–Blodgett films.

Biology

Birds flocking, an example of self-organization in biology
 

Self-organization in biology can be observed in spontaneous folding of proteins and other biomacromolecules, formation of lipid bilayer membranes, pattern formation and morphogenesis in developmental biology, the coordination of human movement, social behaviour in insects (bees, ants, termites) and mammals, and flocking behaviour in birds and fish.

The mathematical biologist Stuart Kauffman and other structuralists have suggested that self-organization may play roles alongside natural selection in three areas of evolutionary biology, namely population dynamics, molecular evolution, and morphogenesis. However, this does not take into account the essential role of energy in driving biochemical reactions in cells. The systems of reactions in any cell are self-catalyzing but not simply self-organizing as they are thermodynamically open systems relying on a continuous input of energy. Self-organization is not an alternative to natural selection, but it constrains what evolution can do and provides mechanisms such as the self-assembly of membranes which evolution then exploits.

The evolution of order in living systems and the generation of order in certain non-living systems were proposed to obey a common fundamental principle called "the Darwinian dynamic", which was formulated by first considering how microscopic order is generated in simple non-biological systems that are far from thermodynamic equilibrium. Consideration was then extended to short, replicating RNA molecules assumed to be similar to the earliest forms of life in the RNA world. It was shown that the underlying order-generating processes of self-organization in the non-biological systems and in replicating RNA are basically similar.

Cosmology

In his 1995 conference paper "Cosmology as a problem in critical phenomena", Lee Smolin said that several cosmological objects or phenomena, such as spiral galaxies, galaxy formation processes in general, early structure formation, quantum gravity and the large-scale structure of the universe, might be the result of, or have involved, a certain degree of self-organization. He argues that self-organized systems are often critical systems, with structure spreading out in space and time over every available scale, as shown for example by Per Bak and his collaborators. Therefore, because the distribution of matter in the universe is more or less scale-invariant over many orders of magnitude, ideas and strategies developed in the study of self-organized systems could be helpful in tackling certain unsolved problems in cosmology and astrophysics.

Computer science

Phenomena from mathematics and computer science such as cellular automata, random graphs, and some instances of evolutionary computation and artificial life exhibit features of self-organization. In swarm robotics, self-organization is used to produce emergent behavior. In particular, the theory of random graphs has been used as a justification for self-organization as a general principle of complex systems. In the field of multi-agent systems, understanding how to engineer systems that are capable of presenting self-organized behavior is an active research area. Optimization algorithms can be considered self-organizing because they aim to find the optimal solution to a problem. If the solution is considered as a state of the iterative system, the optimal solution is the selected, converged structure of the system. Self-organizing networks include small-world networks, self-stabilization, and scale-free networks. These emerge from bottom-up interactions, unlike top-down hierarchical networks within organizations, which are not self-organizing. Cloud computing systems have been argued to be inherently self-organising, but while they have some autonomy, they are not self-managing as they do not have the goal of reducing their own complexity.
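
The small-world property mentioned above can be demonstrated in a few lines with the networkx package (an assumption of this sketch): rewiring a small fraction of a ring lattice's local links sharply reduces average path lengths while keeping clustering high, a bottom-up route to global connectivity. The graph sizes and rewiring probability are arbitrary illustrative values.

import networkx as nx

ring_lattice = nx.watts_strogatz_graph(n=1000, k=10, p=0.0, seed=1)  # only local links
small_world = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)   # 10% of links rewired

for name, g in [("lattice", ring_lattice), ("small world", small_world)]:
    print(name,
          "average path length:", round(nx.average_shortest_path_length(g), 2),
          "clustering:", round(nx.average_clustering(g), 2))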

Cybernetics

Norbert Wiener regarded the automatic serial identification of a black box and its subsequent reproduction as self-organization in cybernetics. The importance of phase locking or the "attraction of frequencies", as he called it, is discussed in the 2nd edition of his Cybernetics: Or Control and Communication in the Animal and the Machine. K. Eric Drexler sees self-replication as a key step in nano and universal assembly. By contrast, the four concurrently connected galvanometers of W. Ross Ashby's Homeostat hunt, when perturbed, to converge on one of many possible stable states. Ashby used his state counting measure of variety to describe stable states and produced the "Good Regulator" theorem which requires internal models for self-organized endurance and stability (e.g. Nyquist stability criterion). Warren McCulloch proposed "Redundancy of Potential Command" as characteristic of the organization of the brain and human nervous system and the necessary condition for self-organization. Heinz von Foerster proposed Redundancy, R=1 − H/Hmax, where H is entropy. In essence this states that unused potential communication bandwidth is a measure of self-organization.
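
Von Foerster's measure R = 1 − H/Hmax can be read as follows: H is the Shannon entropy of the system's observed behaviour and Hmax the entropy it would have if maximally disordered, so R rises as order appears. The Python sketch below is one simple reading of that formula applied to a sequence of observed states; the example sequences are invented for illustration.

import math
from collections import Counter

def redundancy(observations):
    """Von Foerster-style redundancy R = 1 - H/Hmax over a sequence of observed states."""
    counts = Counter(observations)
    if len(counts) <= 1:
        return 1.0  # a single repeated state: no entropy, maximal redundancy
    total = len(observations)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    h_max = math.log2(len(counts))  # entropy if all observed states were equally likely
    return 1 - h / h_max

print(redundancy("abcdabcdabcdabcd"))  # uniform use of 4 states -> R = 0 (no order)
print(redundancy("aaaaaaaaaaaaabcd"))  # mostly one state -> R is about 0.5 (more order)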

In the 1970s Stafford Beer considered self-organization necessary for autonomy in persisting and living systems. He applied his viable system model to management. It consists of five parts: the monitoring of performance of the survival processes (1), their management by recursive application of regulation (2), homeostatic operational control (3) and development (4) which produce maintenance of identity (5) under environmental perturbation. Focus is prioritized by an alerting "algedonic loop" feedback: a sensitivity to both pain and pleasure produced from under-performance or over-performance relative to a standard capability.

In the 1990s Gordon Pask argued that von Foerster's H and Hmax were not independent, but interacted via countably infinite recursive concurrent spin processes which he called concepts. His strict definition of concept "a procedure to bring about a relation" permitted his theorem "Like concepts repel, unlike concepts attract" to state a general spin-based principle of self-organization. His edict, an exclusion principle, "There are No Doppelgangers" means no two concepts can be the same. After sufficient time, all concepts attract and coalesce as pink noise. The theory applies to all organizationally closed or homeostatic processes that produce enduring and coherent products which evolve, learn and adapt.

Human society

Social self-organization in international drug routes
 

The self-organizing behaviour of social animals and the self-organization of simple mathematical structures both suggest that self-organization should be expected in human society. Tell-tale signs of self-organization are usually statistical properties shared with self-organizing physical systems. Examples such as critical mass, herd behaviour, groupthink and others, abound in sociology, economics, behavioral finance and anthropology.

In social theory, the concept of self-referentiality has been introduced as a sociological application of self-organization theory by Niklas Luhmann (1984). For Luhmann the elements of a social system are self-producing communications, i.e. a communication produces further communications and hence a social system can reproduce itself as long as there is dynamic communication. For Luhmann human beings are sensors in the environment of the system. Luhmann developed an evolutionary theory of Society and its subsystems, using functional analyses and systems theory.

In economics, a market economy is sometimes said to be self-organizing. Paul Krugman has written on the role that market self-organization plays in the business cycle in his book "The Self-Organizing Economy". Friedrich Hayek coined the term catallaxy to describe a "self-organizing system of voluntary co-operation", in regard to the spontaneous order of the free market economy. Neo-classical economists hold that imposing central planning usually makes the self-organized economic system less efficient. At the other end of the spectrum, some economists consider that market failures are so significant that self-organization produces bad results and that the state should direct production and pricing. Most economists adopt an intermediate position and recommend a mixture of market economy and command economy characteristics (sometimes called a mixed economy). When applied to economics, the concept of self-organization can quickly become ideologically imbued.

In learning

Enabling others to "learn how to learn" is often taken to mean instructing them how to submit to being taught. Self-organised learning (S.O.L.) denies that "the expert knows best" or that there is ever "the one best method", insisting instead on "the construction of personally significant, relevant and viable meaning" to be tested experientially by the learner. This may be collaborative, and more rewarding personally. It is seen as a lifelong process, not limited to specific learning environments (home, school, university) or under the control of authorities such as parents and professors. It needs to be tested, and intermittently revised, through the personal experience of the learner. It need not be restricted by either consciousness or language. Fritjof Capra argued that it is poorly recognised within psychology and education. It may be related to cybernetics as it involves a negative feedback control loop, or to systems theory. It can be conducted as a learning conversation or dialogue between learners or within one person.

Traffic flow

The self-organizing behavior of drivers in traffic flow determines almost all the spatiotemporal behavior of traffic, such as traffic breakdown at a highway bottleneck, highway capacity, and the emergence of moving traffic jams. In 1996–2002 these complex self-organizing effects were explained by Boris Kerner's three-phase traffic theory.

In linguistics

Order appears spontaneously in the evolution of language as individual and population behaviour interacts with biological evolution.

In research funding

Self-organized funding allocation (SOFA) is a method of distributing funding for scientific research. In this system, each researcher is allocated an equal amount of funding and is required to anonymously allocate a fraction of their funds to the research of others. Proponents of SOFA argue that it would result in a similar distribution of funding to the present grant system, but with less overhead. In 2016, a test pilot of SOFA began in the Netherlands.
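
A toy simulation of the redistribution rule described above might look like the following Python sketch. The researcher names, the pass-on fraction, and the preference pattern are all invented for illustration; this is not the actual SOFA protocol.

def sofa_round(funds, preferences, fraction=0.5):
    """One funding round: each researcher keeps (1 - fraction) and passes the rest on."""
    new_funds = {r: amount * (1 - fraction) for r, amount in funds.items()}
    for researcher, amount in funds.items():
        new_funds[preferences[researcher]] += amount * fraction  # anonymous transfer
    return new_funds

researchers = ["A", "B", "C", "D", "E"]
funds = {r: 100.0 for r in researchers}            # equal initial allocation
preferences = {r: "C" for r in researchers}        # toy choice: everyone supports C...
preferences["C"] = "A"                             # ...except C, who supports A

for _ in range(10):
    funds = sofa_round(funds, preferences)
print({r: round(v, 1) for r, v in funds.items()})  # funding concentrates on A and C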

Criticism

Heinz Pagels, in a 1985 review of Ilya Prigogine and Isabelle Stengers's book Order Out of Chaos in Physics Today, appeals to authority:

Most scientists would agree with the critical view expressed in Problems of Biological Physics (Springer Verlag, 1981) by the biophysicist L. A. Blumenfeld, when he wrote: "The meaningful macroscopic ordering of biological structure does not arise due to the increase of certain parameters of a system above their critical values. These structures are built according to program-like complicated architectural structures, the meaningful information created during many billions of years of chemical and biological evolution being used." Life is a consequence of microscopic, not macroscopic, organization.

Of course, Blumenfeld does not answer the further question of how those program-like structures emerge in the first place. His explanation leads directly to infinite regress.

In short, they [Prigogine and Stengers] maintain that time irreversibility is not derived from a time-independent microworld, but is itself fundamental. The virtue of their idea is that it resolves what they perceive as a "clash of doctrines" about the nature of time in physics. Most physicists would agree that there is neither empirical evidence to support their view, nor is there a mathematical necessity for it. There is no "clash of doctrines." Only Prigogine and a few colleagues hold to these speculations which, in spite of their efforts, continue to live in the twilight zone of scientific credibility.

In theology, Thomas Aquinas (1225–1274) in his Summa Theologica assumes a teleological created universe in rejecting the idea that something can be a self-sufficient cause of its own organization:

Since nature works for a determinate end under the direction of a higher agent, whatever is done by nature must needs be traced back to God, as to its first cause. So also whatever is done voluntarily must also be traced back to some higher cause other than human reason or will, since these can change or fail; for all things that are changeable and capable of defect must be traced back to an immovable and self-necessary first principle, as was shown in the body of the Article.

Noosphere

From Wikipedia, the free encyclopedia

The noosphere (alternate spelling noösphere) is a philosophical concept developed and popularized by the Russian-Ukrainian Soviet biogeochemist Vladimir Vernadsky and the French philosopher and Jesuit priest Pierre Teilhard de Chardin. Vernadsky defined the noosphere as the new state of the biosphere and described it as the planetary "sphere of reason". The noosphere represents the highest stage of biospheric development, its defining factor being the development of humankind's rational activities.

The word is derived from the Greek νόος ("mind", "reason") and σφαῖρα ("sphere"), in lexical analogy to "atmosphere" and "biosphere". The concept, however, cannot be credited to a single author. The founding authors Vernadsky and de Chardin developed two related but starkly different concepts, the former grounded in the geological sciences and the latter in theology. Both conceptions of the noosphere share the common thesis that human reason and scientific thought have created, and will continue to create, the next evolutionary geological layer. This geological layer is part of the evolutionary chain. Second-generation authors, predominantly of Russian origin, have further developed the Vernadskian concept, creating the related concepts of noocenosis and noocenology.

Founding authors

The term noosphere was first used in the publications of Pierre Teilhard de Chardin in 1922, in his Cosmogenesis. Vernadsky was most likely introduced to the term by a common acquaintance, Édouard Le Roy, during a stay in Paris. Some sources claim Édouard Le Roy actually first proposed the term. Vernadsky himself wrote that he was first introduced to the concept by Le Roy in his 1927 lectures at the Collège de France, and that Le Roy had emphasized a mutual exploration of the concept with Teilhard de Chardin. According to Vernadsky's own letters, he took Le Roy's ideas on the noosphere from Le Roy's article "Les origines humaines et l'évolution de l'intelligence", part III: "La noosphère et l'hominisation", before reworking the concept within his own field, biogeochemistry. The historian Bailes concludes that Vernadsky and Teilhard de Chardin influenced each other, as Teilhard de Chardin also attended Vernadsky's lectures on biogeochemistry before creating the concept of the noosphere.

One account states that Le Roy and Teilhard were not aware of the concept of the biosphere when they formed their noosphere concept, and that it was Vernadsky who introduced them to this notion, giving their conceptualization a grounding in the natural sciences. Both Teilhard de Chardin and Vernadsky base their conceptions of the noosphere on the term 'biosphere', developed by Eduard Suess in 1875. Despite the differing backgrounds, approaches and focuses of Teilhard and Vernadsky, they have a few fundamental themes in common. Both scientists overstepped the boundaries of natural science and attempted to create all-embracing theoretical constructions founded in philosophy, social sciences and authorized interpretations of the evolutionary theory. Moreover, both thinkers were convinced of the teleological character of evolution. They also argued that human activity becomes a geological power and that the manner by which it is directed can influence the environment. There are, however, fundamental differences in the two conceptions.

Concept

In the theory of Vernadsky, the noosphere is the third in a succession of phases of development of the Earth, after the geosphere (inanimate matter) and the biosphere (biological life). Just as the emergence of life fundamentally transformed the geosphere, the emergence of human cognition fundamentally transforms the biosphere. In contrast to the conceptions of the Gaia theorists, or the promoters of cyberspace, Vernadsky's noosphere emerges at the point where humankind, through the mastery of nuclear processes, begins to create resources through the transmutation of elements. It is also currently being researched as part of the Global Consciousness Project.

Teilhard perceived a directionality in evolution along an axis of increasing Complexity/Consciousness. For Teilhard, the noosphere is the sphere of thought encircling the earth that has emerged through evolution as a consequence of this growth in complexity/consciousness. The noosphere is therefore as much part of nature as the barysphere, lithosphere, hydrosphere, atmosphere, and biosphere. As a result, Teilhard sees the "social phenomenon [as] the culmination of and not the attenuation of the biological phenomenon." These social phenomena are part of the noosphere and include, for example, legal, educational, religious, research, industrial and technological systems. In this sense, the noosphere emerges through and is constituted by the interaction of human minds. The noosphere thus grows in step with the organization of the human mass in relation to itself as it populates the earth. Teilhard argued the noosphere evolves towards ever greater personalisation, individuation and unification of its elements. He saw the Christian notion of love as being the principal driver of "noogenesis", the evolution of mind. Evolution would culminate in the Omega Point—an apex of thought/consciousness—which he identified with the eschatological return of Christ.

One of the original aspects of the noosphere concept deals with evolution. Henri Bergson, with his L'évolution créatrice (1907), was one of the first to propose that evolution is "creative" and cannot necessarily be explained solely by Darwinian natural selection. L'évolution créatrice is upheld, according to Bergson, by a constant vital force which animates life and fundamentally connects mind and body, an idea opposing the dualism of René Descartes. In 1923, C. Lloyd Morgan took this work further, elaborating on an "emergent evolution" which could explain increasing complexity (including the evolution of mind). Morgan found that many of the most interesting changes in living things have been largely discontinuous with past evolution. Therefore, these living things did not necessarily evolve through a gradual process of natural selection. Rather, he posited, the process of evolution experiences jumps in complexity (such as the emergence of a self-reflective universe, or noosphere), in a sort of qualitative punctuated equilibrium. Finally, the complexification of human cultures, particularly language, facilitated a quickening of evolution in which cultural evolution occurs more rapidly than biological evolution. Recent understanding of human ecosystems and of human impact on the biosphere has led to a link between the notion of sustainability and the "co-evolution" and harmonization of cultural and biological evolution.

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)

In info...