Thursday, September 30, 2021

Cosmological argument

From Wikipedia, the free encyclopedia

A cosmological argument, in natural theology, is an argument which claims that the existence of God can be inferred from facts concerning causation, explanation, change, motion, contingency, dependency, or finitude with respect to the universe or some totality of objects. A cosmological argument can also sometimes be referred to as an argument from universal causation, an argument from first cause, the causal argument, or prime mover argument. Whichever term is employed, there are two basic variants of the argument, each with subtle yet important distinctions: in esse (essentiality), and in fieri (becoming).

The basic premises of all of these arguments involve the concept of causation. The conclusion of these arguments is that there exists a first cause (for whichever group of things it is being argued has a cause), subsequently deemed to be God. The history of this argument goes back to Aristotle or earlier, was developed in Neoplatonism and early Christianity and later in medieval Islamic theology during the 9th to 12th centuries, and was re-introduced to medieval Christian theology in the 13th century by Thomas Aquinas. The cosmological argument is closely related to the principle of sufficient reason as addressed by Gottfried Leibniz and Samuel Clarke, itself a modern exposition of the claim that "nothing comes from nothing" attributed to Parmenides.

Contemporary defenders of cosmological arguments include William Lane Craig, Robert Koons, and Alexander Pruss.

History

Plato and Aristotle, depicted here in Raphael's The School of Athens, both developed first cause arguments.

Plato (c. 427–347 BC) and Aristotle (c. 384–322 BC) both posited first cause arguments, though each had certain notable caveats. In The Laws (Book X), Plato posited that all movement in the world and the Cosmos was "imparted motion". This required a "self-originated motion" to set it in motion and to maintain it. In Timaeus, Plato posited a "demiurge" of supreme wisdom and intelligence as the creator of the Cosmos.

Aristotle argued against the idea of a first cause, often confused with the idea of a "prime mover" or "unmoved mover" (πρῶτον κινοῦν ἀκίνητον or primus motor) in his Physics and Metaphysics. Aristotle argued in favor of the idea of several unmoved movers, one powering each celestial sphere, which he believed lived beyond the sphere of the fixed stars, and explained why motion in the universe (which he believed was eternal) had continued for an infinite period of time. Aristotle argued that the atomists' assertion of a non-eternal universe would require a first uncaused cause – in his terminology, an efficient first cause – an idea he considered a nonsensical flaw in the reasoning of the atomists.

Like Plato, Aristotle believed in an eternal cosmos with no beginning and no end (which in turn follows Parmenides' famous statement that "nothing comes from nothing"). In what he called "first philosophy" or metaphysics, Aristotle did intend a theological correspondence between the prime mover and deity (presumably Zeus); functionally, however, he provided an explanation for the apparent motion of the "fixed stars" (now understood as the daily rotation of the Earth). According to his theses, immaterial unmoved movers are eternal unchangeable beings that constantly think about thinking, but being immaterial, they are incapable of interacting with the cosmos and have no knowledge of what transpires therein. From an "aspiration or desire", the celestial spheres imitate that purely intellectual activity as best they can, by uniform circular motion. The unmoved movers inspiring the planetary spheres are no different in kind from the prime mover; they merely suffer a dependency of relation to the prime mover. Correspondingly, the motions of the planets are subordinate to the motion inspired by the prime mover in the sphere of fixed stars. Aristotle's natural theology admitted no creation or capriciousness from the immortal pantheon, but maintained a defense against dangerous charges of impiety.

Plotinus, a third-century Platonist, taught that the One transcendent absolute caused the universe to exist simply as a consequence of its existence (creatio ex deo). His disciple Proclus stated "The One is God".

Centuries later, the Islamic philosopher Avicenna (c. 980–1037) inquired into the question of being, in which he distinguished between essence (Mahiat) and existence (Wujud). He argued that the fact of existence could not be inferred from or accounted for by the essence of existing things, and that form and matter by themselves could not originate and interact with the movement of the Universe or the progressive actualization of existing things. Thus, he reasoned that existence must be due to an agent cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must coexist with its effect and be an existing thing.

Steven Duncan writes that it "was first formulated by a Greek-speaking Syriac Christian neo-Platonist, John Philoponus, who claims to find a contradiction between the Greek pagan insistence on the eternity of the world and the Aristotelian rejection of the existence of any actual infinite". Referring to the argument as the "'Kalam' cosmological argument", Duncan asserts that it "received its fullest articulation at the hands of [medieval] Muslim and Jewish exponents of Kalam ('the use of reason by believers to justify the basic metaphysical presuppositions of the faith')".

Thomas Aquinas (c. 1225–1274) adapted and enhanced the argument he found in his reading of Aristotle, Avicenna, and Maimonides to form one of the most influential versions of the cosmological argument. His conception of First Cause was the idea that the Universe must be caused by something that is itself uncaused, which he claimed is that which we call God:

The second way is from the nature of the efficient cause. In the world of sense we find there is an order of efficient causes. There is no case known (neither is it, indeed, possible) in which a thing is found to be the efficient cause of itself; for so it would be prior to itself, which is impossible. Now in efficient causes it is not possible to go on to infinity, because in all efficient causes following in order, the first is the cause of the intermediate cause, and the intermediate is the cause of the ultimate cause, whether the intermediate cause be several, or only one. Now to take away the cause is to take away the effect. Therefore, if there be no first cause among efficient causes, there will be no ultimate, nor any intermediate cause. But if in efficient causes it is possible to go on to infinity, there will be no first efficient cause, neither will there be an ultimate effect, nor any intermediate efficient causes; all of which is plainly false. Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God.

Importantly, Aquinas' Five Ways, given in the second question of his Summa Theologica, are not the entirety of Aquinas' demonstration that the Christian God exists. The Five Ways form only the beginning of Aquinas' Treatise on the Divine Nature.

Versions of the argument

Argument from contingency

In the scholastic era, Aquinas formulated the "argument from contingency", following Aristotle in claiming that there must be something to explain why the Universe exists. Since the Universe could, under different circumstances, conceivably not exist (contingency), its existence must have a cause – not merely another contingent thing, but something that exists by necessity (something that must exist in order for anything else to exist). In other words, even if the Universe has always existed, it still owes its existence to an uncaused cause; Aquinas further said: "... and this we understand to be God."

Aquinas's argument from contingency allows for the possibility of a Universe that has no beginning in time. It is a form of argument from universal causation. Aquinas observed that, in nature, there were things with contingent existences. Since it is possible for such things not to exist, there must be some time at which these things did not in fact exist. Thus, according to Aquinas, there must have been a time when nothing existed. If this is so, there would exist nothing that could bring anything into existence. Contingent beings, therefore, are insufficient to account for the existence of contingent beings: there must exist a necessary being whose non-existence is an impossibility, and from which the existence of all contingent beings is ultimately derived.

The German philosopher Gottfried Leibniz made a similar argument with his principle of sufficient reason in 1714. "There can be found no fact that is true or existent, or any true proposition," he wrote, "without there being a sufficient reason for its being so and not otherwise, although we cannot know these reasons in most cases." He formulated the cosmological argument succinctly: "Why is there something rather than nothing? The sufficient reason ... is found in a substance which ... is a necessary being bearing the reason for its existence within itself."

Leibniz's argument from contingency is one of the most popular cosmological arguments in philosophy of religion. It attempts to prove the existence of a necessary being and infer that this being is God. Alexander Pruss formulates the argument as follows:

  1. Every contingent fact has an explanation.
  2. There is a contingent fact that includes all other contingent facts.
  3. Therefore, there is an explanation of this fact.
  4. This explanation must involve a necessary being.
  5. This necessary being is God.

Premise 1 is a form of the principle of sufficient reason (PSR), stating that all contingently true sentences (i.e. contingent facts) have a sufficient explanation as to why they are the case. Premise 2 refers to what is known as the Big Conjunctive Contingent Fact (abbreviated BCCF), and the BCCF is generally taken to be the logical conjunction of all contingent facts. It can be thought of as the sum total of all contingent reality. Premise 3 then concludes that the BCCF has an explanation, as every contingency does (in virtue of the PSR). It follows that this explanation is non-contingent (i.e. necessary); no contingency can explain the BCCF, because every contingent fact is a part of the BCCF. Statement 5, which is either seen as a premise or a conclusion, infers that the necessary being which explains the totality of contingent facts is God. Several philosophers of religion, such as Joshua Rasmussen and T. Ryan Byerly, have argued for the inference from (4) to (5).
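The skeleton of the argument can be written compactly in quantified notation (our illustrative symbolism, not Pruss's own), with C(f) for "f is contingent" and E(x, f) for "x explains f":

  \[
  \begin{aligned}
  &(1)\ \forall f\,\bigl(C(f) \rightarrow \exists x\, E(x,f)\bigr) && \text{(PSR)}\\
  &(2)\ C(\mathrm{BCCF}) && \text{(the conjunction of all contingent facts is itself contingent)}\\
  &(3)\ \exists x\, E(x,\mathrm{BCCF}) && \text{(from 1 and 2)}\\
  &(4)\ \forall x\,\bigl(E(x,\mathrm{BCCF}) \rightarrow \neg C(x)\bigr) && \text{(nothing contingent can explain the BCCF)}\\
  &(5)\ \exists x\,\bigl(\neg C(x) \wedge E(x,\mathrm{BCCF})\bigr) && \text{(from 3 and 4: a necessary explainer)}
  \end{aligned}
  \]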

In esse and in fieri

The difference between the arguments from causation in fieri and in esse is a fairly important one. In fieri is generally translated as "becoming", while in esse is generally translated as "in essence". In fieri, the process of becoming, is similar to building a house. Once it is built, the builder walks away, and it stands of its own accord; compare the watchmaker analogy. (It may require occasional maintenance, but that is beyond the scope of the first cause argument.)

In esse (essence) is more akin to the light from a candle or the liquid in a vessel. George Hayward Joyce, SJ, explained that, "where the light of the candle is dependent on the candle's continued existence, not only does a candle produce light in a room in the first instance, but its continued presence is necessary if the illumination is to continue. If it is removed, the light ceases. Again, a liquid receives its shape from the vessel in which it is contained; but were the pressure of the containing sides withdrawn, it would not retain its form for an instant." This form of the argument is far more difficult to separate from a purely first cause argument than is the example of the house's maintenance above, because here the First Cause is insufficient without the candle's or vessel's continued existence.

The philosopher Robert Koons has stated a new variant on the cosmological argument. He says that to deny causation is to deny all empirical ideas – for example, if we know our own hand, we know it because of a chain of causes: light is reflected into the eyes, stimulating the retina and sending a message through the optic nerve to the brain. He summarised the purpose of the argument as "that if you don't buy into theistic metaphysics, you're undermining empirical science. The two grew up together historically and are culturally and philosophically inter-dependent ... If you say I just don't buy this causality principle – that's going to be a big big problem for empirical science." This in fieri version of the argument therefore does not intend to prove God, but only to disprove objections involving science, and the idea that contemporary knowledge disproves the cosmological argument.

Kalām cosmological argument

William Lane Craig, who was responsible for re-popularizing this argument in Western philosophy, presents it in the following general form:

  1. Whatever begins to exist has a cause of its existence.
  2. The universe began to exist.
  3. Therefore, the universe has a cause of its existence.

Craig explains that, by the nature of the event (the Universe coming into existence), attributes unique to (the concept of) God must also be attributed to the cause of this event, including but not limited to: enormous power (if not omnipotence), being the creator of the Heavens and the Earth (as God is according to the Christian understanding of God), being eternal, and being absolutely self-sufficient. Since these attributes are unique to God, anything with these attributes must be God. Something does have these attributes, namely the cause; hence, the cause is God, and since the cause exists, God exists.

Craig defends the second premise, that the Universe had a beginning, starting with Al-Ghazali's proof that an actual infinite is impossible. If the universe never had a beginning, Craig claims, then there would be an actual infinite, namely an infinite number of cause-and-effect events. Hence, the Universe had a beginning.

Metaphysical argument for the existence of God

Duns Scotus, the influential medieval Christian theologian, created a metaphysical argument for the existence of God. Though it was inspired by Aquinas' argument from motion, he, like other philosophers and theologians, believed that his statement for God's existence could be considered separately from Aquinas'. His explanation for God's existence is long, and can be summarised as follows:

  1. Something can be produced.
  2. It is produced by itself, by nothing, or by another.
  3. Not by nothing, because nothing causes nothing.
  4. Not by itself, because an effect never causes itself.
  5. Therefore, by another; call it A.
  6. If A is first, then we have reached the conclusion.
  7. If A is not first, then we return to 2).
  8. From 3) and 4), A is produced by another, B. The ascending series is either infinite or finite.
  9. An infinite series is not possible.
  10. Therefore, God exists.

Scotus deals immediately with two objections he can see: first, that there cannot be a first, and second, that the argument falls apart when 1) is questioned. He states that infinite regress is impossible, because it provokes unanswerable questions, like, in modern English, "What is infinity minus infinity?" The second he states can be answered if the question is rephrased using modal logic, meaning that the first statement is instead "It is possible that something can be produced."
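In modal notation (an illustrative gloss, ours rather than Scotus's own symbolism), the revision weakens the categorical opening premise to a possibility claim that is much harder to doubt:

  \[
  \exists x\,\mathrm{Produced}(x)
  \quad\text{is replaced by}\quad
  \Diamond\,\exists x\,\mathrm{Produced}(x),
  \]

and the remainder of the derivation is then run in the mode of possibility.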

Cosmological argument and infinite regress

Depending on its formulation, the cosmological argument is an example of a positive infinite regress argument. An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor. An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress. A positive infinite regress argument employs the regress in question to argue in support of a theory by showing that its alternative involves a vicious regress. The regress relevant for the cosmological argument is the regress of causes: an event occurred because it was caused by another event that occurred before it, which was itself caused by a previous event, and so on. For an infinite regress argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious. Once the viciousness of the regress of causes is established, the cosmological argument can proceed to its positive conclusion by holding that it is necessary to posit a first cause in order to avoid it.

A regress can be vicious due to metaphysical impossibility, implausibility or explanatory failure. It is sometimes held that the regress of causes is vicious because it is metaphysically impossible, i.e. that it involves an outright contradiction. But it is difficult to see where this contradiction lies unless an additional assumption is accepted: that actual infinity is impossible. But this position is opposed to infinity in general, not just specifically to the regress of causes. A more promising view is that the regress of causes is to be rejected because it is implausible. Such an argument can be based on empirical observation, e.g. that, to the best of our knowledge, our universe had a beginning in the form of the Big Bang. But it can also be based on more abstract principles, like Ockham's razor, which posits that we should avoid ontological extravagance by not multiplying entities without necessity. A third option is to see the regress of causes as vicious due to explanatory failure, i.e. that it does not solve the problem it was formulated to solve or that it assumes already in disguised form what it was supposed to explain. According to this position, we seek to explain one event in the present by citing an earlier event that caused it. But this explanation is incomplete unless we can come to understand why this earlier event occurred, which is itself explained by its own cause and so on. At each step, the occurrence of an event has to be assumed. So it fails to explain why anything at all occurs, why there is a chain of causes to begin with.

Objections and counterarguments

What caused the First Cause?

One objection to the argument is that it leaves open the question of why the First Cause is unique in that it does not require any causes. Proponents argue that the First Cause is exempt from having a cause, while opponents argue that this is special pleading or otherwise untrue. Critics often press that arguing for the First Cause's exemption raises the question of why the First Cause is indeed exempt, whereas defenders maintain that this question has been answered by the various arguments, emphasizing that none of the argument's major forms rests on the premise that everything has a cause.

William Lane Craig, who popularised and is notable for defending the Kalam cosmological argument, argues that the infinite is impossible, whichever perspective the viewer takes, and so there must always have been one unmoved thing to begin the universe. He uses Hilbert's paradox of the Grand Hotel and the question "What is infinity minus infinity?" to illustrate the idea that the infinite is metaphysically, mathematically, and even conceptually impossible. Other reasons include the fact that it is impossible to count down from infinity, and that, had the universe existed for an infinite amount of time, every possible event, including the final end of the universe, would already have occurred. He therefore states his argument in three points: first, everything that begins to exist has a cause of its existence; second, the universe began to exist; therefore, third, the universe has a cause of its existence. Craig argues in the Blackwell Companion to Natural Theology that there cannot be an infinite regress of causes and thus there must be a first uncaused cause, even if one posits a plurality of causes of the universe. He argues that Occam's Razor may be employed to remove unneeded further causes of the universe, leaving a single uncaused cause.

Secondly, it is argued that the premise of causality has been arrived at via a posteriori (inductive) reasoning, which is dependent on experience. David Hume highlighted this problem of induction and argued that causal relations were not true a priori. However, whether inductive or deductive reasoning is more valuable remains a matter of debate, with the general conclusion being that neither is prominent. Opponents of the argument tend to argue that it is unwise to draw conclusions from an extrapolation of causality beyond experience. Andrew Loke replies that, according to the Kalam cosmological argument, only things which begin to exist require a cause. On the other hand, something that is without beginning has always existed and therefore does not require a cause. The Kalam and the Thomistic cosmological arguments posit that there cannot be an actual infinite regress of causes, therefore there must be an uncaused First Cause that is beginningless and does not require a cause.

Not evidence for a theistic God

According to this objection the basic cosmological argument merely establishes that a First Cause exists, not that it has the attributes of a theistic god, such as omniscience, omnipotence, and omnibenevolence. This is why the argument is often expanded to show that at least some of these attributes are necessarily true, for instance in the modern Kalam argument given above.

Existence of causal loops

A causal loop is a form of predestination paradox arising where traveling backwards in time is deemed a possibility. A sufficiently powerful entity in such a world would have the capacity to travel backwards in time to a point before its own existence, and to then create itself, thereby initiating everything which follows from it.

The usual reason given to refute the possibility of a causal loop is that it requires that the loop as a whole be its own cause. Richard Hanley argues that causal loops are not logically, physically, or epistemically impossible: "[In timed systems,] the only possibly objectionable feature that all causal loops share is that coincidence is required to explain them." However, Andrew Loke argues that a causal loop of the type that is supposed to avoid a First Cause suffers from the problem of vicious circularity and thus would not work.

Existence of infinite causal chains

David Hume and later Paul Edwards have invoked a similar principle in their criticisms of the cosmological argument. William L. Rowe has called this the Hume-Edwards principle:

If the existence of every member of a set is explained, the existence of that set is thereby explained.

Nevertheless, David White argues that the notion of an infinite causal regress providing a proper explanation is fallacious. Furthermore, in Hume's Dialogues Concerning Natural Religion, the character Demea states that even if the succession of causes is infinite, the whole chain still requires a cause. To explain this, suppose there exists a causal chain of infinite contingent beings. If one asks the question, "Why are there any contingent beings at all?", it does not help to be told that "There are contingent beings because other contingent beings caused them." That answer would just presuppose additional contingent beings. An adequate explanation of why some contingent beings exist would invoke a different sort of being, a necessary being that is not contingent. A response might suppose that each individual member of the chain is contingent but the infinite chain as a whole is not, or that the whole infinite causal chain is its own cause.

Severinsen argues that there is an "infinite" and complex causal structure. White tried to introduce an argument "without appeal to the principle of sufficient reason and without denying the possibility of an infinite causal regress". A number of other arguments have been offered to demonstrate that an actual infinite regress cannot exist, viz. the argument for the impossibility of concrete actual infinities, the argument for the impossibility of traversing an actual infinite, the argument from the lack of capacity to begin to exist, and various arguments from paradoxes.

Big Bang cosmology

Some cosmologists and physicists argue that a challenge to the cosmological argument is the nature of time: "One finds that time just disappears from the Wheeler–DeWitt equation" (Carlo Rovelli). The Big Bang theory states that it is the point at which all dimensions came into existence, the start of both space and time. Then, the question "What was there before the Universe?" makes no sense; the concept of "before" becomes meaningless when considering a situation without time. This has been put forward by J. Richard Gott III, James E. Gunn, David N. Schramm, and Beatrice Tinsley, who said that asking what occurred before the Big Bang is like asking what is north of the North Pole. However, some cosmologists and physicists do attempt to investigate causes for the Big Bang, using such scenarios as the collision of membranes.

Philosopher Edward Feser argues that most of the classical philosophers' cosmological arguments for the existence of God do not depend on the Big Bang or whether the universe had a beginning. The question is not about what got things started or how long they have been going, but rather what keeps them going.

Complex system

From Wikipedia, the free encyclopedia

A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grids, transportation or communication systems, social and economic organizations (like cities), an ecosystem, a living cell, and ultimately the entire universe.

Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of an independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and the links their interactions.

The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.

As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the study of self-organization and critical phenomena from physics, that of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.

Key concepts

Systems

Open systems have input and output flows, representing exchanges of matter, energy or information with their surroundings.

Complex systems are chiefly concerned with the behaviors and properties of systems. A system, broadly defined, is a set of entities that, through their interactions, relationships, or dependencies, form a unified whole. It is always defined in terms of its boundary, which determines the entities that are or are not part of the system. Entities lying outside the system then become part of the system's environment.

A system can exhibit properties that produce behaviors which are distinct from the properties and behaviors of its parts; these system-wide or global properties and behaviors are characteristics of how the system interacts with or appears to its environment, or of how its parts behave (say, in response to external stimuli) by virtue of being within the system. The notion of behavior implies that the study of systems is also concerned with processes that take place over time (or, in mathematics, some other phase space parameterization). Because of their broad, interdisciplinary applicability, systems concepts play a central role in complex systems.

As a field of study, complex systems is a subset of systems theory. General systems theory focuses similarly on the collective behaviors of interacting entities, but it studies a much broader class of systems, including non-complex systems where traditional reductionist approaches may remain viable. Indeed, systems theory seeks to explore and describe all classes of systems, and the invention of categories that are useful to researchers across widely varying fields is one of systems theory's main objectives.

As it relates to complex systems, systems theory contributes an emphasis on the way relationships and dependencies between a system's parts can determine system-wide properties. It also contributes to the interdisciplinary perspective of the study of complex systems: the notion that shared properties link systems across disciplines, justifying the pursuit of modeling approaches applicable to complex systems wherever they appear. Specific concepts important to complex systems, such as emergence, feedback loops, and adaptation, also originate in systems theory.

Complexity

"Systems exhibit complexity" means that their behaviors cannot be easily inferred from their properties. Any modeling approach that ignores such difficulties or characterizes them as noise, then, will necessarily produce models that are neither accurate nor useful. As yet no fully general theory of complex systems has emerged for addressing these problems, so researchers must solve them in domain-specific contexts. Researchers in complex systems address these problems by viewing the chief task of modeling to be capturing, rather than reducing, the complexity of their respective systems of interest.

While no generally accepted exact definition of complexity exists yet, there are many archetypal examples of complexity. Systems can be complex if, for instance, they have chaotic behavior (behavior that exhibits extreme sensitivity to initial conditions, among other properties), or if they have emergent properties (properties that are not apparent from their components in isolation but which result from the relationships and dependencies they form when placed together in a system), or if they are computationally intractable to model (if they depend on a number of parameters that grows too rapidly with respect to the size of the system).

Networks

The interacting components of a complex system form a network, which is a collection of discrete objects and relationships between them, usually depicted as a graph of vertices connected by edges. Networks can describe the relationships between individuals within an organization, between logic gates in a circuit, between genes in gene regulatory networks, or between any other set of related entities.

Networks often describe the sources of complexity in complex systems. Studying complex systems as networks, therefore, enables many useful applications of graph theory and network science. Many complex systems, for example, are also complex networks, which have properties such as phase transitions and power-law degree distributions that readily lend themselves to emergent or chaotic behavior. The fact that the number of edges in a complete graph grows quadratically in the number of vertices sheds additional light on the source of complexity in large networks: as a network grows, the number of relationships between entities quickly dwarfs the number of entities in the network.
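To make the quadratic growth concrete, here is a minimal sketch in Python (the function name is our own, for illustration): it counts the possible pairwise relationships in a complete graph of n components.

  def possible_interactions(n_components):
      # A complete graph on n vertices has n * (n - 1) / 2 edges, i.e. O(n^2).
      return n_components * (n_components - 1) // 2

  for n in (10, 100, 1000):
      print(n, possible_interactions(n))
  # Prints 10 45, 100 4950, 1000 499500: as the network grows, the number of
  # possible relationships quickly dwarfs the number of entities.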

Nonlinearity

A sample solution in the Lorenz attractor when ρ = 28, σ = 10, and β = 8/3

Complex systems often have nonlinear behavior, meaning they may respond in different ways to the same input depending on their state or context. In mathematics and physics, nonlinearity describes systems in which a change in the size of the input does not produce a proportional change in the size of the output. For a given change in input, such systems may yield significantly greater than or less than proportional changes in output, or even no output at all, depending on the current state of the system or its parameter values.

Of particular interest to complex systems are nonlinear dynamical systems, which are systems of differential equations that have one or more nonlinear terms. Some nonlinear dynamical systems, such as the Lorenz system, can produce a mathematical phenomenon known as chaos. Chaos, as it applies to complex systems, refers to the sensitive dependence on initial conditions, or "butterfly effect", that a complex system can exhibit. In such a system, small changes to initial conditions can lead to dramatically different outcomes. Chaotic behavior can, therefore, be extremely hard to model numerically, because small rounding errors at an intermediate stage of computation can cause the model to generate completely inaccurate output. Furthermore, if a complex system returns to a state similar to one it held previously, it may behave completely differently in response to the same stimuli, so chaos also poses challenges for extrapolating from experience.
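The following minimal Python sketch (our own toy integration, using the figure's parameters ρ = 28, σ = 10, β = 8/3 and a simple Euler step) illustrates sensitive dependence: two trajectories that initially differ by one part in a billion end up in entirely different regions of the attractor.

  def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
      # One Euler step of the Lorenz equations; dt is kept deliberately small.
      return (x + sigma * (y - x) * dt,
              y + (x * (rho - z) - y) * dt,
              z + (x * y - beta * z) * dt)

  a = (1.0, 1.0, 1.0)
  b = (1.0 + 1e-9, 1.0, 1.0)      # perturbed initial condition
  for _ in range(50000):          # integrate out to t = 50
      a = lorenz_step(*a)
      b = lorenz_step(*b)
  print(a)
  print(b)                        # the two states no longer resemble each other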

Emergence

Gosper's Glider Gun creating "gliders" in the cellular automaton Conway's Game of Life

Another common feature of complex systems is the presence of emergent behaviors and properties: these are traits of a system that are not apparent from its components in isolation but which result from the interactions, dependencies, or relationships they form when placed together in a system. Emergence broadly describes the appearance of such behaviors and properties, and has applications to systems studied in both the social and physical sciences. While emergence is often used to refer only to the appearance of unplanned organized behavior in a complex system, emergence can also refer to the breakdown of an organization; it describes any phenomena which are difficult or even impossible to predict from the smaller entities that make up the system.

One example of a complex system whose emergent properties have been studied extensively is cellular automata. In a cellular automaton, a grid of cells, each having one of the finitely many states, evolves according to a simple set of rules. These rules guide the "interactions" of each cell with its neighbors. Although the rules are only defined locally, they have been shown capable of producing globally interesting behavior, for example in Conway's Game of Life.
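A minimal sketch of such an automaton in Python (a fixed grid with dead borders is assumed for simplicity): the update rule consults only each cell's immediate neighbors, yet period-2 "blinkers", and gliders on larger grids, emerge globally.

  def life_step(grid):
      # One generation of Conway's Game of Life on a fixed grid (dead borders).
      rows, cols = len(grid), len(grid[0])
      def live_neighbors(r, c):
          return sum(grid[i][j]
                     for i in range(max(0, r - 1), min(rows, r + 2))
                     for j in range(max(0, c - 1), min(cols, c + 2))
                     if (i, j) != (r, c))
      # A cell is alive next step if it has 3 live neighbors, or if it is
      # alive now and has exactly 2.
      return [[1 if live_neighbors(r, c) == 3
                    or (grid[r][c] and live_neighbors(r, c) == 2) else 0
               for c in range(cols)] for r in range(rows)]

  # A horizontal "blinker" becomes vertical after one step (period 2):
  print(life_step([[0, 0, 0],
                   [1, 1, 1],
                   [0, 0, 0]]))   # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]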

Spontaneous order and self-organization

When emergence describes the appearance of unplanned order, it is spontaneous order (in the social sciences) or self-organization (in physical sciences). Spontaneous order can be seen in herd behavior, whereby a group of individuals coordinates their actions without centralized planning. Self-organization can be seen in the global symmetry of certain crystals, for instance the apparent radial symmetry of snowflakes, which arises from purely local attractive and repulsive forces both between water molecules and their surrounding environment.

Adaptation

Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the stock market, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities.

Features

Complex systems may have the following features:

Cascading failures
Due to the strong coupling between components in complex systems, a failure in one or more components can lead to cascading failures which may have catastrophic consequences on the functioning of the system. Localized attack may lead to cascading failures and abrupt collapse in spatial networks.
Complex systems may be open
Complex systems are usually open systems – that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium; but despite this flux, there may be pattern stability (see synergetics).
Complex systems may exhibit critical transitions
Figure (Lever et al. 2020): alternative stable states and the direction of critical slowing down prior to a critical transition. Panels (a) show stability landscapes at different conditions, panels (b) the rates of change akin to the slope of those landscapes, and panels (c) a recovery from a perturbation towards the system's future state (c.I) and in another direction (c.II).
Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak.
Complex systems may have a memory
Recovery from a critical transition may require more than a simple return to the conditions at which a transition occurred, a phenomenon called hysteresis. The history of a complex system may thus be important. Because complex systems are dynamical systems they change over time, and prior states may have an influence on present states. Interacting systems may have complex hysteresis of many transitions. An example of hysteresis has been observed in urban traffic. 
Complex systems may be nested
The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells, all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances as well as the potential for large-scale systemic regime shifts.
Dynamic network of multiplicity
As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks which have many local interactions and a smaller number of inter-area connections are often employed. Natural complex systems often exhibit such topologies. In the human cortex for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions.
May produce emergent phenomena
Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, the termites in a mound have physiology, biochemistry and biological development that are at one level of analysis, but their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level.
Relationships are non-linear
In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to the cause. See nonlinearity.
Relationships contain feedback loops
Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered.

History

A perspective on the development of complexity science

Although humans have arguably been studying complex systems for thousands of years, the modern scientific study of complex systems is relatively young in comparison to established fields of science such as physics and chemistry. The history of the scientific study of these systems follows several different research trends.

In the area of mathematics, arguably the largest contribution to the study of complex systems was the discovery of chaos in deterministic systems, a feature of certain dynamical systems that is strongly related to nonlinearity. The study of neural networks was also integral in advancing the mathematics needed to study complex systems.

The notion of self-organizing systems is tied with work in nonequilibrium thermodynamics, including that pioneered by chemist and Nobel laureate Ilya Prigogine in his study of dissipative structures. Even older is the work of Douglas Hartree and Vladimir Fock on the quantum chemistry equations and later calculations of the structure of molecules, which can be regarded as one of the earliest examples of emergence and emergent wholes in science.

One complex system containing humans is the classical political economy of the Scottish Enlightenment, later developed by the Austrian school of economics, which argues that order in market systems is spontaneous (or emergent) in that it is the result of human action, but not the execution of any human design.

Building on this, the Austrian school developed from the 19th to the early 20th century the economic calculation problem, along with the concept of dispersed knowledge, which were to fuel debates against the then-dominant Keynesian economics. This debate would notably lead economists, politicians, and other parties to explore the question of computational complexity.

A pioneer in the field, and inspired by Karl Popper's and Warren Weaver's works, Nobel prize economist and philosopher Friedrich Hayek dedicated much of his work, from early to the late 20th century, to the study of complex phenomena, not constraining his work to human economies but venturing into other fields such as psychology, biology and cybernetics. Cybernetician Gregory Bateson played a key role in establishing the connection between anthropology and systems theory; he recognized that the interactive parts of cultures function much like ecosystems.

While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems.

Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research with the application of solutions originated from the physics epistemology has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely “econophysics,” which is broadly defined as a cross-discipline that applies statistical physics methodologies which are mostly based on the complex systems theory and the chaos theory for economics analysis.

Applications

Complexity in practice

The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.

Complexity management

As projects and acquisitions become increasingly complex, companies and governments are challenged to find effective ways to manage mega-acquisitions such as the Army Future Combat Systems. Acquisitions such as the FCS rely on a web of interrelated parts which interact unpredictably. As acquisitions become more network-centric and complex, businesses will be forced to find ways to manage complexity while governments will be challenged to provide effective governance to ensure flexibility and resiliency.

Complexity economics

Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann. Based on the ECI, Hausmann, Hidalgo and their team of The Observatory of Economic Complexity have produced GDP forecasts for the year 2020.

Complexity and education

Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".

Complexity and modeling

One of Friedrich Hayek's main contributions to early complexity theory is his distinction between the human capacity to predict the behavior of simple systems and the capacity to predict the behavior of complex systems through modeling. He believed that economics and the sciences of complex phenomena in general, which in his view included biology, psychology, and so on, could not be modeled after the sciences that deal with essentially simple phenomena like physics. Hayek would notably explain that complex phenomena, through modeling, can only allow pattern predictions, compared with the precise predictions that can be made out of non-complex phenomena.

Complexity and chaos theory

Complexity theory is rooted in chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy. Ilya Prigogine argued that complexity is non-deterministic and gives no way whatsoever to precisely predict the future.

The emergence of complexity theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos".

A plot of the Lorenz attractor.

When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, in which it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions.

Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.

Complexity and network science

A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers), and the resilience of the Internet to failures has been studied using percolation theory, a form of complex systems analysis. The failure and recovery of these networks is an open area of research. Other examples of complex networks include social networks, financial institution interdependencies, traffic systems, airline networks, biological networks, and climate networks. Finally, entire networks often interact in a complex manner; if an individual complex system can be represented as a network, then interacting complex systems can be modeled as networks of networks with dynamic properties.

One of the main reasons for the high vulnerability of a network is its central control, i.e., a node which is disconnected from the cluster is usually regarded as failed. A percolation approach to generating and studying decentralized systems is to use reinforced nodes that have their own support and redundancy links. Network science has been found useful to better understand the complexity of earth systems.
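As a rough illustration of the percolation approach, this pure-Python sketch (a toy Erdős–Rényi graph with arbitrary parameters of our choosing) removes nodes at random and tracks the largest connected cluster, which persists under moderate failure and then collapses near a critical fraction:

  import random

  def random_graph(n, p, seed=0):
      # Erdos-Renyi graph as an adjacency dict: each pair is linked with probability p.
      rng = random.Random(seed)
      adj = {i: set() for i in range(n)}
      for i in range(n):
          for j in range(i + 1, n):
              if rng.random() < p:
                  adj[i].add(j)
                  adj[j].add(i)
      return adj

  def largest_cluster(adj, alive):
      # Size of the biggest connected component among surviving nodes (DFS).
      seen, best = set(), 0
      for start in alive:
          if start in seen:
              continue
          stack, size = [start], 0
          seen.add(start)
          while stack:
              u = stack.pop()
              size += 1
              for v in adj[u]:
                  if v in alive and v not in seen:
                      seen.add(v)
                      stack.append(v)
          best = max(best, size)
      return best

  adj = random_graph(200, 0.02)        # about 4 links per node on average
  order = list(adj)
  random.Random(1).shuffle(order)      # random failure order
  for frac in (0.0, 0.3, 0.6, 0.9):
      alive = set(order[int(200 * frac):])
      print(frac, largest_cluster(adj, alive))
  # The giant cluster survives moderate random failure but fragments near a
  # critical removal fraction: the percolation transition.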

Self-organization

From Wikipedia, the free encyclopedia
 
Self-organization in micron-sized Nb3O7(OH) cubes during a hydrothermal treatment at 200 °C. Initially amorphous cubes gradually transform into ordered 3D meshes of crystalline nanowires as summarized in the model below.

Self-organization, also called (in the social sciences) spontaneous order, is a process where some form of overall order arises from local interactions between parts of an initially disordered system. The process can be spontaneous when sufficient energy is available, not needing control by any external agent. It is often triggered by seemingly random fluctuations, amplified by positive feedback. The resulting organization is wholly decentralized, distributed over all the components of the system. As such, the organization is typically robust and able to survive or self-repair substantial perturbation. Chaos theory discusses self-organization in terms of islands of predictability in a sea of chaotic unpredictability.

Self-organization occurs in many physical, chemical, biological, robotic, and cognitive systems. Examples of self-organization include crystallization, thermal convection of fluids, chemical oscillation, animal swarming, neural circuits, and black markets.

Overview

Self-organization is realized in the physics of non-equilibrium processes, and in chemical reactions, where it is often described as self-assembly. The concept has proven useful in biology, from the molecular to the ecosystem level. Cited examples of self-organizing behaviour also appear in the literature of many other disciplines, both in the natural sciences and in the social sciences such as economics or anthropology. Self-organization has also been observed in mathematical systems such as cellular automata. Self-organization is an example of the related concept of emergence.

Self-organization relies on four basic ingredients:

  1. strong dynamical non-linearity, often though not necessarily involving positive and negative feedback
  2. balance of exploitation and exploration
  3. multiple interactions
  4. availability of energy (to overcome natural tendency toward entropy, or loss of free energy)

Principles

The cybernetician William Ross Ashby formulated the original principle of self-organization in 1947. It states that any deterministic dynamic system automatically evolves towards a state of equilibrium that can be described in terms of an attractor in a basin of surrounding states. Once there, the further evolution of the system is constrained to remain in the attractor. This constraint implies a form of mutual dependency or coordination between its constituent components or subsystems. In Ashby's terms, each subsystem has adapted to the environment formed by all other subsystems.
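A minimal illustration in Python (the map is our arbitrary example, not Ashby's): iterating the deterministic map x → cos(x) from several different starting states ends at the same fixed-point attractor.

  import math

  # Iterate the deterministic map x -> cos(x) from several starting states.
  for x0 in (-2.0, 0.1, 1.5):
      x = x0
      for _ in range(100):
          x = math.cos(x)
      print(x0, "->", round(x, 6))
  # Every run settles near 0.739085: once inside the attractor's basin, the
  # system's further evolution is constrained to remain in the attractor.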

The cybernetician Heinz von Foerster formulated the principle of "order from noise" in 1960. It notes that self-organization is facilitated by random perturbations ("noise") that let the system explore a variety of states in its state space. This increases the chance that the system will arrive into the basin of a "strong" or "deep" attractor, from which it then quickly enters the attractor itself. The biophysicist Henri Atlan developed this concept by proposing the principle of "complexity from noise" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fumée. The physicist and chemist Ilya Prigogine formulated a similar principle as "order through fluctuations" or "order out of chaos". It is applied in the method of simulated annealing for problem solving and machine learning.
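A minimal simulated-annealing sketch in Python (the energy landscape and cooling schedule are our illustrative choices): early high-"temperature" noise lets the search escape shallow local minima, and gradual cooling then settles it into a deep one, mirroring "order from noise".

  import math
  import random

  def energy(x):
      # Toy landscape: a deep global minimum near x = -0.5 plus shallower local minima.
      return 0.1 * x * x + math.sin(3 * x)

  rng = random.Random(0)
  x, temperature = 4.0, 2.0           # start far from the global minimum
  while temperature > 1e-3:
      candidate = x + rng.gauss(0, 0.5)
      delta = energy(candidate) - energy(x)
      # Always accept improvements; accept worsenings with Boltzmann probability.
      if delta < 0 or rng.random() < math.exp(-delta / temperature):
          x = candidate
      temperature *= 0.999            # slow cooling
  print(round(x, 3), round(energy(x), 3))   # typically ends near x = -0.5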

History

The idea that the dynamics of a system can lead to an increase in its organization has a long history. The ancient atomists such as Democritus and Lucretius believed that a designing intelligence is unnecessary to create order in nature, arguing that given enough time and space and matter, order emerges by itself.

The philosopher René Descartes presents self-organization hypothetically in the fifth part of his 1637 Discourse on Method. He elaborated on the idea in his unpublished work The World.

Immanuel Kant used the term "self-organizing" in his 1790 Critique of Judgment, where he argued that teleology is a meaningful concept only if there exists such an entity whose parts or "organs" are simultaneously ends and means. Such a system of organs must be able to behave as if it has a mind of its own, that is, it is capable of governing itself.

In such a natural product as this every part is thought as owing its presence to the agency of all the remaining parts, and also as existing for the sake of the others and of the whole, that is as an instrument, or organ... The part must be an organ producing the other parts—each, consequently, reciprocally producing the others... Only under these conditions and upon these terms can such a product be an organized and self-organized being, and, as such, be called a physical end.

Sadi Carnot (1796–1832) and Rudolf Clausius (1822–1888) discovered the second law of thermodynamics in the 19th century. It states that total entropy, sometimes understood as disorder, will always increase over time in an isolated system. This means that a system cannot spontaneously increase its order without an external relationship that decreases order elsewhere in the system (e.g. through consuming the low-entropy energy of a battery and diffusing high-entropy heat).
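Stated symbolically (a standard textbook formulation, supplied here for illustration): for an isolated system the total entropy never decreases, so a subsystem can become more ordered only by exporting at least as much entropy to its surroundings:

  \[
  \Delta S_{\text{total}} = \Delta S_{\text{subsystem}} + \Delta S_{\text{surroundings}} \ge 0
  \quad\Longrightarrow\quad
  \Delta S_{\text{subsystem}} < 0 \text{ only if } \Delta S_{\text{surroundings}} \ge -\Delta S_{\text{subsystem}}.
  \]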

18th-century thinkers had sought to understand the "universal laws of form" to explain the observed forms of living organisms. This idea became associated with Lamarckism and fell into disrepute until the early 20th century, when D'Arcy Wentworth Thompson (1860–1948) attempted to revive it.

The psychiatrist and engineer W. Ross Ashby introduced the term "self-organizing" to contemporary science in 1947. It was taken up by the cyberneticians Heinz von Foerster, Gordon Pask, and Stafford Beer. Von Foerster organized a conference on "The Principles of Self-Organization" at the University of Illinois' Allerton Park in June 1960, which led to a series of conferences on self-organizing systems. Norbert Wiener took up the idea in the second edition of his Cybernetics: or Control and Communication in the Animal and the Machine (1961).

Self-organization was associated with general systems theory in the 1960s, but it did not become commonplace in the scientific literature until physicists such as Hermann Haken and complex-systems researchers adopted it across a range of fields: in cosmology through Erich Jantsch, in chemistry through the study of dissipative systems, and in biology and sociology through autopoiesis. The concept then spread into systems thinking in the 1980s (the Santa Fe Institute) and the 1990s (complex adaptive systems), and it continues today in work on disruptive emerging technologies informed by rhizomatic network theory.

Around 2008–2009, the concept of guided self-organization started to take shape. This approach aims to regulate self-organization for specific purposes, so that a dynamical system may reach specific attractors or outcomes. The regulation constrains a self-organizing process within a complex system by restricting local interactions between the system components, rather than following an explicit control mechanism or a global design blueprint. The desired outcomes, such as increases in the resultant internal structure and/or functionality, are achieved by combining task-independent global objectives with task-dependent constraints on local interactions.

By field

Convection cells in a gravity field

Physics

The many self-organizing phenomena in physics include phase transitions and spontaneous symmetry breaking such as spontaneous magnetization and crystal growth in classical physics, and the laser, superconductivity and Bose–Einstein condensation in quantum physics. Self-organization is also found in self-organized criticality in dynamical systems, in tribology, in spin foam systems and loop quantum gravity, in river basins and deltas, in dendritic solidification (snowflakes), in capillary imbibition, and in turbulent structure.

Chemistry

The DNA structure shown schematically at left self-assembles into the structure at right.

Self-organization in chemistry includes molecular self-assembly, reaction–diffusion systems and oscillating reactions, autocatalytic networks, liquid crystals, grid complexes, colloidal crystals, self-assembled monolayers, micelles, microphase separation of block copolymers, and Langmuir–Blodgett films.

Biology

Birds flocking, an example of self-organization in biology
 

Self-organization in biology can be observed in spontaneous folding of proteins and other biomacromolecules, formation of lipid bilayer membranes, pattern formation and morphogenesis in developmental biology, the coordination of human movement, social behaviour in insects (bees, ants, termites) and mammals, and flocking behaviour in birds and fish.

The mathematical biologist Stuart Kauffman and other structuralists have suggested that self-organization may play roles alongside natural selection in three areas of evolutionary biology, namely population dynamics, molecular evolution, and morphogenesis. However, this does not take into account the essential role of energy in driving biochemical reactions in cells. The systems of reactions in any cell are self-catalyzing but not simply self-organizing as they are thermodynamically open systems relying on a continuous input of energy. Self-organization is not an alternative to natural selection, but it constrains what evolution can do and provides mechanisms such as the self-assembly of membranes which evolution then exploits.

The evolution of order in living systems and the generation of order in certain non-living systems were proposed to obey a common fundamental principle called “the Darwinian dynamic”, which was formulated by first considering how microscopic order is generated in simple non-biological systems far from thermodynamic equilibrium. Consideration was then extended to short, replicating RNA molecules assumed to be similar to the earliest forms of life in the RNA world. It was shown that the underlying order-generating processes of self-organization in the non-biological systems and in replicating RNA are basically similar.

Cosmology

In his 1995 conference paper "Cosmology as a problem in critical phenomena" Lee Smolin said that several cosmological objects or phenomena, such as spiral galaxies, galaxy formation processes in general, early structure formation, quantum gravity and the large-scale structure of the universe, might be the result of or have involved a certain degree of self-organization. He argues that self-organized systems are often critical systems, with structure spreading out in space and time over every available scale, as shown for example by Per Bak and his collaborators. Therefore, because the distribution of matter in the universe is more or less scale-invariant over many orders of magnitude, ideas and strategies developed in the study of self-organized systems could be helpful in tackling certain unsolved problems in cosmology and astrophysics.

Computer science

Phenomena from mathematics and computer science such as cellular automata, random graphs, and some instances of evolutionary computation and artificial life exhibit features of self-organization. In swarm robotics, self-organization is used to produce emergent behavior. In particular, the theory of random graphs has been used as a justification for self-organization as a general principle of complex systems. In the field of multi-agent systems, understanding how to engineer systems that are capable of presenting self-organized behavior is an active research area. Optimization algorithms can be considered self-organizing because they aim to find the optimal solution to a problem: if the solution is regarded as a state of the iterative system, the optimal solution is the selected, converged structure of the system. Self-organizing networks include small-world networks, self-stabilization, and scale-free networks. These emerge from bottom-up interactions, unlike top-down hierarchical networks within organizations, which are not self-organizing. Cloud computing systems have been argued to be inherently self-organizing, but while they have some autonomy, they are not self-managing as they do not have the goal of reducing their own complexity.
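As a concrete example of the cellular-automaton case, the following minimal sketch runs the elementary automaton Rule 30, a standard illustration of complex global structure emerging from purely local rules:

```python
# Elementary cellular automaton, Rule 30, on a ring of cells.
RULE = 30
WIDTH, STEPS = 63, 30

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # Each cell's next state depends only on itself and its two
    # neighbours: the 3-bit neighbourhood indexes a bit of RULE.
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] << 2
                  | cells[i] << 1
                  | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

No cell has any global information, yet an intricate, aperiodic pattern organizes itself across the whole grid.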

Cybernetics

Norbert Wiener regarded the automatic serial identification of a black box and its subsequent reproduction as self-organization in cybernetics. The importance of phase locking or the "attraction of frequencies", as he called it, is discussed in the second edition of his Cybernetics: Or Control and Communication in the Animal and the Machine. K. Eric Drexler sees self-replication as a key step in nano and universal assembly. By contrast, the four concurrently connected galvanometers of W. Ross Ashby's Homeostat hunt, when perturbed, to converge on one of many possible stable states. Ashby used his state-counting measure of variety to describe stable states and produced the "Good Regulator" theorem, which requires internal models for self-organized endurance and stability (e.g. the Nyquist stability criterion). Warren McCulloch proposed "Redundancy of Potential Command" as characteristic of the organization of the brain and human nervous system and the necessary condition for self-organization. Heinz von Foerster proposed Redundancy, R = 1 − H/Hmax, where H is the entropy of the system and Hmax its maximum possible entropy. In essence this states that unused potential communication bandwidth is a measure of self-organization.
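Read as a worked example (assuming H is the Shannon entropy of the system's state distribution and Hmax its maximum, uniform-distribution value), von Foerster's redundancy can be computed directly:

```python
import math

def redundancy(probs):
    # Von Foerster's R = 1 - H/Hmax for a distribution over n states:
    # H is the Shannon entropy, Hmax = log2(n) its maximum.
    n = len(probs)
    H = -sum(p * math.log2(p) for p in probs if p > 0)
    return 1.0 - H / math.log2(n)

print(redundancy([0.25, 0.25, 0.25, 0.25]))   # 0.0: no constraint, no order
print(redundancy([0.97, 0.01, 0.01, 0.01]))   # ~0.88: highly constrained
```

A system whose states are nearly equiprobable uses all its potential "bandwidth" and shows no organization; a strongly skewed distribution leaves most of that bandwidth unused, which the measure reads as self-organization.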

In the 1970s Stafford Beer considered self-organization necessary for autonomy in persisting and living systems. He applied his viable system model to management. It consists of five parts: the monitoring of performance of the survival processes (1), their management by recursive application of regulation (2), homeostatic operational control (3) and development (4) which produce maintenance of identity (5) under environmental perturbation. Focus is prioritized by an alerting "algedonic loop" feedback: a sensitivity to both pain and pleasure produced from under-performance or over-performance relative to a standard capability.

In the 1990s Gordon Pask argued that von Foerster's H and Hmax were not independent, but interacted via countably infinite recursive concurrent spin processes which he called concepts. His strict definition of concept "a procedure to bring about a relation" permitted his theorem "Like concepts repel, unlike concepts attract" to state a general spin-based principle of self-organization. His edict, an exclusion principle, "There are No Doppelgangers" means no two concepts can be the same. After sufficient time, all concepts attract and coalesce as pink noise. The theory applies to all organizationally closed or homeostatic processes that produce enduring and coherent products which evolve, learn and adapt.

Human society

Social self-organization in international drug routes
 

The self-organizing behaviour of social animals and the self-organization of simple mathematical structures both suggest that self-organization should be expected in human society. Tell-tale signs of self-organization are usually statistical properties shared with self-organizing physical systems. Examples such as critical mass, herd behaviour, groupthink and others, abound in sociology, economics, behavioral finance and anthropology.

In social theory, the concept of self-referentiality has been introduced as a sociological application of self-organization theory by Niklas Luhmann (1984). For Luhmann the elements of a social system are self-producing communications, i.e. a communication produces further communications and hence a social system can reproduce itself as long as there is dynamic communication. For Luhmann human beings are sensors in the environment of the system. Luhmann developed an evolutionary theory of society and its subsystems, using functional analyses and systems theory.

In economics, a market economy is sometimes said to be self-organizing. Paul Krugman has written on the role that market self-organization plays in the business cycle in his book "The Self-Organizing Economy". Friedrich Hayek coined the term catallaxy to describe a "self-organizing system of voluntary co-operation", in regard to the spontaneous order of the free market economy. Neo-classical economists hold that imposing central planning usually makes the self-organized economic system less efficient. At the other end of the spectrum, some economists consider that market failures are so significant that self-organization produces bad results and that the state should direct production and pricing. Most economists adopt an intermediate position and recommend a mixture of market economy and command economy characteristics (sometimes called a mixed economy). When applied to economics, the concept of self-organization can quickly become ideologically imbued.

In learning

Enabling others to "learn how to learn" is often taken to mean instructing them how to submit to being taught. Self-organised learning (S.O.L.) denies that "the expert knows best" or that there is ever "the one best method", insisting instead on "the construction of personally significant, relevant and viable meaning" to be tested experientially by the learner. This may be collaborative, and more rewarding personally. It is seen as a lifelong process, not limited to specific learning environments (home, school, university) or under the control of authorities such as parents and professors. It needs to be tested, and intermittently revised, through the personal experience of the learner. It need not be restricted by either consciousness or language. Fritjof Capra argued that it is poorly recognised within psychology and education. It may be related to cybernetics as it involves a negative feedback control loop, or to systems theory. It can be conducted as a learning conversation or dialogue between learners or within one person.

Traffic flow

The self-organizing behavior of drivers in traffic flow determines almost all the spatiotemporal behavior of traffic, such as traffic breakdown at a highway bottleneck, highway capacity, and the emergence of moving traffic jams. In 1996–2002 these complex self-organizing effects were explained by Boris Kerner's three-phase traffic theory.
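Kerner's theory itself is beyond a short sketch, but the classic Nagel–Schreckenberg cellular automaton (a different, simpler traffic model) shows the basic self-organizing effect: jams condense spontaneously out of uniform flow through purely local driver rules. A minimal sketch with illustrative parameters:

```python
import random

random.seed(1)
L, N, VMAX, P_SLOW, STEPS = 100, 30, 5, 0.3, 50

# Circular road of L cells: a car stores its speed, empty cells are None.
road = [None] * L
for pos in random.sample(range(L), N):
    road[pos] = 0

for _ in range(STEPS):
    new_road = [None] * L
    for i, v in enumerate(road):
        if v is None:
            continue
        gap = 1
        while road[(i + gap) % L] is None:   # distance to the car ahead
            gap += 1
        v = min(v + 1, VMAX)                 # accelerate toward the limit
        v = min(v, gap - 1)                  # brake to avoid a collision
        if v > 0 and random.random() < P_SLOW:
            v -= 1                           # random slowdown seeds jams
        new_road[(i + v) % L] = v
    road = new_road
    print("".join("." if c is None else str(c) for c in road))
```

Printed over successive steps, clusters of slow cars (jams) appear and drift backwards against the direction of travel, with no central coordination.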

In linguistics

Order appears spontaneously in the evolution of language as individual and population behaviour interacts with biological evolution.

In research funding

Self-organized funding allocation (SOFA) is a method of distributing funding for scientific research. In this system, each researcher is allocated an equal amount of funding and is required to anonymously allocate a fraction of their funds to the research of others. Proponents of SOFA argue that it would result in a distribution of funding similar to that of the present grant system, but with less overhead. In 2016, a test pilot of SOFA began in the Netherlands.
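A minimal sketch of one redistribution round under hypothetical parameters (five researchers, a flat base grant, and a mandatory pass-on fraction; in actual SOFA proposals recipients would be chosen by scientific judgment rather than at random):

```python
import random

random.seed(42)

researchers = ["A", "B", "C", "D", "E"]
BASE, FRACTION = 100_000.0, 0.5   # hypothetical base grant and pass-on share

funds = {r: BASE for r in researchers}
received = {r: 0.0 for r in researchers}

for r in researchers:
    share = funds[r] * FRACTION
    funds[r] -= share
    # Each researcher anonymously passes the share to a peer; random
    # choice here stands in for an actual judgment of merit.
    recipient = random.choice([x for x in researchers if x != r])
    received[recipient] += share

for r in researchers:
    funds[r] += received[r]

print(funds)  # an unequal distribution emerges from equal initial grants
```

Iterated over many rounds, such peer-to-peer reallocation concentrates funding on the researchers their peers value, with no central grant committee.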

Criticism

Heinz Pagels, in a 1985 review of Ilya Prigogine and Isabelle Stengers's book Order Out of Chaos in Physics Today, appeals to authority:

Most scientists would agree with the critical view expressed in Problems of Biological Physics (Springer Verlag, 1981) by the biophysicist L. A. Blumenfeld, when he wrote: "The meaningful macroscopic ordering of biological structure does not arise due to the increase of certain parameters of a system above their critical values. These structures are built according to program-like complicated architectural structures, the meaningful information created during many billions of years of chemical and biological evolution being used." Life is a consequence of microscopic, not macroscopic, organization.

Of course, Blumenfeld does not answer the further question of how those program-like structures emerge in the first place. His explanation leads directly to infinite regress.

In short, they [Prigogine and Stengers] maintain that time irreversibility is not derived from a time-independent microworld, but is itself fundamental. The virtue of their idea is that it resolves what they perceive as a "clash of doctrines" about the nature of time in physics. Most physicists would agree that there is neither empirical evidence to support their view, nor is there a mathematical necessity for it. There is no "clash of doctrines." Only Prigogine and a few colleagues hold to these speculations which, in spite of their efforts, continue to live in the twilight zone of scientific credibility.

In theology, Thomas Aquinas (1225–1274) in his Summa Theologica assumes a teleological created universe in rejecting the idea that something can be a self-sufficient cause of its own organization:

Since nature works for a determinate end under the direction of a higher agent, whatever is done by nature must needs be traced back to God, as to its first cause. So also whatever is done voluntarily must also be traced back to some higher cause other than human reason or will, since these can change or fail; for all things that are changeable and capable of defect must be traced back to an immovable and self-necessary first principle, as was shown in the body of the Article.
