
Thursday, September 30, 2021

Emergence

From Wikipedia, the free encyclopedia

The formation of complex symmetrical and fractal patterns in snowflakes exemplifies emergence in a physical system.
 
A termite "cathedral" mound produced by a termite colony offers a classic example of emergence in nature.
 
In philosophy, systems theory, science, and art, emergence occurs when an entity is observed to have properties its parts do not have on their own, properties or behaviors which emerge only when the parts interact in a wider whole.

Emergence plays a central role in theories of integrative levels and of complex systems. For instance, the phenomenon of life as studied in biology is an emergent property of chemistry, and many psychological phenomena are known to emerge from underlying neurobiological processes.

In philosophy, theories that emphasize emergent properties have been called emergentism.

In philosophy

Philosophers often understand emergence as a claim about the etiology of a system's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole. Nicolai Hartmann (1882-1950), one of the first modern philosophers to write on emergence, termed this a categorial novum (new category).

Definitions

This concept of emergence dates from at least the time of Aristotle. The many scientists and philosophers who have written on the concept include John Stuart Mill (Composition of Causes, 1843) and Julian Huxley (1887-1975).

The philosopher G. H. Lewes coined the term "emergent", writing in 1875:

Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same – their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference.

In 1999 economist Jeffrey Goldstein provided a current definition of emergence in the journal Emergence. Goldstein initially defined emergence as: "the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems".

In 2002 systems scientist Peter Corning described the qualities of Goldstein's definition in more detail:

The common characteristics are: (1) radical novelty (features not previously observed in systems); (2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time); (3) A global or macro "level" (i.e. there is some property of "wholeness"); (4) it is the product of a dynamical process (it evolves); and (5) it is "ostensive" (it can be perceived).

Corning suggests a narrower definition, requiring that the components be unlike in kind (following Lewes), and that they involve division of labor between these components. He also says that living systems (like the game of chess), while emergent, cannot be reduced to underlying laws of emergence:

Rules, or laws, have no causal efficacy; they do not in fact 'generate' anything. They serve merely to describe regularities and consistent relationships in nature. These patterns may be very illuminating and important, but the underlying causal agencies must be separately specified (though often they are not). But that aside, the game of chess illustrates ... why any laws or rules of emergence and evolution are insufficient. Even in a chess game, you cannot use the rules to predict 'history' – i.e., the course of any given game. Indeed, you cannot even reliably predict the next move in a chess game. Why? Because the 'system' involves more than the rules of the game. It also includes the players and their unfolding, moment-by-moment decisions among a very large number of available options at each choice point. The game of chess is inescapably historical, even though it is also constrained and shaped by a set of rules, not to mention the laws of physics. Moreover, and this is a key point, the game of chess is also shaped by teleonomic, cybernetic, feedback-driven influences. It is not simply a self-ordered process; it involves an organized, 'purposeful' activity.

Strong and weak emergence

Usage of the notion "emergence" may generally be subdivided into two perspectives, that of "weak emergence" and "strong emergence". One paper discussing this division is Weak Emergence, by philosopher Mark Bedau. In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation or similar forms of after-the-fact analysis (for example, the formation of a traffic jam, the structure of a flock of starlings in flight or a school of fish, or the formation of galaxies). Crucial in these simulations is that the interacting members retain their independence. If not, a new entity is formed with new, emergent properties: this is called strong emergence, which it is argued cannot be simulated or analysed.
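As a concrete illustration of weak emergence, the sketch below runs a Nagel–Schreckenberg-style traffic cellular automaton on a ring road (this particular model is an assumption chosen for illustration; Bedau's paper does not prescribe one). Every car follows only local rules, yet stop-and-go jams appear at the macro level and are found, in practice, only by running the simulation.

```python
import random

L_ROAD = 100     # number of road cells on a ring
N_CARS = 30      # number of cars
V_MAX = 5        # maximum speed in cells per step
P_SLOW = 0.3     # probability of a random slowdown

def init_road():
    road = [None] * L_ROAD                    # None = empty cell
    for p in random.sample(range(L_ROAD), N_CARS):
        road[p] = random.randint(0, V_MAX)    # occupied cell stores the car's speed
    return road

def gap_ahead(road, i):
    """Number of cells to the next car ahead of cell i."""
    d = 1
    while road[(i + d) % L_ROAD] is None:
        d += 1
    return d

def step(road):
    new_road = [None] * L_ROAD
    for i, v in enumerate(road):
        if v is None:
            continue
        v = min(v + 1, V_MAX)                  # accelerate
        v = min(v, gap_ahead(road, i) - 1)     # brake to avoid collision
        if v > 0 and random.random() < P_SLOW:
            v -= 1                             # random slowdown
        new_road[(i + v) % L_ROAD] = v
    return new_road

road = init_road()
for _ in range(200):
    road = step(road)
print(sum(1 for v in road if v == 0), "cars currently stopped")
```

Jams here are weakly emergent in Bedau's sense: they are fixed by the micro-level rules, but in practice they are derived only by simulating those rules, and the cars retain their independence throughout.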

Some common points between the two notions are that emergence concerns new properties produced as the system grows, which is to say ones which are not shared with its components or prior states. Also, it is assumed that the properties are supervenient rather than metaphysically primitive.

Weak emergence describes new properties arising in systems as a result of the interactions at an elemental level. However, Bedau stipulates that the properties can be determined only by observing or simulating the system, and not by any process of a reductionist analysis. As a consequence the emerging properties are scale dependent: they are only observable if the system is large enough to exhibit the phenomenon. Chaotic, unpredictable behaviour can be seen as an emergent phenomenon, while at a microscopic scale the behaviour of the constituent parts can be fully deterministic.
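A minimal sketch of that last point, using elementary cellular automaton Rule 30 as an illustrative stand-in (it is not an example Bedau himself discusses): each cell follows a fixed, fully deterministic local rule, yet the global pattern is effectively unpredictable without running the system.

```python
RULE = 30     # Wolfram's Rule 30, a deterministic local update rule
WIDTH = 79
STEPS = 30

def next_row(row):
    new = []
    for i in range(len(row)):
        left, centre, right = row[i - 1], row[i], row[(i + 1) % len(row)]
        pattern = (left << 2) | (centre << 1) | right   # 3-cell neighbourhood
        new.append((RULE >> pattern) & 1)               # look up the rule bit
    return new

row = [0] * WIDTH
row[WIDTH // 2] = 1            # a single "on" cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    row = next_row(row)
```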

Bedau notes that weak emergence is not a universal metaphysical solvent, as the hypothesis that consciousness is weakly emergent would not resolve the traditional philosophical questions about the physicality of consciousness. However, Bedau concludes that adopting this view would provide two things: a precise sense in which emergence is involved in consciousness, and a notion of weak emergence that is metaphysically benign.

Strong emergence describes the direct causal action of a high-level system upon its components; qualities produced this way are irreducible to the system's constituent parts. The whole is other than the sum of its parts. An example from physics of such emergence is water, whose behaviour appears unpredictable even after an exhaustive study of the properties of its constituent hydrogen and oxygen atoms. It is argued then that no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts.

Rejecting the distinction

However, biologist Peter Corning has asserted that "the debate about whether or not the whole can be predicted from the properties of the parts misses the point. Wholes produce unique combined effects, but many of these effects may be co-determined by the context and the interactions between the whole and its environment(s)". In accordance with his Synergism Hypothesis, Corning also stated: "It is the synergistic effects produced by wholes that are the very cause of the evolution of complexity in nature." Novelist Arthur Koestler used the metaphor of Janus (a symbol of the unity underlying complements like open/shut, peace/war) to illustrate how the two perspectives (strong vs. weak or holistic vs. reductionistic) should be treated as non-exclusive, and should work together to address the issues of emergence. Theoretical physicist PW Anderson states it this way:

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts.

Viability of strong emergence

Some thinkers question the plausibility of strong emergence as contravening our usual understanding of physics. Mark A. Bedau observes:

Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.

Strong emergence can be criticized for being causally overdetermined. The canonical example concerns emergent mental states (M and M∗) that supervene on physical states (P and P∗) respectively. Let M and M∗ be emergent properties. Let M∗ supervene on base property P∗. What happens when M causes M∗? Jaegwon Kim says:

In our schematic example above, we concluded that M causes M∗ by causing P∗. So M causes P∗. Now, M, as an emergent, must itself have an emergence base property, say P. Now we face a critical question: if an emergent, M, emerges from basal condition P, why cannot P displace M as a cause of any putative effect of M? Why cannot P do all the work in explaining why any alleged effect of M occurred? If causation is understood as nomological (law-based) sufficiency, P, as M's emergence base, is nomologically sufficient for it, and M, as P∗'s cause, is nomologically sufficient for P∗. It follows that P is nomologically sufficient for P∗ and hence qualifies as its cause…If M is somehow retained as a cause, we are faced with the highly implausible consequence that every case of downward causation involves overdetermination (since P remains a cause of P∗ as well). Moreover, this goes against the spirit of emergentism in any case: emergents are supposed to make distinctive and novel causal contributions.

If M is the cause of M∗, then M∗ is overdetermined because M∗ can also be thought of as being determined by P. One escape-route that a strong emergentist could take would be to deny downward causation. However, this would remove the proposed reason that emergent mental states must supervene on physical states, which in turn would call physicalism into question, and thus be unpalatable for some philosophers and physicists.

Meanwhile, others have worked towards developing analytical evidence of strong emergence. In 2009, Gu et al. presented a class of physical systems that exhibits non-computable macroscopic properties. More precisely, if one could compute certain macroscopic properties of these systems from the microscopic description of these systems, then one would be able to solve computational problems known to be undecidable in computer science. Gu et al. concluded that

Although macroscopic concepts are essential for understanding our world, much of fundamental physics has been devoted to the search for a 'theory of everything', a set of equations that perfectly describe the behavior of all fundamental particles. The view that this is the goal of science rests in part on the rationale that such a theory would allow us to derive the behavior of all macroscopic concepts, at least in principle. The evidence we have presented suggests that this view may be overly optimistic. A 'theory of everything' is one of many components necessary for complete understanding of the universe, but is not necessarily the only one. The development of macroscopic laws from first principles may involve more than just systematic logic, and could require conjectures suggested by experiments, simulations or insight.

Emergence and interaction

Emergent structures are patterns that emerge via the collective actions of many individual entities. To explain such patterns, one might conclude, per Aristotle, that emergent structures are other than the sum of their parts on the assumption that the emergent order will not arise if the various parts simply interact independently of one another. However, there are those who disagree. According to this argument, the interaction of each part with its immediate surroundings causes a complex chain of processes that can lead to order in some form. In fact, some systems in nature are observed to exhibit emergence based upon the interactions of autonomous parts, and some others exhibit emergence that at least at present cannot be reduced in this way. In particular renormalization methods in theoretical physics enable scientists to study systems that are not tractable as the combination of their parts.

Objective or subjective quality

Crutchfield regards the properties of complexity and organization of any system as subjective qualities determined by the observer.

Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analysed in terms of how model-building observers infer from measurements the computational capabilities embedded in non-linear processes. An observer’s notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer’s chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data.

On the other hand, Peter Corning argues: "Must the synergies be perceived/observed in order to qualify as emergent effects, as some theorists claim? Most emphatically not. The synergies associated with emergence are real and measurable, even if nobody is there to observe them."

The low entropy of an ordered system can be viewed as an example of subjective emergence: the observer sees an ordered system by ignoring the underlying microstructure (i.e. movement of molecules or elementary particles) and concludes that the system has a low entropy. On the other hand, chaotic, unpredictable behaviour can also be seen as subjectively emergent, while at a microscopic scale the movement of the constituent parts can be fully deterministic.

In religion, art and humanities

In religion, emergence grounds expressions of religious naturalism and syntheism in which a sense of the sacred is perceived in the workings of entirely naturalistic processes by which more complex forms arise or evolve from simpler forms. Examples are detailed in The Sacred Emergence of Nature by Ursula Goodenough & Terrence Deacon and Beyond Reductionism: Reinventing the Sacred by Stuart Kauffman, both from 2006, and in Syntheism – Creating God in The Internet Age by Alexander Bard & Jan Söderqvist from 2014. An early argument (1904–05) for the emergence of social formations, in part stemming from religion, can be found in Max Weber's most famous work, The Protestant Ethic and the Spirit of Capitalism. More recently, the emergence of new social systems has been linked with the emergence of order from nonlinear relationships among multiple interacting units, where the interacting units are individual thoughts, consciousness, and actions.

In art, emergence is used to explore the origins of novelty, creativity, and authorship. Some art/literary theorists (Wheeler, 2006; Alexander, 2011) have proposed alternatives to postmodern understandings of "authorship" using the complexity sciences and emergence theory. They contend that artistic selfhood and meaning are emergent, relatively objective phenomena. Michael J. Pearce has used emergence to describe the experience of works of art in relation to contemporary neuroscience. Practicing artist Leonel Moura, in turn, attributes to his "artbots" a real, if nonetheless rudimentary, creativity based on emergent principles. In literature and linguistics, the concept of emergence has been applied in the domain of stylometry to explain the interrelation between the syntactical structures of the text and the author style (Slautina, Marusenko, 2014).

In international development, concepts of emergence have been used within a theory of social change termed SEED-SCALE to show how standard principles interact to bring forward socio-economic development fitted to cultural values, community economics, and natural environment (local solutions emerging from the larger socio-econo-biosphere). These principles can be implemented utilizing a sequence of standardized tasks that self-assemble in individually specific ways utilizing recursive evaluative criteria.

In postcolonial studies, the term "Emerging Literature" refers to a contemporary body of texts that is gaining momentum in the global literary landscape (see especially J. M. Grassin, ed., Emerging Literatures, Bern, Berlin, etc.: Peter Lang, 1996). By contrast, "emergent literature" is rather a concept used in the theory of literature.

Emergent properties and processes

An emergent behavior or emergent property can appear when a number of simple entities (agents) operate in an environment, forming more complex behaviors as a collective. If emergence happens over disparate size scales, then the reason is usually a causal relation across different scales. In other words, there is often a form of top-down feedback in systems with emergent properties. The processes causing emergent properties may occur in either the observed or observing system, and are commonly identifiable by their patterns of accumulating change, generally called 'growth'. Emergent behaviours can occur because of intricate causal relations across different scales and feedback, known as interconnectivity. The emergent property itself may be either very predictable or unpredictable and unprecedented, and represent a new level of the system's evolution. The complex behaviour or properties are not a property of any single such entity, nor can they easily be predicted or deduced from behaviour in the lower-level entities. The shape and behaviour of a flock of birds or school of fish are good examples of emergent properties.

One reason emergent behaviour is hard to predict is that the number of possible interactions among a system's components grows combinatorially with the number of components, thus allowing for many new and subtle types of behaviour to emerge. Emergence is often a product of particular patterns of interaction. Negative feedback introduces constraints that serve to fix structures or behaviours. In contrast, positive feedback promotes change, allowing local variations to grow into global patterns. Another way in which interactions lead to emergent properties is dual-phase evolution. This occurs where interactions are applied intermittently, leading to two phases: one in which patterns form or grow, the other in which they are refined or removed.
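A quick count makes this growth concrete. The snippet below simply tallies possible pairwise interactions and possible multi-component groupings for a few component counts n; it is a rough illustration, not a claim about any particular system.

```python
from math import comb

for n in (5, 10, 20, 50):
    pairs = comb(n, 2)          # possible pairwise interactions
    groupings = 2**n - n - 1    # possible groupings of two or more components
    print(f"n={n:3d}  pairs={pairs:5d}  groupings={groupings}")
```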

On the other hand, merely having a large number of interactions is not enough by itself to guarantee emergent behaviour; many of the interactions may be negligible or irrelevant, or may cancel each other out. In some cases, a large number of interactions can in fact hinder the emergence of interesting behaviour, by creating a lot of "noise" to drown out any emerging "signal"; the emergent behaviour may need to be temporarily isolated from other interactions before it reaches enough critical mass to self-support. Thus it is not just the sheer number of connections between components which encourages emergence; it is also how these connections are organised. A hierarchical organisation is one example that can generate emergent behaviour (a bureaucracy may behave in a way quite different from the individual departments of that bureaucracy); but emergent behaviour can also arise from more decentralized organisational structures, such as a marketplace. In some cases, the system has to reach a combined threshold of diversity, organisation, and connectivity before emergent behaviour appears.

Unintended consequences and side effects are closely related to emergent properties. Luc Steels writes: "A component has a particular functionality but this is not recognizable as a subfunction of the global functionality. Instead a component implements a behaviour whose side effect contributes to the global functionality ... Each behaviour has a side effect and the sum of the side effects gives the desired functionality". In other words, the global or macroscopic functionality of a system with "emergent functionality" is the sum of all "side effects", of all emergent properties and functionalities.

Systems with emergent properties or emergent structures may appear to defy entropic principles and the second law of thermodynamics, because they form and increase order despite the lack of command and central control. This is possible because open systems can extract information and order out of the environment.

Emergence helps to explain why the fallacy of division is a fallacy.

Emergent structures in nature

Ripple patterns in a sand dune created by wind or water are an example of an emergent structure in nature.
 
Giant's Causeway in Northern Ireland is an example of a complex emergent structure.

Emergent structures can be found in many natural phenomena, from the physical to the biological domain. For example, the shapes of weather phenomena such as hurricanes are emergent structures. The development and growth of complex, orderly crystals, as driven by the random motion of water molecules within a conducive natural environment, is another example of an emergent process, where randomness can give rise to complex and deeply attractive, orderly structures.

Water crystals forming on glass demonstrate an emergent, fractal process occurring under appropriate conditions of temperature and humidity.

However, crystalline structure and hurricanes are said to have a self-organizing phase.

It is useful to distinguish three forms of emergent structures. A first-order emergent structure occurs as a result of shape interactions (for example, hydrogen bonds in water molecules lead to surface tension). A second-order emergent structure involves shape interactions played out sequentially over time (for example, changing atmospheric conditions as a snowflake falls to the ground build upon and alter its form). Finally, a third-order emergent structure is a consequence of shape, time, and heritable instructions. For example, an organism's genetic code affects the form of the organism's systems in space and time.

Nonliving, physical systems

In physics, emergence is used to describe a property, law, or phenomenon which occurs at macroscopic scales (in space or time) but not at microscopic scales, despite the fact that a macroscopic system can be viewed as a very large ensemble of microscopic systems.

An emergent property need not be more complicated than the underlying non-emergent properties which generate it. For instance, the laws of thermodynamics are remarkably simple, even if the laws which govern the interactions between component particles are complex. The term emergence in physics is thus used not to signify complexity, but rather to distinguish which laws and concepts apply to macroscopic scales, and which ones apply to microscopic scales.

However, another, perhaps more broadly applicable way to conceive of the emergent divide does involve a dose of complexity, insofar as the computational feasibility of going from the microscopic to the macroscopic property indicates the 'strength' of the emergence. This is better understood given the following definition of (weak) emergence that comes from physics:

"An emergent behavior of a physical system is a qualitative property that can only occur in the limit that the number of microscopic constituents tends to infinity."

Since there are no actually infinite systems in the real world, there is no obvious naturally occurring notion of a hard separation between the properties of the constituents of a system and those of the emergent whole. As discussed below, classical mechanics is thought to be emergent from quantum mechanics, though in principle, quantum dynamics fully describes everything happening at a classical level. However, it would take a computer larger than the size of the universe, with more computing time than the lifetime of the universe, to describe the motion of a falling apple in terms of the locations of its electrons; thus we can take this to be a "strong" emergent divide.

In the case of strong emergence, the number of constituents can be much smaller. For instance, the emergent properties of an H2O molecule are very different from those of its constituent parts, oxygen and hydrogen.

Some examples include:

Classical mechanics
The laws of classical mechanics can be said to emerge as a limiting case from the rules of quantum mechanics applied to large enough masses. This is particularly strange since quantum mechanics is generally thought of as more complicated than classical mechanics.
Friction
Forces between elementary particles are conservative. However, friction emerges when considering more complex structures of matter, whose surfaces can convert mechanical energy into heat energy when rubbed against each other. Similar considerations apply to other emergent concepts in continuum mechanics such as viscosity, elasticity, tensile strength, etc.
Patterned ground
the distinct, and often symmetrical geometric shapes formed by ground material in periglacial regions.
Statistical mechanics
initially derived using the concept of a large enough ensemble that fluctuations about the most likely distribution can be all but ignored. However, small clusters do not exhibit sharp first order phase transitions such as melting, and at the boundary it is not possible to completely categorize the cluster as a liquid or solid, since these concepts are (without extra definitions) only applicable to macroscopic systems. Describing a system using statistical mechanics methods is much simpler than using a low-level atomistic approach.
Electrical networks
The bulk conductive response of binary (RC) electrical networks with random arrangements, known as the Universal Dielectric Response (UDR), can be seen as emergent properties of such physical systems. Such arrangements can be used as simple physical prototypes for deriving mathematical formulae for the emergent responses of complex systems.
Weather
Temperature is sometimes used as an example of an emergent macroscopic behaviour. In classical dynamics, a snapshot of the instantaneous momenta of a large number of particles at equilibrium is sufficient to find the average kinetic energy per degree of freedom which is proportional to the temperature. For a small number of particles the instantaneous momenta at a given time are not statistically sufficient to determine the temperature of the system. However, using the ergodic hypothesis, the temperature can still be obtained to arbitrary precision by further averaging the momenta over a long enough time.
Convection
in a liquid or gas is another example of emergent macroscopic behaviour that makes sense only when considering differentials of temperature. Convection cells, particularly Bénard cells, are an example of a self-organizing system (more specifically, a dissipative system) whose structure is determined both by the constraints of the system and by random perturbations: the possible realizations of the shape and size of the cells depends on the temperature gradient as well as the nature of the fluid and shape of the container, but which configurations are actually realized is due to random perturbations (thus these systems exhibit a form of symmetry breaking).
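As a minimal numerical sketch of the temperature example above, the code below draws a synthetic Maxwell–Boltzmann snapshot of particle momenta (the particle mass and target temperature are arbitrary assumptions) and estimates temperature from the average kinetic energy per degree of freedom; the estimate only becomes sharp when the number of particles is large.

```python
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant, J/K
MASS = 6.6447e-27     # roughly the mass of a helium-4 atom, kg (assumed)

def temperature_from_momenta(momenta, mass=MASS):
    """Estimate T from a snapshot of momenta, shape (N, 3), in kg*m/s.

    Uses <p_i^2>/(2m) = (1/2) k_B T per degree of freedom.
    """
    ke_per_dof = np.mean(momenta**2) / (2.0 * mass)
    return 2.0 * ke_per_dof / K_B

T_TRUE = 300.0                              # assumed target temperature, K
rng = np.random.default_rng(0)
sigma = np.sqrt(MASS * K_B * T_TRUE)        # std of each momentum component
for n in (10, 1_000, 100_000):
    p = rng.normal(0.0, sigma, size=(n, 3))
    print(n, "particles ->", round(temperature_from_momenta(p), 1), "K")
```

Small samples scatter widely around 300 K, while large samples pin it down, illustrating the macroscopic character of temperature.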

In some theories of particle physics, even such basic structures as mass, space, and time are viewed as emergent phenomena, arising from more fundamental concepts such as the Higgs boson or strings. In some interpretations of quantum mechanics, the perception of a deterministic reality, in which all objects have a definite position, momentum, and so forth, is actually an emergent phenomenon, with the true state of matter being described instead by a wavefunction which need not have a single position or momentum. Most of the laws of physics themselves as we experience them today appear to have emerged during the course of time, making emergence the most fundamental principle in the universe and raising the question of what might be the most fundamental law of physics from which all others emerged. Chemistry can in turn be viewed as an emergent property of the laws of physics. Biology (including biological evolution) can be viewed as an emergent property of the laws of chemistry. Similarly, psychology could be understood as an emergent property of neurobiological laws. Finally, some economic theories understand economy as an emergent feature of psychology.

According to Laughlin, for many particle systems, nothing can be calculated exactly from the microscopic equations, and macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology, and have properties that do not depend on many microscopic details. This does not mean that the microscopic interactions are irrelevant, but simply that you do not see them anymore — you only see a renormalized effect of them. Laughlin is a pragmatic theoretical physicist: if you cannot, possibly ever, calculate the broken symmetry macroscopic properties from the microscopic equations, then what is the point of talking about reducibility?

Living, biological systems

Emergence and evolution

Life is a major source of complexity, and evolution is the major process behind the varying forms of life. In this view, evolution is the process by which complexity grows in the natural world, and it is in this sense that one speaks of the emergence of complex living beings and life-forms.

Life is thought to have emerged in the early RNA world when RNA chains began to express the basic conditions necessary for natural selection to operate as conceived by Darwin: heritability, variation of type, and competition for limited resources. Fitness of an RNA replicator (its per capita rate of increase) would likely be a function of adaptive capacities that were intrinsic (in the sense that they were determined by the nucleotide sequence) and the availability of resources. The three primary adaptive capacities may have been (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type); (2) the capacity to avoid decay; and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations of the RNA replicators (see “Ribozyme”) that, in turn, would be encoded in their individual nucleotide sequences. Competitive success among different replicators would have depended on the relative values of these adaptive capacities.

Regarding causality in evolution Peter Corning observes:

Synergistic effects of various kinds have played a major causal role in the evolutionary process generally and in the evolution of cooperation and complexity in particular... Natural selection is often portrayed as a “mechanism”, or is personified as a causal agency... In reality, the differential “selection” of a trait, or an adaptation, is a consequence of the functional effects it produces in relation to the survival and reproductive success of a given organism in a given environment. It is these functional effects that are ultimately responsible for the trans-generational continuities and changes in nature.

Per his definition of emergence, Corning also addresses emergence and evolution:

[In] evolutionary processes, causation is iterative; effects are also causes. And this is equally true of the synergistic effects produced by emergent systems. In other words, emergence itself... has been the underlying cause of the evolution of emergent phenomena in biological evolution; it is the synergies produced by organized systems that are the key

Swarming is a well-known behaviour in many animal species from marching locusts to schooling fish to flocking birds. Emergent structures are a common strategy found in many animal groups: colonies of ants, mounds built by termites, swarms of bees, shoals/schools of fish, flocks of birds, and herds/packs of mammals.

An example to consider in detail is an ant colony. The queen does not give direct orders and does not tell the ants what to do. Instead, each ant reacts to stimuli in the form of chemical scent from larvae, other ants, intruders, food and buildup of waste, and leaves behind a chemical trail, which, in turn, provides a stimulus to other ants. Here each ant is an autonomous unit that reacts depending only on its local environment and the genetically encoded rules for its variety of ant. Despite the lack of centralized decision making, ant colonies exhibit complex behavior and have even demonstrated the ability to solve geometric problems. For example, colonies routinely find the maximum distance from all colony entrances to dispose of dead bodies.

It appears that environmental factors may play a role in influencing emergence. For example, research on the bee species Macrotera portalis suggests environmentally induced emergence: in this species, bees emerge in a pattern consistent with rainfall, specifically with the late summer rains of the southwestern deserts, and show a lack of activity in the spring.

Organization of life

A broader example of emergent properties in biology is viewed in the biological organisation of life, ranging from the subatomic level to the entire biosphere. For example, individual atoms can be combined to form molecules such as polypeptide chains, which in turn fold and refold to form proteins, which in turn create even more complex structures. These proteins, assuming their functional status from their spatial conformation, interact together and with other molecules to achieve higher biological functions and eventually create an organism. Another example is how cascade phenotype reactions, as detailed in chaos theory, arise from individual genes mutating respective positioning. At the highest level, all the biological communities in the world form the biosphere, where its human participants form societies, and the complex interactions among societies give rise to meta-social systems such as the stock market.

Emergence of mind

Among the phenomena considered in the evolutionary account of life, understood as a continuous history marked by stages at which fundamentally new forms have appeared, is the origin of sapiens intelligence. The emergence of mind and its evolution are researched and considered as a separate phenomenon in a special system of knowledge called noogenesis.

In humanity

Spontaneous order

Groups of human beings, left free to each regulate themselves, tend to produce spontaneous order, rather than the meaningless chaos often feared. This has been observed in society at least since Chuang Tzu in ancient China. Human beings are the basic elements of social systems, which perpetually interact and create, maintain, or untangle mutual social bonds. Social bonds in social systems are perpetually changing in the sense of the ongoing reconfiguration of their structure. A classic traffic roundabout is also a good example, with cars moving in and out with such effective organization that some modern cities have begun replacing stoplights at problem intersections with roundabouts, and getting better results. Open-source software and Wiki projects form an even more compelling illustration.

Emergent processes or behaviors can be seen in many other places, such as cities, cabal and market-dominant minority phenomena in economics, organizational phenomena in computer simulations and cellular automata. Whenever there is a multitude of individuals interacting, an order emerges from disorder; a pattern, a decision, a structure, or a change in direction occurs.

Economics

The stock market (or any market for that matter) is an example of emergence on a grand scale. As a whole it precisely regulates the relative security prices of companies across the world, yet it has no leader; there is no central planning, and no one entity controls the workings of the entire market. Agents, or investors, have knowledge of only a limited number of companies within their portfolio, and must follow the regulatory rules of the market and analyse the transactions individually or in large groupings. Trends and patterns emerge which are studied intensively by technical analysts.

World Wide Web and the Internet

The World Wide Web is a popular example of a decentralized system exhibiting emergent properties. There is no central organization rationing the number of links, yet the number of links pointing to each page follows a power law in which a few pages are linked to many times and most pages are seldom linked to. A related property of the network of links in the World Wide Web is that almost any pair of pages can be connected to each other through a relatively short chain of links. Although relatively well known now, this property was initially unexpected in an unregulated network. It is shared with many other types of networks called small-world networks.
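The heavy-tailed link distribution can be reproduced by a toy preferential-attachment growth model (a Barabási–Albert-style mechanism, sketched here as one commonly proposed explanation rather than a model of the actual Web): each new node links to existing nodes with probability proportional to their current number of links.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=0):
    """Grow a graph where each new node attaches to m existing nodes chosen
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(m + 1)}
    for i in adj:                              # small fully connected seed graph
        adj[i] = {j for j in range(m + 1) if j != i}
    pool = [i for i in adj for _ in adj[i]]    # each node repeated by its degree
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool))      # degree-proportional choice
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            pool.extend((t, new))
    return adj

adj = preferential_attachment(10_000)
degree_counts = Counter(len(neigh) for neigh in adj.values())
for d in (2, 4, 8, 16, 32, 64):
    print(f"degree {d:3d}: {degree_counts.get(d, 0)} nodes")
```

A few hubs end up with very many links while most nodes keep only a handful, mirroring the power-law pattern described above.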

Internet traffic can also exhibit some seemingly emergent properties. In the congestion control mechanism, TCP flows can become globally synchronized at bottlenecks, simultaneously increasing and then decreasing throughput in coordination. Congestion, widely regarded as a nuisance, is possibly an emergent property of the spreading of bottlenecks across a network in high traffic flows which can be considered as a phase transition.

Another important example of emergence in web-based systems is social bookmarking (also called collaborative tagging). In social bookmarking systems, users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. Recent research which analyzes empirically the complex dynamics of such systems has shown that consensus on stable distributions and a simple form of shared vocabularies does indeed emerge, even in the absence of a central controlled vocabulary. Some believe that this could be because users who contribute tags all use the same language, and they share similar semantic structures underlying the choice of words. The convergence in social tags may therefore be interpreted as the emergence of structures as people who have similar semantic interpretation collaboratively index online information, a process called semantic imitation.

Architecture and cities

Traffic patterns in cities can be seen as an example of spontaneous order

Emergent structures appear at many different levels of organization or as spontaneous order. Emergent self-organization appears frequently in cities where no planning or zoning entity predetermines the layout of the city. The interdisciplinary study of emergent behaviors is not generally considered a homogeneous field, but divided across its application or problem domains.

Architects may not design all the pathways of a complex of buildings. Instead they might let usage patterns emerge and then place pavement where pathways have become worn, such as a desire path.

The on-course action and vehicle progression of the 2007 Urban Challenge could possibly be regarded as an example of cybernetic emergence. Patterns of road use, indeterministic obstacle clearance times, etc. will work together to form a complex emergent pattern that can not be deterministically planned in advance.

The architectural school of Christopher Alexander takes a deeper approach to emergence, attempting to rewrite the process of urban growth itself in order to affect form, establishing a new methodology of planning and design tied to traditional practices, an Emergent Urbanism. Urban emergence has also been linked to theories of urban complexity and urban evolution.

Building ecology is a conceptual framework for understanding architecture and the built environment as the interface between the dynamically interdependent elements of buildings, their occupants, and the larger environment. Rather than viewing buildings as inanimate or static objects, building ecologist Hal Levin views them as interfaces or intersecting domains of living and non-living systems. The microbial ecology of the indoor environment is strongly dependent on the building materials, occupants, contents, environmental context and the indoor and outdoor climate. There is a strong relationship between atmospheric chemistry, indoor air quality, and the chemical reactions occurring indoors. The chemicals may be nutrients, neutral or biocides for the microbial organisms. The microbes produce chemicals that affect the building materials and occupant health and well-being. Humans manipulate the ventilation, temperature and humidity to achieve comfort, with concomitant effects on the microbes that populate and evolve in these environments.

Eric Bonabeau's attempt to define emergent phenomena is through traffic: "traffic jams are actually very complicated and mysterious. On an individual level, each driver is trying to get somewhere and is following (or breaking) certain rules, some legal (the speed limit) and others societal or personal (slow down to let another driver change into your lane). But a traffic jam is a separate and distinct entity that emerges from those individual behaviors. Gridlock on a highway, for example, can travel backward for no apparent reason, even as the cars are moving forward." He has also likened emergent phenomena to the analysis of market trends and employee behavior.

Computer AI

Some artificially intelligent (AI) computer applications simulate emergent behavior for animation. One example is Boids, which mimics the swarming behavior of birds.
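A minimal sketch of such a Boids-style update is given below; the weights, radii and step counts are arbitrary assumptions, and Reynolds' original model differs in detail. Each agent reacts only to nearby neighbours through cohesion, alignment and separation, yet coherent group motion emerges.

```python
import numpy as np

N = 50
NEIGHBOR_RADIUS = 2.0
SEPARATION_RADIUS = 0.5
MAX_SPEED = 2.0
DT = 0.1
W_COHESION, W_ALIGNMENT, W_SEPARATION = 0.01, 0.05, 0.1

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=(N, 2))
vel = rng.uniform(-1, 1, size=(N, 2))

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist < NEIGHBOR_RADIUS) & (dist > 0)
        if not mask.any():
            continue
        cohesion = offsets[mask].mean(axis=0)            # steer toward local centre
        alignment = vel[mask].mean(axis=0) - vel[i]      # match neighbours' heading
        close = mask & (dist < SEPARATION_RADIUS)
        separation = -offsets[close].sum(axis=0) if close.any() else 0.0
        new_vel[i] += (W_COHESION * cohesion
                       + W_ALIGNMENT * alignment
                       + W_SEPARATION * separation)
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:                            # cap the speed
            new_vel[i] *= MAX_SPEED / speed
    return pos + new_vel * DT, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("velocity spread after flocking:", np.std(vel, axis=0))
```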

Language

It has been argued that the structure and regularity of language grammar, or at least language change, is an emergent phenomenon. While each speaker merely tries to reach their own communicative goals, they use language in a particular way. If enough speakers behave in that way, language is changed. In a wider sense, the norms of a language, i.e. the linguistic conventions of its speech society, can be seen as a system emerging from long-time participation in communicative problem-solving in various social circumstances.

Emergent change processes

Within the field of group facilitation and organization development, there have been a number of new group processes that are designed to maximize emergence and self-organization, by offering a minimal set of effective initial conditions. Examples of these processes include SEED-SCALE, appreciative inquiry, Future Search, the world cafe or knowledge cafe, Open Space Technology, and others (Holman, 2010).

Triple-alpha process

From Wikipedia, the free encyclopedia

Overview of the triple-alpha process
 
Logarithm of the relative energy output (ε) of proton–proton (PP), CNO and Triple-α fusion processes at different temperatures (T). The dashed line shows the combined energy generation of the PP and CNO processes within a star. At the Sun's core temperature, the PP process is more efficient.

The triple-alpha process is a set of nuclear fusion reactions by which three helium-4 nuclei (alpha particles) are transformed into carbon.

Triple-alpha process in stars

Helium accumulates in the cores of stars as a result of the proton–proton chain reaction and the carbon–nitrogen–oxygen cycle.

The nuclear fusion of two helium-4 nuclei produces beryllium-8, which is highly unstable and decays back into smaller nuclei with a half-life of 8.19×10⁻¹⁷ s, unless within that time a third alpha particle fuses with the beryllium-8 nucleus to produce an excited resonance state of carbon-12, called the Hoyle state. This state nearly always decays back into three alpha particles, but about once in 2421.3 times it releases energy and changes into the stable ground state of carbon-12. When a star runs out of hydrogen to fuse in its core, it begins to contract and heat up. If the central temperature rises to 10⁸ K, six times hotter than the Sun's core, alpha particles can fuse fast enough to get past the beryllium-8 barrier and produce significant amounts of stable carbon-12.

4He + 4He → 8Be  (−0.0918 MeV)
8Be + 4He → 12C + 2γ  (+7.367 MeV)

The net energy release of the process is 7.275 MeV.
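This figure is simply the sum of the energies of the two steps quoted above:

$$Q_{3\alpha} = (-0.0918\ \mathrm{MeV}) + (+7.367\ \mathrm{MeV}) \approx 7.275\ \mathrm{MeV}$$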

As a side effect of the process, some carbon nuclei fuse with additional helium to produce a stable isotope of oxygen and energy:

12C + 4He → 16O + γ  (+7.162 MeV)

Nuclear fusion reactions of helium with hydrogen produce lithium-5, which also is highly unstable and decays back into smaller nuclei with a half-life of 3.7×10⁻²² s.

Fusing with additional helium nuclei can create heavier elements in a chain of stellar nucleosynthesis known as the alpha process, but these reactions are only significant at higher temperatures and pressures than in cores undergoing the triple-alpha process. This creates a situation in which stellar nucleosynthesis produces large amounts of carbon and oxygen but only a small fraction of those elements are converted into neon and heavier elements. Oxygen and carbon are the main "ash" of helium-4 burning.

Primordial carbon

The triple-alpha process is ineffective at the pressures and temperatures early in the Big Bang. One consequence of this is that no significant amount of carbon was produced in the Big Bang.

Resonances

Ordinarily, the probability of the triple-alpha process is extremely small. However, the beryllium-8 ground state has almost exactly the energy of two alpha particles. In the second step, 8Be + 4He has almost exactly the energy of an excited state of 12C. This resonance greatly increases the probability that an incoming alpha particle will combine with beryllium-8 to form carbon. The existence of this resonance was predicted by Fred Hoyle before its actual observation, based on the physical necessity for it to exist, in order for carbon to be formed in stars. The prediction and then discovery of this energy resonance and process gave very significant support to Hoyle's hypothesis of stellar nucleosynthesis, which posited that all chemical elements had originally been formed from hydrogen, the true primordial substance. The anthropic principle has been cited to explain the fact that nuclear resonances are sensitively arranged to create large amounts of carbon and oxygen in the universe.

Nucleosynthesis of heavy elements

With further increases of temperature and density, fusion processes produce nuclides only up to nickel-56 (which decays later to iron); heavier elements (those beyond Ni) are created mainly by neutron capture. The slow capture of neutrons, the s-process, produces about half of elements beyond iron. The other half are produced by rapid neutron capture, the r-process, which probably occurs in core-collapse supernovae and neutron star mergers.

Reaction rate and stellar evolution

The triple-alpha steps are strongly dependent on the temperature and density of the stellar material. The power released by the reaction is approximately proportional to the temperature to the 40th power, and the density squared. In contrast, the proton–proton chain reaction produces energy at a rate proportional to the fourth power of temperature, the CNO cycle at about the 17th power of the temperature, and both are linearly proportional to the density. This strong temperature dependence has consequences for the late stage of stellar evolution, the red-giant stage.
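To make the contrast concrete, the short snippet below evaluates those approximate power laws for a few temperature increases; the normalisations are arbitrary, so only the relative scaling is meaningful.

```python
# Relative growth of energy-generation rates for a given temperature increase,
# using the approximate power laws quoted above:
#   proton-proton chain ~ T^4, CNO cycle ~ T^17, triple-alpha ~ T^40.
for factor in (1.0, 1.1, 1.5, 2.0):
    print(f"T x {factor:3.1f}:  PP x {factor**4:7.2f}   "
          f"CNO x {factor**17:10.2f}   triple-alpha x {factor**40:.3e}")
```

A 10% rise in temperature raises triple-alpha output by a factor of roughly 45, which is why helium burning is so sharply concentrated in the hottest part of the core.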

For lower mass stars on the red-giant branch, the helium accumulating in the core is prevented from further collapse only by electron degeneracy pressure. The entire degenerate core is at the same temperature and pressure, so when its mass becomes high enough, fusion via the triple-alpha process starts throughout the core. The core is unable to expand in response to the increased energy production until the pressure is high enough to lift the degeneracy. As a consequence, the temperature increases, causing an increased reaction rate in a positive feedback cycle that becomes a runaway reaction. This process, known as the helium flash, lasts a matter of seconds but burns 60–80% of the helium in the core. During the core flash, the star's energy production can reach approximately 10¹¹ solar luminosities, which is comparable to the luminosity of a whole galaxy, although no effects will be immediately observed at the surface, as all of the energy is used up lifting the core from the degenerate to the normal, gaseous state. Since the core is no longer degenerate, hydrostatic equilibrium is once more established and the star begins to "burn" helium at its core and hydrogen in a spherical layer above the core. The star enters a steady helium-burning phase which lasts about 10% of the time it spent on the main sequence (our Sun is expected to burn helium at its core for about a billion years after the helium flash).

For higher mass stars, carbon collects in the core, displacing the helium to a surrounding shell where helium burning occurs. In this helium shell, the pressures are lower and the mass is not supported by electron degeneracy. Thus, as opposed to the center of the star, the shell is able to expand in response to increased thermal pressure in the helium shell. Expansion cools this layer and slows the reaction, causing the star to contract again. This process continues cyclically, and stars undergoing this process will have periodically variable radius and power production. These stars will also lose material from their outer layers as they expand and contract.

Discovery

The triple-alpha process is highly dependent on carbon-12 and beryllium-8 having resonances with slightly more energy than helium-4. Based on known resonances, by 1952 it seemed impossible for ordinary stars to produce carbon as well as any heavier element. Nuclear physicist William Alfred Fowler had noted the beryllium-8 resonance, and Edwin Salpeter had calculated the reaction rate for Be-8, C-12 and O-16 nucleosynthesis taking this resonance into account. However, Salpeter calculated that red giants burned helium at temperatures of 2·10⁸ K or higher, whereas other work at the time hypothesized temperatures as low as 1.1·10⁸ K for the core of a red giant.

Salpeter's paper mentioned in passing the effects that unknown resonances in carbon-12 would have on his calculations, but the author never followed up on them. It was instead astrophysicist Fred Hoyle who, in 1953, used the abundance of carbon-12 in the universe as evidence for the existence of a carbon-12 resonance. The only way Hoyle could find that would produce an abundance of both carbon and oxygen was through a triple-alpha process with a carbon-12 resonance near 7.68 MeV, which would also eliminate the discrepancy in Salpeter's calculations.

Hoyle went to Fowler's lab at Caltech and said that there had to be a resonance of 7.68 MeV in the carbon-12 nucleus. (There had been reports of an excited state at about 7.5 MeV.) Fred Hoyle's audacity in doing this is remarkable, and initially the nuclear physicists in the lab were skeptical. Finally, a junior physicist, Ward Whaling, fresh from Rice University, who was looking for a project decided to look for the resonance. Fowler gave Whaling permission to use an old Van de Graaff generator that was not being used. Hoyle was back in Cambridge when Fowler's lab discovered a carbon-12 resonance near 7.65 MeV a few months later, validating his prediction. The nuclear physicists put Hoyle as first author on a paper delivered by Whaling at the summer meeting of the American Physical Society. A long and fruitful collaboration between Hoyle and Fowler soon followed, with Fowler even coming to Cambridge.

The final reaction product lies in a 0+ state (spin 0 and positive parity). Since the Hoyle state was predicted to be either a 0+ or a 2+ state, electron–positron pairs or gamma rays were expected to be seen. However, when experiments were carried out, the gamma emission reaction channel was not observed, and this meant the state must be a 0+ state. This state completely suppresses single gamma emission, since single gamma emission must carry away at least 1 unit of angular momentum. Pair production from an excited 0+ state is possible because their combined spins (0) can couple to a reaction that has a change in angular momentum of 0.

Improbability and fine-tuning

Carbon is a necessary component of all known life. 12C, a stable isotope of carbon, is abundantly produced in stars due to three factors:

  1. The decay lifetime of a 8Be nucleus is four orders of magnitude larger than the time for two 4He nuclei (alpha particles) to scatter.
  2. An excited state of the 12C nucleus exists a little (0.3193 MeV) above the energy level of 8Be + 4He. This is necessary because the ground state of 12C is 7.3367 MeV below the energy of 8Be + 4He. Therefore, a 8Be nucleus and a 4He nucleus cannot reasonably fuse directly into a ground-state 12C nucleus. The excited Hoyle state of 12C is 7.656 MeV above the ground state of 12C. This allows 8Be and 4He to use the kinetic energy of their collision to fuse into the excited 12C, which can then transition to its stable ground state. According to one calculation, the energy level of this excited state must be between about 7.3 and 7.9 MeV to produce sufficient carbon for life to exist, and must be further "fine-tuned" to between 7.596 MeV and 7.716 MeV in order to produce the abundant level of 12C observed in nature.
  3. In the reaction 12C + 4He → 16O, there is an excited state of oxygen which, if it were slightly higher, would provide a resonance and speed up the reaction. In that case, insufficient carbon would exist in nature; almost all of it would have converted to oxygen.
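As a check on the energies quoted in point 2, measuring both levels from the carbon-12 ground state gives the gap between the Hoyle state and 8Be + 4He directly:

$$7.656\ \mathrm{MeV} - 7.3367\ \mathrm{MeV} \approx 0.319\ \mathrm{MeV}$$

which matches the figure of 0.3193 MeV above.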

Some scholars argue the 7.656 MeV Hoyle resonance, in particular, is unlikely to be the product of mere chance. Fred Hoyle argued in 1982 that the Hoyle resonance was evidence of a "superintellect"; Leonard Susskind in The Cosmic Landscape rejects Hoyle's intelligent design argument. Instead, some scientists believe that different universes, portions of a vast "multiverse", have different fundamental constants: according to this controversial fine-tuning hypothesis, life can only evolve in the minority of universes where the fundamental constants happen to be fine-tuned to support the existence of life. Other scientists reject the hypothesis of the multiverse on account of the lack of independent evidence.

Creatio ex nihilo

From Wikipedia, the free encyclopedia

Tree of Life by Eli Content at the Joods Historisch Museum. The Tree of Life, or Etz haChayim (עץ החיים) in Hebrew, is a mystical symbol used in the Kabbalah of esoteric Judaism to describe the path to HaShem and the manner in which He created the world ex nihilo (out of nothing).

Creatio ex nihilo (Latin for "creation out of nothing") refers to the belief that matter is not eternal but had to be created by some divine creative act, frequently attributed to God. It is a theistic answer to the question of how the universe comes to exist. It contrasts with Ex nihilo nihil fit, or "nothing comes from nothing", which holds that all things were formed from preexisting things; this is an idea advanced by the Greek philosopher Parmenides (c. 540–480 BC) about the nature of all things, and later more formally stated by Titus Lucretius Carus (c. 99 – c. 55 BC).

Theology

Ex nihilo nihil fit: uncreated matter

Ex nihilo nihil fit means that nothing comes from nothing. In ancient creation myths the universe is formed from eternal formless matter, namely the dark and still primordial ocean of chaos. In Sumerian myth this cosmic ocean is personified as the goddess Nammu "who gave birth to heaven and earth" and had existed forever; in the Babylonian creation epic Enuma Elish pre-existent chaos is made up of fresh-water Apsu and salt-water Tiamat, and from Tiamat the god Marduk created Heaven and Earth; in Egyptian creation myths a pre-existent watery chaos personified as the god Nun and associated with darkness, gave birth to the primeval hill (or in some versions a primeval lotus flower, or in others a celestial cow); and in Greek traditions the ultimate origin of the universe, depending on the source, is sometimes Okeanos (a river that circles the Earth), Night, or water.

To these can be added the account of the Book of Genesis, which opens with God separating and restraining the waters, not creating the waters themselves out of nothing. The Hebrew sentence which opens Genesis, Bereshit bara Elohim et hashamayim ve'et ha'aretz, can be translated into English in at least three ways:

  1. As a statement that the cosmos had an absolute beginning (In the beginning, God created the heavens and earth).
  2. As a statement describing the condition of the world when God began creating (When in the beginning God created the heavens and the earth, the earth was untamed and shapeless).
  3. As background information (When in the beginning God created the heavens and the earth, the earth being untamed and shapeless, God said, Let there be light!).

It has been known since the Middle Ages that, on strictly linguistic and exegetical grounds, option 1 is not the preferred translation. Our society sees the origin of matter as a question of crucial importance, but for ancient cultures this was not the case; when the authors of Genesis wrote of creation, they were concerned with God bringing the cosmos into operation by assigning roles and functions.

Creatio ex nihilo: the creation of matter

Creatio ex nihilo, in contrast to ex nihilo nihil fit, is the idea that matter is not eternal but was created by God at the initial cosmic moment. In the second century a new cosmogony arose, articulated by Plotinus, according to which the world was an emanation from God and thus part of God. This view of creation was repugnant to the Christian church fathers, as well as to Arabic and Hebrew philosophers, who forcefully argued for the otherness of God from his creation and held that God created all things from nothing by his word. The first articulation of the notion of creation ex nihilo is found in the 2nd-century writing To Autolycus (2.10) by Theophilus of Antioch. By the beginning of the 3rd century the tension was resolved, and creation ex nihilo had become a fundamental tenet of Christian theology. Theophilus of Antioch is the first post-New Testament author to argue unambiguously for an ontological creation from nothing, contrasting it with the views of Plato and Lucretius, who asserted clearly that matter was preexistent.

In modern times some Christian theologians argue that although the Bible does not explicitly mention creation ex nihilo, various passages suggest or imply it. Others assert that the doctrine gains validity from having been held by so many for so long, and others find support in modern cosmological theories surrounding the Big Bang. Some examine alternatives to creatio ex nihilo, such as the idea that God created from his own self or from Christ, though this seems to imply that the world is more or less identical with God; or that God created from pre-existent matter, which at least has biblical support, though this implies that the world does not depend on God for its existence.

In Jewish philosophy

Theologians and philosophers of religion point out that creation from nothing is explicitly stated in Jewish literature from the first century BCE or earlier, depending on the dating of 2 Maccabees:

2 Maccabees 7:28:

I beseech you, my child, to look at the heaven and the earth and see everything that is in them, and recognize that God did not make them out of things that existed.

Others have argued that the belief may not be inherent in Maccabees.

In the first century, Philo of Alexandria, a Hellenized Jew, lays out the basic idea of ex nihilo creation, though he is not always consistent: he rejects the Greek idea of the eternal universe and maintains that God created time itself, yet elsewhere it has been argued that he postulates pre-existent matter alongside God. Other major scholars, such as Harry Austryn Wolfson, interpret Philo's ideas differently and argue that the so-called pre-existent matter was itself created.

Saadia Gaon introduced ex nihilo creation into readings of the Jewish Bible in the 10th century CE in his work Book of Beliefs and Opinions, in which he imagines a God far more awesome and omnipotent than that of the rabbis, the traditional Jewish teachers who had so far dominated Judaism and whose God created the world from pre-existing matter. Today Jews, like Christians, tend to believe in creation ex nihilo, although some Jewish scholars hold that Genesis 1:1 acknowledges the pre-existence of matter to which God gives form.

Islamic

Most scholars of Islam share with Christianity and Judaism the concept that God is the First Cause and absolute Creator; He did not create the world from pre-existing matter. However, some scholars adhering to a strict literal interpretation of the Quran, such as Ibn Taimiyya, whose sources became the foundation of Wahhabism and of some contemporary teachings, hold that God fashioned the world out of primordial matter, basing this on Quranic verses.

Compared to modern science

The Big Bang theory, by contrast, is a scientific theory; it offers no explanation of cosmic existence but only a description of the first few moments of that existence.

Metaphysics

Cosmological argument and Kalam cosmological argument

A major argument for creatio ex nihilo, the cosmological argument, states in summary:

  1. Everything that exists must have a cause.
  2. The universe exists.
  3. Therefore, the universe must have a cause.

An expansion of the first cause argument is the Kalam cosmological argument, which also requires creatio ex nihilo:

  1. Everything that begins to exist has a cause.
  2. The universe began to exist.
  3. Therefore, the universe has a cause.
  4. If the universe has a cause, then an uncaused, personal creator of the universe exists, who without the universe is beginningless, changeless, immaterial, timeless, spaceless, and infinitely powerful.
  5. Therefore, an uncaused, personal creator of the universe exists, who without the universe is beginningless, changeless, immaterial, timeless, spaceless, and infinitely powerful.

 

Why there is anything at all

From Wikipedia, the free encyclopedia
 
This question has been written about by philosophers since at least the ancient Greek philosopher Parmenides (c. 515 BC).

The questions pertaining to "why there is anything at all", or, "why there is something rather than nothing" have been raised or commented on by philosophers including Gottfried Wilhelm Leibniz, Ludwig Wittgenstein, and Martin Heidegger – who called it "the fundamental question of metaphysics".

Overview

The question is posed comprehensively, rather than concerning the existence of anything specific such as the universe or multiverse, the Big Bang, mathematical laws, physical laws, time, consciousness, or God. It can be seen as an open metaphysical question.

The circled dot was used by the Pythagoreans and later Greeks to represent the first metaphysical being, the Monad or the Absolute.

On causation

The ancient Greek philosopher Aristotle argued that everything must have a cause, culminating in an ultimate uncaused cause. (See Four causes)

David Hume argued that, while we expect everything to have a cause because of our experience of the necessity of causes, a cause may not be necessary in the case of the formation of the universe, which is outside our experience.

Bertrand Russell took a "brute fact" position when he said, "I should say that the universe is just there, and that's all."

Philosopher Brian Leftow has argued that the question cannot have a causal explanation (as any cause must itself have a cause) or a contingent explanation (as the factors giving the contingency must pre-exist), and that if there is an answer it must be something that exists necessarily (i.e., something that just exists, rather than is caused).

Philosopher William Free argues that the only two options which can explain existence are that things either always existed or spontaneously emerged. In either scenario, existence is a fact for which there is no cause.

Explanations

Gottfried Wilhelm Leibniz wrote:

Why is there something rather than nothing? The sufficient reason [...] is found in a substance which [...] is a necessary being bearing the reason for its existence within itself.

Philosopher of physics Dean Rickles has argued that numbers and mathematics (or their underlying laws) may necessarily exist.

Physicist Max Tegmark wrote about the mathematical universe hypothesis, which states that all mathematical structures exist physically, and the physical universe is one of these structures. According to the hypothesis, the universe appears fine-tuned for intelligent life because of the anthropic principle, with most universes being devoid of life.

Criticism of the question

Philosopher Stephen Law has said the question may not need answering, as it is attempting to answer a question that is outside a spatio-temporal setting, from within a spatio-temporal setting. He compares the question to asking "what is north of the North Pole?" Noted philosophical wit Sidney Morgenbesser answered the question with an apothegm: "If there were nothing you'd still be complaining!", or "Even if there was nothing, you still wouldn't be satisfied!"

Physics is not enough

Physicists such as Stephen Hawking and Lawrence Krauss have offered explanations that rely on quantum mechanics, saying that in a quantum vacuum state virtual particles and spacetime bubbles will spontaneously come into existence; physicists from Wuhan have argued that such spontaneous creation can be derived mathematically. Nobel laureate Frank Wilczek is credited with the aphorism that "nothing is unstable." However, this answer has not satisfied physicist Sean Carroll, who argues that Wilczek's aphorism accounts merely for the existence of matter, not for the existence of quantum states, space-time or the universe as a whole.

God is not enough

Philosopher Roy Sorensen writes in the Stanford Encyclopedia of Philosophy that to many philosophers the question is intrinsically impossible to answer, like squaring a circle, and even God does not sufficiently answer it:

"To explain why something exists, we standardly appeal to the existence of something else... For instance, if we answer 'There is something because the Universal Designer wanted there to be something', then our explanation takes for granted the existence of the Universal Designer. Someone who poses the question in a comprehensive way will not grant the existence of the Universal Designer as a starting point. If the explanation cannot begin with some entity, then it is hard to see how any explanation is feasible. Some philosophers conclude 'Why is there something rather than nothing?' is unanswerable. They think the question stumps us by imposing an impossible explanatory demand, namely, 'Deduce the existence of something without using any existential premises'. Logicians should feel no more ashamed of their inability to perform this deduction than geometers should feel ashamed at being unable to square the circle."

Argument that "nothing" is impossible

The pre-Socratic philosopher Parmenides was one of the first Western thinkers to question the possibility of nothing. Many other thinkers, such as Bede Rundle, have questioned whether nothing is an ontological possibility.

The contemporary philosopher Roy Sorensen has dismissed this line of reasoning. Curiosity, he argues, is possible "even when the proposition is known to be a necessary truth." For instance, a "reductio ad absurdum proof that 1 − 1/3 + 1/5 − 1/7 + … converges to π/4" demonstrates that not converging to π/4 is impossible. However, it provides no insight into why not converging to π/4 is impossible. Similarly, even if "nothing" is impossible, asking why that is the case is a legitimate question.
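
A quick numerical check (a sketch added for illustration, not part of the encyclopedia text) shows the partial sums of the series Sorensen cites creeping toward π/4:

    # Partial sums of the Leibniz series 1 - 1/3 + 1/5 - 1/7 + ...
    # slowly approach pi/4 (illustrative sketch; the term count is arbitrary).
    import math

    s = 0.0
    for k in range(100_000):
        s += (-1) ** k / (2 * k + 1)

    print(s, math.pi / 4)   # both are roughly 0.78540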

 

Ultimate fate of the universe

The ultimate fate of the universe is a topic in physical cosmology, whose theoretical restrictions allow possible scenarios for the evolution and ultimate fate of the universe to be described and evaluated. Based on available observational evidence, deciding the fate and evolution of the universe has become a valid cosmological question that lies beyond the mostly untestable constraints of mythological or theological beliefs. Several possible futures have been predicted by different scientific hypotheses, including that the universe might exist for a finite or an infinite duration; other hypotheses address the manner and circumstances of its beginning.

Observations made by Edwin Hubble during the 1920s–1950s found that galaxies appeared to be moving away from each other, leading to the currently accepted Big Bang theory. This suggests that the universe began – very small and very dense – about 13.82 billion years ago, and it has expanded and (on average) become less dense ever since. Confirmation of the Big Bang mostly depends on knowing the rate of expansion, average density of matter, and the physical properties of the mass–energy in the universe.

There is a strong consensus among cosmologists that the shape of the universe is "flat" (it has no overall spatial curvature) and that it will continue to expand forever.

Factors that need to be considered in determining the universe's origin and ultimate fate include the average motions of galaxies, the shape and structure of the universe, and the amount of dark matter and dark energy that the universe contains.

Emerging scientific basis

Theory

The theoretical scientific exploration of the ultimate fate of the universe became possible with Albert Einstein's 1915 theory of general relativity. General relativity can be employed to describe the universe on the largest possible scale. There are several possible solutions to the equations of general relativity, and each solution implies a possible ultimate fate of the universe.

Alexander Friedmann proposed several solutions in 1922, as did Georges Lemaître in 1927. In some of these solutions, the universe has been expanding from an initial singularity which was, essentially, the Big Bang.

Observation

In 1929, Edwin Hubble published his conclusion, based on his observations of Cepheid variable stars in distant galaxies, that the universe was expanding. From then on, the beginning of the universe and its possible end have been the subjects of serious scientific investigation.
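
Hubble's result is usually summarized by the linear relation now known as the Hubble–Lemaître law (stated here for reference; v is a galaxy's recession velocity, D its proper distance, and H_0 the present-day Hubble constant):

    v = H_0 \, D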

Big Bang and Steady State theories

In 1927, Georges Lemaître set out a theory that has since come to be called the Big Bang theory of the origin of the universe. In 1948, Fred Hoyle set out his opposing Steady State theory, in which the universe continually expanded but remained statistically unchanged as new matter was constantly created. These two theories were active contenders until the 1965 discovery, by Arno Penzias and Robert Wilson, of the cosmic microwave background radiation, a straightforward prediction of the Big Bang theory that the original Steady State theory could not account for. As a result, the Big Bang theory quickly became the most widely held view of the origin of the universe.

Cosmological constant

Einstein and his contemporaries believed in a static universe. When Einstein found that his general relativity equations could easily be solved in such a way as to allow the universe to be expanding at the present and contracting in the far future, he added to those equations what he called a cosmological constant ⁠— ⁠essentially a constant energy density, unaffected by any expansion or contraction ⁠— ⁠whose role was to offset the effect of gravity on the universe as a whole in such a way that the universe would remain static. However, after Hubble announced his conclusion that the universe was expanding, Einstein would write that his cosmological constant was "the greatest blunder of my life."
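
For reference, the cosmological constant Λ enters Einstein's field equations as an additional term (the standard textbook form, added here for context rather than quoted from the article):

    G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}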

Density parameter

An important parameter in fate-of-the-universe theory is the density parameter, omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries, depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes. These three adjectives refer to the overall geometry of the universe, and not to the local curving of spacetime caused by smaller clumps of mass (for example, galaxies and stars). If the primary content of the universe is inert matter, as in the dust models popular for much of the 20th century, there is a particular fate corresponding to each geometry. Hence cosmologists aimed to determine the fate of the universe by measuring Ω, or equivalently the rate at which the expansion was decelerating.
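
In symbols (standard definitions, added for clarity; ρ is the average density, H the Hubble parameter and G Newton's gravitational constant):

    \Omega = \frac{\rho}{\rho_{c}}, \qquad \rho_{c} = \frac{3H^{2}}{8\pi G}, \qquad
    \begin{cases}
    \Omega > 1 & \text{closed universe} \\
    \Omega = 1 & \text{flat universe} \\
    \Omega < 1 & \text{open universe}
    \end{cases}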

Repulsive force

Starting in 1998, observations of supernovas in distant galaxies have been interpreted as consistent with a universe whose expansion is accelerating. Subsequent cosmological theorizing has been designed so as to allow for this possible acceleration, nearly always by invoking dark energy, which in its simplest form is just a positive cosmological constant. In general, dark energy is a catch-all term for any hypothesized field with negative pressure, usually with a density that changes as the universe expands.

Role of the shape of the universe

The ultimate fate of an expanding universe depends on the matter density and the dark energy density

The current consensus of most cosmologists is that the ultimate fate of the universe depends on its overall shape, on how much dark energy it contains, and on the equation of state, which determines how the dark energy density responds to the expansion of the universe (see the formula below). Recent observations conclude that, from about 7.5 billion years after the Big Bang, the expansion rate of the universe has likely been increasing, consistent with the open-universe scenario. However, other recent measurements by the Wilkinson Microwave Anisotropy Probe suggest that the universe is either flat or very close to flat.
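
The equation-of-state parameter referred to here is conventionally written w (a standard definition added for clarity, not quoted from the article); for a component with constant w, the density dilutes with the scale factor a as:

    w = \frac{p}{\rho c^{2}}, \qquad \rho \propto a^{-3(1+w)}

Accelerated expansion requires w < −1/3; w = −1 corresponds to a cosmological constant, and w < −1 to the "phantom" dark energy mentioned under the Big Rip below.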

Closed universe

If Ω > 1, the geometry of space is closed like the surface of a sphere. The sum of the angles of a triangle exceeds 180 degrees and there are no parallel lines; all lines eventually meet. The geometry of the universe is, at least on a very large scale, elliptic.

In a closed universe, gravity eventually stops the expansion of the universe, after which it starts to contract until all matter in the universe collapses to a point, a final singularity termed the "Big Crunch", the opposite of the Big Bang. Some newer theories assume the universe may have a significant amount of dark energy, whose repulsive force may be sufficient to cause the expansion of the universe to continue forever—even if Ω > 1.

Open universe

If Ω < 1, the geometry of space is open, i.e., negatively curved like the surface of a saddle. The angles of a triangle sum to less than 180 degrees, and lines that do not meet are never equidistant; they have a point of least distance and otherwise grow apart. The geometry of such a universe is hyperbolic.

Even without dark energy, a negatively curved universe expands forever, with gravity negligibly slowing the rate of expansion. With dark energy, the expansion not only continues but accelerates. The ultimate fate of an open universe is either universal heat death, a "Big Freeze" (not to be confused with heat death, despite seemingly similar name interpretation ⁠— ⁠see §Theories about the end of the universe below), or a "Big Rip", where the acceleration caused by dark energy eventually becomes so strong that it completely overwhelms the effects of the gravitational, electromagnetic and strong binding forces.

Conversely, a negative cosmological constant, which would correspond to a negative energy density and positive pressure, would cause even an open universe to re-collapse to a big crunch.

Flat universe

If the average density of the universe exactly equals the critical density so that Ω = 1, then the geometry of the universe is flat: as in Euclidean geometry, the sum of the angles of a triangle is 180 degrees and parallel lines continuously maintain the same distance. Measurements from the Wilkinson Microwave Anisotropy Probe have confirmed that the universe is flat to within a 0.4% margin of error.

In the absence of dark energy, a flat universe expands forever but at a continually decelerating rate, with expansion asymptotically approaching zero. With dark energy, the expansion rate of the universe initially slows down, due to the effects of gravity, but eventually increases, and the ultimate fate of the universe becomes the same as that of an open universe.
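
To make the matter-only cases in these three subsections concrete, here is a minimal numerical sketch (added for illustration and not part of the article; the function name, step size and the Ω values tried are arbitrary choices). It integrates the dust-only Friedmann equation (da/dt)² = H0² (Ω0/a + 1 − Ω0), in units where H0 = 1 and a = 1 today, and reports whether the expansion turns around:

    # Matter-only (no dark energy) toy integration of the Friedmann equation
    #   (da/dt)^2 = H0^2 * (Omega0 / a + 1 - Omega0),  with H0 = 1 and a = 1 today.
    # Omega0 > 1 reaches a turnaround (recollapse); Omega0 <= 1 keeps expanding.
    def fate(omega0, dt=1e-4, t_max=20.0):
        a, t = 1.0, 0.0
        while t < t_max:
            rhs = omega0 / a + 1.0 - omega0   # this is (da/dt)^2
            if rhs <= 0.0:                    # expansion has stopped: turnaround
                return f"turns around at t ≈ {t:.2f}/H0, then recollapses"
            a += dt * rhs ** 0.5
            t += dt
        return f"still expanding at t = {t_max:.0f}/H0, a ≈ {a:.1f}"

    for omega0 in (2.0, 1.0, 0.3):
        print(omega0, fate(omega0))

With dark energy included, the Ω > 1 case need not recollapse, as noted in the closed-universe subsection above.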

Theories about the end of the universe

The fate of the universe is determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below. However, observations are not conclusive, and alternative models are still possible.

Big Freeze or Heat Death

The Big Freeze (or Big Chill) is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature. This scenario, in combination with the Big Rip scenario, is gaining ground as the most important hypothesis. It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10¹² to 10¹⁴ (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, and they themselves will disappear over time as they emit Hawking radiation. Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem.

A related scenario is heat death, which states that the universe goes to a state of maximum entropy in which everything is evenly distributed and there are no gradients—which are needed to sustain information processing, one form of which is life. The heat death scenario is compatible with any of the three spatial models, but requires that the universe reach an eventual temperature minimum.

Big Rip

The current Hubble constant defines a rate of acceleration of the universe not large enough to destroy local structures like galaxies, which are held together by gravity, but large enough to increase the space between them. A steady increase in the Hubble constant to infinity would result in all material objects in the universe, starting with galaxies and eventually (in a finite time) all forms, no matter how small, disintegrating into unbound elementary particles, radiation and beyond. As the energy density, scale factor and expansion rate become infinite the universe ends as what is effectively a singularity.

In the special case of phantom dark energy, which has supposed negative kinetic energy that would result in a higher rate of acceleration than other cosmological constants predict, a more sudden big rip could occur.
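
Under the usual constant-w idealization of phantom dark energy (a standard result added for illustration, not taken from the article), the scale factor grows as

    a(t) \propto \left(t_{\text{rip}} - t\right)^{\tfrac{2}{3(1+w)}}, \qquad w < -1,

and since the exponent is negative, a(t) diverges at the finite time t_rip, which is the Big Rip.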

Big Crunch

The Big Crunch. The vertical axis can be considered as expansion or contraction with time.

The Big Crunch hypothesis is a symmetric view of the ultimate fate of the universe. Just as the Big Bang started as a cosmological expansion, this theory assumes that the average density of the universe will be enough to stop its expansion and the universe will begin contracting. The end result is unknown; a simple estimation would have all the matter and space-time in the universe collapse into a dimensionless singularity, mirroring how the universe started with the Big Bang, but at these scales unknown quantum effects need to be considered (see Quantum gravity). Recent evidence suggests that this scenario is unlikely but has not been ruled out, as measurements have been available only over a short period of time, relatively speaking, and could reverse in the future.

This scenario allows the Big Bang to occur immediately after the Big Crunch of a preceding universe. If this happens repeatedly, it creates a cyclic model, which is also known as an oscillatory universe. The universe could then consist of an infinite sequence of finite universes, with each finite universe ending with a Big Crunch that is also the Big Bang of the next universe. A problem with the cyclic universe is that it does not reconcile with the second law of thermodynamics, as entropy would build up from oscillation to oscillation and cause the eventual heat death of the universe. Current evidence also indicates the universe is not closed. This has caused cosmologists to abandon the oscillating universe model. A somewhat similar idea is embraced by the cyclic model, but this idea evades heat death because of an expansion of the branes that dilutes entropy accumulated in the previous cycle.

Big Bounce

The Big Bounce is a theorized scientific model related to the beginning of the known universe. It derives from the oscillatory universe or cyclic repetition interpretation of the Big Bang where the first cosmological event was the result of the collapse of a previous universe.

According to one version of the Big Bang theory of cosmology, in the beginning the universe was infinitely dense. Such a description seems to be at odds with other more widely accepted theories, especially quantum mechanics and its uncertainty principle. It is not surprising, therefore, that quantum mechanics has given rise to an alternative version of the Big Bang theory. Also, if the universe is closed, this theory would predict that once this universe collapses it will spawn another universe in an event similar to the Big Bang after a universal singularity is reached or a repulsive quantum force causes re-expansion.

In simple terms, this theory states that the universe will continuously repeat a cycle of a Big Bang followed by a Big Crunch.

Big Slurp

This theory posits that the universe currently exists in a false vacuum and that it could become a true vacuum at any moment.

In order to best understand the false vacuum collapse theory, one must first understand the Higgs field which permeates the universe. Much like an electromagnetic field, it varies in strength based upon its potential. A true vacuum exists so long as the universe exists in its lowest energy state, in which case the false vacuum theory is irrelevant. However, if the vacuum is not in its lowest energy state (a false vacuum), it could tunnel into a lower-energy state. This is called vacuum decay. This has the potential to fundamentally alter our universe; in more audacious scenarios even the various physical constants could have different values, severely affecting the foundations of matter, energy, and spacetime. It is also possible that all structures will be destroyed instantaneously, without any forewarning.

Cosmic uncertainty

Each possibility described so far is based on a very simple form for the dark energy equation of state. However, as the name is meant to imply, very little is currently known about the physics of dark energy. If the theory of inflation is true, the universe went through an episode dominated by a different form of dark energy in the first moments of the Big Bang, but inflation ended, indicating an equation of state far more complex than those assumed so far for present-day dark energy. It is possible that the dark energy equation of state could change again, resulting in an event that would have consequences which are extremely difficult to predict or parametrize. As the nature of dark energy and dark matter remain enigmatic, even hypothetical, the possibilities surrounding their coming role in the universe are currently unknown. None of these theoretic endings for the universe are certain.

Observational constraints on theories

Choosing among these rival scenarios is done by 'weighing' the universe, for example, measuring the relative contributions of matter, radiation, dark matter, and dark energy to the critical density. More concretely, competing scenarios are evaluated against data on galaxy clustering and distant supernovas, and on the anisotropies in the cosmic microwave background.
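
In terms of the density parameters introduced above (standard notation, added for clarity), "weighing" the universe amounts to measuring the individual contributions:

    \Omega_{\text{total}} = \Omega_{m} + \Omega_{r} + \Omega_{\Lambda}, \qquad \Omega_{k} = 1 - \Omega_{\text{total}},

so a flat universe corresponds to Ω_total = 1 (that is, Ω_k = 0).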

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...