
Tuesday, February 17, 2015

Emergence



From Wikipedia, the free encyclopedia


Snowflakes forming complex, symmetrical, fractal patterns are an example of emergence in a physical system.

A termite "cathedral" mound produced by a termite colony is a classic example of emergence in nature.

In philosophy, systems theory, science, and art, emergence is conceived as a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties. In philosophy, almost all accounts of emergence include a form of irreducibility (either epistemic or ontological) to the lower levels.[1] Emergence is also central in theories of integrative levels and of complex systems. For instance, the phenomenon of life as studied in biology is commonly perceived as an emergent property of interacting molecules as studied in chemistry, whose phenomena in turn reflect interactions among elementary particles as modeled in particle physics, which, in sufficiently large aggregations, exhibit motion as modeled in gravitational physics. Neurobiological phenomena are often presumed to provide the underlying basis of psychological phenomena, from which economic phenomena are in turn presumed largely to emerge.

Definitions

The idea of emergence has been around since at least the time of Aristotle.[2]  John Stuart Mill[3] and Julian Huxley[4] are two of many scientists and philosophers who have written on the concept.
The term "emergent" was coined by philosopher G. H. Lewes, who wrote:
"Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same -- their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference."[5][6]
Economist Jeffrey Goldstein provided a current definition of emergence in the journal Emergence.[7] Goldstein initially defined emergence as: "the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems".

Goldstein's definition can be further elaborated to describe the qualities of this definition in more detail:
"The common characteristics are: (1) radical novelty (features not previously observed in systems); (2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time); (3) A global or macro "level" (i.e. there is some property of "wholeness"); (4) it is the product of a dynamical process (it evolves); and (5) it is "ostensive" (it can be perceived). For good measure, Goldstein throws in supervenience."[8]
Systems scientist Peter Corning also points out that living systems cannot be reduced to underlying laws of physics:
Rules, or laws, have no causal efficacy; they do not in fact “generate” anything. They serve merely to describe regularities and consistent relationships in nature. These patterns may be very illuminating and important, but the underlying causal agencies must be separately specified (though often they are not). But that aside, the game of chess illustrates ... why any laws or rules of emergence and evolution are insufficient. Even in a chess game, you cannot use the rules to predict “history” — i.e., the course of any given game. Indeed, you cannot even reliably predict the next move in a chess game. Why? Because the “system” involves more than the rules of the game. It also includes the players and their unfolding, moment-by-moment decisions among a very large number of available options at each choice point. The game of chess is inescapably historical, even though it is also constrained and shaped by a set of rules, not to mention the laws of physics. Moreover, and this is a key point, the game of chess is also shaped by teleonomic, cybernetic, feedback-driven influences. It is not simply a self-ordered process; it involves an organized, “purposeful” activity.[8]

Strong and weak emergence

Usage of the notion "emergence" may generally be subdivided into two perspectives, that of "weak emergence" and "strong emergence". In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation. This is opposed to the older notion of strong emergence, in which the emergent property cannot be simulated by a computer.

Some common points between the two notions are that emergence concerns new properties produced as the system grows, which is to say ones which are not shared with its components or prior states. Also, it is assumed that the properties are supervenient rather than metaphysically primitive (Bedau 1997).

Weak emergence describes new properties arising in systems as a result of the interactions at an elemental level. However, it is stipulated that the properties can be determined by observing or simulating the system, and not by any process of a priori analysis.

Bedau notes that weak emergence is not a universal metaphysical solvent, since weak emergence leads to the conclusion that matter itself contains elements of awareness. However, Bedau concludes that adopting this view would provide a precise notion that emergence is involved in consciousness, and, second, that the notion of weak emergence is metaphysically benign (Bedau 1997).

Strong emergence describes the direct causal action of a high-level system upon its components; qualities produced this way are irreducible to the system's constituent parts (Laughlin 2005). The whole is greater than the sum of its parts. It follows that no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts (Bedau 1997).

However, "the debate about whether or not the whole can be predicted from the properties of the parts misses the point. Wholes produce unique combined effects, but many of these effects may be co-determined by the context and the interactions between the whole and its environment(s)" (Corning 2002). In accordance with his Synergism Hypothesis (Corning 1983 2005), Corning also stated, "It is the synergistic effects produced by wholes that are the very cause of the evolution of complexity in nature." Novelist Arthur Koestler used the metaphor of Janus (a symbol of the unity underlying complements like open/shut, peace/war) to illustrate how the two perspectives (strong vs. weak or holistic vs. reductionistic) should be treated as non-exclusive, and should work together to address the issues of emergence (Koestler 1969). Further,
The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts. (Anderson 1972)
The plausibility of strong emergence is questioned by some as contravening our usual understanding of physics. Mark A. Bedau observes:
Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.[9]
Meanwhile, others have worked towards developing analytical evidence of strong emergence. In 2009, Gu et al. presented a class of physical systems that exhibits non-computable macroscopic properties.[10][11] More precisely, if one could compute certain macroscopic properties of these systems from their microscopic description, then one would be able to solve computational problems known to be undecidable in computer science. They concluded that
Although macroscopic concepts are essential for understanding our world, much of fundamental physics has been devoted to the search for a 'theory of everything', a set of equations that perfectly describe the behavior of all fundamental particles. The view that this is the goal of science rests in part on the rationale that such a theory would allow us to derive the behavior of all macroscopic concepts, at least in principle. The evidence we have presented suggests that this view may be overly optimistic. A 'theory of everything' is one of many components necessary for complete understanding of the universe, but is not necessarily the only one. The development of macroscopic laws from first principles may involve more than just systematic logic, and could require conjectures suggested by experiments, simulations or insight.[10]

Objective or subjective quality

The properties of complexity and organization of any system are considered by Crutchfield to be subjective qualities determined by the observer.
"Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analysed in terms of how model-building observers infer from measurements the computational capabilities embedded in non-linear processes. An observer’s notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer’s chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data."(Crutchfield 1994)
On the other hand, Peter Corning argues "Must the synergies be perceived/observed in order to qualify as emergent effects, as some theorists claim? Most emphatically not. The synergies associated with emergence are real and measurable, even if nobody is there to observe them." (Corning 2002)

In philosophy, religion, art and human sciences

In philosophy, emergence is often understood to be a much weaker claim about the etiology of a system's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole. Nicolai Hartmann, one of the first modern philosophers to write on emergence, termed this categorial novum (new category).

In religion, emergence grounds expressions of religious naturalism in which a sense of the sacred is perceived in the workings of entirely naturalistic processes by which more complex forms arise or evolve from simpler forms. Examples are detailed in a 2006 essay titled 'The Sacred Emergence of Nature' by Ursula Goodenough and Terrence Deacon and a 2006 essay titled 'Beyond Reductionism: Reinventing the Sacred' by Stuart Kauffman.

An early argument (1904-05) for the emergence of social formations, in part stemming from religion, can be found in Max Weber's most famous work, The Protestant Ethic and the Spirit of Capitalism.[12]

In art, emergence is used to explore the origins of novelty, creativity, and authorship. Some art/literary theorists (Wheeler, 2006;[13] Alexander, 2011[14]) have proposed alternatives to postmodern understandings of "authorship" using the complexity sciences and emergence theory. They contend that artistic selfhood and meaning are emergent, relatively objective phenomena. The concept of emergence has also been applied to the theory of literature and art, history, linguistics, cognitive sciences, etc. through the teaching of Jean-Marie Grassin at the University of Limoges (see esp.: J. Fontanille, B. Westphal, J. Vion-Dury, eds., L'Émergence—Poétique de l'Émergence, en réponse aux travaux de Jean-Marie Grassin, Bern, Berlin, etc., 2011; and the article "Emergence" in the International Dictionary of Literary Terms (DITL)).

In international development, concepts of emergence have been used within a theory of social change termed SEED-SCALE to show how standard principles interact to bring forward socio-economic development fitted to cultural values, community economics, and natural environment (local solutions emerging from the larger socio-econo-biosphere). These principles can be implemented utilizing a sequence of standardized tasks that self-assemble in individually specific ways utilizing recursive evaluative criteria.[15]

In postcolonial studies, the term "Emerging Literature" refers to a contemporary body of texts that is gaining momentum in the global literary landscape (see esp.: J.M. Grassin, ed., Emerging Literatures, Bern, Berlin, etc.: Peter Lang, 1996). By contrast, "emergent literature" is a concept used in the theory of literature.

Emergent properties and processes

An emergent behavior or emergent property can appear when a number of simple entities (agents) operate in an environment, forming more complex behaviors as a collective. If emergence happens over disparate size scales, then the reason is usually a causal relation across different scales. In other words, there is often a form of top-down feedback in systems with emergent properties.[16] The processes from which emergent properties result may occur in either the observed or the observing system, and can commonly be identified by their patterns of accumulating change, most generally called 'growth'. Emergent behaviours can occur because of intricate causal relations across different scales and feedback, known as interconnectivity. The emergent property itself may be either very predictable or unpredictable and unprecedented, and may represent a new level of the system's evolution.
The complex behaviour or properties are not a property of any single such entity, nor can they easily be predicted or deduced from the behaviour of the lower-level entities, and may in fact be irreducible to such behaviour. The shape and behaviour of a flock of birds or a school of fish are good examples of emergent properties.

One reason why emergent behaviour is hard to predict is that the number of interactions between the components of a system increases combinatorially with the number of components, potentially allowing many new and subtle types of behaviour to emerge.
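As a rough illustration of that combinatorial growth (a sketch, not from the source; the component counts are arbitrary), the number of possible pairwise interactions grows quadratically with the number of components, while the number of possible interacting subsets grows exponentially:

```python
# A minimal sketch (not from the source) of how quickly the space of possible
# interactions grows with the number of components; the component counts are
# arbitrary.
from math import comb

for n in (5, 10, 20, 40):
    pairs = comb(n, 2)            # possible pairwise interactions
    subsets = 2 ** n - n - 1      # possible interacting subsets of size >= 2
    print(f"{n:>3} components: {pairs:>4} pairwise interactions, "
          f"{subsets:,} possible interacting subsets")
```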

On the other hand, merely having a large number of interactions is not enough by itself to guarantee emergent behaviour; many of the interactions may be negligible or irrelevant, or may cancel each other out. In some cases, a large number of interactions can in fact work against the emergence of interesting behaviour, by creating a lot of "noise" to drown out any emerging "signal"; the emergent behaviour may need to be temporarily isolated from other interactions before it reaches enough critical mass to be self-supporting. Thus it is not just the sheer number of connections between components which encourages emergence; it is also how these connections are organised. A hierarchical organisation is one example that can generate emergent behaviour (a bureaucracy may behave in a way quite different from that of the individual humans in that bureaucracy); but perhaps more interestingly, emergent behaviour can also arise from more decentralized organisational structures, such as a marketplace. In some cases, the system has to reach a combined threshold of diversity, organisation, and connectivity before emergent behaviour appears.

Unintended consequences and side effects are closely related to emergent properties. Luc Steels writes: "A component has a particular functionality but this is not recognizable as a subfunction of the global functionality. Instead a component implements a behaviour whose side effect contributes to the global functionality [...] Each behaviour has a side effect and the sum of the side effects gives the desired functionality" (Steels 1990). In other words, the global or macroscopic functionality of a system with "emergent functionality" is the sum of all "side effects", of all emergent properties and functionalities.

Systems with emergent properties or emergent structures may appear to defy entropic principles and the second law of thermodynamics, because they form and increase order despite the lack of command and central control. This is possible because open systems can extract information and order out of the environment.

Emergence helps to explain why the fallacy of division is a fallacy.

Emergent structures in nature


Ripple patterns in a sand dune, created by wind or water, are an example of an emergent structure in nature.

Giant's Causeway in Northern Ireland is an example of a complex emergent structure created by natural processes.

Emergent structures are patterns that emerge via collective actions of many individual entities. To explain such patterns, one might conclude, per Aristotle,[2] that emergent structures are more than the sum of their parts on the assumption that the emergent order will not arise if the various parts simply interact independently of one another. However, there are those who disagree.[17] According to this argument, the interaction of each part with its immediate surroundings causes a complex chain of processes that can lead to order in some form. In fact, some systems in nature are observed to exhibit emergence based upon the interactions of autonomous parts, and some others exhibit emergence that at least at present cannot be reduced in this way. See the discussion in this article of strong and weak emergence.

Emergent structures can be found in many natural phenomena, from the physical to the biological domain. For example, the shapes of weather phenomena such as hurricanes are emergent structures. The development and growth of complex, orderly crystals, as driven by the random motion of water molecules within a conducive natural environment, is another example of an emergent process, where randomness can give rise to complex and deeply attractive, orderly structures.

Water crystals forming on glass demonstrate an emergent, fractal natural process occurring under appropriate conditions of temperature and humidity.
However, crystalline structure and hurricanes are said to have a self-organizing phase.

Symphony of the Stones carved by Goght River at Garni Gorge in Armenia is an example of an emergent natural structure.

It is useful to distinguish three forms of emergent structures. A first-order emergent structure occurs as a result of shape interactions (for example, hydrogen bonds in water molecules lead to surface tension). A second-order emergent structure involves shape interactions played out sequentially over time (for example, changing atmospheric conditions as a snowflake falls to the ground build upon and alter its form). Finally, a third-order emergent structure is a consequence of shape, time, and heritable instructions. For example, an organism's genetic code sets boundary conditions on the interaction of biological systems in space and time.

Non-living, physical systems

In physics, emergence is used to describe a property, law, or phenomenon which occurs at macroscopic scales (in space or time) but not at microscopic scales, despite the fact that a macroscopic system can be viewed as a very large ensemble of microscopic systems.

An emergent property need not be more complicated than the underlying non-emergent properties which generate it. For instance, the laws of thermodynamics are remarkably simple, even if the laws which govern the interactions between component particles are complex. The term emergence in physics is thus used not to signify complexity, but rather to distinguish which laws and concepts apply to macroscopic scales, and which ones apply to microscopic scales.

Some examples include:
  • Classical mechanics: The laws of classical mechanics can be said to emerge as a limiting case from the rules of quantum mechanics applied to large enough masses. This is particularly strange since quantum mechanics is generally thought of as more complicated than classical mechanics.
  • Friction: Forces between elementary particles are conservative. However, friction emerges when considering more complex structures of matter, whose surfaces can convert mechanical energy into heat energy when rubbed against each other. Similar considerations apply to other emergent concepts in continuum mechanics such as viscosity, elasticity, tensile strength, etc.
  • Patterned ground: the distinct, and often symmetrical geometric shapes formed by ground material in periglacial regions.
  • Statistical mechanics was initially derived using the concept of a large enough ensemble that fluctuations about the most likely distribution can be all but ignored. However, small clusters do not exhibit sharp first order phase transitions such as melting, and at the boundary it is not possible to completely categorize the cluster as a liquid or solid, since these concepts are (without extra definitions) only applicable to macroscopic systems. Describing a system using statistical mechanics methods is much simpler than using a low-level atomistic approach.
  • Electrical networks: The bulk conductive response of binary (RC) electrical networks with random arrangements can be seen as emergent properties of such physical systems. Such arrangements can be used as simple physical prototypes for deriving mathematical formulae for the emergent responses of complex systems.[18]
  • Weather.

Temperature is sometimes used as an example of an emergent macroscopic behaviour. In classical dynamics, a snapshot of the instantaneous momenta of a large number of particles at equilibrium is sufficient to find the average kinetic energy per degree of freedom which is proportional to the temperature. For a small number of particles the instantaneous momenta at a given time are not statistically sufficient to determine the temperature of the system. However, using the ergodic hypothesis, the temperature can still be obtained to arbitrary precision by further averaging the momenta over a long enough time.
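A minimal numerical sketch of this point (not from the source; it assumes a classical monatomic ideal gas of argon at an illustrative "true" temperature of 300 K): a temperature re-estimated from the average kinetic energy per degree of freedom is noisy for a handful of particles but converges for large ensembles.

```python
# A rough sketch (not from the source): temperature re-estimated from the
# average kinetic energy per degree of freedom. Assumes a classical monatomic
# ideal gas (argon) at a "true" temperature of 300 K; all values illustrative.
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K
m = 6.6335209e-26         # mass of an argon atom, kg
T_true = 300.0            # K

rng = np.random.default_rng(0)
for n_particles in (10, 100, 10_000, 1_000_000):
    # Each velocity component is Gaussian with variance k_B * T / m.
    v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(n_particles, 3))
    mean_ke_per_dof = 0.5 * m * np.mean(v ** 2)   # average over particles and components
    T_est = 2.0 * mean_ke_per_dof / k_B           # <KE per dof> = k_B * T / 2
    print(f"{n_particles:>9} particles -> estimated T = {T_est:7.2f} K")
```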

Convection in a liquid or gas is another example of emergent macroscopic behaviour that makes sense only when considering differentials of temperature. Convection cells, particularly Bénard cells, are an example of a self-organizing system (more specifically, a dissipative system) whose structure is determined both by the constraints of the system and by random perturbations: the possible realizations of the shape and size of the cells depends on the temperature gradient as well as the nature of the fluid and shape of the container, but which configurations are actually realized is due to random perturbations (thus these systems exhibit a form of symmetry breaking).

In some theories of particle physics, even such basic structures as mass, space, and time are viewed as emergent phenomena, arising from more fundamental concepts such as the Higgs boson or strings. In some interpretations of quantum mechanics, the perception of a deterministic reality, in which all objects have a definite position, momentum, and so forth, is actually an emergent phenomenon, with the true state of matter being described instead by a wavefunction which need not have a single position or momentum. Most of the laws of physics themselves, as we experience them today, appear to have emerged during the course of time, making emergence the most fundamental principle in the universe and raising the question of what might be the most fundamental law of physics from which all others emerged. Chemistry can in turn be viewed as an emergent property of the laws of physics. Biology (including biological evolution) can be viewed as an emergent property of the laws of chemistry. Similarly, psychology could be understood as an emergent property of neurobiological laws. Finally, free-market theories understand economy as an emergent feature of psychology.

In his book (Laughlin 2005), Laughlin explains that for many particle systems, nothing can be calculated exactly from the microscopic equations, and that macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, owing to phase transitions. As a result, these macroscopic systems are described in their own terminology and have properties that do not depend on many microscopic details. This does not mean that the microscopic interactions are irrelevant, but simply that you no longer see them directly; you see only a renormalized effect of them. Laughlin is a pragmatic theoretical physicist: if you cannot, possibly ever, calculate the broken-symmetry macroscopic properties from the microscopic equations, then what is the point of talking about reducibility?

Living, biological systems

Emergence and evolution

Life is a major source of complexity, and evolution is the major process behind the varying forms of life. In this view, evolution is the process describing the growth of complexity in the natural world; in speaking of the emergence of complex living beings and life-forms, this view therefore refers to processes of sudden change in evolution.
Regarding causality in evolution Peter Corning observes:
"Synergistic effects of various kinds have played a major causal role in the evolutionary process generally and in the evolution of cooperation and complexity in particular... Natural selection is often portrayed as a “mechanism”, or is personified as a causal agency... In reality, the differential “selection” of a trait, or an adaptation, is a consequence of the functional effects it produces in relation to the survival and reproductive success of a given organism in a given environment. It is these functional effects that are ultimately responsible for the trans-generational continuities and changes in nature." (Corning 2002)
Per his definition of emergence, Corning also addresses emergence and evolution:
"[In] evolutionary processes, causation is iterative; effects are also causes. And this is equally true of the synergistic effects produced by emergent systems. In other words, emergence itself... has been the underlying cause of the evolution of emergent phenomena in biological evolution; it is the synergies produced by organized systems that are the key." (Corning 2002)
Swarming is a well-known behaviour in many animal species from marching locusts to schooling fish to flocking birds. Emergent structures are a common strategy found in many animal groups: colonies of ants, mounds built by termites, swarms of bees, shoals/schools of fish, flocks of birds, and herds/packs of mammals.

An example to consider in detail is an ant colony. The queen does not give direct orders and does not tell the ants what to do. Instead, each ant reacts to stimuli in the form of chemical scent from larvae, other ants, intruders, food and buildup of waste, and leaves behind a chemical trail, which, in turn, provides a stimulus to other ants. Here each ant is an autonomous unit that reacts depending only on its local environment and the genetically encoded rules for its variety of ant. Despite the lack of centralized decision making, ant colonies exhibit complex behavior and have even demonstrated the ability to solve geometric problems. For example, colonies routinely find the maximum distance from all colony entrances to dispose of dead bodies.[19]
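The logic of such trail-following can be illustrated with a toy "double bridge" simulation (a sketch, not from the source; the path lengths, deposit amount and evaporation rate are illustrative assumptions). Ants choosing between two paths using only local pheromone levels collectively converge on the shorter one, with no central coordination:

```python
# A toy "double bridge" stigmergy sketch (not from the source): ants choose
# between a short and a long path using only local pheromone levels. The path
# lengths, deposit amount and evaporation rate are illustrative assumptions.
import random

random.seed(1)
pheromone = {"short": 1.0, "long": 1.0}
length = {"short": 1, "long": 2}      # the long path takes twice as long to traverse
evaporation = 0.01
deposit = 1.0

for ant in range(2000):
    # Each ant picks a path with probability proportional to its pheromone level.
    total = pheromone["short"] + pheromone["long"]
    path = "short" if random.random() < pheromone["short"] / total else "long"
    # Ants on the shorter path complete round trips more often, so they lay
    # pheromone at a higher rate; model this by scaling the deposit by 1/length.
    pheromone[path] += deposit / length[path]
    # Pheromone evaporates everywhere, so unused paths fade.
    for p in pheromone:
        pheromone[p] *= 1.0 - evaporation

total = pheromone["short"] + pheromone["long"]
print(f"share of pheromone on the short path: {pheromone['short'] / total:.2f}")
```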

Organization of life

A broader example of emergent properties in biology is viewed in the biological organisation of life, ranging from the subatomic level to the entire biosphere. For example, individual atoms can be combined to form molecules such as polypeptide chains, which in turn fold and refold to form proteins, which in turn create even more complex structures. These proteins, assuming their functional status from their spatial conformation, interact together and with other molecules to achieve higher biological functions and eventually create an organism. Another example is how cascade phenotype reactions, as detailed in chaos theory, arise from individual genes mutating respective positioning.[20] At the highest level, all the biological communities in the world form the biosphere, within which its human participants form societies, whose complex interactions give rise to meta-social systems such as the stock market.

In humanity

Spontaneous order

Groups of human beings, left free to regulate themselves, tend to produce spontaneous order, rather than the meaningless chaos often feared. This has been observed in society at least since Chuang Tzu in ancient China. A classic traffic roundabout is a good example, with cars moving in and out with such effective organization that some modern cities have begun replacing stoplights at problem intersections with traffic circles, and getting better results. Open-source software and Wiki projects form an even more compelling illustration.
Emergent processes or behaviours can be seen in many other places, such as cities, cabal and market-dominant minority phenomena in economics, organizational phenomena in computer simulations and cellular automata. Whenever you have a multitude of individuals interacting with one another, there often comes a moment when disorder gives way to order and something new emerges: a pattern, a decision, a structure, or a change in direction (Miller 2010, 29).[21]

Economics

The stock market (or any market for that matter) is an example of emergence on a grand scale. As a whole it precisely regulates the relative security prices of companies across the world, yet it has no leader: there is no central planner, no single entity that controls the workings of the entire market. Agents, or investors, have knowledge of only a limited number of companies within their portfolio, and must follow the regulatory rules of the market and analyse the transactions individually or in large groupings. Trends and patterns emerge which are studied intensively by technical analysts.

Money

Money, as a medium of exchange and of deferred payment, is also an example of an emergent phenomenon among market participants: in striving to possess a commodity with greater marketability than their own, participants converge on certain commodities (money) whose possession facilitates the search for the goods they actually want (e.g. consumables).
Austrian School economist Carl Menger wrote in his work Principles of Economics, "As each economizing individual becomes increasingly more aware of his economic interest, he is led by this interest, without any agreement, without legislative compulsion, and even without regard to the public interest, to give his commodities in exchange for other, more saleable, commodities, even if he does not need them for any immediate consumption purpose. With economic progress, therefore, we can everywhere observe the phenomenon of a certain number of goods, especially those that are most easily saleable at a given time and place, becoming, under the powerful influence of custom, acceptable to everyone in trade, and thus capable of being given in exchange for any other commodity."[22]

World Wide Web and the Internet

The World Wide Web is a popular example of a decentralized system exhibiting emergent properties. There is no central organization rationing the number of links, yet the number of links pointing to each page follows a power law in which a few pages are linked to many times and most pages are seldom linked to. A related property of the network of links in the World Wide Web is that almost any pair of pages can be connected to each other through a relatively short chain of links. Although relatively well known now, this property was initially unexpected in an unregulated network. It is shared with many other types of networks called small-world networks (Barabasi, Jeong, & Albert 1999, pp. 130–131).
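A minimal preferential-attachment sketch (not the cited Barabasi-Jeong-Albert analysis itself; the page count and seeding are illustrative assumptions) reproduces the qualitative effect: when each new page links to an existing page with probability proportional to the links that page already has, a few pages end up heavily linked while most are seldom linked to.

```python
# A minimal preferential-attachment sketch (not the cited analysis; the page
# count and seeding are illustrative). Each new page links to an existing page
# chosen in proportion to how many incoming links that page already has.
import random
from collections import Counter

random.seed(0)
links = [0, 1]                      # one entry per incoming link, naming the target page

for new_page in range(2, 20_000):
    target = random.choice(links)   # probability proportional to current in-degree
    links.append(target)            # the new page links to the chosen target
    links.append(new_page)          # seed the new page with one incoming link of its own

indegree = Counter(links)
print("most-linked pages:", indegree.most_common(5))
print("median in-degree:", sorted(indegree.values())[len(indegree) // 2])
```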

Internet traffic can also exhibit some seemingly emergent properties. In the congestion control mechanism, TCP flows can become globally synchronized at bottlenecks, simultaneously increasing and then decreasing throughput in coordination. Congestion, widely regarded as a nuisance, is possibly an emergent property of the spreading of bottlenecks across a network in high traffic flows which can be considered as a phase transition [see review of related research in (Smith 2008, pp. 1–31)].
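The synchronization effect can be caricatured with a toy additive-increase/multiplicative-decrease model (a sketch, not from the cited research; the capacity and window values are illustrative): if every flow sharing a bottleneck sees a loss in the same round trip, the flows all halve together and then climb together, producing a coordinated sawtooth.

```python
# A toy additive-increase/multiplicative-decrease sketch (not from the cited
# research; capacity and window sizes are illustrative). When the flows' total
# load exceeds the bottleneck, every flow sees a loss in the same round trip
# and they all halve together, producing a synchronized sawtooth.
capacity = 100.0                      # packets per round trip the link can carry
windows = [5.0, 12.0, 20.0, 8.0]      # congestion windows of four flows

for rtt in range(30):
    if sum(windows) > capacity:
        windows = [w / 2.0 for w in windows]   # synchronized multiplicative decrease
    else:
        windows = [w + 1.0 for w in windows]   # additive increase, one packet per RTT
    print(f"rtt {rtt:2d}: total = {sum(windows):6.1f}  windows = "
          + ", ".join(f"{w:5.1f}" for w in windows))
```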

Another important example of emergence in web-based systems is social bookmarking (also called collaborative tagging). In social bookmarking systems, users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. Recent research which analyzes empirically the complex dynamics of such systems[23] has shown that consensus on stable distributions and a simple form of shared vocabularies does indeed emerge, even in the absence of a centrally controlled vocabulary. Some believe that this could be because users who contribute tags all use the same language, and they share similar semantic structures underlying the choice of words. The convergence in social tags may therefore be interpreted as the emergence of structures as people who have similar semantic interpretation collaboratively index online information, a process called semantic imitation.[24][25]
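A toy urn model (a sketch, not the cited studies' model; the imitation and innovation probabilities are illustrative assumptions) captures the basic mechanism: if each new tagger usually copies an existing tag in proportion to its current use and only occasionally invents a new one, tag frequencies settle into a stable, heavy-tailed distribution without any central vocabulary.

```python
# A toy urn-model sketch (not the cited studies' model; the imitation and
# innovation probabilities are illustrative). Each new tagger usually copies an
# existing tag in proportion to its current use, and occasionally invents one.
import random
from collections import Counter

random.seed(4)
tags = ["photo"]                    # the first tag ever applied to the resource
P_NEW = 0.05                        # probability of inventing a brand-new tag

for user in range(10_000):
    if random.random() < P_NEW:
        tags.append(f"tag{user}")          # invent a new tag
    else:
        tags.append(random.choice(tags))   # imitate, proportional to current use

counts = Counter(tags)
print("top tags:", counts.most_common(5))
print("distinct tags:", len(counts))
```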

Open-source software, or Wiki projects such as Wikipedia and Wikivoyage, are other impressive examples of emergence. The "zeroeth law of Wikipedia" is often cited by its editors to highlight its apparently surprising and unpredictable quality: "The problem with Wikipedia is that it only works in practice. In theory, it can never work."

Architecture and cities


Traffic patterns in cities can be seen as an example of spontaneous order.

Emergent structures appear at many different levels of organization or as spontaneous order. Emergent self-organization appears frequently in cities where no planning or zoning entity predetermines the layout of the city. (Krugman 1996, pp. 9–29) The interdisciplinary study of emergent behaviors is not generally considered a homogeneous field, but divided across its application or problem domains.

Architects and landscape architects may not design all the pathways of a complex of buildings. Instead, they might let usage patterns emerge and then place pavement where pathways have become worn in.

The on-course action and vehicle progression of the 2007 Urban Challenge could possibly be regarded as an example of cybernetic emergence. Patterns of road use, indeterministic obstacle clearance times, etc., work together to form a complex emergent pattern that cannot be deterministically planned in advance.

The architectural school of Christopher Alexander takes a deeper approach to emergence, attempting to rewrite the process of urban growth itself in order to affect form, establishing a new methodology of planning and design tied to traditional practices, an Emergent Urbanism. Urban emergence has also been linked to theories of urban complexity (Batty 2005) and urban evolution (Marshall 2009).

Building ecology is a conceptual framework for understanding architecture and the built environment as the interface between the dynamically interdependent elements of buildings, their occupants, and the larger environment. Rather than viewing buildings as inanimate or static objects, building ecologist Hal Levin views them as interfaces or intersecting domains of living and non-living systems.[26] The microbial ecology of the indoor environment is strongly dependent on the building materials, occupants, contents, environmental context and the indoor and outdoor climate. There is a strong relationship between atmospheric chemistry, indoor air quality, and the chemical reactions occurring indoors. The chemicals involved may be nutrients, neutral, or biocides for the microbial organisms. The microbes in turn produce chemicals that affect the building materials and occupant health and well-being. Humans manipulate ventilation, temperature and humidity to achieve comfort, with concomitant effects on the microbes that populate and evolve in these spaces.[26][27][28]

Eric Bonabeau has attempted to define emergent phenomena through traffic: "traffic jams are actually very complicated and mysterious. On an individual level, each driver is trying to get somewhere and is following (or breaking) certain rules, some legal (the speed limit) and others societal or personal (slow down to let another driver change into your lane). But a traffic jam is a separate and distinct entity that emerges from those individual behaviors. Gridlock on a highway, for example, can travel backward for no apparent reason, even as the cars are moving forward." He has also likened emergent phenomena to the analysis of market trends and employee behavior.[29]
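That backward-travelling jam can be reproduced with a minimal Nagel-Schreckenberg-style cellular automaton (a sketch under illustrative parameters, not Bonabeau's own model): cars follow simple local rules plus a small random slowdown, yet stop-and-go waves form and drift against the direction of travel.

```python
# A minimal Nagel-Schreckenberg-style cellular automaton (a sketch, not
# Bonabeau's own model; road length, density and probabilities are
# illustrative). Cars follow local rules, yet jams form and drift backward.
import random

random.seed(2)
ROAD, V_MAX, P_SLOW, STEPS = 100, 5, 0.3, 25
cars = {pos: 0 for pos in range(0, 40, 2)}        # position -> velocity

for t in range(STEPS):
    new_cars = {}
    ordered = sorted(cars)
    for i, pos in enumerate(ordered):
        v = cars[pos]
        gap = (ordered[(i + 1) % len(ordered)] - pos - 1) % ROAD
        v = min(v + 1, V_MAX)                     # accelerate toward the speed limit
        v = min(v, gap)                           # never drive into the car ahead
        if v > 0 and random.random() < P_SLOW:
            v -= 1                                # random driver-level slowdown
        new_cars[(pos + v) % ROAD] = v
    cars = new_cars
    row = ["." for _ in range(ROAD)]
    for pos, v in cars.items():
        row[pos] = str(v)
    print("".join(row))    # runs of slow cars (low digits) drift backward over time
```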

Computational emergent phenomena have also been utilized in architectural design processes, for example for formal explorations and experiments in digital materiality.[30]

Computer AI

Some artificially intelligent computer applications utilize emergent behavior for animation. One example is Boids, which mimics the swarming behavior of birds.
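A minimal boids-style update (a sketch with illustrative weights, radius and speed limit, not Reynolds' original implementation) shows how three local rules (separation, alignment, cohesion) produce coordinated motion with no leader.

```python
# A minimal boids-style update (a sketch, not Reynolds' original code; weights,
# radius and speed limit are illustrative). Each agent applies three local
# rules (separation, alignment, cohesion) and coordinated motion emerges.
import numpy as np

rng = np.random.default_rng(3)
N = 50
pos = rng.uniform(0, 100, size=(N, 2))
vel = rng.uniform(-1, 1, size=(N, 2))

def step(pos, vel, radius=15.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, v_max=2.0):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < radius) & (d > 0)
        if not nbr.any():
            continue
        separation = (pos[i] - pos[nbr]).sum(axis=0)      # steer away from close neighbours
        alignment = vel[nbr].mean(axis=0) - vel[i]         # match neighbours' average heading
        cohesion = pos[nbr].mean(axis=0) - pos[i]          # drift toward neighbours' centre
        new_vel[i] += w_sep * separation + w_ali * alignment + w_coh * cohesion
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:
            new_vel[i] *= v_max / speed                    # cap the speed
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("spread of velocities after 200 steps:", np.std(vel, axis=0))
```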

Language

It has been argued that the structure and regularity of language (grammar), or at least language change, is an emergent phenomenon (Hopper 1998). While each speaker merely tries to reach his or her own communicative goals, he or she uses language in a particular way. If enough speakers behave in that way, language is changed (Keller 1994). In a wider sense, the norms of a language, i.e. the linguistic conventions of its speech community, can be seen as a system emerging from long-term participation in communicative problem-solving in various social circumstances (Määttä 2000).

Emergent change processes

Within the field of group facilitation and organization development, there have been a number of new group processes that are designed to maximize emergence and self-organization, by offering a minimal set of effective initial conditions. Examples of these processes include SEED-SCALE, Appreciative Inquiry, Future Search, the World Cafe or Knowledge Cafe, Open Space Technology, and others. (Holman, 2010)

Zero-energy universe



From Wikipedia, the free encyclopedia

The zero-energy universe theory states that the total amount of energy in the universe is exactly zero: its amount of positive energy in the form of matter is exactly canceled out by its negative energy in the form of gravity.[1][2]

The theory originated in 1973, when Edward Tryon proposed in the journal Nature that the Universe emerged from a large-scale quantum fluctuation of vacuum energy, resulting in its positive mass-energy being exactly balanced by its negative gravitational potential energy.[3]

Free-lunch interpretation

A generic property of inflation is the balancing of the negative gravitational energy, within the inflating region, with the positive energy of the inflaton field to yield a post-inflationary universe with negligible or zero energy density.[4][5] It is this balancing of the total universal energy budget that enables the open-ended growth possible with inflation; during inflation energy flows from the gravitational field (or geometry) to the inflaton field—the total gravitational energy decreases (i.e. becomes more negative) and the total inflaton energy increases (becomes more positive). But the respective energy densities remain constant and opposite since the region is inflating. Consequently, inflation explains the otherwise curious cancellation of matter and gravitational energy on cosmological scales, which is consistent with astronomical observations.[6]

Quantum fluctuation

Due to quantum uncertainty, energy fluctuations such as an electron and its anti-particle, a positron, can arise spontaneously out of vacuum space, but must disappear rapidly. The lower the energy of such a fluctuation ("bubble"), the longer it can exist. A gravitational field has negative energy. Matter has positive energy. The two values cancel out provided the Universe is completely flat. In that case, the Universe has zero energy and can theoretically last forever.[3][7]

Hawking gravitational energy

Stephen Hawking notes in his 2010 book The Grand Design: "If the total energy of the universe must always remain zero, and it costs energy to create a body, how can a whole universe be created from nothing? That is why there must be a law like gravity. Because gravity is attractive, gravitational energy is negative: One has to do work to separate a gravitationally bound system, such as the Earth and moon. This negative energy can balance the positive energy needed to create matter, but it’s not quite that simple. The negative gravitational energy of the Earth, for example, is less than a billionth of the positive energy of the matter particles the Earth is made of. A body such as a star will have more negative gravitational energy, and the smaller it is (the closer the different parts of it are to each other), the greater the negative gravitational energy will be. But before it can become greater (in magnitude) than the positive energy of the matter, the star will collapse to a black hole, and black holes have positive energy. That’s why empty space is stable. Bodies such as stars or black holes cannot just appear out of nothing. But a whole universe can." (p. 180)
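Hawking's "less than a billionth" remark can be checked with a back-of-the-envelope calculation (a sketch, not from the source), treating the Earth as a uniform sphere so that the magnitude of its gravitational binding energy is 3GM²/(5R); the ratio to the rest-mass energy Mc² comes out around 4 × 10⁻¹⁰.

```python
# A back-of-the-envelope check (not from the source) of the "less than a
# billionth" remark, treating the Earth as a uniform sphere so that the
# magnitude of its gravitational binding energy is 3*G*M^2 / (5*R).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

U_grav = 3 * G * M_earth**2 / (5 * R_earth)    # magnitude of the (negative) gravitational energy
E_rest = M_earth * c**2                        # positive rest-mass energy of the Earth

print(f"gravitational binding energy ~ {U_grav:.2e} J")
print(f"rest-mass energy             ~ {E_rest:.2e} J")
print(f"ratio ~ {U_grav / E_rest:.1e}  (well under one billionth)")
```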

Eternal inflation



From Wikipedia, the free encyclopedia
 
Eternal inflation is an inflationary universe model, which is itself an outgrowth or extension of the Big Bang theory. In theories of eternal inflation, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. Because these regions expand exponentially rapidly, most of the volume of the Universe at any given time is inflating. All models of eternal inflation produce an infinite multiverse, typically a fractal one.

Eternal inflation is predicted by many different models of cosmic inflation. MIT professor Alan H. Guth proposed an inflation model involving a "false vacuum" phase with positive vacuum energy. Parts of the Universe in that phase inflate, and only occasionally decay to lower-energy, non-inflating phases or the ground state. In chaotic inflation, proposed by physicist Andrei Linde, the peaks in the evolution of a scalar field (determining the energy of the vacuum) correspond to regions of rapid inflation which dominate. Chaotic inflation usually eternally inflates,[1] since the expansions of the inflationary peaks exhibit positive feedback and come to dominate the large-scale dynamics of the Universe.

Alan Guth's 2007 paper, "Eternal inflation and its implications",[1] details what is now known on the subject and demonstrates that this particular flavor of inflationary universe theory is still considered viable more than 20 years after its inception.[2][3][4]

Inflation and the multiverse

Both Linde and Guth believe that inflationary models of the early universe most likely lead to a multiverse, though more proof is required.
"It's hard to build models of inflation that don't lead to a multiverse. It's not impossible, so I think there's still certainly research that needs to be done. But most models of inflation do lead to a multiverse, and evidence for inflation will be pushing us in the direction of taking [the idea of a] multiverse seriously." (Alan Guth)[5]
"It's possible to invent models of inflation that do not allow [a] multiverse, but it's difficult. Every experiment that brings better credence to inflationary theory brings us much closer to hints that the multiverse is real." (Andrei Linde)[5]
Polarization in the cosmic microwave background radiation suggests that inflationary models for the early universe are more likely, but confirmation is needed.[5]

History

Inflation, or the inflationary universe theory, was developed as a way to overcome the few remaining problems with what was otherwise considered a successful theory of cosmology, the Big Bang model. Although Alexei Starobinsky of the L.D. Landau Institute of Theoretical Physics in Moscow developed the first realistic inflation theory in 1979,[6][7] he did not articulate its relevance to modern cosmological problems.

In 1979, Alan Guth of the United States developed an inflationary model independently, which offered a mechanism for inflation to begin: the decay of a so-called false vacuum into "bubbles" of "true vacuum" that expanded at the speed of light. Guth coined the term "inflation", and he was the first to discuss the theory with other scientists worldwide. But this formulation was problematic, as there was no consistent way to bring an end to the inflationary epoch and end up with the isotropic, homogeneous Universe observed today. (See False vacuum: Development of theories). In 1982, this "graceful exit problem" was solved by Andrei Linde in the new inflationary scenario. A few months later, the same result was also obtained by Andreas Albrecht and Paul J. Steinhardt.

In 1986, Linde published an alternative model of inflation, entitled "Eternally Existing Self-Reproducing Chaotic Inflationary Universe",[8] that reproduced the successes of new inflation and provides a detailed description of what has become known as the Chaotic Inflation theory or eternal inflation. The Chaotic Inflation theory is in some ways similar to Fred Hoyle's steady state theory, as it employs the concept of a universe that is eternally existing, and thus does not require a unique beginning or an ultimate end of the cosmos.

Quantum fluctuations of the inflaton field

Chaotic Inflation theory models quantum fluctuations in the rate of inflation.[9] Those regions with a higher rate of inflation expand faster and dominate the universe, despite the natural tendency of inflation to end in other regions. This allows inflation to continue forever, to produce future-eternal inflation.
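The claim that the fastest-inflating regions come to dominate can be illustrated with a toy Monte Carlo calculation (a sketch, not from the source; the rate distribution and time values are arbitrary): assign many regions random expansion rates, grow each region's volume exponentially at its own rate, and the top few percent of regions soon hold essentially all of the volume.

```python
# A toy Monte Carlo sketch (not from the source): regions are assigned random
# expansion rates and their volumes grow exponentially at those rates. The
# fastest-inflating regions quickly come to hold nearly all of the volume.
# The rate distribution and time values are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
rates = rng.uniform(0.0, 1.0, size=10_000)     # one random expansion rate per region
for t in (0, 10, 30, 60):
    volume = np.exp(rates * t)                 # each region grows at its own rate
    top_1pct = np.sort(volume)[-100:].sum() / volume.sum()
    print(f"t = {t:2d}: the 1% fastest regions hold {top_1pct:.1%} of the total volume")
```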
Within the framework of established knowledge of physics and cosmology, our universe could be one of many in a super-universe or multiverse. Linde (1990, 1994) has proposed that a background space-time "foam" empty of matter and radiation will experience local quantum fluctuations in curvature, forming many bubbles of false vacuum that individually inflate into mini-universes with random characteristics. Each universe within the multiverse can have a different set of constants and physical laws. Some might have life of a form different from ours; others might have no life at all or something even more complex or so different that we cannot even imagine it. Obviously we are in one of those universes with life.[10]
Past-eternal models have been proposed which adhere to the perfect cosmological principle and have features of the steady state cosmos.[11][12][13]

A recent paper by Kohli and Haslam [14] analyzed Linde's chaotic inflation theory in which the quantum fluctuations are modelled as Gaussian white noise. They showed that in this popular scenario, eternal inflation in fact cannot be eternal, and the random noise leads to spacetime being filled with singularities. This was demonstrated by showing that solutions to the Einstein field equations diverge in a finite time. Their paper therefore concluded that the theory of eternal inflation based on random quantum fluctuations would not be a viable theory, and the resulting existence of a multiverse is "still very much an open question that will require much deeper investigation".

Differential decay

In standard inflation, inflationary expansion occurred while the universe was in a false vacuum state, halting when the universe decayed to a true vacuum state; this transition was general and inclusive, with homogeneity throughout, yielding a single expanding universe, "our general reality", in which the laws of physics are consistent everywhere. In this case, the physical laws "just happen" to be compatible with the evolution of life.

The bubble universe model proposes that different regions of this inflationary universe (termed a multiverse) decayed to a true vacuum state at different times, with the decaying regions corresponding to "sub"-universes not in causal contact with each other. The result is different physical laws in different regions, which are then subject to "selection": the components of each region are determined by the survivability of the quantum components within that region. The end result will be a finite number of universes with physical laws consistent within each region of spacetime.

False vacuum and true vacuum

Variants of the bubble universe model postulate multiple false vacuum states, which spawn lower-energy false-vacuum "progeny" universes, which in turn produce true-vacuum progeny universes within themselves.

Evidence from the fluctuation level in our universe

New inflation does not produce a perfectly symmetric universe; tiny quantum fluctuations in the inflaton are created. These tiny fluctuations form the primordial seeds for all structure created in the later universe. These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in the Soviet Union in analyzing Starobinsky's similar model.[15][16][17] In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University.[18] The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking;[19] Starobinsky;[20] Guth and So-Young Pi;[21] and James M. Bardeen, Paul Steinhardt and Michael Turner.[22]

The fact that these models are consistent with WMAP data adds weight to the idea that the Universe could be created in such a way. As a result, many physicists in the field agree it is possible, but needs further support to be accepted.[23]

Frank J. Tipler



From Wikipedia, the free encyclopedia

Frank Jennings Tipler
Born: February 1, 1947, Andalusia, Alabama[1]
Nationality: American
Education: PhD (Physics)
Alma mater: Massachusetts Institute of Technology; University of Maryland, College Park
Occupation: Mathematical physicist
Employer: Tulane University
Known for: Omega Point Theory
Website: http://tulane.edu/sse/pep/faculty-and-staff/faculty/frank-tipler.cfm

Frank Jennings Tipler (born February 1, 1947) is a mathematical physicist and cosmologist, holding a joint appointment in the Departments of Mathematics and Physics at Tulane University.[2] Tipler has authored books and papers on the Omega Point, which he claims is a mechanism for the resurrection of the dead. Some have argued that it is pseudoscience.[3] He is also known for his theories on the Tipler cylinder time machine.

Biography

Tipler was born in Andalusia, Alabama, to Jewish parents Frank Jennings Tipler Jr., a lawyer, and Anne Tipler, a homemaker.[1] From 1965 through 1969, Tipler attended the Massachusetts Institute of Technology, where he completed a bachelor of science degree in physics.[2] In 1976 he completed his PhD at the University of Maryland.[4][5] Tipler then held a series of postdoctoral research positions in physics at three universities, the final one at the University of Texas, working under John Archibald Wheeler, Abraham Taub, Rainer K. Sachs, and Dennis W. Sciama.[2] Tipler became an Associate Professor in mathematical physics in 1981, and a full Professor in 1987, at Tulane University, where he has been a faculty member ever since.[2]

The Omega Point cosmology

The Omega Point is a term Tipler uses to describe a cosmological state in the distant proper-time future of the universe that he maintains is required by the known physical laws. According to this cosmology, it is required for the known laws of physics to be mutually consistent that intelligent life take over all matter in the Universe and eventually force its collapse. During that collapse, the computational capacity of the Universe diverges to infinity and environments emulated with that computational capacity last for an infinite duration as the Universe attains a solitary-point cosmological singularity. This singularity is Tipler's Omega Point.[6] With computational resources diverging to infinity, Tipler states that a society far in the future would be able to resurrect the dead by emulating all alternative universes of our universe from its start at the Big Bang.[7] Tipler identifies the Omega Point with God, since, in his view, the Omega Point has all the properties claimed for gods by most of the traditional religions.[7][8]

Tipler's argument that the omega point cosmology is required by the known physical laws is a more recent development that arose after the publication of his 1994 book The Physics of Immortality. In that book (and in papers he had published up to that time), Tipler had offered the Omega Point cosmology as a hypothesis, while still claiming to confine the analysis to the known laws of physics.[9]

Tipler defined the "final anthropic principle" (FAP) along with co-author physicist John D. Barrow in their 1986 book The Anthropic Cosmological Principle as a generalization of the anthropic principle thus:
Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.[10]

Criticism

Critics of the final anthropic principle say that its arguments violate the Copernican principle, that it incorrectly applies the laws of probability, and that it is really a theology or metaphysics principle made to sound plausible to laypeople by using the esoteric language of physics. Martin Gardner dubbed FAP the "completely ridiculous anthropic principle" (CRAP).[11] Oxford-based philosopher Nick Bostrom writes that the final anthropic principle has no claim on any special methodological status; it is "pure speculation", despite attempts to elevate it by calling it a "principle".[12] Philosopher Rem B. Edwards called it "futuristic, pseudoscientific eschatology" that is "highly conjectural, unverified, and improbable".[13]

Physicist David Deutsch incorporates Tipler's Omega Point cosmology as a central feature of the fourth strand of his "four strands" concept of fundamental reality and defends the physics of the Omega Point cosmology,[14] although he is highly critical of Tipler's theological conclusions[15] and what Deutsch states are exaggerated claims that have caused other scientists and philosophers to reject his theory out of hand.[16] Researcher Anders Sandberg pointed out that he believes the Omega Point Theory has many flaws, including missing proofs.[17]

Tipler's Omega Point theories have received criticism by physicists and skeptics.[18][19][20] George Ellis, writing in the journal Nature, described Tipler's book on the Omega Point as "a masterpiece of pseudoscience… the product of a fertile and creative imagination unhampered by the normal constraints of scientific and philosophical discipline",[3] and Michael Shermer devoted a chapter of Why People Believe Weird Things to enumerating what he thought to be flaws in Tipler's thesis.[21]
Physicist Sean M. Carroll thought Tipler's early work was constructive but that now he has become a "crackpot".[22] In a review of Tipler's The Physics of Christianity, Lawrence Krauss described the book as the most "extreme example of uncritical and unsubstantiated arguments put into print by an intelligent professional scientist".[23]



Simulated reality


From Wikipedia, the free encyclopedia

Simulated reality is the hypothesis that reality could be simulated—for example by computer simulation—to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation.
This is quite different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of actuality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to separate from "true" reality.

There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.

Types of simulation

Brain-computer interface

In brain-computer interface simulations, each participant enters from outside, directly connecting their brain to the simulation computer. The computer transmits sensory data to the participant, reads and responds to their desires and actions in return; in this manner they interact with the simulated world and receive feedback from it. The participant may be induced by any number of possible means to forget, temporarily or otherwise, that they are inside a virtual realm (e.g. "passing through the veil", a term borrowed from Christian tradition, which describes the passage of a soul from an earthly body to an afterlife). While inside the simulation, the participant's consciousness is represented by an avatar, which can look very different from the participant's actual appearance.
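
The loop described above (decode the participant's intent, advance the world, return sensory feedback) can be sketched in a few lines. The sketch below is purely illustrative: SimulationHost, read_motor_intent, step_world and send_sensory_frame are hypothetical names, and no real neural-interface hardware or API is implied.

```python
# Purely illustrative sketch of the brain-computer-interface feedback loop
# described above. All names (SimulationHost, read_motor_intent, etc.) are
# hypothetical; no real neural-interface hardware or API is implied.

class SimulationHost:
    def __init__(self):
        self.avatar = {"position": (0.0, 0.0)}   # the participant's in-world body

    def read_motor_intent(self):
        """Stand-in for decoding the participant's intended actions."""
        return {"move": (1.0, 0.0)}               # e.g. "step east"

    def step_world(self, intent, dt):
        """Advance the simulated world, applying the participant's actions."""
        dx, dy = intent["move"]
        x, y = self.avatar["position"]
        self.avatar["position"] = (x + dx * dt, y + dy * dt)

    def send_sensory_frame(self):
        """Stand-in for rendering sights, sounds and touch back to the brain."""
        return {"you_are_at": self.avatar["position"]}

def run(host, steps=3, dt=1.0):
    # The core loop: read intent -> update world -> return sensory feedback.
    for _ in range(steps):
        intent = host.read_motor_intent()
        host.step_world(intent, dt)
        print(host.send_sensory_frame())

run(SimulationHost())
```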

Virtual people

In a virtual-people simulation, every inhabitant is a native of the simulated world. They do not have a "real" body in the external reality of the physical world. Instead, each is a fully simulated entity, possessing an appropriate level of consciousness that is implemented using the simulation's own logic (i.e. using its own physics). As such, they could be downloaded from one simulation to another, or even archived and resurrected at a later time. It is also possible that a simulated entity could be moved out of the simulation entirely by means of mind transfer into a synthetic body.
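
If an inhabitant really is nothing but simulation state, then "archiving" and "resurrecting" it reduce to copying data. The toy sketch below, built around an entirely hypothetical Inhabitant record, shows that idea in miniature; it says nothing about whether consciousness would survive such a transfer.

```python
# Toy illustration only: if a simulated inhabitant is nothing but state,
# archiving, transferring and "resurrecting" it reduce to copying data.
# The Inhabitant structure here is hypothetical.

import json

class Inhabitant:
    def __init__(self, name, memories, body_params):
        self.name = name
        self.memories = memories          # everything the entity has experienced
        self.body_params = body_params    # how it is rendered in its world

    def to_archive(self):
        """Serialize the complete state for storage."""
        return json.dumps(self.__dict__)

    @classmethod
    def from_archive(cls, blob):
        """Reconstruct ("resurrect") the entity from an archive."""
        data = json.loads(blob)
        return cls(data["name"], data["memories"], data["body_params"])

# Archive an inhabitant of simulation A...
alice = Inhabitant("alice", ["sunrise over the simulated sea"], {"height": 1.7})
archive = alice.to_archive()

# ...and later instantiate an identical copy in simulation B.
alice_copy = Inhabitant.from_archive(archive)
assert alice_copy.memories == alice.memories
```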

Arguments

Simulation argument

The simulation hypothesis was first published by Hans Moravec.[1][2][3] Later, the philosopher Nick Bostrom developed an expanded argument examining the probability of our reality being a simulacrum.[4] His argument states that at least one of the following statements is very likely to be true:
1. Human civilization is unlikely to reach a level of technological maturity capable of producing simulated realities, or such simulations are physically impossible to construct.
2. A comparable civilization that reaches the aforementioned technological status will likely not produce a significant number of simulated realities (enough to push the probable number of digital entities beyond the probable number of "real" entities in a universe), for any of a number of reasons, such as diverting computational processing power to other tasks or ethical objections to holding entities captive in simulated realities.
3. Any entities with our general set of experiences are almost certainly living in a simulation.
In greater detail, Bostrom is attempting to prove a tripartite disjunction: that at least one of these propositions must be true. His argument rests on the premises that, given sufficiently advanced technology, it is possible to represent the populated surface of the Earth without recourse to digital physics; that the qualia experienced by a simulated consciousness are comparable or equivalent to those of a naturally occurring human consciousness; and that one or more levels of simulation within simulations would be feasible given only a modest expenditure of computational resources in the real world.
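
Bostrom's trilemma is often summarized with a simple fraction: if f_p is the fraction of civilizations that reach a simulation-capable stage and N is the average number of ancestor-simulations such a civilization runs, the expected fraction of observers who are simulated comes out to roughly (f_p × N) / (f_p × N + 1). The snippet below merely evaluates that back-of-the-envelope expression; the formula is a simplified paraphrase of the one in Bostrom's paper, and the input numbers are arbitrary illustrations, not estimates.

```python
# Back-of-the-envelope evaluation of the fraction of simulated observers,
# using a simplified form of the expression in Bostrom's simulation argument:
#   f_sim = (f_p * N) / (f_p * N + 1)
# where f_p is the fraction of civilizations that reach a simulation-capable
# stage and N is the average number of ancestor-simulations each one runs.
# The sample values below are arbitrary illustrations, not estimates.

def fraction_simulated(f_p: float, n_sims: float) -> float:
    return (f_p * n_sims) / (f_p * n_sims + 1)

for f_p, n_sims in [(1e-9, 1), (1e-6, 10), (0.01, 1_000_000)]:
    print(f"f_p={f_p}, N={n_sims}: f_sim ~ {fraction_simulated(f_p, n_sims):.6f}")
```

Unless the product f_p × N stays very small (propositions 1 or 2), the fraction comes out close to one, which is the content of proposition 3.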

If one assumes first that humans will not be destroyed or destroy themselves before developing such a technology, and, next, that human descendants will have no overriding legal restrictions or moral compunctions against simulating biospheres or their own historical biosphere, then it would be unreasonable to count ourselves among the small minority of genuine organisms who, sooner or later, will be vastly outnumbered by artificial simulations.

Epistemologically, it is not impossible to tell whether we are living in a simulation. For example, Bostrom suggests that a window could pop up saying: "You are living in a simulation. Click here for more information." However, imperfections in a simulated environment might be difficult for the native inhabitants to identify, and for purposes of authenticity, even the simulated memory of a blatant revelation might be purged programmatically. Nonetheless, should any evidence come to light, either for or against the skeptical hypothesis, it would radically alter the aforementioned probability.

The simulation argument also has implications for existential risks. If we are living in a simulation, then it is possible that our simulation could be shut down. Many futurists have speculated about how we can avoid this outcome. Ray Kurzweil argues in The Singularity Is Near that we should be interesting to our simulators, and that bringing about the Singularity is probably the most interesting event that could happen. The philosopher Phil Torres has argued that the simulation argument itself leads to the conclusion that, if we run simulations in the future, then there almost certainly exists a stack of nested simulations, with ours located towards the bottom. Since annihilation is inherited downwards, any terminal event in a simulation "above" ours would be a terminal event for us. If there are many simulations above us, the risk of an existential catastrophe could be significant.[5]
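
Because annihilation is inherited downwards, our survival depends on every simulation level above us surviving too. Under the crude, purely illustrative assumption that each level above ours is shut down independently with probability p over some period, the probability that we survive that period is (1 - p) raised to the number of levels, as the small sketch below evaluates.

```python
# Crude illustration of how nested simulations compound existential risk.
# Assumption (ours, for illustration only): each simulation level above ours
# is shut down independently with probability p over some fixed period.

def survival_probability(p_shutdown_per_level: float, levels_above: int) -> float:
    """Probability that none of the levels above us is shut down."""
    return (1.0 - p_shutdown_per_level) ** levels_above

for levels in [0, 1, 10, 100]:
    print(f"{levels:3d} levels above us -> "
          f"survival ~ {survival_probability(0.01, levels):.3f}")
```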

Relativity of reality

As to the question of whether we are living in a simulated reality or a 'real' one, the answer may be 'indistinguishable', in principle. In a commemorative article for the World Year of Physics 2005, physicist Bin-Guang Ma proposed the theory of 'relativity of reality'.[6][unreliable source?] The notion appears in ancient philosophy, in Zhuangzi's 'Butterfly Dream', and in analytical psychology.[7] Without special knowledge of a reference world, one cannot say with absolute skeptical certainty that one is experiencing "reality".

Computationalism

Computationalism is a philosophy-of-mind theory stating that cognition is a form of computation. It is relevant to the simulation hypothesis in that it illustrates how a simulation could contain conscious subjects, as required by a "virtual people" simulation. For example, it is well known that physical systems can be simulated to some degree of accuracy. If computationalism is correct, and if there is no problem in generating artificial consciousness or cognition, it would establish the theoretical possibility of a simulated reality. However, the relationship between cognition and the phenomenal qualia of consciousness is disputed. It is possible that consciousness requires a vital substrate that a computer cannot provide, and that simulated people, while behaving appropriately, would be philosophical zombies. This would undermine Nick Bostrom's simulation argument: we cannot be simulated consciousnesses if consciousness, as we know it, cannot be simulated. However, the skeptical hypothesis remains intact: we could still be envatted brains, existing as conscious beings within a simulated environment, even if consciousness cannot be simulated.
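
The uncontroversial premise that physical systems can be simulated to some degree of accuracy is easy to illustrate. The sketch below numerically integrates a simple harmonic oscillator; it is only a toy example of approximating physics and carries no implication about consciousness either way.

```python
# A physical system simulated to some degree of accuracy: a unit-mass
# harmonic oscillator integrated with the semi-implicit Euler method.
# Purely a toy illustration of "physics can be approximated numerically".

def simulate_oscillator(x0=1.0, v0=0.0, k=1.0, dt=0.001, steps=6283):
    x, v = x0, v0
    for _ in range(steps):
        v -= k * x * dt        # acceleration from the spring force
        x += v * dt
    return x, v

x, v = simulate_oscillator()
# After roughly one period (about 2*pi seconds) the state should be near (1, 0).
print(f"x ~ {x:.3f}, v ~ {v:.3f}")
```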
Some theorists[8][9] have argued that if the "consciousness-is-computation" version of computationalism and mathematical realism (or radical mathematical Platonism)[10] are true, then consciousness is computation, which is in principle platform-independent and thus admits of simulation. This argument states that a "Platonic realm" or ultimate ensemble would contain every algorithm, including those which implement consciousness. Hans Moravec has explored the simulation hypothesis and has argued for a kind of mathematical Platonism according to which every object (including e.g. a stone) can be regarded as implementing every possible computation.[1]

Dreaming

A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result, the "dream hypothesis" cannot be ruled out, although it has been argued that common sense and considerations of simplicity rule against it.[11] One of the first philosophers to question the distinction between reality and dreams was Zhuangzi, a Chinese philosopher of the 4th century BC. He phrased the problem as the well-known "Butterfly Dream", which goes as follows:
Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn't know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn't know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things. (2, tr. Burton Watson 1968:49)
The philosophical underpinnings of this argument are also brought up by Descartes, who was one of the first Western philosophers to do so. In Meditations on First Philosophy, he states "... there are no certain indications by which we may clearly distinguish wakefulness from sleep",[12] and goes on to conclude that "It is possible that I am dreaming right now and that all of my perceptions are false".[12]
Chalmers (2003) discusses the dream hypothesis, and notes that this comes in two distinct forms:
  • that he is currently dreaming, in which case many of his beliefs about the world are incorrect;
  • that he has always been dreaming, in which case the objects he perceives actually exist, albeit in his imagination.[13]
Both the dream argument and the simulation hypothesis can be regarded as skeptical hypotheses; however, in raising these doubts, just as Descartes noted that his own thinking convinced him of his own existence, the very existence of the argument is testament to the possibility of its own truth.

Another state of mind in which, some argue, an individual's perceptions have no physical basis in the real world is psychosis, though psychosis may have a physical basis in the real world, and explanations vary.

Computability of physics

A decisive refutation of any claim that our reality is computer-simulated would be the discovery of some uncomputable physics, because if reality is doing something that no computer can do, it cannot be a computer simulation. (Computability here generally means computability by a Turing machine; hypercomputation, or super-Turing computation, introduces other possibilities that are dealt with separately below.) In fact, known physics is held to be (Turing) computable,[14] but the statement "physics is computable" needs to be qualified in various ways. Before symbolic computation, a number, particularly a real number (one with an infinite number of digits), was said to be computable if a Turing machine would continue to spit out its digits endlessly, never reaching a "final digit".[15] This runs counter, however, to the idea of simulating physics in real time (or any plausible kind of time). Known physical laws (including those of quantum mechanics) are very much infused with real numbers and continua, and the universe seems to be able to decide their values on a moment-by-moment basis. As Richard Feynman put it:[16]
"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities".
The objection could be made that the simulation does not have to run in "real time".[17] This misses an important point, though: the shortfall is not linear; rather, it is a matter of performing an infinite number of computational steps in a finite time.[18]
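
The notion of a computable real number used above (a machine that keeps emitting digits and never reaches a "final" one) can be made concrete with a short digit generator. The example below, a minimal sketch of our own, emits decimal digits of the square root of 2 using only integer arithmetic; it can be resumed for as many digits as desired but never finishes the number.

```python
# A computable real number in the sense described above: a procedure that
# keeps emitting correct decimal digits of sqrt(2) forever, never reaching a
# "final digit". Only exact integer arithmetic is used (requires Python 3.8+).

from math import isqrt

def sqrt2_digits():
    """Yield the decimal digits of sqrt(2): 1, 4, 1, 4, 2, 1, 3, 5, ..."""
    two_scaled = 2
    while True:
        yield isqrt(two_scaled) % 10   # floor(sqrt(2) * 10**n) mod 10
        two_scaled *= 100              # shift by one more decimal place

gen = sqrt2_digits()
digits = [next(gen) for _ in range(20)]
print(f"sqrt(2) = {digits[0]}." + "".join(map(str, digits[1:])))
# The generator can always be asked for more digits; there is no last one.
```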

Note that these objections all relate to the idea of reality being exactly simulated. Ordinary computer simulations as used by physicists are always approximations.

These objections do not apply if the hypothetical simulation is being run on a hypercomputer, a hypothetical machine more powerful than a Turing machine.[19] There is, however, no way of determining whether the computers running a simulation are capable of doing things that the computers in the simulation cannot do. No one has shown that the laws of physics inside a simulation and those outside it have to be the same, and simulations of different physical laws have been constructed.[20] The problem is that there is no evidence that could conceivably be produced to show that the universe is not any kind of computer, making the simulation hypothesis unfalsifiable and therefore scientifically unacceptable, at least by Popperian standards.[21]

All conventional computers, however, are less than hypercomputational, and the simulated reality hypothesis is usually expressed in terms of conventional computers, i.e. Turing machines.

Roger Penrose, an English mathematical physicist, presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. He sees the collapse of the quantum wavefunction as playing an important role in brain function. (See consciousness causes collapse).

CantGoTu environments

In his book The Fabric of Reality, David Deutsch discusses how the limits to computability imposed by Gödel's Incompleteness Theorem affect the Virtual Reality rendering process.[22][23] In order to do this, Deutsch invents the notion of a CantGoTu environment (named after Cantor, Gödel, and Turing), using Cantor's diagonal argument to construct an 'impossible' Virtual Reality which a physical VR generator would not be able to generate. The way that this works is to imagine that all VR environments renderable by such a generator can be enumerated, and that we label them VR1, VR2, etc. Slicing time up into discrete chunks, we can create an environment which is unlike VR1 in the first timeslice, unlike VR2 in the second timeslice, and so on. This environment is not in the list, and so it cannot be generated by the VR generator. Deutsch then goes on to discuss a universal VR generator, which as a physical device would not be able to render all possible environments, but would be able to render those environments which can be rendered by all other physical VR generators. He argues that 'an environment which can be rendered' corresponds to a set of mathematical questions whose answers can be calculated, and discusses various forms of the Turing Principle, which in its initial form refers to the fact that it is possible to build a universal computer which can be programmed to execute any computation that any other machine can do. Attempts to capture the process of virtual reality rendering provide us with a version which states: "It is possible to build a virtual-reality generator, whose repertoire includes every physically possible environment".
 
In other words, a single, buildable physical object can mimic all the behaviours and responses of any other physically possible process or object. This, it is claimed, is what makes reality comprehensible. Later in the book, Deutsch argues for a very strong version of the Turing principle, namely: "It is possible to build a virtual reality generator whose repertoire includes every physically possible environment." However, in order to include every physically possible environment, the computer would have to be able to include a recursive simulation of the environment containing itself. Even so, a computer running a simulation need not run every possible physical moment to be plausible to its inhabitants.
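
The diagonal construction behind the CantGoTu environment can be spelled out directly. In the sketch below, an enumeration of environments is modelled as functions from a timeslice index to a rendered "frame" (here just 0 or 1); the diagonal environment is defined to differ from the n-th environment at the n-th timeslice, so it cannot occur anywhere in the enumeration. The enumeration itself is a hypothetical stand-in, and this captures only the bare diagonal argument, not Deutsch's full discussion.

```python
# Cantor-style diagonalization over an enumeration of "environments",
# modelled here as functions from a timeslice index to a frame (0 or 1).
# The enumeration below is a hypothetical stand-in for "every environment
# the VR generator can render".

def enumerated_environment(n):
    """The n-th renderable environment (toy stand-in)."""
    def environment(timeslice):
        return (n + timeslice) % 2        # some arbitrary but definite rule
    return environment

def cantgotu(timeslice):
    """An environment built to differ from environment n at timeslice n."""
    return 1 - enumerated_environment(timeslice)(timeslice)

# For every n, the CantGoTu environment disagrees with environment n
# somewhere (namely at timeslice n), so it is not in the enumeration.
for n in range(5):
    assert cantgotu(n) != enumerated_environment(n)(n)
print("CantGoTu differs from each listed environment at its own index.")
```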

Nested simulations

The existence of simulated reality is unprovable in any concrete sense: any "evidence" that is directly observed could be another simulation itself. In other words, there is an infinite regress problem with the argument. Even if we are a simulated reality, there is no way to be sure the beings running the simulation are not themselves a simulation, and the operators of that simulation are not a simulation.[24]

"Recursive simulation involves a simulation, or an entity in the simulation, creating another instance of the same simulation, running it and using its results" (Pooch and Sullivan 2000).[25]

Peer-to-Peer Explanation of Quantum Phenomena

In two recent articles, the philosopher Marcus Arvan has argued that a new version of the simulation hypothesis, the Peer-to-Peer Simulation Hypothesis, provides a unified explanation of a wide variety of quantum phenomena. According to Arvan, peer-to-peer networking (networking involving no central "dedicated server") inherently gives rise to (i) quantum superposition, (ii) quantum indeterminacy, (iii) the quantum measurement problem, (iv) wave-particle duality, (v) quantum wave-function "collapse", (vi) quantum entanglement, (vii) a minimum space-time distance (e.g. the Planck length), and (viii) the relativity of time to observers.[26][27]
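
As a very loose intuition pump only (and emphatically not a rendering of Arvan's model), the sketch below shows the narrow, generic point that in a peer-to-peer arrangement no single node holds the authoritative value of a shared variable: different peers can report different answers until they exchange messages and reconcile.

```python
# Generic toy of peer-to-peer state (this is NOT Arvan's model): with no
# dedicated server, no single node holds the authoritative value of a shared
# variable, and different peers can report different values until they
# exchange messages and reconcile.

class Peer:
    def __init__(self, name):
        self.name = name
        self.local_value = None           # this peer's current belief

    def observe(self):
        return self.local_value

    def set_local(self, value):
        self.local_value = value

def reconcile(peers):
    """Exchange messages and settle on one value
    (here, arbitrarily, the last non-None value in peer order)."""
    agreed = next((p.local_value for p in reversed(peers)
                   if p.local_value is not None), None)
    for p in peers:
        p.local_value = agreed

a, b = Peer("a"), Peer("b")
a.set_local("spin up")                    # only peer a has written anything
print(a.observe(), b.observe())           # -> spin up None  (no global fact yet)
reconcile([a, b])
print(a.observe(), b.observe())           # -> spin up spin up
```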

In fiction

Simulated reality is a theme that pre-dates science fiction. In Medieval and Renaissance religious theatre, the concept of the "world as theater" is frequent. Simulated reality in fiction has been explored by many authors, game designers, and film directors.

Criticism of atheism

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Criticism...