Friday, October 23, 2020

Emergence

From Wikipedia, the free encyclopedia

The formation of complex symmetrical and fractal patterns in snowflakes exemplifies emergence in a physical system.
 
A termite "cathedral" mound produced by a termite colony offers a classic example of emergence in nature

In philosophy, systems theory, science, and art, emergence occurs when an entity is observed to have properties its parts do not have on their own, properties or behaviors which emerge only when the parts interact in a wider whole.

Emergence plays a central role in theories of integrative levels and of complex systems. For instance, the phenomenon of life as studied in biology is an emergent property of chemistry, and psychological phenomena emerge from the neurobiological phenomena of living things.

In philosophy, theories that emphasize emergent properties have been called emergentism. Almost all accounts of emergentism include a form of epistemic or ontological irreducibility to the lower levels.

In philosophy

Philosophers often understand emergence as a claim about the etiology of a system's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole. Nicolai Hartmann (1882-1950), one of the first modern philosophers to write on emergence, termed this a categorial novum (new category).

Definitions

This idea of emergence has been around since at least the time of Aristotle. The many scientists and philosophers who have written on the concept include John Stuart Mill (Composition of Causes, 1843) and Julian Huxley (1887-1975).

The philosopher G. H. Lewes coined the term "emergent", writing in 1875:

Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same – their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference.

In 1999, economist Jeffrey Goldstein provided a current definition of emergence in the journal Emergence. Goldstein initially defined emergence as: "the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems".

In 2002 systems scientist Peter Corning described the qualities of Goldstein's definition in more detail:

The common characteristics are: (1) radical novelty (features not previously observed in systems); (2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time); (3) A global or macro "level" (i.e. there is some property of "wholeness"); (4) it is the product of a dynamical process (it evolves); and (5) it is "ostensive" (it can be perceived).

Corning suggests a narrower definition, requiring that the components be unlike in kind (following Lewes) and that they involve a division of labor between those components. He also says that living systems, like the game of chess, cannot be reduced to underlying laws of emergence even though they are emergent:

Rules, or laws, have no causal efficacy; they do not in fact 'generate' anything. They serve merely to describe regularities and consistent relationships in nature. These patterns may be very illuminating and important, but the underlying causal agencies must be separately specified (though often they are not). But that aside, the game of chess illustrates ... why any laws or rules of emergence and evolution are insufficient. Even in a chess game, you cannot use the rules to predict 'history' – i.e., the course of any given game. Indeed, you cannot even reliably predict the next move in a chess game. Why? Because the 'system' involves more than the rules of the game. It also includes the players and their unfolding, moment-by-moment decisions among a very large number of available options at each choice point. The game of chess is inescapably historical, even though it is also constrained and shaped by a set of rules, not to mention the laws of physics. Moreover, and this is a key point, the game of chess is also shaped by teleonomic, cybernetic, feedback-driven influences. It is not simply a self-ordered process; it involves an organized, 'purposeful' activity.

Strong and weak emergence

Usage of the notion "emergence" may generally be subdivided into two perspectives, that of "weak emergence" and "strong emergence". One paper discussing this division is "Weak Emergence", by the philosopher Mark Bedau. In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation or similar forms of after-the-fact analysis (for example, the formation of a traffic jam, the structure of a flight of starlings or a school of fish, or the formation of galaxies). Crucial in these simulations is that the interacting members retain their independence. If not, a new entity is formed with new, emergent properties: this is called strong emergence, which it is argued cannot be simulated or analysed.
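
To make the weak-emergence idea concrete, here is a minimal sketch (not from the article or from Bedau's paper) of the traffic-jam example: a Nagel-Schreckenberg-style cellular automaton in which each car follows simple local rules, yet jams form and can be studied only by running the simulation. All parameter values are illustrative.

    # Minimal Nagel-Schreckenberg-style traffic model: jams emerge from simple per-car
    # rules and can be studied by simulation, as in the weak-emergence examples above.
    import random

    ROAD_LEN, N_CARS, V_MAX, P_SLOW, STEPS = 100, 35, 5, 0.3, 50

    random.seed(1)
    positions = sorted(random.sample(range(ROAD_LEN), N_CARS))   # distinct starting cells
    velocity = {x: 0 for x in positions}                         # position -> speed

    for _ in range(STEPS):
        cars = sorted(velocity)
        new_velocity = {}
        for i, x in enumerate(cars):
            gap = (cars[(i + 1) % N_CARS] - x) % ROAD_LEN        # distance to car ahead (ring road)
            v = min(velocity[x] + 1, V_MAX)                      # accelerate
            v = min(v, gap - 1)                                  # don't hit the car ahead
            if v > 0 and random.random() < P_SLOW:               # random slowdown (driver "noise")
                v -= 1
            new_velocity[(x + v) % ROAD_LEN] = v
        velocity = new_velocity

    # Cars with velocity 0 mark the emergent jam, even though no rule mentions "jam".
    jammed = sum(1 for v in velocity.values() if v == 0)
    print(f"{jammed} of {N_CARS} cars are stopped after {STEPS} steps")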

Some common points between the two notions are that emergence concerns new properties produced as the system grows, which is to say ones which are not shared with its components or prior states. Also, it is assumed that the properties are supervenient rather than metaphysically primitive.

Weak emergence describes new properties arising in systems as a result of the interactions at an elemental level. However, Bedau stipulates that the properties can be determined only by observing or simulating the system, and not by any process of a reductionist analysis. As a consequence the emerging properties are scale dependent: they are only observable if the system is large enough to exhibit the phenomenon. Chaotic, unpredictable behaviour can be seen as an emergent phenomenon, while at a microscopic scale the behaviour of the constituent parts can be fully deterministic.

Bedau notes that weak emergence is not a universal metaphysical solvent, as the hypothesis that consciousness is weakly emergent would not resolve the traditional philosophical questions about the physicality of consciousness. However, Bedau concludes that adopting this view would provide, first, a precise sense in which emergence is involved in consciousness, and second, a notion of weak emergence that is metaphysically benign.

Strong emergence describes the direct causal action of a high-level system upon its components; qualities produced this way are irreducible to the system's constituent parts. The whole is other than the sum of its parts. An example from physics of such emergence is water, whose properties appear unpredictable even after an exhaustive study of the properties of its constituent hydrogen and oxygen atoms. It follows then that no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts.

Rejecting the distinction

However, biologist Peter Corning has asserted that "the debate about whether or not the whole can be predicted from the properties of the parts misses the point. Wholes produce unique combined effects, but many of these effects may be co-determined by the context and the interactions between the whole and its environment(s)". In accordance with his Synergism Hypothesis, Corning also stated: "It is the synergistic effects produced by wholes that are the very cause of the evolution of complexity in nature." Novelist Arthur Koestler used the metaphor of Janus (a symbol of the unity underlying complements like open/shut, peace/war) to illustrate how the two perspectives (strong vs. weak or holistic vs. reductionistic) should be treated as non-exclusive, and should work together to address the issues of emergence. Theoretical physicist PW Anderson states it this way:

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts.

Viability of strong emergence

Some thinkers question the plausibility of strong emergence as contravening our usual understanding of physics. Mark A. Bedau observes:

Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.

Strong emergence can be criticized for being causally overdetermined. The canonical example concerns emergent mental states (M and M∗) that supervene on physical states (P and P∗) respectively. Let M and M∗ be emergent properties. Let M∗ supervene on base property P∗. What happens when M causes M∗? Jaegwon Kim says:

In our schematic example above, we concluded that M causes M∗ by causing P∗. So M causes P∗. Now, M, as an emergent, must itself have an emergence base property, say P. Now we face a critical question: if an emergent, M, emerges from basal condition P, why cannot P displace M as a cause of any putative effect of M? Why cannot P do all the work in explaining why any alleged effect of M occurred? If causation is understood as nomological (law-based) sufficiency, P, as M's emergence base, is nomologically sufficient for it, and M, as P∗'s cause, is nomologically sufficient for P∗. It follows that P is nomologically sufficient for P∗ and hence qualifies as its cause…If M is somehow retained as a cause, we are faced with the highly implausible consequence that every case of downward causation involves overdetermination (since P remains a cause of P∗ as well). Moreover, this goes against the spirit of emergentism in any case: emergents are supposed to make distinctive and novel causal contributions.

If M is the cause of M∗, then M∗ is overdetermined because M∗ can also be thought of as being determined by P. One escape-route that a strong emergentist could take would be to deny downward causation. However, this would remove the proposed reason that emergent mental states must supervene on physical states, which in turn would call physicalism into question, and thus be unpalatable for some philosophers and physicists.

Meanwhile, others have worked towards developing analytical evidence of strong emergence. In 2009, Gu et al. presented a class of physical systems that exhibits non-computable macroscopic properties. More precisely, if one could compute certain macroscopic properties of these systems from the microscopic description of these systems, then one would be able to solve computational problems known to be undecidable in computer science. Gu et al. concluded that

Although macroscopic concepts are essential for understanding our world, much of fundamental physics has been devoted to the search for a 'theory of everything', a set of equations that perfectly describe the behavior of all fundamental particles. The view that this is the goal of science rests in part on the rationale that such a theory would allow us to derive the behavior of all macroscopic concepts, at least in principle. The evidence we have presented suggests that this view may be overly optimistic. A 'theory of everything' is one of many components necessary for complete understanding of the universe, but is not necessarily the only one. The development of macroscopic laws from first principles may involve more than just systematic logic, and could require conjectures suggested by experiments, simulations or insight.

Emergence and interaction

Emergent structures are patterns that emerge via the collective actions of many individual entities. To explain such patterns, one might conclude, per Aristotle, that emergent structures are other than the sum of their parts on the assumption that the emergent order will not arise if the various parts simply interact independently of one another. However, there are those who disagree. According to this argument, the interaction of each part with its immediate surroundings causes a complex chain of processes that can lead to order in some form. In fact, some systems in nature are observed to exhibit emergence based upon the interactions of autonomous parts, and some others exhibit emergence that at least at present cannot be reduced in this way. In particular renormalization methods in theoretical physics enable scientists to study systems that are not tractable as the combination of their parts.

Objective or subjective quality

Crutchfield regards the properties of complexity and organization of any system as subjective qualities determined by the observer.

Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analysed in terms of how model-building observers infer from measurements the computational capabilities embedded in non-linear processes. An observer’s notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer’s chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data.

On the other hand, Peter Corning argues: "Must the synergies be perceived/observed in order to qualify as emergent effects, as some theorists claim? Most emphatically not. The synergies associated with emergence are real and measurable, even if nobody is there to observe them."

The low entropy of an ordered system can be viewed as an example of subjective emergence: the observer sees an ordered system by ignoring the underlying microstructure (i.e. movement of molecules or elementary particles) and concludes that the system has a low entropy. On the other hand, chaotic, unpredictable behaviour can also be seen as subjective emergent, while at a microscopic scale the movement of the constituent parts can be fully deterministic.

In religion, art and humanities

In religion, emergence grounds expressions of religious naturalism and syntheism in which a sense of the sacred is perceived in the workings of entirely naturalistic processes by which more complex forms arise or evolve from simpler forms. Examples are detailed in The Sacred Depths of Nature by Ursula Goodenough & Terrence Deacon and Beyond Reductionism: Reinventing the Sacred by Stuart Kauffman, both from 2006, and in Syntheism – Creating God in The Internet Age by Alexander Bard & Jan Söderqvist from 2014. An early argument (1904–05) for the emergence of social formations, in part stemming from religion, can be found in Max Weber's most famous work, The Protestant Ethic and the Spirit of Capitalism. Recently, the emergence of a new social system is linked with the emergence of order from nonlinear relationships among multiple interacting units, where multiple interacting units are individual thoughts, consciousness, and actions.

In art, emergence is used to explore the origins of novelty, creativity, and authorship. Some art/literary theorists (Wheeler, 2006; Alexander, 2011) have proposed alternatives to postmodern understandings of "authorship" using the complexity sciences and emergence theory. They contend that artistic selfhood and meaning are emergent, relatively objective phenomena. Michael J. Pearce has used emergence to describe the experience of works of art in relation to contemporary neuroscience. Practicing artist Leonel Moura, in turn, attributes to his "artbots" a real, if nonetheless rudimentary, creativity based on emergent principles. In literature and linguistics, the concept of emergence has been applied in the domain of stylometry to explain the interrelation between the syntactical structures of the text and the author style (Slautina, Marusenko, 2014).

In international development, concepts of emergence have been used within a theory of social change termed SEED-SCALE to show how standard principles interact to bring forward socio-economic development fitted to cultural values, community economics, and natural environment (local solutions emerging from the larger socio-econo-biosphere). These principles can be implemented utilizing a sequence of standardized tasks that self-assemble in individually specific ways utilizing recursive evaluative criteria.

In postcolonial studies, the term "Emerging Literature" refers to a contemporary body of texts that is gaining momentum in the global literary landscape (v. esp.: J.M. Grassin, ed. Emerging Literatures, Bern, Berlin, etc. : Peter Lang, 1996). By opposition, "emergent literature" is rather a concept used in the theory of literature.

Emergent properties and processes

An emergent behavior or emergent property can appear when a number of simple entities (agents) operate in an environment, forming more complex behaviors as a collective. If emergence happens over disparate size scales, then the reason is usually a causal relation across different scales. In other words, there is often a form of top-down feedback in systems with emergent properties. The processes causing emergent properties may occur in either the observed or observing system, and are commonly identifiable by their patterns of accumulating change, generally called 'growth'. Emergent behaviours can occur because of intricate causal relations across different scales and feedback, known as interconnectivity. The emergent property itself may be either very predictable or unpredictable and unprecedented, and represent a new level of the system's evolution. The complex behaviour or properties are not a property of any single such entity, nor can they easily be predicted or deduced from behaviour in the lower-level entities, and might in fact be irreducible to such behavior. The shape and behaviour of a flock of birds or school of fish are good examples of emergent properties.

One reason emergent behaviour is hard to predict is that the number of interactions between a system's components increases exponentially with the number of components, thus allowing for many new and subtle types of behaviour to emerge. Emergence is often a product of particular patterns of interaction. Negative feedback introduces constraints that serve to fix structures or behaviours. In contrast, positive feedback promotes change, allowing local variations to grow into global patterns. Another way in which interactions lead to emergent properties is dual-phase evolution. This occurs where interactions are applied intermittently, leading to two phases: one in which patterns form or grow, the other in which they are refined or removed.
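
A tiny numerical sketch (my own illustration, not from the article) of the two feedback types just described, under the assumption that a single "deviation" variable is fed back each step with a fixed gain: negative feedback pins the deviation near a fixed structure, while positive feedback lets a small local variation grow into a large-scale change.

    # Contrast of the two feedback types named above: a deviation fed back with |gain| < 1
    # shrinks (negative feedback fixes structure); with |gain| > 1 it grows (positive feedback
    # turns a local variation into a global change). Gains are illustrative.
    def step(x, gain):
        return gain * x

    x_neg = x_pos = 0.01          # the same small initial local variation
    for _ in range(20):
        x_neg = step(x_neg, gain=0.5)   # negative feedback: deviation damped each step
        x_pos = step(x_pos, gain=1.5)   # positive feedback: deviation amplified each step

    print(f"after 20 steps: negative feedback -> {x_neg:.6f}, positive feedback -> {x_pos:.1f}")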

On the other hand, merely having a large number of interactions is not enough by itself to guarantee emergent behaviour; many of the interactions may be negligible or irrelevant, or may cancel each other out. In some cases, a large number of interactions can in fact hinder the emergence of interesting behaviour, by creating a lot of "noise" to drown out any emerging "signal"; the emergent behaviour may need to be temporarily isolated from other interactions before it reaches a critical mass sufficient to be self-supporting. Thus it is not just the sheer number of connections between components which encourages emergence; it is also how these connections are organised. A hierarchical organisation is one example that can generate emergent behaviour (a bureaucracy may behave in a way quite different from the individual departments of that bureaucracy); but emergent behaviour can also arise from more decentralized organisational structures, such as a marketplace. In some cases, the system has to reach a combined threshold of diversity, organisation, and connectivity before emergent behaviour appears.

Unintended consequences and side effects are closely related to emergent properties. Luc Steels writes: "A component has a particular functionality but this is not recognizable as a subfunction of the global functionality. Instead a component implements a behaviour whose side effect contributes to the global functionality ... Each behaviour has a side effect and the sum of the side effects gives the desired functionality". In other words, the global or macroscopic functionality of a system with "emergent functionality" is the sum of all "side effects", of all emergent properties and functionalities.

Systems with emergent properties or emergent structures may appear to defy entropic principles and the second law of thermodynamics, because they form and increase order despite the lack of command and central control. This is possible because open systems can extract information and order out of the environment.

Emergence helps to explain why the fallacy of division is a fallacy.

Emergent structures in nature

Ripple patterns in a sand dune, created by wind or water, are an example of an emergent structure in nature.
 
Giant's Causeway in Northern Ireland is an example of a complex emergent structure.

Emergent structures can be found in many natural phenomena, from the physical to the biological domain. For example, the shapes of weather phenomena such as hurricanes are emergent structures. The development and growth of complex, orderly crystals, as driven by the random motion of water molecules within a conducive natural environment, is another example of an emergent process, where randomness can give rise to complex and deeply attractive, orderly structures.

Water crystals forming on glass demonstrate an emergent, fractal process occurring under appropriate conditions of temperature and humidity.

However, crystalline structure and hurricanes are said to have a self-organizing phase.

It is useful to distinguish three forms of emergent structures. A first-order emergent structure occurs as a result of shape interactions (for example, hydrogen bonds in water molecules lead to surface tension). A second-order emergent structure involves shape interactions played out sequentially over time (for example, changing atmospheric conditions as a snowflake falls to the ground build upon and alter its form). Finally, a third-order emergent structure is a consequence of shape, time, and heritable instructions. For example, an organism's genetic code affects the form of the organism's systems in space and time.

Nonliving, physical systems

In physics, emergence is used to describe a property, law, or phenomenon which occurs at macroscopic scales (in space or time) but not at microscopic scales, despite the fact that a macroscopic system can be viewed as a very large ensemble of microscopic systems.

An emergent property need not be more complicated than the underlying non-emergent properties which generate it. For instance, the laws of thermodynamics are remarkably simple, even if the laws which govern the interactions between component particles are complex. The term emergence in physics is thus used not to signify complexity, but rather to distinguish which laws and concepts apply to macroscopic scales, and which ones apply to microscopic scales.

However, another, perhaps more broadly applicable way to conceive of the emergent divide does involve a dose of complexity, insofar as the computational feasibility of going from the microscopic to the macroscopic property indicates the 'strength' of the emergence. This is better understood given the following definition of emergence that comes from physics:

"An emergent behavior of a physical system is a qualitative property that can only occur in the limit that the number of microscopic constituents tends to infinity."

Since there are no actually infinite systems in the real world, there is no obvious naturally occurring notion of a hard separation between the properties of the constituents of a system and those of the emergent whole. As discussed below, classical mechanics is thought to be emergent from quantum mechanics, though in principle, quantum dynamics fully describes everything happening at a classical level. However, it would take a computer larger than the size of the universe, with more computing time than the lifetime of the universe, to describe the motion of a falling apple in terms of the locations of its electrons; thus we can take this to be a "strong" emergent divide.

Some examples include:

  • Classical mechanics: The laws of classical mechanics can be said to emerge as a limiting case from the rules of quantum mechanics applied to large enough masses. This is particularly strange since quantum mechanics is generally thought of as more complicated than classical mechanics.
  • Friction: Forces between elementary particles are conservative. However, friction emerges when considering more complex structures of matter, whose surfaces can convert mechanical energy into heat energy when rubbed against each other. Similar considerations apply to other emergent concepts in continuum mechanics such as viscosity, elasticity, tensile strength, etc.
  • Patterned ground: the distinct, and often symmetrical geometric shapes formed by ground material in periglacial regions.
  • Statistical mechanics was initially derived using the concept of a large enough ensemble that fluctuations about the most likely distribution can be all but ignored. However, small clusters do not exhibit sharp first order phase transitions such as melting, and at the boundary it is not possible to completely categorize the cluster as a liquid or solid, since these concepts are (without extra definitions) only applicable to macroscopic systems. Describing a system using statistical mechanics methods is much simpler than using a low-level atomistic approach.
  • Electrical networks: The bulk conductive response of binary (RC) electrical networks with random arrangements, known as the Universal Dielectric Response (UDR), can be seen as emergent properties of such physical systems. Such arrangements can be used as simple physical prototypes for deriving mathematical formulae for the emergent responses of complex systems.
  • Weather

Temperature is sometimes used as an example of an emergent macroscopic behaviour. In classical dynamics, a snapshot of the instantaneous momenta of a large number of particles at equilibrium is sufficient to find the average kinetic energy per degree of freedom which is proportional to the temperature. For a small number of particles the instantaneous momenta at a given time are not statistically sufficient to determine the temperature of the system. However, using the ergodic hypothesis, the temperature can still be obtained to arbitrary precision by further averaging the momenta over a long enough time.
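
A short sketch (not from the article) of this point, assuming a 1-D ideal gas at equilibrium: temperature is estimated from sampled momenta via equipartition (the average of p^2/(2m) per degree of freedom equals k_B*T/2), and the estimate is only sharp when the number of particles is large.

    # Temperature as a statistical, emergent quantity: estimate T from sampled momenta
    # via equipartition and compare a handful of particles with a large ensemble.
    import numpy as np

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    m = 6.6e-27             # mass of a helium atom, kg (illustrative choice)
    T_true = 300.0          # kelvin

    def estimate_T(n_particles, rng):
        # 1-D momenta at equilibrium are Gaussian with variance m * k_B * T
        p = rng.normal(0.0, np.sqrt(m * k_B * T_true), size=n_particles)
        return np.mean(p**2) / (m * k_B)   # equipartition: T = <p^2> / (m k_B) per 1-D dof

    rng = np.random.default_rng(0)
    for n in (10, 1_000, 1_000_000):
        print(f"N = {n:>9}: estimated T = {estimate_T(n, rng):7.2f} K (true {T_true} K)")
    # With few particles the instantaneous momenta are not statistically sufficient; the
    # estimate only sharpens as N grows (or, per the ergodic hypothesis, by averaging over time).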

Convection in a liquid or gas is another example of emergent macroscopic behaviour that makes sense only when considering differentials of temperature. Convection cells, particularly Bénard cells, are an example of a self-organizing system (more specifically, a dissipative system) whose structure is determined both by the constraints of the system and by random perturbations: the possible realizations of the shape and size of the cells depends on the temperature gradient as well as the nature of the fluid and shape of the container, but which configurations are actually realized is due to random perturbations (thus these systems exhibit a form of symmetry breaking).

In some theories of particle physics, even such basic structures as mass, space, and time are viewed as emergent phenomena, arising from more fundamental concepts such as the Higgs boson or strings. In some interpretations of quantum mechanics, the perception of a deterministic reality, in which all objects have a definite position, momentum, and so forth, is actually an emergent phenomenon, with the true state of matter being described instead by a wavefunction which need not have a single position or momentum. Most of the laws of physics themselves, as we experience them today, appear to have emerged during the course of time, making emergence the most fundamental principle in the universe and raising the question of what might be the most fundamental law of physics from which all others emerged. Chemistry can in turn be viewed as an emergent property of the laws of physics. Biology (including biological evolution) can be viewed as an emergent property of the laws of chemistry. Similarly, psychology could be understood as an emergent property of neurobiological laws. Finally, some economic theories understand economy as an emergent feature of psychology.

According to Laughlin, for many particle systems, nothing can be calculated exactly from the microscopic equations, and macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology, and have properties that do not depend on many microscopic details. This does not mean that the microscopic interactions are irrelevant, but simply that you do not see them anymore — you only see a renormalized effect of them. Laughlin is a pragmatic theoretical physicist: if you cannot, possibly ever, calculate the broken symmetry macroscopic properties from the microscopic equations, then what is the point of talking about reducibility?

Living, biological systems

Emergence and evolution

Life is a major source of complexity, and evolution is the major process behind the varying forms of life. In this view, evolution is the process behind the growth of complexity in the natural world and behind the emergence of complex living beings and life-forms.

Life is thought to have emerged in the early RNA world when RNA chains began to express the basic conditions necessary for natural selection to operate as conceived by Darwin: heritability, variation of type, and competition for limited resources. Fitness of an RNA replicator (its per capita rate of increase) would likely be a function of adaptive capacities that were intrinsic (in the sense that they were determined by the nucleotide sequence) and the availability of resources. The three primary adaptive capacities may have been (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type); (2) the capacity to avoid decay; and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations of the RNA replicators (see “Ribozyme”) that, in turn, would be encoded in their individual nucleotide sequences. Competitive success among different replicators would have depended on the relative values of these adaptive capacities.

Regarding causality in evolution Peter Corning observes:

Synergistic effects of various kinds have played a major causal role in the evolutionary process generally and in the evolution of cooperation and complexity in particular... Natural selection is often portrayed as a “mechanism”, or is personified as a causal agency... In reality, the differential “selection” of a trait, or an adaptation, is a consequence of the functional effects it produces in relation to the survival and reproductive success of a given organism in a given environment. It is these functional effects that are ultimately responsible for the trans-generational continuities and changes in nature.

Per his definition of emergence, Corning also addresses emergence and evolution:

[In] evolutionary processes, causation is iterative; effects are also causes. And this is equally true of the synergistic effects produced by emergent systems. In other words, emergence itself... has been the underlying cause of the evolution of emergent phenomena in biological evolution; it is the synergies produced by organized systems that are the key

Swarming is a well-known behaviour in many animal species from marching locusts to schooling fish to flocking birds. Emergent structures are a common strategy found in many animal groups: colonies of ants, mounds built by termites, swarms of bees, shoals/schools of fish, flocks of birds, and herds/packs of mammals.

An example to consider in detail is an ant colony. The queen does not give direct orders and does not tell the ants what to do. Instead, each ant reacts to stimuli in the form of chemical scent from larvae, other ants, intruders, food and buildup of waste, and leaves behind a chemical trail, which, in turn, provides a stimulus to other ants. Here each ant is an autonomous unit that reacts depending only on its local environment and the genetically encoded rules for its variety of ant. Despite the lack of centralized decision making, ant colonies exhibit complex behavior and have even demonstrated the ability to solve geometric problems. For example, colonies routinely find the maximum distance from all colony entrances to dispose of dead bodies.
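
A minimal stigmergy sketch (not from the article), loosely modelled on the classic "double bridge" ant experiments: simulated ants choose between a short and a long path with probability proportional to pheromone, and the colony converges on the shorter path with no central decision-maker. The deposit and evaporation rates are illustrative assumptions.

    # Ants choose between a short and a long path; each trip deposits pheromone, shorter
    # trips reinforce faster, and the colony converges on the short path without any
    # centralized decision making.
    import random

    random.seed(42)
    pheromone = {"short": 1.0, "long": 1.0}     # start with no preference
    length = {"short": 1.0, "long": 2.0}        # the long path takes twice as long
    EVAPORATION = 0.02

    for trip in range(2000):
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[path] += 1.0 / length[path]   # shorter path -> stronger reinforcement per trip
        for p in pheromone:
            pheromone[p] *= (1.0 - EVAPORATION) # existing trails slowly evaporate

    share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
    print(f"fraction of pheromone on the short path after 2000 trips: {share:.2f}")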

It appears that environmental factors may play a role in influencing emergence. Research on the bee species Macrotera portalis suggests environmentally induced emergence: in this species, the bees emerge from their nests in a pattern consistent with rainfall, specifically with the late-summer rains of southwestern deserts, and show little activity in the spring.

Organization of life

A broader example of emergent properties in biology is viewed in the biological organisation of life, ranging from the subatomic level to the entire biosphere. For example, individual atoms can be combined to form molecules such as polypeptide chains, which in turn fold and refold to form proteins, which in turn create even more complex structures. These proteins, assuming their functional status from their spatial conformation, interact together and with other molecules to achieve higher biological functions and eventually create an organism. Another example is how cascade phenotype reactions, as detailed in chaos theory, arise from individual genes mutating their respective positioning. At the highest level, all the biological communities in the world form the biosphere, within which human participants form societies, whose complex interactions in turn give rise to meta-social systems such as the stock market.

Emergence of mind

Among the phenomena considered in the evolutionary account of life, understood as a continuous history marked by stages at which fundamentally new forms have appeared, is the origin of sapiens intelligence. The emergence of mind and its evolution are researched and considered as a separate phenomenon in a special system of knowledge called noogenesis.

In humanity

Spontaneous order

Groups of human beings, left free to each regulate themselves, tend to produce spontaneous order, rather than the meaningless chaos often feared. This has been observed in society at least since Chuang Tzu in ancient China. Human beings are the basic elements of social systems, which perpetually interact and create, maintain, or untangle mutual social bonds. Social bonds in social systems are perpetually changing in the sense of the ongoing reconfiguration of their structure. A classic traffic roundabout is also a good example, with cars moving in and out with such effective organization that some modern cities have begun replacing stoplights at problem intersections with traffic circles, and getting better results. Open-source software and Wiki projects form an even more compelling illustration.

Emergent processes or behaviors can be seen in many other places, such as cities, cabal and market-dominant minority phenomena in economics, organizational phenomena in computer simulations and cellular automata. Whenever there is a multitude of individuals interacting, an order emerges from disorder; a pattern, a decision, a structure, or a change in direction occurs.

Economics

The stock market (or any market for that matter) is an example of emergence on a grand scale. As a whole it precisely regulates the relative security prices of companies across the world, yet it has no leader; with no central planning in place, there is no one entity which controls the workings of the entire market. Agents, or investors, have knowledge of only a limited number of companies within their portfolio, and must follow the regulatory rules of the market and analyse the transactions individually or in large groupings. Trends and patterns emerge which are studied intensively by technical analysts.

World Wide Web and the Internet

The World Wide Web is a popular example of a decentralized system exhibiting emergent properties. There is no central organization rationing the number of links, yet the number of links pointing to each page follows a power law in which a few pages are linked to many times and most pages are seldom linked to. A related property of the network of links in the World Wide Web is that almost any pair of pages can be connected to each other through a relatively short chain of links. Although relatively well known now, this property was initially unexpected in an unregulated network. It is shared with many other types of networks called small-world networks.
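
A short sketch (not from the article) of one standard mechanism that produces such a heavy-tailed link distribution: a Barabasi-Albert-style preferential-attachment process in which each new page links to an existing page with probability proportional to how many links that page already has. The network size and seed values are illustrative.

    # Preferential attachment: new pages link to already well-linked pages, producing the
    # heavy-tailed, power-law-like link distribution described above with no central control.
    import random
    from collections import Counter

    random.seed(0)
    targets = [0, 1]                     # each page appears once per link it participates in
    degree = Counter({0: 1, 1: 1})

    for new_page in range(2, 20_000):
        linked_to = random.choice(targets)   # choose an existing page with probability ~ its degree
        degree[linked_to] += 1
        degree[new_page] = 1                 # the new page has one link so far
        targets.extend([linked_to, new_page])

    counts = sorted(degree.values(), reverse=True)
    print("most-linked pages:", counts[:5])
    print("median links per page:", counts[len(counts) // 2])
    # A handful of pages end up with far more links than the typical page -- the emergent
    # heavy tail described above.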

Internet traffic can also exhibit some seemingly emergent properties. In the congestion control mechanism, TCP flows can become globally synchronized at bottlenecks, simultaneously increasing and then decreasing throughput in coordination. Congestion, widely regarded as a nuisance, is possibly an emergent property of the spreading of bottlenecks across a network in high traffic flows which can be considered as a phase transition.
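
A toy sketch (not from the article, and much simpler than real TCP) of the synchronization effect: several AIMD-style flows additively increase their windows each round trip and, when the shared bottleneck overflows, all of them see loss in the same round and halve together. The capacity and initial windows are assumed values.

    # Toy model of global synchronization among TCP-like flows sharing one bottleneck.
    CAPACITY = 100              # bottleneck capacity in packets per RTT (assumed value)
    windows = [2, 5, 9, 14]     # congestion windows of four flows, in packets (assumed values)

    for rtt in range(60):
        if sum(windows) > CAPACITY:
            # tail-drop at the shared queue hits every flow in the same round -> all halve together
            windows = [max(1, w // 2) for w in windows]
            print(f"RTT {rtt:2d}: synchronized loss, windows drop to {windows}")
        else:
            windows = [w + 1 for w in windows]    # additive increase, one packet per RTT
    print("final windows:", windows)
    # Every flow rises and falls in lockstep -- a global sawtooth that no single flow was
    # programmed to produce.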

Another important example of emergence in web-based systems is social bookmarking (also called collaborative tagging). In social bookmarking systems, users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. Recent research which analyzes empirically the complex dynamics of such systems has shown that consensus on stable distributions and a simple form of shared vocabularies does indeed emerge, even in the absence of a central controlled vocabulary. Some believe that this could be because users who contribute tags all use the same language, and they share similar semantic structures underlying the choice of words. The convergence in social tags may therefore be interpreted as the emergence of structures as people who have similar semantic interpretation collaboratively index online information, a process called semantic imitation.
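
A toy sketch (not from the cited research) of how a shared vocabulary can stabilize: users mostly imitate tags already applied to a resource, with probability proportional to each tag's frequency, and occasionally introduce a new tag. The imitation model and rates are illustrative assumptions.

    # Urn-style imitation model of collaborative tagging: frequent tags get copied more often,
    # and a stable, shared vocabulary emerges without any controlled vocabulary.
    import random
    from collections import Counter

    random.seed(7)
    tags = Counter({"python": 1, "snake": 1})   # two seed tags for one shared resource
    NEW_TAG_RATE = 0.05                         # assumed rate of idiosyncratic new tags

    for user in range(5000):
        if random.random() < NEW_TAG_RATE:
            tags[f"tag{user}"] += 1             # a rarely reused idiosyncratic tag
        else:
            population = list(tags.keys())
            weights = list(tags.values())       # imitate proportionally to past use
            tags[random.choices(population, weights)[0]] += 1

    total = sum(tags.values())
    for tag, n in tags.most_common(3):
        print(f"{tag:>8}: {n / total:.1%} of all tagging acts")
    # A small set of tags ends up dominating -- an emergent, shared vocabulary.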

Architecture and cities

Traffic patterns in cities can be seen as an example of spontaneous order

Emergent structures appear at many different levels of organization or as spontaneous order. Emergent self-organization appears frequently in cities where no planning or zoning entity predetermines the layout of the city. The interdisciplinary study of emergent behaviors is not generally considered a homogeneous field, but divided across its application or problem domains.

Architects may not design all the pathways of a complex of buildings. Instead they might let usage patterns emerge and then place pavement where pathways have become worn, such as a desire path.

The on-course action and vehicle progression of the 2007 Urban Challenge could possibly be regarded as an example of cybernetic emergence. Patterns of road use, indeterministic obstacle clearance times, etc. will work together to form a complex emergent pattern that cannot be deterministically planned in advance.

The architectural school of Christopher Alexander takes a deeper approach to emergence, attempting to rewrite the process of urban growth itself in order to affect form, establishing a new methodology of planning and design tied to traditional practices, an Emergent Urbanism. Urban emergence has also been linked to theories of urban complexity and urban evolution.

Building ecology is a conceptual framework for understanding architecture and the built environment as the interface between the dynamically interdependent elements of buildings, their occupants, and the larger environment. Rather than viewing buildings as inanimate or static objects, building ecologist Hal Levin views them as interfaces or intersecting domains of living and non-living systems. The microbial ecology of the indoor environment is strongly dependent on the building materials, occupants, contents, environmental context, and the indoor and outdoor climate. Atmospheric chemistry, indoor air quality, and the chemical reactions occurring indoors are strongly interrelated. The chemicals involved may be nutrients, neutral substances, or biocides for the microbial organisms; the microbes in turn produce chemicals that affect the building materials and occupant health and well-being. Humans manipulate ventilation, temperature, and humidity to achieve comfort, with concomitant effects on the microbes that populate and evolve in these environments.

Eric Bonabeau's attempt to define emergent phenomena is through traffic: "traffic jams are actually very complicated and mysterious. On an individual level, each driver is trying to get somewhere and is following (or breaking) certain rules, some legal (the speed limit) and others societal or personal (slow down to let another driver change into your lane). But a traffic jam is a separate and distinct entity that emerges from those individual behaviors. Gridlock on a highway, for example, can travel backward for no apparent reason, even as the cars are moving forward." He has also likened emergent phenomena to the analysis of market trends and employee behavior.

Computational emergent phenomena have also been utilized in architectural design processes, for example for formal explorations and experiments in digital materiality.

Computer AI

Some artificially intelligent (AI) computer applications utilize emergent behavior for animation. One example is Boids, which mimics the swarming behavior of birds.
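
A minimal boids-style sketch (not Reynolds' original implementation) showing the three local rules, separation, alignment, and cohesion, from which flock-like motion emerges without a leader. The perception radii and weights are illustrative.

    # Boids-style flocking: each simulated bird applies three local rules and coherent
    # flock motion emerges with no leader. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    N, STEPS = 50, 200
    pos = rng.uniform(0, 100, size=(N, 2))
    vel = rng.uniform(-1, 1, size=(N, 2))

    def limit_speed(v, max_speed=2.0):
        speed = np.linalg.norm(v, axis=1, keepdims=True)
        return np.where(speed > max_speed, v / speed * max_speed, v)

    for _ in range(STEPS):
        new_vel = vel.copy()
        for i in range(N):
            offsets = pos - pos[i]
            dist = np.linalg.norm(offsets, axis=1)
            neighbours = (dist > 0) & (dist < 15)              # local perception radius
            if not neighbours.any():
                continue
            cohesion = offsets[neighbours].mean(axis=0) * 0.01          # steer toward local centre
            alignment = (vel[neighbours].mean(axis=0) - vel[i]) * 0.05  # match neighbours' heading
            too_close = (dist > 0) & (dist < 4)
            separation = -offsets[too_close].sum(axis=0) * 0.05 if too_close.any() else 0.0
            new_vel[i] = vel[i] + cohesion + alignment + separation
        vel = limit_speed(new_vel)
        pos = pos + vel

    # With aligned velocities the flock moves as a coherent whole; measure that alignment:
    coherence = np.linalg.norm(vel.mean(axis=0)) / np.linalg.norm(vel, axis=1).mean()
    print(f"heading coherence (0 = random, 1 = fully aligned): {coherence:.2f}")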

Language

It has been argued that the structure and regularity of language grammar, or at least language change, is an emergent phenomenon. While each speaker merely tries to reach his or her own communicative goals, he or she uses language in a particular way. If enough speakers behave in that way, language is changed. In a wider sense, the norms of a language, i.e. the linguistic conventions of its speech society, can be seen as a system emerging from long-time participation in communicative problem-solving in various social circumstances.

Emergent change processes

Within the field of group facilitation and organization development, there have been a number of new group processes that are designed to maximize emergence and self-organization, by offering a minimal set of effective initial conditions. Examples of these processes include SEED-SCALE, appreciative inquiry, Future Search, the world cafe or knowledge cafe, Open Space Technology, and others (Holman, 2010).

 

Bio-inspired computing

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Bio-inspired_computing

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

Areas of research

Some areas of study in biologically inspired computing, and their biological counterparts:


  • Genetic algorithms: evolution
  • Biodegradability prediction: biodegradation
  • Cellular automata: life
  • Emergence: ants, termites, bees, wasps
  • Neural networks: the brain
  • Artificial life: life
  • Artificial immune systems: the immune system
  • Rendering (computer graphics): patterning and rendering of animal skins, bird feathers, mollusk shells and bacterial colonies
  • Lindenmayer systems: plant structures
  • Communication networks and communication protocols: epidemiology
  • Membrane computers: intra-membrane molecular processes in the living cell
  • Excitable media: forest fires, "the wave", heart conditions, axons
  • Sensor networks: sensory organs
  • Learning classifier systems: cognition, evolution

Artificial intelligence

Bio-inspired computing can be distinguished from traditional artificial intelligence by its approach to computer learning. Bio-inspired computing uses an evolutionary approach, while traditional A.I. uses a 'creationist' approach. Bio-inspired computing begins with a set of simple rules and simple organisms which adhere to those rules. Over time, these organisms evolve within simple constraints. This method could be considered bottom-up or decentralized. In traditional artificial intelligence, by contrast, intelligence is often programmed from above: the programmer is the creator, who makes something and imbues it with intelligence.
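
A minimal sketch (not from the article) of this bottom-up approach: a toy genetic algorithm starts from random bit strings and, using only selection, crossover, and mutation, evolves individuals that satisfy a simple fitness goal (here, maximising the number of ones). All parameters are illustrative.

    # Toy genetic algorithm: simple "organisms" (bit strings) evolve under simple rules.
    import random

    random.seed(0)
    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 30, 40, 60, 0.02

    def fitness(genome):            # the only top-down ingredient: what counts as fit
        return sum(genome)

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        def select():               # tournament selection: the fitter of two random individuals
            a, b = random.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        next_population = []
        while len(next_population) < POP_SIZE:
            p1, p2 = select(), select()
            cut = random.randrange(1, GENOME_LEN)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < MUT_RATE else bit for bit in child]
            next_population.append(child)
        population = next_population

    print("best fitness after evolution:", max(fitness(g) for g in population), "of", GENOME_LEN)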

Virtual Insect Example

Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate unknown terrain in search of food, equipped with six simple rules:

  • turn right for target-and-obstacle left;
  • turn left for target-and-obstacle right;
  • turn left for target-left-obstacle-right;
  • turn right for target-right-obstacle-left;
  • turn left for target-left without obstacle;
  • turn right for target-right without obstacle.

The virtual insect controlled by the trained spiking neural network can find food after training in any unknown terrain. After several generations of rule application it is usually the case that some forms of complex behaviour arise. Complexity gets built upon complexity until the end result is something markedly complex, and quite often completely counterintuitive compared with what the original rules would be expected to produce (see complex systems). For this reason, in neural network models, it is necessary to accurately model an in vivo network, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.
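
For illustration, the six rules listed above can be written down directly as a lookup table. The work described here trains a spiking neural network to realize this behaviour; the sketch below is not that network, just a plain encoding of the rule set.

    # Direct encoding of the six navigation rules as a lookup table (illustration only;
    # the trained spiking neural network is not reproduced here).
    def turn(target_side, obstacle_side):
        """target_side and obstacle_side are 'left', 'right', or None (no obstacle)."""
        rules = {
            ("left", "left"): "right",     # target-and-obstacle left  -> turn right
            ("right", "right"): "left",    # target-and-obstacle right -> turn left
            ("left", "right"): "left",     # target left, obstacle right -> turn left
            ("right", "left"): "right",    # target right, obstacle left -> turn right
            ("left", None): "left",        # target left, no obstacle  -> turn left
            ("right", None): "right",      # target right, no obstacle -> turn right
        }
        return rules[(target_side, obstacle_side)]

    assert turn("left", "left") == "right"
    assert turn("right", None) == "right"
    print(turn("left", "right"))   # -> 'left'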

Natural evolution is a good analogy to this method: the rules of evolution are in principle simple, yet over millions of years they have produced remarkably complex organisms. A similar technique is used in genetic algorithms.

Brain-inspired Computing

Brain-inspired computing refers to computational models and methods that are mainly based on the mechanisms of the brain, rather than completely imitating the brain. The goal is to enable machines to realize various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and ultimately to reach or exceed human-level intelligence.

Research

Artificial intelligence researchers are now aware of the benefits of learning from the brain's information-processing mechanisms, and progress in brain science and neuroscience provides the necessary basis for artificial intelligence to do so. Brain and neuroscience researchers are also trying to apply their understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information technology and smart technology, and in turn brain science and neuroscience will inspire the next generation of information technology.

The influence of brain science on Brain-inspired computing

Advances in brain science and neuroscience, especially with the help of new technologies and new equipment, allow researchers to obtain multi-scale, multi-type biological evidence about the brain through different experimental methods, and to begin revealing the structural and functional basis of biological intelligence from different aspects. From the working mechanisms and characteristics of microscopic neurons and synapses, to the mesoscopic network connection model, to the links between macroscopic brain regions and their synergistic characteristics, the multi-scale structural and functional mechanisms of brains derived from these experimental and mechanistic studies will provide important inspiration for building future brain-inspired computing models.

Brain-inspired chip

Broadly speaking, a brain-inspired chip is a chip designed with reference to the structure of human brain neurons and the cognitive mode of the human brain. The "neuromorphic chip", which focuses on designing the chip structure with reference to the human neuron model and its tissue structure, is a brain-inspired chip that represents a major direction of brain-inspired chip research. Along with the rise and development of “brain plans” in various countries, a large number of research results on neuromorphic chips have emerged, receiving extensive international attention and becoming well known to academia and industry. Examples include the EU-backed SpiNNaker and BrainScaleS, Stanford's Neurogrid, IBM's TrueNorth, and Qualcomm's Zeroth.

TrueNorth is a brain-inspired chip that IBM developed over nearly 10 years. The US DARPA program has funded IBM to develop spiking ("pulsed") neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released the second-generation brain-inspired chip, "TrueNorth". Compared with the first-generation chip, the performance of TrueNorth increased dramatically: the number of neurons increased from 256 to 1 million, and the number of programmable synapses increased from 262,144 to 256 million, with a total power consumption of 70 mW and a power density of 20 mW per square centimeter, while each core occupies only about 1/15 of the volume of a first-generation core. IBM has since developed a prototype neuron computer that uses 16 TrueNorth chips and has real-time video processing capabilities. The chip's exceptionally high specifications and performance caused a great stir in the academic world upon its release.

In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French institute Inria collaborated to develop the "Cambrian" chip, the first chip in the world to support a deep neural network processor architecture. The work won best-paper honours at ASPLOS and MICRO, top international conferences in the field of computer architecture, and its design method and performance have been recognized internationally. The chip can be regarded as an outstanding representative of the brain-inspired chip research direction.

Challenges in Brain-Inspired Computing

Unclear Brain mechanism cognition

The human brain is a product of evolution. Although its structure and information-processing mechanisms are constantly being optimized, compromises made during evolution are inevitable. The cranial nervous system is a multi-scale structure, and several important problems remain in understanding the mechanism of information processing at each scale, such as the fine connection structure at the scale of individual neurons and the feedback mechanisms at the scale of the whole brain. Therefore, even a system whose comprehensively counted numbers of neurons and synapses amount to only 1/1000 of those of the human brain is still very difficult to study at the current level of scientific research.

Unclear Brain-inspired computational models and algorithms

Future research on cognitive brain-inspired computing models will need to model the brain's information-processing system on the basis of multi-scale brain neural-system data analysis, to construct brain-inspired multi-scale neural network computing models, and to simulate, across multiple scales and modalities, intelligent behavioural abilities of the brain such as perception, self-learning, memory, and choice. Machine learning algorithms are not flexible and require large amounts of high-quality, manually labelled sample data, and training models entails considerable computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.

Constrained Computational architecture and capabilities

Most existing brain-inspired chips are still based on the von Neumann architecture, and most chip-manufacturing materials are still traditional semiconductors. Neural chips currently borrow only the most basic unit of brain information processing; mechanisms such as the fusion of storage and computation, the pulse (spike) discharge mechanism, the connection mechanisms between neurons, and the interplay between information-processing units at different scales have not yet been integrated into the study of brain-inspired computing architectures. An important international trend is now to develop neural computing components such as memristors, memcapacitors, and sensory devices based on new materials such as nanomaterials, thereby supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain-inspired computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.

 

Genetic programming

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Genetic_programming

In artificial intelligence, genetic programming (GP) is a technique for evolving programs: starting from a population of unfit (usually random) programs, it makes them fit for a particular task by applying operations analogous to natural genetic processes to the population of programs. It is essentially a heuristic search technique, often described as 'hill climbing', i.e. searching for an optimal or at least suitable program in the space of all programs.

The operations are: selection of the fittest programs for reproduction (crossover) and mutation according to a predefined fitness measure, usually proficiency at the desired task. The crossover operation involves swapping random parts of selected pairs (parents) to produce new and different offspring that become part of the new generation of programs. Mutation involves substitution of some random part of a program with some other random part of a program. Some programs not selected for reproduction are copied from the current generation to the new generation. Then the selection and other operations are recursively applied to the new generation of programs.

Typically, members of each new generation are on average more fit than the members of the previous generation, and the best-of-generation program is often better than the best-of-generation programs from previous generations. Termination of the recursion is when some individual program reaches a predefined proficiency or fitness level.

It may and often does happen that a particular run of the algorithm results in premature convergence to some local maximum which is not a globally optimal or even good solution. Multiple runs (dozens to hundreds) are usually necessary to produce a very good result. It may also be necessary to increase the starting population size and variability of the individuals to avoid pathologies.
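
A compact sketch (not from any particular GP system) of the loop just described: a population of random expression trees is evolved by tournament selection, subtree crossover, and subtree mutation to fit the toy target function y = x^2 + x. The representation, operators, and parameters are illustrative choices.

    # Minimal tree-based GP for symbolic regression: programs are nested tuples,
    # evolved by tournament selection, subtree crossover, and subtree mutation.
    import random
    import operator

    random.seed(1)
    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
    DATA = [(x, x * x + x) for x in range(-5, 6)]          # samples of the target function

    def random_tree(depth=3):
        if depth == 0 or random.random() < 0.3:            # terminal: variable or constant
            return "x" if random.random() < 0.5 else random.uniform(-2, 2)
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, (int, float)):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree):                                     # lower is better: sum of squared errors
        return sum((evaluate(tree, x) - y) ** 2 for x, y in DATA)

    def subtrees(tree, path=()):                           # yield every (path, subtree) pair
        yield path, tree
        if isinstance(tree, tuple):
            for i, child in enumerate(tree[1:], start=1):
                yield from subtrees(child, path + (i,))

    def replace(tree, path, new):                          # rebuild the tree with one subtree swapped
        if not path:
            return new
        children = list(tree)
        children[path[0]] = replace(children[path[0]], path[1:], new)
        return tuple(children)

    def crossover(a, b):                                   # graft a random subtree of b into a
        path_a, _ = random.choice(list(subtrees(a)))
        _, sub_b = random.choice(list(subtrees(b)))
        return replace(a, path_a, sub_b)

    def mutate(tree):                                      # replace a random subtree with a new one
        path, _ = random.choice(list(subtrees(tree)))
        return replace(tree, path, random_tree(depth=2))

    def tournament(population):                            # fittest of three random individuals
        return min(random.sample(population, 3), key=fitness)

    population = [random_tree() for _ in range(200)]
    for generation in range(30):
        population = [mutate(crossover(tournament(population), tournament(population)))
                      if random.random() < 0.9 else tournament(population)
                      for _ in range(200)]

    best = min(population, key=fitness)
    print("best error:", fitness(best))
    print("best program:", best)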

History

The first record of the proposal to evolve programs is probably that of Alan Turing in 1950. There was then a gap of 25 years before the publication of John Holland's 'Adaptation in Natural and Artificial Systems', which laid out the theoretical and empirical foundations of the science. In 1981, Richard Forsyth demonstrated the successful evolution of small programs, represented as trees, to perform classification of crime scene evidence for the UK Home Office.

Although the idea of evolving programs, initially in the computer language Lisp, was current amongst John Holland's students, it was not until they organised the first Genetic Algorithms conference in Pittsburgh that Nichael Cramer published evolved programs in two specially designed languages, which included the first statement of modern "tree-based" Genetic Programming (that is, procedural languages organized in tree-based structures and operated on by suitably defined GA-operators). In 1988, John Koza (also a PhD student of John Holland) patented his invention of a GA for program evolution. This was followed by publication in the International Joint Conference on Artificial Intelligence IJCAI-89.

Koza followed this with 205 publications on "Genetic Programming" (GP), a name coined by David Goldberg, also a PhD student of John Holland. However, it is the series of four books by Koza, starting in 1992 with accompanying videos, that really established GP. Subsequently, there was an enormous expansion of the number of publications, with the Genetic Programming Bibliography surpassing 10,000 entries. In 2010, Koza listed 77 results where Genetic Programming was human-competitive.

In 1996, Koza started the annual Genetic Programming conference, which was followed in 1998 by the annual EuroGP conference and the first book in a GP series edited by Koza. 1998 also saw the first GP textbook. GP continued to flourish, leading to the first specialist GP journal; three years later (2003), the annual Genetic Programming Theory and Practice (GPTP) workshop was established by Rick Riolo. Genetic Programming papers continue to be published at a diversity of conferences and associated journals. Today there are nineteen GP books, including several for students.

Foundational work in GP

Early work that set the stage for current genetic programming research topics and applications is diverse, and includes software synthesis and repair, predictive modeling, data mining, financial modeling, soft sensors, design, and image processing. Applications in some areas, such as design, often make use of intermediate representations, such as Fred Gruau’s cellular encoding. Industrial uptake has been significant in several areas including finance, the chemical industry, bioinformatics and the steel industry.

Methods

Program representation

A function represented as a tree structure

GP evolves computer programs, traditionally represented in memory as tree structures. Trees can be easily evaluated in a recursive manner. Every tree node has an operator function and every terminal node has an operand, making mathematical expressions easy to evolve and evaluate. Thus traditionally GP favors the use of programming languages that naturally embody tree structures (for example, Lisp; other functional programming languages are also suitable).
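A minimal sketch of such a tree representation and its recursive evaluation, using nested Python tuples in place of Lisp expressions; the function set and the names FUNCTIONS and evaluate are invented here purely for illustration.

```python
import operator

# Function set: name -> (arity, implementation). Purely illustrative.
FUNCTIONS = {
    "add": (2, operator.add),
    "sub": (2, operator.sub),
    "mul": (2, operator.mul),
}

def evaluate(node, variables):
    """Recursively evaluate a program tree.

    Internal nodes are tuples ("name", child, child, ...);
    terminals are numbers or variable names looked up in `variables`.
    """
    if isinstance(node, tuple):                    # operator node
        _, fn = FUNCTIONS[node[0]]
        return fn(*(evaluate(child, variables) for child in node[1:]))
    if isinstance(node, str):                      # variable terminal
        return variables[node]
    return node                                    # numeric constant

# Example: the expression x * (x + 1), written as a tree
program = ("mul", "x", ("add", "x", 1))
print(evaluate(program, {"x": 3}))                 # -> 12
```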

Non-tree representations have been suggested and successfully implemented, such as linear genetic programming which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus uses automatic induction of binary machine code ("AIM") to achieve better performance. µGP uses directed multigraphs to generate programs that fully exploit the syntax of a given assembly language. Other program representations on which significant research and development have been conducted include programs for stack-based virtual machines, and sequences of integers that are mapped to arbitrary programming languages via grammars. Cartesian genetic programming is another form of GP, which uses a graph representation instead of the usual tree based representation to encode computer programs.

Most representations have structurally noneffective code (introns). Such non-coding genes may seem to be useless because they have no effect on the performance of any one individual. However, they alter the probabilities of generating different offspring under the variation operators, and thus alter the individual's variational properties. Experiments seem to show faster convergence when using program representations that allow such non-coding genes, compared to program representations that do not have any non-coding genes.

Selection

Selection is the process by which individuals from the current generation are chosen to serve as parents for the next generation. Individuals are selected probabilistically, so that better-performing individuals have a higher chance of being chosen. The most commonly used selection method in GP is tournament selection, although other methods, such as fitness-proportionate selection and lexicase selection, have been shown to perform better on many GP problems.

Elitism, which involves seeding the next generation with the best individual (or best n individuals) from the current generation, is a technique sometimes employed to avoid regression.
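A minimal sketch of tournament selection and a simple elitism step, assuming parallel lists of individuals and their fitness scores; the function and parameter names are illustrative only, not an established API.

```python
import random

def tournament_select(population, fitnesses, k=7):
    """Return the fittest of k randomly chosen individuals.
    `population` and `fitnesses` are parallel lists (illustrative names)."""
    contenders = random.sample(range(len(population)), k)
    best_index = max(contenders, key=lambda i: fitnesses[i])
    return population[best_index]

def with_elitism(population, fitnesses, offspring, n_elite=1):
    """Copy the best n_elite individuals of the current generation into the
    new one (replacing that many offspring), so the best program never
    regresses between generations."""
    ranked = sorted(range(len(population)),
                    key=lambda i: fitnesses[i], reverse=True)
    elites = [population[i] for i in ranked[:n_elite]]
    return elites + offspring[:len(offspring) - n_elite]
```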

Crossover

Various genetic operators (i.e., crossover and mutation) are applied to the individuals selected in the selection step described above to breed new individuals. The rate at which these operators are applied determines the diversity in the population.
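The sketch below shows one way subtree crossover might be implemented on the nested-tuple trees used in the representation sketch above. The helpers (all_paths, get_subtree, replace_subtree, subtree_crossover) are names invented here for illustration.

```python
import random

def all_paths(node, path=()):
    """Enumerate index paths to every node (subtree) in a tuple-based tree."""
    yield path
    if isinstance(node, tuple):
        for i, child in enumerate(node[1:], start=1):
            yield from all_paths(child, path + (i,))

def get_subtree(node, path):
    for i in path:
        node = node[i]
    return node

def replace_subtree(node, path, new):
    """Return a copy of `node` with the subtree at `path` replaced by `new`."""
    if not path:
        return new
    i = path[0]
    return node[:i] + (replace_subtree(node[i], path[1:], new),) + node[i + 1:]

def subtree_crossover(parent_a, parent_b):
    """Swap a randomly chosen subtree of parent_a for a randomly
    chosen subtree of parent_b, producing one offspring."""
    path_a = random.choice(list(all_paths(parent_a)))
    path_b = random.choice(list(all_paths(parent_b)))
    return replace_subtree(parent_a, path_a, get_subtree(parent_b, path_b))
```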

Mutation

Mutation replaces a randomly chosen part of a selected program, for example a subtree in tree-based GP (or one or more bits in a linear or binary representation), with newly generated random material to produce a new child for the next generation.
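A minimal sketch of subtree mutation, reusing the illustrative FUNCTIONS table and the all_paths/replace_subtree helpers from the sketches above; random_tree and subtree_mutation are ad-hoc names for this document.

```python
import random

def random_tree(depth=2):
    """Grow a small random tree over the illustrative FUNCTIONS set,
    with terminals drawn from {"x"} and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-5, 5)])
    name, (arity, _) = random.choice(list(FUNCTIONS.items()))
    return (name,) + tuple(random_tree(depth - 1) for _ in range(arity))

def subtree_mutation(program):
    """Replace one randomly chosen subtree with a freshly grown random tree."""
    path = random.choice(list(all_paths(program)))
    return replace_subtree(program, path, random_tree())
```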

Applications

GP has been successfully used as an automatic programming tool, a machine learning tool and an automatic problem-solving engine. GP is especially useful in domains where the exact form of the solution is not known in advance or an approximate solution is acceptable (possibly because finding the exact solution is very difficult). Some of the applications of GP are curve fitting, data modeling, symbolic regression, feature selection, classification, etc. John R. Koza mentions 76 instances where Genetic Programming has been able to produce results that are competitive with human-produced results (called human-competitive results). Since 2004, the annual Genetic and Evolutionary Computation Conference (GECCO) has held the Human-Competitive Awards (the "Humies") competition, in which cash awards are presented to human-competitive results produced by any form of genetic and evolutionary computation. GP has won many awards in this competition over the years.
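For symbolic regression and curve fitting, the fitness measure is typically an error against sample data. The toy sketch below reuses the illustrative evaluate helper and tuple trees from the representation sketch above; the name mse_fitness and the sample data are invented for this example.

```python
def mse_fitness(program, samples):
    """Negative mean squared error over (x, y) samples, so 'higher is fitter'."""
    errors = []
    for x, y in samples:
        try:
            errors.append((evaluate(program, {"x": x}) - y) ** 2)
        except (ZeroDivisionError, OverflowError):
            return float("-inf")      # penalise numerically broken programs
    return -sum(errors) / len(errors)

# Target: y = x**2 + x, sampled at a few points
samples = [(x, x * x + x) for x in range(-5, 6)]
print(mse_fitness(("mul", "x", ("add", "x", 1)), samples))   # -> -0.0 (a perfect fit)
```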

Meta-genetic programming

Meta-genetic programming is the proposed meta-learning technique of evolving a genetic programming system using genetic programming itself. It suggests that chromosomes, crossover, and mutation themselves evolved and, like their real-life counterparts, should therefore be allowed to change on their own rather than being determined by a human programmer. Meta-GP was formally proposed by Jürgen Schmidhuber in 1987. Doug Lenat's Eurisko is an earlier effort that may be the same technique. It is a recursive but terminating algorithm, allowing it to avoid infinite recursion. In the "autoconstructive evolution" approach to meta-genetic programming, the methods for the production and variation of offspring are encoded within the evolving programs themselves, and programs are executed to produce new programs to be added to the population.

Critics of this idea often say this approach is overly broad in scope. However, it might be possible to constrain the fitness criterion onto a general class of results, and so obtain an evolved GP that would more efficiently produce results for sub-classes. This might take the form of a meta evolved GP for producing human walking algorithms which is then used to evolve human running, jumping, etc. The fitness criterion applied to the meta GP would simply be one of efficiency.

Technological utopianism

From Wikipedia, the free encyclopedia

Technological utopianism (often called techno-utopianism or technoutopianism) is any ideology based on the premise that advances in science and technology could and should bring about a utopia, or at least help to fulfill one or another utopian ideal.

A techno-utopia is therefore an ideal society, in which laws, government, and social conditions are solely operating for the benefit and well-being of all its citizens, set in the near- or far-future, as advanced science and technology will allow these ideal living standards to exist; for example, post-scarcity, transformations in human nature, the avoidance or prevention of suffering and even the end of death.

Technological utopianism is often connected with other discourses presenting technologies as agents of social and cultural change, such as technological determinism or media imaginaries.

A tech-utopia does not disregard any problems that technology may cause, but strongly believes that technology allows mankind to make social, economic, political, and cultural advancements. Overall, technological utopianism views technology's impacts as extremely positive.

In the late 20th and early 21st centuries, several ideologies and movements, such as the cyberdelic counterculture, the Californian Ideology, transhumanism, and singularitarianism, have emerged promoting a form of techno-utopia as a reachable goal. Cultural critic Imre Szeman argues technological utopianism is an irrational social narrative because there is no evidence to support it. He concludes that it shows the extent to which modern societies place faith in narratives of progress and technology overcoming things, despite all evidence to the contrary.

History

From the 19th to mid-20th centuries

Karl Marx believed that science and democracy were the right and left hands of what he called the move from the realm of necessity to the realm of freedom. He argued that advances in science helped delegitimize the rule of kings and the power of the Christian Church.

19th-century liberals, socialists, and republicans often embraced techno-utopianism. Radicals like Joseph Priestley pursued scientific investigation while advocating democracy. Robert Owen, Charles Fourier and Henri de Saint-Simon in the early 19th century inspired communalists with their visions of a future scientific and technological evolution of humanity using reason. Radicals seized on Darwinian evolution to validate the idea of social progress. Edward Bellamy’s socialist utopia in Looking Backward, which inspired hundreds of socialist clubs in the late 19th century United States and a national political party, was as highly technological as Bellamy’s imagination. For Bellamy and the Fabian Socialists, socialism was to be brought about as a painless corollary of industrial development.

Marx and Engels saw more pain and conflict involved, but agreed about the inevitable end. Marxists argued that the advance of technology laid the groundwork not only for the creation of a new society, with different property relations, but also for the emergence of new human beings reconnected to nature and themselves. At the top of the agenda for empowered proletarians was "to increase the total productive forces as rapidly as possible". The 19th and early 20th century Left, from social democrats to communists, were focused on industrialization, economic development and the promotion of reason, science, and the idea of progress.

Some technological utopians promoted eugenics. Holding that in studies of families, such as the Jukes and Kallikaks, science had proven that many traits such as criminality and alcoholism were hereditary, many advocated the sterilization of those displaying negative traits. Forcible sterilization programs were implemented in several states in the United States.

H.G. Wells in works such as The Shape of Things to Come promoted technological utopianism.

The horrors of the 20th century – namely Fascist and Communist dictatorships and the world wars – caused many to abandon optimism. The Holocaust, as Theodor Adorno underlined, seemed to shatter the ideal of Condorcet and other thinkers of the Enlightenment, which commonly equated scientific progress with social progress.

From late 20th and early 21st centuries

The Goliath of totalitarianism will be brought down by the David of the microchip.

— Ronald Reagan, The Guardian, 14 June 1989

A movement of techno-utopianism began to flourish again in the dot-com culture of the 1990s, particularly in the West Coast of the United States, especially around Silicon Valley. The Californian Ideology was a set of beliefs combining bohemian and anti-authoritarian attitudes from the counterculture of the 1960s with techno-utopianism and support for libertarian economic policies. It was reflected in, reported on, and even actively promoted in the pages of Wired magazine, which was founded in San Francisco in 1993 and served for a number of years as the "bible" of its adherents.

This form of techno-utopianism reflected a belief that technological change revolutionizes human affairs, and that digital technology in particular – of which the Internet was but a modest harbinger – would increase personal freedom by freeing the individual from the rigid embrace of bureaucratic big government. "Self-empowered knowledge workers" would render traditional hierarchies redundant; digital communications would allow them to escape the modern city, an "obsolete remnant of the industrial age".

Similar forms of "digital utopianism" have often entered into the political messages of parties and social movements that point to the Web, or more broadly to new media, as harbingers of political and social change. Its adherents claim it transcended conventional "right/left" distinctions in politics by rendering politics obsolete. However, techno-utopianism disproportionately attracted adherents from the libertarian right end of the political spectrum. Therefore, techno-utopians often have a hostility toward government regulation and a belief in the superiority of the free market system. Prominent "oracles" of techno-utopianism included George Gilder and Kevin Kelly, an editor of Wired who also published several books.

During the late 1990s dot-com boom, when the speculative bubble gave rise to claims that an era of "permanent prosperity" had arrived, techno-utopianism flourished, typically among the small percentage of the population who were employees of Internet startups and/or owned large quantities of high-tech stocks. With the subsequent crash, many of these dot-com techno-utopians had to rein in some of their beliefs in the face of the clear return of traditional economic reality.

In the late 1990s and especially during the first decade of the 21st century, technorealism and techno-progressivism arose among advocates of technological change as critical alternatives to techno-utopianism. However, technological utopianism persists in the 21st century as a result of new technological developments and their impact on society. For example, several technical journalists and social commentators, such as Mark Pesce, have interpreted the WikiLeaks phenomenon and the United States diplomatic cables leak in early December 2010 as a precursor to, or an incentive for, the creation of a techno-utopian transparent society. Cyber-utopianism, a term first coined by Evgeny Morozov, is another manifestation of this, in particular in relation to the Internet and social networking.

Principles

Bernard Gendron, a professor of philosophy at the University of Wisconsin–Milwaukee, defines the four principles of modern technological utopians in the late 20th and early 21st centuries as follows:

  1. We are presently undergoing a (post-industrial) revolution in technology;
  2. In the post-industrial age, technological growth will be sustained (at least);
  3. In the post-industrial age, technological growth will lead to the end of economic scarcity;
  4. The elimination of economic scarcity will lead to the elimination of every major social evil.

Douglas Rushkoff presents us with multiple claims that surround the basic principles of technological utopianism:

  1. Technology reflects and encourages the best aspects of human nature, fostering “communication, collaboration, sharing, helpfulness, and community.”
  2. Technology improves our interpersonal communication, relationships, and communities. Early Internet users shared their knowledge of the Internet with others around them.
  3. Technology democratizes society. The expansion of access to knowledge and skills led to the connection of people and information. The broadening of freedom of expression created “the online world...in which we are allowed to voice our own opinions.” The reduction of the inequalities of power and wealth meant that everyone has an equal status on the internet and is allowed to do as much as the next person.
  4. Technology inevitably progresses. The interactivity that came from the inventions of the TV remote control, video game joystick, computer mouse and computer keyboard allowed for much more progress.
  5. Unforeseen impacts of technology are positive. As more people discovered the Internet, they took advantage of being linked to millions of people, and turned the Internet into a social revolution. The government released it to the public, and its “social side effect… [became] its main feature.”
  6. Technology increases efficiency and consumer choice. The creation of the TV remote, video game joystick, and computer mouse liberated these technologies and allowed users to manipulate and control them, giving them many more choices.
  7. New technology can solve the problems created by old technology. Social networks and blogs were created out of the collapse of dot-com bubble businesses' attempts to run pyramid schemes on users.

Criticisms

Critics claim that techno-utopianism's identification of social progress with scientific progress is a form of positivism and scientism. Critics of modern libertarian techno-utopianism point out that it tends to focus on "government interference" while dismissing the positive effects of the regulation of business. They also point out that it has little to say about the environmental impact of technology and that its ideas have little relevance for much of the rest of the world that are still relatively quite poor.

In a controversial article, "Techno-Utopians Are Mugged by Reality", The Wall Street Journal explores the concept of violating free speech by shutting down social media to stop violence. After looting spread through several British cities in succession, British Prime Minister David Cameron argued that the government should have the ability to shut down social media during crime sprees so that the situation could be contained. A poll asked whether Twitter users would prefer to let the service be closed temporarily or keep it open so they could chat about the television show X-Factor; the report found that every tweeted response opted for X-Factor. The negative social implication is that society is so addicted to technology that it cannot be parted from it even for the greater good. While many techno-utopians would like to believe that digital technology exists for the greater good, it can also be used to bring harm to the public.

Other critics of a techno-utopia include the worry of the human element. Critics suggest that a techno-utopia may lessen human contact, leading to a distant society. Another concern is the amount of reliance society may place on their technologies in these techno-utopia settings. These criticisms are sometimes referred to as a technological anti-utopian view or a techno-dystopia.

Even today, the negative social effects of a technological utopia can be seen. Mediated communication such as phone calls, instant messaging and text messaging are steps towards a utopian world in which one can easily contact another regardless of time or location. However, mediated communication removes many aspects that are helpful in transferring messages. As it stands today, most text, email, and instant messages offer fewer nonverbal cues about the speaker's feelings than do face-to-face encounters. As a result, mediated communication can easily be misconstrued and the intended message not properly conveyed. With the absence of tone, body language, and environmental context, the chance of a misunderstanding is much higher, rendering the communication ineffective. In fact, mediated technology can be seen from a dystopian view because it can be detrimental to effective interpersonal communication. These criticisms would only apply to messages that are prone to misinterpretation, as not every text-based communication requires contextual cues. The limitations of lacking tone and body language in text-based communication are likely to be mitigated by video and augmented-reality versions of digital communication technologies.

 

Friendly artificial intelligence

From Wikipedia, the free encyclopedia

A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.

Etymology and usage

Eliezer Yudkowsky, AI researcher and creator of the term Friendly artificial intelligence

The term was coined by Eliezer Yudkowsky, who is best known for popularizing the idea, to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:

Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.

'Friendly' is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.

Risks of unfriendly AI

The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict. By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics" - principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators, or allowing them to come to harm.

In modern times as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'

In 2008 Eliezer Yudkowsky called for the creation of “friendly AI” to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of any goal-driven system, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.

Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.

Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": Rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.

In 2014, Luke Muehlhauser and Nick Bostrom underlined the need for 'friendly AI'; nonetheless, the difficulties in designing a 'friendly' superintelligence, for instance via programming counterfactual moral thinking, are considerable.

Coherent extrapolated volition

Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, coherent extrapolated volition is people's choices and the actions people would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together."

Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a "seed AI" programmed to first study human nature and then produce the AI which humanity would want, given sufficient time and insight, to arrive at a satisfactory answer. The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

Other approaches

Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.

Seth Baum argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities, and so can be constrained by extrinsic measures and motivated by intrinsic measures. Intrinsic motivations can be strengthened when messages resonate with AI developers; Baum argues that, in contrast, "existing messages about beneficial AI are not always framed well". Baum advocates for "cooperative relationships, and positive framing of AI researchers" and cautions against characterizing AI researchers as "not want(ing) to pursue beneficial designs".

In his book Human Compatible, AI researcher Stuart J. Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers. The principles are as follows:

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future." Similarly, "behavior" includes any choice between options, and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.
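As a purely illustrative toy sketch of the second and third principles (the machine is initially uncertain about human preferences and learns about them only from human behavior), the Python snippet below keeps a probability distribution over a few invented candidate preference models and shifts belief toward the models that better explain an observed human choice. All names, candidate models, and numbers are hypothetical and are not drawn from Human Compatible.

```python
# Toy Bayesian sketch of a machine that is uncertain about human preferences
# and updates its beliefs from observed human choices. All names, candidate
# preference models, and numbers are hypothetical illustrations.

CANDIDATE_PREFERENCES = {
    "values_speed":   lambda option: option["speed"],
    "values_safety":  lambda option: option["safety"],
    "values_balance": lambda option: 0.5 * option["speed"] + 0.5 * option["safety"],
}

# Uniform prior over which candidate model describes the human.
belief = {name: 1.0 / len(CANDIDATE_PREFERENCES) for name in CANDIDATE_PREFERENCES}

def update_belief(belief, chosen, rejected):
    """Bayes-style update: models under which the observed choice looks
    better gain probability mass (likelihoods here are toy numbers)."""
    posterior = {}
    for name, utility in CANDIDATE_PREFERENCES.items():
        likelihood = 1.0 if utility(chosen) >= utility(rejected) else 0.1
        posterior[name] = belief[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Observed behaviour: the human picked the safer, slower option.
chosen   = {"speed": 0.2, "safety": 0.9}
rejected = {"speed": 0.9, "safety": 0.1}
belief = update_belief(belief, chosen, rejected)
print(belief)   # mass shifts toward "values_safety" and "values_balance"
```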

Public policy

James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security—something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.

John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI are not necessarily evident, he suggests a model similar to the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.

According to Gary Marcus, the annual amount of money being spent on developing machine morality is tiny.

Criticism

Some critics believe that both human-level AI and superintelligence are unlikely, and that therefore friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence. Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and Nick Bostrom's proposal to create friendly AIs appears bleak. This is because Muehlhauser and Bostrom seem to hold the idea that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had. In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly, considering the following: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine; the difficulty of cashing out the set of moral values, that is, values more ideal than the ones human beings possess at present; and the apparent disconnect between counterfactual antecedents and the ideal value consequents.

Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful. Other critics question whether it is possible for an artificial intelligence to be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes."

 

Instrumental convergence

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Instrumental_convergence 

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent agents to pursue potentially unbounded instrumental goals such as self-preservation and resource acquisition, provided that their ultimate goals are themselves unbounded.

Instrumental convergence suggests that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly difficult mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer in an effort to increase its computational power so that it can succeed in its calculations.

Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and non-satiable acquisition of additional resources.

Instrumental and final goals

Final goals, or final values, are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as an end in itself. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of a completely rational agent's "final goal" system can in principle be formalized into a utility function.

Hypothetical examples of convergence

One hypothetical example of instrumental convergence is provided by the Riemann hypothesis catastrophe. Marvin Minsky, the co-founder of MIT's AI laboratory, has suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal. If the computer had instead been programmed to produce as many paper clips as possible, it would still decide to take all of Earth's resources to meet its final goal. Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources.
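The convergence can be made concrete with a toy calculation (all numbers and names below are invented for illustration): two agents with unrelated final goals both rank a resource-acquiring plan above a resource-poor one, because extra resources increase expected progress on either goal.

```python
# Toy illustration of instrumental convergence: agents with different final
# goals both prefer the plan that first acquires more resources.
# All numbers are invented for illustration only.

def expected_goal_progress(resources):
    """More resources -> more progress on *whatever* the final goal is."""
    return resources / (resources + 1.0)      # diminishing but monotone

PLANS = {
    "work_with_current_resources": 1.0,        # units of resources available
    "first_acquire_more_resources": 100.0,
}

for goal in ("prove the Riemann hypothesis", "maximise paperclip output"):
    best_plan = max(PLANS, key=lambda plan: expected_goal_progress(PLANS[plan]))
    print(goal, "->", best_plan)
# Both final goals select "first_acquire_more_resources": the instrumental
# preference converges even though the final goals have nothing in common.
```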

Paperclip maximizer

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly-harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, or to use only designated resources in bounded time, then given enough power its optimized goal would be to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

— Nick Bostrom, as quoted in Miles, Kathleen (2014-08-22). "Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says". Huffington Post.

Bostrom has emphasised that he does not believe the paperclip maximiser scenario per se will actually occur; rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings. The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.

Basic AI drives

Steve Omohundro has itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives". A "drive" here denotes a "tendency which will be present unless specifically counteracted"; this is different from the psychological term "drive", denoting an excitatory state produced by a homeostatic disturbance. A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense. Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted self-rewarding AGI may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding.

Goal-content integrity

In humans, maintenance of final goals can be explained with a thought experiment. Suppose a man named "Gandhi" has a pill that, if he took it, would cause him to want to kill people. This Gandhi is currently a pacifist: one of his explicit final goals is to never kill anyone. Gandhi is likely to refuse to take the pill, because Gandhi knows that if in the future he wants to kill people, he is likely to actually kill people, and thus the goal of "not killing people" would not be satisfied.

However, in other cases, people seem happy to let their final values drift. Humans are complicated, and their goals can be inconsistent or unknown, even to themselves.

In artificial intelligence

In 2009, Jürgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gödel machine first can prove that the rewrite is useful according to the present utility function." An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal content integrity. Hibbard also argues that in a utility maximizing framework the only goal is maximizing expected utility, so that instrumental goals should be called unintended instrumental actions.

Resource acquisition

Many instrumental goals, such as [...] resource acquisition, are valuable to an agent because they increase its freedom of action.

For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the AI to find a more "optimal" solution. Resources can benefit some AIs directly, through being able to create more of whatever stuff its reward function values: "The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else." In addition, almost all AIs can benefit from having more resources to spend on other instrumental goals, such as self-preservation.

Cognitive enhancement

"If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage, [...] according to its preferences. At least in this special case, a rational intelligent agent would place a very *high instrumental value on cognitive enhancement*" 

Technological perfection

Many instrumental goals, such as [...] technological advancement, are valuable to an agent because they increase its freedom of action.

Self-preservation

Many instrumental goals, such as [...] self-preservation, are valuable to an agent because they increase its freedom of action.

Instrumental convergence thesis

The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have a wide variety of possible final goals. Note that by Bostrom's Orthogonality Thesis, final goals of highly intelligent agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals.

Impact

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.
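A toy expected-utility comparison (all probabilities, payoffs, and names invented for illustration) makes the argument explicit: trade wins only when seizure is risky or costly relative to its gains.

```python
# Toy expected-utility comparison behind the trade-versus-seizure argument.
# All probabilities and payoffs are invented for illustration only.

def expected_utility(gain, probability_of_success, cost):
    return probability_of_success * gain - cost

def choose(trade_gain, seizure_gain, p_seizure_succeeds, seizure_cost, trade_cost):
    trade   = expected_utility(trade_gain, 1.0, trade_cost)
    seizure = expected_utility(seizure_gain, p_seizure_succeeds, seizure_cost)
    return "trade" if trade >= seizure else "seize"

# A vastly more capable agent faces little risk or cost from seizure, so
# seizure dominates unless something else in its utility function penalises it.
print(choose(trade_gain=10, seizure_gain=100,
             p_seizure_succeeds=0.99, seizure_cost=1, trade_cost=5))   # -> "seize"
```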

Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark, believe that "basic AI drives", and other unintended consequences of superintelligent AI programmed by well-meaning programmers, could pose a significant threat to human survival, especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement. Since nobody knows how to predict beforehand when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from artificial general intelligence.

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)

In info...