Monday, July 30, 2018

Incomplete Nature

From Wikipedia, the free encyclopedia
 
Incomplete Nature: How Mind Emerged from Matter
Author: Terrence W. Deacon
Country: United States
Language: English
Subject: Science
Published: W. W. Norton & Company; 1st edition (November 21, 2011)
Media type: Print
Pages: 670
ISBN: 978-0393049916
OCLC: 601107605
Dewey Decimal: 612.8/2

Incomplete Nature: How Mind Emerged from Matter is a 2011 book by biological anthropologist Terrence Deacon. The book covers topics in biosemiotics, philosophy of mind, and the origins of life. Broadly, the book seeks to naturalistically explain "aboutness", that is, concepts like intentionality, meaning, normativity, purpose, and function, which Deacon groups together and labels as ententional phenomena.

Core Ideas

Deacon's first book, The Symbolic Species, focused on the evolution of human language. In that book, Deacon notes that much of the mystery surrounding language origins stems from a profound confusion about the nature of semiotic processes themselves. Accordingly, the focus of Incomplete Nature shifts from human origins to the origin of life and semiosis. Incomplete Nature can be viewed as a sizable contribution to the growing body of work positing that the problem of consciousness and the problem of the origin of life are inextricably linked.[1][2] Deacon tackles these two linked problems by going back to basics. The book expands upon the classical conceptions of work and information in order to give an account of ententionality that is consistent with eliminative materialism and yet does not seek to explain away, or pass off as epiphenomenal, the non-physical properties of life.

Constraints

A central thesis of the book is that absence can still be efficacious. Deacon claims that just as the concept of zero revolutionized mathematics, thinking about life, mind, and other ententional phenomena in terms of constraints (i.e., what is absent) can similarly help us overcome the artificial dichotomy of the mind-body problem. A good example of this concept is the hole that defines the hub of a wagon wheel. The hole itself is not a physical thing, but rather a source of constraint that restricts the conformational possibilities of the wheel's components, such that, on a global scale, the property of rolling emerges. Emergent phenomena produced by constraints cannot be understood simply by examining the make-up of a pattern's constituents, and they are difficult to study because their complexity does not necessarily decompose into parts. When a pattern is broken down, the constraints are no longer at work; there is no hole, no absence to notice. Imagine a hub whose hole for the axle exists only while the wheel is rolling: breaking the wheel apart would never show how the hub emerges.

Orthograde and contragrade

Deacon notes that the apparent patterns of causality exhibited by living systems seem to be in some ways the inverse of the causal patterns of non-living systems.[citation needed] In an attempt to find a solution to the philosophical problems associated with teleological explanations, Deacon returns to Aristotle's four causes and attempts to modernize them with thermodynamic concepts.

A cartoon characterization of the asymmetry implicit in thermodynamic change from a constrained ("ordered") state to a less constrained ("disordered") state, which tends to occur spontaneously (an orthograde process), contrasted with the reversed direction of change, which does not tend to occur spontaneously (a contragrade process), and so only tends to occur in response to the imposition of highly constrained external work (arrows in the image on the right).

Orthograde changes are caused internally. They are spontaneous changes. That is, orthograde changes are generated by the spontaneous elimination of asymmetries in a thermodynamic system in disequilibrium. Because orthograde changes are driven by the internal geometry of a changing system, orthograde causes can be seen as analogous to Aristotle's formal cause. More loosely, Aristotle's final cause can also be considered orthograde, because goal-oriented actions are caused from within.[3]

Contragrade changes are imposed from the outside. They are non-spontaneous changes. Contragrade change is induced when one thermodynamic system interacts with the orthograde changes of another thermodynamic system. The interaction drives the first system into a higher energy, more asymmetrical state. Contragrade changes do work. Because contragrade changes are driven by external interactions with another changing system, contragrade causes can be seen as analogous to Aristotle's efficient cause.[4]
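The orthograde/contragrade distinction can be illustrated with the most familiar thermodynamic asymmetry, heat flow. The toy function below is my own sketch, not Deacon's: heat spontaneously flows from the hotter body to the colder one (orthograde), while reversing that flow, as a refrigerator does, requires work imposed from outside (contragrade).

```python
# Sketch: the spontaneous (orthograde) direction of heat flow between two
# bodies. Reversing it (contragrade) requires external work, as in a heat
# pump. Temperatures are in kelvin; the example is purely illustrative.
def spontaneous_direction(t_a, t_b):
    """Return the orthograde direction of heat flow between bodies a and b."""
    if t_a > t_b:
        return "a -> b"
    if t_b > t_a:
        return "b -> a"
    return "none"  # equilibrium: no net spontaneous flow

print(spontaneous_direction(350.0, 300.0))  # a -> b (orthograde)
# Pushing heat from b to a while a is hotter would be contragrade:
# it happens only if another system does work on the pair.
```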

Homeodynamics, morphodynamics, and teleodynamics

Much of the book is devoted to expanding upon the ideas of classical thermodynamics, with an extended discussion of how systems held consistently far from equilibrium can interact and combine to produce novel emergent properties.

Deacon defines three hierarchically nested levels of thermodynamic systems: homeodynamic systems combine to produce morphodynamic systems, which combine to produce teleodynamic systems. Teleodynamic systems can be further combined to produce higher orders of self-organization.

Homeodynamics

Homeodynamic systems are essentially equivalent to classical thermodynamic systems like a gas under pressure or a solute in solution, but the term serves to emphasize that homeodynamics is an abstract process that can be realized in forms beyond the scope of classical thermodynamics. For example, the diffuse brain activity normally associated with emotional states can be considered a homeodynamic system because there is a general equilibrium state toward which its components (patterns of neural activity) distribute.[5] In general, a homeodynamic system is any collection of components that will spontaneously eliminate constraints by rearranging its parts until a maximum-entropy (maximally disordered) state is achieved.
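The spontaneous drift toward maximum entropy can be made concrete with a toy model of my own (an Ehrenfest-style urn scheme, not an example from the book): particles hop at random between two halves of a box, and any initial asymmetry dissolves toward equal occupancy.

```python
import random

# Ehrenfest-style toy homeodynamic system: N particles in a box with two
# halves; each step, one randomly chosen particle hops to the other half.
def simulate(n_particles=1000, steps=20000, seed=0):
    rng = random.Random(seed)
    left = n_particles  # fully constrained start: every particle on the left
    for _ in range(steps):
        if rng.random() < left / n_particles:
            left -= 1   # the chosen particle was on the left; it hops right
        else:
            left += 1   # it was on the right; it hops left
    return left

# The constrained state relaxes toward the 50/50 maximum-entropy state,
# fluctuating around n_particles / 2.
print(simulate())
```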

Morphodynamics

A morphodynamic system consists of a coupling of two homeodynamic systems such that the constraint dissipation of each complements the other, producing macroscopic order out of microscopic interactions. Morphodynamic systems require constant perturbation to maintain their structure, so they are relatively rare in nature. The paradigm example of a morphodynamic system is a Rayleigh–Bénard cell. Other common examples are snowflake formation, whirlpools, and the stimulated emission of laser light.


Maximum entropy production: The organized structure of a morphodynamic system forms to facilitate maximal entropy production. In the case of a Rayleigh–Bénard cell, heating the base of the liquid produces an uneven distribution of high-energy molecules, which tend to diffuse towards the surface. As the temperature of the heat source increases, density effects come into play: simple diffusion can no longer dissipate energy as fast as it is added, so the bottom of the liquid becomes hotter and more buoyant than the cooler, denser liquid at the top. The bottom of the liquid begins to rise and the top begins to sink, producing convection currents.
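The onset of convection described here is governed by a standard dimensionless quantity, the Rayleigh number; convection cells appear once it exceeds a critical value (about 1708 for a fluid layer between rigid plates). The sketch below uses approximate property values for room-temperature water, chosen purely for illustration.

```python
# Rough sketch: convection in a Rayleigh–Bénard cell sets in when the
# dimensionless Rayleigh number exceeds a critical value (~1708 for a
# layer between rigid plates). Property values approximate water at
# room temperature and are illustrative only.
def rayleigh_number(delta_T, depth,
                    g=9.81,          # gravitational acceleration, m/s^2
                    beta=2.1e-4,     # thermal expansion coefficient, 1/K
                    nu=1.0e-6,       # kinematic viscosity, m^2/s
                    kappa=1.4e-7):   # thermal diffusivity, m^2/s
    return g * beta * delta_T * depth**3 / (nu * kappa)

RA_CRITICAL = 1708  # onset of convection between rigid plates

# A gentle gradient across a thin layer: diffusion alone suffices.
print(rayleigh_number(0.01, 0.001) < RA_CRITICAL)   # True
# A strong gradient across a deeper layer: convection cells form.
print(rayleigh_number(10.0, 0.01) > RA_CRITICAL)    # True
```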

Two systems: The significant heat differential on the liquid produces two homeodynamic systems. The first is a diffusion system, in which high-energy molecules at the bottom collide with lower-energy molecules above until the added kinetic energy from the heat source is evenly distributed. The second is a convection system, in which the low-density fluid at the bottom mixes with the high-density fluid at the top until the density becomes evenly distributed. The second system arises when there is too much energy to be effectively dissipated by the first, and once both systems are in place, they begin to interact.

Self organization: The convection creates currents in the fluid that disrupt the pattern of heat diffusion from bottom to top. Heat begins to diffuse into the denser areas of current, irrespective of the vertical location of these denser portions of fluid. The areas of the fluid where diffusion is occurring most rapidly will be the most viscous because molecules are rubbing against each other in opposite directions. The convection currents will shun these areas in favor of parts of the fluid where they can flow more easily. And so the fluid spontaneously segregates itself into cells where high energy, low density fluid flows up from the center of the cell and cooler, denser fluid flows down along the edges, with diffusion effects dominating in the area between the center and the edge of each cell.

Synergy and constraint: What is notable about morphodynamic processes is that order spontaneously emerges precisely because the ordered system that results is more efficient at increasing entropy than a chaotic one. In the case of the Rayleigh–Bénard cell, neither diffusion nor convection on its own will produce as much entropy as the two effects coupled together. When both effects are brought into interaction, they constrain each other into a particular geometric form because that form facilitates minimal interference between the two processes. The orderly hexagonal form is stable as long as the energy differential persists, and yet the orderly form degrades the energy differential more effectively than any other. This is why morphodynamic processes in nature are usually so short-lived: they are self-organizing, but also self-undermining.

Teleodynamics

A teleodynamic system consists of a coupling of two morphodynamic systems such that the self-undermining quality of each is constrained by the other. Each system prevents the other from dissipating all of the available energy, and so long-term organizational stability is obtained. Deacon claims that we should pinpoint the moment when two morphodynamic systems reciprocally constrain each other as the point when ententional qualities like function, purpose, and normativity emerge.[6]

Autogenesis

Deacon explores the properties of teleodynamic systems by describing a chemically plausible model system called an autogen. Deacon emphasizes that the specific autogen he describes is not a proposed description of the first life form, but rather a description of the kinds of thermodynamic synergies that the first living creature likely possessed.[7]
 

Reciprocal catalysis: An autogen consists of two self-catalyzing, cyclical morphodynamic chemical reactions, similar to a chemoton. In one reaction, organic molecules react in a looped series, the products of one reaction becoming the reactants for the next. This looped reaction is self-amplifying, producing more and more product until all the substrate is consumed. A side product of this reciprocally catalytic loop is a lipid that can be used as a reactant in a second reaction. This second reaction creates a boundary (either a microtubule or some other closed, capsid-like structure) that serves to contain the first reaction. The boundary limits diffusion; it keeps all of the necessary catalysts in close proximity to one another. In addition, the boundary prevents the first reaction from completely consuming all of the available substrate in the environment.
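A hypothetical toy model (my own, with invented species and rate constants; not Deacon's chemistry) can show the synergy: an autocatalytic loop consumes substrate, sheds a lipid side product, and the lipid assembles into a boundary that throttles the loop, slowing substrate consumption.

```python
# Toy reciprocal catalysis: autocatalyst A consumes substrate S, sheds
# lipid L, and L assembles into a boundary B that throttles the loop.
# All species and rate constants are invented for illustration.
def run(steps=150, with_boundary=True, dt=0.01):
    S, A, L, B = 10.0, 0.1, 0.0, 0.0
    for _ in range(steps):
        containment = 1.0 / (1.0 + B)       # boundary slows the loop
        growth = 0.5 * A * S * containment  # autocatalysis: A makes more A
        S -= growth * dt
        A += growth * dt
        L += 0.2 * growth * dt              # lipid side product of the loop
        if with_boundary:
            assembled = 0.3 * L * dt        # lipids assemble into boundary
            L -= assembled
            B += assembled
    return S, A, L, B

S_open, _, _, _ = run(with_boundary=False)
S_closed, _, _, B = run(with_boundary=True)
# With the boundary in place, substrate is consumed more slowly:
print(S_closed > S_open, B > 0)
```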

The first self: Unlike an isolated morphodynamic process, whose organization rapidly eliminates the energy gradient necessary to maintain its structure, a teleodynamic process is self-limiting and self-preserving. The two reactions complement each other and ensure that neither ever runs to equilibrium, that is, to completion, cessation, and death. So, in a teleodynamic system there will be structures that embody a preliminary sketch of biological function. The internal reaction network functions to create the substrates for the boundary reaction, and the boundary reaction functions to protect and constrain the internal reaction network. Either process in isolation would be abiotic, but together they create a system with a normative status dependent on the functioning of its component parts.

Work

As with other concepts in the book, in his discussion of work Deacon seeks to generalize the Newtonian conception of work such that the term can be used to describe and differentiate mental phenomena - to describe "that which makes daydreaming effortless but metabolically equivalent problem solving difficult."[8] Work is generally described as "activity that is necessary to overcome resistance to change. Resistance can be either active or passive, and so work can be directed towards enacting change that wouldn't otherwise occur or preventing change that would happen in its absence."[9] Using the terminology developed earlier in the book, work can be considered to be "the organization of differences between orthograde processes such that a locus of contragrade process is created. Or, more simply, work is a spontaneous change inducing a non-spontaneous change to occur."[10]

Thermodynamic work

A thermodynamic system's capacity to do work depends less upon the total energy of the system than upon the geometric distribution of its components. A glass of water at 20 degrees Celsius has the same amount of energy as a glass divided in half, with the top half at 30 degrees and the bottom at 10, but only in the second glass does the top half have the capacity to do work upon the bottom. This is because work occurs at both macroscopic and microscopic levels. Microscopically, constant work is being performed by one molecule on another whenever they collide. But the potential for this microscopic work to sum to macroscopic work depends on there being an asymmetric distribution of particle speeds, so that the average collision pushes in a focused direction. Microscopic work is necessary but not sufficient for macroscopic work; a global property of asymmetric distribution is also required.
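The two-glasses example can be quantified with the textbook Carnot bound on extractable work (an idealization I am adding for illustration; the numbers are arbitrary): with no temperature asymmetry, no work can be extracted at all.

```python
# Sketch of the glass-of-water example: both glasses hold the same total
# energy, but only the asymmetric one can do macroscopic work. The maximum
# work extractable per unit of heat moved between the halves is bounded by
# the Carnot efficiency (an idealization; figures are illustrative).
def carnot_max_work(q_transferred, t_hot_k, t_cold_k):
    return q_transferred * (1.0 - t_cold_k / t_hot_k)

uniform = carnot_max_work(100.0, 293.15, 293.15)   # 20 °C everywhere
divided = carnot_max_work(100.0, 303.15, 283.15)   # 30 °C over 10 °C

print(uniform)   # 0.0 — no asymmetry, no work capacity
print(divided)   # ≈ 6.6 J of work per 100 J moved between the halves
```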

Morphodynamic work

By recognizing that asymmetry is a general property of work - that work is done as asymmetric systems spontaneously tend towards symmetry - Deacon abstracts the concept of work and applies it to systems whose symmetries are vastly more complex than those covered by classical thermodynamics. In a morphodynamic system, the tendency towards symmetry produces not global equilibrium, but a complex geometric form like a hexagonal Bénard cell or the resonant frequency of a flute. This tendency towards convolutedly symmetric forms can be harnessed to do work on other morphodynamic systems, if the systems are properly coupled.

Resonance example: A good example of morphodynamic work is the induced resonance that can be observed by singing or playing a flute next to a string instrument like a harp or guitar. The vibrating air emitted from the flute will interact with the taut strings. If any of the strings are tuned to a resonant frequency that matches the note being played, they too will begin to vibrate and emit sound.

Contragrade change: When energy is added to the flute by blowing air into it, there is a spontaneous (orthograde) tendency for the system to dissipate the added energy by inducing the air within the flute to vibrate at a specific frequency. This orthograde morphodynamic form generation can be used to induce contragrade change in the system coupled to it - the taut string. Playing the flute does work on the string by causing it to enter a high-energy state that it could not reach spontaneously in an uncoupled state.

Structure and form: Importantly, this is not just the macro-scale propagation of random micro-vibrations from one system to another. The global geometric structure of the system is essential. The total energy transferred from the flute to the string matters far less than the patterns it takes in transit. That is, the amplitude of the coupled note is irrelevant; what matters is its frequency. Notes with a higher or lower frequency than the resonant frequency of the string will not be able to do morphodynamic work.
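The frequency-selectivity of the flute-and-string coupling follows the standard steady-state response of a driven, damped oscillator. The sketch below uses that textbook formula with illustrative parameters (a string mode near 440 in arbitrary angular-frequency units); it is my addition, not an example from the book.

```python
import math

# Steady-state amplitude of a damped string mode driven at frequency
# w_drive (standard driven-oscillator formula; parameters illustrative).
def response_amplitude(w_drive, w_natural=440.0, damping=5.0, force=1.0):
    return force / math.sqrt((w_natural**2 - w_drive**2)**2
                             + (damping * w_drive)**2)

at_resonance = response_amplitude(440.0)    # matching note
off_resonance = response_amplitude(550.0)   # mismatched note

# Only a matching note does appreciable morphodynamic work on the string:
print(at_resonance > 10 * off_resonance)    # True
```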

Teleodynamic work

Work is generally defined as the interaction of two orthograde changing systems such that contragrade change is produced.[11] In teleodynamic systems, the spontaneous orthograde tendency is not to equilibrate (as in homeodynamic systems), nor to self-simplify (as in morphodynamic systems), but rather to tend towards self-preservation. Living organisms spontaneously tend to heal, to reproduce, and to pursue resources towards these ends. Teleodynamic work acts on these tendencies and pushes them in a contragrade, non-spontaneous direction.

Reading exemplifies the logic of teleodynamic work. A passive source of cognitive constraints is potentially provided by the letterforms on a page. A literate person has structured his or her sensory and cognitive habits to use such letterforms to reorganize the neural activities constituting thinking. This enables us to do teleodynamic work to shift mental tendencies away from those that are spontaneous (such as daydreaming) to those that are constrained by the text. Artist: Giovanni Battista Piazzetta (1682–1754).

Evolution as work: Natural selection, or perhaps more accurately, adaptation, can be considered a ubiquitous form of teleodynamic work. The orthograde self-preservation and reproduction tendencies of individual organisms tend to undermine those same tendencies in conspecifics. This competition produces a constraint that tends to mold organisms into forms that are more adapted to their environments – forms that would otherwise not spontaneously persist.

For example, in a population of New Zealand wrybills that make a living by searching for grubs under rocks, those with bent beaks gain access to more calories. Birds with bent beaks are better able to provide for their young, and at the same time they remove a disproportionate quantity of grubs from their environment, making it more difficult for those with straight beaks to provide for their own young. Throughout their lives, all the wrybills in the population do work to structure the form of the next generation. The increased efficiency of the bent beak causes that morphology to dominate the next generation. Thus an asymmetry of beak-shape distribution is produced in the population - an asymmetry produced by teleodynamic work.
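The wrybill example can be sketched as elementary selection dynamics (the fitness numbers below are invented for illustration): each generation, a beak type's share of the population grows in proportion to its relative fitness, so even a small foraging advantage lets the bent-beak morphology take over.

```python
# Minimal selection model for the wrybill example. Fitness values are
# invented: bent beaks are assumed to harvest slightly more calories.
def next_generation(p_bent, w_bent=1.1, w_straight=1.0):
    mean_fitness = p_bent * w_bent + (1 - p_bent) * w_straight
    return p_bent * w_bent / mean_fitness  # share after selection

p = 0.01  # bent beaks start as a rare variant
for _ in range(200):
    p = next_generation(p)

# The population's collective work produces an asymmetry in beak shape:
print(p > 0.99)  # True — bent beaks now dominate
```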

Thought as work: Mental problem solving can also be considered teleodynamic work. Thought forms are spontaneously generated, and the task of problem solving is the task of molding those forms to fit the context of the problem at hand. Deacon makes the link between evolution as teleodynamic work and thought as teleodynamic work explicit: "The experience of being sentient is what it feels like to be evolution."[12]

Emergent causal powers

By conceiving of work in this way, Deacon claims "we can begin to discern a basis for a form of causal openness in the universe."[13] While increases in complexity in no way alter the laws of physics, juxtaposing systems can make available pathways of spontaneous change that were inconceivably improbable before the systems were coupled. The causal power of any complex living system lies not solely in the underlying quantum mechanics but also in the global arrangement of its components. A careful arrangement of parts can constrain possibilities such that phenomena that were formerly impossibly rare become improbably common.

Information

One of the central purposes of Incomplete Nature is to articulate a theory of biological information. The first formal theory of information was articulated by Claude Shannon in 1948 in his work A Mathematical Theory of Communication. Shannon's work is widely credited with ushering in the information age, but somewhat paradoxically, it was completely silent on questions of meaning and reference, i.e., what the information is about. As an engineer, Shannon was concerned with the challenge of reliably transmitting a message from one location to another. The meaning and content of the message was largely irrelevant. So, while Shannon information theory has been essential for the development of devices like computers, it has left open many philosophical questions regarding the nature of information. Incomplete Nature seeks to answer some of these questions.

Shannon information

Shannon's key insight was to recognize a link between entropy and information. Entropy is often defined as a measure of disorder or randomness, but this can be misleading. For Shannon's purposes, the entropy of a system reflects the number of possible states that the system has the capacity to be in (formally, the logarithm of that number). Any one of these potential states can constitute a message. For example, a typewritten page can bear as many different messages as there are combinations of characters that can be arranged on the page. The information content of a message can only be understood against the background of all of the messages that could have been sent, but weren't. Information is produced by a reduction of entropy in the message medium.
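Shannon's measure can be sketched for the typewritten-page example (the character counts below are my assumptions, for illustration): a medium's capacity is the logarithm of the number of distinct messages it could carry, and receiving one particular message eliminates exactly that much uncertainty.

```python
import math

# Capacity of a typewritten page, in Shannon's sense. The alphabet size
# and page length are assumed figures for illustration.
alphabet = 90          # distinct characters a typewriter can produce
chars_per_page = 3000  # character positions on one page

bits_per_char = math.log2(alphabet)             # entropy per position
page_capacity_bits = chars_per_page * bits_per_char

print(round(bits_per_char, 2))   # ≈ 6.49 bits per character position
# Receiving one specific page reduces this entropy to zero: the
# information conveyed equals the entropy eliminated.
```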

Three nested conceptions of information

Boltzmann entropy

Shannon's information-based conception of entropy should be distinguished from the more classical thermodynamic conception of entropy developed by Ludwig Boltzmann and others at the end of the nineteenth century. While Shannon entropy is static and has to do with the set of all possible messages/states that a signal-bearing system might take, Boltzmann entropy has to do with the tendency of all dynamic systems to tend towards equilibrium. That is, there are many more ways for a collection of particles to be well mixed than to be segregated based on velocity, mass, or any other property. Boltzmann entropy is central to the theory of work developed earlier in the book because entropy dictates the direction in which a system will spontaneously tend.
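The "many more ways to be well mixed" claim is just combinatorics, as a small counted example (my illustration) shows: with 10 fast and 10 slow particles in 20 slots, the fully segregated arrangement is one microstate among a vastly larger total, and Boltzmann entropy S = k ln W grows with that microstate count W.

```python
import math

# Count microstates: choose which 10 of 20 slots hold the fast particles.
total_arrangements = math.comb(20, 10)  # all ways to place the fast ones
segregated = 1                          # exactly one: all fast on the left

# Boltzmann entropy S = k ln W grows with the number of microstates W,
# so the system drifts toward the overwhelmingly likelier mixed macrostate.
print(total_arrangements)  # 184756
```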

Significant information

Deacon's addition to Shannon information theory is to propose a method for describing not just how a message is transmitted, but also how it is interpreted. Deacon weaves together Shannon entropy and Boltzmann entropy in order to develop a theory of interpretation based in teleodynamic work. Interpretation is inherently normative. Data becomes information when it has significance for its interpreter. Thus interpretive systems are teleodynamic - the interpretive process is designed to perpetuate itself. "The interpretation of something as information indirectly reinforces the capacity to do this again."

Autopoiesis

 
3D representation of a living cell during the process of mitosis, example of an autopoietic system

The term autopoiesis (from Greek αὐτo- (auto-), meaning 'self', and ποίησις (poiesis), meaning 'creation, production') refers to a system capable of reproducing and maintaining itself. The term was introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela to define the self-maintaining chemistry of living cells. Since then the concept has also been applied to the fields of cognition, systems theory, and sociology.

The original definition can be found in Autopoiesis and Cognition: the Realization of the Living (1st edition 1973, 2nd 1980):[1]
Page 16: It was in these circumstances ... in which he analyzed Don Quixote's dilemma of whether to follow the path of arms (praxis, action) or the path of letters (poiesis, creation, production), I understood for the first time the power of the word "poiesis" and invented the word that we needed: autopoiesis. This was a word without a history, a word that could directly mean what takes place in the dynamics of the autonomy proper to living systems.

Page 78: An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network.[2]

Page 89: ... the space defined by an autopoietic system is self-contained and cannot be described by using dimensions that define another space. When we refer to our interactions with a concrete autopoietic system, however, we project this system on the space of our manipulations and make a description of this projection.

Meaning

Autopoiesis was originally presented as a system description that was said to define and explain the nature of living systems. A canonical example of an autopoietic system is the biological cell. The eukaryotic cell, for example, is made of various biochemical components such as nucleic acids and proteins, and is organized into bounded structures such as the cell nucleus, various organelles, a cell membrane and cytoskeleton. These structures, based on an external flow of molecules and energy, produce the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components (not unlike a wave propagating through a medium).

An autopoietic system is to be contrasted with an allopoietic system, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is something other than itself (the factory). However, if the system is extended from the factory to include components in the factory's "environment", such as supply chains, plant and equipment, workers, dealerships, customers, contracts, competitors, cars, spare parts, and so on, then as a total viable system it could be considered to be autopoietic.

Though others have often used the term as a synonym for self-organization, Maturana himself stated he would "[n]ever use the notion of self-organization ... Operationally it is impossible. That is, if the organization of a thing changes, the thing changes".[3] Moreover, an autopoietic system is autonomous and operationally closed, in the sense that there are sufficient processes within it to maintain the whole. Autopoietic systems are "structurally coupled" with their medium, embedded in a dynamic of changes that can be recalled as sensory-motor coupling. This continuous dynamic is considered as a rudimentary form of knowledge or cognition and can be observed throughout life-forms.

An application of the concept of autopoiesis to sociology can be found in Niklas Luhmann's Systems Theory, which was subsequently adapted by Bob Jessop in his studies of the capitalist state system. Marjatta Maula adapted the concept of autopoiesis in a business context. The theory of autopoiesis has also been applied in the context of legal systems by not only Niklas Luhmann, but also Gunther Teubner.[4][5]

In the context of textual studies, Jerome McGann argues that texts are "autopoietic mechanisms operating as self-generating feedback systems that cannot be separated from those who manipulate and use them".[6] Citing Maturana and Varela, he defines an autopoietic system as "a closed topological space that 'continuously generates and specifies its own organization through its operation as a system of production of its own components, and does this in an endless turnover of components'", concluding that "autopoietic systems are thus distinguished from allopoietic systems, which are Cartesian and which 'have as the product of their functioning something different from themselves'". Coding and markup appear allopoietic, McGann argues, but they are generative parts of the system they serve to maintain, and thus language and print or electronic technology are autopoietic systems.[7]

In his discussion of Hegel, the philosopher Slavoj Žižek argues, "Hegel is – to use today's terms – the ultimate thinker of autopoiesis, of the process of the emergence of necessary features out of chaotic contingency, the thinker of contingency's gradual self-organisation, of the gradual rise of order out of chaos."[8]

Relation to complexity

Autopoiesis can be defined as the ratio between the complexity of a system and the complexity of its environment.[9] This generalized view of autopoiesis considers systems as self-producing not in terms of their physical components, but in terms of their organization, which can be measured in terms of information and complexity. In other words, autopoietic systems are those that produce more of their own complexity than is produced by their environment.[10]
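The ratio-based definition can be written as a one-line formula (the notation is mine; the cited papers develop the underlying complexity measures in detail): a system counts as autopoietic to the degree that the ratio exceeds one.

```python
# A = C(system) / C(environment): autopoiesis as a complexity ratio.
# How complexity C is measured (e.g., with information-theoretic measures)
# is developed in the cited papers; the numbers here are placeholders.
def autopoiesis_ratio(system_complexity, environment_complexity):
    return system_complexity / environment_complexity

print(autopoiesis_ratio(0.8, 0.4))  # 2.0 > 1: more self-produced complexity
```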

Relation to cognition

An extensive discussion of the connection of autopoiesis to cognition is provided by Thompson.[11] The basic notion of autopoiesis as involving constructive interaction with the environment is extended to include cognition. Initially, Maturana defined cognition as behavior of an organism "with relevance to the maintenance of itself".[12] However, computer models that are self-maintaining but non-cognitive have been devised, so some additional restrictions are needed, and the suggestion is that the maintenance process, to be cognitive, involves readjustment of the internal workings of the system in some metabolic process. On this basis it is claimed that autopoiesis is a necessary but not a sufficient condition for cognition.[13] Thompson (p. 127) takes the view that this distinction may or may not be fruitful, but what matters is that living systems involve autopoiesis and (if it is necessary to add this point) cognition as well. It can be noted that this definition of 'cognition' is restricted, and does not necessarily entail any awareness or consciousness by the living system.

Relation to consciousness

The connection of autopoiesis to cognition, or if necessary, of living systems to cognition, is an objective assessment ascertainable by observation of a living system.

One question that arises is about the connection between cognition seen in this manner and consciousness. The separation of cognition and consciousness recognizes that the organism may be unaware of the substratum where decisions are made. What is the connection between these realms? Thompson refers to this issue as the "explanatory gap", and one aspect of it is the hard problem of consciousness, how and why we have qualia.[14]

A second question is whether autopoiesis can provide a bridge between these concepts. Thompson discusses this issue from the standpoint of enactivism. An autopoietic cell actively relates to its environment. Its sensory responses trigger motor behavior governed by autopoiesis, and this behavior (it is claimed) is a simplified version of a nervous system behavior. The further claim is that real-time interactions like this require attention, and an implication of attention is awareness.[15]

Criticism

There are multiple criticisms of the use of the term in both its original context, as an attempt to define and explain the living, and its various expanded usages, such as applying it to self-organizing systems in general or social systems in particular.[16] Critics have argued that the term fails to define or explain living systems and that, because of the extreme language of self-referentiality it uses without any external reference, it is really an attempt to give substantiation to Maturana's radical constructivist or solipsistic epistemology,[17] or what Danilo Zolo[18][19] has called instead a "desolate theology". An example is the assertion by Maturana and Varela that "We do not see what we do not see and what we do not see does not exist".[20] The autopoietic model, said Rod Swenson,[21] is "miraculously decoupled from the physical world by its progenitors ... (and thus) grounded on a solipsistic foundation that flies in the face of both common sense and scientific knowledge".

Using light instead of electrons promises faster, smaller, more-efficient computers and smartphones

December 1, 2017
Original link:  http://www.kurzweilai.net/using-light-instead-of-electrons-promises-faster-smaller-more-efficient-computers-and-smartphones

Trapped light for optical computation (credit: Imperial College London)

By forcing light to go through a smaller gap than ever before, a research team at Imperial College London has taken a step toward computers based on light instead of electrons.

Light would be preferable for computing because it can carry information at much higher density, is much faster, and is more efficient (generating little to no heat). But light beams don’t easily interact with one another. So information on high-speed fiber-optic cables (provided by your cable TV company, for example) currently has to be converted (via a modem or other device) into slower signals (electrons on wires or wireless signals) to allow devices such as computers and smartphones to process the data.

Electron-microscope image of an optical-computing nanofocusing device that is 25 nanometers wide and 2 micrometers long, using grating couplers (vertical lines) to interface with fiber-optic cables. (credit: Nielsen et al., 2017/Imperial College London)

To overcome that limitation, the researchers used metamaterials to squeeze light into a metal channel only 25 nanometers (billionths of a meter) wide, increasing its intensity and allowing photons to interact over the range of micrometers (millionths of meters) instead of centimeters.*

That means optical computation that previously required a centimeters-size device can now be realized on the micrometer (one millionth of a meter) scale, bringing optical processing into the size range of electronic transistors.

The results were published Thursday Nov. 30, 2017 in the journal Science.

* Normally, when two light beams cross each other, the individual photons do not interact or alter each other, as two electrons do when they meet. That means a long span of material is needed to gradually accumulate the effect and make it useful. Here, a “plasmonic nanofocusing” waveguide is used, strongly confining light within a nonlinear organic polymer.


Abstract of Giant nonlinear response at a plasmonic nanofocus drives efficient four-wave mixing

Efficient optical frequency mixing typically must accumulate over large interaction lengths because nonlinear responses in natural materials are inherently weak. This limits the efficiency of mixing processes owing to the requirement of phase matching. Here, we report efficient four-wave mixing (FWM) over micrometer-scale interaction lengths at telecommunications wavelengths on silicon. We used an integrated plasmonic gap waveguide that strongly confines light within a nonlinear organic polymer. The gap waveguide intensifies light by nanofocusing it to a mode cross-section of a few tens of nanometers, thus generating a nonlinear response so strong that efficient FWM accumulates over wavelength-scale distances. This technique opens up nonlinear optics to a regime of relaxed phase matching, with the possibility of compact, broadband, and efficient frequency mixing integrated with silicon photonics.
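The energy-conservation relation underlying four-wave mixing can be checked with a few lines of code. In the degenerate case, two pump photons are converted into a signal photon and an idler photon, so 2f_pump = f_signal + f_idler. The sketch below uses illustrative telecom-band wavelengths, not the actual experimental parameters from the paper:

```python
# Degenerate four-wave mixing: two pump photons convert into a signal
# photon and an idler photon, conserving energy: 2*f_pump = f_signal + f_idler.
# Wavelengths here are illustrative telecom-band values (assumed, not
# taken from the Imperial College experiment).

C = 299_792_458  # speed of light, m/s

def fwm_idler_wavelength(pump_nm: float, signal_nm: float) -> float:
    """Idler wavelength (nm) for degenerate FWM, from energy conservation."""
    f_pump = C / (pump_nm * 1e-9)
    f_signal = C / (signal_nm * 1e-9)
    f_idler = 2 * f_pump - f_signal
    return C / f_idler * 1e9

print(round(fwm_idler_wavelength(1550.0, 1545.0), 2))  # → 1555.03
```

The point of the nanofocusing waveguide is not to change this relation but to make the nonlinear response strong enough that the idler builds up over micrometers rather than centimeters.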

Health 2.0

From Wikipedia, the free encyclopedia
 
"Health 2.0" is a term introduced in the mid-2000s, as the subset of health care technologies mirroring the wider Web 2.0 movement. It has been defined variously as including social media, user-generated content, and cloud-based and mobile technologies. Some Health 2.0 proponents see these technologies as empowering patients to have greater control over their own health care and diminishing medical paternalism. Critics of the technologies have expressed concerns about possible misinformation and violations of patient privacy.

History

Health 2.0 built on the possibilities for changing health care, which started with the introduction of eHealth in the mid-1990s following the emergence of the World Wide Web. In the mid-2000s, following the widespread adoption both of the Internet and of easy-to-use tools for communication, social networking, and self-publishing, there was a spate of media attention to, and increasing interest from, patients, clinicians, and medical librarians in using these tools for health care and medical purposes.[1][2]

Early examples of Health 2.0 were the use of a specific set of Web tools (blogs, email list-servs, online communities, podcasts, search, tagging, Twitter, videos, wikis, and more) by actors in health care including doctors, patients, and scientists, using principles of open source and user-generated content, and the power of networks and social networks in order to personalize health care, to collaborate, and to promote health education.[3] Possible explanations why health care has generated its own "2.0" term are the availability and proliferation of Health 2.0 applications across health care in general, and the potential for improving public health in particular.[4]

Current use

While the "2.0" moniker was originally associated with concepts like collaboration, openness, participation, and social networking,[5] in recent years the term "Health 2.0" has evolved to mean the role of SaaS and cloud-based technologies, and their associated applications on multiple devices. Health 2.0 describes the integration of these into much of general clinical and administrative workflow in health care. As of 2014, approximately 3,000 companies were offering products and services matching this definition, with venture capital funding in the sector exceeding $2.3 billion in 2013.[6]

Definitions

The "traditional" definition of "Health 2.0" focused on technology as an enabler for care collaboration: "The use of social software and light-weight tools to promote collaboration between patients, their caregivers, medical professionals, and other stakeholders in health."[7]

In 2011, Indu Subaiya redefined Health 2.0[8] as the use in health care of new cloud, SaaS, mobile, and device technologies that are:
  1. Adaptable technologies which easily allow other tools and applications to link and integrate with them, primarily through use of accessible APIs
  2. Focused on the user experience, bringing in the principles of user-centered design
  3. Data driven, in that they both create data and present data to the user in order to help improve decision making
This wider definition allows recognition of what is or is not a Health 2.0 technology. Typically, enterprise-based, customized client-server systems are not, while more open, cloud-based systems fit the definition. However, this line was blurring by 2011–2012 as more enterprise vendors started to introduce cloud-based systems and native applications for new devices like smartphones and tablets.

In addition, Health 2.0 has several competing terms, each with its own followers—if not exact definitions—including Connected Health, Digital Health, Medicine 2.0, and mHealth. All of these support a goal of wider change to the health care system, using technology-enabled system reform—usually changing the relationship between patient and professional—characterized by:
  1. Personalized search that looks into the long tail but cares about the user experience
  2. Communities that capture the accumulated knowledge of patients, caregivers, and clinicians, and explains it to the world
  3. Intelligent tools for content delivery—and transactions
  4. Better integration of data with content

Wider health system definitions

In the late 2000s, several commentators used Health 2.0 as a moniker for a wider concept of system reform, seeking a participatory process between patient and clinician: "New concept of health care wherein all the constituents (patients, physicians, providers, and payers) focus on health care value (outcomes/price) and use competition at the medical condition level over the full cycle of care as the catalyst for improving the safety, efficiency, and quality of health care".[9]

Health 2.0 defines the combination of health data and health information with (patient) experience, through the use of ICT, enabling the citizen to become an active and responsible partner in his/her own health and care pathway.[10]

Health 2.0 is participatory healthcare. Enabled by information, software, and communities that we collect or create, we the patients can be effective partners in our own healthcare, and we the people can participate in reshaping the health system itself.[11]

Definitions of Medicine 2.0 appear to be very similar, but typically include more scientific and research aspects: "Medicine 2.0 applications, services and tools are Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies as well as semantic web and virtual reality tools, to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups."[12][13] A systematic review by Tom Van de Belt, Lucien Engelen et al., published in JMIR, found 46 unique definitions of Health 2.0.[14]

Overview

A model of Health 2.0

Health 2.0 refers to the use of a diverse set of technologies including Connected Health, electronic medical records, mHealth, telemedicine, and the use of the Internet by patients themselves such as through blogs, Internet forums, online communities, patient to physician communication systems, and other more advanced systems.[15][16] A key concept is that patients themselves should have greater insight into, and control over, the information generated about them. Additionally, Health 2.0 relies on the use of modern cloud- and mobile-based technologies.

Much of the potential for change from Health 2.0 is facilitated by combining technology driven trends such as Personal Health Records with social networking —"[which] may lead to a powerful new generation of health applications, where people share parts of their electronic health records with other consumers and 'crowdsource' the collective wisdom of other patients and professionals."[5] Traditional models of medicine had patient records (held on paper or a proprietary computer system) that could only be accessed by a physician or other medical professional. Physicians acted as gatekeepers to this information, telling patients test results when and if they deemed it necessary. Such a model operates relatively well in situations such as acute care, where information about specific blood results would be of little use to a lay person, or in general practice where results were generally benign. However, in the case of complex chronic diseases, psychiatric disorders, or diseases of unknown etiology patients were at risk of being left without well-coordinated care because data about them was stored in a variety of disparate places and in some cases might contain the opinions of healthcare professionals which were not to be shared with the patient. Increasingly, medical ethics deems such actions to be medical paternalism, and they are discouraged in modern medicine.[17][18]

A hypothetical example demonstrates the increased engagement of a patient operating in a Health 2.0 setting: a patient goes to see their primary care physician with a presenting complaint, having first ensured their own medical record was up to date via the Internet. The treating physician might make a diagnosis or send for tests, the results of which could be transmitted directly to the patient's electronic medical record. If a second appointment is needed, the patient will have had time to research what the results might mean for them, what diagnoses may be likely, and may have communicated with other patients who have had a similar set of results in the past. On a second visit a referral might be made to a specialist. The patient might have the opportunity to search for the views of other patients on the best specialist to go to, and in combination with their primary care physician decides who to see. The specialist gives a diagnosis along with a prognosis and potential options for treatment. The patient has the opportunity to research these treatment options and take a more proactive role in coming to a joint decision with their healthcare provider. They can also choose to submit more data about themselves, such as through a personalized genomics service to identify any risk factors that might improve or worsen their prognosis. As treatment commences, the patient can track their health outcomes through a data-sharing patient community to determine whether the treatment is having an effect for them, and they can stay up to date on research opportunities and clinical trials for their condition. They also have the social support of communicating with other patients diagnosed with the same condition throughout the world.

Level of use of Web 2.0 in health care

Partly due to weak definitions, the novelty of the endeavor and its nature as an entrepreneurial (rather than academic) movement, little empirical evidence exists to explain how much Web 2.0 is being used in general. While it has been estimated that nearly one-third of the 100 million Americans who have looked for health information online say that they or people they know have been significantly helped by what they found,[19] this study considers only the broader use of the Internet for health management.

A study examining physician practices has suggested that a segment of 245,000 physicians in the U.S. are using Web 2.0 in their practice, indicating that use has moved beyond the early-adopter stage among physicians.[20]

Types of Web 2.0 technology in health care

Web 2.0 is commonly associated with technologies such as podcasts, RSS feeds, social bookmarking, weblogs (health blogs), wikis, and other forms of many-to-many publishing; social software; and web application programming interfaces (APIs).[21]

The following are examples of uses that have been documented in academic literature.

Purpose: Staying informed
Description: Used to stay informed of the latest developments in a particular field
Case example in academic literature: Podcasts, RSS, and search tools[2]
Users: All (medical professionals and public)

Purpose: Medical education
Description: Used for professional development by doctors, and for public health promotion by public health professionals and the general public
Case example in academic literature: How podcasts can be used on the move to increase total available educational time,[22] or the many applications of these tools to public health[23]
Users: All (medical professionals and public)

Purpose: Collaboration and practice
Description: Web 2.0 tools used in daily practice by medical professionals to find information and make decisions
Case example in academic literature: Google searches revealed the correct diagnosis in 15 out of 26 cases (58%, 95% confidence interval 38% to 77%) in a 2005 study[24]
Users: Doctors, nurses

Purpose: Managing a particular disease
Description: Patients who use search tools to find information about a particular condition
Case example in academic literature: Patients have different patterns of usage depending on whether they are newly diagnosed or managing a severe long-term illness; long-term patients are more likely to connect to a community in Health 2.0[25]
Users: Public

Purpose: Sharing data for research
Description: Completing patient-reported outcomes and aggregating the data for personal and scientific research
Case example in academic literature: Disease-specific communities for patients with rare conditions aggregate data on treatments, symptoms, and outcomes to improve their decision-making ability and carry out scientific research such as observational trials[26]
Users: All (medical professionals and public)

Criticism of the use of Web 2.0 in health care

Hughes et al. (2009) argue there are four major tensions represented in the literature on Health/Medicine 2.0. These concern:[3]
  1. the lack of clear definitions
  2. issues around the loss of control over information that doctors perceive
  3. safety and the dangers of inaccurate information
  4. issues of ownership and privacy
Several criticisms have been raised about the use of Web 2.0 in health care. Firstly, Google has limitations as a diagnostic tool for medical doctors, as it may be effective only for conditions with unique symptoms and signs that can easily be used as search terms.[24] Studies of its accuracy have returned varying results, and this remains in dispute.[27] Secondly, long-held concerns exist about the effects of patients obtaining information online, such as the idea that patients may delay seeking medical advice[28] or accidentally reveal private medical data.[29][30] Finally, concerns exist about the quality of user-generated content leading to misinformation,[31][32] such as perpetuating the discredited claim that the MMR vaccine may cause autism.[33] In contrast, a 2004 study of a British epilepsy online support group suggested that only 6% of information was factually wrong.[34] In a 2007 Pew Research Center survey of Americans, only 3% reported that online advice had caused them serious harm, while nearly one-third reported that they or their acquaintances had been helped by online health advice.

New technology allows robots to visualize their own future

December 6, 2017
Original link:  http://www.kurzweilai.net/new-technology-allows-robots-to-visualize-their-own-future
UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. It could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes.
The initial prototype focuses on learning simple manual skills entirely from autonomous play — similar to how children can learn about their world by playing with toys, moving them around, grasping, etc.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now — predictions made only several seconds into the future — but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.
The robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment, or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised (no humans involved) exploration, where the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on Monday, December 4, 2017.

Learning by playing: how it works

Robot’s imagined predictions (credit: UC Berkeley)

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next, based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
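The pixel-advection idea behind DNA can be illustrated without any learning machinery: the predicted next frame is a weighted combination of shifted copies of the current frame. In the real model the shift weights come from a learned recurrent network conditioned on the robot's action; the toy sketch below hand-sets them purely to show the mechanism, and does not reflect the Berkeley implementation:

```python
import numpy as np

# Toy sketch of pixel advection: compose the next frame as a weighted
# sum of shifted copies of the current frame. In dynamic neural advection
# (DNA) these shift weights are predicted per pixel by a learned network
# conditioned on the robot's action; here they are fixed by hand.

def advect(frame: np.ndarray, shift_weights: dict) -> np.ndarray:
    """Predict the next frame as a weighted sum of shifted copies.

    shift_weights maps (dy, dx) shifts to weights that sum to 1.
    """
    pred = np.zeros_like(frame, dtype=float)
    for (dy, dx), w in shift_weights.items():
        pred += w * np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return pred

frame = np.zeros((4, 4))
frame[1, 1] = 1.0            # a single bright pixel
weights = {(0, 1): 1.0}      # "move everything one pixel to the right"
next_frame = advect(frame, weights)
print(next_frame[1, 2])      # the pixel has moved one column right
```

Chaining such predictions over several steps, with weights driven by candidate action sequences, is what lets the robot compare imagined futures and pick the actions that move an object where it wants.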

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model from raw camera observations to teach themselves how to avoid obstacles and push objects around obstructions.

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. Building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously.

That contrasts with conventional computer-vision methods, which require humans to manually label thousands or even millions of images.

Gaia hypothesis

From Wikipedia, the free encyclopedia
The study of planetary habitability is partly based upon extrapolation from knowledge of the Earth's conditions, as the Earth is the only planet currently known to harbour life.

The Gaia hypothesis (/ˈɡaɪ.ə/ GHY-ə, /ˈɡeɪ.ə/ GAY-ə), also known as the Gaia theory or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet.

The hypothesis was formulated by the chemist James Lovelock[1] and co-developed by the microbiologist Lynn Margulis in the 1970s.[2] Lovelock named the idea after Gaia, the primordial goddess who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis.[3]

Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth.

The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology. Lovelock also once described the "geophysiology" of the Earth. Even so, the Gaia hypothesis continues to attract criticism, and today some scientists consider it to be only weakly supported by, or at odds with, the available evidence.

Introduction

Gaian hypotheses suggest that organisms co-evolve with their environment: that is, they "influence their abiotic environment, and that environment in turn influences the biota by Darwinian process". Lovelock (1995) gave evidence of this in his second book, showing the evolution from the world of the early thermo-acido-philic and methanogenic bacteria towards the oxygen-enriched atmosphere today that supports more complex life.

A reduced version of the hypothesis has been called "influential Gaia"[11] in "Directed Evolution of the Biosphere: Biogeochemical Selection or Gaia?" by Andrei G. Lapenis; it states that the biota influence certain aspects of the abiotic world, e.g. temperature and atmosphere. The publication is not the work of an individual but a synthesis of a collective body of Russian scientific research into one peer-reviewed paper. It describes the coevolution of life and the environment through "micro-forces"[11] and biogeochemical processes. An example is how the activity of photosynthetic bacteria during Precambrian times completely modified the Earth's atmosphere, turning it aerobic and thereby supporting the evolution of life (in particular eukaryotic life).

Since barriers existed throughout the twentieth century between Russia and the rest of the world, it is only relatively recently that the early Russian scientists who introduced concepts overlapping the Gaia hypothesis have become better known to the Western scientific community.[11] These scientists include:
  1. Piotr Alekseevich Kropotkin (1842–1921)
  2. Rafail Vasil’evich Rizpolozhensky (1847–1918)
  3. Vladimir Ivanovich Vernadsky (1863–1945)
  4. Vladimir Alexandrovich Kostitzin (1886–1963)
Biologists and Earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions may have counterbalancing effects on environmental change. Opponents of this view sometimes reference examples of events that resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one at the end of the Archaean and the beginning of the Proterozoic eons.

Less accepted versions of the hypothesis claim that changes in the biosphere are brought about through the coordination of living organisms and maintain those conditions through homeostasis. In some versions of Gaia philosophy, all lifeforms are considered part of one single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms.

Details

The Gaia hypothesis posits that the Earth is a self-regulating complex system involving the biosphere, the atmosphere, the hydrospheres and the pedosphere, tightly coupled as an evolving system. The hypothesis contends that this system as a whole, called Gaia, seeks a physical and chemical environment optimal for contemporary life.[12]

Gaia evolves through a cybernetic feedback system operated unconsciously by the biota, leading to broad stabilization of the conditions of habitability in a full homeostasis. Many processes in the Earth's surface essential for the conditions of life depend on the interaction of living forms, especially microorganisms, with inorganic elements. These processes establish a global control system that regulates Earth's surface temperature, atmosphere composition and ocean salinity, powered by the global thermodynamic disequilibrium state of the Earth system.[13]

The existence of a planetary homeostasis influenced by living forms had been observed previously in the field of biogeochemistry, and it is being investigated also in other fields like Earth system science. The originality of the Gaia hypothesis relies on the assessment that such homeostatic balance is actively pursued with the goal of keeping the optimal conditions for life, even when terrestrial or external events menace them.[14]

Regulation of global surface temperature

Rob Rohde's palaeotemperature graphs

Since life started on Earth, the energy provided by the Sun has increased by 25% to 30%;[15] however, the surface temperature of the planet has remained within the levels of habitability, staying within fairly regular low and high margins. Lovelock has also hypothesised that methanogens produced elevated levels of methane in the early atmosphere, giving a view similar to that found in petrochemical smog, similar in some respects to the atmosphere on Titan.[7] This, he suggests, tended to screen out ultraviolet radiation until the formation of the ozone layer, maintaining a degree of homeostasis. However, Snowball Earth[16] research has suggested that "oxygen shocks" and reduced methane levels led, during the Huronian, Sturtian and Marinoan/Varanger ice ages, to a world that very nearly became a solid "snowball". These epochs are evidence against the ability of the pre-Phanerozoic biosphere to fully self-regulate.
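The scale of the "faint young Sun" problem behind this regulation claim can be estimated from simple radiative balance: a planet's effective temperature is T = (S(1 - A) / 4σ)^(1/4). The solar constant and albedo below are illustrative round numbers, not figures from the article:

```python
# Rough energy-balance estimate of Earth's effective (blackbody)
# temperature under today's Sun and a 25%-fainter early Sun.
# Solar constant (1361 W/m^2) and albedo (0.3) are illustrative
# modern round values, not data from the article.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(solar_constant: float, albedo: float = 0.3) -> float:
    """Planetary effective temperature (K) from radiative balance."""
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

today = effective_temperature(1361.0)          # about 255 K
early = effective_temperature(1361.0 * 0.75)   # Sun ~25% fainter
print(round(today - early, 1))                 # a drop of roughly 18 K
```

That roughly 18 K gap, before any greenhouse effect is counted, is what some mechanism, Gaian or otherwise, must have compensated for liquid water to persist throughout Earth's history.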

Processing of the greenhouse gas CO2, explained below, plays a critical role in maintaining the Earth's temperature within the limits of habitability.

The CLAW hypothesis, inspired by the Gaia hypothesis, proposes a feedback loop that operates between ocean ecosystems and the Earth's climate.[17] The hypothesis specifically proposes that particular phytoplankton that produce dimethyl sulfide are responsive to variations in climate forcing, and that these responses lead to a negative feedback loop that acts to stabilise the temperature of the Earth's atmosphere.

Currently, the increase in the human population and the environmental impact of its activities, such as the multiplication of greenhouse gases, may cause negative feedbacks in the environment to become positive feedbacks. Lovelock has stated that this could bring extremely accelerated global warming,[18] but he has since stated that the effects will likely occur more slowly.[19]

Daisyworld simulations

Plots from a standard black & white Daisyworld simulation

James Lovelock and Andrew Watson developed the mathematical model Daisyworld, in which temperature regulation arises from a simple ecosystem consisting of two species whose activity varies in response to the planet's environment. The model demonstrates that beneficial feedback mechanisms can emerge in this "toy world" containing only self-interested organisms rather than through classic group selection mechanisms.[20]

Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies. The colour of the daisies influences the albedo of the planet such that black daisies absorb light and warm the planet, while white daisies reflect light and cool the planet. As the model runs the output of the "sun" increases, meaning that the surface temperature of an uninhabited "gray" planet will steadily rise. In contrast, on Daisyworld competition between the daisies (based on temperature-effects on growth rates) leads to a shifting balance of daisy populations that tends to favour a planetary temperature close to the optimum for daisy growth.
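The feedback can be reproduced in a few dozen lines. The sketch below uses the commonly cited Watson-Lovelock (1983) textbook parameter values (solar flux 917 W/m², heat-transfer constant 2.06e9 K^4, death rate 0.3, parabolic growth peaking at 295.5 K); treat it as an illustrative toy integration, not a reproduction of any particular published run:

```python
# Minimal Daisyworld sketch (after Watson & Lovelock, 1983).
# Parameter values are the commonly cited textbook ones; this is an
# illustrative toy, not a reproduction of a published figure.

SIGMA = 5.670374419e-8                        # Stefan-Boltzmann constant
S = 917.0                                     # model's solar flux, W/m^2
Q = 2.06e9                                    # heat-transfer parameter, K^4
GAMMA = 0.3                                   # daisy death rate
A_BARE, A_WHITE, A_BLACK = 0.5, 0.75, 0.25    # albedos

def growth(t_local: float) -> float:
    """Parabolic growth rate: optimal at 295.5 K, zero outside ~278-313 K."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - t_local) ** 2)

def daisyworld(luminosity: float, steps: int = 2000, dt: float = 0.05):
    """Integrate daisy populations; return (planet temp in C, white, black)."""
    aw, ab = 0.01, 0.01                       # initial daisy area fractions
    for _ in range(steps):
        bare = 1.0 - aw - ab
        a_planet = bare * A_BARE + aw * A_WHITE + ab * A_BLACK
        t_planet4 = S * luminosity * (1 - a_planet) / SIGMA
        # local temperatures: black patches run hot, white patches cool
        tw = (Q * (a_planet - A_WHITE) + t_planet4) ** 0.25
        tb = (Q * (a_planet - A_BLACK) + t_planet4) ** 0.25
        aw += dt * aw * (bare * growth(tw) - GAMMA)
        ab += dt * ab * (bare * growth(tb) - GAMMA)
        aw, ab = max(aw, 0.001), max(ab, 0.001)   # keep a small seed stock
    return (t_planet4 ** 0.25) - 273.15, aw, ab

for lum in (0.8, 1.0, 1.2):
    temp_c, aw, ab = daisyworld(lum)
    print(f"L={lum}: T={temp_c:.1f} C, white={aw:.2f}, black={ab:.2f}")
```

Running it across luminosities shows the expected pattern: warmth-generating black daisies dominate under a faint sun, reflective white daisies under a bright one, and in both cases the planetary temperature is pulled closer to the daisies' growth optimum than a bare planet's would be.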

It has been suggested that the results were predictable because Lovelock and Watson selected examples that produced the responses they desired.[21]

Regulation of oceanic salinity

Ocean salinity has been constant at about 3.5% for a very long time.[22] Salinity stability in oceanic environments is important, as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constant ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested[23] that salinity may also be strongly influenced by seawater circulation through hot basaltic rocks, which emerges at hot-water vents on mid-ocean ridges. However, the composition of seawater is far from equilibrium, and it is difficult to explain this fact without the influence of organic processes. One suggested explanation lies in the formation of salt plains throughout Earth's history. It is hypothesized that these are created by bacterial colonies that fix ions and heavy metals during their life processes.[22]

In the biogeochemical processes of the Earth, sources and sinks describe the movement of elements. The salt ions within our oceans and seas are sodium (Na+), chloride (Cl−), sulfate (SO42−), magnesium (Mg2+), calcium (Ca2+) and potassium (K+). The elements that comprise salinity do not readily change and are a conservative property of seawater.[22] There are many mechanisms that change salinity from a particulate form to a dissolved form and back. The known source of sodium, i.e. salts, is the weathering, erosion, and dissolution of rocks, whose products are transported into rivers and deposited into the oceans.

In 2001, Kenneth J. Hsü proposed that the Mediterranean Sea functions as Gaia's "kidney". The "desiccation" of the Mediterranean is cited as evidence of a functioning kidney. Earlier "kidney functions" were performed during the "deposition of the Cretaceous (South Atlantic), Jurassic (Gulf of Mexico), Permo-Triassic (Europe), Devonian (Canada), Cambrian/Precambrian (Gondwana) saline giants."[24]

Regulation of oxygen in the atmosphere

Levels of gases in the atmosphere in 420,000 years of ice core data from Vostok, Antarctica research station. Current period is at the left.

The Gaia hypothesis states that the Earth's atmospheric composition is kept at a dynamically steady state by the presence of life.[25] The atmospheric composition provides the conditions that contemporary life has adapted to. All atmospheric gases other than the noble gases are either made by organisms or processed by them.

The stability of Earth's atmosphere is not a consequence of chemical equilibrium. Oxygen is a reactive gas and should eventually combine with the gases and minerals of the Earth's atmosphere and crust. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event.[26] Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 35% of atmospheric volume.[27] Traces of methane (at an amount of 100,000 tonnes produced per year)[28] should not exist, as methane is combustible in an oxygen atmosphere.

Dry air in the atmosphere of Earth contains roughly (by volume) 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide, and small amounts of other gases including methane. Lovelock originally speculated that oxygen concentrations above about 25% would increase the frequency of wildfires and the conflagration of forests. Recent work on fire-caused charcoal found in Carboniferous and Cretaceous coal measures, from geologic periods when O2 did exceed 25%, has supported Lovelock's contention.[citation needed]
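As a quick arithmetic check, the volume fractions quoted above reproduce the standard mean molar mass of dry air (about 28.96 g/mol). This is a small illustrative calculation, not part of the original article; the molar masses are standard values and the roughly 0.04% trace remainder is ignored.

```python
# Volume fractions of dry air as quoted in the text; trace gases ignored.
FRACTIONS = {"N2": 0.7809, "O2": 0.2095, "Ar": 0.0093, "CO2": 0.00039}
MOLAR_MASS = {"N2": 28.013, "O2": 31.998, "Ar": 39.948, "CO2": 44.010}  # g/mol

def mean_molar_mass():
    """Volume-weighted mean molar mass of the quoted gas mixture."""
    return sum(f * MOLAR_MASS[g] for g, f in FRACTIONS.items())
```

The sum comes to roughly 28.97 g/mol, close to the accepted 28.96 g/mol for dry air, which is a simple consistency check on the quoted composition.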

Processing of CO2

Gaia scientists see the participation of living organisms in the carbon cycle as one of the complex processes that maintain conditions suitable for life. The only significant natural source of atmospheric carbon dioxide (CO2) is volcanic activity, while the only significant removal is through the precipitation of carbonate rocks.[29] Carbon precipitation, solution, and fixation are influenced by bacteria and plant roots in soils, where they improve gaseous circulation, and in coral reefs, where calcium carbonate is deposited as a solid on the sea floor. Calcium carbonate is used by living organisms to manufacture calcareous tests and shells. When these organisms die, their shells fall to the bottom of the oceans, where they generate deposits of chalk and limestone.

One of these organisms is Emiliania huxleyi, an abundant coccolithophore alga which also has a role in the formation of clouds.[30] Excess CO2 is compensated by an increase in coccolithophore populations, increasing the amount of CO2 locked in the ocean floor. Coccolithophores also increase cloud cover, helping to control the surface temperature, cool the whole planet, and promote the precipitation necessary for terrestrial plants.[citation needed] Recently the atmospheric CO2 concentration has increased, and there is some evidence that concentrations of oceanic algal blooms are also increasing.[31]

Lichens and other organisms accelerate the weathering of rocks at the surface, while the decomposition of rocks also proceeds faster in the soil thanks to the activity of roots, fungi, bacteria and subterranean animals. The flow of carbon dioxide from the atmosphere to the soil is therefore regulated with the help of living beings. When CO2 levels rise in the atmosphere, the temperature increases and plants grow. This growth brings higher consumption of CO2 by the plants, which process it into the soil, removing it from the atmosphere.
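The last three sentences describe a negative feedback loop: more CO2 means more warmth, more warmth means faster plant growth, and faster growth draws CO2 back down. The toy sketch below makes the loop concrete; every constant in it is arbitrary and chosen only to make the feedback visible, so this is not a calibrated carbon-cycle model.

```python
C0, T0 = 280.0, 288.0   # reference CO2 (ppm) and temperature (K), illustrative
EMISSIONS = 1.0         # constant CO2 source (arbitrary units per year)

def temperature(C):
    """Warming rises with CO2 (linearised, arbitrary sensitivity)."""
    return T0 + 0.01 * (C - C0)

def uptake(C):
    """Plant/weathering sink: scales with CO2 and speeds up when warmer."""
    return EMISSIONS * (C / C0) * (1.0 + 0.05 * (temperature(C) - T0))

def run(C, years=2000, dt=0.1):
    """Integrate dC/dt = source - sink with forward Euler steps."""
    for _ in range(int(years / dt)):
        C += dt * (EMISSIONS - uptake(C))
    return C
```

Started from either an excess (400 ppm) or a deficit (200 ppm), the system relaxes back towards the reference level, illustrating how a temperature-mediated sink can stabilise atmospheric CO2.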

History

Precedents

"Earthrise" taken from Apollo 8 on December 24, 1968

The idea of the Earth as an integrated whole, a living being, has a long tradition. The mythical Gaia was the primal Greek goddess personifying the Earth, the Greek version of "Mother Nature" (from Ge = Earth, and Aia = PIE grandmother), or the Earth Mother. James Lovelock gave this name to his hypothesis after a suggestion from the novelist William Golding, who was living in the same village as Lovelock at the time (Bowerchalke, Wiltshire, UK). Golding's advice was based on Gea, an alternative spelling for the name of the Greek goddess, which is used as prefix in geology, geophysics and geochemistry.[32] Golding later made reference to Gaia in his Nobel prize acceptance speech.

In the eighteenth century, as geology consolidated as a modern science, James Hutton maintained that geological and biological processes are interlinked.[33] Later, the naturalist and explorer Alexander von Humboldt recognized the coevolution of living organisms, climate, and Earth's crust.[33] In the twentieth century, Vladimir Vernadsky formulated a theory of Earth's development that is now one of the foundations of ecology. Vernadsky was a Ukrainian geochemist and was one of the first scientists to recognize that the oxygen, nitrogen, and carbon dioxide in the Earth's atmosphere result from biological processes. During the 1920s he published works arguing that living organisms could reshape the planet as surely as any physical force. Vernadsky was a pioneer of the scientific bases for the environmental sciences.[34] His visionary pronouncements were not widely accepted in the West, and some decades later the Gaia hypothesis received the same type of initial resistance from the scientific community.

Around the turn of the 20th century, Aldo Leopold, a pioneer in the development of modern environmental ethics and in the movement for wilderness conservation, suggested the idea of a living Earth in his biocentric or holistic ethics regarding land.
It is at least not impossible to regard the earth's parts—soil, mountains, rivers, atmosphere, etc.—as organs or parts of organs of a coordinated whole, each part with its definite function. And if we could see this whole, as a whole, through a great period of time, we might perceive not only organs with coordinated functions, but possibly also that process of consumption as replacement which in biology we call metabolism, or growth. In such case we would have all the visible attributes of a living thing, which we do not realize to be such because it is too big, and its life processes too slow.
— Aldo Leopold, Animate Earth.[35]
Another influence for the Gaia hypothesis and the environmental movement in general came as a side effect of the Space Race between the Soviet Union and the United States of America. During the 1960s, the first humans in space could see how the Earth looked as a whole. The photograph Earthrise, taken by astronaut William Anders in 1968 during the Apollo 8 mission, became, through the Overview Effect, an early symbol for the global ecology movement.[36]

Formulation of the hypothesis


Lovelock started defining the idea of a self-regulating Earth controlled by the community of living organisms in September 1965, while working at the Jet Propulsion Laboratory in California on methods of detecting life on Mars.[37][38] The first paper to mention it was Planetary Atmospheres: Compositional and other Changes Associated with the Presence of Life, co-authored with C.E. Giffin.[39] A main concept was that life could be detected on a planetary scale by the chemical composition of the atmosphere. According to the data gathered by the Pic du Midi observatory, planets like Mars or Venus had atmospheres in chemical equilibrium. The contrast with Earth's atmosphere was considered proof that there was no life on these planets.

Lovelock formulated the Gaia hypothesis in journal articles in 1972[1] and 1974,[2] followed by the popularizing 1979 book Gaia: A New Look at Life on Earth. An article in the New Scientist of February 6, 1975,[40] and a popular book-length version of the hypothesis, published in 1979 as The Quest for Gaia, began to attract scientific and critical attention.

Lovelock initially called it the Earth feedback hypothesis;[41] it was a way to explain the fact that combinations of chemicals, including oxygen and methane, persist in stable concentrations in the atmosphere of the Earth. Lovelock suggested detecting such combinations in other planets' atmospheres as a relatively reliable and cheap way to detect life.


Later, other relationships, such as the production by sea creatures of sulfur and iodine in approximately the quantities required by land creatures, emerged and helped bolster the hypothesis.[42]

In 1971 microbiologist Lynn Margulis joined Lovelock in the effort of fleshing out the initial hypothesis into scientifically testable concepts, contributing her knowledge about how microbes affect the atmosphere and the different layers in the surface of the planet.[4] Margulis had earlier drawn criticism from the scientific community with her theory on the origin of eukaryotic organelles and her contributions to endosymbiotic theory, both now widely accepted. She dedicated the last of eight chapters in her book, The Symbiotic Planet, to Gaia. However, she objected to the widespread personification of Gaia and stressed that Gaia is "not an organism", but "an emergent property of interaction among organisms". She defined Gaia as "the series of interacting ecosystems that compose a single huge ecosystem at the Earth's surface. Period". The book's most memorable "slogan" was actually quipped by a student of Margulis's: "Gaia is just symbiosis as seen from space".

James Lovelock called his first proposal the Gaia hypothesis but has also used the term Gaia theory. Lovelock states that the initial formulation was based on observation, but still lacked a scientific explanation. The Gaia hypothesis has since been supported by a number of scientific experiments[43] and provided a number of useful predictions.[44] In fact, wider research proved the original hypothesis wrong, in the sense that it is not life alone but the whole Earth system that does the regulating.[12]

First Gaia conference

In 1985, the first public symposium on the Gaia hypothesis, Is The Earth A Living Organism? was held at University of Massachusetts Amherst, August 1–6.[45] The principal sponsor was the National Audubon Society. Speakers included James Lovelock, George Wald, Mary Catherine Bateson, Lewis Thomas, John Todd, Donald Michael, Christopher Bird, Thomas Berry, David Abram, Michael Cohen, and William Fields. Some 500 people attended.[46]

Second Gaia conference

In 1988, climatologist Stephen Schneider organised a conference of the American Geophysical Union: the first Chapman Conference on Gaia, held in San Diego, California on March 7, 1988.[47]

During the "philosophical foundations" session of the conference, David Abram spoke on the influence of metaphor in science, and of the Gaia hypothesis as offering a new and potentially game-changing metaphorics, while James Kirchner criticised the Gaia hypothesis for its imprecision. Kirchner claimed that Lovelock and Margulis had not presented one Gaia hypothesis, but four:
  • CoEvolutionary Gaia: that life and the environment had evolved in a coupled way. Kirchner claimed that this was already accepted scientifically and was not new.
  • Homeostatic Gaia: that life maintained the stability of the natural environment, and that this stability enabled life to continue to exist.
  • Geophysical Gaia: that the Gaia hypothesis generated interest in geophysical cycles and therefore led to interesting new research in terrestrial geophysical dynamics.
  • Optimising Gaia: that Gaia shaped the planet in a way that made it an optimal environment for life as a whole. Kirchner claimed that this was not testable and therefore was not scientific.
Of Homeostatic Gaia, Kirchner recognised two alternatives. "Weak Gaia" asserted that life tends to make the environment stable for the flourishing of all life. "Strong Gaia", according to Kirchner, asserted that life tends to make the environment stable in order to enable the flourishing of all life. Strong Gaia, Kirchner claimed, was untestable and therefore not scientific.[48]

Lovelock and other Gaia-supporting scientists, however, attempted to counter the claim that the hypothesis is not scientific because it is impossible to test it by controlled experiment. For example, against the charge that Gaia was teleological, Lovelock and Andrew Watson offered the Daisyworld model (and its modifications, above) as evidence against most of these criticisms.[20] Lovelock said that the Daisyworld model "demonstrates that self-regulation of the global environment can emerge from competition amongst types of life altering their local environment in different ways".[49]
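The regulation Daisyworld demonstrates is straightforward to reproduce numerically. The sketch below is a minimal implementation of the Watson–Lovelock model using the commonly cited parameter values (solar flux 917 W/m², daisy albedos 0.25 and 0.75, bare-ground albedo 0.5, death rate 0.3); it is an illustration of the published model's structure, not a substitute for it.

```python
SIGMA = 5.67e-8            # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0                  # Daisyworld's solar flux (W m^-2)
Q = 2.06e9                 # heat-redistribution parameter (K^4)
A_GROUND, A_WHITE, A_BLACK = 0.5, 0.75, 0.25
GAMMA = 0.3                # daisy death rate

def growth(T):
    """Parabolic growth rate: optimum at 295.5 K, zero outside ~278-313 K."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - T) ** 2)

def step(aw, ab, L, dt=0.01):
    """Advance the white/black daisy area fractions one Euler step at luminosity L."""
    ag = 1.0 - aw - ab                       # bare ground fraction
    A = ag * A_GROUND + aw * A_WHITE + ab * A_BLACK
    Te4 = S * L * (1.0 - A) / SIGMA          # (planetary effective temperature)^4
    Tw = (Q * (A - A_WHITE) + Te4) ** 0.25   # white daisies run cooler
    Tb = (Q * (A - A_BLACK) + Te4) ** 0.25   # black daisies run warmer
    aw += dt * aw * (ag * growth(Tw) - GAMMA)
    ab += dt * ab * (ag * growth(Tb) - GAMMA)
    return max(aw, 0.01), max(ab, 0.01), Te4 ** 0.25

def equilibrium(L, aw=0.01, ab=0.01, steps=5000):
    """Integrate to steady state from small seed populations."""
    for _ in range(steps):
        aw, ab, Te = step(aw, ab, L)
    return aw, ab, Te
```

At L = 1.0 the planet settles near the daisies' optimum temperature (about 295 K), even though the same planet without daisies would equilibrate several degrees warmer; sweeping L while carrying the populations forward reproduces the flat temperature plateau of the original paper.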

Lovelock was careful to present a version of the Gaia hypothesis that made no claim that Gaia intentionally or consciously maintained the complex balance in her environment that life needed to survive. It would appear that the claim that Gaia acts "intentionally" was a metaphoric statement in his popular initial book and was not meant to be taken literally. This new statement of the Gaia hypothesis was more acceptable to the scientific community. Most accusations of teleology ceased following this conference.

Third Gaia conference

By the time of the 2nd Chapman Conference on the Gaia Hypothesis, held at Valencia, Spain, on 23 June 2000,[50] the situation had changed significantly. Rather than discussing Gaian teleological views, or "types" of Gaia hypotheses, the focus was upon the specific mechanisms by which basic short-term homeostasis was maintained within a framework of significant long-term evolutionary structural change.

The major questions were:[51]
  1. "How has the global biogeochemical/climate system called Gaia changed in time? What is its history? Can Gaia maintain stability of the system at one time scale but still undergo vectorial change at longer time scales? How can the geologic record be used to examine these questions?"
  2. "What is the structure of Gaia? Are the feedbacks sufficiently strong to influence the evolution of climate? Are there parts of the system determined pragmatically by whatever disciplinary study is being undertaken at any given time or are there a set of parts that should be taken as most true for understanding Gaia as containing evolving organisms over time? What are the feedbacks among these different parts of the Gaian system, and what does the near closure of matter mean for the structure of Gaia as a global ecosystem and for the productivity of life?"
  3. "How do models of Gaian processes and phenomena relate to reality and how do they help address and understand Gaia? How do results from Daisyworld transfer to the real world? What are the main candidates for "daisies"? Does it matter for Gaia theory whether we find daisies or not? How should we be searching for daisies, and should we intensify the search? How can Gaian mechanisms be investigated using process models or global models of the climate system that include the biota and allow for chemical cycling?"
In 1997, Tyler Volk argued that a Gaian system is almost inevitably produced as a result of an evolution towards far-from-equilibrium homeostatic states that maximise entropy production (MEP). Kleidon (2004) agreed, stating: "...homeostatic behavior can emerge from a state of MEP associated with the planetary albedo"; "...the resulting behavior of a biotic Earth at a state of MEP may well lead to near-homeostatic behavior of the Earth system on long time scales, as stated by the Gaia hypothesis". Staley (2002) has similarly proposed "...an alternative form of Gaia theory based on more traditional Darwinian principles... In [this] new approach, environmental regulation is a consequence of population dynamics, not Darwinian selection. The role of selection is to favor organisms that are best adapted to prevailing environmental conditions. However, the environment is not a static backdrop for evolution, but is heavily influenced by the presence of living organisms. The resulting co-evolving dynamical process eventually leads to the convergence of equilibrium and optimal conditions".

Fourth Gaia conference

A fourth international conference on the Gaia hypothesis, sponsored by the Northern Virginia Regional Park Authority and others, was held in October 2006 at the Arlington, VA campus of George Mason University.[52]

Martin Ogle, chief naturalist for NVRPA and long-time Gaia hypothesis proponent, organized the event. Lynn Margulis, Distinguished University Professor in the Department of Geosciences, University of Massachusetts-Amherst, and long-time advocate of the Gaia hypothesis, was a keynote speaker. Among the many other speakers were Tyler Volk, co-director of the Program in Earth and Environmental Science at New York University; Donald Aitken, principal of Donald Aitken Associates; Thomas Lovejoy, president of the Heinz Center for Science, Economics and the Environment; Robert Correll, senior fellow, Atmospheric Policy Program, American Meteorological Society; and noted environmental ethicist J. Baird Callicott.

This conference approached the Gaia hypothesis as both science and metaphor as a means of understanding how we might begin addressing 21st century issues such as climate change and ongoing environmental destruction.

Criticism

After initially being largely ignored by most scientists (from 1969 until 1977), the Gaia hypothesis was thereafter criticized for a period by a number of scientists, such as Ford Doolittle,[53] Richard Dawkins[54] and Stephen Jay Gould.[47] Lovelock has said that because his hypothesis is named after a Greek goddess, and championed by many non-scientists,[41] the Gaia hypothesis was interpreted as a neo-Pagan religion. Many scientists also criticised the approach taken in his popular book Gaia, a New Look at Life on Earth for being teleological—a belief that things are purposeful and aimed towards a goal. Responding to this critique in 1990, Lovelock stated, "Nowhere in our writings do we express the idea that planetary self-regulation is purposeful, or involves foresight or planning by the biota".

Stephen Jay Gould criticised Gaia as being "a metaphor, not a mechanism."[55] He wanted to know the actual mechanisms by which self-regulating homeostasis was achieved. In his defense of Gaia, David Abram argues that Gould overlooked the fact that "mechanism", itself, is a metaphor — albeit an exceedingly common and often unrecognized metaphor — one which leads us to consider natural and living systems as though they were machines organized and built from outside (rather than as autopoietic or self-organizing phenomena). Mechanical metaphors, according to Abram, lead us to overlook the active or agential quality of living entities, while the organismic metaphorics of the Gaia hypothesis accentuate the active agency of both the biota and the biosphere as a whole.[56][57] With regard to causality in Gaia, Lovelock argues that no single mechanism is responsible, that the connections between the various known mechanisms may never be known, that this is accepted in other fields of biology and ecology as a matter of course, and that specific hostility is reserved for his own hypothesis for other reasons.[58]

Aside from clarifying his language and understanding of what is meant by a life form, Lovelock himself ascribes most of the criticism to a lack of understanding of non-linear mathematics by his critics, and a linearizing form of greedy reductionism in which all events have to be immediately ascribed to specific causes before the fact. He also states that most of his critics are biologists but that his hypothesis includes experiments in fields outside biology, and that some self-regulating phenomena may not be mathematically explainable.[58]

Natural selection and evolution

Lovelock has suggested that global biological feedback mechanisms could evolve by natural selection, stating that organisms that improve their environment for their survival do better than those that damage their environment. However, in the early 1980s, W. Ford Doolittle and Richard Dawkins separately argued against Gaia. Doolittle argued that nothing in the genome of individual organisms could provide the feedback mechanisms proposed by Lovelock, and therefore the Gaia hypothesis proposed no plausible mechanism and was unscientific.[53] Dawkins meanwhile stated that for organisms to act in concert would require foresight and planning, which is contrary to the current scientific understanding of evolution.[54] Like Doolittle, he also rejected the possibility that feedback loops could stabilize the system.

Lynn Margulis, a microbiologist who collaborated with Lovelock in supporting the Gaia hypothesis, argued in 1999 that "Darwin's grand vision was not wrong, only incomplete. In accentuating the direct competition between individuals for resources as the primary selection mechanism, Darwin (and especially his followers) created the impression that the environment was simply a static arena". She wrote that the composition of the Earth's atmosphere, hydrosphere, and lithosphere are regulated around "set points" as in homeostasis, but those set points change with time.[59]

Evolutionary biologist W. D. Hamilton called the concept of Gaia Copernican, adding that it would take another Newton to explain how Gaian self-regulation takes place through Darwinian natural selection.[32][better source needed]

Criticism in the 21st century

The Gaia hypothesis continues to be broadly skeptically received by the scientific community. For instance, arguments both for and against it were laid out in the journal Climatic Change in 2002 and 2003. A significant argument raised against it is the existence of many examples where life has had a detrimental or destabilising effect on the environment rather than acting to regulate it.[8][9] Several recent books have criticised the Gaia hypothesis, expressing views ranging from "... the Gaia hypothesis lacks unambiguous observational support and has significant theoretical difficulties"[60] to "Suspended uncomfortably between tainted metaphor, fact, and false science, I prefer to leave Gaia firmly in the background"[10] to "The Gaia hypothesis is supported neither by evolutionary theory nor by the empirical evidence of the geological record".[61] The CLAW hypothesis,[17] initially suggested as a potential example of direct Gaian feedback, has subsequently been found to be less credible as understanding of cloud condensation nuclei has improved.[62] In 2009 the Medea hypothesis was proposed: that life has highly detrimental (biocidal) impacts on planetary conditions, in direct opposition to the Gaia hypothesis.[63]

In a recent book-length evaluation of the Gaia hypothesis considering modern evidence from across the various relevant disciplines, the author, Toby Tyrrell, concluded that: "I believe Gaia is a dead end. Its study has, however, generated many new and thought provoking questions. While rejecting Gaia, we can at the same time appreciate Lovelock's originality and breadth of vision, and recognise that his audacious concept has helped to stimulate many new ideas about the Earth, and to champion a holistic approach to studying it".[64] Elsewhere he presents his conclusion "The Gaia hypothesis is not an accurate picture of how our world works".[65] This statement needs to be understood as referring to the "strong" and "moderate" forms of Gaia—that the biota obeys a principle that works to make Earth optimal (strength 5) or favourable for life (strength 4) or that it works as a homeostatic mechanism (strength 3). The latter is the "weakest" form of Gaia that Lovelock has advocated. Tyrrell rejects it. However, he finds that the two weaker forms of Gaia—Coevolutionary Gaia and Influential Gaia, which assert that there are close links between the evolution of life and the environment and that biology affects the physical and chemical environment—are both credible, but that it is not useful to use the term "Gaia" in this sense.
