What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies at Trinity College, Dublin.
The lectures attracted an audience of about 400, who were warned "that
the subject-matter was a difficult one and that the lectures could not
be termed popular, even though the physicist’s most dreaded weapon,
mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the
events in space and time which take place within the spatial boundary of
a living organism be accounted for by physics and chemistry?"
In the book, Schrödinger introduced the idea of an "aperiodic
crystal" that contained genetic information in its configuration of
covalent chemical bonds.
In the 1950s, this idea stimulated enthusiasm for discovering the
genetic molecule. Although the existence of some form of hereditary
information had been hypothesized since 1869, its role in reproduction
and its helical shape were still unknown at the time of Schrödinger's
lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed
as a well-reasoned theoretical prediction of what biologists should have
been looking for during their search for genetic material. Both James D. Watson[2] and Francis Crick, who jointly proposed the double helix structure of DNA based on X-ray diffraction experiments by Rosalind Franklin, credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial research.[3]
Background
The book is based on lectures delivered under the auspices of the Institute at Trinity College, Dublin,
in February 1943 and published in 1944. At that time DNA was not yet
accepted as the carrier of hereditary information; that acceptance came only after the Hershey–Chase experiment of 1952. Among the most successful branches of physics at the time were statistical physics and quantum mechanics, a theory that is itself deeply statistical in nature; Schrödinger was one of the founding fathers of quantum mechanics.
Max Delbrück's thinking about the physical basis of life was an important influence on Schrödinger.[4] However, long before the publication of What is Life?, geneticist and 1946 Nobel-prize winner H. J. Muller had in his 1922 article "Variation due to Change in the Individual Gene"[5]
already laid out all the basic properties of the "heredity molecule"
(then not yet known to be DNA) that Schrödinger was to re-derive in 1944
"from first principles" in What is Life? (including the
"aperiodicity" of the molecule), properties which Muller specified and
refined additionally in his 1929 article "The Gene As The Basis of Life"[6] and during the 1930s.[7] Moreover, H. J. Muller himself wrote in a 1960 letter to a journalist regarding What Is Life?
that whatever the book got right about the "hereditary molecule" had
already been published before 1944, and that the speculations original to
Schrödinger were the mistaken ones; Muller also named two famous geneticists (including
Delbrück) who knew every relevant pre-1944 publication and had been in
contact with Schrödinger before 1944. But DNA as the molecule of
heredity became topical only after Oswald Avery's
most important bacterial-transformation experiments in 1944. Before
these experiments, proteins were considered the most likely candidates.
Content
In
chapter I, Schrödinger explains that most physical laws on a large scale
are due to chaos on a small scale. He calls this principle
"order-from-disorder." As an example he mentions diffusion,
which can be modeled as a highly ordered process, but which is caused
by random movement of atoms or molecules. If the number of atoms is
reduced, the behaviour of a system becomes more and more random. He
states that life greatly depends on order and that a naïve physicist may
assume that the master code of a living organism has to consist of a
large number of atoms.
In chapters II and III, he summarizes what was known at the time
about the hereditary mechanism. Most importantly, he elaborates on the
role mutations play in evolution.
He concludes that the carrier of hereditary information has to be both
small in size and permanent in time, contradicting the naïve physicist's
expectation. This contradiction cannot be resolved by classical physics.
In chapter IV, Schrödinger presents molecules,
which are indeed stable even if they consist of only a few atoms, as
the solution. Even though molecules were known before, their stability
could not be explained by classical physics, but is due to the discrete
nature of quantum mechanics. Furthermore, mutations are directly linked to quantum leaps.
He continues to explain, in chapter V, that true solids, which are also permanent, are crystals.
The stability of molecules and crystals is due to the same principles
and a molecule might be called "the germ of a solid." On the other hand,
an amorphous solid, without crystalline structure, should be regarded as a liquid with a very high viscosity.
Schrödinger believes the heredity material to be a molecule, which
unlike a crystal does not repeat itself. He calls this an aperiodic
crystal. Its aperiodic nature allows it to encode an almost infinite
number of possibilities with a small number of atoms. He finally
compares this picture with the known facts and finds it in accordance
with them.
In chapter VI Schrödinger states:
…living matter, while not eluding the "laws of physics"
as established up to date, is likely to involve "other laws of physics"
hitherto unknown, which however, once they have been revealed, will form
just as integral a part of science as the former.
He knows that this statement is open to misconception and tries to
clarify it. The main principle involved with "order-from-disorder" is
the second law of thermodynamics, according to which entropy only increases in a closed system (such as the universe). Schrödinger explains that living matter evades the decay to thermodynamical equilibrium by homeostatically maintaining negative entropy (today this quantity is called information[8]) in an open system.
In chapter VII, he maintains that "order-from-order" is not
absolutely new to physics; in fact, it is even simpler and more
plausible. But nature follows "order-from-disorder", with some
exceptions such as the movement of the celestial bodies
and the behaviour of mechanical devices such as clocks. But even those
are influenced by thermal and frictional forces. The degree to which a
system functions mechanically or statistically depends on the
temperature. If heated, a clock ceases to function, because it melts.
Conversely, if the temperature approaches absolute zero,
any system behaves more and more mechanically. Some systems approach
this mechanical behaviour rather quickly, with room temperature already
being practically equivalent to absolute zero.
Schrödinger concludes this chapter and the book with philosophical speculations on determinism, free will, and the mystery of human consciousness.
He attempts to "see whether we cannot draw the correct
non-contradictory conclusion from the following two premises: (1) My
body functions as a pure mechanism according to Laws of Nature; and (2)
Yet I know, by incontrovertible direct experience, that I am directing
its motions, of which I foresee the effects, that may be fateful and
all-important, in which case I feel and take full responsibility for them. The only
possible inference from these two facts is, I think, that I – I in the
widest meaning of the word, that is to say, every conscious mind that
has ever said or felt 'I' – am the person, if any, who controls the
'motion of the atoms' according to the Laws of Nature." Schrödinger then
states that this insight is not new and that Upanishads considered this
insight of "ATHMAN = BRAHMAN" to "represent quintessence of deepest
insights into the happenings of the world". Schrödinger rejects the idea
that the source of consciousness should perish with the body because he
finds the idea "distasteful". He also rejects the idea that there are
multiple immortal souls that can exist without the body because he
believes that consciousness is nevertheless highly dependent on the
body. Schrödinger writes that, to reconcile the two premises,
The
only possible alternative is simply to keep to the immediate experience
that consciousness is a singular of which the plural is unknown; that
there is only one thing and that what seems to be a plurality is merely a
series of different aspects of this one thing…
Any intuitions that consciousness is plural, he says, are illusions. Schrödinger is sympathetic to the Hindu concept of Brahman, by which each individual's consciousness is only a manifestation of a unitary consciousness pervading the universe
— which corresponds to the Hindu concept of God. Schrödinger concludes
that "…'I' am the person, if any, who controls the 'motion of the atoms'
according to the Laws of Nature." However, he also qualifies the
conclusion as "necessarily subjective" in its "philosophical
implications". In the final paragraph, he points out that what is meant
by "I" is not the collection of experienced events but "namely the
canvas upon which they are collected." If a hypnotist succeeds in
blotting out all earlier reminiscences, he writes, there would be no
loss of personal existence — "Nor will there ever be."[9]
Schrödinger's "paradox"
In a world governed by the second law of thermodynamics, all isolated systems
are expected to approach a state of maximum disorder. Since life
approaches and maintains a highly ordered state, some argue that this
seems to violate the aforementioned second law, implying that there is a
paradox. However, since the biosphere
is not an isolated system, there is no paradox. The increase of order
inside an organism is more than paid for by an increase in disorder
outside this organism by the loss of heat into the environment. By this
mechanism, the second law is obeyed, and life maintains a highly ordered
state, which it sustains by causing a net increase in disorder in the
Universe. In order to increase the complexity on Earth, as life does, free energy is needed and in this case is provided by the Sun.[10][11]
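A short numerical sketch (with arbitrary illustrative values, not measurements) makes the bookkeeping explicit: an organism may lower its own entropy as long as the heat it exports raises the entropy of its surroundings by at least as much.

```python
# Illustrative entropy bookkeeping for an organism treated as an open system.
# The numbers below are arbitrary placeholders, not measured values.

def total_entropy_change(delta_s_organism, heat_released, t_environment):
    """Second-law bookkeeping: the organism may lower its own entropy
    (delta_s_organism < 0) as long as the heat it dumps into the
    environment raises the environment's entropy by at least as much."""
    delta_s_environment = heat_released / t_environment  # J/K
    return delta_s_organism + delta_s_environment

# An organism locally "orders" itself by 10 J/K while exporting 5 kJ of
# heat into surroundings at 300 K.
ds_total = total_entropy_change(delta_s_organism=-10.0,
                                heat_released=5_000.0,
                                t_environment=300.0)
print(f"Net entropy change of organism + environment: {ds_total:.2f} J/K")
# ~ +6.67 J/K: positive, so the second law is respected and no paradox arises.
```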
Deacon's first book, The Symbolic Species, focused on the evolution of human language.
In that book, Deacon notes that much of the mystery surrounding
language origins comes from a profound confusion on the nature of semiotic processes themselves. Accordingly, the focus of Incomplete Nature shifts from human origins to the origin of life and semiosis. Incomplete Nature can be viewed as a sizable contribution to the growing body of work positing that the problem of consciousness and the problem of the origin of life are inexorably linked.[1][2] Deacon tackles these two linked problems by going back to basics. The book expands upon the classical conceptions of work and information in order to give an account of ententionality that is consistent with eliminative materialism and yet does not seek to explain away or pass off as epiphenomenal the non-physical properties of life.
Constraints
A central thesis of the book is that absence can still be efficacious. Deacon makes the claim that just as the concept of zero revolutionized mathematics, thinking about life, mind, and other ententional phenomena in terms of constraints (i.e., what is absent) can similarly help us overcome the artificial dichotomy of the mind-body problem.
A good example of this concept is the hole that defines the hub of a
wagon wheel. The hole itself is not a physical thing, but rather a
source of constraint that helps to restrict the conformational
possibilities of the wheel's components, such that, on a global scale,
the property of rolling emerges. Constraints that produce emergent
phenomena cannot necessarily be understood by examining the make-up of a
pattern's constituents. Emergent phenomena are difficult to study because
their complexity does not necessarily decompose into parts. When a pattern
is broken down, the constraints are no longer at work; there is no hole, no
absence left to notice. Imagine a hub, the hole for an axle, that exists
only while the wheel is rolling; breaking the wheel apart will not show you
how the hub emerges.
Orthograde and contragrade
Deacon notes that the apparent patterns of causality exhibited by living systems seem to be in some ways the inverse of the causal patterns of non-living systems. In an attempt to find a solution to the philosophical problems associated with teleological explanations, Deacon returns to Aristotle's four causes and attempts to modernize them with thermodynamic concepts.
Orthograde changes are caused internally. They are
spontaneous changes. That is, orthograde changes are generated by the
spontaneous elimination of asymmetries in a thermodynamic system in
disequilibrium. Because orthograde changes are driven by the internal
geometry of a changing system, orthograde causes can be seen as
analogous to Aristotle's formal cause. More loosely, Aristotle's final cause can also be considered orthograde, because goal-oriented actions are caused from within.[3]
Contragrade changes are imposed from the outside. They are
non-spontaneous changes. Contragrade change is induced when one
thermodynamic system interacts with the orthograde changes of another
thermodynamic system. The interaction drives the first system into a
higher energy, more asymmetrical state. Contragrade changes do work.
Because contragrade changes are driven by external interactions with
another changing system, contragrade causes can be seen as analogous to Aristotle's efficient cause.[4]
Homeodynamics, morphodynamics, and teleodynamics
Much of the book is devoted to expanding upon the ideas of classical thermodynamics, with an extended discussion of how systems held consistently far from equilibrium can interact and combine to produce novel emergent properties.
Deacon defines three hierarchically nested levels of thermodynamic systems: Homeodynamic systems combine to produce morphodynamic systems which combine to produce teleodynamic systems. Teleodynamic systems can be further combined to produce higher orders of self organization.
Homeodynamics
Homeodynamic systems are essentially equivalent to classical thermodynamic systems
like a gas under pressure or solute in solution, but the term serves to
emphasize that homeodynamics is an abstract process that can be
realized in forms beyond the scope of classic thermodynamics.
For example, the diffuse brain activity normally associated with
emotional states can be considered to be a homeodynamic system because
there is a general state of equilibrium toward which its components (neural
activity) tend to distribute.[5]
In general, a homeodynamic system is any collection of components that
will spontaneously eliminate constraints by rearranging the parts until a
maximum entropy state (disorderliness) is achieved.
Morphodynamics
A
morphodynamic system consists of a coupling of two homeodynamic systems
such that the constraint dissipation of each complements the other,
producing macroscopic order out of microscopic interactions.
Morphodynamic systems require constant perturbation to maintain their
structure, so they are relatively rare in nature. The paradigm example
of a morphodynamic system is a Rayleigh–Bénard cell. Other common examples are snowflake formation, whirlpools and the stimulated emission of laser light.
Maximum entropy production: The organized structure of a morphodynamic system forms to facilitate maximal entropy production. In the case of a Rayleigh–Bénard cell,
heat at the base of the liquid produces an uneven distribution of high
energy molecules which will tend to diffuse towards the surface. As the
temperature of the heat source increases, density effects come into play. Simple diffusion can no longer dissipate energy as fast as it is added, and so the bottom of the liquid becomes hotter and more buoyant than the cooler, denser liquid at the top. The bottom of the liquid begins to rise, and the top begins to sink, producing convection currents.
Two systems: The significant heat differential on the
liquid produces two homeodynamic systems. The first is a diffusion
system, where high energy molecules on the bottom collide with lower
energy molecules on the top until the added kinetic energy from the heat
source is evenly distributed. The second is a convection system, where
the low density fluid on the bottom mixes with the high density fluid on
the top until the density becomes evenly distributed. The second system
arises when there is too much energy to be effectively dissipated by
the first, and once both systems are in place, they will begin to
interact.
Self organization: The convection creates currents in the
fluid that disrupt the pattern of heat diffusion from bottom to top.
Heat begins to diffuse into the denser areas of current, irrespective of
the vertical location of these denser portions of fluid. The areas of
the fluid where diffusion is occurring most rapidly will be the most
viscous because molecules are rubbing against each other in opposite
directions. The convection currents will shun these areas in favor of
parts of the fluid where they can flow more easily. And so the fluid
spontaneously segregates itself into cells where high energy, low
density fluid flows up from the center of the cell and cooler, denser
fluid flows down along the edges, with diffusion effects dominating in
the area between the center and the edge of each cell.
Synergy and constraint: What is notable about
morphodynamic processes is that order spontaneously emerges explicitly
because the ordered system that results is more efficient at increasing
entropy than a chaotic one. In the case of the Rayleigh–Bénard cell,
neither diffusion nor convection on their own will produce as much
entropy as both effects coupled together. When both effects are brought
into interaction, they constrain each other into a particular geometric
form because that form facilitates minimal interference between the two
processes. The orderly hexagonal form is stable as long as the energy
differential persists, and yet the orderly form more effectively
degrades the energy differential than any other form. This is why
morphodynamic processes in nature are usually so short-lived. They are
self-organizing, but also self-undermining.
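The transition from pure diffusion to Bénard-cell convection can be illustrated with a standard back-of-envelope criterion, the Rayleigh number; the sketch below uses approximate property values for water and is not drawn from Deacon's text.

```python
# Rough sketch: when does diffusion alone stop being enough and convection
# (a Benard cell) set in? The standard criterion is the Rayleigh number
# exceeding a critical value (~1708 for rigid top and bottom plates).
# Fluid properties below are approximate values for water and are assumptions.

g = 9.81          # gravitational acceleration, m/s^2
alpha = 2.1e-4    # thermal expansion coefficient, 1/K
nu = 1.0e-6       # kinematic viscosity, m^2/s
kappa = 1.4e-7    # thermal diffusivity, m^2/s
RA_CRITICAL = 1708

def rayleigh_number(delta_t, depth):
    """Dimensionless ratio of buoyant driving to diffusive damping."""
    return g * alpha * delta_t * depth**3 / (nu * kappa)

for delta_t in (0.1, 1.0, 5.0):                   # temperature difference, K
    ra = rayleigh_number(delta_t, depth=0.005)    # 5 mm fluid layer
    regime = "convection (morphodynamic order)" if ra > RA_CRITICAL else "pure diffusion"
    print(f"dT={delta_t:>4} K  Ra={ra:,.0f}  -> {regime}")
```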
Teleodynamics
A
teleodynamic system consists of coupling two morphodynamic systems such
that the self undermining quality of each is constrained by the other.
Each system prevents the other from dissipating all of the energy
available, and so long term organizational stability is obtained. Deacon
claims that we should pinpoint the moment when two morphodynamic
systems reciprocally constrain each other as the point when ententional qualities like function, purpose and normativity emerge.[6]
Autogenesis
Deacon
explores the properties of teleodynamic systems by describing a
chemically plausible model system called an autogen. Deacon emphasizes
that the specific autogen he describes is not a proposed description of
the first life form, but rather a description of the kinds of
thermodynamic synergies that the first living creature likely possessed.[7]
Autogen pg 339
Reciprocal catalysis: An autogen consists of two self-catalyzing cyclical morphodynamic chemical reactions, similar to a chemoton.
In one reaction, organic molecules react in a looped series, the
products of one reaction becoming the reactants for the next. This
looped reaction is self-amplifying, producing more and more reactants
until all the substrate is consumed. A side product of this reciprocally
catalytic loop is a lipid that can be used as a reactant in a second reaction. This second reaction creates a boundary (either a microtubule or some other closed capsid-like structure) that serves to contain the first reaction. The boundary limits diffusion; it keeps all of the necessary catalysts
in close proximity to each other. In addition, the boundary prevents
the first reaction from completely consuming all of the available
substrate in the environment.
The first self: Unlike an isolated morphodynamic process
whose organization rapidly eliminates the energy gradient necessary to
maintain its structure, a teleodynamic process is self-limiting and
self-preserving. The two reactions complement each other and ensure that
neither ever runs to equilibrium, that is, to completion, cessation, and death. So, in a teleodynamic system there will be structures that embody a preliminary sketch of a biological function. The internal reaction network functions to create the substrates
for the boundary reaction, and the boundary reaction functions to
protect and constrain the internal reaction network. Either process in
isolation would be abiotic but together they create a system with a normative status dependent on the functioning of its component parts.
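As a rough illustration only (a toy model, not Deacon's formal treatment and not a chemically realistic scheme), the reciprocal coupling described above can be sketched as two rate equations in which an autocatalytic loop produces boundary material that, in turn, throttles the loop's own substrate uptake. All rate constants are arbitrary assumptions.

```python
# Toy sketch of the autogen idea: an autocatalytic loop consumes substrate
# and, as a side product, builds boundary material that slows further
# substrate uptake. Rate constants are arbitrary, for illustration only.

def simulate(steps=2000, dt=0.01):
    substrate, catalyst, boundary = 10.0, 0.1, 0.0
    for _ in range(steps):
        # Boundary material limits how fast substrate can reach the loop.
        uptake = substrate * catalyst / (1.0 + boundary)
        growth = 0.8 * uptake          # autocatalysis: more catalyst is made
        capsid = 0.2 * uptake          # side product: boundary/capsid material
        substrate += dt * (-uptake)
        catalyst += dt * (growth - 0.05 * catalyst)   # catalyst slowly decays
        boundary += dt * (capsid - 0.02 * boundary)   # boundary slowly decays
    return substrate, catalyst, boundary

print("substrate, catalyst, boundary after run:",
      [round(x, 3) for x in simulate()])
# The boundary term divides the uptake rate, so the coupled system draws the
# substrate down more gradually than an unconstrained autocatalytic burst
# would: each process constrains the other's runaway tendency.
```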
Work
As with other concepts in the book, in his discussion of work Deacon seeks to generalize the Newtonian
conception of work such that the term can be used to describe and
differentiate mental phenomena - to describe "that which makes
daydreaming effortless but metabolically equivalent problem solving
difficult."[8]
Work is generally described as "activity that is necessary to overcome
resistance to change. Resistance can be either active or passive, and
so work can be directed towards enacting change that wouldn't otherwise
occur or preventing change that would happen in its absence."[9]
Using the terminology developed earlier in the book, work can be
considered to be "the organization of differences between orthograde
processes such that a locus of contragrade process is created. Or, more
simply, work is a spontaneous change inducing a non-spontaneous change
to occur."[10]
Thermodynamic work
A
thermodynamic system's capacity to do work depends less upon the total
energy of the system and more upon the geometric distribution of its
components. A glass of water at 20 degrees Celsius will have the same
amount of energy as a glass divided in half with the top fluid
at 30 degrees and the bottom at 10, but only in the second glass will
the top half have the capacity to do work upon the bottom. This is
because work occurs at both macroscopic and microscopic
levels. Microscopically, there is constant work being performed on one
molecule by another when they collide. But the potential for this
microscopic work to additively sum to macroscopic work depends on there
being an asymmetric distribution of particle speeds, so that the average
collision pushes in a focused direction. Microscopic work is necessary but not sufficient for macroscopic work. A global property of asymmetric distribution is also required.
Morphodynamic work
By
recognizing that asymmetry is a general property of work (work is
done as asymmetric systems spontaneously tend towards symmetry), Deacon
abstracts the concept of work and applies it to systems whose symmetries
are vastly more complex than those covered by classical thermodynamics. In a morphodynamic system, the tendency towards symmetry produces not global equilibrium, but a complex geometric form like a hexagonal Benard cell or the resonant frequency of a flute. This tendency towards convolutedly symmetric forms can be harnessed to do work on other morphodynamic systems, if the systems are properly coupled.
Resonance example: A good example of morphodynamic work is the induced resonance
that can be observed by singing or playing a flute next to a string
instrument like a harp or guitar. The vibrating air emitted from the
flute will interact with the taut strings. If any of the strings are
tuned to a resonant frequency that matches the note being played, they too will begin to vibrate and emit sound.
Contragrade change: When energy is added to the flute by
blowing air into it, there is a spontaneous (orthograde) tendency for
the system to dissipate the added energy by inducing the air within the
flute to vibrate at a specific frequency. This orthograde morphodynamic
form generation can be used to induce contragrade change in the system
coupled to it: the taut string. Playing the flute does work on the
string by causing it to enter a high energy state that could not be
reached spontaneously in an uncoupled state.
Structure and form: Importantly, this is not just the
macro-scale propagation of random micro-vibrations from one system to
another. The global geometric structure of the system is essential. The
total energy transferred from the flute to the string matters far less
than the patterns it takes in transit. That is, the amplitude of the coupled note is irrelevant; what matters is its frequency.
Notes that have a higher or lower frequency than the resonant frequency
of the string will not be able to do morphodynamic work.
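The flute-and-string picture can be made concrete with the textbook formula for a driven, damped oscillator, whose steady-state response depends sharply on how close the driving frequency is to the resonant frequency. The parameter values below are arbitrary assumptions chosen for illustration.

```python
# The string as a driven, damped oscillator. Steady-state amplitude when
# driven at angular frequency w is
#   A(w) = (F0/m) / sqrt((w0**2 - w**2)**2 + (gamma*w)**2)
# so what matters is proximity to the string's resonant frequency w0,
# not the raw power of the drive.

from math import sqrt, pi

def amplitude(f_drive, f_resonant=440.0, gamma=5.0, force_per_mass=1.0):
    w, w0 = 2 * pi * f_drive, 2 * pi * f_resonant
    return force_per_mass / sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

ref = amplitude(440.0)
for f in (220.0, 430.0, 440.0, 450.0, 880.0):
    print(f"drive {f:6.1f} Hz -> relative string response {amplitude(f)/ref:.4f}")
# Only notes near 440 Hz do appreciable morphodynamic "work" on the string.
```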
Teleodynamic work
Work is generally defined to be the interaction of two orthograde changing systems such that contragrade change is produced.[11]
In teleodynamic systems, the spontaneous orthograde tendency is not to
equilibriate (as in homeodynamic systems), nor to self simplify (as in
morphodynamic systems) but rather to tend towards self-preservation.
Living organisms spontaneously tend to heal, to reproduce
and to pursue resources towards these ends. Teleodynamic work acts on
these tendencies and pushes them in a contragrade, non-spontaneous
direction.
Reading
exemplifies the logic of teleodynamic work. A passive source of
cognitive constraints is potentially provided by the letterforms on a
page. A literate person has structured his or her sensory and cognitive
habits to use such letterforms to reorganize the neural activities
constituting thinking. This enables us to do teleodynamic work to shift
mental tendencies away from those that are spontaneous (such as
daydreaming) to those that are constrained by the text.
Evolution as work: Natural selection, or perhaps more accurately, adaptation,
can be considered to be a ubiquitous form of teleodynamic work. The
orthograde self-preservation and reproduction tendencies of individual
organisms tend to undermine those same tendencies in conspecifics. This
competition produces a constraint that tends to mold organisms into
forms that are more adapted to their environments – forms that would
otherwise not spontaneously persist.
For example, in a population of New Zealand wrybill
who make a living by searching for grubs under rocks, those that have a
bent beak gain access to more calories. Those with bent beaks are able
to better provide for their young, and at the same time they remove a
disproportionate quantity of grubs from their environment, making it
more difficult for those with straight beaks to provide for their own
young. Throughout their lives, all the wrybills
in the population do work to structure the form of the next generation.
The increased efficiency of the bent beak causes that morphology to
dominate the next generation. Thus an asymmetry of beak shape
distribution is produced in the population - an asymmetry produced by
teleodynamic work.
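The wrybill example can be caricatured with a one-line replicator update (the fitness values are invented for illustration and are not field data): a small, consistent payoff advantage for bent beaks is enough to skew the beak-shape distribution over generations.

```python
# Toy replicator sketch of the wrybill example: two beak shapes with
# slightly different foraging payoffs (made-up values).

def next_generation(p_bent, fitness_bent=1.05, fitness_straight=1.00):
    """Proportion of bent-beaked birds after one round of selection."""
    mean_fitness = p_bent * fitness_bent + (1 - p_bent) * fitness_straight
    return p_bent * fitness_bent / mean_fitness

p = 0.01   # bent beaks start rare
for generation in range(301):
    if generation % 50 == 0:
        print(f"generation {generation:3d}: bent-beak frequency = {p:.3f}")
    p = next_generation(p)
# A ~5% advantage carries the bent-beak form from 1% to near fixation,
# producing the asymmetric distribution described above.
```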
Thought as work: Mental problem solving can also be
considered teleodynamic work. Thought forms are spontaneously generated,
and the task of problem solving is that of molding those forms to fit
the context of the problem at hand. Deacon makes the link between
evolution as teleodynamic work and thought as teleodynamic work
explicit. "The experience of being sentient is what it feels like to be evolution."[12]
Emergent causal powers
By conceiving of work in this way, Deacon claims "we can begin to discern a basis for a form of causal openness in the universe."[13]
While increases in complexity in no way alter the laws of physics, by
juxtaposing systems together, pathways of spontaneous change can be made
available that were inconceivably improbable prior to the systems'
coupling. The causal power of any complex living system lies not solely
in the underlying quantum mechanics
but also in the global arrangement of its components. A careful
arrangement of parts can constrain possibilities such that phenomena
that were formerly impossibly rare can become improbably common.
Information
One of the central purposes of Incomplete Nature is to articulate a theory of biological information. The first formal theory of information was articulated by Claude Shannon in 1948 in his work A Mathematical Theory of Communication. Shannon's work is widely credited with ushering in the information age, but somewhat paradoxically, it was completely silent on questions of meaning and reference, i.e., what the information is about.
As an engineer, Shannon was concerned with the challenge of reliably
transmitting a message from one location to another. The meaning and
content of the message was largely irrelevant. So, while Shannon
information theory has been essential for the development of devices
like computers,
it has left open many philosophical questions regarding the nature of
information. Incomplete Nature seeks to answer some of these questions.
Shannon information
Shannon's key insight was to recognize a link between entropy and information. Entropy
is often defined as a measurement of disorder, or randomness, but this
can be misleading. For Shannon's purposes, the entropy of a system is a
measure of the number of possible states that the system has the capacity to be in.
Any one of these potential states can constitute a message. For
example, a typewritten page can bear as many different messages as there
are different combinations of characters that can be arranged on the
page. The information content of a message can only be understood
against the background context of all of the messages that could have
been sent, but weren't. Information is produced by a reduction of
entropy in the message medium.
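A minimal sketch of the measure itself (standard Shannon entropy, not specific to Deacon's argument):

```python
# The information a message medium can carry depends on how many distinct
# messages it could have borne and with what probabilities.

from math import log2

def shannon_entropy(probabilities):
    """Entropy in bits of a source with the given symbol probabilities."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased coin carries less,
# because fewer of its possible "messages" are surprising.
print(shannon_entropy([0.5, 0.5]))    # 1.0 bit
print(shannon_entropy([0.9, 0.1]))    # ~0.469 bits
# A page of 26 equiprobable letters carries log2(26) ~ 4.7 bits per character;
# constraining which characters can appear (reducing the medium's entropy)
# is what makes one particular message informative.
print(shannon_entropy([1/26] * 26))   # ~4.700 bits
```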
Three nested conceptions of information
Boltzmann entropy
Shannon's information based conception of entropy should be distinguished from the more classic thermodynamic conception of entropy developed by Ludwig Boltzmann
and others at the end of the nineteenth century. While Shannon entropy
is static and has to do with the set of all possible messages/states
that a signal-bearing system might take, Boltzmann entropy has to do
with the tendency of all dynamic systems to tend towards equilibrium.
That is, there are many more ways for a collection of particles to be
well mixed than to be segregated based on velocity, mass, or any other
property. Boltzmann entropy is central to the theory of work developed
earlier in the book because entropy dictates the direction in which a
system will spontaneously tend.
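For comparison, a companion sketch of Boltzmann's statistical entropy, S = k_B ln W, where W counts the microstates compatible with a macrostate; the particles-in-a-box example below is a standard illustration, not drawn from the book.

```python
# A "well mixed" macrostate of particles in a box has vastly more
# microstates than a segregated one, which is why isolated systems
# spontaneously drift towards it.

from math import comb, log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(n_particles, n_left):
    """Entropy of the macrostate 'n_left of n_particles sit in the left half'."""
    return K_B * log(comb(n_particles, n_left))

print(f"All 100 particles on one side: S = {boltzmann_entropy(100, 100):.2e} J/K")
print(f"Evenly mixed (50/50):          S = {boltzmann_entropy(100, 50):.2e} J/K")
# 0.00e+00 versus ~9.2e-22 J/K: the mixed macrostate is overwhelmingly favoured.
```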
Significant information
Deacon's
addition to Shannon information theory is to propose a method for
describing not just how a message is transmitted, but also how it is
interpreted. Deacon weaves together Shannon entropy and Boltzmann
entropy in order to develop a theory of interpretation based in
teleodynamic work. Interpretation is inherently normative. Data becomes
information when it has significance for its interpreter. Thus
interpretive systems are teleodynamic - the interpretive process is
designed to perpetuate itself. "The interpretation of something as
information indirectly reinforces the capacity to do this again."
3D representation of a living cell during the process of mitosis, example of an autopoietic system
The term autopoiesis (from Greek αὐτο- (auto-), meaning 'self', and ποίησις (poiesis), meaning 'creation, production') refers to a system capable of reproducing and maintaining itself. The term was introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela to define the self-maintaining chemistry of living cells. Since then the concept has also been applied to the fields of cognition, systems theory, and sociology.
The original definition can be found in Autopoiesis and Cognition: the Realization of the Living (1st edition 1973, 2nd 1980):[1]
Page 16: It was in these circumstances ... in which he analyzed Don Quixote's dilemma of whether to follow the path of arms (praxis, action) or the path of letters (poiesis, creation, production), I understood for the first time the power of the word "poiesis" and invented the word that we needed: autopoiesis.
This was a word without a history, a word that could directly mean what
takes place in the dynamics of the autonomy proper to living systems.
Page 78: An autopoietic machine is a machine organized (defined
as a unity) as a network of processes of production (transformation and
destruction) of components which: (i) through their interactions and
transformations continuously regenerate and realize the network of
processes (relations) that produced them; and (ii) constitute it (the
machine) as a concrete unity in space in which they (the components)
exist by specifying the topological domain of its realization as such a
network.[2]
Page 89: ... the space defined by an autopoietic system is
self-contained and cannot be described by using dimensions that define
another space. When we refer to our interactions with a concrete
autopoietic system, however, we project this system on the space of our
manipulations and make a description of this projection.
Meaning
Autopoiesis was originally presented as a system description that was said to define and explain the nature of living systems. A canonical example of an autopoietic system is the biological cell. The eukaryotic cell, for example, is made of various biochemical components such as nucleic acids and proteins, and is organized into bounded structures such as the cell nucleus, various organelles, a cell membrane and cytoskeleton. These structures, based on an external flow of molecules and energy, produce
the components which, in turn, continue to maintain the organized
bounded structure that gives rise to these components (not unlike a wave
propagating through a medium).
An autopoietic system is to be contrasted with an allopoietic
system, such as a car factory, which uses raw materials (components) to
generate a car (an organized structure) which is something other
than itself (the factory). However, if the system is extended from the
factory to include components in the factory's "environment", such as
supply chains, plant / equipment, workers, dealerships, customers,
contracts, competitors, cars, spare parts, and so on, then as a total
viable system it could be considered to be autopoietic.
Though others have often used the term as a synonym for self-organization,
Maturana himself stated he would "[n]ever use the notion of
self-organization ... Operationally it is impossible. That is, if the
organization of a thing changes, the thing changes".[3]
Moreover, an autopoietic system is autonomous and operationally closed,
in the sense that there are sufficient processes within it to maintain
the whole. Autopoietic systems are "structurally coupled" with their
medium, embedded in a dynamic of changes that can be described as sensory-motor coupling. This continuous dynamic is considered a rudimentary form of knowledge or cognition and can be observed throughout life-forms.
An application of the concept of autopoiesis to sociology can be found in Niklas Luhmann's Systems Theory, which was subsequently adapted by Bob Jessop in his studies of the capitalist state system. Marjatta Maula
adapted the concept of autopoiesis in a business context. The theory of
autopoiesis has also been applied in the context of legal systems by
not only Niklas Luhmann, but also Gunther Teubner.[4][5]
In the context of textual studies, Jerome McGann
argues that texts are "autopoietic mechanisms operating as
self-generating feedback systems that cannot be separated from those who
manipulate and use them".[6]
Citing Maturana and Varela, he defines an autopoietic system as "a
closed topological space that 'continuously generates and specifies its
own organization through its operation as a system of production of its
own components, and does this in an endless turnover of components'",
concluding that "Autopoietic systems are thus distinguished from
allopoietic systems, which are Cartesian and which 'have as the product
of their functioning something different from themselves'". "Coding and markup appear allopoietic",
McGann argues, but are generative parts of the system they serve to
maintain, and thus language and print or electronic technology are
autopoietic systems.[7]
In his discussion of Hegel, the philosopher Slavoj Žižek
argues, "Hegel is – to use today's terms – the ultimate thinker of
autopoiesis, of the process of the emergence of necessary features out
of chaotic contingency, the thinker of contingency's gradual
self-organisation, of the gradual rise of order out of chaos."[8]
Relation to complexity
Autopoiesis can be defined as the ratio between the complexity of a system and the complexity of its environment.[9]
This generalized view of
autopoiesis considers systems as self-producing not in terms of their
physical components, but in terms of their organization, which can be
measured in terms of information and complexity. In other words, we can
describe autopoietic systems as those producing more of their own
complexity than that produced by their environment.[10]
Relation to cognition
An extensive discussion of the connection of autopoiesis to cognition is provided by Thompson.[11]
The basic notion of autopoiesis as involving constructive interaction
with the environment is extended to include cognition. Initially,
Maturana defined cognition as behavior of an organism "with relevance to
the maintenance of itself".[12]
However, computer models that are self-maintaining but non-cognitive
have been devised, so some additional restrictions are needed, and the
suggestion is that the maintenance process, to be cognitive, involves
readjustment of the internal workings of the system in some metabolic process. On this basis it is claimed that autopoiesis is a necessary but not a sufficient condition for cognition.[13]
Thompson (p. 127) takes the view that this distinction may or may not
be fruitful, but what matters is that living systems involve autopoiesis
and (if it is necessary to add this point) cognition as well. It can be
noted that this definition of 'cognition' is restricted, and does not
necessarily entail any awareness or consciousness by the living system.
Relation to consciousness
The
connection of autopoiesis to cognition, or if necessary, of living
systems to cognition, is an objective assessment ascertainable by
observation of a living system.
One question that arises is about the connection between
cognition seen in this manner and consciousness. The separation of
cognition and consciousness recognizes that the organism may be unaware
of the substratum where decisions are made. What is the connection
between these realms? Thompson refers to this issue as the "explanatory
gap", and one aspect of it is the hard problem of consciousness, how and why we have qualia.[14]
A second question is whether autopoiesis can provide a bridge
between these concepts. Thompson discusses this issue from the
standpoint of enactivism.
An autopoietic cell actively relates to its environment. Its sensory
responses trigger motor behavior governed by autopoiesis, and this
behavior (it is claimed) is a simplified version of a nervous system
behavior. The further claim is that real-time interactions like this
require attention, and an implication of attention is awareness.[15]
Criticism
There
are multiple criticisms of the use of the term in both its original
context, as an attempt to define and explain the living, and its various
expanded usages, such as applying it to self-organizing systems in
general or social systems in particular.[16] Critics have argued that the term fails to define or explain living systems and that, because of the extreme language of self-referentiality it uses without any external reference, it is really an attempt to give substantiation to Maturana's radical constructivist or solipsistic epistemology,[17] or what Danilo Zolo[18][19]
has called instead a "desolate theology". An example is the assertion
by Maturana and Varela that "We do not see what we do not see and what
we do not see does not exist".[20] The autopoietic model, said Rod Swenson,[21]
is "miraculously decoupled from the physical world by its
progenitors ... (and thus) grounded on a solipsistic foundation that
flies in the face of both common sense and scientific knowledge".
Trapped light for optical computation (credit: Imperial College London)
By forcing light to go through a smaller gap than ever before, a research team at Imperial College London has taken a step toward computers based on light instead of electrons.
Light would be preferable for computing because it can carry
much-higher-density information, is much faster, and is more efficient
(generating little to no heat). But light beams don’t easily interact
with one other. So information on high-speed fiber-optic cables
(provided by your cable TV company, for example) currently has to be
converted (via a modem or other device) into slower signals (electrons
on wires or wireless signals) to allow for processing the data on
devices such as computers and smartphones.
Electron-microscope
image of an optical-computing nanofocusing device that is 25 nanometers
wide and 2 micrometers long, using grating couplers (vertical lines) to
interface with fiber-optic cables. (credit: Nielsen et al.,
2017/Imperial College London)
To overcome that limitation, the researchers used metamaterials to
squeeze light into a metal channel only 25 nanometers (billionths of a
meter) wide, increasing its intensity and allowing photons to interact
over the range of micrometers (millionths of meters) instead of
centimeters.*
That means optical computation that previously required a
centimeters-size device can now be realized on the micrometer (one
millionth of a meter) scale, bringing optical processing into the size
range of electronic transistors.
The results were published Thursday Nov. 30, 2017 in the journal Science.
* Normally, when two light beams cross each other, the individual
photons do not interact or alter each other, as two electrons do when
they meet. That means a long span of material is needed to gradually
accumulate the effect and make it useful. Here, a “plasmonic nanofocusing” waveguide is used, strongly confining light within a nonlinear organic polymer.
Abstract of Giant nonlinear response at a plasmonic nanofocus drives efficient four-wave mixing
Efficient optical frequency mixing typically must accumulate over
large interaction lengths because nonlinear responses in natural
materials are inherently weak. This limits the efficiency of mixing
processes owing to the requirement of phase matching. Here, we report
efficient four-wave mixing (FWM) over micrometer-scale interaction
lengths at telecommunications wavelengths on silicon. We used an
integrated plasmonic gap waveguide that strongly confines light within a
nonlinear organic polymer. The gap waveguide intensifies light by
nanofocusing it to a mode cross-section of a few tens of nanometers,
thus generating a nonlinear response so strong that efficient FWM
accumulates over wavelength-scale distances. This technique opens up
nonlinear optics to a regime of relaxed phase matching, with the
possibility of compact, broadband, and efficient frequency mixing
integrated with silicon photonics.
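The scale argument can be illustrated with a back-of-envelope calculation (the power and mode-area figures below are assumed round numbers, not the paper's reported values): nonlinear effects such as four-wave mixing grow with intensity, and intensity is power divided by mode area, so squeezing the mode from roughly a square micrometre down to a tens-of-nanometres gap shortens the required interaction length by a comparable factor.

```python
# Back-of-envelope scaling, not the paper's actual figures: squeezing the
# same guided power into a far smaller mode area raises the intensity, and
# with it the nonlinear response, by orders of magnitude.

power_w = 1e-3                         # 1 mW of guided light (assumed)
conventional_area_m2 = 1.0e-12         # ~1 um^2 dielectric waveguide mode (assumed)
plasmonic_area_m2 = 25e-9 * 50e-9      # ~25 nm x 50 nm gap mode (assumed)

i_conventional = power_w / conventional_area_m2
i_plasmonic = power_w / plasmonic_area_m2
enhancement = i_plasmonic / i_conventional

print(f"Intensity enhancement from nanofocusing: ~{enhancement:,.0f}x")
# For a roughly fixed intensity-length product, the interaction length can
# shrink by about the same factor: from centimetres toward micrometres.
length_cm = 1.0
print(f"Equivalent interaction length: ~{length_cm / enhancement * 1e4:.1f} micrometres")
```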
"Health 2.0" is a term introduced in the mid-2000s, as the subset of health care technologies mirroring the wider Web 2.0
movement. It has been defined variously as including social media,
user-generated content, and cloud-based and mobile technologies. Some
Health 2.0 proponents see these technologies as empowering patients to
have greater control over their own health care and diminishing medical paternalism. Critics of the technologies have expressed concerns about possible misinformation and violations of patient privacy.
History
Health 2.0 built on the possibilities for changing health care, which started with the introduction of eHealth in the mid-1990s following the emergence of the World Wide Web. In the mid-2000s, following the widespread adoption both of the Internet and of easy to use tools for communication, social networking, and self-publishing,
there was a spate of media attention to and increasing interest from
patients, clinicians, and medical librarians in using these tools for
health care and medical purposes.[1][2]
Early examples of Health 2.0 were the use of a specific set of Web tools (blogs, email list-servs, online communities, podcasts, search, tagging, Twitter, videos, wikis, and more) by actors in health care including doctors, patients, and scientists, using principles of open source
and user-generated content, and the power of networks and social
networks in order to personalize health care, to collaborate, and to
promote health education.[3]
Possible explanations why health care has generated its own "2.0" term
are the availability and proliferation of Health 2.0 applications across
health care in general, and the potential for improving public health in particular.[4]
Current use
While the "2.0" moniker was originally associated with concepts like collaboration, openness, participation, and social networking,[5] in recent years the term "Health 2.0" has evolved to mean the role of Saas and cloud-based
technologies, and their associated applications on multiple devices.
Health 2.0 describes the integration of these into much of general
clinical and administrative workflow in health care. As of 2014,
approximately 3,000 companies were offering products and services
matching this definition, with venture capital funding in the sector exceeding $2.3 billion in 2013.[6]
Definitions
The "traditional" definition of "Health 2.0" focused on technology as an enabler for care collaboration:
"The use of social software t-weight tools to promote collaboration
between patients, their caregivers, medical professionals, and other
stakeholders in health."[7]
In 2011, Indu Subaiya redefined Health 2.0[8] as the use in health care of new cloud, SaaS, mobile, and device technologies that are:
Adaptable technologies which easily allow other tools and
applications to link and integrate with them, primarily through use of
accessible APIs
Focused on the user experience, bringing in the principles of user-centered design
Data driven, in that they both create data and present data to the user in order to help improve decision making
This wider definition allows recognition of what is or what isn't a
Health 2.0 technology. Typically, enterprise-based, customized
client-server systems are not, while more open, cloud based systems fit
the definition. However, this line was blurring by 2011–2012 as more
enterprise vendors started to introduce cloud-based systems and native
applications for new devices like smartphones and tablets.
In addition, Health 2.0 has several competing terms, each with its own followers—if not exact definitions—including Connected Health, Digital Health, Medicine 2.0, and mHealth.
All of these support a goal of wider change to the health care system,
using technology-enabled system reform—usually changing the relationship
between patient and professional:
Personalized search that looks into the long tail but cares about the user experience
Communities that capture the accumulated knowledge of patients, caregivers, and clinicians, and explain it to the world
Intelligent tools for content delivery—and transactions
Better integration of data with content
Wider health system definitions
In
the late 2000s, several commentators used Health 2.0 as a moniker for a
wider concept of system reform, seeking a participatory process between
patient and clinician: "New concept of health care wherein all the
constituents (patients, physicians, providers, and payers) focus on
health care value (outcomes/price) and use competition at the medical
condition level over the full cycle of care as the catalyst for
improving the safety, efficiency, and quality of health care".[9]
Health 2.0 defines the combination of health data and health
information with (patient) experience, through the use of ICT, enabling
the citizen to become an active and responsible partner in his/her own
health and care pathway.[10]
Health 2.0 is participatory healthcare. Enabled by information,
software, and communities that we collect or create, we the patients can
be effective partners in our own healthcare, and we the people can
participate in reshaping the health system itself.[11]
Definitions of Medicine 2.0
appear to be very similar but typically include more scientific and
research aspects—Medicine 2.0: "Medicine 2.0 applications, services and
tools are Web-based services for health care consumers, caregivers,
patients, health professionals, and biomedical researchers, that use Web
2.0 technologies as well as semantic web and virtual reality tools, to
enable and facilitate specifically social networking, participation,
apomediation, collaboration, and openness within and between these user
groups."[12][13]
A systematic review by Tom Van de Belt, Lucien Engelen et al., published in JMIR, found 46 unique definitions of Health 2.0.[14]
Overview
Health 2.0 refers to the use of a diverse set of technologies including Connected Health, electronic medical records, mHealth, telemedicine, and the use of the Internet by patients themselves such as through blogs, Internet forums, online communities, patient to physician communication systems, and other more advanced systems.[15][16]
A key concept is that patients themselves should have greater insight
into, and control over, information generated about them. Additionally, Health
2.0 relies on the use of modern cloud and mobile-based technologies.
Much of the potential for change from Health 2.0 is facilitated
by combining technology driven trends such as Personal Health Records
with social networking —"[which] may lead to a powerful new generation
of health applications, where people share parts of their electronic
health records with other consumers and 'crowdsource' the collective
wisdom of other patients and professionals."[5] Traditional models of medicine had patient records (held on paper or a proprietary computer system) that could only be accessed by a physician or other medical professional.
Physicians acted as gatekeepers to this information, telling patients
test results when and if they deemed it necessary. Such a model operates
relatively well in situations such as acute care, where information
about specific blood results would be of little use to a lay person, or in general practice where results were generally benign. However, in the case of complex chronic diseases, psychiatric disorders,
or diseases of unknown etiology patients were at risk of being left
without well-coordinated care because data about them was stored in a
variety of disparate places and in some cases might contain the opinions
of healthcare professionals which were not to be shared with the
patient. Increasingly, medical ethics deems such actions to be medical paternalism, and they are discouraged in modern medicine.[17][18]
A hypothetical example demonstrates the increased engagement of a
patient operating in a Health 2.0 setting: a patient goes to see their primary care physician with a presenting complaint, having first ensured their own medical record
was up to date via the Internet. The treating physician might make a
diagnosis or send for tests, the results of which could be transmitted
directly to the patient's electronic medical record. If a second
appointment is needed, the patient will have had time to research what
the results might mean for them, what diagnoses may be likely, and may
have communicated with other patients who have had a similar set of
results in the past. On a second visit a referral might be made to a
specialist. The patient might have the opportunity to search for the
views of other patients on the best specialist to go to, and in
combination with their primary care physician decide whom to see. The
specialist gives a diagnosis along with a prognosis and potential
options for treatment. The patient has the opportunity to research these
treatment options and take a more proactive role in coming to a joint
decision with their healthcare provider. They can also choose to submit
more data about themselves, such as through a personalized genomics
service to identify any risk factors
that might improve or worsen their prognosis. As treatment commences,
the patient can track their health outcomes through a data-sharing
patient community to determine whether the treatment is having an effect
for them, and they can stay up to date on research opportunities and clinical trials for their condition. They also have the social support of communicating with other patients diagnosed with the same condition throughout the world.
Level of use of Web 2.0 in health care
Partly
due to weak definitions, the novelty of the endeavor and its nature as
an entrepreneurial (rather than academic) movement, little empirical evidence
exists to explain how much Web 2.0 is being used in general. While it
has been estimated that nearly one-third of the 100 million Americans
who have looked for health information online say that they or people
they know have been significantly helped by what they found,[19] this study considers only the broader use of the Internet for health management.
A study examining physician practices has suggested that a
segment of 245,000 physicians in the U.S. are using Web 2.0 for their
practice, indicating that use is beyond the stage of the early adopter with regard to physicians and Web 2.0.[20]
Use for professional development (doctors) and public health promotion (public health professionals and the general public). Example: how podcasts can be used on the move to increase total available educational time,[22] or the many applications of these tools to public health.[23] Users: all (medical professionals and public).
Collaboration and practice: Web 2.0 tools used in daily practice by medical professionals to find information and make decisions. Example: Google searches revealed the correct diagnosis in 15 out of 26 cases (58%, 95% confidence interval 38% to 77%) in a 2005 study.[24] Users: doctors and nurses.
Managing a particular disease: patients who use search tools to find out information about a particular condition. Example: patients have been shown to have different patterns of usage depending on whether they are newly diagnosed or managing a severe long-term illness; long-term patients are more likely to connect to a community in Health 2.0.[25] Users: public.
Sharing data for research: completing patient-reported outcomes and aggregating the data for personal and scientific research. Example: disease-specific communities for patients with rare conditions aggregate data on treatments, symptoms, and outcomes to improve their decision-making ability and carry out scientific research such as observational trials.[26] Users: all (medical professionals and public).
Criticism of the use of Web 2.0 in health care
Hughes et al. (2009) argue there are four major tensions represented in the literature on Health/Medicine 2.0. These concern:[3]
the lack of clear definitions
issues around the loss of control over information that doctors perceive
safety and the dangers of inaccurate information
issues of ownership and privacy
Several criticisms have been raised about the use of Web 2.0 in health care. Firstly, Google has limitations as a diagnostic tool for Medical Doctors (MDs), as it may be effective only for conditions with unique symptoms and signs that can easily be used as search terms.[24] Studies of its accuracy have returned varying results, and this remains in dispute.[27]
Secondly, long-held concerns exist about the effects of patients
obtaining information online, such as the idea that patients may delay
seeking medical advice[28] or accidentally reveal private medical data.[29][30] Finally, concerns exist about the quality of user-generated content leading to misinformation,[31][32] such as perpetuating the discredited claim that the MMR vaccine may cause autism.[33] In contrast, a 2004 study of a British epilepsy online support group suggested that only 6% of information was factually wrong.[34] In a 2007 Pew Research Center
survey of Americans, only 3% reported that online advice had caused
them serious harm, while nearly one-third reported that they or their
acquaintances had been helped by online health advice.
December 6, 2017 Original link: http://www.kurzweilai.net/new-technology-allows-robots-to-visualize-their-own-future
UC Berkeley researchers have developed a robotic learning technology
that enables robots to imagine the future of their actions so they can
figure out how to manipulate objects they have never encountered before.
It could help self-driving cars anticipate future events on the road
and produce more intelligent robotic assistants in homes.
The initial prototype focuses on learning simple manual skills
entirely from autonomous play — similar to how children can learn about
their world by playing with toys, moving them around, grasping, etc.
Using this technology, called visual foresight,
the robots can predict what their cameras will see if they perform a
particular sequence of movements. These robotic imaginations are still
relatively simple for now — predictions made only several seconds into
the future — but they are enough for the robot to figure out how to move
objects around on a table without disturbing obstacles.
The robot can learn to perform these tasks without any help from
humans or prior knowledge about physics, its environment, or what the
objects are. That’s because the visual imagination is learned entirely
from scratch from unattended and unsupervised (no humans involved)
exploration, where the robot plays with objects on a table.
After this play phase, the robot builds a predictive model of the
world, and can use this model to manipulate new objects that it has not
seen before.
“In the same way that we can imagine how our actions will move the
objects in our environment, this method can enable a robot to visualize
how different behaviors will affect the world around it,” said Sergey Levine,
assistant professor in Berkeley’s Department of Electrical Engineering
and Computer Sciences, whose lab developed the technology. “This can
enable intelligent planning of highly flexible skills in complex
real-world situations.”
At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA).
DNA-based models predict how pixels in an image will move from one
frame to the next, based on the robot’s actions. Recent improvements to
this class of models, as well as greatly improved planning capabilities,
have enabled robotic control based on video prediction to perform
increasingly complex tasks, such as sliding toys around obstacles and
repositioning multiple objects.
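A highly simplified sketch of the pixel-transport idea follows (illustrative only; it is not the actual DNA architecture, which is a learned convolutional recurrent network): the model predicts where each pixel's value moves in the next frame, conditioned on the commanded action, and chaining such predictions yields an imagined rollout that a planner can score against a goal.

```python
# Toy pixel-transport "prediction": here the next frame is just the current
# frame shifted by the (dy, dx) motion implied by the action, to show the
# bookkeeping behind action-conditioned video prediction rollouts.

import numpy as np

def predict_next_frame(frame, action):
    """Shift image content by the (dy, dx) motion implied by the action."""
    dy, dx = action
    predicted = np.zeros_like(frame)
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    new_ys, new_xs = ys + dy, xs + dx
    valid = (new_ys >= 0) & (new_ys < h) & (new_xs >= 0) & (new_xs < w)
    predicted[new_ys[valid], new_xs[valid]] = frame[ys[valid], xs[valid]]
    return predicted

frame = np.zeros((8, 8))
frame[3, 3] = 1.0                         # a single bright "object"
rollout = [frame]
for action in [(0, 1), (0, 1), (1, 0)]:   # push right, right, then down
    rollout.append(predict_next_frame(rollout[-1], action))
print([tuple(int(v) for v in np.argwhere(f == 1.0)[0]) for f in rollout])
# [(3, 3), (3, 4), (3, 5), (4, 5)]: the imagined object positions several
# steps ahead, which a planner can compare against a desired location.
```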
“In the past, robots have learned skills with a human supervisor
helping and providing feedback. What makes this work exciting is that
the robots can learn a range of visual object manipulation skills
entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.
With the new technology, a robot pushes objects on a table, then uses
the learned prediction model to choose motions that will move an object
to a desired location. Robots use the learned model from raw camera
observations to teach themselves how to avoid obstacles and push objects
around obstructions.
Since control through video prediction relies only on observations
that can be collected autonomously by the robot, such as through camera
images, the resulting method is general and broadly applicable. Building
video prediction models only requires unannotated video, which can be
collected by the robot entirely autonomously.
That contrasts with conventional computer-vision methods, which
require humans to manually label thousands or even millions of images.