
Tuesday, July 31, 2018

Philosophy of biology

From Wikipedia, the free encyclopedia

The philosophy of biology is a subfield of philosophy of science, which deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science then began paying increasing attention to biology, from the rise of neo-Darwinism in the 1930s and 1940s to the discovery of the structure of DNA in 1953 to more recent advances in genetic engineering. Other key ideas include the reduction of all life processes to biochemical reactions, and the incorporation of psychology into a broader neuroscience.

Overview

The philosophy of biology can be seen as following an empirical tradition, favoring naturalism. Many contemporary philosophers of biology have largely avoided traditional questions about the distinction between life and non-life. Instead, they have examined the practices, theories, and concepts of biologists with a view toward better understanding biology as a scientific discipline (or group of scientific fields). Scientific ideas are philosophically analyzed and their consequences are explored. It is sometimes difficult to delineate philosophy of biology as separate from theoretical biology. Questions philosophers of biology have attempted to answer include, for example:
  • "What is a biological species?"
  • "How is rationality possible, given our biological origins?"
  • "How do organisms coordinate their common behavior?"
  • "Are there genome editing agents?"
  • "How might our biological understandings of race, sexuality, and gender reflect social values?"
  • "What is natural selection, and how does it operate in nature?"
  • "How do medical doctors explain disease?"
  • "From where do language and logic stem?"
  • "How is ecology related to medicine?"
A subset of philosophers of biology with a more explicitly naturalistic orientation hope that biology will provide scientific answers to such fundamental problems of epistemology, ethics, aesthetics, anthropology and even metaphysics. Furthermore, progress in biology urges modern societies to rethink traditional values concerning all aspects of human life. The possibility of genetic modification of human stem cells, for example, has led to an ongoing controversy on how certain biological techniques could infringe upon ethical consensus (see bioethics). Some of the questions addressed by these philosophers of biology include:
  • "What is life?"[1]
  • "What makes humans uniquely human?"
  • "What is the basis of moral thinking?"
  • "What are the factors we use for aesthetic judgments?"
  • "Is evolution compatible with Christianity or other religious systems?"
Increasingly, ideas drawn from philosophical ontology and logic are being used by biologists in the domain of bioinformatics. Ontologies such as the Gene Ontology[2] are being used to annotate the results of biological experiments in a variety of model organisms in order to create logically tractable bodies of data available for reasoning and search. The Gene Ontology itself is a species-neutral graph-theoretical representation of biological types joined together by formally defined relations.[3]
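The reasoning the Gene Ontology supports can be sketched in a few lines. This is a minimal, hypothetical fragment for illustration only: the term names and relations below are hand-made (real GO terms carry identifiers like GO:0008150 and are maintained by the GO Consortium), but the key idea is shown faithfully, namely that annotating a result to one term implicitly annotates it to every ancestor term reachable through formally defined relations.

```python
# A minimal sketch of ontology-style annotation and reasoning.
# The terms and relations here are invented for illustration; the real
# Gene Ontology uses curated identifiers and a richer relation set.
from collections import deque

# Each term points to its parents via formally defined relations,
# forming a directed acyclic graph rather than a simple tree.
ONTOLOGY = {
    "mitochondrion": [("is_a", "organelle"), ("part_of", "cell")],
    "organelle": [("is_a", "cellular_component")],
    "cell": [("is_a", "cellular_component")],
    "cellular_component": [],
}

def ancestors(term):
    """Return every term reachable by following parent relations
    (the transitive closure over is_a and part_of)."""
    seen, queue = set(), deque([term])
    while queue:
        for _relation, parent in ONTOLOGY[queue.popleft()]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

# An experimental result annotated to "mitochondrion" is implicitly an
# annotation to all its ancestors, which is what makes ontology-annotated
# data searchable at any level of generality.
print(ancestors("mitochondrion"))
```

Because the graph is species-neutral, the same query works for annotations from any model organism, which is precisely what makes the pooled data "logically tractable" for reasoning and search.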

Philosophy of biology today has become a very visible, well-organized discipline, with its own journals, conferences, and professional organizations. The largest of the latter is the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB);[4] the society's name reflects the interdisciplinary nature of the field.

Reductionism, holism, and vitalism

One subject within philosophy of biology deals with the relationship between reductionism and holism, contending views with epistemological and methodological significance, but also with ethical and metaphysical connotations.
  • Scientific reductionism is the view that higher-level biological processes reduce to physical and chemical processes. For example, the biological process of respiration is explained as a biochemical process involving oxygen and carbon dioxide.
  • Holism is the view that emphasizes higher-level processes, also called emergent properties: phenomena at a larger level that occur due to the pattern of interactions between the elements of a system over time. For example, to explain why one species of finch survives a drought while others die out, the holistic method looks at the entire ecosystem. Reducing an ecosystem to its parts in this case would be less effective at explaining overall behavior (in this case, the decrease in biodiversity). As individual organisms must be understood in the context of their ecosystems, holists argue, so must lower-level biological processes be understood in the broader context of the living organism in which they take part. Proponents of this view cite our growing understanding of the multidirectional and multilayered nature of gene modulation (including epigenetic changes) as an area where a reductionist view is inadequate for full explanatory power.[5] See also Holism in science.
  • Vitalism is the view, rejected by mainstream biologists since the 19th century, that there is a life-force (called the "vis viva"), so far scientifically unmeasurable, that gives living organisms their "life." Vitalists often claimed that the vis viva acts with purposes according to its pre-established "form" (see teleology). Examples of vitalist philosophy are found in many religions. Mainstream biologists reject vitalism on the grounds that it opposes the scientific method, which is designed to build a reliable, evidence-based understanding of the world. On this epistemological view, scientists reject phenomena that have not been scientifically measured or verified, and thus reject vitalism.
Some philosophers of biology have attempted to explain the rise and fall of reductionism, vitalism, and holism throughout the history of biology. For example, these philosophers claim that the ideas of Charles Darwin ended the last remainders of teleology in biology, though the matter continues to be debated. Debates in these areas of philosophy of biology turn on how one views reductionism.

An autonomous philosophy of biology

All processes in organisms obey physical laws; the difference from inanimate processes lies in their organisation and their being subject to control by coded information. This has led some biologists and philosophers (for example, Ernst Mayr and David Hull) to return to the strictly philosophical reflections of Charles Darwin to resolve some of the problems which confronted them when they tried to employ a philosophy of science derived from classical physics. That positivist approach, exemplified by Joseph Henry Woodger, emphasised strict determinism (as opposed to high probability) and the discovery of universally applicable laws, testable in the course of experiment. It was difficult for biology, beyond a basic microbiological level, to live up to these strictures.[6] Standard philosophy of science seemed to leave out a lot of what characterised living organisms, namely a historical component in the form of an inherited genotype.

Biologists with philosophic interests responded, emphasising the dual nature of the living organism. On the one hand there was the genetic programme (represented in nucleic acids) - the genotype. On the other there was its extended body or soma - the phenotype. In accommodating the more probabilistic and non-universal nature of biological generalisations, it was a help that standard philosophy of science was in the process of accommodating similar aspects of 20th century physics.

This led to a distinction between proximate causes and explanations - "how" questions dealing with the phenotype; and ultimate causes - "why" questions, including evolutionary causes, focused on the genotype. This clarification was part of the great reconciliation, by Ernst Mayr among others, in the 1940s, between Darwinian evolution by natural selection and the genetic model of inheritance. A commitment to conceptual clarification has characterised many of these philosophers since. Trivially, this has reminded us of the scientific basis of all biology, while noting its diversity - from microbiology to ecology. A complete philosophy of biology would need to accommodate all these activities.[citation needed] Less trivially, it has unpacked the notion of "teleology". Since 1859, scientists have had no need for a notion of cosmic teleology - a programme or a law that can explain and predict evolution. Darwin provided that. But teleological explanations (relating to purpose or function) have remained stubbornly useful in biology - from the structural configuration of macromolecules to the study of co-operation in social systems. By clarifying and restricting the term to describing and explaining systems controlled by genetic programmes or other physical systems, teleological questions can be framed and investigated while remaining committed to the physical nature of all underlying organic processes.

Similar attention has been given to the concepts of natural selection (what is the target of natural selection? - the individual? the environment? the genome? the species?); adaptation; diversity and classification; species and speciation; and macroevolution.

Just as biology has developed as an autonomous discipline in full conversation with the other sciences, there is a great deal of work now being carried on by biologists and philosophers to develop a dedicated philosophy of biological science which, while in full conversation with all other philosophic disciplines, attempts to give answers to the real questions raised by scientific investigations in biology.

Other perspectives

While the overwhelming majority of English-speaking scholars operating under the banner of "philosophy of biology" work within the Anglo-American tradition of analytical philosophy, there is a stream of philosophic work in continental philosophy which seeks to deal with issues deriving from biological science. The communication difficulties involved between these two traditions are well known, not helped by differences in language. Gerhard Vollmer is often thought of as a bridge but, despite his education and residence in Germany, he largely works in the Anglo-American tradition, particularly pragmatism, and is famous for his development of Konrad Lorenz's and Willard Van Orman Quine's idea of evolutionary epistemology. On the other hand, one scholar who has attempted to give a more continental account of the philosophy of biology is Hans Jonas. His "The Phenomenon of Life" (New York, 1966) sets out boldly to offer an "existential interpretation of biological facts", starting with the organism's response to stimulus and ending with man confronting the Universe, and drawing upon a detailed reading of phenomenology. This is unlikely to have much influence on mainstream philosophy of biology, but indicates, as does Vollmer's work, the current powerful influence of biological thought on philosophy. Another account is given by the late Virginia Tech philosopher Marjorie Grene.

Philosophy of biology was historically associated very closely with theoretical evolutionary biology; more recently, however, there have been more diverse movements within the field, including efforts to examine, for instance, molecular biology.[11]

Scientific discovery process

Research in biology continues to be less guided by theory than it is in other sciences.[12] This is especially the case in the various "-omics" fields such as genomics, where the availability of high-throughput screening techniques and the complexity of the resulting data make research predominantly data-driven. Such data-intensive scientific discovery is considered by some to be the fourth paradigm, after empiricism, theory and computer simulation.[13] Others reject the idea that data-driven research is about to replace theory.[14][15] As Krakauer et al. put it: "machine learning is a powerful means of preprocessing data in preparation for mechanistic theory building, but should not be considered the final goal of a scientific inquiry."[16] In regard to cancer biology, Raspe et al. state: "A better understanding of tumor biology is fundamental for extracting the relevant information from any high throughput data."[17] The journal Science chose cancer immunotherapy as the breakthrough of 2013. According to their explanation, a lesson to be learned from the successes of cancer immunotherapy is that they emerged from decoding of basic biology.[18]

Theory in biology is to some extent less strictly formalized than in physics. Besides 1) classic mathematical-analytical theory, as in physics, there is 2) statistics-based theory, 3) computer simulation and 4) conceptual/verbal analysis.[19] Dougherty and Bittner argue that for biology to progress as a science, it has to move to more rigorous mathematical modeling, or otherwise risk being "empty talk".[20]
 
In tumor biology research, the characterization of cellular signaling processes has largely focused on identifying the function of individual genes and proteins. Janes,[21] however, showed the context-dependent nature of the signaling that drives cell decisions, demonstrating the need for a more systems-based approach.[22] The lack of attention to context dependency in preclinical research is also illustrated by the observation that preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug.

What Is Life?

From Wikipedia, the free encyclopedia

What Is Life? The Physical Aspect of the Living Cell
Author: Erwin Schrödinger
Country: United Kingdom
Language: English
Genre: Popular science
Publisher: Cambridge University Press
Publication date: 1944
Media type: Print
Pages: 194 pp.
ISBN: 0-521-42708-8
OCLC: 24503223
Dewey Decimal: 574/.01 20
LC Class: QH331 .S357 1992

What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned "that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized." Schrödinger's lecture focused on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?"

In the book, Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the genetic molecule. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. Both James D. Watson[2] and Francis Crick, who jointly proposed the double helix structure of DNA based on X-ray diffraction experiments by Rosalind Franklin, credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for his initial research.[3]

Background

The book is based on lectures delivered under the auspices of the Dublin Institute for Advanced Studies at Trinity College, Dublin, in February 1943 and published in 1944. At that time DNA was not yet accepted as the carrier of hereditary information; that acceptance came only after the Hershey–Chase experiment of 1952. Among the most successful branches of physics at the time were statistical physics and quantum mechanics, a theory that is itself deeply statistical in nature. Schrödinger himself was one of the founders of quantum mechanics.

Max Delbrück's thinking about the physical basis of life was an important influence on Schrödinger.[4] However, long before the publication of What is Life?, geneticist and 1946 Nobel Prize winner H. J. Muller had in his 1922 article "Variation due to Change in the Individual Gene"[5] already laid out all the basic properties of the "heredity molecule" (then not yet known to be DNA) that Schrödinger was to re-derive in 1944 "from first principles" in What is Life? (including the "aperiodicity" of the molecule), properties which Muller further specified and refined in his 1929 article "The Gene As The Basis of Life"[6] and during the 1930s.[7] Moreover, Muller himself wrote in a 1960 letter to a journalist that whatever What Is Life? got right about the "hereditary molecule" had already been published before 1944, and that only the speculations Schrödinger added were wrong; Muller also named two famous geneticists (including Delbrück) who knew every relevant pre-1944 publication and had been in contact with Schrödinger before 1944. But DNA as the molecule of heredity became topical only after Oswald Avery's most important bacterial-transformation experiments in 1944. Before these experiments, proteins were considered the most likely candidates.

Content

In chapter I, Schrödinger explains that most physical laws on a large scale are due to chaos on a small scale. He calls this principle "order-from-disorder." As an example he mentions diffusion, which can be modeled as a highly ordered process, but which is caused by random movement of atoms or molecules. If the number of atoms is reduced, the behaviour of a system becomes more and more random. He states that life greatly depends on order and that a naïve physicist may assume that the master code of a living organism has to consist of a large number of atoms.
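Schrödinger's "order-from-disorder" point about diffusion can be made concrete with a toy simulation. This is an illustrative sketch, not anything from the book: each particle takes purely random steps, yet in aggregate the ensemble obeys the lawful behaviour of diffusion (mean displacement near zero, mean squared displacement growing linearly with the number of steps), and the regularity degrades exactly as Schrödinger says when the number of particles is reduced.

```python
# Toy illustration of "order-from-disorder": random motion at the level of
# individual particles yields lawful diffusion in aggregate.
import random

random.seed(0)  # fixed seed for reproducibility
N_PARTICLES, N_STEPS = 10_000, 100
positions = [0] * N_PARTICLES

for _ in range(N_STEPS):
    for i in range(N_PARTICLES):
        positions[i] += random.choice((-1, 1))  # each step is pure chance

mean = sum(positions) / N_PARTICLES
msd = sum(x * x for x in positions) / N_PARTICLES  # mean squared displacement

# With many particles the randomness averages out: mean ≈ 0 and msd ≈ N_STEPS,
# the textbook diffusion law. Rerun with N_PARTICLES = 10 and both quantities
# fluctuate wildly, which is Schrödinger's point about small numbers of atoms.
print(mean, msd)
```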

In chapters II and III, he summarizes what was known at the time about the hereditary mechanism. Most importantly, he elaborates on the important role mutations play in evolution. He concludes that the carrier of hereditary information has to be both small in size and permanent in time, contradicting the naïve physicist's expectation. This contradiction cannot be resolved by classical physics.

In chapter IV, Schrödinger presents molecules, which are indeed stable even if they consist of only a few atoms, as the solution. Even though molecules were known before, their stability could not be explained by classical physics, but is due to the discrete nature of quantum mechanics. Furthermore, mutations are directly linked to quantum leaps.

He continues to explain, in chapter V, that true solids, which are also permanent, are crystals. The stability of molecules and crystals is due to the same principles and a molecule might be called "the germ of a solid." On the other hand, an amorphous solid, without crystalline structure, should be regarded as a liquid with a very high viscosity. Schrödinger believes the heredity material to be a molecule, which unlike a crystal does not repeat itself. He calls this an aperiodic crystal. Its aperiodic nature allows it to encode an almost infinite number of possibilities with a small number of atoms. He finally compares this picture with the known facts and finds it in accordance with them.
In chapter VI Schrödinger states:
…living matter, while not eluding the "laws of physics" as established up to date, is likely to involve "other laws of physics" hitherto unknown, which however, once they have been revealed, will form just as integral a part of science as the former.
He knows that this statement is open to misconception and tries to clarify it. The main principle involved with "order-from-disorder" is the second law of thermodynamics, according to which entropy only increases in a closed system (such as the universe). Schrödinger explains that living matter evades the decay to thermodynamic equilibrium by homeostatically maintaining negative entropy (today this quantity is called information[8]) in an open system.

In chapter VII, he maintains that "order-from-order" is not absolutely new to physics; in fact, it is even simpler and more plausible. But nature follows "order-from-disorder", with some exceptions such as the movement of the celestial bodies and the behaviour of mechanical devices such as clocks. But even those are influenced by thermal and frictional forces. The degree to which a system functions mechanically or statistically depends on the temperature. If heated, a clock ceases to function, because it melts. Conversely, if the temperature approaches absolute zero, any system behaves more and more mechanically. Some systems approach this mechanical behaviour rather quickly, with room temperature already being practically equivalent to absolute zero for them.

Schrödinger concludes this chapter and the book with philosophical speculations on determinism, free will, and the mystery of human consciousness. He attempts to "see whether we cannot draw the correct non-contradictory conclusion from the following two premises: (1) My body functions as a pure mechanism according to Laws of Nature; and (2) Yet I know, by incontrovertible direct experience, that I am directing its motions, of which I foresee the effects, that may be fateful and all-important, in which case I feel and take full responsibility for them. The only possible inference from these two facts is, I think, that I – I in the widest meaning of the word, that is to say, every conscious mind that has ever said or felt 'I' – am the person, if any, who controls the 'motion of the atoms' according to the Laws of Nature." Schrödinger then states that this insight is not new and that the Upanishads considered it, the insight of "ATHMAN = BRAHMAN", to "represent quintessence of deepest insights into the happenings of the world." Schrödinger rejects the idea that the source of consciousness should perish with the body because he finds the idea "distasteful". He also rejects the idea that there are multiple immortal souls that can exist without the body because he believes that consciousness is nevertheless highly dependent on the body. Schrödinger writes that, to reconcile the two premises,
The only possible alternative is simply to keep to the immediate experience that consciousness is a singular of which the plural is unknown; that there is only one thing and that what seems to be a plurality is merely a series of different aspects of this one thing…
Any intuitions that consciousness is plural, he says, are illusions. Schrödinger is sympathetic to the Hindu concept of Brahman, by which each individual's consciousness is only a manifestation of a unitary consciousness pervading the universe — which corresponds to the Hindu concept of God. Schrödinger concludes that "…'I' am the person, if any, who controls the 'motion of the atoms' according to the Laws of Nature." However, he also qualifies the conclusion as "necessarily subjective" in its "philosophical implications". In the final paragraph, he points out that what is meant by "I" is not the collection of experienced events but "namely the canvas upon which they are collected." If a hypnotist succeeds in blotting out all earlier reminiscences, he writes, there would be no loss of personal existence — "Nor will there ever be."[9]

Schrödinger's "paradox"

In a world governed by the second law of thermodynamics, all isolated systems are expected to approach a state of maximum disorder. Since life approaches and maintains a highly ordered state, some argue that this seems to violate the aforementioned second law, implying that there is a paradox. However, since the biosphere is not an isolated system, there is no paradox. The increase of order inside an organism is more than paid for by an increase in disorder outside this organism by the loss of heat into the environment. By this mechanism, the second law is obeyed, and life maintains a highly ordered state, which it sustains by causing a net increase in disorder in the Universe. In order to increase the complexity on Earth—as life does—free energy is needed and in this case is provided by the Sun.[10][11]
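The entropy bookkeeping that dissolves the "paradox" can be written out as a small worked example. The numbers below are rough, assumed figures (a human's resting metabolic heat output of about 100 W at a body temperature of about 310 K, and a hypothetical rate of internal ordering), chosen only to show that the heat an organism exports generates far more entropy in the environment than the organism removes internally.

```python
# Back-of-the-envelope second-law bookkeeping for a living organism,
# using assumed, order-of-magnitude numbers for illustration.

BODY_TEMP_K = 310.0      # ~37 °C body temperature
HEAT_OUTPUT_W = 100.0    # rough resting metabolic heat output of a human

# Entropy exported to the surroundings per second by dumping heat: dS = Q / T.
ds_environment = HEAT_OUTPUT_W / BODY_TEMP_K   # ≈ 0.32 J/(K·s)

# Suppose (hypothetically) that internal ordering lowers the organism's own
# entropy by 0.01 J/(K·s). The net change for the universe is still positive,
# so maintaining order locally is fully consistent with the second law.
ds_organism = -0.01
ds_universe = ds_organism + ds_environment

print(f"dS_universe = {ds_universe:.3f} J/(K*s)")
```

The same accounting applies one level up: the biosphere's ordering is paid for by the Sun, which delivers low-entropy radiation that Earth re-emits as a much larger amount of high-entropy heat.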


Monday, July 30, 2018

Incomplete Nature

From Wikipedia, the free encyclopedia
 
Incomplete Nature: How Mind Emerged from Matter
Author: Terrence W. Deacon
Country: United States
Language: English
Subject: Science
Published: W. W. Norton & Company; 1st edition (November 21, 2011)
Media type: Print
Pages: 670
ISBN: 978-0393049916
OCLC: 601107605
Dewey Decimal: 612.8/2

Incomplete Nature: How Mind Emerged from Matter is a 2011 book by biological anthropologist Terrence Deacon. The book covers topics in biosemiotics, philosophy of mind, and the origins of life. Broadly, the book seeks to naturalistically explain "aboutness", that is, concepts like intentionality, meaning, normativity, purpose, and function; which Deacon groups together and labels as ententional phenomena.

Core Ideas

Deacon's first book, The Symbolic Species, focused on the evolution of human language. In that book, Deacon notes that much of the mystery surrounding language origins comes from a profound confusion about the nature of semiotic processes themselves. Accordingly, the focus of Incomplete Nature shifts from human origins to the origin of life and semiosis. Incomplete Nature can be viewed as a sizable contribution to the growing body of work positing that the problem of consciousness and the problem of the origin of life are inextricably linked.[1][2] Deacon tackles these two linked problems by going back to basics. The book expands upon the classical conceptions of work and information in order to give an account of ententionality that is consistent with eliminative materialism and yet does not seek to explain away, or pass off as epiphenomenal, the non-physical properties of life.

Constraints

A central thesis of the book is that absence can still be efficacious. Deacon makes the claim that just as the concept of zero revolutionized mathematics, thinking about life, mind, and other ententional phenomena in terms of constraints (i.e. what is absent) can similarly help us overcome the artificial dichotomy of the mind-body problem. A good example of this concept is the hole that defines the hub of a wagon wheel. The hole itself is not a physical thing, but rather a source of constraint that helps to restrict the conformational possibilities of the wheel's components, such that, on a global scale, the property of rolling emerges. Constraints which produce emergent phenomena may not be understandable by examining the make-up of a pattern's constituents. Emergent phenomena are difficult to study because their complexity does not necessarily decompose into parts. When a pattern is broken down, the constraints are no longer at work; there is no hole, no absence to notice. The hub, as a hole for an axle, exists only in the assembled, functioning wheel, so breaking the wheel apart may not show you how the hub emerges.

Orthograde and contragrade

Deacon notes that the apparent patterns of causality exhibited by living systems seem to be in some ways the inverse of the causal patterns of non-living systems.[citation needed] In an attempt to find a solution to the philosophical problems associated with teleological explanations, Deacon returns to Aristotle's four causes and attempts to modernize them with thermodynamic concepts.

A cartoon characterization of the asymmetry implicit in thermodynamic change: change from a constrained ("ordered") state to a less constrained ("disordered") state tends to occur spontaneously (an orthograde process), while the reversed direction of change does not tend to occur spontaneously (a contragrade process) and so only tends to occur in response to the imposition of highly constrained external work.

Orthograde changes are caused internally. They are spontaneous changes. That is, orthograde changes are generated by the spontaneous elimination of asymmetries in a thermodynamic system in disequilibrium. Because orthograde changes are driven by the internal geometry of a changing system, orthograde causes can be seen as analogous to Aristotle's formal cause. More loosely, Aristotle's final cause can also be considered orthograde, because goal oriented actions are caused from within.[3]

Contragrade changes are imposed from the outside. They are non-spontaneous changes. Contragrade change is induced when one thermodynamic system interacts with the orthograde changes of another thermodynamic system. The interaction drives the first system into a higher energy, more asymmetrical state. Contragrade changes do work. Because contragrade changes are driven by external interactions with another changing system, contragrade causes can be seen as analogous to Aristotle's efficient cause.[4]

Homeodynamics, morphodynamics, and teleodynamics

Much of the book is devoted to expanding upon the ideas of classical thermodynamics, with an extended discussion about how consistently far from equilibrium systems can interact and combine to produce novel emergent properties.

Deacon defines three hierarchically nested levels of thermodynamic systems: Homeodynamic systems combine to produce morphodynamic systems which combine to produce teleodynamic systems. Teleodynamic systems can be further combined to produce higher orders of self organization.

Homeodynamics

Homeodynamic systems are essentially equivalent to classical thermodynamic systems like a gas under pressure or a solute in solution, but the term serves to emphasize that homeodynamics is an abstract process that can be realized in forms beyond the scope of classic thermodynamics. For example, the diffuse brain activity normally associated with emotional states can be considered a homeodynamic system because there is a general state of equilibrium toward which its components (neural activity) distribute.[5] In general, a homeodynamic system is any collection of components that will spontaneously eliminate constraints by rearranging its parts until a maximum-entropy (maximally disordered) state is achieved.

Morphodynamics

A morphodynamic system consists of a coupling of two homeodynamic systems such that the constraint dissipation of each complements the other, producing macroscopic order out of microscopic interactions. Morphodynamic systems require constant perturbation to maintain their structure, so they are relatively rare in nature. The paradigm example of a morphodynamic system is a Rayleigh–Bénard cell. Other common examples are snowflake formation, whirlpools, and the stimulated emission of laser light.


Maximum entropy production: The organized structure of a morphodynamic system forms to facilitate maximal entropy production. In the case of a Rayleigh–Bénard cell, heat at the base of the liquid produces an uneven distribution of high-energy molecules, which tend to diffuse towards the surface. As the temperature of the heat source increases, density effects come into play. Simple diffusion can no longer dissipate energy as fast as it is added, and so the bottom of the liquid becomes hotter and more buoyant than the cooler, denser liquid at the top. The bottom of the liquid begins to rise, and the top begins to sink, producing convection currents.
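The point at which "density effects come into play" is quantified in fluid dynamics by the Rayleigh number, a standard criterion not discussed in Deacon's book but useful for making the transition concrete: convection in a layer heated from below sets in roughly when the Rayleigh number exceeds a critical value (~1708 for a layer between rigid plates). The sketch below uses approximate property values for room-temperature water; the specific numbers are illustrative assumptions.

```python
# Hedged sketch: onset of Rayleigh–Bénard convection via the Rayleigh number
#   Ra = g * beta * dT * d^3 / (nu * alpha)
# Fluid properties below are approximate values for water near 20 °C.

G = 9.81            # gravitational acceleration, m/s^2
BETA = 2.1e-4       # thermal expansion coefficient, 1/K
NU = 1.0e-6         # kinematic viscosity, m^2/s
ALPHA = 1.4e-7      # thermal diffusivity, m^2/s
RA_CRITICAL = 1708  # critical Rayleigh number, rigid-rigid boundaries

def rayleigh(delta_t, depth):
    """Rayleigh number for a layer of thickness `depth` (m) with a
    bottom-to-top temperature difference `delta_t` (K)."""
    return G * BETA * delta_t * depth**3 / (NU * ALPHA)

# For a 5 mm layer of water, about one degree of temperature difference is
# already enough to tip the system from pure diffusion into convection cells.
for dt in (0.01, 0.1, 1.0):
    ra = rayleigh(dt, 0.005)
    print(f"dT={dt} K -> Ra={ra:.0f}, convects: {ra > RA_CRITICAL}")
```

Below the threshold, diffusion alone dissipates the energy; above it, the coupled diffusion-plus-convection regime Deacon describes takes over.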

Two systems: The significant heat differential on the liquid produces two homeodynamic systems. The first is a diffusion system, where high energy molecules on the bottom collide with lower energy molecules on the top until the added kinetic energy from the heat source is evenly distributed. The second is a convection system, where the low density fluid on the bottom mixes with the high density fluid on the top until the density becomes evenly distributed. The second system arises when there is too much energy to be effectively dissipated by the first, and once both systems are in place, they will begin to interact.

Self organization: The convection creates currents in the fluid that disrupt the pattern of heat diffusion from bottom to top. Heat begins to diffuse into the denser areas of current, irrespective of the vertical location of these denser portions of fluid. The areas of the fluid where diffusion is occurring most rapidly will be the most viscous because molecules are rubbing against each other in opposite directions. The convection currents will shun these areas in favor of parts of the fluid where they can flow more easily. And so the fluid spontaneously segregates itself into cells where high energy, low density fluid flows up from the center of the cell and cooler, denser fluid flows down along the edges, with diffusion effects dominating in the area between the center and the edge of each cell.

Synergy and constraint: What is notable about morphodynamic processes is that order spontaneously emerges precisely because the ordered system that results is more efficient at increasing entropy than a chaotic one. In the case of the Rayleigh–Bénard cell, neither diffusion nor convection on their own will produce as much entropy as both effects coupled together. When both effects are brought into interaction, they constrain each other into a particular geometric form because that form facilitates minimal interference between the two processes. The orderly hexagonal form is stable as long as the energy differential persists, and yet the orderly form more effectively degrades the energy differential than any other form. This is why morphodynamic processes in nature are usually so short lived. They are self-organizing, but also self-undermining.

Teleodynamics

A teleodynamic system consists of coupling two morphodynamic systems such that the self undermining quality of each is constrained by the other. Each system prevents the other from dissipating all of the energy available, and so long term organizational stability is obtained. Deacon claims that we should pinpoint the moment when two morphodynamic systems reciprocally constrain each other as the point when ententional qualities like function, purpose and normativity emerge.[6]

Autogenesis

Deacon explores the properties of teleodynamic systems by describing a chemically plausible model system called an autogen. Deacon emphasizes that the specific autogen he describes is not a proposed description of the first life form, but rather a description of the kinds of thermodynamic synergies that the first living creature likely possessed.[7]
 
Autogen (Incomplete Nature, p. 339)

Reciprocal catalysis: An autogen consists of two self-catalyzing cyclical morphodynamic chemical reactions, similar to a chemoton. In one reaction, organic molecules react in a looped series, the products of one reaction becoming the reactants for the next. This looped reaction is self-amplifying, producing more and more reactants until all the substrate is consumed. A side product of this reciprocally catalytic loop is a lipid that can be used as a reactant in a second reaction. This second reaction creates a boundary (either a microtubule or some other closed capsid-like structure) that serves to contain the first reaction. The boundary limits diffusion; it keeps all of the necessary catalysts in close proximity to each other. In addition, the boundary prevents the first reaction from completely consuming all of the available substrate in the environment.
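The self-amplifying loop can be sketched with a minimal rate model (hypothetical rate constants, not Deacon's actual chemistry): an autocatalyst A converts substrate S into more A, so growth accelerates until the substrate is exhausted.

```python
def autocatalysis(a0=0.01, s0=1.0, k=5.0, dt=0.001, steps=5000):
    """Euler integration of dA/dt = k*A*S, dS/dt = -k*A*S."""
    a, s = a0, s0
    trajectory = [a]
    for _ in range(steps):
        rate = k * a * s
        a += rate * dt
        s -= rate * dt
        trajectory.append(a)
    return trajectory

traj = autocatalysis()
# Sigmoidal growth: slow start, rapid amplification, then a plateau
# once the substrate is consumed (a + s is conserved at 1.01).
print(traj[0], traj[-1])
```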

The first self: Unlike an isolated morphodynamic process whose organization rapidly eliminates the energy gradient necessary to maintain its structure, a teleodynamic process is self-limiting and self-preserving. The two reactions complement each other and ensure that neither ever runs to equilibrium - that is, completion, cessation, and death. So, in a teleodynamic system there will be structures that embody a preliminary sketch of a biological function. The internal reaction network functions to create the substrates for the boundary reaction, and the boundary reaction functions to protect and constrain the internal reaction network. Either process in isolation would be abiotic, but together they create a system with a normative status dependent on the functioning of its component parts.

Work

As with other concepts in the book, in his discussion of work Deacon seeks to generalize the Newtonian conception of work such that the term can be used to describe and differentiate mental phenomena - to describe "that which makes daydreaming effortless but metabolically equivalent problem solving difficult."[8] Work is generally described as "activity that is necessary to overcome resistance to change. Resistance can be either active or passive, and so work can be directed towards enacting change that wouldn't otherwise occur or preventing change that would happen in its absence."[9] Using the terminology developed earlier in the book, work can be considered to be "the organization of differences between orthograde processes such that a locus of contragrade process is created. Or, more simply, work is a spontaneous change inducing a non-spontaneous change to occur."[10]

Thermodynamic work

A thermodynamic system's capacity to do work depends less upon the total energy of the system and more upon the geometric distribution of its components. A glass of water at 20 degrees Celsius will have the same amount of energy as a glass divided in half with the top fluid at 30 degrees and the bottom at 10, but only in the second glass will the top half have the capacity to do work upon the bottom. This is because work occurs at both macroscopic and microscopic levels. Microscopically, there is constant work being performed on one molecule by another when they collide. But the potential for this microscopic work to additively sum to macroscopic work depends on there being an asymmetric distribution of particle speeds, so that the average collision pushes in a focused direction. Microscopic work is necessary but not sufficient for macroscopic work. A global property of asymmetric distribution is also required.
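The glass example can be made quantitative with elementary thermodynamics (a worked illustration, not from the book): a reversible engine running between the two halves leaves both at the geometric-mean temperature, and the energy gap between that and the arithmetic mean is the maximum extractable work.

```python
import math

def max_work(T_hot, T_cold, heat_capacity):
    """Maximum work from reversibly equilibrating two equal bodies.
    A reversible engine leaves both at T_final = sqrt(T_hot * T_cold),
    so W = C * (T_hot + T_cold - 2 * sqrt(T_hot * T_cold))."""
    T_final = math.sqrt(T_hot * T_cold)
    return heat_capacity * (T_hot + T_cold - 2 * T_final)

C = 4184 * 0.1  # ~0.1 kg of water per half-glass, J/K (illustrative)
# Divided glass: 30 C over 10 C (temperatures in kelvin):
w_divided = max_work(303.15, 283.15, C)
# Uniform glass at 20 C: no temperature difference, no work:
w_uniform = max_work(293.15, 293.15, C)
print(round(w_divided, 1), w_uniform)
```

Both glasses hold the same total energy, yet only the asymmetric one can drive macroscopic change.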

Morphodynamic work

By recognizing that asymmetry is a general property of work - that work is done as asymmetric systems spontaneously tend towards symmetry - Deacon abstracts the concept of work and applies it to systems whose symmetries are vastly more complex than those covered by classical thermodynamics. In a morphodynamic system, the tendency towards symmetry produces not global equilibrium, but a complex geometric form like a hexagonal Bénard cell or the resonant frequency of a flute. This tendency towards convolutedly symmetric forms can be harnessed to do work on other morphodynamic systems, if the systems are properly coupled.

Resonance example: A good example of morphodynamic work is the induced resonance that can be observed by singing or playing a flute next to a string instrument like a harp or guitar. The vibrating air emitted from the flute will interact with the taut strings. If any of the strings are tuned to a resonant frequency that matches the note being played, they too will begin to vibrate and emit sound.
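The matching condition can be made concrete with the textbook formula for an ideal string's harmonics (standard physics, with illustrative string parameters): f_n = (n / 2L) * sqrt(T / mu).

```python
import math

def string_harmonics(length_m, tension_N, mu_kg_per_m, n_max=5):
    """Resonant frequencies of an ideal string fixed at both ends."""
    f1 = math.sqrt(tension_N / mu_kg_per_m) / (2 * length_m)
    return [n * f1 for n in range(1, n_max + 1)]

def resonates(note_hz, harmonics, tolerance_hz=2.0):
    """The string absorbs energy only near one of its harmonics."""
    return any(abs(note_hz - f) < tolerance_hz for f in harmonics)

# A string tuned so its fundamental is about 440 Hz (concert A):
harmonics = string_harmonics(length_m=0.33, tension_N=84.33,
                             mu_kg_per_m=0.001)
# Playing concert A on the flute excites the string; a note a
# semitone away (466.2 Hz) misses every harmonic and does not.
print(resonates(440.0, harmonics), resonates(466.2, harmonics))
```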

Contragrade change: When energy is added to the flute by blowing air into it, there is a spontaneous (orthograde) tendency for the system to dissipate the added energy by inducing the air within the flute to vibrate at a specific frequency. This orthograde morphodynamic form generation can be used to induce contragrade change in the system coupled to it - the taut string. Playing the flute does work on the string by causing it to enter a high energy state that could not be reached spontaneously in an uncoupled state.

Structure and form: Importantly, this is not just the macro scale propagation of random micro vibrations from one system to another. The global geometric structure of the system is essential. The total energy transferred from the flute to the string matters far less than the patterns it takes in transit. That is, the amplitude of the coupled note is irrelevant; what matters is its frequency. Notes that have a higher or lower frequency than the resonant frequency of the string will not be able to do morphodynamic work.

Teleodynamic work

Work is generally defined to be the interaction of two orthograde changing systems such that contragrade change is produced.[11] In teleodynamic systems, the spontaneous orthograde tendency is not to equilibrate (as in homeodynamic systems), nor to self-simplify (as in morphodynamic systems), but rather to tend towards self-preservation. Living organisms spontaneously tend to heal, to reproduce and to pursue resources towards these ends. Teleodynamic work acts on these tendencies and pushes them in a contragrade, non-spontaneous direction.

Reading exemplifies the logic of teleodynamic work. A passive source of cognitive constraints is potentially provided by the letterforms on a page. A literate person has structured his or her sensory and cognitive habits to use such letterforms to reorganize the neural activities constituting thinking. This enables us to do teleodynamic work to shift mental tendencies away from those that are spontaneous (such as daydreaming) to those that are constrained by the text. Artist: Giovanni Battista Piazzetta (1682–1754).

Evolution as work: Natural selection, or perhaps more accurately, adaptation, can be considered to be a ubiquitous form of teleodynamic work. The orthograde self-preservation and reproduction tendencies of individual organisms tend to undermine those same tendencies in conspecifics. This competition produces a constraint that tends to mold organisms into forms that are more adapted to their environments – forms that would otherwise not spontaneously persist.

For example, in a population of New Zealand wrybills that make a living by searching for grubs under rocks, those that have a bent beak gain access to more calories. Those with bent beaks are able to better provide for their young, and at the same time they remove a disproportionate quantity of grubs from their environment, making it more difficult for those with straight beaks to provide for their own young. Throughout their lives, all the wrybills in the population do work to structure the form of the next generation. The increased efficiency of the bent beak causes that morphology to dominate the next generation. Thus an asymmetry of beak shape distribution is produced in the population - an asymmetry produced by teleodynamic work.
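The generational shift can be sketched with the standard one-locus selection recursion from population genetics (hypothetical fitness values for illustration, not data about real wrybills): p' = p·w_bent / (p·w_bent + (1 − p)·w_straight).

```python
def select(p, w_bent=1.1, w_straight=1.0, generations=50):
    """Frequency of the bent-beak morph under constant relative
    fitness, iterated over discrete generations."""
    freqs = [p]
    for _ in range(generations):
        mean_w = p * w_bent + (1 - p) * w_straight
        p = p * w_bent / mean_w  # standard selection recursion
        freqs.append(p)
    return freqs

freqs = select(p=0.05)
# A 10% fitness advantage carries the morph from a 5% minority
# to a large majority of the population within 50 generations.
print(round(freqs[0], 2), round(freqs[-1], 2))
```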

Thought as work: Mental problem solving can also be considered teleodynamic work. Thought forms are spontaneously generated, and the task of problem solving is that of molding those forms to fit the context of the problem at hand. Deacon makes the link between evolution as teleodynamic work and thought as teleodynamic work explicit. "The experience of being sentient is what it feels like to be evolution."[12]

Emergent causal powers

By conceiving of work in this way, Deacon claims "we can begin to discern a basis for a form of causal openness in the universe."[13] While increases in complexity in no way alter the laws of physics, by juxtaposing systems together, pathways of spontaneous change can be made available that were inconceivably improbable prior to the systems' coupling. The causal power of any complex living system lies not solely in the underlying quantum mechanics but also in the global arrangement of its components. A careful arrangement of parts can constrain possibilities such that phenomena that were formerly impossibly rare can become improbably common.

Information

One of the central purposes of Incomplete Nature is to articulate a theory of biological information. The first formal theory of information was articulated by Claude Shannon in 1948 in his work A Mathematical Theory of Communication. Shannon's work is widely credited with ushering in the information age, but somewhat paradoxically, it was completely silent on questions of meaning and reference, i.e., what the information is about. As an engineer, Shannon was concerned with the challenge of reliably transmitting a message from one location to another. The meaning and content of the message was largely irrelevant. So, while Shannon information theory has been essential for the development of devices like computers, it has left open many philosophical questions regarding the nature of information. Incomplete Nature seeks to answer some of these questions.

Shannon information

Shannon's key insight was to recognize a link between entropy and information. Entropy is often defined as a measurement of disorder, or randomness, but this can be misleading. For Shannon's purposes, the entropy of a system is determined by the number of possible states that the system has the capacity to be in. Any one of these potential states can constitute a message. For example, a typewritten page can bear as many different messages as there are different combinations of characters that can be arranged on the page. The information content of a message can only be understood against the background context of all of the messages that could have been sent, but weren't. Information is produced by a reduction of entropy in the message medium.
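Shannon's measure can be computed directly (the standard formula, not spelled out in the text): H = −Σ p_i log2 p_i bits, maximized when all possible messages are equally likely.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 26 equally likely characters: maximal uncertainty per character.
h_uniform = shannon_entropy([1 / 26] * 26)
# A constrained source is more predictable, so each received
# character carries less information (a smaller entropy reduction).
h_skewed = shannon_entropy([0.5] + [0.5 / 25] * 25)
print(round(h_uniform, 3), round(h_skewed, 3))
```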

Three nested conceptions of information

Boltzmann entropy

Shannon's information based conception of entropy should be distinguished from the classical thermodynamic conception of entropy developed by Ludwig Boltzmann and others at the end of the nineteenth century. While Shannon entropy is static and has to do with the set of all possible messages/states that a signal bearing system might take, Boltzmann entropy has to do with the tendency of all dynamic systems to tend towards equilibrium. That is, there are many more ways for a collection of particles to be well mixed than to be segregated based on velocity, mass, or any other property. Boltzmann entropy is central to the theory of work developed earlier in the book because entropy dictates the direction in which a system will spontaneously tend.
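That "many more ways to be mixed" claim is a counting statement, and it can be checked directly (a standard statistical-mechanics sketch, not from the book): the number of microstates W with k of n particles in the left half of a box is the binomial coefficient, and S = k_B ln W is vastly larger for the mixed arrangement.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(n, k):
    """S = k_B * ln(W), where W = C(n, k) counts the microstates
    with k of n particles in the left half of the box."""
    return K_B * math.log(math.comb(n, k))

n = 100
s_segregated = boltzmann_entropy(n, 0)   # all on one side: W = 1, S = 0
s_mixed = boltzmann_entropy(n, n // 2)   # evenly mixed: W is astronomical
print(s_segregated, s_mixed > s_segregated)
```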

Significant information

Deacon's addition to Shannon information theory is to propose a method for describing not just how a message is transmitted, but also how it is interpreted. Deacon weaves together Shannon entropy and Boltzmann entropy in order to develop a theory of interpretation based in teleodynamic work. Interpretation is inherently normative. Data becomes information when it has significance for its interpreter. Thus interpretive systems are teleodynamic - the interpretive process is designed to perpetuate itself. "The interpretation of something as information indirectly reinforces the capacity to do this again."

Autopoiesis

From Wikipedia, the free encyclopedia
 
3D representation of a living cell during the process of mitosis, example of an autopoietic system

The term autopoiesis (from Greek αὐτo- (auto-), meaning 'self', and ποίησις (poiesis), meaning 'creation, production') refers to a system capable of reproducing and maintaining itself. The term was introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela to define the self-maintaining chemistry of living cells. Since then the concept has also been applied to the fields of cognition, systems theory and sociology.

The original definition can be found in Autopoiesis and Cognition: the Realization of the Living (1st edition 1973, 2nd 1980):[1]
Page 16: It was in these circumstances ... in which he analyzed Don Quixote's dilemma of whether to follow the path of arms (praxis, action) or the path of letters (poiesis, creation, production), I understood for the first time the power of the word "poiesis" and invented the word that we needed: autopoiesis. This was a word without a history, a word that could directly mean what takes place in the dynamics of the autonomy proper to living systems.

Page 78: An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network.[2]

Page 89: ... the space defined by an autopoietic system is self-contained and cannot be described by using dimensions that define another space. When we refer to our interactions with a concrete autopoietic system, however, we project this system on the space of our manipulations and make a description of this projection.

Meaning

Autopoiesis was originally presented as a system description that was said to define and explain the nature of living systems. A canonical example of an autopoietic system is the biological cell. The eukaryotic cell, for example, is made of various biochemical components such as nucleic acids and proteins, and is organized into bounded structures such as the cell nucleus, various organelles, a cell membrane and cytoskeleton. These structures, based on an external flow of molecules and energy, produce the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components (not unlike a wave propagating through a medium).

An autopoietic system is to be contrasted with an allopoietic system, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is something other than itself (the factory). However, if the system is extended from the factory to include components in the factory's "environment", such as supply chains, plant / equipment, workers, dealerships, customers, contracts, competitors, cars, spare parts, and so on, then as a total viable system it could be considered to be autopoietic.

Though others have often used the term as a synonym for self-organization, Maturana himself stated he would "[n]ever use the notion of self-organization ... Operationally it is impossible. That is, if the organization of a thing changes, the thing changes".[3] Moreover, an autopoietic system is autonomous and operationally closed, in the sense that there are sufficient processes within it to maintain the whole. Autopoietic systems are "structurally coupled" with their medium, embedded in a dynamic of changes that can be recalled as sensory-motor coupling. This continuous dynamic is considered as a rudimentary form of knowledge or cognition and can be observed throughout life-forms.

An application of the concept of autopoiesis to sociology can be found in Niklas Luhmann's Systems Theory, which was subsequently adapted by Bob Jessop in his studies of the capitalist state system. Marjatta Maula adapted the concept of autopoiesis in a business context. The theory of autopoiesis has also been applied in the context of legal systems by not only Niklas Luhmann, but also Gunther Teubner.[4][5]

In the context of textual studies, Jerome McGann argues that texts are "autopoietic mechanisms operating as self-generating feedback systems that cannot be separated from those who manipulate and use them".[6] Citing Maturana and Varela, he defines an autopoietic system as "a closed topological space that 'continuously generates and specifies its own organization through its operation as a system of production of its own components, and does this in an endless turnover of components'", concluding that "Autopoietic systems are thus distinguished from allopoietic systems, which are Cartesian and which 'have as the product of their functioning something different from themselves'". "Coding and markup appear allopoietic", McGann argues, but are generative parts of the system they serve to maintain, and thus language and print or electronic technology are autopoietic systems.[7]

In his discussion of Hegel, the philosopher Slavoj Žižek argues, "Hegel is – to use today's terms – the ultimate thinker of autopoiesis, of the process of the emergence of necessary features out of chaotic contingency, the thinker of contingency's gradual self-organisation, of the gradual rise of order out of chaos."[8]

Relation to complexity

Autopoiesis can be defined as the ratio between the complexity of a system and the complexity of its environment.[9]
This generalized view of autopoiesis considers systems as self-producing not in terms of their physical components, but in terms of their organization, which can be measured in terms of information and complexity. In other words, autopoietic systems can be described as those producing more of their own complexity than that produced by their environment.[10]
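Under this generalized definition, the ratio can be sketched with any complexity measure. The toy below is entirely illustrative (it uses Shannon entropy as a stand-in for complexity, and the state distributions are hypothetical): it simply flags a system whose internal complexity exceeds that of its environment.

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits, used here as a proxy for complexity."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def autopoiesis_ratio(system_dist, environment_dist):
    """Toy measure: system complexity over environment complexity;
    a ratio > 1 suggests the system produces more of its own
    complexity than its environment supplies."""
    return entropy_bits(system_dist) / entropy_bits(environment_dist)

# Hypothetical state distributions:
cell = [1 / 16] * 16          # 4 bits of internal differentiation
surroundings = [1 / 4] * 4    # 2 bits
ratio = autopoiesis_ratio(cell, surroundings)
print(ratio, ratio > 1)
```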

Relation to cognition

An extensive discussion of the connection of autopoiesis to cognition is provided by Thompson.[11] The basic notion of autopoiesis as involving constructive interaction with the environment is extended to include cognition. Initially, Maturana defined cognition as behavior of an organism "with relevance to the maintenance of itself".[12] However, computer models that are self-maintaining but non-cognitive have been devised, so some additional restrictions are needed, and the suggestion is that the maintenance process, to be cognitive, involves readjustment of the internal workings of the system in some metabolic process. On this basis it is claimed that autopoiesis is a necessary but not a sufficient condition for cognition.[13] Thompson (p. 127) takes the view that this distinction may or may not be fruitful, but what matters is that living systems involve autopoiesis and (if it is necessary to add this point) cognition as well. It can be noted that this definition of 'cognition' is restricted, and does not necessarily entail any awareness or consciousness by the living system.

Relation to consciousness

The connection of autopoiesis to cognition, or if necessary, of living systems to cognition, is an objective assessment ascertainable by observation of a living system.

One question that arises is about the connection between cognition seen in this manner and consciousness. The separation of cognition and consciousness recognizes that the organism may be unaware of the substratum where decisions are made. What is the connection between these realms? Thompson refers to this issue as the "explanatory gap", and one aspect of it is the hard problem of consciousness, how and why we have qualia.[14]

A second question is whether autopoiesis can provide a bridge between these concepts. Thompson discusses this issue from the standpoint of enactivism. An autopoietic cell actively relates to its environment. Its sensory responses trigger motor behavior governed by autopoiesis, and this behavior (it is claimed) is a simplified version of a nervous system behavior. The further claim is that real-time interactions like this require attention, and an implication of attention is awareness.[15]

Criticism

There are multiple criticisms of the use of the term in both its original context, as an attempt to define and explain the living, and its various expanded usages, such as applying it to self-organizing systems in general or social systems in particular.[16] Critics have argued that the term fails to define or explain living systems and that, because of the extreme language of self-referentiality it uses without any external reference, it is really an attempt to give substantiation to Maturana's radical constructivist or solipsistic epistemology,[17] or what Danilo Zolo[18][19] has called instead a "desolate theology". An example is the assertion by Maturana and Varela that "We do not see what we do not see and what we do not see does not exist".[20] The autopoietic model, said Rod Swenson,[21] is "miraculously decoupled from the physical world by its progenitors ... (and thus) grounded on a solipsistic foundation that flies in the face of both common sense and scientific knowledge".

Using light instead of electrons promises faster, smaller, more-efficient computers and smartphones

December 1, 2017
Original link:  http://www.kurzweilai.net/using-light-instead-of-electrons-promises-faster-smaller-more-efficient-computers-and-smartphones

Trapped light for optical computation (credit: Imperial College London)

By forcing light to go through a smaller gap than ever before, a research team at Imperial College London has taken a step toward computers based on light instead of electrons.

Light would be preferable for computing because it can carry information at much higher density, is much faster, and is more efficient (it generates little to no heat). But light beams don't easily interact with one another. So information on high-speed fiber-optic cables (provided by your cable TV company, for example) currently has to be converted (via a modem or other device) into slower signals (electrons on wires or wireless signals) to allow for processing the data on devices such as computers and smartphones.

Electron-microscope image of an optical-computing nanofocusing device that is 25 nanometers wide and 2 micrometers long, using grating couplers (vertical lines) to interface with fiber-optic cables. (credit: Nielsen et al., 2017/Imperial College London)

To overcome that limitation, the researchers used metamaterials to squeeze light into a metal channel only 25 nanometers (billionths of a meter) wide, increasing its intensity and allowing photons to interact over the range of micrometers (millionths of meters) instead of centimeters.*

That means optical computation that previously required a centimeters-size device can now be realized on the micrometer (one millionth of a meter) scale, bringing optical processing into the size range of electronic transistors.

The results were published Thursday Nov. 30, 2017 in the journal Science.

* Normally, when two light beams cross each other, the individual photons do not interact or alter each other, as two electrons do when they meet. That means a long span of material is needed to gradually accumulate the effect and make it useful. Here, a “plasmonic nanofocusing” waveguide is used, strongly confining light within a nonlinear organic polymer.
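The effect of confinement can be roughed out with a back-of-the-envelope estimate (illustrative numbers, not figures from the paper): at fixed power, intensity scales inversely with the mode's cross-sectional area, so squeezing light from a conventional waveguide mode into a tens-of-nanometers gap multiplies the nonlinear drive enormously.

```python
def intensity_enhancement(area_conventional_m2, area_nano_m2):
    """I = P / A at fixed power, so the intensity enhancement
    is simply the ratio of mode cross-sectional areas."""
    return area_conventional_m2 / area_nano_m2

# Illustrative mode areas: ~1 um^2 for a conventional silicon
# waveguide versus ~25 nm x 50 nm for a plasmonic gap mode.
enhancement = intensity_enhancement(1e-12, 25e-9 * 50e-9)
print(enhancement)  # intensity boosted by roughly 800x
```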


Abstract of Giant nonlinear response at a plasmonic nanofocus drives efficient four-wave mixing

Efficient optical frequency mixing typically must accumulate over large interaction lengths because nonlinear responses in natural materials are inherently weak. This limits the efficiency of mixing processes owing to the requirement of phase matching. Here, we report efficient four-wave mixing (FWM) over micrometer-scale interaction lengths at telecommunications wavelengths on silicon. We used an integrated plasmonic gap waveguide that strongly confines light within a nonlinear organic polymer. The gap waveguide intensifies light by nanofocusing it to a mode cross-section of a few tens of nanometers, thus generating a nonlinear response so strong that efficient FWM accumulates over wavelength-scale distances. This technique opens up nonlinear optics to a regime of relaxed phase matching, with the possibility of compact, broadband, and efficient frequency mixing integrated with silicon photonics.

Equality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Equality_...