
Monday, July 30, 2018

Health 2.0

From Wikipedia, the free encyclopedia
 
"Health 2.0" is a term introduced in the mid-2000s for the subset of health care technologies mirroring the wider Web 2.0 movement. It has been defined variously as including social media, user-generated content, and cloud-based and mobile technologies. Some Health 2.0 proponents see these technologies as empowering patients to have greater control over their own health care and diminishing medical paternalism. Critics of the technologies have expressed concerns about possible misinformation and violations of patient privacy.

History

Health 2.0 built on the possibilities for changing health care, which started with the introduction of eHealth in the mid-1990s following the emergence of the World Wide Web. In the mid-2000s, following the widespread adoption both of the Internet and of easy-to-use tools for communication, social networking, and self-publishing, there was a spate of media attention to, and increasing interest from, patients, clinicians, and medical librarians in using these tools for health care and medical purposes.[1][2]

Early examples of Health 2.0 were the use of a specific set of Web tools (blogs, email list-servs, online communities, podcasts, search, tagging, Twitter, videos, wikis, and more) by actors in health care including doctors, patients, and scientists, using principles of open source and user-generated content, and the power of networks and social networks in order to personalize health care, to collaborate, and to promote health education.[3] Possible explanations why health care has generated its own "2.0" term are the availability and proliferation of Health 2.0 applications across health care in general, and the potential for improving public health in particular.[4]

Current use

While the "2.0" moniker was originally associated with concepts like collaboration, openness, participation, and social networking,[5] in recent years the term "Health 2.0" has evolved to mean the role of SaaS and cloud-based technologies, and their associated applications on multiple devices. Health 2.0 describes the integration of these into much of general clinical and administrative workflow in health care. As of 2014, approximately 3,000 companies were offering products and services matching this definition, with venture capital funding in the sector exceeding $2.3 billion in 2013.[6]

Definitions

The "traditional" definition of "Health 2.0" focused on technology as an enabler for care collaboration: "The use of social software and light-weight tools to promote collaboration between patients, their caregivers, medical professionals, and other stakeholders in health."[7]

In 2011, Indu Subaiya redefined Health 2.0[8] as the use in health care of new cloud, SaaS, mobile, and device technologies that are:
  1. Adaptable technologies which easily allow other tools and applications to link and integrate with them, primarily through use of accessible APIs
  2. Focused on the user experience, bringing in the principles of user-centered design
  3. Data driven, in that they both create data and present data to the user in order to help improve decision making
This wider definition helps distinguish what is and what is not a Health 2.0 technology. Typically, enterprise-based, customized client-server systems are not, while more open, cloud-based systems fit the definition. However, this line was blurring by 2011–2012 as more enterprise vendors began to introduce cloud-based systems and native applications for new devices such as smartphones and tablets.

In addition, Health 2.0 has several competing terms, each with its own followers, if not exact definitions, including Connected Health, Digital Health, Medicine 2.0, and mHealth. All of these support a goal of wider change to the health care system, using technology-enabled reform that usually changes the relationship between patient and professional. An earlier formulation described Health 2.0 in terms of:
  1. Personalized search that looks into the long tail but cares about the user experience
  2. Communities that capture the accumulated knowledge of patients, caregivers, and clinicians, and explains it to the world
  3. Intelligent tools for content delivery—and transactions
  4. Better integration of data with content

Wider health system definitions

In the late 2000s, several commentators used Health 2.0 as a moniker for a wider concept of system reform, seeking a participatory process between patient and clinician: "New concept of health care wherein all the constituents (patients, physicians, providers, and payers) focus on health care value (outcomes/price) and use competition at the medical condition level over the full cycle of care as the catalyst for improving the safety, efficiency, and quality of health care".[9]

Health 2.0 defines the combination of health data and health information with (patient) experience, through the use of ICT, enabling the citizen to become an active and responsible partner in his/her own health and care pathway.[10]

Health 2.0 is participatory healthcare. Enabled by information, software, and communities that we collect or create, we the patients can be effective partners in our own healthcare, and we the people can participate in reshaping the health system itself.[11]

Definitions of Medicine 2.0 appear to be very similar but typically include more scientific and research aspects: "Medicine 2.0 applications, services and tools are Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies as well as semantic web and virtual reality tools, to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups."[12][13] A systematic review by Tom Van de Belt, Lucien Engelen et al., published in JMIR, found 46 unique definitions of Health 2.0.[14]

Overview

A model of Health 2.0

Health 2.0 refers to the use of a diverse set of technologies including Connected Health, electronic medical records, mHealth, telemedicine, and the use of the Internet by patients themselves such as through blogs, Internet forums, online communities, patient-to-physician communication systems, and other more advanced systems.[15][16] A key concept is that patients themselves should have greater insight into, and control over, information generated about them. Additionally, Health 2.0 relies on the use of modern cloud- and mobile-based technologies.

Much of the potential for change from Health 2.0 is facilitated by combining technology driven trends such as Personal Health Records with social networking —"[which] may lead to a powerful new generation of health applications, where people share parts of their electronic health records with other consumers and 'crowdsource' the collective wisdom of other patients and professionals."[5] Traditional models of medicine had patient records (held on paper or a proprietary computer system) that could only be accessed by a physician or other medical professional. Physicians acted as gatekeepers to this information, telling patients test results when and if they deemed it necessary. Such a model operates relatively well in situations such as acute care, where information about specific blood results would be of little use to a lay person, or in general practice where results were generally benign. However, in the case of complex chronic diseases, psychiatric disorders, or diseases of unknown etiology patients were at risk of being left without well-coordinated care because data about them was stored in a variety of disparate places and in some cases might contain the opinions of healthcare professionals which were not to be shared with the patient. Increasingly, medical ethics deems such actions to be medical paternalism, and they are discouraged in modern medicine.[17][18]

A hypothetical example demonstrates the increased engagement of a patient operating in a Health 2.0 setting: a patient goes to see their primary care physician with a presenting complaint, having first ensured their own medical record was up to date via the Internet. The treating physician might make a diagnosis or send for tests, the results of which could be transmitted directly to the patient's electronic medical record. If a second appointment is needed, the patient will have had time to research what the results might mean for them, what diagnoses may be likely, and may have communicated with other patients who have had a similar set of results in the past. On a second visit a referral might be made to a specialist. The patient might have the opportunity to search for the views of other patients on the best specialist to go to, and in combination with their primary care physician decides who to see. The specialist gives a diagnosis along with a prognosis and potential options for treatment. The patient has the opportunity to research these treatment options and take a more proactive role in coming to a joint decision with their healthcare provider. They can also choose to submit more data about themselves, such as through a personalized genomics service to identify any risk factors that might improve or worsen their prognosis. As treatment commences, the patient can track their health outcomes through a data-sharing patient community to determine whether the treatment is having an effect for them, and they can stay up to date on research opportunities and clinical trials for their condition. They also have the social support of communicating with other patients diagnosed with the same condition throughout the world.

Level of use of Web 2.0 in health care

Partly due to weak definitions, the novelty of the endeavor and its nature as an entrepreneurial (rather than academic) movement, little empirical evidence exists to explain how much Web 2.0 is being used in general. While it has been estimated that nearly one-third of the 100 million Americans who have looked for health information online say that they or people they know have been significantly helped by what they found,[19] this study considers only the broader use of the Internet for health management.

A study examining physician practices has suggested that a segment of 245,000 physicians in the U.S. are using Web 2.0 in their practice, indicating that use is beyond the early-adopter stage with regard to physicians and Web 2.0.[20]

Types of Web 2.0 technology in health care

Web 2.0 is commonly associated with technologies such as podcasts, RSS feeds, social bookmarking, weblogs (health blogs), wikis, and other forms of many-to-many publishing; social software; and web application programming interfaces (APIs).[21]

The following are examples of uses that have been documented in academic literature.

Purpose | Description | Case example in academic literature | Users
Staying informed | Staying up to date with the latest developments in a particular field | Podcasts, RSS, and search tools[2] | All (medical professionals and public)
Medical education | Professional development for doctors, and public health promotion by public health professionals and the general public | How podcasts can be used on the move to increase total available educational time,[22] or the many applications of these tools to public health[23] | All (medical professionals and public)
Collaboration and practice | Web 2.0 tools used in daily practice by medical professionals to find information and make decisions | Google searches revealed the correct diagnosis in 15 out of 26 cases (58%, 95% confidence interval 38% to 77%) in a 2005 study[24] | Doctors, nurses
Managing a particular disease | Patients using search tools to find information about a particular condition | Patients show different patterns of usage depending on whether they are newly diagnosed or managing a severe long-term illness; long-term patients are more likely to connect to a community in Health 2.0[25] | Public
Sharing data for research | Completing patient-reported outcomes and aggregating the data for personal and scientific research | Disease-specific communities for patients with rare conditions aggregate data on treatments, symptoms, and outcomes to improve their decision making and to carry out scientific research such as observational trials[26] | All (medical professionals and public)
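The confidence interval quoted for the Google-diagnosis study (15 correct diagnoses out of 26 cases, 58%, 95% CI 38% to 77%) can be reproduced, up to rounding, with the usual normal approximation for a binomial proportion:

```python
import math

successes, n = 15, 26            # correct diagnoses out of cases studied
p = successes / n                # point estimate, about 0.577
se = math.sqrt(p * (1 - p) / n)  # standard error of a binomial proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # 95% normal-approximation interval

print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # 57.7% (95% CI 38.7% to 76.7%)
```

The published figures likely used a similar approximation; exact (Clopper-Pearson) intervals would differ slightly at this small sample size.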

Criticism of the use of Web 2.0 in health care

Hughes et al. (2009) argue there are four major tensions represented in the literature on Health/Medicine 2.0. These concern:[3]
  1. the lack of clear definitions
  2. issues around the loss of control over information that doctors perceive
  3. safety and the dangers of inaccurate information
  4. issues of ownership and privacy
Several criticisms have been raised about the use of Web 2.0 in health care. Firstly, Google has limitations as a diagnostic tool for medical doctors (MDs), as it may be effective only for conditions with unique symptoms and signs that can easily be used as search terms.[24] Studies of its accuracy have returned varying results, and this remains in dispute.[27] Secondly, long-held concerns exist about the effects of patients obtaining information online, such as the idea that patients may delay seeking medical advice[28] or accidentally reveal private medical data.[29][30] Finally, concerns exist about the quality of user-generated content leading to misinformation,[31][32] such as perpetuating the discredited claim that the MMR vaccine may cause autism.[33] In contrast, a 2004 study of a British epilepsy online support group suggested that only 6% of information was factually wrong.[34] In a 2007 Pew Research Center survey of Americans, only 3% reported that online advice had caused them serious harm, while nearly one-third reported that they or their acquaintances had been helped by online health advice.

New technology allows robots to visualize their own future

December 6, 2017
Original link:  http://www.kurzweilai.net/new-technology-allows-robots-to-visualize-their-own-future
UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. It could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes.
The initial prototype focuses on learning simple manual skills entirely from autonomous play — similar to how children can learn about their world by playing with toys, moving them around, grasping, etc.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now — predictions made only several seconds into the future — but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles.
The robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment, or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised (no humans involved) exploration, where the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

The research team demonstrated the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on Monday, December 4, 2017.

Learning by playing: how it works

Robot’s imagined predictions (credit: UC Berkeley)

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next, based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.

“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.

With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Robots use the learned model from raw camera observations to teach themselves how to avoid obstacles and push objects around obstructions.
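The predict-then-choose loop described above can be sketched in miniature. The code below is a hypothetical stand-in, not the Berkeley implementation: a toy "learned" model that rolls object positions forward under candidate push sequences, and a random-shooting planner that scores each sequence by predicted distance to the goal. Visual foresight follows the same model-predictive pattern, but with a neural network predicting camera frames and a pixel-space cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(state, actions):
    """Stand-in for the learned prediction model: rolls the object
    position forward under a sequence of 2-D push actions. The real
    system predicts future camera frames rather than coordinates."""
    trajectory = [state]
    for action in actions:
        state = state + action  # toy dynamics: each push translates the object
        trajectory.append(state)
    return trajectory

def plan(state, goal, horizon=5, candidates=200):
    """Random-shooting planner: sample candidate action sequences,
    score each by the predicted distance to the goal, keep the best."""
    best_actions, best_cost = None, np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 2))
        final_state = predict(state, actions)[-1]
        cost = np.linalg.norm(final_state - goal)
        if cost < best_cost:
            best_actions, best_cost = actions, cost
    return best_actions, best_cost

# Push an object from the origin toward a desired location.
start, goal = np.zeros(2), np.array([3.0, -2.0])
actions, cost = plan(start, goal)
```

Because the planner consults only the prediction model, improving the model (for example, by more autonomous play) directly improves the plans, which is the point the researchers emphasize.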

Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. Building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously.

That contrasts with conventional computer-vision methods, which require humans to manually label thousands or even millions of images.

Gaia hypothesis

From Wikipedia, the free encyclopedia
The study of planetary habitability is partly based upon extrapolation from knowledge of the Earth's conditions, as the Earth is the only planet currently known to harbour life.

The Gaia hypothesis (/ˈɡaɪ.ə/ GHY-ə, /ˈɡeɪ.ə/ GAY-ə), also known as the Gaia theory or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet.

The hypothesis was formulated by the chemist James Lovelock[1] and co-developed by the microbiologist Lynn Margulis in the 1970s.[2] Lovelock named the idea after Gaia, the primordial goddess who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis.[3]

Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth.

The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology. Lovelock also once described the "geophysiology" of the Earth. Even so, the Gaia hypothesis continues to attract criticism, and today some scientists consider it to be only weakly supported by, or at odds with, the available evidence.

Introduction

Gaian hypotheses suggest that organisms co-evolve with their environment: that is, they "influence their abiotic environment, and that environment in turn influences the biota by Darwinian process". Lovelock (1995) gave evidence of this in his second book, showing the evolution from the world of the early thermo-acido-philic and methanogenic bacteria towards the oxygen-enriched atmosphere today that supports more complex life.

A reduced version of the hypothesis has been called "influential Gaia"[11] in "Directed Evolution of the Biosphere: Biogeochemical Selection or Gaia?" by Andrei G. Lapenis, which states that the biota influence certain aspects of the abiotic world, e.g. temperature and atmosphere. The publication is not the work of an individual but a peer-reviewed synthesis of a body of Russian scientific research. It describes the coevolution of life and the environment through "micro-forces"[11] and biogeochemical processes. An example is how the activity of photosynthetic bacteria during Precambrian times completely modified the Earth's atmosphere, turning it aerobic and thereby supporting the evolution of life (in particular eukaryotic life).

Since barriers existed between Russia and the rest of the world throughout the twentieth century, it is only relatively recently that the early Russian scientists who introduced concepts overlapping the Gaia hypothesis have become better known to the Western scientific community.[11] These scientists include:
  1. Piotr Alekseevich Kropotkin (1842–1921)
  2. Rafail Vasil’evich Rizpolozhensky (1847–1918)
  3. Vladimir Ivanovich Vernadsky (1863–1945)
  4. Vladimir Alexandrovich Kostitzin (1886–1963)
Biologists and Earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions may have counterbalancing effects on environmental change. Opponents of this view sometimes reference examples of events that resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one at the end of the Archaean and the beginning of the Proterozoic periods.

Less accepted versions of the hypothesis claim that changes in the biosphere are brought about through the coordination of living organisms and maintain those conditions through homeostasis. In some versions of Gaia philosophy, all lifeforms are considered part of one single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms.

Details

The Gaia hypothesis posits that the Earth is a self-regulating complex system involving the biosphere, the atmosphere, the hydrospheres and the pedosphere, tightly coupled as an evolving system. The hypothesis contends that this system as a whole, called Gaia, seeks a physical and chemical environment optimal for contemporary life.[12]

Gaia evolves through a cybernetic feedback system operated unconsciously by the biota, leading to broad stabilization of the conditions of habitability in a full homeostasis. Many processes in the Earth's surface essential for the conditions of life depend on the interaction of living forms, especially microorganisms, with inorganic elements. These processes establish a global control system that regulates Earth's surface temperature, atmosphere composition and ocean salinity, powered by the global thermodynamic disequilibrium state of the Earth system.[13]

The existence of a planetary homeostasis influenced by living forms had been observed previously in the field of biogeochemistry, and it is being investigated also in other fields like Earth system science. The originality of the Gaia hypothesis relies on the assessment that such homeostatic balance is actively pursued with the goal of keeping the optimal conditions for life, even when terrestrial or external events menace them.[14]

Regulation of global surface temperature

Rob Rohde's palaeotemperature graphs

Since life started on Earth, the energy provided by the Sun has increased by 25% to 30%;[15] however, the surface temperature of the planet has remained within the levels of habitability, reaching quite regular low and high margins. Lovelock has also hypothesised that methanogens produced elevated levels of methane in the early atmosphere, giving a view similar to that found in petrochemical smog, similar in some respects to the atmosphere on Titan.[7] This, he suggests, tended to screen out ultraviolet until the formation of the ozone screen, maintaining a degree of homeostasis. However, Snowball Earth[16] research has suggested that "oxygen shocks" and reduced methane levels led, during the Huronian, Sturtian and Marinoan/Varanger Ice Ages, to a world that very nearly became a solid "snowball". These epochs are evidence against the ability of the pre-Phanerozoic biosphere to fully self-regulate.

Processing of the greenhouse gas CO2, explained below, plays a critical role in the maintenance of the Earth temperature within the limits of habitability.

The CLAW hypothesis, inspired by the Gaia hypothesis, proposes a feedback loop that operates between ocean ecosystems and the Earth's climate.[17] The hypothesis specifically proposes that particular phytoplankton that produce dimethyl sulfide are responsive to variations in climate forcing, and that these responses lead to a negative feedback loop that acts to stabilise the temperature of the Earth's atmosphere.

Currently, the increase in human population and the environmental impact of human activities, such as the multiplication of greenhouse gases, may cause negative feedbacks in the environment to become positive feedbacks. Lovelock has stated that this could bring extremely accelerated global warming,[18] but he has since stated the effects will likely occur more slowly.[19]

Daisyworld simulations

Plots from a standard black & white Daisyworld simulation

James Lovelock and Andrew Watson developed the mathematical model Daisyworld, in which temperature regulation arises from a simple ecosystem consisting of two species whose activity varies in response to the planet's environment. The model demonstrates that beneficial feedback mechanisms can emerge in this "toy world" containing only self-interested organisms rather than through classic group selection mechanisms.[20]

Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies. The colour of the daisies influences the albedo of the planet such that black daisies absorb light and warm the planet, while white daisies reflect light and cool the planet. As the model runs the output of the "sun" increases, meaning that the surface temperature of an uninhabited "gray" planet will steadily rise. In contrast, on Daisyworld competition between the daisies (based on temperature-effects on growth rates) leads to a shifting balance of daisy populations that tends to favour a planetary temperature close to the optimum for daisy growth.
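The mechanism described above can be made concrete with a small numerical sketch of the standard Watson and Lovelock formulation. The parameter values (daisy albedos of 0.25 and 0.75, a 917 W/m^2 flux constant, a parabolic growth curve peaking at 295.5 K) follow the commonly published version of the model; the Euler integration, seed areas, and step counts are illustrative choices, not part of the original model.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0      # Daisyworld solar flux constant (W m^-2)
Q = 2.06e9        # heat-transfer coefficient (K^4)
ALB_BARE, ALB_WHITE, ALB_BLACK = 0.5, 0.75, 0.25
DEATH = 0.3       # daisy death rate
T_OPT = 295.5     # optimal growth temperature (K)

def growth(temp):
    """Parabolic growth response, zero outside roughly 278-313 K."""
    return max(1.0 - 0.003265 * (T_OPT - temp) ** 2, 0.0)

def equilibrium(lum, a_white=0.01, a_black=0.01, dt=0.05, steps=4000):
    """Integrate daisy areas to steady state at relative luminosity lum.
    Returns (white area, black area, planetary effective temperature in K)."""
    for _ in range(steps):
        bare = 1.0 - a_white - a_black
        albedo = bare * ALB_BARE + a_white * ALB_WHITE + a_black * ALB_BLACK
        te4 = FLUX * lum * (1.0 - albedo) / SIGMA           # effective temperature^4
        t_white = (Q * (albedo - ALB_WHITE) + te4) ** 0.25  # local daisy temperatures
        t_black = (Q * (albedo - ALB_BLACK) + te4) ** 0.25
        a_white += dt * a_white * (bare * growth(t_white) - DEATH)
        a_black += dt * a_black * (bare * growth(t_black) - DEATH)
        a_white, a_black = max(a_white, 0.01), max(a_black, 0.01)  # keep a seed stock
    return a_white, a_black, te4 ** 0.25
```

Running `equilibrium` across a range of luminosities shows the hallmark result: black daisies are favoured under a faint sun and white daisies under a bright one, and the planetary temperature varies far less than it would on a bare planet, even though each daisy population simply grows where conditions suit it.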

It has been suggested that the results were predictable because Lovelock and Watson selected examples that produced the responses they desired.[21]

Regulation of oceanic salinity

Ocean salinity has been constant at about 3.5% for a very long time.[22] Salinity stability in oceanic environments is important, as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constant ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested[23] that salinity may also be strongly influenced by seawater circulation through hot basaltic rocks, emerging at hot-water vents on mid-ocean ridges. However, the composition of seawater is far from equilibrium, and it is difficult to explain this fact without the influence of organic processes. One suggested explanation lies in the formation of salt plains throughout Earth's history. It is hypothesized that these are created by bacterial colonies that fix ions and heavy metals during their life processes.[22]

In the biogeochemical processes of the Earth, sources and sinks describe the movement of elements. The salt ions within our oceans and seas are sodium (Na+), chloride (Cl−), sulfate (SO42−), magnesium (Mg2+), calcium (Ca2+), and potassium (K+). The elements that comprise salinity do not readily change and are a conservative property of seawater.[22] There are many mechanisms that change salinity from a particulate form to a dissolved form and back. The known source of sodium, i.e. salts, is the weathering, erosion, and dissolution of rocks, which are transported into rivers and deposited into the oceans.

Kenneth J. Hsue, a corresponding author writing in 2001, described the Mediterranean Sea as Gaia's "kidney". The "desiccation" of the Mediterranean is the evidence of a functioning kidney. Earlier "kidney functions" were performed during the "deposition of the Cretaceous (South Atlantic), Jurassic (Gulf of Mexico), Permo-Triassic (Europe), Devonian (Canada), Cambrian/Precambrian (Gondwana) saline giants."[24]

Regulation of oxygen in the atmosphere

Levels of gases in the atmosphere in 420,000 years of ice core data from Vostok, Antarctica research station. Current period is at the left.

The Gaia hypothesis states that the Earth's atmospheric composition is kept at a dynamically steady state by the presence of life.[25] The atmospheric composition provides the conditions that contemporary life has adapted to. All the atmospheric gases other than noble gases present in the atmosphere are either made by organisms or processed by them.

The stability of the atmosphere in Earth is not a consequence of chemical equilibrium. Oxygen is a reactive compound, and should eventually combine with gases and minerals of the Earth's atmosphere and crust. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event.[26] Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 35% of atmospheric volume.[27] Traces of methane (at an amount of 100,000 tonnes produced per year)[28] should not exist, as methane is combustible in an oxygen atmosphere.

Dry air in the atmosphere of Earth contains roughly (by volume) 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide, and small amounts of other gases including methane. Lovelock originally speculated that concentrations of oxygen above about 25% would increase the frequency of wildfires and conflagration of forests. Recent work on the findings of fire-caused charcoal in Carboniferous and Cretaceous coal measures, in geologic periods when O2 did exceed 25%, has supported Lovelock's contention.[citation needed]

Processing of CO2

Gaia scientists see the participation of living organisms in the carbon cycle as one of the complex processes that maintain conditions suitable for life. The only significant natural source of atmospheric carbon dioxide (CO2) is volcanic activity, while the only significant removal is through the precipitation of carbonate rocks.[29] Carbon precipitation, solution and fixation are influenced by the bacteria and plant roots in soils, where they improve gaseous circulation, or in coral reefs, where calcium carbonate is deposited as a solid on the sea floor. Calcium carbonate is used by living organisms to manufacture carbonaceous tests and shells. Once dead, the living organisms' shells fall to the bottom of the oceans where they generate deposits of chalk and limestone.

One of these organisms is Emiliania huxleyi, an abundant coccolithophore algae which also has a role in the formation of clouds.[30] CO2 excess is compensated by an increase of coccolithophoride life, increasing the amount of CO2 locked in the ocean floor. Coccolithophorides increase the cloud cover, hence control the surface temperature, help cool the whole planet and favor precipitations necessary for terrestrial plants.[citation needed] Lately the atmospheric CO2 concentration has increased and there is some evidence that concentrations of ocean algal blooms are also increasing.[31]

Lichens and other organisms accelerate the weathering of rocks at the surface, while the decomposition of rocks also happens faster in the soil, thanks to the activity of roots, fungi, bacteria and subterranean animals. The flow of carbon dioxide from the atmosphere to the soil is therefore regulated with the help of living beings. When CO2 levels rise in the atmosphere, the temperature increases and plants grow more; this growth brings higher consumption of CO2 by the plants, which process it into the soil, removing it from the atmosphere.
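The loop just described (more CO2 raises temperature, warmth speeds plant growth and uptake, uptake draws CO2 back down) is a classic negative feedback, and it can be caricatured in a few lines of code. The sketch below is purely illustrative: every constant in it is an arbitrary assumption chosen only to make the loop visible, not a measured or published parameterisation.

```python
# Toy negative-feedback loop, illustrative only: all constants below are
# arbitrary assumptions, not measured or published values.

def simulate(co2, emissions, steps=2000, dt=0.1):
    """Integrate atmospheric CO2 (ppm) under a constant input and a
    biotic sink whose strength grows with temperature."""
    for _ in range(steps):
        temperature = 14.0 + 3.0 * (co2 / 280.0 - 1.0)  # crude warming response (deg C)
        uptake = 0.02 * co2 * temperature / 14.0        # drawdown strengthens with warmth
        co2 += dt * (emissions - uptake)
    return co2, temperature

# With an input that exactly matches the initial sink, nothing moves;
# doubling the input shifts the steady state upward but does not destroy it.
print(simulate(co2=280.0, emissions=5.6))
print(simulate(co2=280.0, emissions=11.2))
```

The point of the toy model is only that the temperature-sensitive sink gives the system a stable equilibrium: perturbing the input moves the balance point rather than causing runaway growth, which is the kind of regulation the Gaia literature attributes to the biota.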

History

Precedents

"Earthrise" taken from Apollo 8 on December 24, 1968

The idea of the Earth as an integrated whole, a living being, has a long tradition. The mythical Gaia was the primal Greek goddess personifying the Earth, the Greek version of "Mother Nature" (from Ge = Earth, and Aia = PIE grandmother), or the Earth Mother. James Lovelock gave this name to his hypothesis after a suggestion from the novelist William Golding, who was living in the same village as Lovelock at the time (Bowerchalke, Wiltshire, UK). Golding's advice was based on Gea, an alternative spelling for the name of the Greek goddess, which is used as a prefix in geology, geophysics and geochemistry.[32] Golding later made reference to Gaia in his Nobel prize acceptance speech.

In the eighteenth century, as geology consolidated as a modern science, James Hutton maintained that geological and biological processes are interlinked.[33] Later, the naturalist and explorer Alexander von Humboldt recognized the coevolution of living organisms, climate, and Earth's crust.[33] In the twentieth century, Vladimir Vernadsky formulated a theory of Earth's development that is now one of the foundations of ecology. Vernadsky was a Ukrainian geochemist and was one of the first scientists to recognize that the oxygen, nitrogen, and carbon dioxide in the Earth's atmosphere result from biological processes. During the 1920s he published works arguing that living organisms could reshape the planet as surely as any physical force. Vernadsky was a pioneer of the scientific foundations of the environmental sciences.[34] His visionary pronouncements were not widely accepted in the West, and some decades later the Gaia hypothesis received the same type of initial resistance from the scientific community.

Around the turn of the 20th century, Aldo Leopold, a pioneer in the development of modern environmental ethics and in the movement for wilderness conservation, suggested a living Earth in his biocentric or holistic ethics regarding land.
It is at least not impossible to regard the earth's parts—soil, mountains, rivers, atmosphere etc.,—as organs or parts of organs of a coordinated whole, each part with its definite function. And if we could see this whole, as a whole, through a great period of time, we might perceive not only organs with coordinated functions, but possibly also that process of consumption and replacement which in biology we call metabolism, or growth. In such case we would have all the visible attributes of a living thing, which we do not realize to be such because it is too big, and its life processes too slow.
— Aldo Leopold, Animate Earth.[35]
Another influence for the Gaia hypothesis and the environmental movement in general came as a side effect of the Space Race between the Soviet Union and the United States of America. During the 1960s, the first humans in space could see the Earth as a whole. The photograph Earthrise, taken by astronaut William Anders in 1968 during the Apollo 8 mission, became, through the Overview Effect, an early symbol for the global ecology movement.[36]

Formulation of the hypothesis


Lovelock started defining the idea of a self-regulating Earth controlled by the community of living organisms in September 1965, while working at the Jet Propulsion Laboratory in California on methods of detecting life on Mars.[37][38] The first paper to mention it was Planetary Atmospheres: Compositional and other Changes Associated with the Presence of Life, co-authored with C.E. Giffin.[39] A main concept was that life could be detected on a planetary scale from the chemical composition of the atmosphere. According to the data gathered by the Pic du Midi observatory, planets like Mars and Venus had atmospheres in chemical equilibrium. This difference from the Earth's atmosphere was considered to be proof that there was no life on these planets.

Lovelock formulated the Gaia hypothesis in journal articles in 1972[1] and 1974.[2] An article in the New Scientist of February 6, 1975,[40] and a popularizing book-length version of the hypothesis, published in 1979 as Gaia: A New Look at Life on Earth, began to attract scientific and critical attention.

Lovelock initially called it the Earth feedback hypothesis,[41] a way to explain the fact that combinations of chemicals including oxygen and methane persist in stable concentrations in the atmosphere of the Earth. Lovelock suggested detecting such combinations in other planets' atmospheres as a relatively reliable and cheap way to detect life.


Later, other relationships, such as the finding that sea creatures produce sulfur and iodine in approximately the same quantities required by land creatures, emerged and helped bolster the hypothesis.[42]

In 1971 microbiologist Lynn Margulis joined Lovelock in the effort to flesh out the initial hypothesis into scientifically proven concepts, contributing her knowledge of how microbes affect the atmosphere and the different layers of the planet's surface.[4] The American biologist had also drawn criticism from the scientific community with her theory on the origin of eukaryotic organelles and her contributions to endosymbiotic theory, which is now widely accepted. Margulis dedicated the last of eight chapters in her book, The Symbiotic Planet, to Gaia. However, she objected to the widespread personification of Gaia and stressed that Gaia is "not an organism", but "an emergent property of interaction among organisms". She defined Gaia as "the series of interacting ecosystems that compose a single huge ecosystem at the Earth's surface. Period". The book's most memorable "slogan" was actually quipped by a student of Margulis': "Gaia is just symbiosis as seen from space".

James Lovelock called his first proposal the Gaia hypothesis but has also used the term Gaia theory. Lovelock states that the initial formulation was based on observation, but still lacked a scientific explanation. The Gaia hypothesis has since been supported by a number of scientific experiments[43] and provided a number of useful predictions.[44] In fact, wider research proved the original hypothesis wrong, in the sense that it is not life alone but the whole Earth system that does the regulating.[12]

First Gaia conference

In 1985, the first public symposium on the Gaia hypothesis, Is The Earth A Living Organism? was held at University of Massachusetts Amherst, August 1–6.[45] The principal sponsor was the National Audubon Society. Speakers included James Lovelock, George Wald, Mary Catherine Bateson, Lewis Thomas, John Todd, Donald Michael, Christopher Bird, Thomas Berry, David Abram, Michael Cohen, and William Fields. Some 500 people attended.[46]

Second Gaia conference

In 1988, climatologist Stephen Schneider organised the first Chapman Conference on Gaia for the American Geophysical Union,[47] held in San Diego, California, on March 7, 1988.

During the "philosophical foundations" session of the conference, David Abram spoke on the influence of metaphor in science, and of the Gaia hypothesis as offering a new and potentially game-changing metaphorics, while James Kirchner criticised the Gaia hypothesis for its imprecision. Kirchner claimed that Lovelock and Margulis had not presented one Gaia hypothesis, but four -
  • CoEvolutionary Gaia: that life and the environment had evolved in a coupled way. Kirchner claimed that this was already accepted scientifically and was not new.
  • Homeostatic Gaia: that life maintained the stability of the natural environment, and that this stability enabled life to continue to exist.
  • Geophysical Gaia: that the Gaia hypothesis generated interest in geophysical cycles and therefore led to interesting new research in terrestrial geophysical dynamics.
  • Optimising Gaia: that Gaia shaped the planet in a way that made it an optimal environment for life as a whole. Kirchner claimed that this was not testable and therefore was not scientific.
Of Homeostatic Gaia, Kirchner recognised two alternatives. "Weak Gaia" asserted that life tends to make the environment stable for the flourishing of all life. "Strong Gaia" according to Kirchner, asserted that life tends to make the environment stable, to enable the flourishing of all life. Strong Gaia, Kirchner claimed, was untestable and therefore not scientific.[48]

Lovelock and other Gaia-supporting scientists, however, did attempt to disprove the claim that the hypothesis is not scientific because it is impossible to test it by controlled experiment. For example, against the charge that Gaia was teleological, Lovelock and Andrew Watson offered the Daisyworld Model (and its modifications, above) as evidence against most of these criticisms.[20] Lovelock said that the Daisyworld model "demonstrates that self-regulation of the global environment can emerge from competition amongst types of life altering their local environment in different ways".[49]
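The self-regulation Lovelock describes can be seen in a minimal numerical sketch of Daisyworld. The following Python code is an illustrative reimplementation, not Watson and Lovelock's published model; the constants (solar flux, albedos, growth curve, death rate) follow values commonly quoted from their 1983 paper, while the time step, run length, and the small seed-population floor are assumptions made for this sketch.

```python
# Minimal Daisyworld sketch: white and black daisies with different
# albedos compete, and their shifting populations buffer the planetary
# temperature as solar luminosity rises.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0      # solar flux at luminosity 1.0 (W m^-2)
ALBEDO = {"ground": 0.5, "white": 0.75, "black": 0.25}
Q = 20.0          # local heat-redistribution factor (K)
DEATH = 0.3       # daisy death rate
T_OPT = 295.5     # optimal growth temperature (K)

def growth(t_local):
    """Parabolic growth response, zero far from the optimum."""
    return max(1.0 - 0.003265 * (T_OPT - t_local) ** 2, 0.0)

def step(a_white, a_black, luminosity, dt=0.01):
    """Advance the daisy cover fractions by one Euler step."""
    bare = max(1.0 - a_white - a_black, 0.0)
    albedo = (bare * ALBEDO["ground"] + a_white * ALBEDO["white"]
              + a_black * ALBEDO["black"])
    t_planet = (FLUX * luminosity * (1.0 - albedo) / SIGMA) ** 0.25
    t_white = Q * (albedo - ALBEDO["white"]) + t_planet
    t_black = Q * (albedo - ALBEDO["black"]) + t_planet
    a_white += dt * a_white * (bare * growth(t_white) - DEATH)
    a_black += dt * a_black * (bare * growth(t_black) - DEATH)
    # floor keeps a seed population alive (an assumption of this sketch)
    return max(a_white, 0.001), max(a_black, 0.001), t_planet

a_w, a_b = 0.01, 0.01
for lum in (0.8, 1.0, 1.2):          # slowly brightening sun
    for _ in range(2000):
        a_w, a_b, t = step(a_w, a_b, lum)
    print(f"L={lum:.1f}  white={a_w:.2f}  black={a_b:.2f}  T={t - 273.15:.1f} C")
```

As luminosity ramps up, warm-seeking black daisies give way to cooling white ones, and the planetary temperature varies far less than a bare-rock calculation would predict. No daisy "intends" anything; regulation emerges from local competition, which is exactly the point of Lovelock's quoted remark.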

Lovelock was careful to present a version of the Gaia hypothesis that made no claim that Gaia intentionally or consciously maintained the complex balance in her environment that life needed to survive. It would appear that the claim that Gaia acts "intentionally" was a metaphoric statement in his popular initial book and was not meant to be taken literally. This new statement of the Gaia hypothesis was more acceptable to the scientific community. Most accusations of teleologism ceased following this conference.

Third Gaia conference

By the time of the 2nd Chapman Conference on the Gaia Hypothesis, held at Valencia, Spain, on 23 June 2000,[50] the situation had changed significantly. Rather than discussing teleological views of Gaia or "types" of Gaia hypotheses, the focus was upon the specific mechanisms by which basic short-term homeostasis was maintained within a framework of significant long-term evolutionary structural change.

The major questions were:[51]
  1. "How has the global biogeochemical/climate system called Gaia changed in time? What is its history? Can Gaia maintain stability of the system at one time scale but still undergo vectorial change at longer time scales? How can the geologic record be used to examine these questions?"
  2. "What is the structure of Gaia? Are the feedbacks sufficiently strong to influence the evolution of climate? Are there parts of the system determined pragmatically by whatever disciplinary study is being undertaken at any given time or are there a set of parts that should be taken as most true for understanding Gaia as containing evolving organisms over time? What are the feedbacks among these different parts of the Gaian system, and what does the near closure of matter mean for the structure of Gaia as a global ecosystem and for the productivity of life?"
  3. "How do models of Gaian processes and phenomena relate to reality and how do they help address and understand Gaia? How do results from Daisyworld transfer to the real world? What are the main candidates for "daisies"? Does it matter for Gaia theory whether we find daisies or not? How should we be searching for daisies, and should we intensify the search? How can Gaian mechanisms be investigated using process models or global models of the climate system that include the biota and allow for chemical cycling?"
In 1997, Tyler Volk argued that a Gaian system is almost inevitably produced as a result of an evolution towards far-from-equilibrium homeostatic states that maximise entropy production, and Kleidon (2004) agreed, stating: "...homeostatic behavior can emerge from a state of MEP associated with the planetary albedo"; "...the resulting behavior of a biotic Earth at a state of MEP may well lead to near-homeostatic behavior of the Earth system on long time scales, as stated by the Gaia hypothesis". Staley (2002) has similarly proposed "...an alternative form of Gaia theory based on more traditional Darwinian principles... In [this] new approach, environmental regulation is a consequence of population dynamics, not Darwinian selection. The role of selection is to favor organisms that are best adapted to prevailing environmental conditions. However, the environment is not a static backdrop for evolution, but is heavily influenced by the presence of living organisms. The resulting co-evolving dynamical process eventually leads to the convergence of equilibrium and optimal conditions".

Fourth Gaia conference

A fourth international conference on the Gaia hypothesis, sponsored by the Northern Virginia Regional Park Authority and others, was held in October 2006 at the Arlington, VA campus of George Mason University.[52]

Martin Ogle, Chief Naturalist for NVRPA and long-time Gaia hypothesis proponent, organized the event. Lynn Margulis, Distinguished University Professor in the Department of Geosciences, University of Massachusetts-Amherst, and long-time advocate of the Gaia hypothesis, was a keynote speaker. Among the many other speakers were Tyler Volk, Co-director of the Program in Earth and Environmental Science at New York University; Dr. Donald Aitken, Principal of Donald Aitken Associates; Dr. Thomas Lovejoy, President of the Heinz Center for Science, Economics and the Environment; Robert Correll, Senior Fellow, Atmospheric Policy Program, American Meteorological Society; and noted environmental ethicist J. Baird Callicott.

This conference approached the Gaia hypothesis as both science and metaphor as a means of understanding how we might begin addressing 21st century issues such as climate change and ongoing environmental destruction.

Criticism

After being largely ignored by most scientists from 1969 until 1977, the initial Gaia hypothesis was thereafter criticized for a period by a number of scientists, such as Ford Doolittle,[53] Richard Dawkins[54] and Stephen Jay Gould.[47] Lovelock has said that because his hypothesis is named after a Greek goddess, and championed by many non-scientists,[41] it was interpreted as a neo-Pagan religion. Many scientists also criticised the approach taken in his popular book Gaia, a New Look at Life on Earth for being teleological—a belief that things are purposeful and aimed towards a goal. Responding to this critique in 1990, Lovelock stated, "Nowhere in our writings do we express the idea that planetary self-regulation is purposeful, or involves foresight or planning by the biota".

Stephen Jay Gould criticised Gaia as being "a metaphor, not a mechanism."[55] He wanted to know the actual mechanisms by which self-regulating homeostasis was achieved. In his defense of Gaia, David Abram argues that Gould overlooked the fact that "mechanism", itself, is a metaphor — albeit an exceedingly common and often unrecognized metaphor — one which leads us to consider natural and living systems as though they were machines organized and built from outside (rather than as autopoietic or self-organizing phenomena). Mechanical metaphors, according to Abram, lead us to overlook the active or agential quality of living entities, while the organismic metaphorics of the Gaia hypothesis accentuate the active agency of both the biota and the biosphere as a whole.[56][57] With regard to causality in Gaia, Lovelock argues that no single mechanism is responsible, that the connections between the various known mechanisms may never be known, that this is accepted in other fields of biology and ecology as a matter of course, and that specific hostility is reserved for his own hypothesis for other reasons.[58]

Aside from clarifying his language and understanding of what is meant by a life form, Lovelock himself ascribes most of the criticism to a lack of understanding of non-linear mathematics by his critics, and a linearizing form of greedy reductionism in which all events have to be immediately ascribed to specific causes before the fact. He also states that most of his critics are biologists but that his hypothesis includes experiments in fields outside biology, and that some self-regulating phenomena may not be mathematically explainable.[58]

Natural selection and evolution

Lovelock has suggested that global biological feedback mechanisms could evolve by natural selection, stating that organisms that improve their environment for their survival do better than those that damage their environment. However, in the early 1980s, W. Ford Doolittle and Richard Dawkins separately argued against Gaia. Doolittle argued that nothing in the genome of individual organisms could provide the feedback mechanisms proposed by Lovelock, and therefore the Gaia hypothesis proposed no plausible mechanism and was unscientific.[53] Dawkins meanwhile stated that for organisms to act in concert would require foresight and planning, which is contrary to the current scientific understanding of evolution.[54] Like Doolittle, he also rejected the possibility that feedback loops could stabilize the system.

Lynn Margulis, a microbiologist who collaborated with Lovelock in supporting the Gaia hypothesis, argued in 1999 that "Darwin's grand vision was not wrong, only incomplete. In accentuating the direct competition between individuals for resources as the primary selection mechanism, Darwin (and especially his followers) created the impression that the environment was simply a static arena". She wrote that the composition of the Earth's atmosphere, hydrosphere, and lithosphere are regulated around "set points" as in homeostasis, but those set points change with time.[59]

Evolutionary biologist W. D. Hamilton called the concept of Gaia Copernican, adding that it would take another Newton to explain how Gaian self-regulation takes place through Darwinian natural selection.[32][better source needed]

Criticism in the 21st century

The Gaia hypothesis continues to be broadly skeptically received by the scientific community. For instance, arguments both for and against it were laid out in the journal Climatic Change in 2002 and 2003. A significant argument raised against it is the existence of many examples where life has had a detrimental or destabilising effect on the environment rather than acting to regulate it.[8][9] Several recent books have criticised the Gaia hypothesis, expressing views ranging from "... the Gaia hypothesis lacks unambiguous observational support and has significant theoretical difficulties"[60] to "Suspended uncomfortably between tainted metaphor, fact, and false science, I prefer to leave Gaia firmly in the background"[10] to "The Gaia hypothesis is supported neither by evolutionary theory nor by the empirical evidence of the geological record".[61] The CLAW hypothesis,[17] initially suggested as a potential example of direct Gaian feedback, has subsequently been found to be less credible as understanding of cloud condensation nuclei has improved.[62] In 2009 the Medea hypothesis was proposed: that life has highly detrimental (biocidal) impacts on planetary conditions, in direct opposition to the Gaia hypothesis.[63]

In a recent book-length evaluation of the Gaia hypothesis considering modern evidence from across the various relevant disciplines, the author, Toby Tyrrell, concluded that: "I believe Gaia is a dead end. Its study has, however, generated many new and thought provoking questions. While rejecting Gaia, we can at the same time appreciate Lovelock's originality and breadth of vision, and recognise that his audacious concept has helped to stimulate many new ideas about the Earth, and to champion a holistic approach to studying it".[64] Elsewhere he presents his conclusion "The Gaia hypothesis is not an accurate picture of how our world works".[65] This statement needs to be understood as referring to the "strong" and "moderate" forms of Gaia—that the biota obeys a principle that works to make Earth optimal (strength 5) or favourable for life (strength 4) or that it works as a homeostatic mechanism (strength 3). The latter is the "weakest" form of Gaia that Lovelock has advocated. Tyrrell rejects it. However, he finds that the two weaker forms of Gaia—Coevolutionary Gaia and Influential Gaia, which assert that there are close links between the evolution of life and the environment and that biology affects the physical and chemical environment—are both credible, but that it is not useful to use the term "Gaia" in this sense.

Ecocentrism

From Wikipedia, the free encyclopedia

Ecocentrism (/ˌɛkoʊˈsɛntrɪzəm/; from Greek: οἶκος oikos, "house" and κέντρον kentron, "center") is a term used in ecological political philosophy to denote a nature-centered, as opposed to human-centered (i.e. anthropocentric), system of values. The justification for ecocentrism usually consists in an ontological belief and subsequent ethical claim. The ontological belief denies that there are any existential divisions between human and non-human nature sufficient to claim that humans are either (a) the sole bearers of intrinsic value or (b) possess greater intrinsic value than non-human nature. The subsequent ethical claim is therefore for an equality of intrinsic value across human and non-human nature, or 'biospherical egalitarianism'.

Origin of term

The ecocentric ethic was conceived by Aldo Leopold[5] and recognizes that all species, including humans, are the product of a long evolutionary process and are inter-related in their life processes.[6] The writings of Aldo Leopold and his idea of the land ethic and good environmental management are a key element of this philosophy. Ecocentrism focuses on the biotic community as a whole and strives to maintain ecosystem composition and ecological processes.[7] The term also finds expression in the first principle of the deep ecology movement, as formulated by Arne Næss and George Sessions in 1984,[8] which points out that anthropocentrism, which considers humans as the center of the universe and the pinnacle of all creation, is a difficult opponent for ecocentrism.[9]

Background

Environmental thought and the various branches of the environmental movement are often classified into two intellectual camps: those that are considered anthropocentric, or "human-centred," in orientation and those considered biocentric, or "life-centred". This division has been described in other terminology as "shallow" ecology versus "deep" ecology and as "technocentrism" versus "ecocentrism". Ecocentrism can be seen as one stream of thought within environmentalism, the political and ethical movement that seeks to protect and improve the quality of the natural environment through changes to environmentally harmful human activities by adopting environmentally benign forms of political, economic, and social organization and through a reassessment of humanity's relationship with nature. In various ways, environmentalism claims that non-human organisms and the natural environment as a whole deserve consideration when appraising the morality of political, economic, and social policies.[10]

Relationship to other similar philosophies

Anthropocentrism

Ecocentrism is taken by its proponents to constitute a radical challenge to long-standing and deeply rooted anthropocentric attitudes in Western culture, science, and politics. Anthropocentrism is alleged to leave the case for the protection of non-human nature subject to the demands of human utility, and thus never more than contingent on the demands of human welfare. An ecocentric ethic, by contrast, is believed to be necessary in order to develop a non-contingent basis for protecting the natural world. Critics of ecocentrism have argued that it opens the doors to an anti-humanist morality that risks sacrificing human well-being for the sake of an ill-defined ‘greater good’.[11] Deep ecologist Arne Naess has identified anthropocentrism as a root cause of the ecological crisis, human overpopulation, and the extinctions of many non-human species.[12] Others point to the gradual historical realization that humans are not the centre of all things, that "A few hundred years ago, with some reluctance, Western people admitted that the planets, Sun and stars did not circle around their abode. In short, our thoughts and concepts though irreducibly anthropomorphic need not be anthropocentric."[13]

Industrocentrism

Industrocentrism is an ideology that goes hand in hand with today's industrial neoliberal capitalist agenda. It sees all things on earth as resources to be utilized by humans or to be commodified. This view is the opposite of anthropocentrism and ecocentrism. It negatively affects humans, nonhumans, and the environment in the long run in that it only focuses on short term economic gratification.[14]

Technocentrism

Ecocentrism is also contrasted with technocentrism (meaning values centred on technology) as two opposing perspectives on attitudes towards human technology and its ability to affect, control and even protect the environment. Ecocentrics, including "deep green" ecologists, see themselves as being subject to nature, rather than in control of it. They lack faith in modern technology and the bureaucracy attached to it. Ecocentrics will argue that the natural world should be respected for its processes and products, and that low impact technology and self-reliance is more desirable than technological control of nature.[15] Technocentrics, including imperialists, have absolute faith in technology and industry and firmly believe that humans have control over nature. Although technocentrics may accept that environmental problems do exist, they do not see them as problems to be solved by a reduction in industry. Rather, environmental problems are seen as problems to be solved using science. Indeed, technocentrics see that the way forward for developed and developing countries and the solutions to our environmental problems today lie in scientific and technological advancement.[15]

Biocentrism

The distinction between biocentrism and ecocentrism is ill-defined. Ecocentrism recognizes Earth's interactive living and non-living systems rather than just the Earth's organisms (biocentrism) as central in importance.[16] The term has been used by those advocating "left biocentrism", combining deep ecology with an "anti-industrial and anti-capitalist" position (David Orton et al.).

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...