Climate sensitivity

Frequency distribution of climate sensitivity, based on model simulations.[1] Few of the simulations result in less than 2 °C of warming—near the low end of estimates by the Intergovernmental Panel on Climate Change (IPCC).[1] Some simulations result in significantly more than 4 °C, the high end of the IPCC estimates.[1] This pattern (statisticians call it a "right-skewed distribution") suggests that if carbon dioxide concentrations double, the probability of very large increases in temperature is greater than the probability of very small increases.[1]

Climate sensitivity is the equilibrium temperature change in response to changes in radiative forcing.[2] Climate sensitivity therefore depends on the initial climate state, but it can potentially be inferred accurately from precise palaeoclimate data. Slow climate feedbacks, especially changes of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered.[3]

Although climate sensitivity is usually used in the context of radiative forcing by carbon dioxide (CO2), it is thought of as a general property of the climate system: the change in surface air temperature (ΔTs) following a unit change in radiative forcing (RF), and thus is expressed in units of °C/(W/m2). For this to be useful, the measure must be independent of the nature of the forcing (e.g. from greenhouse gases or solar variation); to first order this is indeed found to be so[citation needed].

The climate sensitivity specifically due to CO2 is often expressed as the temperature change in °C associated with a doubling of the concentration of carbon dioxide in Earth's atmosphere.

For coupled atmosphere-ocean global climate models (e.g. CMIP5) the climate sensitivity is an emergent property: it is not a model parameter, but rather a result of a combination of model physics and parameters. By contrast, simpler energy-balance models may have climate sensitivity as an explicit parameter.

ΔTs = λ · RF

This equation relates radiative forcing (RF) to the change in global mean surface temperature (ΔTs) via the climate sensitivity parameter λ.
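
As a toy illustration, the sketch below evaluates this relation in Python with the approximate values quoted later in this article (λ ≈ 0.8 K per W/m2 and a doubled-CO2 forcing of 3.7 W/m2); it is a unit-conversion exercise, not a climate model.

    # Equilibrium warming from the linear relation: delta_T = lambda * RF
    lam = 0.8        # climate sensitivity parameter, K per (W/m^2)
    rf = 3.7         # radiative forcing from a doubling of CO2, W/m^2
    print(lam * rf)  # -> about 3 K of equilibrium warming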

It is also possible to estimate climate sensitivity from observations; however, this is difficult due to uncertainties in the forcing and temperature histories.

Equilibrium and transient climate sensitivity

The equilibrium climate sensitivity (ECS) refers to the equilibrium change in global mean near-surface air temperature that would result from a sustained doubling of the atmospheric (equivalent) carbon dioxide concentration (ΔTx2). As estimated by the IPCC Fifth Assessment Report (AR5), "there is high confidence that ECS is extremely unlikely less than 1°C and medium confidence that the ECS is likely between 1.5°C and 4.5°C and very unlikely greater than 6°C."[4] This is a change from the IPCC Fourth Assessment Report (AR4), which said the ECS was likely to be in the range 2 to 4.5 °C with a best estimate of about 3 °C, and very unlikely to be less than 1.5 °C; values substantially higher than 4.5 °C could not be excluded, but agreement of models with observations was not as good for those values.[5] The IPCC Third Assessment Report (TAR) said it was "likely to be in the range of 1.5 to 4.5 °C".[6] Other estimates of climate sensitivity are discussed below.

A model estimate of equilibrium sensitivity thus requires a very long model integration; fully equilibrating ocean temperatures requires integrations of thousands of model years. A measure requiring shorter integrations is the transient climate response (TCR), which is defined as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year.[7] The transient response is lower than the equilibrium sensitivity, due to the "inertia" of ocean heat uptake.

Over the 50–100 year timescale, the climate response to forcing is likely to follow the TCR; for considerations of climate stabilization, the ECS is more useful.

An estimate of the equilibrium climate sensitivity may also be made by combining the transient response with the known properties of the ocean reservoirs and the surface heat fluxes; this is the effective climate sensitivity, which "may vary with forcing history and climate state".[8][9]

A less commonly used concept, the Earth system sensitivity (ESS), includes the effects of slower feedbacks, such as the albedo change from melting the large ice sheets that covered much of the northern hemisphere during the last glacial maximum. These extra feedbacks make the ESS larger than the ECS, possibly twice as large, but they also mean that it may well not apply to current conditions.[10]

Sensitivity to carbon dioxide forcing

Climate sensitivity is often evaluated in terms of the change in equilibrium temperature due to radiative forcing from the greenhouse effect. According to the Arrhenius relation,[11] the radiative forcing (and hence the change in temperature) is proportional to the logarithm of the concentration of infrared-absorbing gases in the atmosphere. Thus, the sensitivity of temperature to gases in the atmosphere (most notably carbon dioxide) is often expressed in terms of the change in temperature per doubling of the concentration of the gas.
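
To make the logarithmic dependence concrete, a widely used simplified expression for CO2 forcing is RF = 5.35 ln(C/C0) W/m2 (from Myhre et al., 1998; the coefficient is an outside assumption, not taken from this article). The short sketch below shows that each doubling then contributes the same forcing:

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        # Simplified CO2 radiative forcing in W/m^2: RF = 5.35 * ln(C/C0),
        # with 280 ppm as the usual pre-industrial reference concentration.
        return 5.35 * math.log(c_ppm / c0_ppm)

    print(co2_forcing(560))    # one doubling  -> ~3.7 W/m^2
    print(co2_forcing(1120))   # two doublings -> ~7.4 W/m^2, equal steps per doubling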

Radiative forcing due to doubled CO2

CO2 climate sensitivity has a component directly due to radiative forcing by CO2, and a further contribution arising from climate feedbacks, both positive and negative. "Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming, which is easy to calculate and is undisputed. The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback";[12] addition of these feedbacks leads to a value of the sensitivity to CO2 doubling of approximately 3 °C ± 1.5 °C, which corresponds to a value of λ of 0.8 K/(W/m2).

In the earlier 1979 NAS report[13] (p. 7), the radiative forcing due to doubled CO2 is estimated to be 4 W/m2, as calculated (for example) in Ramanathan et al. (1979).[14] In 2001 the IPCC adopted the revised value of 3.7 W/m2, the difference attributed to a "stratospheric temperature adjustment".[15] More recently an intercomparison of radiative transfer codes (Collins et al., 2006)[16] showed discrepancies among climate models and between climate models and more exact radiation codes in the forcing attributed to doubled CO2 even in cloud-free sky; presumably the differences would be even greater if forcing were evaluated in the presence of clouds because of differences in the treatment of clouds in different models. Undoubtedly the difference in forcing attributed to doubled CO2 in different climate models contributes to differences in apparent sensitivities of the models, although this effect is thought to be small relative to the intrinsic differences in sensitivities of the models themselves.[17]

Consensus estimates

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney[13] estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C; the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C.
"According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since..."[18]

Chapter 4 of the "Charney report" compares the predictions of the models: "We conclude that the predictions ... are basically consistent and mutually supporting. The differences in model results are relatively small and may be accounted for by differences in model characteristics and simplifying assumptions."[13]

In 2008 climatologist Stefan Rahmstorf wrote, regarding the Charney report's original range of uncertainty: "At that time, this range was on very shaky ground. Since then, many vastly improved models have been developed by a number of climate research centers around the world. Current state-of-the-art climate models span a range of 2.6–4.1 °C, most clustering around 3 °C."[12]

Intergovernmental Panel on Climate Change

The 1990 IPCC First Assessment Report estimated that equilibrium climate sensitivity to CO2 doubling lay between 1.5 and 4.5 °C, with a "best guess in the light of current knowledge" of 2.5 °C.[19] This used models with strongly simplified representations of the ocean dynamics. The 1992 IPCC supplementary report, which used full ocean GCMs, nonetheless saw "no compelling reason to warrant changing" this estimate,[20] and the IPCC Second Assessment Report found that "No strong reasons have emerged to change" these estimates,[21] with much of the uncertainty attributed to cloud processes. As noted above, the IPCC TAR retained the likely range 1.5 to 4.5 °C.[6]

Authors of the IPCC Fourth Assessment Report (Meehl et al., 2007)[22] stated that confidence in estimates of equilibrium climate sensitivity had increased substantially since the TAR. AR4's assessment was based on a combination of several independent lines of evidence, including observed climate change and the strength of known "feedbacks" simulated in general circulation models.[23] IPCC authors concluded that the global mean equilibrium warming for doubling CO2 (to a concentration of 560 ppmv), or equilibrium climate sensitivity, very likely is greater than 1.5 °C (2.7 °F) and likely to lie in the range 2 to 4.5 °C (3.6 to 8.1 °F), with a most likely value of about 3 °C (5.4 °F). For fundamental physical reasons, as well as data limitations, the IPCC states a climate sensitivity higher than 4.5 °C (8.1 °F) cannot be ruled out, but that agreement for these values with observations and "proxy" climate data is generally worse compared to values in the 2 to 4.5 °C (3.6 to 8.1 °F) range.[23]

The TAR uses the word "likely" in a qualitative sense to describe the likelihood of the 1.5 to 4.5 °C range being correct.[22] AR4, however, quantifies the probable range of climate sensitivity estimates:[24]
  • 2 to 4.5 °C is "likely", meaning a greater than 66% chance of being correct
  • less than 1.5 °C is "very unlikely", meaning a less than 10% chance
The IPCC Fifth Assessment Report stated: "Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence)."

These are Bayesian probabilities, which are based on an expert assessment of the available evidence.[24]

Calculations of CO2 sensitivity from observational data

Sample calculation using industrial-age data

Rahmstorf (2008)[12] provides an informal example of how climate sensitivity might be estimated empirically, from which the following is modified. Denote the sensitivity, i.e. the equilibrium increase in global mean temperature including the effects of feedbacks due to a sustained forcing by doubled CO2 (taken as 3.7 W/m2), as x °C. If Earth were to experience an equilibrium temperature change of ΔT (°C) due to a sustained forcing of ΔF (W/m2), then one might say that x/ΔT = (3.7 W/m2)/ΔF, i.e. that x = ΔT * (3.7 W/m2)/ΔF. The global temperature increase since the beginning of the industrial period (taken as 1750) is about 0.8 °C, and the radiative forcing due to CO2 and other long-lived greenhouse gases (mainly methane, nitrous oxide, and chlorofluorocarbons) emitted since that time is about 2.6 W/m2. Neglecting other forcings and considering the temperature increase to be an equilibrium increase would lead to a sensitivity of about 1.1 °C. However, ΔF also contains contributions due to solar activity (+0.3 W/m2), aerosols (-1 W/m2), ozone (0.3 W/m2) and other lesser influences, bringing the total forcing over the industrial period to 1.6 W/m2 according to the best estimate of the IPCC AR4, albeit with substantial uncertainty. Additionally, the fact that the climate system is not at equilibrium must be accounted for; this is done by subtracting the planetary heat uptake rate H from the forcing, i.e. x = ΔT * (3.7 W/m2)/(ΔF − H). Taking the planetary heat uptake rate as the rate of ocean heat uptake, estimated by the IPCC AR4 as 0.2 W/m2, yields a value for x of 2.1 °C. (All numbers are approximate and quite uncertain.)
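
A minimal sketch of this back-of-the-envelope estimate, using only the approximate and quite uncertain values given above:

    # Equilibrium sensitivity x = dT * F_2xCO2 / (dF - H), industrial-era data
    f_2xco2 = 3.7                            # forcing from doubled CO2 (W/m^2)
    d_temp = 0.8                             # warming since ~1750 (degrees C)

    x_ghg_only = d_temp * f_2xco2 / 2.6      # GHG forcing alone, equilibrium assumed
    x_all = d_temp * f_2xco2 / (1.6 - 0.2)   # total forcing minus ocean heat uptake
    print(round(x_ghg_only, 1), round(x_all, 1))   # -> 1.1 2.1 (degrees C)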

Sample calculation using ice-age data

In 2008, Farley wrote: "... examine the change in temperature and solar forcing between glaciation (ice age) and interglacial (no ice age) periods. The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7 K(W/m2)−1. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C."[25]

Based on analysis of uncertainties in total forcing, in Antarctic cooling, and in the ratio of global to Antarctic cooling of the last glacial maximum relative to the present, Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach.[26]

A lower figure was calculated in a 2011 Science paper by Schmittner et al., who combined temperature reconstructions of the Last Glacial Maximum with climate model simulations to estimate warming from a doubling of atmospheric carbon dioxide at a median of 2.3 °C, with an uncertainty range of 1.7–2.6 °C (66% probability), less than the earlier estimate of 2 to 4.5 °C as the 66% probability range. Schmittner et al. said their "results imply less probability of extreme climatic change than previously thought." Their work suggests that climate sensitivities >6 °C "cannot be reconciled with paleoclimatic and geologic evidence, and hence should be assigned near-zero probability."[27][28]

Other experimental estimates

Idso (1998)[29] calculated, based on eight natural experiments, a λ of 0.1 °C/(Wm−2), resulting in a climate sensitivity of only 0.4 °C for a doubling of the concentration of CO2 in the atmosphere.

Andronova and Schlesinger (2001) found that the climate sensitivity could lie between 1 and 10 °C, with a 54 percent likelihood that it lies outside the IPCC range.[30] The exact range depends on which factors are most important during the instrumental period: "At present, the most likely scenario is one that includes anthropogenic sulfate aerosol forcing but not solar variation. Although the value of the climate sensitivity in that case is most uncertain, there is a 70 percent chance that it exceeds the maximum IPCC value. This is not good news," said Schlesinger.

Forest et al. (2002),[31] using patterns of change and the MIT EMIC, estimated a 95% confidence interval of 1.4–7.7 °C for the climate sensitivity, and a 30% probability that sensitivity was outside the 1.5 to 4.5 °C range.

Gregory et al. (2002)[32] estimated a lower bound of 1.6 °C by estimating the change in Earth's radiation budget and comparing it to the global warming observed over the 20th century.

Shaviv (2005)[33] carried out a similar analysis for six different time scales, ranging from the 11-yr solar cycle to climate variations over geological time scales. He found a typical sensitivity of 0.54±0.12 K/(W m−2) or 2.1 °C (ranging between 1.6 °C and 2.5 °C at 99% confidence) if there is no cosmic-ray climate connection, or a typical sensitivity of 0.35±0.09 K/(W m−2) or 1.3 °C (between 1.0 °C and 1.7 °C at 99% confidence) if the cosmic-ray climate link is real. (Note that Shaviv quotes a radiative forcing equivalent of 3.8 Wm−2 for CO2 doubling, i.e. ΔTx2 = 3.8 Wm−2 × λ.)

Frame et al. (2005)[34] noted that the range of the confidence limits depends on the nature of the prior assumptions made.

Annan and Hargreaves (2006)[35] presented an estimate that resulted from combining prior estimates based on analyses of paleoclimate, responses to volcanic eruptions, and the temperature change in response to forcings over the twentieth century. They also introduced a triad notation (L, C, H) to convey the probability distribution function (pdf) of the sensitivity, where the central value C indicates the maximum likelihood estimate in degrees Celsius and the outer values L and H represent the limits of the 95% confidence interval for a pdf, or 95% of the area under the curve for a likelihood function. In this notation their estimate of sensitivity was (1.7, 2.9, 4.9) °C.

Forster and Gregory (2006)[36] presented a new independent estimate based on the slope of a plot of calculated greenhouse gas forcing minus top-of-atmosphere energy imbalance, as measured by satellite borne radiometers, versus global mean surface temperature. In the triad notation of Annan and Hargreaves their estimate of sensitivity was (1.0, 1.6, 4.1) °C.

Royer et al. (2007)[37] determined climate sensitivity within a major part of the Phanerozoic. The range of values—1.5 °C minimum, 2.8 °C best estimate, and 6.2 °C maximum—is, given various uncertainties, consistent with sensitivities of current climate models and with other determinations.[38]

Lindzen and Choi (2011) find the equilibrium climate sensitivity to be 0.7 °C, implying a negative cloud feedback.[39]

Ring et al. (2012) find the equilibrium climate sensitivity to be in the range 1.45 °C to 2.01 °C, depending on the data set used as input to the model simulations.[40]

Skeie et al. (2013) use a Bayesian analysis of ocean heat content (OHC) data and conclude that the equilibrium climate sensitivity is 1.8 °C, far lower than the previous best estimate relied upon by the IPCC.[41]

Aldrin et al. (2012) use a simple deterministic climate model, modelling yearly hemispheric surface temperature and global ocean heat content as a function of historical radiative forcing, and combine it with an empirical stochastic model. Using a Bayesian framework, they estimate the equilibrium climate sensitivity to be 1.98 °C.[42]

Lewis (2013), using a Bayesian framework, estimates the equilibrium climate sensitivity to be 1.6 K, with a likely range (90% confidence level) of 1.2–2.2 K.[43]

ScienceDaily reported on a study by Fasullo and Trenberth (2012),[44] who tested model estimates of climate sensitivity based on their ability to reproduce observed relative humidity in the tropics and subtropics. The best performing models tended to project relatively high climate sensitivities, of around 4 °C.[44]

Previdi et al. (2013) reviewed the 2×CO2 Earth system sensitivity and concluded that it is higher, ∼4–6 °C, if the ice-sheet and vegetation-albedo feedbacks are included in addition to the fast feedbacks, and higher still if climate–GHG feedbacks are also included.[45]

Lewis and Curry (2014) estimated that equilibrium climate sensitivity was 1.64 °C, based on the 1750–2011 time series and "the uncertainty ranges for forcing components" in the IPCC's Fifth Assessment Report.[46]

Literature reviews

A literature review by Knutti and Hegerl (2008)[47] concluded that "various observations favour a climate sensitivity value of about 3 °C, with a likely range of about 2-4.5 °C. However, the physics of the response and uncertainties in forcing lead to difficulties in ruling out higher values."

Radiative forcing functions

A number of different inputs can give rise to radiative forcing. In addition to the downwelling radiation due to the greenhouse effect, the IPCC First Scientific Assessment Report listed solar radiation variability due to orbital changes, variability due to changes in solar irradiance, direct aerosol effects (e.g., changes in albedo due to cloud cover), indirect aerosol effects, and surface characteristics.[48]

Sensitivity to solar forcing

Solar irradiance is about 0.9 W/m2 brighter during solar maximum than during solar minimum. Analysis by Camp and Tung shows that this correlates with a variation of ±0.1 °C in measured average global temperature between the peak and minimum of the 11-year solar cycle.[49] From these data (incorporating the Earth's albedo and the fact that the solar absorption cross-section is 1/4 of the surface area of the Earth), Tung, Zhou and Camp (2008) derive a transient sensitivity value of 0.69 to 0.97 °C/(W/m2).[50] This would correspond to a transient climate sensitivity to carbon dioxide doubling of 2.5 to 3.6 K, similar to the range of the current scientific consensus. However, they note that this is the transient response to a forcing with an 11-year cycle; due to lag effects, they estimate the equilibrium response to forcing would be about 1.5 times higher.
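
As a rough arithmetic check of that conversion (a sketch using only the numbers quoted above, with 3.7 W/m2 taken as the doubled-CO2 forcing from earlier in this article):

    # Translate the transient solar-cycle sensitivity into a CO2-doubling response
    f_2xco2 = 3.7                    # doubled-CO2 forcing (W/m^2)
    for lam in (0.69, 0.97):         # transient sensitivity, degrees C per (W/m^2)
        transient = lam * f_2xco2    # -> roughly the 2.5 to 3.6 K range quoted above
        print(f"transient {transient:.1f} K, equilibrium ~{1.5 * transient:.1f} K")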

Gaia hypothesis


The study of planetary habitability is partly based upon extrapolation from knowledge of the Earth's conditions, as the Earth is the only planet currently known to harbour life

The Gaia hypothesis, also known as Gaia theory or Gaia principle, proposes that organisms interact with their inorganic surroundings on Earth to form a self-regulating, complex system that contributes to maintaining the conditions for life on the planet. Topics of interest include how the biosphere and the evolution of life forms affect the stability of global temperature, ocean salinity, oxygen in the atmosphere and other environmental variables that affect the habitability of Earth.

The hypothesis was formulated by the chemist James Lovelock[1] and co-developed by the microbiologist Lynn Margulis in the 1970s.[2] The hypothesis was initially criticized for being teleological and contradicting principles of natural selection, but later refinements resulted in ideas framed by the Gaia hypothesis being used in fields such as Earth system science, biogeochemistry, systems ecology, and the emerging subject of geophysiology.[3][4][5] Nevertheless, the Gaia hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal largely for his work on the Gaia theory.[6]

Introduction

Gaian hypotheses suggest that organisms co-evolve with their environment: that is, they "influence their abiotic environment, and that environment in turn influences the biota by Darwinian process". Lovelock (1995) gave evidence of this in his second book, showing the evolution from the world of the early thermoacidophilic and methanogenic bacteria towards the oxygen-enriched atmosphere of today that supports more complex life.

The scientifically accepted form of the hypothesis has been called "influential Gaia". It states that the biota influence certain aspects of the abiotic world, e.g. temperature and atmosphere, and that the evolution of life and its environment may affect each other. An example is how the activity of photosynthetic bacteria during Precambrian times completely modified the Earth's atmosphere, turning it aerobic and thereby supporting the evolution of life (in particular eukaryotic life).

Biologists and Earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions may have counterbalancing effects on environmental change. Opponents of this view sometimes reference examples of events that resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one.

Less accepted versions of the hypothesis claim that changes in the biosphere are brought about through the coordination of living organisms and maintain those conditions through homeostasis. In some versions of Gaia philosophy, all lifeforms are considered part of one single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms. However, the Earth as a unit does not match the generally accepted biological criteria for life itself: for example, there is no evidence to suggest that "Gaia" has reproduced.

Details

The Gaia theory posits that the Earth is a self-regulating complex system involving the biosphere, the atmosphere, the hydrosphere and the pedosphere, tightly coupled as an evolving system. The theory maintains that this system as a whole, called Gaia, seeks a physical and chemical environment optimal for contemporary life.[7]

Gaia evolves through a cybernetic feedback system operated unconsciously by the biota, leading to broad stabilization of the conditions of habitability in full homeostasis. Many processes at the Earth's surface essential for the conditions of life depend on the interaction of living forms, especially microorganisms, with inorganic elements. These processes establish a global control system that regulates Earth's surface temperature, atmosphere composition and ocean salinity, powered by the global thermodynamic disequilibrium state of the Earth system.[8]

The existence of a planetary homeostasis influenced by living forms had been observed previously in the field of biogeochemistry, and it is also being investigated in other fields such as Earth system science. The originality of the Gaia theory lies in the claim that such homeostatic balance is actively pursued with the goal of keeping the optimal conditions for life, even when terrestrial or external events threaten them.[9]


Regulation of the salinity in the oceans

Ocean salinity has been constant at about 3.4% for a very long time. Salinity stability in oceanic environments is important, as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constant ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested[10] that salinity may also be strongly influenced by seawater circulation through hot basaltic rocks, emerging as hot-water vents on mid-ocean ridges. However, the composition of seawater is far from equilibrium, and it is difficult to explain this fact without the influence of organic processes. One suggested explanation lies in the formation of salt plains throughout Earth's history. It is hypothesized that these are created by bacterial colonies that fix ions and heavy metals during their life processes.[citation needed]

Regulation of oxygen in the atmosphere


Levels of gases in the atmosphere in 420,000 years of ice core data from Vostok, Antarctica research station. Current period is at the left.
The atmospheric composition remains fairly constant, providing the conditions that contemporary life has adapted to. All the atmospheric gases other than the noble gases are either made by organisms or processed by them. The Gaia theory states that the Earth's atmospheric composition is kept at a dynamically steady state by the presence of life.[11]
The stability of Earth's atmosphere is not a consequence of chemical equilibrium, as it is on planets without life. Oxygen is the second most reactive electronegative element after fluorine, and should combine with gases and minerals of the Earth's atmosphere and crust. Traces of methane (at an amount of 100,000 tonnes produced per year)[12] should not exist, as methane is combustible in an oxygen atmosphere.

Dry air in the atmosphere of Earth contains roughly (by volume) 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide, and small amounts of other gases including methane. Lovelock originally speculated that concentrations of oxygen above about 25% would increase the frequency of wildfires and conflagration of forests. Recent work on the findings of fire-caused charcoal in Carboniferous and Cretaceous coal measures, in geologic periods when O2 did exceed 25%, has supported Lovelock's contention.[citation needed]

Regulation of the global surface temperature


Rob Rohde's palaeotemperature graphs
Since life started on Earth, the energy provided by the Sun has increased by 25% to 30%;[13] however, the surface temperature of the planet has remained within the levels of habitability, reaching quite regular low and high margins. Lovelock has also hypothesised that methanogens produced elevated levels of methane in the early atmosphere, giving a view similar to that found in petrochemical smog, similar in some respects to the atmosphere on Titan.[14] This, he suggests, tended to screen out ultraviolet light until the formation of the ozone layer, maintaining a degree of homeostasis. However, research on Snowball Earth[15] has suggested that "oxygen shocks" and reduced methane levels led, during the Huronian, Sturtian and Marinoan/Varanger ice ages, to a world that very nearly became a solid "snowball". These epochs are evidence against the ability of the biosphere to fully self-regulate.
Processing of the greenhouse gas CO2, explained below, plays a critical role in the maintenance of Earth's temperature within the limits of habitability.

The CLAW hypothesis, inspired by the Gaia theory, proposes a feedback loop that operates between ocean ecosystems and the Earth's climate.[16] The hypothesis specifically proposes that particular phytoplankton that produce dimethyl sulfide are responsive to variations in climate forcing, and that these responses lead to a negative feedback loop that acts to stabilise the temperature of the Earth's atmosphere.

Currently the increase in human population and the environmental impact of its activities, such as the multiplication of greenhouse gases, may cause negative feedbacks in the environment to become positive feedbacks. Lovelock has stated that this could bring an extremely accelerated global warming,[17] but he has since stated the effects will likely occur more slowly.[18]

Daisyworld simulations


Plots from a standard black & white Daisyworld simulation
Main article: Daisyworld

James Lovelock and Andrew Watson developed the mathematical model Daisyworld, in which temperature regulation arises from a simple ecosystem consisting of two species whose activity varies in response to the planet's environment. The model demonstrates that beneficial feedback mechanisms can emerge in this "toy world" containing only self-interested organisms rather than through classic group selection mechanisms.[19]

Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies. The colour of the daisies influences the albedo of the planet, such that black daisies absorb light and warm the planet, while white daisies reflect light and cool the planet. As the model runs, the output of the "sun" increases, meaning that the surface temperature of an uninhabited "gray" planet would steadily rise. In contrast, on Daisyworld competition between the daisies (based on temperature effects on growth rates) leads to a shifting balance of daisy populations that tends to favour a planetary temperature close to the optimum for daisy growth.
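
For readers who want to experiment, the following is a minimal Python sketch of the two-daisy model. The parameter values (the 917 W/m2 solar flux constant, the 2.06×10^9 K^4 heat-transfer parameter, the parabolic growth curve and the 0.3 death rate) follow the commonly published Watson and Lovelock (1983) formulation; this is an illustrative toy, not the authors' original code.

    import numpy as np

    SIGMA = 5.67e-8             # Stefan-Boltzmann constant (W m^-2 K^-4)
    FLUX = 917.0                # solar flux constant (W m^-2)
    A_BARE, A_WHITE, A_BLACK = 0.5, 0.75, 0.25     # albedos
    Q = 2.06e9                  # heat-transfer parameter (K^4)
    T_OPT, K = 295.5, 3.265e-3  # optimum growth temperature (K), curve width
    GAMMA = 0.3                 # daisy death rate

    def growth(t_local):
        # Parabolic growth response; zero outside roughly 5 to 40 degrees C.
        return max(0.0, 1.0 - K * (T_OPT - t_local) ** 2)

    def equilibrate(lum, a_w=0.01, a_b=0.01, dt=0.05, steps=4000):
        # Integrate the daisy-cover equations to an approximate steady state.
        for _ in range(steps):
            bare = max(0.0, 1.0 - a_w - a_b)
            albedo = bare * A_BARE + a_w * A_WHITE + a_b * A_BLACK
            te4 = FLUX * lum * (1.0 - albedo) / SIGMA      # emission temperature^4
            t_w = (Q * (albedo - A_WHITE) + te4) ** 0.25   # white daisies run cooler,
            t_b = (Q * (albedo - A_BLACK) + te4) ** 0.25   # black daisies run warmer
            a_w += dt * a_w * (bare * growth(t_w) - GAMMA)
            a_b += dt * a_b * (bare * growth(t_b) - GAMMA)
            a_w, a_b = max(a_w, 0.01), max(a_b, 0.01)      # keep a small seed stock
        return te4 ** 0.25, a_w, a_b

    # Ramp the "sun" up, as in the standard experiment, and watch the regulation:
    for lum in np.arange(0.6, 1.65, 0.05):
        t_surf, a_w, a_b = equilibrate(lum)
        print(f"L={lum:.2f}  T={t_surf - 273.15:5.1f} C  white={a_w:.2f}  black={a_b:.2f}")

Run over the luminosity ramp, black daisies dominate and warm the planet at low solar output while white daisies take over and cool it at high output, holding the surface temperature near the daisies' optimum across a wide range of luminosities.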

It has been suggested that the results were predictable because Lovelock and Watson selected examples that produced the responses they desired.[20]

Processing of CO2

Gaia scientists see the participation of living organisms in the carbon cycle as one of the complex processes that maintain conditions suitable for life. The only significant natural source of atmospheric carbon dioxide (CO2) is volcanic activity, while the only significant removal is through the precipitation of carbonate rocks.[21] Carbon precipitation, solution and fixation are influenced by the bacteria and plant roots in soils, where they improve gaseous circulation, or in coral reefs, where calcium carbonate is deposited as a solid on the sea floor. Calcium carbonate is used by living organisms to manufacture carbonaceous tests and shells. Once dead, the organisms' shells fall to the bottom of the oceans, where they generate deposits of chalk and limestone.
One of these organisms is Emiliania huxleyi, an abundant coccolithophore alga which also has a role in the formation of clouds.[22] Excess CO2 is compensated by an increase in coccolithophore life, increasing the amount of CO2 locked in the ocean floor. Coccolithophores increase the cloud cover, hence control the surface temperature, help cool the whole planet and favor the precipitation necessary for terrestrial plants. Lately the atmospheric CO2 concentration has increased and there is some evidence that concentrations of ocean algal blooms are also increasing.[23]

Lichens and other organisms accelerate the weathering of rocks at the surface, while the decomposition of rocks also happens faster in the soil, thanks to the activity of roots, fungi, bacteria and subterranean animals. The flow of carbon dioxide from the atmosphere to the soil is therefore regulated with the help of living beings. When CO2 levels rise in the atmosphere, the temperature increases and plants grow. This growth brings higher consumption of CO2 by the plants, which process it into the soil, removing it from the atmosphere.

History

Precedents


"Earthrise" taken on December 24, 1968

The idea of the Earth as an integrated whole, a living being, has a long tradition. The mythical Gaia was the primal Greek goddess personifying the Earth, the Greek version of "Mother Nature", or the Earth Mother. James Lovelock gave this name to his hypothesis after a suggestion from the novelist William Golding, who was living in the same village as Lovelock at the time (Bowerchalke, Wiltshire, UK). Golding's advice was based on Gea, an alternative spelling for the name of the Greek goddess, which is used as prefix in geology, geophysics and geochemistry.[24] Golding later made reference to Gaia in his Nobel prize acceptance speech.

In the eighteenth century, as geology consolidated as a modern science, James Hutton maintained that geological and biological processes are interlinked.[25] Later, the naturalist and explorer Alexander von Humboldt recognized the coevolution of living organisms, climate, and Earth's crust.[25] In the twentieth century, Vladimir Vernadsky formulated a theory of Earth's development that is now one of the foundations of ecology. The Ukrainian geochemist was one of the first scientists to recognize that the oxygen, nitrogen, and carbon dioxide in the Earth's atmosphere result from biological processes. During the 1920s he published works arguing that living organisms could reshape the planet as surely as any physical force. Vernadsky was a pioneer of the scientific bases for the environmental sciences.[26] His visionary pronouncements were not widely accepted in the West, and some decades later the Gaia hypothesis received the same type of initial resistance from the scientific community.

Also early in the 20th century, Aldo Leopold, a pioneer in the development of modern environmental ethics and in the movement for wilderness conservation, suggested a living Earth in his biocentric or holistic ethics regarding land.
It is at least not impossible to regard the earth's parts—soil, mountains, rivers, atmosphere, etc.—as organs or parts of organs of a coordinated whole, each part with its definite function. And if we could see this whole, as a whole, through a great period of time, we might perceive not only organs with coordinated functions, but possibly also that process of consumption and replacement which in biology we call metabolism, or growth. In such case we would have all the visible attributes of a living thing, which we do not realize to be such because it is too big, and its life processes too slow.
— Stephan Harding, Animate Earth.[27]
Another influence for the Gaia theory and the environmental movement in general came as a side effect of the Space Race between the Soviet Union and the United States of America. During the 1960s, the first humans in space could see how the Earth looked as a whole. The photograph Earthrise taken by astronaut William Anders in 1968 during the Apollo 8 mission became an early symbol for the global ecology movement.[28]

Formulation of the hypothesis


James Lovelock started defining the idea of a self-regulating Earth controlled by the community of living organisms in September 1965, while working at the Jet Propulsion Laboratory in California on methods of detecting life on Mars.[29][30] The first paper to mention it was "Planetary Atmospheres: Compositional and other Changes Associated with the Presence of Life", co-authored with C.E. Giffin.[31] A main concept was that life could be detected on a planetary scale by the chemical composition of the atmosphere. According to the data gathered by the Pic du Midi observatory, planets like Mars or Venus had atmospheres in chemical equilibrium. This difference from the Earth's atmosphere was considered to be proof that there was no life on these planets.

Lovelock formulated the Gaia hypothesis in journal articles in 1972[1] and 1974,[2] followed by a popularizing 1979 book, Gaia: A New Look at Life on Earth. An article in the New Scientist of February 6, 1975,[32] and a popular book-length version of the hypothesis, published in 1979 as The Quest for Gaia, began to attract scientific and critical attention.

Lovelock initially called it the Earth feedback hypothesis;[33] it was a way to explain the fact that combinations of chemicals including oxygen and methane persist in stable concentrations in the atmosphere of the Earth. Lovelock suggested detecting such combinations in other planets' atmospheres as a relatively reliable and cheap way to detect life.

Later, other relationships, such as sea creatures producing sulfur and iodine in approximately the same quantities as required by land creatures, emerged and helped bolster the theory.[34]

In 1971 microbiologist Lynn Margulis joined Lovelock in the effort of fleshing out the initial hypothesis into scientifically testable concepts, contributing her knowledge about how microbes affect the atmosphere and the different layers in the surface of the planet.[3] The American biologist had also drawn criticism from the scientific community with her theory on the origin of eukaryotic organelles and her contributions to the endosymbiotic theory, which is now accepted. Margulis dedicated the last of eight chapters in her book, The Symbiotic Planet, to Gaia. However, she objected to the widespread personification of Gaia and stressed that Gaia is "not an organism", but "an emergent property of interaction among organisms". She defined Gaia as "the series of interacting ecosystems that compose a single huge ecosystem at the Earth's surface. Period". The book's most memorable "slogan" was actually quipped by a student of Margulis': "Gaia is just symbiosis as seen from space".

James Lovelock called his first proposal the Gaia hypothesis but has also used the term Gaia theory. Lovelock states that the initial formulation was based on observation, but still lacked a scientific explanation. The Gaia hypothesis has since been supported by a number of scientific experiments[35] and provided a number of useful predictions.[36] In fact, wider research proved the original hypothesis wrong, in the sense that it is not life alone but the whole Earth system that does the regulating.[7]

First Gaia conference

In 1985, the first public symposium on the Gaia hypothesis, Is The Earth A Living Organism?, was held at the University of Massachusetts Amherst, August 1–6.[37] The principal sponsor was the National Audubon Society. Speakers included James Lovelock, George Wald, Mary Catherine Bateson, Lewis Thomas, John Todd, Donald Michael, Christopher Bird, Thomas Berry, David Abram, Michael Cohen, and William Fields. Some 500 people attended.[citation needed]

Second Gaia conference

In 1988, climatologist Stephen Schneider organised a conference of the American Geophysical Union: the first Chapman Conference on Gaia,[38] held in San Diego, California, on March 7, 1988.

During the "philosophical foundations" session of the conference, David Abram spoke on the influence of metaphor in science, and of Gaia theory as offering a new and potentially game-changing metaphorics, while James Kirchner criticised the Gaia hypothesis for its imprecision. Kirchner claimed that Lovelock and Margulis had not presented one Gaia hypothesis, but four -
  • CoEvolutionary Gaia: that life and the environment had evolved in a coupled way. Kirchner claimed that this was already accepted scientifically and was not new.
  • Homeostatic Gaia: that life maintained the stability of the natural environment, and that this stability enabled life to continue to exist.
  • Geophysical Gaia: that the Gaia theory generated interest in geophysical cycles and therefore led to interesting new research in terrestrial geophysical dynamics.
  • Optimising Gaia: that Gaia shaped the planet in a way that made it an optimal environment for life as a whole. Kirchner claimed that this was not testable and therefore was not scientific.
Of Homeostatic Gaia, Kirchner recognised two alternatives. "Weak Gaia" asserted that life tends to make the environment stable for the flourishing of all life. "Strong Gaia", according to Kirchner, asserted that life tends to make the environment stable in order to enable the flourishing of all life. Strong Gaia, Kirchner claimed, was untestable and therefore not scientific.[39]

Lovelock and other Gaia-supporting scientists, however, did attempt to disprove the claim that the theory is not scientific because it is impossible to test it by controlled experiment. For example, against the charge that Gaia was teleological, Lovelock and Andrew Watson offered the Daisyworld model (and its modifications, above) as evidence against most of these criticisms. Lovelock said that the Daisyworld model "demonstrates that self-regulation of the global environment can emerge from competition amongst types of life altering their local environment in different ways".[40]

Lovelock was careful to present a version of the Gaia hypothesis that made no claim that Gaia intentionally or consciously maintained the complex balance in her environment that life needed to survive. The claim that Gaia acts "intentionally" appears to have been a metaphoric statement in his popular initial book that was not meant to be taken literally. This new statement of the Gaia hypothesis was more acceptable to the scientific community. Most accusations of teleologism ceased following this conference.

Third Gaia conference

By the time of the 2nd Chapman Conference on the Gaia Hypothesis, held at Valencia, Spain, on 23 June 2000,[41] the situation had changed significantly in accord with the developing science of biogeophysiology. Rather than a discussion of the Gaian teleological views or "types" of Gaia theory, the focus was upon the specific mechanisms by which basic short-term homeostasis was maintained within a framework of significant evolutionary long-term structural change.
The major questions were:[42]
  1. "How has the global biogeochemical/climate system called Gaia changed in time? What is its history? Can Gaia maintain stability of the system at one time scale but still undergo vectorial change at longer time scales? How can the geologic record be used to examine these questions?"
  2. "What is the structure of Gaia? Are the feedbacks sufficiently strong to influence the evolution of climate? Are there parts of the system determined pragmatically by whatever disciplinary study is being undertaken at any given time or are there a set of parts that should be taken as most true for understanding Gaia as containing evolving organisms over time? What are the feedbacks among these different parts of the Gaian system, and what does the near closure of matter mean for the structure of Gaia as a global ecosystem and for the productivity of life?"
  3. "How do models of Gaian processes and phenomena relate to reality and how do they help address and understand Gaia? How do results from Daisyworld transfer to the real world? What are the main candidates for "daisies"? Does it matter for Gaia theory whether we find daisies or not? How should we be searching for daisies, and should we intensify the search? How can Gaian mechanisms be investigated using process models or global models of the climate system that include the biota and allow for chemical cycling?"
In 1997, Tyler Volk argued that a Gaian system is almost inevitably produced as a result of an evolution towards far-from-equilibrium homeostatic states that maximise entropy production (MEP), and Kleidon (2004) agreed, stating: "...homeostatic behavior can emerge from a state of MEP associated with the planetary albedo"; "...the resulting behavior of a biotic Earth at a state of MEP may well lead to near-homeostatic behavior of the Earth system on long time scales, as stated by the Gaia hypothesis". Staley (2002) has similarly proposed "...an alternative form of Gaia theory based on more traditional Darwinian principles... In [this] new approach, environmental regulation is a consequence of population dynamics, not Darwinian selection. The role of selection is to favor organisms that are best adapted to prevailing environmental conditions. However, the environment is not a static backdrop for evolution, but is heavily influenced by the presence of living organisms. The resulting co-evolving dynamical process eventually leads to the convergence of equilibrium and optimal conditions".

Fourth Gaia conference

A fourth international conference on the Gaia Theory, sponsored by the Northern Virginia Regional Park Authority and others, was held in October 2006 at the Arlington, VA campus of George Mason University.[43]

Martin Ogle, Chief Naturalist for NVRPA and long-time Gaia theory proponent, organized the event. Lynn Margulis, Distinguished University Professor in the Department of Geosciences, University of Massachusetts Amherst, and long-time advocate of the Gaia theory, was a keynote speaker. Among the many other speakers were Tyler Volk, co-director of the Program in Earth and Environmental Science at New York University; Donald Aitken, principal of Donald Aitken Associates; Thomas Lovejoy, president of the Heinz Center for Science, Economics and the Environment; Robert Correll, senior fellow, Atmospheric Policy Program, American Meteorological Society; and the noted environmental ethicist J. Baird Callicott. James Lovelock, the theory's progenitor, prepared a video for the event.

This conference approached Gaia Theory as both science and metaphor as a means of understanding how we might begin addressing 21st century issues such as climate change and ongoing environmental destruction.

Criticism

After being largely ignored by most scientists from 1969 until 1977, the Gaia hypothesis was thereafter criticized for a period by a number of scientists, such as Ford Doolittle, Richard Dawkins and Stephen Jay Gould.[38] Lovelock has said that because his theory was named after a Greek goddess, and championed by many non-scientists,[33] the Gaia hypothesis was interpreted as a neo-Pagan religion. Many scientists in particular also criticised the approach taken in his popular book Gaia, a New Look at Life on Earth for being teleological—a belief that things are purposefully aimed towards a goal. Responding to this critique in 1990, Lovelock stated, "Nowhere in our writings do we express the idea that planetary self-regulation is purposeful, or involves foresight or planning by the biota".

Stephen Jay Gould criticised Gaia as being "a metaphor, not a mechanism."[44] He wanted to know the actual mechanisms by which self-regulating homeostasis was achieved. David Abram argues that Gould overlooked the fact that "mechanism" is itself a metaphor, albeit an exceedingly common and commonly unrecognized one, which leads us to consider natural and living systems as though they were machines organized and built from outside (rather than as autopoietic or self-organizing phenomena). Mechanical metaphors, according to Abram, lead us to overlook the active or agential quality of living entities, while the organismic metaphorics of Gaia theory accentuate the active agency of both the biota and the biosphere as a whole.[45][46] With regard to causality in Gaia, Lovelock argues that no single mechanism is responsible, that the connections between the various known mechanisms may never be known, that this is accepted in other fields of biology and ecology as a matter of course, and that specific hostility is reserved for his own theory for other reasons.[47]

Aside from clarifying his language and understanding of what is meant by a life form, Lovelock himself ascribes most of the criticism to a lack of understanding of non-linear mathematics by his critics, and a linearizing form of greedy reductionism in which all events have to be immediately ascribed to specific causes before the fact. He also states that most of his critics are biologists but that his theory includes experiments in fields outside biology, and that some self-regulating phenomena may not be mathematically explainable.[47]

Natural selection and evolution

Lovelock has suggested that global biological feedback mechanisms could evolve by natural selection, stating that organisms that improve their environment for their survival do better than those that damage their environment. However, in 1981, W. Ford Doolittle, in the CoEvolution Quarterly article "Is Nature Really Motherly" argued that nothing in the genome of individual organisms could provide the feedback mechanisms proposed by Lovelock, and therefore the Gaia hypothesis proposed no plausible mechanism and was unscientific. In Richard Dawkins' 1982 book, The Extended Phenotype, he stated that for organisms to act in concert would require foresight and planning, which is contrary to the current scientific understanding of evolution. Like Doolittle, he also rejected the possibility that feedback loops could stabilize the system.

Basic criteria of the definition of a life-form include an ability to replicate and pass on genetic information to a succeeding generation, and to be affected by natural selection.[48] Dawkins stressed that the planet is not the offspring of any parents and is unable to reproduce.[22]

Lynn Margulis, a microbiologist who collaborated with Lovelock in supporting the Gaia hypothesis, argued in 1999 that "Darwin's grand vision was not wrong, only incomplete. In accentuating the direct competition between individuals for resources as the primary selection mechanism, Darwin (and especially his followers) created the impression that the environment was simply a static arena". She wrote that the composition of the Earth's atmosphere, hydrosphere, and lithosphere are regulated around "set points" as in homeostasis, but those set points change with time.[49]

Evolutionary biologist W. D. Hamilton called the concept of Gaia Copernican, adding that it would take another Newton to explain how Gaian self-regulation takes place through Darwinian natural selection.[24][better source needed]

Recent criticism

Aspects of the Gaia hypothesis continue to be skeptically received by relevant scientists. For instance, arguments both for and against it were laid out in the journal Climatic Change in 2002 and 2003. A main reason for doubting it, it was suggested, is the many examples where life has detrimental and/or destabilising effects on the environment.[50][51] Several recent books have criticised the Gaia hypothesis, with comments ranging from “... the Gaia hypothesis lacks unambiguous observational support and has significant theoretical difficulties”[52] to “Suspended uncomfortably between tainted metaphor, fact, and false science, I prefer to leave Gaia firmly in the background”[53] to “The Gaia hypothesis is supported neither by evolutionary theory nor by the empirical evidence of the geological record”.[54] The CLAW hypothesis, previously held up as confirmation of the success of Gaia, has subsequently been discredited.[55] In 2009 the direct opposite hypothesis to Gaia was proposed: that life has highly detrimental (biocidal) impacts on planetary conditions.[56]

In a recent book-length evaluation of the Gaia hypothesis considering modern evidence from across the various relevant disciplines (hailed by the publishers as the first of its kind), the author, Toby Tyrrell of the National Oceanography Centre (UK), concluded: “I believe Gaia is a dead end. Its study has, however, generated many new and thought provoking questions. While rejecting Gaia, we can at the same time appreciate Lovelock's originality and breadth of vision, and recognise that his audacious concept has helped to stimulate many new ideas about the Earth, and to champion a holistic approach to studying it.”[57] Elsewhere he presents his conclusion that “The Gaia hypothesis is not an accurate picture of how our world works”.[58] This statement needs to be understood as referring to the "strong" and "moderate" forms of Gaia: that the biota obeys a principle that works to make Earth optimal (strength 5) or favourable for life (strength 4), or that it works as a homeostatic mechanism (strength 3). The latter is the "weakest" form of Gaia that Lovelock has advocated, and Tyrrell rejects it. However, he finds that the two weaker forms of Gaia, Coevolutionary Gaia and Influential Gaia, which assert that there are close links between the evolution of life and the environment and that biology affects the physical and chemical environment, are both credible, but that it is not useful to use the term "Gaia" in this sense.[59]
