Thursday, March 12, 2026

Climate sensitivity

From Wikipedia, the free encyclopedia
Diagram of factors that determine climate sensitivity. After increasing CO2 levels, there is an initial warming. This warming gets amplified by the net effect of climate feedbacks.

Climate sensitivity is a key measure in climate science and describes how much Earth's surface will warm in response to a doubling of the atmospheric carbon dioxide (CO2) concentration. Its formal definition is: "The change in the surface temperature in response to a change in the atmospheric carbon dioxide (CO2) concentration or other radiative forcing." This concept helps scientists understand the extent and magnitude of the effects of climate change.

The Earth's surface warms as a direct consequence of increased atmospheric CO2, as well as increased concentrations of other greenhouse gases such as nitrous oxide and methane. The increasing temperatures have secondary effects on the climate system. These secondary effects are called climate feedbacks. Self-reinforcing feedbacks include for example the melting of sunlight-reflecting ice as well as higher evapotranspiration. The latter effect increases average atmospheric water vapour, which is itself a greenhouse gas.

Scientists do not know exactly how strong these climate feedbacks are. Therefore, it is difficult to predict the precise amount of warming that will result from a given increase in greenhouse gas concentrations. If climate sensitivity turns out to be on the high side of scientific estimates, the Paris Agreement goal of limiting global warming to below 2 °C (3.6 °F) will be even more difficult to achieve.

There are two main kinds of climate sensitivity: the transient climate response is the initial rise in global temperature when CO2 levels double, and the equilibrium climate sensitivity is the larger long-term temperature increase after the planet adjusts to the doubling. Climate sensitivity is estimated by several methods: looking directly at temperature and greenhouse gas concentrations since the Industrial Revolution began around the 1750s, using indirect measurements from the Earth's distant past, and simulating the climate.

Fundamentals

The rate at which energy reaches Earth (as sunlight) and leaves Earth (as heat radiation to space) must balance, or the planet will get warmer or cooler. An imbalance between incoming and outgoing radiation energy is called radiative forcing. A warmer planet radiates heat to space faster and so a new balance is eventually reached, with a higher temperature and stored energy content. However, the warming of the planet also has knock-on effects, which create further warming in an exacerbating feedback loop. Climate sensitivity is a measure of how much temperature change a given amount of radiative forcing will cause.

Radiative forcing

Radiative forcings are generally quantified as watts per square meter (W/m2) and averaged over Earth's uppermost surface, defined as the top of the atmosphere. The magnitude of a forcing is specific to the physical driver and is defined relative to an accompanying time span of interest for its application. In the context of a contribution to long-term climate sensitivity from 1750 to 2020, the 50% increase in atmospheric CO2 is characterized by a forcing of about +2.1 W/m2. In the context of shorter-term contributions to Earth's energy imbalance (i.e. its heating/cooling rate), time intervals of interest may be as short as the interval between measurement or simulation data samplings and are thus likely to be accompanied by smaller forcing values. Forcings from such investigations have also been analyzed and reported at decadal time scales.
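The roughly +2.1 W/m2 figure for a 50% CO2 increase can be sketched with the commonly used simplified logarithmic expression ΔF ≈ 5.35 ln(C/C0); the formula is a standard approximation from the literature, not taken from this article:

```python
import math

def co2_forcing(c_new, c_ref=280.0):
    """Radiative forcing (W/m^2) from a CO2 concentration change,
    using the common simplified expression dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_new / c_ref)

# A 50% rise over the pre-industrial 280 ppm (to 420 ppm):
print(round(co2_forcing(420.0), 2))   # -> 2.17 W/m^2
# A full doubling (to 560 ppm):
print(round(co2_forcing(560.0), 2))   # -> 3.71 W/m^2
```

The logarithm is why each additional ppm of CO2 adds slightly less forcing than the last.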

Radiative forcing leads to long-term changes in global temperature. A number of factors contribute radiative forcing: increased downwelling radiation from the greenhouse effect, variability in solar radiation from changes in planetary orbit, changes in solar irradiance, direct and indirect effects caused by aerosols (for example changes in albedo from cloud cover), and changes in land use (deforestation or the loss of reflective ice cover). In contemporary research, radiative forcing by greenhouse gases is well understood. As of 2019, large uncertainties remain for aerosols.

Key numbers

Carbon dioxide (CO2) levels rose from 280 parts per million (ppm) in the 18th century, when humans in the Industrial Revolution started burning significant amounts of fossil fuels such as coal, to over 415 ppm by 2020. As CO2 is a greenhouse gas, it hinders heat energy from leaving the Earth's atmosphere. By 2016, atmospheric CO2 levels had increased by 45% over preindustrial levels, and the radiative forcing caused by increased CO2 was already more than 50% higher than in pre-industrial times because of non-linear effects. Between the 18th-century start of the Industrial Revolution and the year 2020, the Earth's temperature rose by a little over one degree Celsius (about two degrees Fahrenheit).

Societal importance

Because the economics of climate change mitigation depend greatly on how quickly carbon neutrality needs to be achieved, climate sensitivity estimates can have important economic and policy-making implications. One study suggests that halving the uncertainty of the value for transient climate response (TCR) could save trillions of dollars. A higher climate sensitivity would mean more dramatic increases in temperature, which makes it more prudent to take significant climate action. If climate sensitivity turns out to be on the high end of what scientists estimate, the Paris Agreement goal of limiting global warming to well below 2 °C cannot be achieved, and temperature increases will exceed that limit, at least temporarily. One study estimated that emissions cannot be reduced fast enough to meet the 2 °C goal if equilibrium climate sensitivity (the long-term measure) is higher than 3.4 °C (6.1 °F). The more sensitive the climate system is to changes in greenhouse gas concentrations, the more likely it is to have decades when temperatures are much higher or much lower than the longer-term average.

Factors that determine sensitivity

The radiative forcing caused by a doubling of atmospheric CO2 levels (from the pre-industrial 280 ppm) is approximately 3.7 watts per square meter (W/m2). In the absence of feedbacks, the energy imbalance would eventually result in roughly 1 °C (1.8 °F) of global warming. That figure is straightforward to calculate by using the Stefan–Boltzmann law and is undisputed.
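The roughly 1 °C no-feedback figure follows from differentiating the Stefan–Boltzmann law at Earth's effective emission temperature of about 255 K; a minimal sketch of that calculation:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0      # Earth's effective emission temperature, K
F_2XCO2 = 3.7      # forcing from a CO2 doubling, W/m^2

# Differentiating OLR = sigma * T^4 gives the "Planck response",
# the extra radiation to space per degree of warming:
planck_response = 4 * SIGMA * T_EFF**3        # W/m^2 per K
dT_no_feedback = F_2XCO2 / planck_response    # warming that rebalances the budget
print(round(planck_response, 2))   # -> 3.76
print(round(dT_no_feedback, 2))    # -> 0.98, i.e. roughly 1 C
```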

A further contribution arises from climate feedbacks, both self-reinforcing and balancing. The uncertainty in climate sensitivity estimates comes mostly from the feedbacks in the climate system, including water vapour feedback, ice–albedo feedback, cloud feedback, and lapse rate feedback. Balancing feedbacks tend to counteract warming by increasing the rate at which energy is radiated to space from a warmer planet. Self-reinforcing feedbacks increase warming; for example, higher temperatures can cause ice to melt, which reduces the ice area and the amount of sunlight the ice reflects, which in turn results in less heat energy being radiated back into space. The reflectiveness of a surface is called albedo. Climate sensitivity depends on the balance between those feedbacks.
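How feedbacks amplify the direct warming can be illustrated with the standard gain formula ΔT = ΔT0 / (1 − f); the feedback fraction f = 2/3 below is an illustrative assumption chosen to land near the 3 °C best estimate, not a measured value:

```python
def amplified_warming(dT0, feedback_fraction):
    """Warming after feedbacks, using the standard gain formula
    dT = dT0 / (1 - f), where f sums the normalized feedbacks."""
    assert feedback_fraction < 1.0, "f >= 1 would mean runaway warming"
    return dT0 / (1.0 - feedback_fraction)

# ~1 C of direct CO2-doubling warming, amplified by net feedbacks:
print(round(amplified_warming(1.0, 2/3), 1))   # -> 3.0
```

The same formula shows why uncertainty in f translates into large uncertainty in warming: as f approaches 1, small changes in f produce big changes in ΔT.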

Types

Schematic of how different measures of climate sensitivity relate to one another

Depending on the time scale, there are two main ways to define climate sensitivity: the short-term transient climate response (TCR) and the long-term equilibrium climate sensitivity (ECS), both of which incorporate the warming from exacerbating feedback loops. They are not discrete categories, but they overlap. Sensitivity to atmospheric CO2 increases is measured as the amount of temperature change for a doubling of the atmospheric CO2 concentration.

Although the term "climate sensitivity" is usually used for the sensitivity to radiative forcing caused by rising atmospheric CO2, it is a general property of the climate system. Other agents can also cause a radiative imbalance. Climate sensitivity is the change in surface air temperature per unit change in radiative forcing, and the climate sensitivity parameter is therefore expressed in units of °C/(W/m2). Climate sensitivity is approximately the same whatever the reason for the radiative forcing (such as from greenhouse gases or solar variation). When climate sensitivity is expressed as the temperature change for a level of atmospheric CO2 double the pre-industrial level, its units are degrees Celsius (°C).
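Converting between the two unit conventions is a single multiplication by the canonical doubling forcing of about 3.7 W/m2; a minimal sketch (the 0.8 °C/(W/m2) parameter below is only an illustrative value):

```python
F_2XCO2 = 3.7   # W/m^2, radiative forcing from a CO2 doubling

def sensitivity_parameter_to_ecs(lam):
    """Convert the climate sensitivity parameter, in C per (W/m^2),
    to a per-doubling sensitivity in C."""
    return lam * F_2XCO2

# An illustrative parameter of 0.8 C/(W/m^2):
print(round(sensitivity_parameter_to_ecs(0.8), 1))   # -> 3.0
```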

Transient climate response

The transient climate response (TCR) is defined as "the change in the global mean surface temperature, averaged over a 20-year period, centered at the time of atmospheric carbon dioxide doubling, in a climate model simulation" in which the atmospheric CO2 concentration increases at 1% per year. That estimate is generated by using shorter-term simulations. The transient response is lower than the equilibrium climate sensitivity because slower feedbacks, which exacerbate the temperature increase, take more time to respond in full to an increase in the atmospheric CO2 concentration. For instance, the deep ocean takes many centuries to reach a new steady state after a perturbation, during which it continues to serve as a heat sink, which cools the upper ocean. The IPCC literature assessment estimates that the TCR likely lies between 1 °C (1.8 °F) and 2.5 °C (4.5 °F).
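The 1%-per-year growth rate in the TCR definition implies doubling after about 70 years, which is why the 20-year averaging window is centered roughly at year 70 of such a simulation:

```python
import math

# CO2 growing at 1% per year doubles when 1.01**n == 2,
# i.e. after n = ln(2) / ln(1.01) years:
years_to_double = math.log(2) / math.log(1.01)
print(round(years_to_double))   # -> 70
```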

A related measure is the transient climate response to cumulative carbon emissions (TCRE), which is the globally averaged surface temperature change after 1000 GtC of CO2 has been emitted. As such, it includes not only temperature feedbacks to forcing but also the carbon cycle and carbon cycle feedbacks.
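Because the TCRE relationship is close to linear, warming scales directly with cumulative emissions; a minimal sketch, with the TCRE value itself an illustrative mid-range assumption rather than a figure from this article:

```python
def warming_from_emissions(cumulative_gtc, tcre=1.65):
    """Warming implied by cumulative carbon emissions under the
    near-linear TCRE relationship. tcre is in degrees C per 1000 GtC;
    the default of 1.65 is only an illustrative mid-range assumption."""
    return tcre * cumulative_gtc / 1000.0

# Warming after the 1000 GtC used in the TCRE definition:
print(round(warming_from_emissions(1000.0), 2))   # -> 1.65
```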

Equilibrium climate sensitivity

The equilibrium climate sensitivity (ECS) is the long-term temperature rise (equilibrium global mean near-surface air temperature) that is expected to result from a doubling of the atmospheric CO2 concentration (ΔT). It is a prediction of the new global mean near-surface air temperature once the CO2 concentration has stopped increasing and most of the feedbacks have had time to take their full effect. Reaching an equilibrium temperature can take centuries or even millennia after CO2 has doubled. ECS is higher than TCR because of the oceans' short-term buffering effects. Computer models are used for estimating the ECS. A comprehensive estimate, which models the whole time span during which significant feedbacks continue to change global temperatures (such as fully equilibrating ocean temperatures), requires running a computer model that covers thousands of years. There are, however, less computing-intensive methods.

The IPCC Sixth Assessment Report (AR6) stated that there is high confidence that ECS is within the range of 2.5 °C to 4 °C, with a best estimate of 3 °C.

The long time scales involved with ECS make it arguably a less relevant measure for policy decisions around climate change.

Effective climate sensitivity

A common approximation to ECS is the effective equilibrium climate sensitivity, an estimate of equilibrium climate sensitivity made using data from a climate system, in models or real-world observations, that is not yet in equilibrium. Estimates assume that the net amplification effect of feedbacks, as measured after some period of warming, will remain constant afterwards. That is not necessarily true, as feedbacks can change with time. In many climate models, feedbacks become stronger over time, and so the effective climate sensitivity is lower than the real ECS.

Earth system sensitivity

By definition, equilibrium climate sensitivity does not include feedbacks that take millennia to emerge, such as long-term changes in Earth's albedo because of changes in ice sheets and vegetation. It also does not include the slow response of the deep oceans' warming, which takes millennia. Earth system sensitivity (ESS) incorporates the effects of these slower feedback loops, such as the change in Earth's albedo from the melting of large continental ice sheets, which covered much of the Northern Hemisphere during the Last Glacial Maximum and still cover Greenland and Antarctica. Changes in albedo as a result of changes in vegetation, as well as changes in ocean circulation, are also included. The longer-term feedback loops make the ESS larger than the ECS, possibly twice as large. Data from the geological history of Earth is used in estimating ESS. Differences between modern and long-ago climatic conditions mean that estimates of the future ESS are highly uncertain. The carbon cycle is not included in the definition of the ESS, but all other elements of the climate system are included.

Sensitivity to nature of forcing

Different forcing agents, such as greenhouse gases and aerosols, can be compared using their radiative forcing, the initial radiative imbalance averaged over the entire globe. Climate sensitivity is the amount of warming per radiative forcing. To a first approximation, the cause of the radiative imbalance does not matter. However, radiative forcing from sources other than CO2 can cause slightly more or less surface warming than the same averaged forcing from CO2. The amount of feedback varies mainly because the forcings are not uniformly distributed over the globe. Forcings that initially warm the Northern Hemisphere, land, or polar regions generate more self-reinforcing feedbacks (such as the ice-albedo feedback) than an equivalent forcing from CO2, which is more uniformly distributed over the globe. This gives rise to more overall warming. Several studies indicate that human-emitted aerosols are more effective than CO2 at changing global temperatures, and volcanic forcing is less effective. When climate sensitivity to CO2 forcing is estimated using historical temperature and forcing (caused by a mix of aerosols and greenhouse gases), and that effect is not taken into account, climate sensitivity is underestimated.

State dependence

Artist impression of a Snowball Earth.

Climate sensitivity has been defined as the short- or long-term temperature change resulting from any doubling of CO2, but there is evidence that the sensitivity of Earth's climate system is not constant. For instance, the planet has polar ice and high-altitude glaciers. Until the world's ice has completely melted, a self-reinforcing ice–albedo feedback loop makes the system more sensitive overall. Throughout Earth's history, there are thought to have been multiple periods during which snow and ice covered almost the entire globe. In most models of "Snowball Earth", parts of the tropics were at least intermittently free of ice cover. As the ice advanced or retreated, climate sensitivity must have been very high, as the large changes in the area of ice cover would have made for a very strong ice–albedo feedback. Volcanic changes to atmospheric composition are thought to have provided the radiative forcing needed to escape the snowball state.

Equilibrium climate sensitivity can change with climate.

Throughout the Quaternary period (the most recent 2.58 million years), climate has oscillated between glacial periods, the most recent one being the Last Glacial Maximum, and interglacial periods, the most recent one being the current Holocene, but the period's climate sensitivity is difficult to determine. The Paleocene–Eocene Thermal Maximum, about 55.5 million years ago, was unusually warm and may have been characterized by above-average climate sensitivity.

Climate sensitivity may further change if tipping points are crossed. It is unlikely that tipping points will cause short-term changes in climate sensitivity. If a tipping point is crossed, climate sensitivity is expected to change at the time scale of the subsystem that hits its tipping point. Especially if there are multiple interacting tipping points, the transition of climate to a new state may be difficult to reverse.

The two most common definitions of climate sensitivity specify the climate state: the ECS and the TCR are defined for a doubling with respect to the CO2 levels in the pre-industrial era. Because of potential changes in climate sensitivity, the climate system may warm by a different amount after a second doubling of CO2 from after a first doubling. The effect of any change in climate sensitivity is expected to be small or negligible in the first century after additional CO2 is released into the atmosphere.

Estimation

Using Industrial Age (1750–present) data

Climate sensitivity can be estimated using the observed temperature increase, the observed ocean heat uptake, and the modelled or observed radiative forcing. The data are linked through a simple energy-balance model to calculate climate sensitivity. Radiative forcing is often modelled because the Earth observation satellites that measure it have existed for only part of the Industrial Age (only since the late 1950s). Estimates of climate sensitivity calculated by using these global energy constraints have consistently been lower than those calculated by using other methods, around 2 °C (3.6 °F) or lower.

Estimates of transient climate response (TCR) that have been calculated from models and observational data can be reconciled if it is taken into account that fewer temperature measurements are taken in the polar regions, which warm more quickly than the Earth as a whole. If only regions for which measurements are available are used in evaluating the model, the differences in TCR estimates are negligible.

A very simple climate model could estimate climate sensitivity from Industrial Age data by waiting for the climate system to reach equilibrium and then measuring the resulting warming, ΔTeq (°C). Computation of the equilibrium climate sensitivity, S (°C), using the radiative forcing ΔF (W/m2) and the measured temperature rise, would then be possible. The radiative forcing resulting from a doubling of CO2, F2×CO2, is relatively well known, at about 3.7 W/m2. Combining that information results in this equation:

S = F2×CO2 × ΔTeq / ΔF.

However, the climate system is not in equilibrium, since the actual warming lags the equilibrium warming, largely because the oceans take up heat and will take centuries or millennia to reach equilibrium. Estimating climate sensitivity from Industrial Age data requires an adjustment to the equation above. The actual forcing felt by the atmosphere is the radiative forcing minus the ocean's heat uptake, H (W/m2), so climate sensitivity can be estimated as:

S = F2×CO2 × ΔT / (ΔF − H).

The global temperature increase between the beginning of the Industrial Period (taken as 1750) and 2011 was about 0.85 °C (1.53 °F). In 2011, the radiative forcing from CO2 and other long-lived greenhouse gases (mainly methane, nitrous oxide, and chlorofluorocarbons) that had been emitted since the 18th century was roughly 2.8 W/m2. The climate forcing, ΔF, also contains contributions from solar activity (+0.05 W/m2), aerosols (−0.9 W/m2), ozone (+0.35 W/m2), and other smaller influences, which brings the total forcing over the Industrial Period to 2.2 W/m2, according to the best estimate of the IPCC Fifth Assessment Report in 2014, with substantial uncertainty. The ocean heat uptake, estimated by the same report to be 0.42 W/m2, yields a value for S of about 1.8 °C (3.2 °F).
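Plugging the report's numbers into the adjusted energy-balance relation reproduces the quoted estimate:

```python
F_2XCO2 = 3.7   # W/m^2, forcing from a CO2 doubling
dT = 0.85       # C, observed warming 1750-2011
dF = 2.2        # W/m^2, total Industrial-Age forcing (AR5 best estimate)
H = 0.42        # W/m^2, ocean heat uptake (AR5 estimate)

# S = F_2xCO2 * dT / (dF - H), the ocean-adjusted energy balance:
S = F_2XCO2 * dT / (dF - H)
print(round(S, 1))   # -> 1.8 C
```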

Other strategies

In theory, Industrial Age temperatures could also be used to determine a time scale for the temperature response of the climate system and thus climate sensitivity: if the effective heat capacity of the climate system is known, and the timescale is estimated using autocorrelation of the measured temperature, an estimate of climate sensitivity can be derived. In practice, however, the simultaneous determination of the time scale and heat capacity is difficult.
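The idea can be sketched with a one-box energy-balance model, C dT/dt = F − T/λ, whose response timescale is τ = Cλ; if τ is estimated from autocorrelation and C is known, λ (and hence S) follows. The heat capacity and timescale values below are illustrative assumptions, not measured values:

```python
# One-box model: response timescale tau = C * lam, so lam = tau / C
# and S = lam * F_2xCO2. All numbers below are illustrative.
SECONDS_PER_YEAR = 3.156e7
C = 8.4e8        # J m^-2 K^-1, assumed effective heat capacity (~200 m of ocean)
tau_years = 20.0 # assumed response timescale from temperature autocorrelation

lam = tau_years * SECONDS_PER_YEAR / C   # K per (W/m^2)
S = lam * 3.7                            # C per CO2 doubling
print(round(S, 1))   # -> 2.8
```

The difficulty noted above shows up here directly: the same S is consistent with a large C and long τ or a small C and short τ, so the two cannot be separated from the temperature record alone.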

Attempts have been made to use the 11-year solar cycle to constrain the transient climate response. Solar irradiance is about 0.9 W/m2 higher during a solar maximum than during a solar minimum, and that effect can be observed in measured average global temperatures from 1959 to 2004. Unfortunately, the solar minima in the period coincided with volcanic eruptions, which have a cooling effect on the global temperature. Because the eruptions caused a larger and less well-quantified decrease in radiative forcing than the reduced solar irradiance, it is questionable whether useful quantitative conclusions can be derived from the observed temperature variations.


Observations of volcanic eruptions have also been used to try to estimate climate sensitivity, but as the aerosols from a single eruption last at most a couple of years in the atmosphere, the climate system can never come close to equilibrium, and there is less cooling than there would be if the aerosols stayed in the atmosphere for longer. Therefore, volcanic eruptions give information only about a lower bound on transient climate sensitivity.

Using data from Earth's past

Historical climate sensitivity can be estimated by using reconstructions of Earth's past temperatures and CO2 levels. Paleoclimatologists have studied different geological periods, such as the warm Pliocene (5.3 to 2.6 million years ago) and the colder Pleistocene (2.6 million to 11,700 years ago), and sought periods that are in some way analogous to or informative about current climate change. Climates further back in Earth's history are more difficult to study because fewer data are available about them. For instance, past CO2 concentrations can be derived from air trapped in ice cores, but as of 2020, the oldest continuous ice core is less than one million years old. Recent periods, such as the Last Glacial Maximum (LGM) (about 21,000 years ago) and the Mid-Holocene (about 6,000 years ago), are often studied, especially when more information about them becomes available.

A 2007 estimate of sensitivity made using data from the most recent 420 million years is consistent with sensitivities of current climate models and with other determinations. The Paleocene–Eocene Thermal Maximum (about 55.5 million years ago), a 20,000-year period during which massive amounts of carbon entered the atmosphere and average global temperatures increased by approximately 6 °C (11 °F), also provides a good opportunity to study the climate system when it was in a warm state. Studies of the last 800,000 years have concluded that climate sensitivity was greater in glacial periods than in interglacial periods.

As the name suggests, the Last Glacial Maximum was much colder than today, and good data on atmospheric CO2 concentrations and radiative forcing from that period are available. The period's orbital forcing was different from today's but had little direct effect on mean annual temperatures. Climate sensitivity can be estimated from the Last Glacial Maximum in several ways. One way is to use estimates of global radiative forcing and temperature directly. The set of feedback mechanisms active during the period, however, may be different from the feedbacks caused by a present doubling of CO2, and such feedback differences across climate states must be accounted for when inferring today's climate sensitivity from paleoclimate evidence. In a different approach, a model of intermediate complexity is used to simulate conditions during the period. Several versions of this single model are run, with different values chosen for uncertain parameters, such that each version has a different ECS. Outcomes that best simulate the LGM's observed cooling probably produce the most realistic ECS values.

Using climate models

Histogram of equilibrium climate sensitivity as derived for different plausible assumptions
Frequency distribution of equilibrium climate sensitivity based on simulations of the doubling of CO2. Each model simulation has different estimates for processes which scientists do not sufficiently understand. Few of the simulations result in less than 2 °C (3.6 °F) of warming or significantly more than 4 °C (7.2 °F). However, the positive skew, which is also found in other studies, suggests that if carbon dioxide concentrations double, the probability of large or very large increases in temperature is greater than the probability of small increases.
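The positive skew arises naturally when the uncertain quantity is the feedback gain f rather than the sensitivity itself, since S = ΔT0 / (1 − f) stretches the upper tail much more than the lower one. A Monte Carlo sketch with illustrative parameters (the values of dT0, f_mean, and f_sd are assumptions, not fitted to any study):

```python
import random

random.seed(0)
dT0 = 1.2                  # C, assumed no-feedback warming
f_mean, f_sd = 0.6, 0.13   # assumed normal uncertainty in the feedback gain

samples = []
for _ in range(100_000):
    f = random.gauss(f_mean, f_sd)
    if f < 1.0:                      # exclude (unphysical) runaway cases
        samples.append(dT0 / (1.0 - f))

samples.sort()
median = samples[len(samples) // 2]
mean = sum(samples) / len(samples)
# A symmetric spread in f produces an asymmetric spread in S:
print(mean > median)   # -> True: positively skewed
```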

Climate models simulate the CO2-driven warming of the future as well as the past. They operate on principles similar to those underlying models that predict the weather, but they focus on longer-term processes. Climate models typically begin with a starting state and then apply physical laws and knowledge about biology to predict future states. As with weather modelling, no computer has the power to model the complexity of the entire planet and simplifications are used to reduce that complexity to something manageable. An important simplification divides Earth's atmosphere into model cells. For instance, the atmosphere might be divided into cubes of air ten or one hundred kilometers on each side. Each model cell is treated as if it were homogeneous (uniform). Calculations for model cells are much faster than trying to simulate each molecule of air separately.

A lower model resolution (large model cells and long time steps) takes less computing power but cannot simulate the atmosphere in as much detail. A model cannot simulate processes smaller than the model cells or shorter-term than a single time step. The effects of the smaller-scale and shorter-term processes must therefore be estimated by using other methods. Physical laws contained in the models may also be simplified to speed up calculations. The biosphere must be included in climate models. The effects of the biosphere are estimated by using data on the average behaviour of the average plant assemblage of an area under the modelled conditions. Climate sensitivity is therefore an emergent property of these models; it is not prescribed, but it follows from the interaction of all the modelled processes.

To estimate climate sensitivity, a model is run by using a variety of radiative forcings (doubling quickly, doubling gradually, or following historical emissions) and the temperature results are compared to the forcing applied. Different models give different estimates of climate sensitivity, but they tend to fall within a similar range, as described above.
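A toy zero-dimensional energy-balance model makes this concrete: ramping the forcing at roughly 1% per year, the warming at the moment of doubling (a TCR-like number) is visibly smaller than the warming after the system equilibrates (an ECS-like number). All parameter values are illustrative assumptions, not tuned model output:

```python
# Toy zero-dimensional energy-balance model: C * dT/dt = F - T/lam,
# stepped yearly with forward Euler. Illustrative parameters only.
SECONDS_PER_YEAR = 3.156e7
C = 8.4e8 / SECONDS_PER_YEAR   # heat capacity in (W yr) m^-2 K^-1
lam = 0.8                      # K per (W/m^2), so equilibrium ~ 3 C per doubling
F2X = 3.7                      # W/m^2, forcing at doubled CO2

T, dt = 0.0, 1.0
history = []
for year in range(500):
    F = F2X * min(year / 70.0, 1.0)   # ~1%/yr ramp: doubling reached at year 70
    T += dt * (F - T / lam) / C
    history.append(T)

tcr_like = history[69]   # warming at the time of doubling
ecs_like = history[-1]   # warming after centuries at constant 2xCO2 forcing
print(round(tcr_like, 1), round(ecs_like, 1))   # -> 2.1 3.0
```

The gap between the two numbers is the heat still flowing into the (here, single-box) ocean at the time of doubling.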

Testing, comparisons, and climate ensembles

Modelling of the climate system can lead to a wide range of outcomes. Models are often run with different plausible parameters in their approximation of physical laws and the behaviour of the biosphere; together, such runs form a perturbed physics ensemble, which attempts to model the sensitivity of the climate to different types and amounts of change in each parameter. Alternatively, structurally-different models developed at different institutions are put together, creating an ensemble. By selecting only the simulations that can simulate some part of the historical climate well, a constrained estimate of climate sensitivity can be made. One strategy for obtaining more accurate results is placing more emphasis on climate models that perform well in general.
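The constraining step can be sketched as follows: sample a perturbed parameter, keep only the ensemble members whose simulated historical warming matches the observation, and read off the constrained sensitivity range. All numbers here are illustrative assumptions, and the "simulation" is just an equilibrium scaling, not a real model:

```python
import random

random.seed(1)
F2X, F_HIST = 3.7, 2.0   # forcings: CO2 doubling, and an assumed "historical" value
OBS_WARMING = 1.1        # C, assumed observed historical warming
TOL = 0.1                # C, tolerance for matching the observation

all_ecs, kept_ecs = [], []
for _ in range(10_000):
    lam = random.uniform(0.4, 1.4)   # perturbed feedback parameter, K/(W/m^2)
    ecs = lam * F2X
    simulated = lam * F_HIST         # toy "simulated" historical warming
    all_ecs.append(ecs)
    if abs(simulated - OBS_WARMING) < TOL:
        kept_ecs.append(ecs)         # member survives the historical constraint

# The constrained subset spans a much narrower ECS range than the raw ensemble:
print(min(kept_ecs) > min(all_ecs), max(kept_ecs) < max(all_ecs))
```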

A model is tested using observations, paleoclimate data, or both to see if it replicates them accurately. If it does not, inaccuracies in the physical model and parametrizations are sought, and the model is modified. For models used to estimate climate sensitivity, specific test metrics that are directly and physically linked to climate sensitivity are sought. Examples of such metrics are the global patterns of warming, the ability of a model to reproduce observed relative humidity in the tropics and subtropics, patterns of heat radiation, and the variability of temperature around long-term historical warming. Ensemble climate models developed at different institutions tend to produce constrained estimates of ECS that are slightly higher than 3 °C (5.4 °F). The models with ECS slightly above 3 °C (5.4 °F) simulate the above situations better than models with a lower climate sensitivity.

Many projects and groups exist to compare and to analyse the results of multiple models. For instance, the Coupled Model Intercomparison Project (CMIP) has been running since the 1990s.

Historical estimates

Svante Arrhenius in the 19th century was the first person to quantify global warming as a consequence of a doubling of the concentration of CO2. In his first paper on the matter, he estimated that global temperature would rise by around 5 to 6 °C (9.0 to 10.8 °F) if the quantity of CO2 was doubled. In later work, he revised that estimate to 4 °C (7.2 °F). Arrhenius used Samuel Pierpont Langley's observations of radiation emitted by the full moon to estimate the amount of radiation that was absorbed by water vapour and by CO2. To account for water vapour feedback, he assumed that relative humidity would stay the same under global warming.

The first calculation of climate sensitivity that used detailed measurements of absorption spectra, as well as the first calculation to use a computer for numerical integration of the radiative transfer through the atmosphere, was performed by Syukuro Manabe and Richard Wetherald in 1967. Assuming constant humidity, they computed an equilibrium climate sensitivity of 2.3 °C per doubling of CO2, which they rounded to 2 °C in the abstract of the paper, the value most often quoted from their work. The work has been called "arguably the greatest climate-science paper of all time" and "the most influential study of climate of all time."

A committee on anthropogenic global warming, convened in 1979 by the United States National Academy of Sciences and chaired by Jule Charney, estimated equilibrium climate sensitivity to be 3 °C (5.4 °F), plus or minus 1.5 °C (2.7 °F). The Manabe and Wetherald estimate (2 °C (3.6 °F)), James E. Hansen's estimate of 4 °C (7.2 °F), and Charney's model were the only models available in 1979. According to Manabe, speaking in 2004, "Charney chose 0.5 °C as a reasonable margin of error, subtracted it from Manabe's number, and added it to Hansen's, giving rise to the 1.5 to 4.5 °C (2.7 to 8.1 °F) range of likely climate sensitivity that has appeared in every greenhouse assessment since ...." In 2008, climatologist Stefan Rahmstorf said: "At that time [it was published], the [Charney report estimate's] range [of uncertainty] was on very shaky ground. Since then, many vastly improved models have been developed by a number of climate research centers around the world."

Assessment reports of IPCC

diagram showing five historical estimates of equilibrium climate sensitivity by the IPCC
Historical estimates of climate sensitivity from the IPCC assessments. The first three reports gave a qualitative likely range, and the fourth and the fifth assessment report formally quantified the uncertainty. The dark blue range is judged as being more than 66% likely.

Despite considerable progress in the understanding of Earth's climate system, assessments continued to report similar uncertainty ranges for climate sensitivity for some time after the 1979 Charney report. The First Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), published in 1990, estimated that equilibrium climate sensitivity to a doubling of CO2 lay between 1.5 and 4.5 °C (2.7 and 8.1 °F), with a "best guess in the light of current knowledge" of 2.5 °C (4.5 °F). The report used models with simplified representations of ocean dynamics. The 1992 IPCC supplementary report, which used full-ocean circulation models, saw "no compelling reason to warrant changing" the 1990 estimate, and the IPCC Second Assessment Report stated, "No strong reasons have emerged to change [these estimates]." In the reports, much of the uncertainty around climate sensitivity was attributed to insufficient knowledge of cloud processes. The 2001 IPCC Third Assessment Report also retained this likely range.

Authors of the 2007 IPCC Fourth Assessment Report stated that confidence in estimates of equilibrium climate sensitivity had increased substantially since the Third Assessment Report. The IPCC authors concluded that ECS is very likely to be greater than 1.5 °C (2.7 °F) and likely to lie in the range 2 to 4.5 °C (3.6 to 8.1 °F), with a most likely value of about 3 °C (5.4 °F). The IPCC stated that fundamental physical reasons and data limitations prevent a climate sensitivity higher than 4.5 °C (8.1 °F) from being ruled out, but the climate sensitivity estimates in the likely range agreed better with observations and the proxy climate data.

The 2013 IPCC Fifth Assessment Report reverted to the earlier range of 1.5 to 4.5 °C (2.7 to 8.1 °F) (with high confidence), because some estimates using industrial-age data came out low. The report also stated that ECS is extremely unlikely to be less than 1 °C (1.8 °F) (high confidence), and it is very unlikely to be greater than 6 °C (11 °F) (medium confidence). Those values were estimated by combining the available data with expert judgement.

In preparation for the 2021 IPCC Sixth Assessment Report, a new generation of climate models was developed by scientific groups around the world. Across 27 global climate models, the estimates of climate sensitivity were higher than before: the values spanned 1.8 to 5.6 °C (3.2 to 10.1 °F) and exceeded 4.5 °C (8.1 °F) in 10 of them. The estimates for equilibrium climate sensitivity rose from 3.2 °C to 3.7 °C, and those for the transient climate response from 1.8 °C to 2.0 °C. The cause of the increased ECS lies mainly in improved modelling of clouds. Temperature rises are now believed to cause sharper decreases in the number of low clouds, and fewer low clouds means more sunlight is absorbed by the planet and less reflected to space.

Remaining deficiencies in the simulation of clouds may have led to overestimates, as models with the highest ECS values were not consistent with observed warming. A fifth of the models began to 'run hot', predicting that global warming would produce significantly higher temperatures than are considered plausible. According to these models, known as hot models, average global temperatures in the worst-case scenario would rise by more than 5 °C above preindustrial levels by 2100, with a "catastrophic" impact on human society. In comparison, empirical observations combined with physics models indicate that the "very likely" range is between 2.3 and 4.7 °C. Models with a very high climate sensitivity are also known to be poor at reproducing known historical climate trends, such as warming over the 20th century or cooling during the last ice age. For these reasons the predictions of hot models are considered implausible, and have been given less weight by the IPCC in 2022.

Wednesday, March 11, 2026

Positive feedback

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Positive_feedback
Causal loop diagram that depicts the causes of a stampede as a positive feedback loop. Alarm or panic can sometimes be spread by positive feedback among a herd of animals to cause a stampede.
In sociology a network effect can quickly create the positive feedback of a bank run. The above photo is of the UK Northern Rock 2007 bank run.

Positive feedback (exacerbating feedback, self-reinforcing feedback) is a process that occurs in a feedback loop where the outcome of a process reinforces the inciting process to build momentum. Such forces can exacerbate the effects of a small disturbance: the effects of a perturbation on a system include an increase in the magnitude of the perturbation, so that A produces more of B, which in turn produces more of A. In contrast, a system in which the results of a change act to reduce or counteract it has negative feedback. Both concepts play an important role in science and engineering, including biology, chemistry, and cybernetics.

Mathematically, positive feedback is defined as a positive loop gain around a closed loop of cause and effect. That is, positive feedback is in phase with the input, in the sense that it adds to make the input larger. Positive feedback tends to cause system instability. When the loop gain is positive and above 1, there will typically be exponential growth, increasing oscillations, chaotic behavior or other divergences from equilibrium.[3] System parameters will typically accelerate towards extreme values, which may damage or destroy the system, or may end with the system latched into a new stable state. Positive feedback may be controlled by signals in the system being filtered, damped, or limited, or it can be cancelled or reduced by adding negative feedback.

Positive feedback is used in digital electronics to force voltages away from intermediate voltages into '0' and '1' states. On the other hand, thermal runaway is a type of positive feedback that can destroy semiconductor junctions. Positive feedback in chemical reactions can increase the rate of reactions, and in some cases can lead to explosions. Positive feedback in mechanical design causes tipping-point, or over-centre, mechanisms to snap into position, for example in switches and locking pliers. Out of control, it can cause bridges to collapse. Positive feedback in economic systems can cause boom-then-bust cycles. A familiar example of positive feedback is the loud squealing or howling sound produced by audio feedback in public address systems: the microphone picks up sound from its own loudspeakers, amplifies it, and sends it through the speakers again.

Platelet clotting demonstrates positive feedback. The damaged blood vessel wall releases chemicals that initiate the formation of a blood clot through platelet aggregation. As more platelets gather, more chemicals are released that speed up the process. The process gets faster and faster until the blood vessel wall is completely sealed and the positive feedback loop has ended. The exponential form of the graph illustrates the positive feedback mechanism.

Overview

Positive feedback enhances or amplifies an effect through its influence on the process that gave rise to it. For example, when part of an electronic output signal returns to the input, and is in phase with it, the system gain is increased. The feedback from the outcome to the originating process can be direct, or it can be via other state variables. Such systems can give rich qualitative behaviors, but whether the feedback is instantaneously positive or negative in sign has an extremely important influence on the results. Positive feedback reinforces and negative feedback moderates the original process. Positive and negative in this sense refer to loop gains greater than or less than zero, and do not imply any value judgements as to the desirability of the outcomes or effects. A key feature of positive feedback is thus that small disturbances get bigger. When a change occurs in a system, positive feedback causes further change, in the same direction.

Basic

A basic feedback system can be represented by this block diagram. In the diagram the + symbol is an adder and A and B are arbitrary causal functions.

A simple feedback loop is shown in the diagram. If the loop gain AB is positive, then a condition of positive or regenerative feedback exists.

If the functions A and B are linear and AB is smaller than unity, then the overall system gain from the input to the output is finite, but can be very large as AB approaches unity. In that case, it can be shown that the overall or "closed loop" gain from input to output is:

    Gain = A / (1 − AB)

When AB > 1, the system is unstable, so it does not have a well-defined gain; the gain may be called infinite.

Thus depending on the feedback, state changes can be convergent, or divergent. The result of positive feedback is to augment changes, so that small perturbations may result in big changes.
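The convergent and divergent cases can be checked numerically. The following is a minimal sketch (with illustrative values of A and B, not taken from the text) that iterates the loop relation y ← A·(x + B·y) for the block diagram above; when AB < 1 the output settles at A·x/(1 − AB), while when AB > 1 it diverges.

```python
def iterate_loop(x, A, B, steps):
    """Iterate the feedback relation y = A * (x + B * y) a number of times."""
    y = 0.0
    for _ in range(steps):
        y = A * (x + B * y)
    return y

# Convergent case: AB = 0.5 < 1, closed-loop gain A / (1 - AB) = 20
print(iterate_loop(1.0, A=10.0, B=0.05, steps=200))  # ~20.0

# Divergent case: AB = 2 > 1, each pass roughly doubles the output
print(iterate_loop(1.0, A=10.0, B=0.2, steps=30))    # very large
```

The same iteration illustrates why a small perturbation in x is amplified by the factor 1/(1 − AB) as AB approaches unity.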

A system in equilibrium in which there is positive feedback to any change from its current state may be unstable, in which case the system is said to be in an unstable equilibrium. The magnitude of the forces that act to move such a system away from its equilibrium is an increasing function of the distance of the state from the equilibrium.

Positive feedback does not necessarily imply instability of an equilibrium, for example stable on and off states may exist in positive-feedback architectures.

Hysteresis

Hysteresis causes the output value to depend on the history of the input.
In a Schmitt trigger circuit, feedback to the non-inverting input of an amplifier pushes the output directly away from the applied voltage towards the maximum or minimum voltage the amplifier can generate.

In the real world, positive feedback loops typically do not cause ever-increasing growth but are modified by limiting effects of some sort. According to Donella Meadows:

"Positive feedback loops are sources of growth, explosion, erosion, and collapse in systems. A system with an unchecked positive loop ultimately will destroy itself. That's why there are so few of them. Usually, a negative loop will kick in sooner or later."

Hysteresis, in which the starting point affects where the system ends up, can be generated by positive feedback. When the gain of the feedback loop is above 1, then the output moves away from the input: if it is above the input, then it moves towards the nearest positive limit, while if it is below the input then it moves towards the nearest negative limit.

Once it reaches the limit, it will be stable. However, if the input goes past the limit, then the feedback will change sign and the output will move in the opposite direction until it hits the opposite limit. The system therefore shows bistable behaviour.

Terminology

The terms positive and negative were first applied to feedback before World War II. The idea of positive feedback was already current in the 1920s with the introduction of the regenerative circuit.

Friis & Jensen (1924) described regeneration in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mention only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black:

"Positive feed-back increases the gain of the amplifier, negative feed-back reduces it."

According to Mindell (2002) confusion in the terms arose shortly after this:

"...Friis and Jensen had made the same distinction Black used between 'positive feed-back' and 'negative feed-back', based not on the sign of the feedback itself but rather on its effect on the amplifier's gain. In contrast, Nyquist and Bode, when they built on Black's work, referred to negative feedback as that with the sign reversed. Black had trouble convincing others of the utility of his invention in part because confusion existed over basic matters of definition."

These confusions, along with the everyday associations of positive with good and negative with bad, have led many systems theorists to propose alternative terms. For example, Donella Meadows prefers the terms reinforcing and balancing feedbacks.

Examples and applications

In electronics

A vintage style regenerative radio receiver. Due to the controlled use of positive feedback, sufficient amplification can be derived from a single vacuum tube or valve (centre).

Regenerative circuits were invented and patented in 1914 for the amplification and reception of very weak radio signals. Carefully controlled positive feedback around a single transistor amplifier can multiply its gain by 1,000 or more. Therefore, a signal can be amplified 20,000 or even 100,000 times in one stage that would normally have a gain of only 20 to 50. The problem with regenerative amplifiers working at these very high gains is that they easily become unstable and start to oscillate. The radio operator has to be prepared to tweak the amount of feedback fairly continuously for good reception. Superregenerative receivers use even more gain. Modern radio receivers use the superheterodyne design, with many more amplification stages, but much more stable operation and no positive feedback.

The oscillation that can break out in a regenerative radio circuit is used in electronic oscillators. By the use of tuned circuits or a piezoelectric crystal (commonly quartz), the signal that is amplified by the positive feedback remains linear and sinusoidal. There are several designs for such harmonic oscillators, including the Armstrong oscillator, Hartley oscillator, Colpitts oscillator, and the Wien bridge oscillator. They all use positive feedback to create oscillations.

Many electronic circuits, especially amplifiers, incorporate negative feedback. This reduces their gain, but improves their linearity, input impedance, output impedance, and bandwidth, and stabilises all of these parameters, including the closed-loop gain. These parameters also become less dependent on the details of the amplifying device itself, and more dependent on the feedback components, which are less likely to vary with manufacturing tolerance, age and temperature.

The difference between positive and negative feedback for AC signals is one of phase: if the signal is fed back out of phase, the feedback is negative, and if it is in phase, the feedback is positive. One problem for amplifier designers who use negative feedback is that some of the components of the circuit will introduce phase shift in the feedback path. If there is a frequency (usually a high frequency) where the phase shift reaches 180°, then the designer must ensure that the amplifier gain at that frequency is very low (usually by low-pass filtering).

If the loop gain (the product of the amplifier gain and the extent of the positive feedback) at any frequency is greater than one, then the amplifier will oscillate at that frequency (the Barkhausen stability criterion). Such oscillations are sometimes called parasitic oscillations. An amplifier that is stable in one set of conditions can break into parasitic oscillation in another. This may be due to changes in temperature, supply voltage, adjustment of front-panel controls, or even the proximity of a person or other conductive item.
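The role of phase can be made concrete with complex arithmetic. In this sketch (illustrative gain and feedback values, not from the text), a negative-feedback amplifier has closed-loop gain A/(1 + AB); when phase shift in the feedback path rotates AB towards −1, the denominator shrinks and the gain blows up, which is the condition under which parasitic oscillation sets in.

```python
import cmath
import math

def closed_loop_gain(A, B):
    """Closed-loop gain of a negative-feedback amplifier; A and B may be complex."""
    return A / (1 + A * B)

A = 100.0    # forward gain
beta = 0.01  # feedback fraction, so |A*B| = 1

# No phase shift: well-behaved closed-loop gain of A / (1 + AB) = 50
print(abs(closed_loop_gain(A, beta)))

# Nearly 180 degrees of phase shift: A*B approaches -1 and the gain explodes
B_shifted = beta * cmath.exp(1j * math.radians(179))
print(abs(closed_loop_gain(A, B_shifted)))  # in the thousands
```

At exactly 180° with |AB| = 1 the denominator is zero, the intended negative feedback has become positive, and the "amplifier" is an oscillator.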

Amplifiers may oscillate gently in ways that are hard to detect without an oscilloscope, or the oscillations may be so extensive that only a very distorted or no required signal at all gets through, or that damage occurs. Low frequency parasitic oscillations have been called 'motorboating' due to the similarity to the sound of a low-revving exhaust note.

The effect of using a Schmitt trigger (B) instead of a comparator (A)

Many common digital electronic circuits employ positive feedback. Normal simple Boolean logic gates usually rely simply on gain to push digital signal voltages away from intermediate values to the values that are meant to represent Boolean '0' and '1', but many more complex gates use feedback. When an input voltage is expected to vary in an analogue way, but sharp thresholds are required for later digital processing, the Schmitt trigger circuit uses positive feedback to ensure that if the input voltage creeps gently above the threshold, the output is forced smartly and rapidly from one logic state to the other. One of the corollaries of the Schmitt trigger's use of positive feedback is that, should the input voltage move gently down again past the same threshold, the positive feedback will hold the output in the same state with no change. This effect is called hysteresis: the input voltage has to drop past a different, lower threshold to 'un-latch' the output and reset it to its original digital value. By reducing the extent of the positive feedback, the hysteresis width can be reduced, but it cannot entirely be eradicated. The Schmitt trigger is, to some extent, a latching circuit.
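The two-threshold behaviour described above can be modelled in a few lines. This sketch (with hypothetical threshold voltages) keeps one bit of state so that the output switches high only above the upper threshold and low only below the lower one, reproducing the hysteresis:

```python
class SchmittTrigger:
    """Toy Schmitt trigger: output goes high above `high`, low below `low`."""
    def __init__(self, low=1.0, high=2.0):
        self.low, self.high = low, high
        self.state = 0  # current logic output

    def step(self, v):
        if self.state == 0 and v > self.high:
            self.state = 1   # input crossed the upper threshold
        elif self.state == 1 and v < self.low:
            self.state = 0   # input dropped past the lower threshold
        return self.state

trigger = SchmittTrigger()
print([trigger.step(v) for v in (0.0, 2.5, 1.5, 0.5, 1.5)])  # [0, 1, 1, 0, 0]
```

Note that the input 1.5 V, between the two thresholds, produces different outputs on its two appearances: the result depends on history, which is exactly the hysteresis the text describes.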

Positive feedback is a mechanism by which an output is enhanced, such as protein levels. To avoid fluctuations in the protein level, the mechanism is inhibited stochastically (I); when the concentration of the activated protein (A) exceeds the inhibitor threshold [I], the loop mechanism is activated and the concentration of A increases exponentially according to d[A]/dt = k[A].
Illustration of an R-S ('reset-set') flip-flop made from two digital nor gates with positive feedback. Red and black mean logical '1' and '0', respectively.

An electronic flip-flop, or "latch", or "bistable multivibrator", is a circuit that due to high positive feedback is not stable in a balanced or intermediate state. Such a bistable circuit is the basis of one bit of electronic memory. The flip-flop uses a pair of amplifiers, transistors, or logic gates connected to each other so that positive feedback maintains the state of the circuit in one of two unbalanced stable states after the input signal has been removed until a suitable alternative signal is applied to change the state. Computer random access memory (RAM) can be made in this way, with one latching circuit for each bit of memory.
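The NOR-gate latch in the figure can be simulated directly. In this sketch the two cross-coupled gates are re-evaluated until they settle, showing that positive feedback holds the state after both inputs return to 0:

```python
def nor(a, b):
    """Two-input NOR gate on 0/1 values."""
    return int(not (a or b))

def sr_latch(s, r, q, qbar):
    """Settle a cross-coupled NOR latch from state (q, qbar) with inputs S, R."""
    for _ in range(4):  # a few passes are enough to reach a stable state
        q = nor(r, qbar)
        qbar = nor(s, q)
    return q, qbar

q, qbar = sr_latch(1, 0, 0, 1)     # set:   output becomes (1, 0)
q, qbar = sr_latch(0, 0, q, qbar)  # hold:  still (1, 0) -- the memory effect
q, qbar = sr_latch(0, 1, q, qbar)  # reset: output becomes (0, 1)
print(q, qbar)
```

The "hold" line is the essential point: with both inputs at 0, the feedback between the two gates preserves whichever state was last set, which is why one such latch stores one bit.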

Thermal runaway occurs in electronic systems because some aspect of a circuit is allowed to pass more current when it gets hotter, then the hotter it gets, the more current it passes, which heats it some more and so it passes yet more current. The effects are usually catastrophic for the device in question. If devices have to be used near to their maximum power-handling capacity, and thermal runaway is possible or likely under certain conditions, improvements can usually be achieved by careful design.

A phonograph turntable is prone to acoustic feedback.

Audio and video systems can demonstrate positive feedback. If a microphone picks up the amplified sound output of loudspeakers in the same circuit, then howling and screeching sounds of audio feedback (at up to the maximum power capacity of the amplifier) will be heard, as random noise is re-amplified by positive feedback and filtered by the characteristics of the audio system and the room.

Audio and live music

Audio feedback (also known as acoustic feedback, simply as feedback, or the Larsen effect) is a special kind of positive feedback which occurs when a sound loop exists between an audio input (for example, a microphone or guitar pickup) and an audio output (for example, a loudly-amplified loudspeaker). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting sound is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. For small PA systems the sound is readily recognized as a loud squeal or screech.

Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers use various electronic devices, such as equalizers and, since the 1990s, automatic feedback detection devices to prevent these unwanted squeals or screeching sounds, which detract from the audience's enjoyment of the event. On the other hand, since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers and distortion effects have intentionally created guitar feedback to create a desirable musical effect. "I Feel Fine" by the Beatles marks one of the earliest examples of the use of feedback as a recording effect in popular music. It starts with a single, percussive feedback note produced by plucking the A string on Lennon's guitar. Artists such as the Kinks and the Who had already used feedback live, but Lennon remained proud of the fact that the Beatles were perhaps the first group to deliberately put it on vinyl. In one of his last interviews, he said, "I defy anybody to find a record—unless it's some old blues record in 1922—that uses feedback that way."

The principles of audio feedback were first discovered by Danish scientist Søren Absalon Larsen. Microphones are not the only transducers subject to this effect. Phono cartridges can do the same, usually in the low-frequency range below about 100 Hz, manifesting as a low rumble. Jimi Hendrix was an innovator in the intentional use of guitar feedback in his guitar solos to create unique sound effects. He helped develop the controlled and musical use of audio feedback in electric guitar playing, and later Brian May was a famous proponent of the technique.

Video feedback

Similarly, if a video camera is pointed at a monitor screen that is displaying the camera's own signal, then repeating patterns can be formed on the screen by positive feedback. This video feedback effect was used in the opening sequences to the first ten seasons of the television program Doctor Who.

Switches

In electrical switches, including bimetallic strip based thermostats, the switch usually has hysteresis in the switching action. In these cases hysteresis is mechanically achieved via positive feedback within a tipping-point mechanism. The positive feedback action minimises the duration of arcing during switching and also holds the contacts in an open or closed state.

In biology

Positive feedback is the amplification of a body's response to a stimulus. For example, in childbirth, when the head of the fetus pushes up against the cervix (1) it stimulates a nerve impulse from the cervix to the brain (2). When the brain is notified, it signals the pituitary gland to release a hormone called oxytocin (3). Oxytocin is then carried via the bloodstream to the uterus (4), causing contractions that push the fetus towards the cervix, eventually inducing childbirth.

In physiology

A number of examples of positive feedback systems may be found in physiology.

  • One example is the onset of contractions in childbirth, known as the Ferguson reflex. When a contraction occurs, the hormone oxytocin causes a nerve stimulus, which stimulates the hypothalamus to produce more oxytocin, which increases uterine contractions. This results in contractions increasing in amplitude and frequency.
  • Another example is the process of blood clotting. The loop is initiated when injured tissue releases signal chemicals that activate platelets in the blood. An activated platelet releases chemicals to activate more platelets, causing a rapid cascade and the formation of a blood clot.
  • Lactation also involves positive feedback: as the baby suckles on the nipple there is a nerve response into the spinal cord and up into the hypothalamus of the brain, which then stimulates the pituitary gland to produce more prolactin, which in turn produces more milk.
  • A spike in estrogen during the follicular phase of the menstrual cycle causes ovulation.
  • The generation of nerve signals is another example, in which the membrane of a nerve fibre causes slight leakage of sodium ions through sodium channels, resulting in a change in the membrane potential, which in turn causes more opening of channels, and so on (Hodgkin cycle). So a slight initial leakage results in an explosion of sodium leakage which creates the nerve action potential.
  • In excitation–contraction coupling of the heart, an increase in intracellular calcium ions to the cardiac myocyte is detected by ryanodine receptors in the membrane of the sarcoplasmic reticulum which transport calcium out into the cytosol in a positive feedback physiological response.

In most cases, such feedback loops culminate in counter-signals being released that suppress or break the loop. Childbirth contractions stop when the baby is out of the mother's body. Chemicals break down the blood clot. Lactation stops when the baby no longer nurses.[27]

In gene regulation

Positive feedback is a well-studied phenomenon in gene regulation, where it is most often associated with bistability. Positive feedback occurs when a gene activates itself directly or indirectly via a double negative feedback loop. Genetic engineers have constructed and tested simple positive feedback networks in bacteria to demonstrate the concept of bistability. A classic example of positive feedback is the lac operon in E. coli. Positive feedback plays an integral role in cellular differentiation, development, and cancer progression, and therefore, positive feedback in gene regulation can have significant physiological consequences. Random motions in molecular dynamics coupled with positive feedback can trigger interesting effects, such as the creation of populations of phenotypically different cells from the same parent cell. This happens because noise can become amplified by positive feedback. Positive feedback can also occur in other forms of cell signaling, such as enzyme kinetics or metabolic pathways.
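The link between positive feedback and bistability can be illustrated with a toy model (the equation and all parameter values here are illustrative assumptions, not taken from the article): a protein that activates its own production through a steep, saturating response, balanced by first-order decay. Depending on the starting concentration, the system settles into one of two stable states:

```python
def simulate(x0, beta=2.0, delta=1.0, K=0.8, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = beta*x^2/(K^2 + x^2) - delta*x from x0."""
    x = x0
    for _ in range(steps):
        dx = beta * x * x / (K * K + x * x) - delta * x
        x += dt * dx
    return x

print(round(simulate(0.3), 3))  # low start decays to the 'off' state near 0
print(round(simulate(0.5), 3))  # high start switches to the 'on' state near 1.6
```

With these parameters there is an unstable threshold near x = 0.4: concentrations below it collapse to zero, while those above it are amplified by the feedback until decay balances production, which is the bistable switch behaviour discussed in the text.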

In evolutionary biology

Positive feedback loops have been used to describe aspects of the dynamics of change in biological evolution. For example, beginning at the macro level, Alfred J. Lotka (1945) argued that the evolution of the species was most essentially a matter of selection that fed back energy flows to capture more and more energy for use by living systems. At the human level, Richard D. Alexander (1989) proposed that social competition between and within human groups fed back to the selection of intelligence thus constantly producing more and more refined human intelligence. Crespi (2004) discussed several other examples of positive feedback loops in evolution. The analogy of evolutionary arms races provides further examples of positive feedback in biological systems.

During the Phanerozoic the biodiversity shows a steady but not monotonic increase from near zero to several thousands of genera.

It has been shown that changes in biodiversity through the Phanerozoic correlate much better with a hyperbolic model (widely used in demography and macrosociology) than with exponential and logistic models (traditionally used in population biology and extensively applied to fossil biodiversity as well). The latter models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) or by a negative feedback arising from resource limitation. The hyperbolic model implies a second-order positive feedback. The hyperbolic pattern of world population growth has been demonstrated (see below) to arise from a second-order positive feedback between the population size and the rate of technological growth. The hyperbolic character of biodiversity growth can be similarly accounted for by a positive feedback between diversity and community structure complexity. It has been suggested that the similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend (produced by the positive feedback) with cyclical and stochastic dynamics.

Immune system

A cytokine storm, or hypercytokinemia is a potentially fatal immune reaction consisting of a positive feedback loop between cytokines and immune cells, with highly elevated levels of various cytokines. In normal immune function, positive feedback loops can be utilized to enhance the action of B lymphocytes. When a B cell binds its antibodies to an antigen and becomes activated, it begins releasing antibodies and secreting a complement protein called C3. Both C3 and a B cell's antibodies can bind to a pathogen, and when a B cell has its antibodies bind to a pathogen with C3, it speeds up that B cell's secretion of more antibodies and more C3, thus creating a positive feedback loop.

Cell death

Apoptosis is a caspase-mediated process of cellular death, whose aim is the removal of long-lived or damaged cells. A failure of this process has been implicated in prominent conditions such as cancer or Parkinson's disease. The very core of the apoptotic process is the auto-activation of caspases, which may be modelled via a positive-feedback loop. This positive feedback exerts an auto-activation of the effector caspase by means of intermediate caspases. When isolated from the rest of the apoptotic pathway, this positive feedback presents only one stable steady state, regardless of the number of intermediate activation steps of the effector caspase. When this core process is complemented with inhibitors and enhancers of caspase effects, the process presents bistability, thereby modelling the alive and dying states of a cell.

In psychology

Winner (1996) described gifted children as driven by positive feedback loops involving setting their own learning course, this feeding back satisfaction, thus further setting their learning goals to higher levels and so on. Winner termed this positive feedback loop as a rage to master. Vandervert (2009a, 2009b) proposed that the child prodigy can be explained in terms of a positive feedback loop between the output of thinking/performing in working memory, which then is fed to the cerebellum where it is streamlined, and then fed back to working memory thus steadily increasing the quantitative and qualitative output of working memory. Vandervert also argued that this working memory/cerebellar positive feedback loop was responsible for language evolution in working memory.

In economics

Markets with social influence

Product recommendations and information about past purchases have been shown to influence consumers' choices significantly, whether for music, movies, books, technology, or other types of products. Social influence often induces a rich-get-richer phenomenon (the Matthew effect), whereby popular products tend to become even more popular.

Market dynamics

According to the theory of reflexivity advanced by George Soros, price changes are driven by a positive feedback process whereby investors' expectations are influenced by price movements so their behaviour acts to reinforce movement in that direction until it becomes unsustainable, whereupon the feedback drives prices in the opposite direction.

In social media

Platforms such as Facebook and Twitter depend on positive feedback to create interest in topics and drive the take-up of the media. In the age of smartphones and social media, the feedback loop has created a craze for virtual validation in the form of likes, shares, and FOMO (fear of missing out). This is intensified by the use of bots which are designed to respond to particular words or themes and transmit posts more widely.

What is called negative feedback in social media should often be regarded as positive feedback in this context. Outrageous statements and negative comments often produce much more feedback than positive comments.

Systemic risk

Systemic risk is the risk that an amplification or leverage or positive feedback process presents to a system. This is usually unknown, and under certain conditions, this process can amplify exponentially and rapidly lead to destructive or chaotic behaviour. A Ponzi scheme is a good example of a positive-feedback system: funds from new investors are used to pay out unusually high returns, which in turn attract more new investors, causing rapid growth toward collapse. W. Brian Arthur has also studied and written on positive feedback in the economy (e.g. W. Brian Arthur, 1990). Hyman Minsky proposed a theory that certain credit expansion practices could make a market economy into "a deviation amplifying system" that could suddenly collapse, sometimes called a Minsky moment.

Simple systems that clearly separate the inputs from the outputs are not prone to systemic risk. This risk is more likely as the complexity of the system increases because it becomes more difficult to see or analyze all the possible combinations of variables in the system even under careful stress testing conditions. The more efficient a complex system is, the more likely it is to be prone to systemic risks because it takes only a small amount of deviation to disrupt the system. Therefore, well-designed complex systems generally have built-in features to avoid this condition, such as a small amount of friction, or resistance, or inertia, or time delay to decouple the outputs from the inputs within the system. These factors amount to an inefficiency, but they are necessary to avoid instabilities.

The 2010 Flash Crash incident was blamed on the practice of high-frequency trading (HFT), although whether HFT really increases systemic risk remains controversial.

Human population growth

Agriculture and human population can be considered to be in a positive feedback mode: each drives the other with increasing intensity. It has been suggested that this positive feedback system will eventually end in catastrophe, as modern agriculture is using up all of the easily available phosphate and is resorting to highly efficient monocultures, which are more susceptible to systemic risk.

Technological innovation and human population can be similarly considered, and this has been offered as an explanation for the apparent hyperbolic growth of the human population in the past, instead of simpler exponential growth. It is proposed that the growth rate is accelerating because of second-order positive feedback between population and technology. Technological growth increases the carrying capacity of land for people, which leads to a growing population, and this in turn drives further technological growth.
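The difference between first-order feedback (exponential growth) and second-order feedback (hyperbolic growth) can be illustrated with a minimal Euler integration. The parameter values below are arbitrary, chosen only to make the comparison visible; the hyperbolic curve approaches a finite-time singularity at t = 1/(a·N0), which exponential growth never does.

```python
# Compare dN/dt = r*N (first-order feedback, exponential growth) with
# dN/dt = a*N^2 (second-order feedback, hyperbolic growth) using a
# simple explicit Euler integration.

def grow(n0, steps, dt, rate):
    """Euler-integrate dN/dt = rate(N) from N(0) = n0."""
    n = n0
    path = [n]
    for _ in range(steps):
        n += rate(n) * dt
        path.append(n)
    return path

a = r = 0.01
n0 = 1.0
exponential = grow(n0, 80, 1.0, lambda n: r * n)      # first-order feedback
hyperbolic = grow(n0, 80, 1.0, lambda n: a * n * n)   # second-order feedback
```

The second-order curve pulls ahead because each increase in N raises the growth rate itself, not just the increment, so the growth accelerates toward a blow-up rather than settling into a constant doubling time.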

Prejudice, social institutions and poverty

Gunnar Myrdal described a vicious circle of increasing inequalities and poverty, which is known as circular cumulative causation.

James Moody, Assistant Professor at Ohio State University, states that students who self-segregate or grow up in segregated environments have "little meaningful exposure to other races because they never form relationships with students of another race", and as a result "they are viewing other racial groups at a social distance, which can bolster stereotypes". This ultimately causes a positive feedback loop in which segregated groups become more prejudiced, polarized, and segregated from each other, similar to political polarization.

In meteorology

Drought intensifies through positive feedback. A lack of rain decreases soil moisture, which kills plants or causes them to release less water through transpiration. Both factors limit evapotranspiration, the process by which water vapour is added to the atmosphere from the surface, and add dry dust to the atmosphere, which absorbs water. Less water vapour means both lower dew point temperatures and more efficient daytime heating, decreasing the chance that humidity in the atmosphere will lead to cloud formation. Finally, without clouds there cannot be rain, and the loop is complete.

In climatology

Human-caused increases in greenhouse gases stimulate positive feedbacks in global warming. Some effects of global warming can either enhance (positive feedbacks) or inhibit (negative feedbacks) further warming.
 
Globally, wildfires and deforestation have reduced forests' net absorption of greenhouse gases, reducing their effectiveness at mitigating climate change. In recent decades, forest disturbance by fire has increased in most of the planet's forest zones; the growing area, frequency, and severity of forest fires release more greenhouse gases, creating a positive feedback loop that causes further warming.

Climate forcings may push a climate system in the direction of warming or cooling; for example, increased atmospheric concentrations of greenhouse gases cause warming at the surface. Forcings are external to the climate system, while feedbacks are internal processes of the system. Some feedback mechanisms act in relative isolation from the rest of the climate system, while others are tightly coupled. Forcings, feedbacks and the dynamics of the climate system determine how much and how fast the climate changes. The main positive feedback in global warming is the tendency of warming to increase the amount of water vapour in the atmosphere, which in turn leads to further warming. The main negative feedback comes from the Stefan–Boltzmann law: the amount of heat radiated from the Earth into space is proportional to the fourth power of the temperature of Earth's surface and atmosphere.
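The Stefan–Boltzmann negative feedback described above can be sketched numerically. The sketch assumes Earth's effective emission temperature of roughly 255 K, a standard textbook figure; only the black-body law itself is used.

```python
# Numerical sketch of the Stefan-Boltzmann negative feedback.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(temp_k):
    """Black-body flux in W/m^2 at absolute temperature temp_k."""
    return SIGMA * temp_k ** 4

T = 255.0                      # Earth's effective emission temperature, K
flux = radiated_flux(T)        # ~240 W/m^2, balancing absorbed sunlight
feedback = 4 * SIGMA * T ** 3  # dF/dT: extra W/m^2 emitted per kelvin of warming
```

Because emission rises with the fourth power of temperature, any warming increases outgoing radiation, which opposes the warming; the derivative 4σT³ is the strength of that restoring response.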

Other examples of positive feedback subsystems in climatology include:

  • A warmer atmosphere melts ice, changing the albedo (surface reflectivity), which further warms the atmosphere.
  • Methane hydrates can be unstable so that a warming ocean could release more methane, which is also a greenhouse gas.
  • Peat, occurring naturally in peat bogs, contains carbon. When peat dries, it decomposes and may additionally burn, releasing its carbon to the atmosphere as carbon dioxide. Peat also releases nitrous oxide.
  • Global warming affects the cloud distribution. Clouds at higher altitudes enhance the greenhouse effect, while low clouds mainly reflect sunlight back into space, so the two have opposite effects on temperature.

The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report states that "Anthropogenic warming could lead to some effects that are abrupt or irreversible, depending upon the rate and magnitude of the climate change."

In sociology

A self-fulfilling prophecy is a social positive feedback loop between beliefs and behaviour: if enough people believe that something is true, their behaviour can make it true, and observations of their behaviour may in turn increase belief. A classic example is a bank run.

Another sociological example of positive feedback is the network effect: as more people join a network, its reach increases, so the network expands ever more quickly. A viral video is an example of the network effect, in which links to a popular video are shared and redistributed, ensuring that more people see the video and then re-publish the links. This is the basis for many social phenomena, including Ponzi schemes and chain letters. In many cases, population size is the limiting factor to the feedback effect.

In political science

In politics, institutions can reinforce norms, which can subsequently become a source of positive feedback. This rationale is often used to understand public policy processes, which may be broken down into a sequence of events. Self-reinforcing processes are understood to be driven by positive feedback mechanisms (e.g., supportive policy constituencies), while unsuccessful policy processes encounter negative feedback mechanisms (e.g., veto points with veto power).

A comparative illustration of policy feedback can be observed in the economic foreign policies of Brazil and China, particularly in their execution of state capitalism tactics during the 1990s and 2000s. Although both nations initially embraced similar state capitalist ideas, their paths in executing economic policies diverged over time due to distinct incentives. In China, a positive feedback mechanism reinforced previous policies, whereas in Brazil, negative feedback mechanisms compelled the country to abandon state capitalism policies and dynamics.

In chemistry

If a chemical reaction causes the release of heat, and the reaction itself happens faster at higher temperatures, then there is a high likelihood of positive feedback. If the heat produced is not removed from the reactants fast enough, thermal runaway can occur and very quickly lead to a chemical explosion.
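Thermal runaway can be sketched with a minimal simulation, assuming an Arrhenius-type reaction rate and simple Newtonian cooling. All parameter values below are hypothetical, chosen only so that heat release outpaces cooling; no specific reaction is modelled.

```python
import math

# Sketch of thermal runaway: dT/dt = q*A*exp(-Ea/(R*T)) - cooling*(T - T_env).
# Heat release accelerates with temperature (Arrhenius law); once it
# outpaces the linear cooling term, temperature climbs ever faster.

def thermal_runaway(T0=300.0, steps=2000, dt=0.01, A=1e9, Ea=5e4,
                    R=8.314, q=10.0, cooling=0.5, T_env=300.0):
    """Euler-integrate the temperature; stop once it clearly diverges."""
    T = T0
    temps = [T]
    for _ in range(steps):
        heating = q * A * math.exp(-Ea / (R * T))  # accelerates with T
        T += (heating - cooling * (T - T_env)) * dt
        temps.append(T)
        if T > T0 + 500:  # runaway: heat release has overwhelmed cooling
            break
    return temps

temps = thermal_runaway()  # temperature rises slowly at first, then diverges
```

The key feature is the exponential rate law: each degree of warming multiplies the heat release, while cooling grows only linearly with temperature, so beyond a threshold the loop cannot be stopped without removing heat externally.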

In conservation

Many wildlife species are hunted for body parts that can be quite valuable. The closer to extinction a targeted species becomes, the higher the price commanded by its parts, which intensifies hunting pressure and pushes the species still closer to extinction.
