
Monday, November 18, 2024

Climate sensitivity

From Wikipedia, the free encyclopedia
Diagram of factors that determine climate sensitivity. After increasing CO2 levels, there is an initial warming. This warming gets amplified by the net effect of climate feedbacks.

Climate sensitivity is a key measure in climate science and describes how much Earth's surface will warm for a doubling in the atmospheric carbon dioxide (CO2) concentration.  Its formal definition is: "The change in the surface temperature in response to a change in the atmospheric carbon dioxide (CO2) concentration or other radiative forcing." This concept helps scientists understand the extent and magnitude of the effects of climate change.

The Earth's surface warms as a direct consequence of increased atmospheric CO2, as well as increased concentrations of other greenhouse gases such as nitrous oxide and methane. The increasing temperatures have secondary effects on the climate system. These secondary effects are called climate feedbacks. Self-reinforcing feedbacks include for example the melting of sunlight-reflecting ice as well as higher evapotranspiration. The latter effect increases average atmospheric water vapour, which is itself a greenhouse gas.

Scientists do not know exactly how strong these climate feedbacks are. Therefore, it is difficult to predict the precise amount of warming that will result from a given increase in greenhouse gas concentrations. If climate sensitivity turns out to be on the high side of scientific estimates, the Paris Agreement goal of limiting global warming to below 2 °C (3.6 °F) will be even more difficult to achieve.

There are two main kinds of climate sensitivity: the transient climate response is the initial rise in global temperature when CO2 levels double, and the equilibrium climate sensitivity is the larger long-term temperature increase after the planet adjusts to the doubling. Climate sensitivity is estimated by several methods: looking directly at temperature and greenhouse gas concentrations since the Industrial Revolution began around the 1750s, using indirect measurements from the Earth's distant past, and simulating the climate.

Fundamentals

The rate at which energy reaches Earth as sunlight and leaves Earth as heat radiation to space must balance, or the total amount of heat energy on the planet at any one time will rise or fall, which results in a planet that is warmer or cooler overall. A driver of an imbalance between the rates of incoming and outgoing radiation energy is called radiative forcing. A warmer planet radiates heat to space faster and so a new balance is eventually reached, with a higher temperature and stored energy content. However, the warming of the planet also has knock-on effects, which create further warming in an exacerbating feedback loop. Climate sensitivity is a measure of how much temperature change a given amount of radiative forcing will cause.

Radiative forcing

Radiative forcings are generally quantified as Watts per square meter (W/m2) and averaged over Earth's uppermost surface, defined as the top of the atmosphere. The magnitude of a forcing is specific to the physical driver and is defined relative to an accompanying time span of interest for its application. In the context of a contribution to long-term climate sensitivity from 1750 to 2020, the 50% increase in atmospheric CO2 is characterized by a forcing of about +2.1 W/m2. In the context of shorter-term contributions to Earth's energy imbalance (i.e. its heating/cooling rate), time intervals of interest may be as short as the interval between measurement or simulation data samplings, and are thus likely to be accompanied by smaller forcing values. Forcings from such investigations have also been analyzed and reported at decadal time scales.
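As a rough check on these figures, the widely used simplified logarithmic approximation for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m2, reproduces both the ~+2.1 W/m2 figure for a 50% increase and the ~3.7 W/m2 figure for a doubling quoted later in this article. The short Python sketch below uses that approximation; the coefficient 5.35 and the 280 ppm baseline are assumptions taken from the standard simplified expression rather than values stated in this paragraph.

```python
import math

# Simplified logarithmic expression for CO2 radiative forcing:
# delta_F ~= 5.35 * ln(C / C0) W/m2, where C0 is the pre-industrial concentration.
def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m2) from a CO2 concentration change."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A 50% increase over the pre-industrial 280 ppm (i.e. 420 ppm):
print(round(co2_forcing(1.5 * 280), 2))   # ~2.17 W/m2, consistent with the ~+2.1 W/m2 above

# A full doubling (560 ppm):
print(round(co2_forcing(2 * 280), 2))     # ~3.71 W/m2, the canonical ~3.7 W/m2 figure
```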

Radiative forcing leads to long-term changes in global temperature. A number of factors contribute to radiative forcing: increased downwelling radiation from the greenhouse effect, variability in solar radiation from changes in planetary orbit, changes in solar irradiance, direct and indirect effects caused by aerosols (for example changes in albedo from cloud cover), and changes in land use (deforestation or the loss of reflective ice cover). In contemporary research, radiative forcing by greenhouse gases is well understood. As of 2019, large uncertainties remain for aerosols.

Key numbers

Carbon dioxide (CO2) levels rose from 280 parts per million (ppm) in the 18th century, when humans in the Industrial Revolution started burning significant amounts of fossil fuels such as coal, to over 415 ppm by 2020. As CO2 is a greenhouse gas, it hinders heat energy from leaving the Earth's atmosphere. By 2016, atmospheric CO2 levels had increased by 45% over preindustrial levels, and the radiative forcing caused by increased CO2 was already more than 50% higher than in pre-industrial times because of non-linear effects. Between the 18th-century start of the Industrial Revolution and the year 2020, the Earth's temperature rose by a little over one degree Celsius (about two degrees Fahrenheit).

Societal importance

Because the economics of climate change mitigation depend greatly on how quickly carbon neutrality needs to be achieved, climate sensitivity estimates can have important economic and policy-making implications. One study suggests that halving the uncertainty of the value for transient climate response (TCR) could save trillions of dollars. A higher climate sensitivity would mean more dramatic increases in temperature, which makes it more prudent to take significant climate action. If climate sensitivity turns out to be on the high end of what scientists estimate, the Paris Agreement goal of limiting global warming to well below 2 °C cannot be achieved, and temperature increases will exceed that limit, at least temporarily. One study estimated that emissions cannot be reduced fast enough to meet the 2 °C goal if equilibrium climate sensitivity (the long-term measure) is higher than 3.4 °C (6.1 °F). The more sensitive the climate system is to changes in greenhouse gas concentrations, the more likely it is to have decades when temperatures are much higher or much lower than the longer-term average.

Factors that determine sensitivity

The radiative forcing caused by a doubling of atmospheric CO2 levels (from the pre-industrial 280 ppm) is approximately 3.7 watts per square meter (W/m2). In the absence of feedbacks, the energy imbalance would eventually result in roughly 1 °C (1.8 °F) of global warming. That figure is straightforward to calculate by using the Stefan–Boltzmann law and is undisputed.
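As a rough illustration of that calculation, the sketch below linearizes the Stefan–Boltzmann law around an assumed effective emission temperature of about 255 K; that temperature, and the exact constants used, are illustrative assumptions rather than values given in this article.

```python
# No-feedback (Planck) warming from the Stefan-Boltzmann law.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m2 K^4)
T_EFFECTIVE = 255.0      # K, assumed effective radiating temperature of Earth
FORCING_2XCO2 = 3.7      # W/m2, forcing from doubled CO2 (from the text above)

# Linearizing OLR = sigma*T^4 around T_e gives dOLR/dT = 4*sigma*T_e^3,
# so the warming needed to rebalance a forcing dF is roughly dF / (4*sigma*T_e^3).
planck_response = 4 * SIGMA * T_EFFECTIVE**3          # ~3.8 W/m2 per K
delta_t = FORCING_2XCO2 / planck_response
print(round(delta_t, 2))  # ~0.98 degC, i.e. roughly the 1 degC no-feedback warming
```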

A further contribution arises from climate feedbacks, both self-reinforcing and balancing. The uncertainty in climate sensitivity estimates is entirely from the modelling of feedbacks in the climate system, including water vapour feedback, ice–albedo feedback, cloud feedback, and lapse rate feedback. Balancing feedbacks tend to counteract warming by increasing the rate at which energy is radiated to space from a warmer planet. Exacerbating feedbacks increase warming; for example, higher temperatures can cause ice to melt, which reduces the ice area and the amount of sunlight the ice reflects, which in turn results in less heat energy being radiated back into space. Climate sensitivity depends on the balance between those feedbacks.

Types

Schematic of how different measures of climate sensitivity relate to one another

Depending on the time scale, there are two main ways to define climate sensitivity: the short-term transient climate response (TCR) and the long-term equilibrium climate sensitivity (ECS), both of which incorporate the warming from exacerbating feedback loops. They are not discrete categories, but they overlap. Sensitivity to atmospheric CO2 increases is measured as the amount of temperature change for a doubling of the atmospheric CO2 concentration.

Although the term "climate sensitivity" is usually used for the sensitivity to radiative forcing caused by rising atmospheric CO2, it is a general property of the climate system. Other agents can also cause a radiative imbalance. Climate sensitivity is the change in surface air temperature per unit change in radiative forcing, and the climate sensitivity parameter is therefore expressed in units of °C/(W/m2). Climate sensitivity is approximately the same whatever the reason for the radiative forcing (such as from greenhouse gases or solar variation). When climate sensitivity is expressed as the temperature change for a level of atmospheric CO2 double the pre-industrial level, its units are degrees Celsius (°C).

Transient climate response

The transient climate response (TCR) is defined as "the change in the global mean surface temperature, averaged over a 20-year period, centered at the time of atmospheric carbon dioxide doubling, in a climate model simulation" in which the atmospheric CO2 concentration increases at 1% per year. That estimate is generated by using shorter-term simulations. The transient response is lower than the equilibrium climate sensitivity because slower feedbacks, which exacerbate the temperature increase, take more time to respond in full to an increase in the atmospheric CO2 concentration. For instance, the deep ocean takes many centuries to reach a new steady state after a perturbation, during which time it continues to serve as a heat sink that cools the upper ocean. The IPCC literature assessment estimates that the TCR likely lies between 1 °C (1.8 °F) and 2.5 °C (4.5 °F).
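A small arithmetic sketch, assuming only the 1%-per-year compounding described in the definition, shows why the doubling in this idealized experiment occurs around model year 70:

```python
import math

# In the idealized TCR experiment, CO2 rises 1% per year (compounded),
# so doubling occurs after ln(2)/ln(1.01) years.
years_to_doubling = math.log(2) / math.log(1.01)
print(round(years_to_doubling, 1))   # ~69.7 years, i.e. roughly year 70 of the run

# TCR is the global mean warming averaged over the 20-year window centred on that time,
# i.e. approximately model years 60-80 of the simulation.
window = (round(years_to_doubling) - 10, round(years_to_doubling) + 10)
print(window)  # (60, 80)
```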

A related measure is the transient climate response to cumulative carbon emissions (TCRE), which is the globally averaged surface temperature change after 1000 GtC of CO2 has been emitted. As such, it includes not only temperature feedbacks to forcing but also the carbon cycle and carbon cycle feedbacks.

Equilibrium climate sensitivity

The equilibrium climate sensitivity (ECS) is the long-term temperature rise (equilibrium global mean near-surface air temperature) that is expected to result from a doubling of the atmospheric CO2 concentration (ΔT). It is a prediction of the new global mean near-surface air temperature once the CO2 concentration has stopped increasing and most of the feedbacks have had time to have their full effect. Reaching an equilibrium temperature can take centuries or even millennia after CO2 has doubled. ECS is higher than TCR because of the oceans' short-term buffering effects. Computer models are used for estimating the ECS. A comprehensive estimate, which means modelling the whole time span during which significant feedbacks continue to change global temperatures in the model (such as fully equilibrating ocean temperatures), requires running a computer model that covers thousands of years. There are, however, less computing-intensive methods.
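The lag between the transient and equilibrium responses can be illustrated with a zero-dimensional energy-balance sketch, which is far simpler than the models actually used for these estimates; the heat capacity and feedback parameter below are illustrative assumptions chosen only to give plausible magnitudes.

```python
import math

# A zero-dimensional energy-balance sketch (not a real climate model) showing why the
# transient response lags the equilibrium response.
FORCING_2XCO2 = 3.7        # W/m2, forcing at CO2 doubling
FEEDBACK_PARAM = 1.23      # W/m2 per K  -> implies ECS = 3.7 / 1.23 ~= 3.0 degC (assumed)
HEAT_CAPACITY = 8.0e8      # J/m2/K, a crude assumed "mixed layer plus some deep ocean" value
SECONDS_PER_YEAR = 3.15e7

dt = SECONDS_PER_YEAR
temp = 0.0                 # warming relative to pre-industrial, degC
years_to_doubling = math.log(2) / math.log(1.01)

for year in range(1, 501):
    # Forcing is logarithmic in CO2 and CO2 grows at a fixed 1% per year, so the forcing
    # rises roughly linearly in time until doubling, after which it is held fixed.
    frac = min(year / years_to_doubling, 1.0)
    forcing = FORCING_2XCO2 * frac
    imbalance = forcing - FEEDBACK_PARAM * temp        # W/m2 not yet radiated away
    temp += imbalance * dt / HEAT_CAPACITY
    if year == round(years_to_doubling):
        print("warming at doubling (TCR-like):", round(temp, 2), "degC")

print("warming after 500 years (approaching ECS):", round(temp, 2), "degC")
print("prescribed ECS:", round(FORCING_2XCO2 / FEEDBACK_PARAM, 2), "degC")
```

With only a single heat reservoir the transient-to-equilibrium ratio comes out higher than in real models, which include slow deep-ocean uptake, but the qualitative point stands: warming at the time of doubling falls well short of the eventual equilibrium warming.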

The IPCC Sixth Assessment Report (AR6) stated that there is high confidence that ECS is within the range of 2.5 °C to 4 °C, with a best estimate of 3 °C.

The long time scales involved with ECS make it arguably a less relevant measure for policy decisions around climate change.

Effective climate sensitivity

A common approximation to ECS is the effective equilibrium climate sensitivity, an estimate of equilibrium climate sensitivity made using data from a climate system, in a model or in real-world observations, that is not yet in equilibrium. Estimates assume that the net amplification effect of feedbacks, as measured after some period of warming, will remain constant afterwards. That is not necessarily true, as feedbacks can change with time. In many climate models, feedbacks become stronger over time, and so the effective climate sensitivity is lower than the real ECS.

Earth system sensitivity

By definition, equilibrium climate sensitivity does not include feedbacks that take millennia to emerge, such as long-term changes in Earth's albedo because of changes in ice sheets and vegetation. It includes the slow response of the deep oceans' warming, which also takes millennia, and so ECS fails to reflect the actual future warming that would occur if CO2 is stabilized at double pre-industrial values. Earth system sensitivity (ESS) incorporates the effects of these slower feedback loops, such as the change in Earth's albedo from the melting of large continental ice sheets (which covered much of the Northern Hemisphere during the Last Glacial Maximum and still cover Greenland and Antarctica). Changes in albedo as a result of changes in vegetation, as well as changes in ocean circulation, are also included. The longer-term feedback loops make the ESS larger than the ECS, possibly twice as large. Data from the geological history of Earth is used in estimating ESS. Differences between modern and long-ago climatic conditions mean that estimates of the future ESS are highly uncertain. Unlike ECS and TCR, the carbon cycle is not included in the definition of the ESS, but all other elements of the climate system are included.

Sensitivity to nature of forcing

Different forcing agents, such as greenhouse gases and aerosols, can be compared using their radiative forcing, the initial radiative imbalance averaged over the entire globe. Climate sensitivity is the amount of warming per unit of radiative forcing. To a first approximation, the cause of the radiative imbalance does not matter, whether it is greenhouse gases or something else. However, radiative forcing from sources other than CO2 can cause a somewhat larger or smaller surface warming than a similar radiative forcing from CO2. The amount of feedback varies, mainly because the forcings are not uniformly distributed over the globe. Forcings that initially warm the Northern Hemisphere, land, or polar regions are systematically more effective at changing temperatures than an equivalent forcing from CO2, which is more uniformly distributed over the globe. That is because those regions have more self-reinforcing feedbacks, such as the ice–albedo feedback. Several studies indicate that human-emitted aerosols are more effective than CO2 at changing global temperatures, and volcanic forcing is less effective. When climate sensitivity to CO2 forcing is estimated by using historical temperature and forcing (caused by a mix of aerosols and greenhouse gases), and that effect is not taken into account, climate sensitivity is underestimated.

State dependence

Artist's impression of a Snowball Earth.

Climate sensitivity has been defined as the short- or long-term temperature change resulting from any doubling of CO2, but there is evidence that the sensitivity of Earth's climate system is not constant. For instance, the planet has polar ice and high-altitude glaciers. Until the world's ice has completely melted, an exacerbating ice–albedo feedback loop makes the system more sensitive overall. Throughout Earth's history, there are thought to have been multiple periods during which snow and ice covered almost the entire globe. In most models of "Snowball Earth", parts of the tropics were at least intermittently free of ice cover. As the ice advanced or retreated, climate sensitivity must have been very high, as the large changes in area of ice cover would have made for a very strong ice–albedo feedback. Volcanic changes in atmospheric composition are thought to have provided the radiative forcing needed to escape the snowball state.

Equilibrium climate sensitivity can change with climate.

Throughout the Quaternary period (the most recent 2.58 million years), climate has oscillated between glacial periods, the most recent one being the Last Glacial Maximum, and interglacial periods, the most recent one being the current Holocene, but the period's climate sensitivity is difficult to determine. The Paleocene–Eocene Thermal Maximum, about 55.5 million years ago, was unusually warm and may have been characterized by above-average climate sensitivity.

Climate sensitivity may further change if tipping points are crossed. It is unlikely that tipping points will cause short-term changes in climate sensitivity. If a tipping point is crossed, climate sensitivity is expected to change at the time scale of the subsystem that hits its tipping point. Especially if there are multiple interacting tipping points, the transition of climate to a new state may be difficult to reverse.

The two most common definitions of climate sensitivity specify the climate state: the ECS and the TCR are defined for a doubling with respect to the CO2 levels in the pre-industrial era. Because of potential changes in climate sensitivity, the climate system may warm by a different amount after a second doubling of CO2 from after a first doubling. The effect of any change in climate sensitivity is expected to be small or negligible in the first century after additional CO2 is released into the atmosphere.

Estimation

Using Industrial Age (1750–present) data

Climate sensitivity can be estimated using the observed temperature increase, the observed ocean heat uptake, and the modelled or observed radiative forcing. The data are linked through a simple energy-balance model to calculate climate sensitivity. Radiative forcing is often modelled because Earth observation satellites that measure it have existed for only part of the Industrial Age (only since the late 1950s). Estimates of climate sensitivity calculated by using these global energy constraints have consistently been lower than those calculated by using other methods, around 2 °C (3.6 °F) or lower.

Estimates of transient climate response (TCR) that have been calculated from models and observational data can be reconciled if it is taken into account that fewer temperature measurements are taken in the polar regions, which warm more quickly than the Earth as a whole. If only regions for which measurements are available are used in evaluating the model, the differences in TCR estimates are negligible.

A very simple climate model could estimate climate sensitivity from Industrial Age data by waiting for the climate system to reach equilibrium and then by measuring the resulting warming, ΔTeq (°C). Computation of the equilibrium climate sensitivity, S (°C), using the radiative forcing ΔF (W/m2) and the measured temperature rise, would then be possible. The radiative forcing resulting from a doubling of CO2, F2CO2, is relatively well known, at about 3.7 W/m2. Combining that information results in this equation:

S = F2CO2 × ΔTeq / ΔF

However, the climate system is not in equilibrium, since the actual warming lags the equilibrium warming, largely because the oceans take up heat and will take centuries or millennia to reach equilibrium. Estimating climate sensitivity from Industrial Age data requires an adjustment to the equation above. The actual forcing felt by the atmosphere is the radiative forcing minus the ocean's heat uptake, H (W/m2), and so climate sensitivity can be estimated as:

S = F2CO2 × ΔT / (ΔF − H)

The global temperature increase between the beginning of the Industrial Period (taken as 1750) and 2011 was about 0.85 °C (1.53 °F). In 2011, the radiative forcing from CO2 and other long-lived greenhouse gases (mainly methane, nitrous oxide, and chlorofluorocarbons) that have been emitted since the 18th century was roughly 2.8 W/m2. The climate forcing, ΔF, also contains contributions from solar activity (+0.05 W/m2), aerosols (−0.9 W/m2), ozone (+0.35 W/m2), and other smaller influences, which brings the total forcing over the Industrial Period to 2.2 W/m2, according to the best estimate of the IPCC Fifth Assessment Report in 2014, with substantial uncertainty. The ocean heat uptake, estimated by the same report to be 0.42 W/m2, yields a value for S of 1.8 °C (3.2 °F).
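A minimal sketch plugging the numbers quoted above into the adjusted relation S = F2CO2 × ΔT / (ΔF − H):

```python
# Values are the ones given in the text, not new data.
F2CO2 = 3.7        # W/m2, forcing from a doubling of CO2
dT = 0.85          # degC, observed warming 1750-2011
dF = 2.2           # W/m2, total forcing over the Industrial Period (best estimate)
H = 0.42           # W/m2, ocean heat uptake

S = F2CO2 * dT / (dF - H)
print(round(S, 1))   # ~1.8 degC, matching the value derived in the text
```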

Other strategies

In theory, Industrial Age temperatures could also be used to determine a time scale for the temperature response of the climate system and thus climate sensitivity: if the effective heat capacity of the climate system is known, and the timescale is estimated using autocorrelation of the measured temperature, an estimate of climate sensitivity can be derived. In practice, however, the simultaneous determination of the time scale and heat capacity is difficult.
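A sketch of the idea, assuming a zero-dimensional energy balance C dT/dt = F − αT: if both the effective heat capacity C and the response timescale τ were known, the feedback parameter α = C/τ, and hence the sensitivity, would follow. The numbers below are illustrative assumptions, not observational values.

```python
# If the effective heat capacity and the response timescale were both known,
# the implied equilibrium sensitivity would be S = F2CO2 / alpha = F2CO2 * tau / C.
# As the text notes, estimating both quantities simultaneously from observations is difficult.
F2CO2 = 3.7                 # W/m2
SECONDS_PER_YEAR = 3.15e7

effective_heat_capacity = 8.0e8                      # J/m2/K (assumed)
response_timescale = 25 * SECONDS_PER_YEAR           # s (assumed, e.g. from autocorrelation)

alpha = effective_heat_capacity / response_timescale # W/m2 per K
print(round(F2CO2 / alpha, 1))                       # implied sensitivity, degC per doubling (~3.6)
```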

Attempts have been made to use the 11-year solar cycle to constrain the transient climate response. Solar irradiance is about 0.9 W/m2 higher during a solar maximum than during a solar minimum, and that effect can be observed in measured average global temperatures from 1959 to 2004. Unfortunately, the solar minima in the period coincided with volcanic eruptions, which have a cooling effect on the global temperature. Because the eruptions caused a larger and less well-quantified decrease in radiative forcing than the reduced solar irradiance, it is questionable whether useful quantitative conclusions can be derived from the observed temperature variations.

Observations of volcanic eruptions have also been used to try to estimate climate sensitivity, but as the aerosols from a single eruption last at most a couple of years in the atmosphere, the climate system can never come close to equilibrium, and there is less cooling than there would be if the aerosols stayed in the atmosphere for longer. Therefore, volcanic eruptions give information only about a lower bound on transient climate sensitivity.

Using data from Earth's past

Historical climate sensitivity can be estimated by using reconstructions of Earth's past temperatures and CO2 levels. Paleoclimatologists have studied different geological periods, such as the warm Pliocene (5.3 to 2.6 million years ago) and the colder Pleistocene (2.6 million to 11,700 years ago), and sought periods that are in some way analogous to or informative about current climate change. Climates further back in Earth's history are more difficult to study because fewer data are available about them. For instance, past CO2 concentrations can be derived from air trapped in ice cores, but as of 2020, the oldest continuous ice core is less than one million years old. Recent periods, such as the Last Glacial Maximum (LGM) (about 21,000 years ago) and the Mid-Holocene (about 6,000 years ago), are often studied, especially when more information about them becomes available.

A 2007 estimate of sensitivity made using data from the most recent 420 million years is consistent with the sensitivities of current climate models and with other determinations. The Paleocene–Eocene Thermal Maximum (about 55.5 million years ago), a 20,000-year period during which massive amounts of carbon entered the atmosphere and average global temperatures increased by approximately 6 °C (11 °F), also provides a good opportunity to study the climate system when it was in a warm state. Studies of the last 800,000 years have concluded that climate sensitivity was greater in glacial periods than in interglacial periods.

As the name suggests, the Last Glacial Maximum was much colder than today, and good data on atmospheric CO2 concentrations and radiative forcing from that period are available. The period's orbital forcing was different from today's but had little effect on mean annual temperatures. Estimating climate sensitivity from the Last Glacial Maximum can be done in several different ways. One way is to use estimates of global radiative forcing and temperature directly. The set of feedback mechanisms active during the period, however, may be different from the feedbacks caused by a present doubling of CO2, which introduces additional uncertainty. In a different approach, a model of intermediate complexity is used to simulate conditions during the period. Several versions of this single model are run, with different values chosen for uncertain parameters, such that each version has a different ECS. Outcomes that best simulate the LGM's observed cooling probably produce the most realistic ECS values.

Using climate models

Histogram of equilibrium climate sensitivity as derived for different plausible assumptions
Frequency distribution of equilibrium climate sensitivity based on simulations of the doubling of CO2. Each model simulation has different estimates for processes which scientists do not sufficiently understand. Few of the simulations result in less than 2 °C (3.6 °F) of warming or significantly more than 4 °C (7.2 °F). However, the positive skew, which is also found in other studies, suggests that if carbon dioxide concentrations double, the probability of large or very large increases in temperature is greater than the probability of small increases.

Climate models simulate the CO2-driven warming of the future as well as the past. They operate on principles similar to those underlying models that predict the weather, but they focus on longer-term processes. Climate models typically begin with a starting state and then apply physical laws and knowledge about biology to generate subsequent states. As with weather modelling, no computer has the power to model the complexity of the entire planet and so simplifications are used to reduce that complexity to something manageable. An important simplification divides Earth's atmosphere into model cells. For instance, the atmosphere might be divided into cubes of air ten or one hundred kilometers on a side. Each model cell is treated as if it were homogeneous. Calculations for model cells are much faster than trying to simulate each molecule of air separately.
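As a rough sense of scale, the sketch below counts how many columns and cells such a grid would contain; the assumed number of vertical levels (about 50) is an illustrative assumption, not a figure from this article.

```python
import math

# Rough count of model cells for the horizontal resolutions mentioned above.
EARTH_RADIUS_M = 6.371e6
surface_area = 4 * math.pi * EARTH_RADIUS_M**2      # ~5.1e14 m2

for cell_edge_km in (100, 10):
    columns = surface_area / (cell_edge_km * 1000) ** 2
    cells = columns * 50                             # assume ~50 vertical levels
    print(f"{cell_edge_km} km cells: ~{columns:,.0f} columns, ~{cells:,.0f} cells")
```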

A lower model resolution (large model cells and long time steps) takes less computing power but cannot simulate the atmosphere in as much detail. A model cannot simulate processes smaller than the model cells or shorter than a single time step. The effects of the smaller-scale and shorter-term processes must therefore be estimated by using other methods. Physical laws contained in the models may also be simplified to speed up calculations. The biosphere must be included in climate models. The effects of the biosphere are estimated by using data on the average behaviour of the average plant assemblage of an area under the modelled conditions. Climate sensitivity is therefore an emergent property of these models. It is not prescribed, but it follows from the interaction of all the modelled processes.

To estimate climate sensitivity, a model is run by using a variety of radiative forcings (doubling quickly, doubling gradually, or following historical emissions) and the temperature results are compared to the forcing applied. Different models give different estimates of climate sensitivity, but they tend to fall within a similar range, as described above.
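One widely used technique of this kind regresses the modelled top-of-atmosphere imbalance against the modelled warming after an abrupt forcing (a "Gregory plot") and extrapolates to zero imbalance. The sketch below applies that idea to synthetic data generated from an assumed linear feedback; it is illustrative only and not output from any real model.

```python
import numpy as np

# Synthetic "model output": N = F - alpha*dT plus noise, purely to illustrate the procedure.
rng = np.random.default_rng(0)
F = 3.7                      # W/m2, abrupt forcing (CO2 doubling)
alpha_true = 1.2             # W/m2 per K, assumed feedback parameter

dT = np.linspace(0.3, 2.5, 100)                         # warming over the simulated years
N = F - alpha_true * dT + rng.normal(0, 0.15, dT.size)  # noisy top-of-atmosphere imbalance

slope, intercept = np.polyfit(dT, N, 1)                 # fit N = intercept + slope*dT
ecs_estimate = -intercept / slope                       # warming at which N crosses zero
print(round(ecs_estimate, 2))                           # close to F/alpha_true ~= 3.1 degC
```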

Testing, comparisons, and climate ensembles

Modelling of the climate system can lead to a wide range of outcomes. Models are often run using different plausible parameters in their approximation of physical laws and the behaviour of the biosphere; such a set of runs forms a perturbed physics ensemble, which attempts to model the sensitivity of the climate to different types and amounts of change in each parameter. Alternatively, structurally different models developed at different institutions are put together, creating an ensemble. By selecting only the simulations that can simulate some part of the historical climate well, a constrained estimate of climate sensitivity can be made. One strategy for obtaining more accurate results is placing more emphasis on climate models that perform well in general.

A model is tested using observations, paleoclimate data, or both to see if it replicates them accurately. If it does not, inaccuracies in the physical model and parametrizations are sought, and the model is modified. For models used to estimate climate sensitivity, specific test metrics that are directly and physically linked to climate sensitivity are sought. Examples of such metrics are the global patterns of warming, the ability of a model to reproduce observed relative humidity in the tropics and subtropics, patterns of heat radiation, and the variability of temperature around long-term historical warming. Ensemble climate models developed at different institutions tend to produce constrained estimates of ECS that are slightly higher than 3 °C (5.4 °F). The models with ECS slightly above 3 °C (5.4 °F) simulate the above situations better than models with a lower climate sensitivity.

Many projects and groups exist to compare and to analyse the results of multiple models. For instance, the Coupled Model Intercomparison Project (CMIP) has been running since the 1990s.

Historical estimates

Svante Arrhenius in the 19th century was the first person to quantify global warming as a consequence of a doubling of the concentration of CO2. In his first paper on the matter, he estimated that global temperature would rise by around 5 to 6 °C (9.0 to 10.8 °F) if the quantity of CO2 was doubled. In later work, he revised that estimate to 4 °C (7.2 °F). Arrhenius used Samuel Pierpont Langley's observations of radiation emitted by the full moon to estimate the amount of radiation that was absorbed by water vapour and by CO2. To account for water vapour feedback, he assumed that relative humidity would stay the same under global warming.

The first calculation of climate sensitivity that used detailed measurements of absorption spectra, as well as the first calculation to use a computer for numerical integration of the radiative transfer through the atmosphere, was performed by Syukuro Manabe and Richard Wetherald in 1967. Assuming constant humidity, they computed an equilibrium climate sensitivity of 2.3 °C per doubling of CO2, which they rounded to 2 °C in the abstract of the paper, the value most often quoted from their work. The work has been called "arguably the greatest climate-science paper of all time" and "the most influential study of climate of all time."

A committee on anthropogenic global warming, convened in 1979 by the United States National Academy of Sciences and chaired by Jule Charney, estimated equilibrium climate sensitivity to be 3 °C (5.4 °F), plus or minus 1.5 °C (2.7 °F). The Manabe and Wetherald estimate (2 °C (3.6 °F)), James E. Hansen's estimate of 4 °C (7.2 °F), and Charney's model were the only models available in 1979. According to Manabe, speaking in 2004, "Charney chose 0.5 °C as a reasonable margin of error, subtracted it from Manabe's number, and added it to Hansen's, giving rise to the 1.5 to 4.5 °C (2.7 to 8.1 °F) range of likely climate sensitivity that has appeared in every greenhouse assessment since ...." In 2008, climatologist Stefan Rahmstorf said: "At that time [it was published], the [Charney report estimate's] range [of uncertainty] was on very shaky ground. Since then, many vastly improved models have been developed by a number of climate research centers around the world."

Assessment reports of IPCC

diagram showing five historical estimates of equilibrium climate sensitivity by the IPCC
Historical estimates of climate sensitivity from the IPCC assessments. The first three reports gave a qualitative likely range, and the fourth and the fifth assessment report formally quantified the uncertainty. The dark blue range is judged as being more than 66% likely.

Despite considerable progress in the understanding of Earth's climate system, assessments continued to report similar uncertainty ranges for climate sensitivity for some time after the 1979 Charney report. The First Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), published in 1990, estimated that equilibrium climate sensitivity to a doubling of CO2 lay between 1.5 and 4.5 °C (2.7 and 8.1 °F), with a "best guess in the light of current knowledge" of 2.5 °C (4.5 °F). The report used models with simplified representations of ocean dynamics. The IPCC supplementary report of 1992, which used full-ocean circulation models, saw "no compelling reason to warrant changing" the 1990 estimate, and the IPCC Second Assessment Report stated, "No strong reasons have emerged to change [these estimates]." In the reports, much of the uncertainty around climate sensitivity was attributed to insufficient knowledge of cloud processes. The 2001 IPCC Third Assessment Report also retained this likely range.

Authors of the 2007 IPCC Fourth Assessment Report stated that confidence in estimates of equilibrium climate sensitivity had increased substantially since the Third Assessment Report. The IPCC authors concluded that ECS is very likely to be greater than 1.5 °C (2.7 °F) and likely to lie in the range 2 to 4.5 °C (3.6 to 8.1 °F), with a most likely value of about 3 °C (5.4 °F). The IPCC stated that fundamental physical reasons and data limitations prevent a climate sensitivity higher than 4.5 °C (8.1 °F) from being ruled out, but the climate sensitivity estimates in the likely range agreed better with observations and the proxy climate data.

The 2013 IPCC Fifth Assessment Report reverted to the earlier range of 1.5 to 4.5 °C (2.7 to 8.1 °F) (with high confidence), because some estimates using industrial-age data came out low. The report also stated that ECS is extremely unlikely to be less than 1 °C (1.8 °F) (high confidence), and it is very unlikely to be greater than 6 °C (11 °F) (medium confidence). Those values were estimated by combining the available data with expert judgement.

In preparation for the 2021 IPCC Sixth Assessment Report, a new generation of climate models was developed by scientific groups around the world. Across 27 global climate models, higher estimates of climate sensitivity were produced. The values spanned 1.8 to 5.6 °C (3.2 to 10.1 °F) and exceeded 4.5 °C (8.1 °F) in 10 of them. The estimates for equilibrium climate sensitivity changed from 3.2 °C to 3.7 °C, and the estimates for the transient climate response from 1.8 °C to 2.0 °C. The cause of the increased ECS lies mainly in improved modelling of clouds. Temperature rises are now believed to cause sharper decreases in the number of low clouds, and fewer low clouds means more sunlight is absorbed by the planet and less reflected to space.

Remaining deficiencies in the simulation of clouds may have led to overestimates, as models with the highest ECS values were not consistent with observed warming. A fifth of the models began to 'run hot', predicting that global warming would produce significantly higher temperatures than is considered plausible. According to these models, known as hot models, average global temperatures in the worst-case scenario would rise by more than 5 °C above preindustrial levels by 2100, with a "catastrophic" impact on human society. In comparison, empirical observations combined with physics models indicate that the "very likely" range is between 2.3 and 4.7 °C. Models with a very high climate sensitivity are also known to be poor at reproducing known historical climate trends, such as warming over the 20th century or cooling during the last ice age. For these reasons the predictions of hot models are considered implausible, and have been given less weight by the IPCC in 2022.

Vascular dementia

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Vascular_dementia
 
Vascular dementia
Other names: Dementia due to cerebrovascular disease; vascular cognitive impairment

Brain atrophy from vascular dementia

Vascular dementia is dementia caused by a series of strokes. Restricted blood flow due to strokes reduces oxygen and glucose delivery to the brain, causing cell injury and neurological deficits in the affected region. Subtypes of vascular dementia include subcortical vascular dementia, multi-infarct dementia, stroke-related dementia, and mixed dementia.

Subcortical vascular dementia occurs from damage to small blood vessels in the brain. Multi-infarct dementia results from a series of small strokes affecting several brain regions. Stroke-related dementia involving successive small strokes causes a more gradual decline in cognition. Dementia may occur when neurodegenerative and cerebrovascular pathologies are mixed, as in susceptible elderly people (75 years and older). Cognitive decline can be traced back to occurrence of successive strokes.

ICD-11 lists vascular dementia as dementia due to cerebrovascular disease. DSM-5 lists vascular dementia as either major or mild vascular neurocognitive disorder.

Signs and symptoms

People with vascular dementia present with progressive cognitive impairment, acutely or subacutely (as in mild cognitive impairment), frequently in a step-wise fashion, after multiple strokes.

The disease is described as both a mental and behavioral disorder within the ICD-11. Signs and symptoms are cognitive, motor, behavioral, and for a significant proportion of people, also affective. These changes typically occur over a period of 5–10 years. Signs are typically the same as in other dementias, but mainly include cognitive decline and memory impairment of sufficient severity as to interfere with activities of daily living, sometimes with presence of focal neurological signs, and evidence of features consistent with cerebrovascular disease on brain imaging (CT or MRI).

The neurological signs localizing to certain areas of the brain that can be observed are hemiparesis, bradykinesia, hyperreflexia, extensor plantar reflexes, ataxia, pseudobulbar palsy, as well as gait problems and swallowing difficulties. People have patchy deficits in terms of cognitive testing. They tend to have better free recall and fewer recall intrusions when compared with people having Alzheimer's disease. In the more severely affected people, or those affected by infarcts in Wernicke's or Broca's areas, specific problems with speaking called dysarthria and aphasias may be present.

In small vessel disease, the frontal lobes are often affected. Consequently, people with vascular dementia tend to perform worse than their Alzheimer's disease counterparts in frontal lobe tasks, such as verbal fluency, and may present with frontal lobe problems: apathy, abulia (lack of will or initiative), problems with attention, orientation, and urinary incontinence. They tend to exhibit more perseverative behavior. People with vascular dementia may also present with general slowing of processing ability, difficulty shifting sets, and impairment in abstract thinking. Apathy early in the disease is more suggestive of vascular dementia.

Rare genetic disorders that cause vascular lesions in the brain have other presentation patterns. As a rule, they tend to occur earlier in life and have a more aggressive course. In addition, infectious disorders, such as syphilis, can cause arterial damage, strokes, and bacterial inflammation of the brain.

Causes

Risk factors and clinical characteristics for vascular dementia

Vascular dementia can be caused by ischemic or hemorrhagic infarcts affecting multiple brain areas, including the anterior cerebral artery territory, the parietal lobes, or the cingulate gyrus. On rare occasion, infarcts in the hippocampus or thalamus are the cause of dementia. A history of stroke increases the risk of developing dementia by around 70%, and recent stroke increases the risk by around 120%. Brain vascular lesions can also be the result of diffuse cerebrovascular disease, such as small vessel disease.

Risk factors

Risk factors for vascular dementia include increasing age, hypertension, smoking, hypercholesterolemia, diabetes mellitus, cardiovascular disease, and cerebrovascular disease. Other risk factors include lifestyle, geographic origin, and APOE-ε4 genotype.

Vascular dementia can sometimes be triggered by cerebral amyloid angiopathy, which involves accumulation of amyloid beta plaques in the walls of the cerebral arteries, leading to breakdown and rupture of the vessels. Since amyloid plaques are a characteristic feature of Alzheimer's disease, vascular dementia may occur as a consequence.

Diagnosis

Several specific diagnostic criteria can be used to diagnose vascular dementia, including the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria, the International Classification of Diseases, Tenth Edition (ICD-10) criteria, the National Institute of Neurological Disorders and Stroke and Association Internationale pour la Recherche et l'Enseignement en Neurosciences (NINDS-AIREN) criteria, the Alzheimer's Disease Diagnostic and Treatment Center criteria, and the Hachinski Ischemic Score (after Vladimir Hachinski).

The recommended investigations for cognitive impairment include: blood tests (for anemia, vitamin deficiency, thyrotoxicosis, infection, among others), a chest X-ray, an ECG, and neuroimaging, preferably a scan with a functional or metabolic sensitivity beyond a simple CT or MRI. When available as a diagnostic tool, single photon emission computed tomography (SPECT) and positron emission tomography (PET) neuroimaging may be used to confirm a diagnosis of multi-infarct dementia in conjunction with evaluations involving mental status examination.

In a person already having dementia, SPECT appears to be superior in differentiating multi-infarct dementia from Alzheimer's disease, compared to the usual mental testing and medical history analysis.

The screening blood tests typically include full blood count, liver function tests, thyroid function tests, lipid profile, erythrocyte sedimentation rate, C reactive protein, syphilis serology, calcium serum level, fasting glucose, urea, electrolytes, vitamin B-12, and folate.

Differential diagnosis

Differentiating dementia syndromes can be challenging, due to the frequently overlapping clinical features and related underlying pathology. Mixed dementia, involving two types of dementia, can occur. In particular, Alzheimer's disease often co-occurs with vascular dementia.

Mixed dementia is diagnosed when people have evidence of Alzheimer's disease and cerebrovascular disease, either clinically or based on neuro-imaging evidence of ischemic lesions.

Pathology

Gross examination of the brain may reveal noticeable lesions and damage to blood vessels. Accumulations of various substances, such as lipid deposits and clotted blood, appear on microscopic views. The white matter is substantially affected, with noticeable atrophy (tissue loss), in addition to calcification of the arteries. Microinfarcts may also be present in the gray matter (cerebral cortex), sometimes in large numbers.

Although atheroma of the major cerebral arteries is typical in vascular dementia, smaller vessels and arterioles are mainly affected.

Prevention

Early detection and accurate diagnosis are important, as vascular dementia is at least partially preventable. Ischemic changes in the brain are irreversible, but the person with vascular dementia can demonstrate periods of stability or even mild improvement. Since stroke is an essential part of vascular dementia, the goal is to prevent new strokes. This is attempted through reduction of stroke risk factors, such as high blood pressure, high blood lipid levels, atrial fibrillation, or diabetes mellitus.

Medications for high blood pressure are used to prevent pre-stroke dementia. These medications include angiotensin converting enzyme inhibitors, diuretics, calcium channel blockers, sympathetic nerve inhibitors, angiotensin II receptor antagonists or adrenergic antagonists.

A 2023 review found that therapy with statin drugs was ineffective in treating or preventing stroke or dementia in people without a history of cerebrovascular disease.

Treatment

As of 2024, there are no medications used specifically for prevention or treatment of vascular dementia.

Prognosis

Many studies have been conducted to determine average survival of people with dementia. The studies were frequently small and limited, which caused contradictory results in the connection of mortality to the type of dementia and the person's gender. One 2015 study found that the one-year mortality was three to four times higher in people after their first referral to a day clinic for dementia, when compared to the general population. If the person was hospitalized for dementia, the mortality was even higher than in people hospitalized for cardiovascular disease. Vascular dementia was found to have either comparable or worse survival rates when compared to Alzheimer's disease; another 2014 study found that the prognosis for people with vascular dementia was worse for male and older people.

Vascular dementia may be a direct cause of death due to the possibility of a fatal interruption in the brain's blood supply.

Epidemiology

Vascular dementia is the second-most-common form of dementia after Alzheimer's disease in older adults. The prevalence of the illness is 1.5% in Western countries and approximately 2.2% in Japan. It accounts for 50% of all dementias in Japan, 20% to 40% in Europe and 15% in Latin America. 25% of people with stroke develop new-onset dementia within one year of their stroke. One study found that in the United States, the prevalence of vascular dementia in all people over the age of 71 is 2.43%, and another found that the prevalence of the dementias doubles with every 5.1 years of age.

The incidence peaks between the fourth and the seventh decades of life and 80% of people have a history of hypertension.

A 2018 meta-analysis identified 36 studies of prevalent stroke (1.9 million participants) and 12 studies of incident stroke (1.3 million participants). For prevalent stroke, the pooled hazard ratio for all-cause dementia was 1.69; for incident stroke, the pooled risk ratio was 2.18. Study characteristics did not modify these associations, with the exception of sex, which explained 50.2% of between-study heterogeneity for prevalent stroke. These results confirm that stroke is a strong, independent, and potentially modifiable risk factor for all-cause dementia.

Earth's energy budget

From Wikipedia, the free encyclopedia
Earth's energy balance and imbalance, showing where the excess energy goes: Outgoing radiation is decreasing owing to increasing greenhouse gases in the atmosphere, leading to Earth's energy imbalance of about 460 TW. The percentage going into each domain of the climate system is also indicated.

Earth's energy budget (or Earth's energy balance) is the balance between the energy that Earth receives from the Sun and the energy the Earth loses back into outer space. Smaller energy sources, such as Earth's internal heat, are taken into consideration, but make a tiny contribution compared to solar energy. The energy budget also takes into account how energy moves through the climate system. The Sun heats the equatorial tropics more than the polar regions. Therefore, the amount of solar irradiance received by a certain region is unevenly distributed. As the energy seeks equilibrium across the planet, it drives interactions in Earth's climate system, i.e., Earth's water, ice, atmosphere, rocky crust, and all living things. The result is Earth's climate.

Earth's energy budget depends on many factors, such as atmospheric aerosols, greenhouse gases, surface albedo, clouds, and land use patterns. When the incoming and outgoing energy fluxes are in balance, Earth is in radiative equilibrium and the climate system will be relatively stable. Global warming occurs when Earth receives more energy than it gives back to space, and global cooling takes place when the outgoing energy is greater.

Multiple types of measurements and observations show a warming imbalance since at least 1970. The rate of heating from this human-caused event is without precedent. The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere. During 2005 to 2019, the Earth's energy imbalance (EEI) averaged about 460 TW, or globally 0.90±0.15 W/m2.

It takes time for any changes in the energy budget to result in any significant changes in the global surface temperature. This is due to the thermal inertia of the oceans, land and cryosphere. Most climate models make accurate calculations of this inertia, energy flows and storage amounts.

Definition

Earth's energy budget includes the "major energy flows of relevance for the climate system". These are "the top-of-atmosphere energy budget; the surface energy budget; changes in the global energy inventory and internal flows of energy within the climate system".

Earth's energy flows

In spite of the enormous transfers of energy into and from the Earth, it maintains a relatively constant temperature because, as a whole, there is little net gain or loss: Earth emits via atmospheric and terrestrial radiation (shifted to longer electromagnetic wavelengths) to space about the same amount of energy as it receives via solar insolation (all forms of electromagnetic radiation).

The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere, amounting to about 460 TW or globally 0.90±0.15 W/m2.

Incoming solar energy (shortwave radiation)

The total amount of energy received per second at the top of Earth's atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of the Earth that intercepts the radiation. Because the surface area of a sphere is four times its cross-sectional area (i.e. the area of a circle), the globally and yearly averaged TOA flux is one quarter of the solar constant, approximately 340 watts per square meter (W/m2). Since the absorption varies with location as well as with diurnal, seasonal and annual variations, the numbers quoted are multi-year averages obtained from multiple satellite measurements.

Of the ~340 W/m2 of solar radiation received by the Earth, an average of ~77 W/m2 is reflected back to space by clouds and the atmosphere and ~23 W/m2 is reflected by the surface albedo, leaving ~240 W/m2 of solar energy input to the Earth's energy budget. This amount is called the absorbed solar radiation (ASR). It implies a value of about 0.3 for the mean net albedo of Earth, also called its Bond albedo (A):

A ≈ (77 + 23) / 340 ≈ 0.29, so that ASR = (1 − A) × 340 W/m2 ≈ 240 W/m2.
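These figures follow from simple arithmetic, as in the sketch below; the solar-constant value of about 1361 W/m2 is the commonly quoted figure and is an assumption not stated in this article.

```python
# Average top-of-atmosphere insolation, Bond albedo, and absorbed solar radiation (ASR).
SOLAR_CONSTANT = 1361.0                     # W/m2 at Earth's distance from the Sun (assumed)

toa_average = SOLAR_CONSTANT / 4            # sphere area is 4x the intercepting disc area
reflected = 77 + 23                         # W/m2 reflected by clouds/atmosphere and surface
bond_albedo = reflected / toa_average
asr = toa_average - reflected

print(round(toa_average))    # ~340 W/m2
print(round(bond_albedo, 2)) # ~0.29, i.e. "about 0.3"
print(round(asr))            # ~240 W/m2
```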

Outgoing longwave radiation

Thermal energy leaves the planet in the form of outgoing longwave radiation (OLR). Longwave radiation is electromagnetic thermal radiation emitted by Earth's surface and atmosphere. Longwave radiation is in the infrared band, but the terms are not synonymous, as infrared radiation can be either shortwave or longwave. Sunlight contains significant amounts of shortwave infrared radiation. A threshold wavelength of 4 microns is sometimes used to distinguish longwave and shortwave radiation.

Generally, absorbed solar energy is converted to different forms of heat energy. Some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the "atmospheric window"; this radiation is able to pass through the atmosphere unimpeded and directly escape to space, contributing to OLR. The remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer mechanisms, until the atmosphere emits that energy as thermal energy which is able to escape to space, again contributing to OLR. For example, heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conduction/convection processes, as well as via radiative heat transport. Ultimately, all outgoing energy is radiated into space in the form of longwave radiation.

The transport of longwave radiation from Earth's surface through its multi-layered atmosphere is governed by radiative transfer equations such as Schwarzschild's equation for radiative transfer (or more complex equations if scattering is present) and obeys Kirchhoff's law of thermal radiation.

A one-layer model produces an approximate description of OLR which yields temperatures at the surface (Ts = 288 Kelvin) and at the middle of the troposphere (Ta = 242 K) that are close to observed average values:

OLR = ε σ Ta^4 + (1 − ε) σ Ts^4

In this expression σ is the Stefan–Boltzmann constant and ε represents the emissivity of the atmosphere, which is less than 1 because the atmosphere does not emit within the wavelength range known as the atmospheric window.

Aerosols, clouds, water vapor, and trace greenhouse gases contribute to an effective value of about ε = 0.78. The strong (fourth-power) temperature sensitivity maintains a near-balance of the outgoing energy flow to the incoming flow via small changes in the planet's absolute temperatures.
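Plugging the quoted temperatures and emissivity into the one-layer expression above gives an OLR close to the ~240 W/m2 of absorbed solar radiation, as this minimal sketch shows:

```python
# One-layer ("leaky greenhouse") check: OLR = eps*sigma*Ta^4 + (1-eps)*sigma*Ts^4
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m2 K^4)
Ts = 288.0          # K, surface temperature (from the text)
Ta = 242.0          # K, mid-troposphere temperature (from the text)
eps = 0.78          # effective atmospheric emissivity (from the text)

olr = eps * SIGMA * Ta**4 + (1 - eps) * SIGMA * Ts**4
print(round(olr))   # ~238 W/m2, close to the ~240 W/m2 of absorbed solar radiation
```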

Increase in the Earth's non-cloud greenhouse effect (2000–2022) based on satellite data.

As viewed from Earth's surrounding space, greenhouse gases influence the planet's atmospheric emissivity (ε). Changes in atmospheric composition can thus shift the overall radiation balance. For example, an increase in heat trapping by a growing concentration of greenhouse gases (i.e. an enhanced greenhouse effect) forces a decrease in OLR and a warming (restorative) energy imbalance. Ultimately when the amount of greenhouse gases increases or decreases, in-situ surface temperatures rise or fall until the absorbed solar radiation equals the outgoing longwave radiation, or ASR equals OLR.

Earth's internal heat sources and other minor effects

The geothermal heat flow from the Earth's interior is estimated to be 47 terawatts (TW) and split approximately equally between radiogenic heat and heat left over from the Earth's formation. This corresponds to an average flux of 0.087 W/m2 and represents only 0.027% of Earth's total energy budget at the surface, being dwarfed by the 173,000 TW of incoming solar radiation.

Human production of energy is even lower, at an average of 18 TW, corresponding to an estimated 160,000 TW-hr for all of the year 2019. However, consumption is growing rapidly, and energy production with fossil fuels also produces an increase in atmospheric greenhouse gases, leading to a more than 20 times larger imbalance in the incoming/outgoing flows that originate from solar radiation.

Photosynthesis also has a significant effect: An estimated 140 TW (or around 0.08%) of incident energy gets captured by photosynthesis, giving energy to plants to produce biomass. A similar flow of thermal energy is released over the course of a year when plants are used as food or fuel.

Other minor sources of energy are usually ignored in the calculations, including accretion of interplanetary dust and solar wind, light from stars other than the Sun, and the thermal radiation from space. Earlier, in a paper often cited as the first on the greenhouse effect, Joseph Fourier had claimed that radiation from deep space was significant.

Budget analysis

A Sankey diagram illustrating a balanced example of Earth's energy budget. Line thickness is linearly proportional to relative amount of energy.

In simplest terms, Earth's energy budget is balanced when the incoming flow equals the outgoing flow. Since a portion of incoming energy is directly reflected, the balance can also be stated as absorbed incoming solar (shortwave) radiation equal to outgoing longwave radiation:

ASR = OLR

Internal flow analysis

To describe some of the internal flows within the budget, let the insolation received at the top of the atmosphere be 100 units (= 340 W/m2), as shown in the accompanying Sankey diagram. Around 35 units in this example are directly reflected back to space, which constitutes the albedo of Earth: 27 from the top of clouds, 2 from snow and ice-covered areas, and 6 by other parts of the atmosphere. The 65 remaining units (ASR = 220 W/m2) are absorbed: 14 within the atmosphere and 51 by the Earth's surface.

The 51 units reaching and absorbed by the surface are emitted back to space through various forms of terrestrial energy: 17 directly radiated to space and 34 absorbed by the atmosphere (19 through latent heat of vaporisation, 9 via convection and turbulence, and 6 as absorbed infrared by greenhouse gases). The 48 units absorbed by the atmosphere (34 units from terrestrial energy and 14 from insolation) are then finally radiated back to space. This simplified example neglects some details of mechanisms that recirculate, store, and thus lead to further buildup of heat near the surface.

Ultimately the 65 units (17 from the ground and 48 from the atmosphere) are emitted as OLR. They approximately balance the 65 units (ASR) absorbed from the sun in order to maintain a net-zero gain of energy by Earth.
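The unit bookkeeping above can be checked directly, as in the following sketch, which simply restates the numbers from the text and verifies that they balance:

```python
# A bookkeeping check of the simplified 100-unit budget described above (1 unit = 3.4 W/m2).
incoming = 100

reflected = {"cloud tops": 27, "snow and ice": 2, "atmosphere": 6}           # albedo, 35 units
absorbed  = {"atmosphere": 14, "surface": 51}                                # ASR, 65 units
surface_out = {"radiated directly to space": 17, "latent heat": 19,
               "convection/turbulence": 9, "absorbed as infrared": 6}        # 51 units
olr = {"from the ground": 17, "from the atmosphere": 48}                     # 65 units

assert sum(reflected.values()) + sum(absorbed.values()) == incoming          # 35 + 65 = 100
assert sum(surface_out.values()) == absorbed["surface"]                      # surface balances
assert sum(olr.values()) == sum(absorbed.values())                           # OLR matches ASR
print("budget closes:", sum(olr.values()), "units out for", sum(absorbed.values()), "units absorbed")
```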

Heat storage reservoirs

The rising accumulation of energy in the oceanic, land, ice, and atmospheric components of Earth's climate system since 1960.

Land, ice, and oceans are active material constituents of Earth's climate system along with the atmosphere. They have far greater mass and heat capacity, and thus much more thermal inertia. When radiation is directly absorbed or the surface temperature changes, thermal energy will flow as sensible heat either into or out of the bulk mass of these components via conduction/convection heat transfer processes. The transformation of water between its solid/liquid/vapor states also acts as a source or sink of potential energy in the form of latent heat. These processes buffer the surface conditions against some of the rapid radiative changes in the atmosphere. As a result, the daytime versus nighttime difference in surface temperatures is relatively small. Likewise, Earth's climate system as a whole shows a slow response to shifts in the atmospheric radiation balance.

The top few meters of Earth's oceans harbor more thermal energy than its entire atmosphere. Like atmospheric gases, fluidic ocean waters transport vast amounts of such energy over the planet's surface. Sensible heat also moves into and out of great depths under conditions that favor downwelling or upwelling.

Over 90 percent of the extra energy that has accumulated on Earth from ongoing global warming since 1970 has been stored in the ocean. About one-third has propagated to depths below 700 meters. The overall rate of growth has also risen during recent decades, reaching close to 500 TW (1 W/m2) as of 2020. That led to about 14 zettajoules (ZJ) of heat gain for the year, exceeding the 570 exajoules (=160,000 TW-hr) of total primary energy consumed by humans by a factor of at least 20.
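These figures follow from straightforward unit conversions. A minimal sketch, assuming a global-mean heating rate of about 0.9 W/m2 and a rounded Earth surface area of 5.1 × 10^14 m2, reproduces the quoted orders of magnitude:

```python
# Order-of-magnitude check of the heat-uptake figures quoted above.
# Assumed round numbers: global-mean heating rate ~0.9 W/m2, surface area ~5.1e14 m2.
EARTH_SURFACE_AREA_M2 = 5.1e14
SECONDS_PER_YEAR = 3.156e7

heating_rate_w_per_m2 = 0.9
total_power_w = heating_rate_w_per_m2 * EARTH_SURFACE_AREA_M2        # ~4.6e14 W
annual_heat_gain_j = total_power_w * SECONDS_PER_YEAR                # ~1.4e22 J

human_primary_energy_j = 570e18      # ~570 EJ (~160,000 TW-hr) consumed per year

print(f"total heating rate ≈ {total_power_w / 1e12:.0f} TW")                      # ~460 TW
print(f"annual heat gain   ≈ {annual_heat_gain_j / 1e21:.0f} ZJ")                 # ~14 ZJ
print(f"ratio to human use ≈ {annual_heat_gain_j / human_primary_energy_j:.0f}x") # ~25x
```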

Heating/cooling rate analysis

Generally speaking, changes to Earth's energy flux balance can be thought of as being the result of external forcings (both natural and anthropogenic, radiative and non-radiative), system feedbacks, and internal system variability. Such changes are primarily expressed as observable shifts in temperature (T), clouds (C), water vapor (W), aerosols (A), trace greenhouse gases (G), land/ocean/ice surface reflectance (S), and as minor shifts in insolation (I), among other possible factors. Earth's heating/cooling rate can then be analyzed over selected timeframes (Δt) as the net change in energy (ΔE) associated with these attributes:

ΔE = ΔE_T + ΔE_C + ΔE_W + ΔE_A + ΔE_G + ΔE_S + ΔE_I + ...

Here the term ΔE_T, corresponding to the Planck response, is negative-valued when temperature rises because of its strong direct influence on OLR.

The recent increase in trace greenhouse gases produces an enhanced greenhouse effect, and thus a positive ΔE_G forcing term. By contrast, a large volcanic eruption (e.g. Mount Pinatubo 1991, El Chichón 1982) can inject sulfur-containing compounds into the upper atmosphere. High concentrations of stratospheric sulfur aerosols may persist for up to a few years, yielding a negative forcing contribution to ΔE_A. Various other types of anthropogenic aerosol emissions make both positive and negative contributions to ΔE_A. Solar cycles produce ΔE_I variations that are smaller in magnitude than recent ΔE_G trends from human activity.

Climate forcings are complex since they can produce direct and indirect feedbacks that intensify (positive feedback) or weaken (negative feedback) the original forcing. These often follow the temperature response. Water vapor acts as a positive feedback with respect to temperature changes because of evaporation shifts and the Clausius-Clapeyron relation. An increase in water vapor results in a positive ΔE_W due to further enhancement of the greenhouse effect. A slower positive feedback is the ice-albedo feedback. For example, the loss of Arctic ice due to rising temperatures makes the region less reflective, leading to greater absorption of energy and even faster ice melt rates, and thus a positive contribution to ΔE_S. Collectively, feedbacks tend to amplify global warming or cooling.

Clouds are responsible for about half of Earth's albedo and are powerful expressions of internal variability of the climate system. They may also act as feedbacks to forcings, and could be forcings themselves if, for example, they result from cloud seeding activity. Contributions to ΔE_C vary regionally and depend upon cloud type. Measurements from satellites are gathered in concert with simulations from models in an effort to improve understanding and reduce uncertainty.
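To make the sign conventions concrete, the decomposition above can be written as a simple sum of component terms. The numbers in the sketch below are purely hypothetical placeholders chosen only to illustrate the bookkeeping, not measured contributions:

```python
# Illustrative (hypothetical) decomposition of Earth's net energy change over an interval Δt.
# Signs follow the conventions described above: the Planck response opposes warming,
# greenhouse gases and most feedbacks add energy, and reflective aerosols remove it.
delta_E = {                          # W/m2; all values are hypothetical placeholders
    "T (Planck response)":      -1.6,
    "C (clouds)":               +0.2,
    "W (water vapor)":          +0.6,
    "A (aerosols)":             -0.3,
    "G (greenhouse gases)":     +1.8,
    "S (surface reflectance)":  +0.1,
    "I (insolation)":            0.0,
}

net_heating_rate = sum(delta_E.values())
print(f"ΔE/Δt ≈ {net_heating_rate:+.1f} W/m2")   # positive: the planet is gaining energy
```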

Earth's energy imbalance (EEI)

Earth's energy budget (in W/m2) determines the climate. It is the balance of incoming and outgoing radiation and can be measured by satellites. The Earth's energy imbalance is the "net absorbed" energy amount and grew from +0.6 W/m2 (2009 est.) to above +1.0 W/m2 in 2019.

The Earth's energy imbalance (EEI) is defined as "the persistent and positive (downward) net top of atmosphere energy flux associated with greenhouse gas forcing of the climate system".

If Earth's incoming energy flux (ASR) is larger or smaller than the outgoing energy flux (OLR), then the planet will gain (warm) or lose (cool) net heat energy in accordance with the law of energy conservation:

EEI = ASR − OLR.

Positive EEI thus defines the overall rate of planetary heating and is typically expressed in watts per square meter (W/m2). From 2005 to 2019 the Earth's energy imbalance averaged about 460 TW, or 0.90 ± 0.15 W/m2 averaged over the globe.

When Earth's energy imbalance (EEI) shifts by a sufficiently large amount, the shift is measurable by orbiting satellite-based instruments. Imbalances that fail to reverse over time will also drive long-term temperature changes in the atmospheric, oceanic, land, and ice components of the climate system. Temperature, sea level, ice mass and related shifts thus also provide measures of EEI.

The biggest changes in EEI arise from changes in the composition of the atmosphere through human activities, which interfere with the natural flow of energy through the climate system. The main changes are from increases in carbon dioxide and other greenhouse gases, which produce heating (positive EEI), and from pollution. The latter refers to atmospheric aerosols of various kinds, some of which absorb energy while others reflect energy and produce cooling (or lower EEI).

Estimates of the Earth Energy Imbalance (EEI)

Time period   EEI (W/m2)
1971-2006     0.50 [0.31 to 0.68]
1971-2018     0.57 [0.43 to 0.72]
1976-2023     0.65 [0.48 to 0.82]
2006-2018     0.79 [0.52 to 1.07]
2011-2023     0.96 [0.67 to 1.26]

Square brackets show 90% confidence intervals.

It is not (yet) possible to measure the absolute magnitude of EEI directly at the top of the atmosphere, although changes over time as observed by satellite-based instruments are thought to be accurate. The only practical way to estimate the absolute magnitude of EEI is through an inventory of the changes in energy in the climate system. The biggest of these energy reservoirs is the ocean.

Energy inventory assessments

The planetary heat content that resides in the climate system can be compiled given the heat capacity, density and temperature distributions of each of its components. Most regions are now reasonably well sampled and monitored, with the most significant exception being the deep ocean.

Schematic drawing of Earth's excess heat inventory and energy imbalance for two recent time periods.

Estimates of the absolute magnitude of EEI have likewise been calculated using the measured temperature changes during recent multi-decadal time intervals. For the 2006 to 2020 period EEI was about +0.76±0.2 W/m2 and showed a significant increase above the mean of +0.48±0.1 W/m2 for the 1971 to 2020 period.

EEI has been positive because temperatures have increased almost everywhere for over 50 years. Global surface temperature (GST) is calculated by averaging temperatures measured at the surface of the sea along with air temperatures measured over land. Reliable data extending back to at least 1880 show that GST has increased steadily, by about 0.18 °C per decade, since about 1970.

Ocean waters are especially effective absorbents of solar energy and have a far greater total heat capacity than the atmosphere. Research vessels and stations have sampled sea temperatures at depth and around the globe since before 1960. Additionally, since about 2000, an expanding network of nearly 4000 Argo robotic floats has measured the temperature anomaly, or equivalently the change in ocean heat content (ΔOHC). Since at least 1990, OHC has increased at a steady or accelerating rate. ΔOHC represents the largest portion of EEI, since oceans have thus far taken up over 90% of the net excess energy entering the system over time (Δt):

EEI ≳ ΔOHC / Δt.
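As a rough illustration of this relation, a single year's ocean heat gain can be converted back into a global-mean flux. The sketch below assumes an illustrative ΔOHC of about 13 ZJ (roughly 90% of the ~14 ZJ total annual heat gain quoted earlier) and a rounded surface area; it is an order-of-magnitude check, not a formal estimate:

```python
# Rough back-of-the-envelope estimate of EEI from one year of ocean heat gain.
# ΔOHC of ~13 ZJ is an assumed illustrative value (about 90% of the ~14 ZJ total gain above).
EARTH_SURFACE_AREA_M2 = 5.1e14
SECONDS_PER_YEAR = 3.156e7

delta_OHC_joules = 13e21       # assumed one-year ocean heat content change
ocean_share = 0.90             # oceans take up over 90% of the excess energy

eei_w_per_m2 = delta_OHC_joules / ocean_share / SECONDS_PER_YEAR / EARTH_SURFACE_AREA_M2
print(f"EEI ≈ {eei_w_per_m2:.2f} W/m2")   # ≈ 0.9 W/m2, consistent with recent estimates
```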

Earth's outer crust and thick ice-covered regions have taken up relatively little of the excess energy. This is because excess heat at their surfaces flows inward only by means of thermal conduction, and thus penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. Much of the heat uptake goes either into melting ice and permafrost or into evaporating more water from soils.

Measurements at top of atmosphere (TOA)

Several satellites measure the energy absorbed and radiated by Earth, and thus by inference the energy imbalance. These measurements are referenced to the top of the atmosphere (TOA) and provide data covering the globe. The NASA Earth Radiation Budget Experiment (ERBE) project involved three such satellites: the Earth Radiation Budget Satellite (ERBS), launched October 1984; NOAA-9, launched December 1984; and NOAA-10, launched September 1986.

The growth in Earth's energy imbalance from satellite and in situ measurements (2005–2019). A rate of +1.0 W/m2 summed over the planet's surface equates to a continuous heat uptake of about 500 terawatts (~0.3% of the incident solar radiation).

NASA's Clouds and the Earth's Radiant Energy System (CERES) instruments have been part of its Earth Observing System (EOS) since March 2000. CERES is designed to measure both solar-reflected (short wavelength) and Earth-emitted (long wavelength) radiation. The CERES data showed an increase in EEI from +0.42 ± 0.48 W/m2 in 2005 to +1.12 ± 0.48 W/m2 in 2019. Contributing factors included more water vapor, reduced cloud cover, increasing greenhouse gases, and declining ice, which were partially offset by rising temperatures. Subsequent investigation of this behavior using the GFDL CM4/AM4 climate model concluded there was a less than 1% chance that internal climate variability alone caused the trend.
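Taken at face value, the two CERES endpoint values imply an average growth rate in the imbalance. The sketch below assumes, as a simplification, a roughly linear change between 2005 and 2019 (the underlying record fluctuates from year to year):

```python
# Average EEI growth rate implied by the CERES endpoint values quoted above,
# assuming (as a simplification) a linear change between 2005 and 2019.
eei_2005, eei_2019 = 0.42, 1.12                           # W/m2
trend = (eei_2019 - eei_2005) / (2019 - 2005)
print(f"average increase ≈ {trend:.3f} W/m2 per year")    # ≈ 0.050 W/m2 per year
```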

Other researchers have used data from CERES, AIRS, CloudSat, and other EOS instruments to look for trends of radiative forcing embedded within the EEI data. Their analysis showed a forcing rise of +0.53 ± 0.11 W/m2 between 2003 and 2018. About 80% of the increase was associated with the rising concentration of greenhouse gases, which reduced the outgoing longwave radiation.

Further satellite measurements including TRMM and CALIPSO data have indicated additional precipitation, which is sustained by increased energy leaving the surface through evaporation (the latent heat flux), offsetting some of the increase in the longwave greenhouse flux to the surface.

Radiometric calibration uncertainties limit the capability of the current generation of satellite-based instruments, which are otherwise stable and precise. As a result, relative changes in EEI can be quantified with an accuracy that is not achievable for any single measurement of the absolute imbalance.

Geodetic and hydrographic surveys

Earth heating estimates from a combination of space altimetry and space gravimetry.

Observations since 1994 show that ice has retreated from every part of Earth at an accelerating rate. Mean global sea level has likewise risen as a consequence of the ice melt in combination with the overall rise in ocean temperatures. These shifts have contributed measurable changes to the geometric shape and gravity of the planet.

Changes to the mass distribution of water within the hydrosphere and cryosphere have been deduced using gravimetric observations by the GRACE satellite instruments. These data have been compared against ocean surface topography and further hydrographic observations using computational models that account for thermal expansion, salinity changes, and other factors. Estimates thereby obtained for ΔOHC and EEI have agreed with the other (mostly) independent assessments within uncertainties.

Importance as a climate change metric

Climate scientists Kevin Trenberth, James Hansen, and colleagues have identified Earth's energy imbalance as an important metric whose monitoring can help policymakers guide the pace of mitigation and adaptation measures. Because of climate system inertia, longer-term EEI trends can forecast further changes that are "in the pipeline".

These scientists regard the EEI as the single most important metric related to climate change, since it is the net result of all the processes and feedbacks at play in the climate system. Knowing how much of this extra energy affects weather systems and rainfall is vital for understanding increasing weather extremes.

In 2012, NASA scientists reported that, to stop global warming, the atmospheric CO2 concentration would have to be reduced to 350 ppm or less, assuming all other climate forcings were fixed. As of 2020, atmospheric CO2 had reached 415 ppm, and all long-lived greenhouse gases combined exceeded a 500 ppm CO2-equivalent concentration due to continued growth in human emissions.
