
Wednesday, October 18, 2023

Global dimming

From Wikipedia, the free encyclopedia

The first systematic measurements of global direct irradiance at the Earth's surface began in the 1950s. A decline in irradiance was soon observed, and it was given the name of global dimming. It continued from the 1950s until the 1980s, with an observed reduction of 4–5% per decade, even though solar activity did not vary more than usual at the time. Global dimming has instead been attributed to an increase in atmospheric particulate matter, predominantly sulfate aerosols, as the result of rapidly growing air pollution due to post-war industrialization. After the 1980s, global dimming started to reverse, alongside reductions in particulate emissions, in what has been described as global brightening, although this reversal is only considered "partial" for now. The reversal has also been globally uneven, as the dimming trend continued during the 1990s over some mostly developing countries such as India, Zimbabwe, Chile and Venezuela. Over China, the dimming trend continued at a slower rate after 1990, and did not begin to reverse until around 2005.

Global dimming has interfered with the hydrological cycle by lowering evaporation, which is likely to have reduced rainfall in certain areas, and may have caused the observed southwards shift of the entire tropical rain belt between 1950 and 1985, with a limited recovery afterwards. Since high evaporation at the tropics is needed to drive the wet season, cooling caused by particulate pollution appears to weaken the monsoon of South Asia, while reductions in pollution strengthen it. Multiple studies have also connected record levels of particulate pollution in the Northern Hemisphere to the monsoon failure behind the 1984 Ethiopian famine, although the full extent of anthropogenic vs. natural influences on that event is still disputed. On the other hand, global dimming has also counteracted some of the greenhouse gas emissions, effectively "masking" the total extent of global warming experienced to date, with the most-polluted regions even experiencing cooling in the 1970s. Conversely, global brightening has contributed to the acceleration of global warming which began in the 1990s.

In the near future, global brightening is expected to continue, as nations act to reduce the toll of air pollution on the health of their citizens. This also means that less of global warming would be masked in the future. Climate models are broadly capable of simulating the impact of aerosols like sulfates, and in the IPCC Sixth Assessment Report, they are believed to offset around 0.5 °C (0.90 °F) of warming. Likewise, climate change scenarios incorporate reductions in particulates and the cooling they offered into their projections, and this includes the scenarios for climate action required to meet 1.5 °C (2.7 °F) and 2 °C (3.6 °F) targets. It is generally believed that the cooling provided by global dimming is similar to the warming derived from atmospheric methane, meaning that simultaneous reductions in both would effectively cancel each other out. However, uncertainties remain about the models' representation of aerosol impacts on weather systems, especially over the regions with a poorer historical record of atmospheric observations.

The processes behind global dimming are similar to those which drive reductions in direct sunlight after volcanic eruptions. In fact, the eruption of Mount Pinatubo in 1991 had temporarily reversed the brightening trend. Both processes are considered an analogue for stratospheric aerosol injection, a solar geoengineering intervention which aims to counteract global warming through intentional releases of reflective aerosols, albeit at much higher altitudes, where lower quantities would be needed and the polluting effects would be minimized. However, while that intervention may be very effective at stopping or reversing warming and its main consequences, it would also have substantial effects on the global hydrological cycle, as well as regional weather and ecosystems. Because its effects are only temporary, it would have to be maintained for centuries until the greenhouse gas concentrations are normalized to avoid a rapid and violent return of the warming, sometimes known as termination shock.

History

In the late 1960s, Mikhail Ivanovich Budyko worked with simple two-dimensional energy-balance climate models to investigate the reflectivity of ice. He found that the ice–albedo feedback created a positive feedback loop in the Earth's climate system: the more snow and ice, the more solar radiation is reflected back into space, and hence the colder Earth grows and the more it snows. Other studies suggested that sulfate pollution or a volcanic eruption could provoke the onset of an ice age.

In the 1980s, research in Israel and the Netherlands revealed an apparent reduction in the amount of sunlight, and Atsumu Ohmura, a geography researcher at the Swiss Federal Institute of Technology, found that solar radiation striking the Earth's surface had declined by more than 10% over the three previous decades, even as the global temperature had been generally rising since the 1970s. In the 1990s, this was followed by papers describing multi-decade declines in Estonia, Germany and across the former Soviet Union, which prompted the researcher Gerry Stanhill to coin the term "global dimming". Subsequent research estimated an average reduction in sunlight striking the terrestrial surface of around 4–5% per decade over the late 1950s–1980s, and 2–3% per decade when the 1990s were included. Notably, solar radiation at the top of the atmosphere did not vary by more than 0.1–0.3% in all that time, strongly suggesting that the reasons for the dimming were on Earth. Additionally, only visible light and infrared radiation were dimmed, rather than the ultraviolet part of the spectrum.

Reversal

Sun-blocking aerosols around the world steadily declined (red line) since the 1991 eruption of Mount Pinatubo, according to satellite estimates. Credit: Michael Mishchenko, NASA

Starting from 2005, scientific papers began to report that after 1990, the global dimming trend had clearly switched to global brightening. This followed measures taken to combat air pollution by the developed nations, typically through flue-gas desulfurization installations at thermal power plants, such as wet scrubbers or fluidized bed combustion. In the United States, sulfate aerosols have declined significantly since 1970 with the passage of the Clean Air Act, which was strengthened in 1977 and 1990. According to the EPA, from 1970 to 2005, total emissions of the six principal air pollutants, including sulfates, dropped by 53% in the US. By 2010, this reduction in sulfate pollution led to estimated healthcare cost savings valued at $50 billion annually. Similar measures were taken in Europe, such as the 1985 Helsinki Protocol on the Reduction of Sulfur Emissions under the Convention on Long-Range Transboundary Air Pollution, with similar improvements following.

Orange on the map shows sulfate aerosol hotspots in the years 2005–2007.

On the other hand, a 2009 review found that dimming continued in China after stabilizing in the 1990s and intensified in India, consistent with their continued industrialization, while the US, Europe, and South Korea continued to brighten. Evidence from Zimbabwe, Chile and Venezuela also pointed to continued dimming during that period, albeit at a lower confidence level due to the lower number of observations. Due to these contrasting trends, no statistically significant change had occurred on a global scale from 2001 to 2012. Post-2010 observations indicate that the global decline in aerosol concentrations and in global dimming has continued, with pollution controls on the global shipping industry playing a substantial role in recent years. Since nearly 90% of the human population lives in the Northern Hemisphere, clouds there are far more affected by aerosols than in the Southern Hemisphere, but these differences have halved in the two decades since 2000, providing further evidence for the ongoing global brightening.

Causes

Smog, seen here at the Golden Gate Bridge, is a likely contributor to global dimming.

Global dimming had been widely attributed to the increased presence of aerosol particles in Earth's atmosphere, predominantly those of sulfates. While natural dust is also an aerosol with some impacts on climate, and volcanic eruptions considerably increase sulfate concentrations in the short term, these effects have been dwarfed by increases in sulfate emissions since the start of the Industrial Revolution. According to the IPCC First Assessment Report, the global human-caused emissions of sulfur into the atmosphere were less than 3 million tons per year in 1860, yet they increased to 15 million tons in 1900, 40 million tons in 1940 and about 80 million tons in 1980. This meant that the human-caused emissions became "at least as large" as all natural emissions of sulfur-containing compounds: the largest natural source, emissions of dimethyl sulfide from the ocean, was estimated at 40 million tons per year, while volcano emissions were estimated at 10 million tons. Moreover, that was the average figure: according to the report, "in the industrialized regions of Europe and North America, anthropogenic emissions dominate over natural emissions by about a factor of ten or even more".

Big Brown Cloud Storm over Asia.

Aerosols and other atmospheric particulates have direct and indirect effects on the amount of sunlight received at the surface. Directly, sulfate particles reflect almost all sunlight, like tiny mirrors. On the other hand, incomplete combustion of fossil fuels (such as diesel) and wood releases particles of black carbon (predominantly soot), which absorb solar energy and heat up, reducing the overall amount of sunlight received at the surface while also contributing to warming. Black carbon is an extremely small component of air pollution at land surface levels, yet it has a substantial heating effect on the atmosphere at altitudes above two kilometers (6,562 ft).

Indirectly, the pollutants affect the climate by acting as nuclei, meaning that water droplets in clouds coalesce around the particles. Increased pollution causes more particulates and thereby creates clouds consisting of a greater number of smaller droplets (that is, the same amount of water is spread over more droplets). The smaller droplets make clouds more reflective, so that more incoming sunlight is reflected back into space and less reaches the Earth's surface. This same effect also reflects radiation from below, trapping it in the lower atmosphere. In models, these smaller droplets also decrease rainfall. In the 1990s, experiments comparing the atmosphere over the northern and southern islands of the Maldives showed that the effect of macroscopic pollutants in the atmosphere at that time (blown south from India) caused about a 10% reduction in sunlight reaching the surface in the area under the Asian brown cloud – a much greater reduction than expected from the presence of the particles themselves. Prior to the research being undertaken, predictions were of a 0.5–1% effect from particulate matter; the variation from prediction may be explained by cloud formation with the particles acting as the focus for droplet creation.

Relationship to climate change

This figure shows the level of agreement between a climate model driven by five factors and the historical temperature record. The negative component identified as "sulfate" is associated with the aerosol emissions blamed for global dimming.

It has been understood for a long time that any effect on solar irradiance from aerosols would necessarily impact Earth's radiation balance. Reductions in atmospheric temperatures have already been observed after large volcanic eruptions such as the 1963 eruption of Mount Agung in Bali, the 1982 El Chichón eruption in Mexico, the 1985 Nevado del Ruiz eruption in Colombia and the 1991 eruption of Mount Pinatubo in the Philippines. However, even the major eruptions only result in temporary spikes in sulfur particles, unlike the more sustained increases caused by anthropogenic pollution. In 1990, the IPCC First Assessment Report acknowledged that "Human-made aerosols, from sulphur emitted largely in fossil fuel combustion can modify clouds and this may act to lower temperatures", while "a decrease in emissions of sulphur might be expected to increase global temperatures". However, lack of observational data and difficulties in calculating indirect effects on clouds left the report unable to estimate whether the total impact of all anthropogenic aerosols on the global temperature amounted to cooling or warming. By 1995, the IPCC Second Assessment Report had confidently assessed the overall impact of aerosols as negative (cooling); however, aerosols were recognized as the largest source of uncertainty in future projections in that report and the subsequent ones.

At the peak of global dimming, it was able to counteract the warming trend completely, but by 1975, the continually increasing concentrations of greenhouse gases had overcome the masking effect and have dominated ever since. Even then, regions with high concentrations of sulfate aerosols due to air pollution had initially experienced cooling, in contradiction to the overall warming trend. The eastern United States was a prominent example: the temperatures there declined by 0.7 °C (1.3 °F) between 1970 and 1980, and by up to 1 °C (1.8 °F) in Arkansas and Missouri. As the sulfate pollution was reduced, the central and eastern United States experienced warming of 0.3 °C (0.54 °F) between 1980 and 2010, even as sulfate particles still accounted for around 25% of all particulates. By 2021, the northeastern coast of the United States was instead one of the fastest-warming regions of North America, as the slowdown of the Atlantic Meridional Overturning Circulation increased temperatures in that part of the North Atlantic Ocean.

Globally, the emergence of extreme heat beyond the preindustrial records was delayed by aerosol cooling, and hot extremes accelerated as global dimming abated: it has been estimated that since the mid-1990s, peak daily temperatures in northeast Asia and hottest days of the year in Western Europe would have been substantially less hot if aerosol concentrations had stayed the same as before. In Europe, the declines in aerosol concentrations since the 1980s had also reduced the associated fog, mist and haze: altogether, this decline was responsible for about 10–20% of daytime warming across Europe, and about 50% of the warming over the more polluted Eastern Europe. Because aerosol cooling depends on reflecting sunlight, air quality improvements had a negligible impact on wintertime temperatures, but increased temperatures from April to September by around 1 °C (1.8 °F) in Central and Eastern Europe. Some of the acceleration of sea level rise, as well as Arctic amplification and the associated Arctic sea ice decline, has also been attributed to the reduction in aerosol masking.

Pollution from black carbon, mostly represented by soot, also contributes to global dimming. However, because it absorbs heat instead of reflecting it, it warms the planet instead of cooling it like sulfates do. This warming is much weaker than that of greenhouse gases, but it can be regionally significant when black carbon is deposited over ice masses like mountain glaciers and the Greenland ice sheet, where it reduces their albedo and increases their absorption of solar radiation. Even the indirect effect of soot particles acting as cloud nuclei is not strong enough to provide cooling: the "brown clouds" formed around soot particles have been known to have a net warming effect since the 2000s. Black carbon pollution is particularly strong over India, and as a result, India is considered to be one of the few regions where cleaning up air pollution would reduce, rather than increase, warming.

Since changes in aerosol concentrations already have an impact on the global climate, they would necessarily influence future projections as well. In fact, it is impossible to fully estimate the warming impact of all greenhouse gases without accounting for the counteracting cooling from aerosols. Climate models started to account for the effects of sulfate aerosols around the IPCC Second Assessment Report; when the IPCC Fourth Assessment Report was published in 2007, every climate model had integrated sulfates, but only five were able to account for less impactful particulates like black carbon. By 2021, CMIP6 models estimated total aerosol cooling in the range from 0.1 °C (0.18 °F) to 0.7 °C (1.3 °F); the IPCC Sixth Assessment Report selected a best estimate of 0.5 °C (0.9 °F) of cooling provided by sulfate aerosols, while black carbon amounts to about 0.1 °C (0.18 °F) of warming. While these values are based on combining model estimates with observational constraints, including those on ocean heat content, the matter is not yet fully settled. The difference between model estimates mainly stems from disagreements over the indirect effects of aerosols on clouds. While it is well known that aerosols increase the number of cloud droplets and this makes the clouds more reflective, calculating how liquid water path, an important cloud property, is affected by their presence is far more challenging, as it involves computationally heavy continuous calculations of evaporation and condensation within clouds. Climate models generally assume that aerosols increase liquid water path, which makes the clouds even more reflective.

Visible ship tracks in the Northern Pacific, on 4 March 2009.

However, satellite observations taken in the 2010s suggested that aerosols decreased liquid water path instead, and in 2018, this was reproduced in a model which integrated more complex cloud microphysics. Yet, 2019 research found that the earlier satellite observations were biased by failing to account for the thickest, most water-heavy clouds naturally raining more and shedding more particulates: very strong aerosol cooling was seen when comparing clouds of the same thickness. Moreover, large-scale observations can be confounded by changes in other atmospheric factors, like humidity: for instance, it was found that while post-1980 improvements in air quality would have reduced the number of clouds over the East Coast of the United States by around 20%, this was offset by the increase in relative humidity caused by the atmospheric response to AMOC slowdown. Similarly, while the initial research looking at sulfates from the 2014–2015 eruption of Bárðarbunga found that they caused no change in liquid water path, it was later suggested that this finding was confounded by counteracting changes in humidity. To avoid confounders, many observations of aerosol effects focus on ship tracks, but post-2020 research found that visible ship tracks are a poor proxy for other clouds, and estimates derived from them overestimate aerosol cooling by as much as 200%. At the same time, other research found that the majority of ship tracks are "invisible" to satellites, meaning that the earlier research had underestimated aerosol cooling by overlooking them. Finally, 2023 research indicates that all climate models have underestimated sulfur emissions from volcanoes which occur in the background, outside of major eruptions, and had consequently overestimated the cooling provided by anthropogenic aerosols, especially in the Arctic climate.

Early 2010s estimates of past and future anthropogenic global sulfur dioxide emissions, including the Representative Concentration Pathways. While no climate change scenario may reach Maximum Feasible Reductions (MFRs), all assume steep declines from today's levels. By 2019, sulfate emission reductions were confirmed to proceed at a very fast rate.

Regardless of the current strength of aerosol cooling, all future climate change scenarios project decreases in particulates, and this includes the scenarios where 1.5 °C (2.7 °F) and 2 °C (3.6 °F) targets are met: their specific emission reduction targets assume the need to make up for the lower dimming. Since models estimate that the cooling caused by sulfates is largely equivalent to the warming caused by atmospheric methane (and since methane is a relatively short-lived greenhouse gas), it is believed that simultaneous reductions in both would effectively cancel each other out. Yet in recent years, methane concentrations have been increasing at rates exceeding their previous period of peak growth in the 1980s, with wetland methane emissions driving much of the recent growth, while air pollution is being cleaned up aggressively. These trends are some of the main reasons why 1.5 °C (2.7 °F) of warming is now expected around 2030, as opposed to the mid-2010s estimates where it would not occur until 2040.

It has also been suggested that aerosols are not given sufficient attention in regional risk assessments, in spite of being more influential on a regional scale than globally. For instance, a climate change scenario with high greenhouse gas emissions but strong reductions in air pollution would see 0.2 °C (0.36 °F) more global warming by 2050 than the same scenario with little improvement in air quality, but regionally, the difference would add 5 more tropical nights per year in northern China and substantially increase precipitation in northern China and northern India. Likewise, a paper comparing the current level of clean air policies with a hypothetical maximum technically feasible action under otherwise the same climate change scenario found that the latter would increase the risk of temperature extremes by 30–50% in China and in Europe. Unfortunately, because historical records of aerosols are sparser in some regions than in others, accurate regional projections of aerosol impacts are difficult. Even the latest CMIP6 climate models can accurately represent aerosol trends only over Europe, and struggle with North America and Asia, meaning that their near-future projections of regional impacts are likely to contain errors as well.

Aircraft contrails and lockdowns

NASA photograph showing aircraft contrails and natural clouds.

In general, aircraft contrails (also called vapor trails) are believed to trap outgoing longwave radiation emitted by the Earth and atmosphere more than they reflect incoming solar radiation, resulting in a net increase in radiative forcing. In 1992, this warming effect was estimated at between 3.5 mW/m2 and 17 mW/m2. The global radiative forcing impact of aircraft contrails has been calculated from reanalysis data, climate models, and radiative transfer codes; it was estimated at 12 mW/m2 for 2005, with an uncertainty range of 5 to 26 mW/m2, and with a low level of scientific understanding. Contrail cirrus may be air traffic's largest radiative forcing component, larger than all CO2 accumulated from aviation, and could triple from a 2006 baseline to 160–180 mW/m2 by 2050 without intervention. For comparison, the total radiative forcing from human activities amounted to 2.72 W/m2 (with a range between 1.96 and 3.48 W/m2) in 2019, and the increase from 2011 to 2019 alone amounted to 0.34 W/m2.

Contrail effects differ a lot depending on when they are formed, as they decrease the daytime temperature and increase the nighttime temperature, reducing the difference between the two. In 2006, it was estimated that night flights contribute 60 to 80% of contrail radiative forcing while accounting for 25% of daily air traffic, and winter flights contribute half of the annual mean radiative forcing while accounting for 22% of annual air traffic. Starting from the 1990s, it was suggested that contrails during daytime have a strong cooling effect, and when combined with the warming from night-time flights, this would have a substantial effect on the diurnal temperature variation (the difference between the day's highs and lows at a fixed station). When no commercial aircraft flew across the USA following the September 11 attacks, the diurnal temperature variation was widened by 1.1 °C (2.0 °F). Measured across 4,000 weather stations in the continental United States, this increase was the largest recorded in 30 years. Without contrails, the local diurnal temperature range was 1 °C (1.8 °F) higher than immediately before. In the southern US, the difference was diminished by about 3.3 °C (6 °F), and by 2.8 °C (5 °F) in the US Midwest. However, follow-up studies found that a natural change in cloud cover can more than explain these findings. The authors of a 2008 study wrote, "The variations in high cloud cover, including contrails and contrail-induced cirrus clouds, contribute weakly to the changes in the diurnal temperature range, which is governed primarily by lower altitude clouds, winds, and humidity."

USAAF 8th Air Force B-17s and their contrails.

A 2011 study of British meteorological records taken during World War II identified one event where the temperature near airbases used by USAAF strategic bombers was 0.8 °C (1.4 °F) higher than the day's average after they flew in formation, although the authors cautioned that it was a single event.

The sky above Würzburg without contrails after air travel disruption in 2010 (left) and with regular air traffic and the right conditions (right)

The global response to the 2020 coronavirus pandemic led to a reduction in global air traffic of nearly 70% relative to 2019. Thus, it provided an extended opportunity to study the impact of contrails on regional and global temperature. Multiple studies found "no significant response of diurnal surface air temperature range" as the result of contrail changes, and either "no net significant global ERF" (effective radiative forcing) or a very small warming effect. On the other hand, the decline in sulfate emissions caused by the curtailed road traffic and industrial output during the COVID-19 lockdowns did have a detectable warming impact: it was estimated to have increased global temperatures by 0.01–0.02 °C (0.018–0.036 °F) initially and up to 0.03 °C (0.054 °F) by 2023, before disappearing. Regionally, the lockdowns were estimated to increase temperatures by 0.05–0.15 °C (0.090–0.270 °F) in eastern China over January–March, and then by 0.04–0.07 °C (0.072–0.126 °F) over Europe, eastern United States, and South Asia in March–May, with the peak impact of 0.3 °C (0.54 °F) in some regions of the United States and Russia. In the city of Wuhan, the urban heat island effect was found to have decreased by 0.24 °C (0.43 °F) at night and by 0.12 °C (0.22 °F) overall during the strictest lockdowns.

Relationship to hydrological cycle

Sulfate aerosols have decreased precipitation over most of Asia (red), but increased it over some parts of Central Asia (blue).

On regional and global scales, air pollution can affect the water cycle in a manner similar to some natural processes. One example is the impact of Saharan dust on hurricane formation: air laden with sand and mineral particles moves over the Atlantic Ocean, where these particles block some of the sunlight from reaching the water surface, slightly cooling it and dampening the development of hurricanes. Likewise, it has been suggested since the early 2000s that since aerosols decrease solar radiation over the ocean and hence reduce evaporation from it, they would be "spinning down the hydrological cycle of the planet." In 2011, it was found that anthropogenic aerosols had been the predominant factor behind 20th century changes in rainfall over the Atlantic Ocean sector, when the entire tropical rain belt shifted southwards between 1950 and 1985, with a limited northwards shift afterwards. Future reductions in aerosol emissions are expected to result in a more rapid northwards shift, with limited impact in the Atlantic but a substantially greater impact in the Pacific.

Most notably, multiple studies connect aerosols from the Northern Hemisphere to the failed monsoon in sub-Saharan Africa during the 1970s and 1980s, which then led to the Sahel drought and the associated famine. However, model simulations of Sahel climate are very inconsistent, so it's difficult to prove that the drought would not have occurred without aerosol pollution, although it would have clearly been less severe. Some research indicates that those models which demonstrate warming alone driving strong precipitation increases in the Sahel are the most accurate, making it more likely that sulfate pollution was to blame for overpowering this response and sending the region into drought.

Another dramatic finding had connected the impact of aerosols with the weakening of the Monsoon of South Asia. It was first advanced in 2006, yet it also remained difficult to prove. In particular, some research suggested that warming itself increases the risk of monsoon failure, potentially pushing it past a tipping point. By 2021, however, it was concluded that global warming consistently strengthened the monsoon, and some strengthening was already observed in the aftermath of lockdown-caused aerosol reductions.

In 2009, an analysis of 50 years of data found that light rains had decreased over eastern China, even though there was no significant change in the amount of water held by the atmosphere. This was attributed to aerosols reducing droplet size within clouds, which led to those clouds retaining water for a longer time without raining. The phenomenon of aerosols suppressing rainfall through reducing cloud droplet size has been confirmed by subsequent studies. Later research found that aerosol pollution over South and East Asia did not just suppress rainfall there, but also resulted in more moisture being transferred to Central Asia, where summer rainfall increased as a result. The IPCC Sixth Assessment Report also linked changes in aerosol concentrations to altered precipitation in the Mediterranean region.

Solar geoengineering

This graph shows baseline radiative forcing under three different Representative Concentration Pathway scenarios, and how it would be affected by the deployment of SAI, starting from 2034, to either halve the speed of warming by 2100, to halt the warming, or to reverse it entirely.

An increase in planetary albedo of 1% would eliminate most of the radiative forcing from anthropogenic greenhouse gas emissions and thereby global warming, while a 2% albedo increase would negate the warming effect of doubling the atmospheric carbon dioxide concentration. This is the theory behind solar geoengineering, and the high reflective potential of sulfate aerosols means that they have long been considered in this capacity. In 1974, Mikhail Budyko suggested that if global warming became a problem, the planet could be cooled by burning sulfur in the stratosphere, which would create a haze. A cruder approach would simply send the sulfates into the troposphere – the lowest part of the atmosphere. Using it today would be equivalent to more than reversing the decades of air quality improvements, and the world would face the same issues which prompted the introduction of those regulations in the first place, such as acid rain. The suggestion of relying on tropospheric global dimming to curb warming has been described as a "Faustian bargain" and is not seriously considered by modern research.

Instead, starting with the seminal 2006 paper by Paul Crutzen, the solution advocated is known as stratospheric aerosol injection, or SAI. It would transport sulfates into the next higher layer of the atmosphere – the stratosphere – where they would last for years instead of weeks, so far less sulfur would have to be emitted. It has been estimated that the amount of sulfur needed to offset a warming of around 4 °C (7.2 °F) relative to now (and 5 °C (9.0 °F) relative to the preindustrial), under the highest-emission scenario RCP 8.5, would be less than what is already emitted through air pollution today, and that reductions in sulfur pollution from future air quality improvements already expected under that scenario would offset the sulfur used for geoengineering. The trade-off is increased cost. While there is a popular narrative that stratospheric aerosol injection can be carried out by individuals, small states, or other non-state rogue actors, scientific estimates suggest that cooling the atmosphere by 1 °C (1.8 °F) through stratospheric aerosol injection would cost at least $18 billion annually (at 2020 USD value), meaning that only the largest economies or economic blocs could afford this intervention. Even so, these approaches would still be "orders of magnitude" cheaper than greenhouse gas mitigation, let alone the costs of unmitigated effects of climate change.

The main downside to SAI is that any such cooling would still cease 1–3 years after the last aerosol injection, while the warming from CO2 emissions lasts for hundreds to thousands of years unless they are reversed earlier. This means that neither stratospheric aerosol injection nor other forms of solar geoengineering can be used as a substitute for reducing greenhouse gas emissions, because if solar geoengineering were to cease while greenhouse gas levels remained high, it would lead to "large and extremely rapid" warming and similarly abrupt changes to the water cycle. Many thousands of species would likely go extinct as a result. Instead, any solar geoengineering would act as a temporary measure to limit warming while emissions of greenhouse gases are reduced and carbon dioxide is removed, which may well take hundreds of years.

Other risks include limited knowledge about the regional impacts of solar geoengineering (beyond the certainty that even stopping or reversing the warming entirely would still result in significant changes in weather patterns in many areas) and, correspondingly, the impacts on ecosystems. It is generally believed that relative to now, crop yields and carbon sinks would be largely unaffected or may even increase slightly, because reduced photosynthesis due to lower sunlight would be offset by CO2 fertilization effect and the reduction in thermal stress, but there's less confidence about how specific ecosystems may be affected. Moreover, stratospheric aerosol injection is likely to somewhat increase mortality from skin cancer due to the weakened ozone layer, but it would also reduce mortality from ground-level ozone, with the net effect unclear. Changes in precipitation are also likely to shift the habitat of mosquitoes and thus substantially affect the distribution and spread of vector-borne diseases, with currently unclear consequences.

Thermosphere

From Wikipedia, the free encyclopedia
Earth's atmosphere as it appears from space, as bands of different colours at the horizon. From the bottom, afterglow illuminates the troposphere in orange with silhouettes of clouds, and the stratosphere in white and blue. Next the mesosphere (pink area) extends to just below the edge of space at one hundred kilometers and the pink line of airglow of the lower thermosphere (dark), which hosts green and red aurorae over several hundred kilometers.
A diagram of the layers of Earth's atmosphere

The thermosphere is the layer in the Earth's atmosphere directly above the mesosphere and below the exosphere. Within this layer of the atmosphere, ultraviolet radiation causes photoionization/photodissociation of molecules, creating ions; the thermosphere thus constitutes the larger part of the ionosphere. Taking its name from the Greek θερμός (pronounced thermos) meaning heat, the thermosphere begins at about 80 km (50 mi) above sea level. At these high altitudes, the residual atmospheric gases sort into strata according to molecular mass (see turbosphere). Thermospheric temperatures increase with altitude due to absorption of highly energetic solar radiation. Temperatures are highly dependent on solar activity, and can rise to 2,000 °C (3,630 °F) or more. Radiation causes the atmospheric particles in this layer to become electrically charged, enabling radio waves to be refracted and thus be received beyond the horizon. In the exosphere, beginning at about 600 km (375 mi) above sea level, the atmosphere gives way to space, although by the criteria used to define the Kármán line (100 km), most of the thermosphere is already part of space. The border between the thermosphere and exosphere is known as the thermopause.

The highly attenuated gas in this layer can reach 2,500 °C (4,530 °F). Despite the high temperature, an observer or object will experience low temperatures in the thermosphere, because the extremely low density of the gas (practically a hard vacuum) is insufficient for the molecules to conduct heat. A normal thermometer will read significantly below 0 °C (32 °F), at least at night, because the energy lost by thermal radiation would exceed the energy acquired from the atmospheric gas by direct contact. In the anacoustic zone above 160 kilometres (99 mi), the density is so low that molecular interactions are too infrequent to permit the transmission of sound.

The dynamics of the thermosphere are dominated by atmospheric tides, which are driven predominantly by diurnal heating. Atmospheric waves dissipate above this level because of collisions between the neutral gas and the ionospheric plasma.

The thermosphere is uninhabited with the exception of the International Space Station, which orbits the Earth within the middle of the thermosphere between 408 and 410 kilometres (254 and 255 mi) and the Tiangong space station, which orbits between 340 and 450 kilometres (210 and 280 mi).

Neutral gas constituents

It is convenient to separate the atmospheric regions according to the two temperature minima at an altitude of about 12 kilometres (7.5 mi) (the tropopause) and at about 85 kilometres (53 mi) (the mesopause) (Figure 1). The thermosphere (or the upper atmosphere) is the height region above 85 kilometres (53 mi), while the region between the tropopause and the mesopause is the middle atmosphere (stratosphere and mesosphere) where absorption of solar UV radiation generates the temperature maximum near an altitude of 45 kilometres (28 mi) and causes the ozone layer.

Figure 1. Nomenclature of atmospheric regions based on the profiles of electric conductivity (left), temperature (middle), and electron number density in m−3 (right)

The density of the Earth's atmosphere decreases nearly exponentially with altitude. The total mass of the atmosphere is M = ρA H  ≃ 1 kg/cm2 within a column of one square centimeter above the ground (with ρA = 1.29 kg/m3 the atmospheric density on the ground at z = 0 m altitude, and H ≃ 8 km the average atmospheric scale height). Eighty percent of that mass is concentrated within the troposphere. The mass of the thermosphere above about 85 kilometres (53 mi) is only 0.002% of the total mass. Therefore, no significant energetic feedback from the thermosphere to the lower atmospheric regions can be expected.
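As a quick sanity check of that column-mass figure, the numbers quoted above can be multiplied out directly; the following minimal C sketch uses only the values given in this section (ρA = 1.29 kg/m3 and H ≈ 8 km) and is purely illustrative:

#include <stdio.h>

int main(void) {
    const double rho_A = 1.29;     /* surface air density in kg/m^3, as quoted above       */
    const double H     = 8000.0;   /* average atmospheric scale height in m (about 8 km)   */

    double column_per_m2  = rho_A * H;            /* roughly 1.0e4 kg above each m^2       */
    double column_per_cm2 = column_per_m2 / 1e4;  /* 1 m^2 = 10^4 cm^2, so about 1 kg/cm^2 */

    printf("column mass: %.0f kg/m^2 = %.2f kg/cm^2\n", column_per_m2, column_per_cm2);
    return 0;
}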

Turbulence causes the air within the lower atmospheric regions below the turbopause at about 90 kilometres (56 mi) to be a mixture of gases that does not change its composition. Its mean molecular weight is 29 g/mol, with molecular oxygen (O2) and nitrogen (N2) as the two dominant constituents. Above the turbopause, however, diffusive separation of the various constituents is significant, so that each constituent follows its barometric height structure with a scale height inversely proportional to its molecular weight. The lighter constituents atomic oxygen (O), helium (He), and hydrogen (H) successively dominate above an altitude of about 200 kilometres (124 mi) and vary with geographic location, time, and solar activity. The ratio N2/O, which is a measure of the electron density at the ionospheric F region, is highly affected by these variations. These changes follow from the diffusion of the minor constituents through the major gas component during dynamic processes.

The thermosphere contains an appreciable concentration of elemental sodium located in a 10-kilometre (6.2 mi) thick band that occurs at the edge of the mesosphere, 80 to 100 kilometres (50 to 62 mi) above Earth's surface. The sodium has an average concentration of 400,000 atoms per cubic centimeter. This band is regularly replenished by sodium sublimating from incoming meteors. Astronomers have begun using this sodium band to create "guide stars" as part of the optical correction process in producing ultra-sharp ground-based observations.

Energy input

Energy budget

The thermospheric temperature can be determined from density observations as well as from direct satellite measurements. The temperature vs. altitude z in Fig. 1 can be simulated by the so-called Bates profile:

(1)  T(z) = T∞ − (T∞ − To) exp[−s (z − zo)]

with T∞ the exospheric temperature above about 400 km altitude, To = 355 K and zo = 120 km the reference temperature and height, and s an empirical parameter that depends on T∞ and decreases with increasing T∞. That formula is derived from a simple equation of heat conduction. One estimates a total heat input of qo ≃ 0.8 to 1.6 mW/m2 above zo = 120 km altitude. In order to obtain equilibrium conditions, that heat input qo above zo is lost to the lower atmospheric regions by heat conduction.

The exospheric temperature T∞ is a fair measurement of the solar XUV radiation. Since the solar radio emission F at 10.7 cm wavelength is a good indicator of solar activity, one can apply the following empirical formula for quiet magnetospheric conditions:

(2)  T∞ ≃ 500 + 3.4 Fo

with T∞ in K and Fo in units of 10−22 W m−2 Hz−1 (the Covington index), a value of F averaged over several solar cycles. The Covington index varies typically between 70 and 250 during a solar cycle, and never drops below about 50. Thus, T∞ varies between about 740 and 1350 K. During very quiet magnetospheric conditions, the still continuously flowing magnetospheric energy input contributes about 250 K to the residual temperature of 500 K in eq.(2). The remaining 250 K in eq.(2) can be attributed to atmospheric waves generated within the troposphere and dissipated within the lower thermosphere.
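Equations (1) and (2), as reconstructed above, can be combined into a small numerical sketch. The C fragment below is illustrative only: it uses the reference values quoted in the text (To = 355 K, zo = 120 km, and the 500 + 3.4 Fo relation), while the shape parameter s is an assumed placeholder value, since the text only states that it is empirical and decreases with T∞:

#include <math.h>
#include <stdio.h>

/* Eq. (2): exospheric temperature from the Covington index Fo (10.7 cm solar
   radio flux), valid for quiet magnetospheric conditions. */
double exospheric_temperature(double Fo) {
    return 500.0 + 3.4 * Fo;   /* in K; Fo = 70 gives ~740 K, Fo = 250 gives ~1350 K */
}

/* Eq. (1): Bates temperature profile above the reference height zo = 120 km. */
double bates_profile(double z_km, double T_inf) {
    const double To = 355.0;   /* reference temperature at zo, in K           */
    const double zo = 120.0;   /* reference height, in km                     */
    const double s  = 0.02;    /* empirical shape parameter per km (assumed)  */
    return T_inf - (T_inf - To) * exp(-s * (z_km - zo));
}

int main(void) {
    double T_inf = exospheric_temperature(150.0);   /* moderate solar activity        */
    printf("T_inf = %.0f K, T(400 km) = %.0f K\n",  /* profile approaches T_inf aloft */
           T_inf, bates_profile(400.0, T_inf));
    return 0;
}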

Solar XUV radiation

The solar X-ray and extreme ultraviolet radiation (XUV) at wavelengths < 170 nm is almost completely absorbed within the thermosphere. This radiation causes the various ionospheric layers as well as a temperature increase at these heights (Figure 1). While the solar visible light (380 to 780 nm) is nearly constant, with a variability of not more than about 0.1% of the solar constant, the solar XUV radiation is highly variable in time and space. For instance, X-ray bursts associated with solar flares can dramatically increase in intensity over preflare levels, by many orders of magnitude, over a period of tens of minutes. In the extreme ultraviolet, the Lyman α line at 121.6 nm represents an important source of ionization and dissociation at ionospheric D layer heights. During quiet periods of solar activity, it alone contains more energy than the rest of the XUV spectrum. Quasi-periodic changes of the order of 100% or greater, with periods of 27 days and 11 years, belong to the prominent variations of solar XUV radiation. However, irregular fluctuations over all time scales are present all the time. During low solar activity, about half of the total energy input into the thermosphere is thought to be solar XUV radiation. That solar XUV energy input occurs only during daytime conditions, maximizing at the equator during the equinoxes.

Solar wind

The second source of energy input into the thermosphere is solar wind energy, which is transferred to the magnetosphere by mechanisms that are not well understood. One possible way to transfer energy is via a hydrodynamic dynamo process. Solar wind particles penetrate the polar regions of the magnetosphere, where the geomagnetic field lines are essentially vertically directed. An electric field is generated, directed from dawn to dusk. Along the last closed geomagnetic field lines with their footpoints within the auroral zones, field-aligned electric currents can flow into the ionospheric dynamo region, where they are closed by electric Pedersen and Hall currents. Ohmic losses of the Pedersen currents heat the lower thermosphere (see e.g., Magnetospheric electric convection field). In addition, penetration of high-energy particles from the magnetosphere into the auroral regions drastically enhances the electric conductivity, further increasing the electric currents and thus Joule heating. During quiet magnetospheric activity, the magnetosphere contributes perhaps a quarter of the thermosphere's energy budget. This is about 250 K of the exospheric temperature in eq.(2). During very large activity, however, this heat input can increase substantially, by a factor of four or more. That solar wind input occurs mainly in the auroral regions, during both day and night.

Atmospheric waves

Two kinds of large-scale atmospheric waves within the lower atmosphere exist: internal waves with finite vertical wavelengths which can transport wave energy upward, and external waves with infinitely large wavelengths that cannot transport wave energy. Atmospheric gravity waves and most of the atmospheric tides generated within the troposphere belong to the internal waves. Their density amplitudes increase exponentially with height so that at the mesopause these waves become turbulent and their energy is dissipated (similar to breaking of ocean waves at the coast), thus contributing to the heating of the thermosphere by about 250  K in eq.(2). On the other hand, the fundamental diurnal tide labeled (1, −2) which is most efficiently excited by solar irradiance is an external wave and plays only a marginal role within the lower and middle atmosphere. However, at thermospheric altitudes, it becomes the predominant wave. It drives the electric Sq-current within the ionospheric dynamo region between about 100 and 200  km height.

Heating, predominantly by tidal waves, occurs mainly at lower and middle latitudes. The variability of this heating depends on the meteorological conditions within the troposphere and middle atmosphere, and may not exceed about 50%.

Dynamics

Figure 2. Schematic meridian-height cross-section of circulation of (a) symmetric wind component (P20), (b) of antisymmetric wind component (P10), and (d) of symmetric diurnal wind component (P11) at 3 h and 15 h local time. Upper right panel (c) shows the horizontal wind vectors of the diurnal component in the northern hemisphere depending on local time.

Within the thermosphere above an altitude of about 150 kilometres (93 mi), all atmospheric waves successively become external waves, and no significant vertical wave structure is visible. The atmospheric wave modes degenerate to the spherical functions Pnm, with m a meridional wave number and n the zonal wave number (m = 0: zonal mean flow; m = 1: diurnal tides; m = 2: semidiurnal tides; etc.). The thermosphere becomes a damped oscillator system with low-pass filter characteristics. This means that smaller-scale waves (greater numbers of (n,m)) and higher frequencies are suppressed in favor of large-scale waves and lower frequencies. If one considers very quiet magnetospheric conditions and a constant mean exospheric temperature (averaged over the sphere), the observed temporal and spatial distribution of the exospheric temperature can be described by a sum of spherical functions:

(3)  T∞ = T̄∞ { 1 + ΔT20 P20(φ) + ΔT10 P10(φ) cos[ωa(t − ta)] + ΔT11 P11(φ) cos(τ − τd) + ... }

Here φ is latitude, λ is longitude, and t is time; ωa is the angular frequency of one year, ωd the angular frequency of one solar day, and τ = ωd t + λ the local time; ta = June 21 is the date of northern summer solstice, and τd = 15:00 is the local time of maximum diurnal temperature.

The first term in (3) on the right is the global mean of the exospheric temperature (of the order of 1000  K). The second term [with P20 = 0.5(3 sin2(φ)−1)] represents the heat surplus at lower latitudes and a corresponding heat deficit at higher latitudes (Fig. 2a). A thermal wind system develops with the wind toward the poles in the upper level and winds away from the poles in the lower level. The coefficient ΔT20 ≈ 0.004 is small because Joule heating in the aurora regions compensates that heat surplus even during quiet magnetospheric conditions. During disturbed conditions, however, that term becomes dominant, changing sign so that now heat surplus is transported from the poles to the equator. The third term (with P10 = sin φ) represents heat surplus on the summer hemisphere and is responsible for the transport of excess heat from the summer into the winter hemisphere (Fig. 2b). Its relative amplitude is of the order ΔT10 ≃ 0.13. The fourth term (with P11(φ) = cos φ) is the dominant diurnal wave (the tidal mode (1,−2)). It is responsible for the transport of excess heat from the daytime hemisphere into the nighttime hemisphere (Fig. 2d). Its relative amplitude is ΔT11≃ 0.15, thus on the order of 150 K. Additional terms (e.g., semiannual, semidiurnal terms, and higher-order terms) must be added to eq.(3). However, they are of minor importance. Corresponding sums can be developed for density, pressure, and the various gas constituents.
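The temperature distribution of eq. (3) can also be evaluated numerically. The sketch below keeps only the four terms discussed above, with the amplitudes quoted in this section (a global mean of about 1000 K, ΔT20 ≈ 0.004, ΔT10 ≈ 0.13, ΔT11 ≈ 0.15, diurnal maximum at 15:00 local time); the higher-order terms are omitted because, as noted, they are of minor importance:

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Quiet-condition evaluation of eq. (3): exospheric temperature as a function of
   latitude phi (radians), day of year, and local time tau (hours). The annual phase
   is referenced to the northern summer solstice (day ~172), the diurnal maximum to 15:00. */
double exospheric_T(double phi, double day_of_year, double tau_hours) {
    const double T_mean = 1000.0;   /* global mean exospheric temperature, K */
    const double dT20 = 0.004, dT10 = 0.13, dT11 = 0.15;

    double P20 = 0.5 * (3.0 * sin(phi) * sin(phi) - 1.0);
    double P10 = sin(phi);
    double P11 = cos(phi);

    double annual  = cos(2.0 * PI * (day_of_year - 172.0) / 365.25);
    double diurnal = cos(2.0 * PI * (tau_hours - 15.0) / 24.0);

    return T_mean * (1.0 + dT20 * P20 + dT10 * P10 * annual + dT11 * P11 * diurnal);
}

int main(void) {
    /* Equatorial afternoon vs. equatorial night at the June solstice:
       the diurnal term alone shifts the temperature by roughly +-150 K. */
    printf("15:00: %.0f K, 03:00: %.0f K\n",
           exospheric_T(0.0, 172.0, 15.0), exospheric_T(0.0, 172.0, 3.0));
    return 0;
}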

Thermospheric storms

In contrast to solar XUV radiation, magnetospheric disturbances, indicated on the ground by geomagnetic variations, show an unpredictable impulsive character, from short periodic disturbances of the order of hours to long-standing giant storms of several days' duration. The reaction of the thermosphere to a large magnetospheric storm is called a thermospheric storm. Since the heat input into the thermosphere occurs at high latitudes (mainly into the auroral regions), the heat transport, represented by the term P20 in eq.(3), is reversed. Also, due to the impulsive form of the disturbance, higher-order terms are generated which, however, possess short decay times and thus quickly disappear. The sum of these modes determines the "travel time" of the disturbance to the lower latitudes, and thus the response time of the thermosphere with respect to the magnetospheric disturbance. Important for the development of an ionospheric storm is the increase of the ratio N2/O during a thermospheric storm at middle and higher latitudes. An increase of N2 increases the loss process of the ionospheric plasma and therefore causes a decrease of the electron density within the ionospheric F-layer (a negative ionospheric storm).

Climate change

A contraction of the thermosphere has been observed, possibly in part as a result of increased carbon dioxide concentrations, with the strongest cooling and contraction occurring in that layer during solar minimum. The most recent contraction, in 2008–2009, was the largest such contraction since at least 1967.

Variable (computer science)

From Wikipedia, the free encyclopedia
 

In computer programming, a variable is an abstract storage location paired with an associated symbolic name, which contains some known or unknown quantity of data or object referred to as a value; or, in simpler terms, a variable is a named container for a particular set of bits or type of data (such as integer, float, or string). A variable can eventually be associated with or identified by a memory address. The variable name is the usual way to reference the stored value, in addition to referring to the variable itself, depending on the context. This separation of name and content allows the name to be used independently of the exact information it represents. The identifier in computer source code can be bound to a value during run time, and the value of the variable may thus change during the course of program execution.

Variables in programming may not directly correspond to the concept of variables in mathematics. The latter is abstract, having no reference to a physical object such as storage location. The value of a computing variable is not necessarily part of an equation or formula as in mathematics. Variables in computer programming are frequently given long names to make them relatively descriptive of their use, whereas variables in mathematics often have terse, one- or two-character names for brevity in transcription and manipulation.

A variable's storage location may be referenced by several different identifiers, a situation known as aliasing. Assigning a value to the variable using one of the identifiers will change the value that can be accessed through the other identifiers.

Compilers have to replace variables' symbolic names with the actual locations of the data. While a variable's name, type, and location often remain fixed, the data stored in the location may be changed during program execution.

Actions on a variable

In imperative programming languages, values can generally be accessed or changed at any time. In pure functional and logic languages, variables are bound to expressions and keep a single value during their entire lifetime due to the requirements of referential transparency. In imperative languages, the same behavior is exhibited by (named) constants (symbolic constants), which are typically contrasted with (normal) variables.

Depending on the type system of a programming language, variables may only be able to store a specified data type (e.g. integer or string). Alternatively, a datatype may be associated only with the current value, allowing a single variable to store anything supported by the programming language.
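A minimal C sketch of both points, and of the named constants mentioned above: a variable declared with a type can only hold values of that type, while a named constant is bound once and is read-only afterwards (the identifiers used here are purely illustrative):

#include <stdio.h>

int main(void) {
    int count = 3;              /* a variable of type int: it may only hold integers        */
    const double RATE = 1.25;   /* a named (symbolic) constant: bound once, then read-only  */

    count = count + 1;          /* an ordinary variable can be reassigned at any time       */
    /* count = "three";            would be rejected by the compiler: wrong data type       */
    /* RATE  = 2.0;                would also be rejected: assignment to a constant         */

    printf("count = %d, rate = %.2f\n", count, RATE);
    return 0;
}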

Variables and scope:

  • Automatic variables: Each local variable in a function comes into existence only when the function is called, and disappears when the function is exited. Such variables are known as automatic variables.
  • External variables: These are variables that are external to a function and can be accessed by name by any function. These variables remain in existence permanently; rather than appearing and disappearing as functions are called and exited, they retain their values even after the functions that set them have returned. Both kinds are illustrated in the sketch after this list.
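A minimal C sketch of the two kinds of variables described in the list above (the function and variable names are illustrative):

#include <stdio.h>

int calls_so_far = 0;    /* external variable: exists for the whole run of the program  */

void handle_request(void) {
    int scratch = 1;     /* automatic variable: created when the function is called,    */
                         /* destroyed when it exits, so it starts fresh every time      */
    calls_so_far += 1;   /* retains its value between calls, so it keeps counting up    */
    printf("scratch = %d, calls_so_far = %d\n", scratch, calls_so_far);
}

int main(void) {
    handle_request();    /* prints: scratch = 1, calls_so_far = 1 */
    handle_request();    /* prints: scratch = 1, calls_so_far = 2 */
    return 0;
}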

Identifiers referencing a variable

An identifier referencing a variable can be used to access the variable in order to read out the value, or alter the value, or edit other attributes of the variable, such as access permission, locks, semaphores, etc.

For instance, a variable might be referenced by the identifier "total_count" and the variable can contain the number 1956. If the same variable is referenced by the identifier "r" as well, and if using this identifier "r", the value of the variable is altered to 2009, then reading the value using the identifier "total_count" will yield a result of 2009 and not 1956.

If a variable is only referenced by a single identifier, that identifier can simply be called the name of the variable; otherwise we can speak of it as one of the names of the variable. For instance, in the previous example the identifier "total_count" is a name of the variable in question, and "r" is another name of the same variable.
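In C, this situation can be reproduced with a pointer serving as the second identifier; the sketch below mirrors the example in the text, with "r" declared as an alias for the storage named "total_count":

#include <stdio.h>

int main(void) {
    int total_count = 1956;       /* the variable, referenced by the identifier total_count */
    int *r = &total_count;        /* a second identifier (an alias) for the same storage    */

    *r = 2009;                    /* altering the value through the alias ...               */
    printf("%d\n", total_count);  /* ... is visible through the original name: prints 2009  */
    return 0;
}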

Scope and extent

The scope of a variable describes where in a program's text the variable may be used, while the extent (also called lifetime) of a variable describes when in a program's execution the variable has a (meaningful) value. The scope of a variable affects its extent. The scope of a variable is actually a property of the name of the variable, and the extent is a property of the storage location of the variable. These should not be confused with context (also called environment), which is a property of the program, and varies by point in the program's text or execution—see scope: an overview. Further, object lifetime may coincide with variable lifetime, but in many cases is not tied to it.

Scope is an important part of the name resolution of a variable. Most languages define a specific scope for each variable (as well as any other named entity), which may differ within a given program. The scope of a variable is the portion of the program's text for which the variable's name has meaning and for which the variable is said to be "visible". Entrance into that scope typically begins a variable's lifetime (as it comes into context) and exit from that scope typically ends its lifetime (as it goes out of context). For instance, a variable with "lexical scope" is meaningful only within a certain function/subroutine, or more finely within a block of expressions/statements (accordingly with function scope or block scope); this is static resolution, performable at parse-time or compile-time. Alternatively, a variable with dynamic scope is resolved at run-time, based on a global binding stack that depends on the specific control flow. Variables only accessible within a certain function are termed "local variables". A "global variable", or one with indefinite scope, may be referred to anywhere in the program.
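A short C illustration of lexical scope as described above: the file-level name is visible everywhere, the function-local name only inside its function, and the block-local name only inside its block (the identifiers are illustrative):

#include <stdio.h>

int global_limit = 10;                 /* global variable: indefinite (file-wide) scope    */

void demo(void) {
    int local_total = 0;               /* local variable: function scope                   */
    for (int i = 0; i < global_limit; i++) {   /* i: block scope, visible only in the loop */
        local_total += i;
    }
    /* i is no longer visible here; referring to it would be a compile-time error. */
    printf("%d\n", local_total);       /* prints 45 */
}

int main(void) {
    demo();
    /* local_total is not visible here either: it is local to demo(). */
    return 0;
}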

Extent, on the other hand, is a runtime (dynamic) aspect of a variable. Each binding of a variable to a value can have its own extent at runtime. The extent of the binding is the portion of the program's execution time during which the variable continues to refer to the same value or memory location. A running program may enter and leave a given extent many times, as in the case of a closure.
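
A closure is the usual way this arises in practice. In the hypothetical Python sketch below, each call to make_counter creates a fresh binding of count, and that binding outlives the call because the returned function keeps referring to it:

>>> def make_counter():
...     count = 0                  # a new binding of count on each call
...     def bump():
...         nonlocal count         # refer to the enclosing binding
...         count += 1
...         return count
...     return bump                # the closure keeps the binding alive
...
>>> counter = make_counter()
>>> counter(), counter()
(1, 2)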

Unless the programming language features garbage collection, a variable whose extent permanently outlasts its scope can result in a memory leak, whereby the memory allocated for the variable can never be freed since the variable which would be used to reference it for deallocation purposes is no longer accessible. However, it can be permissible for a variable binding to extend beyond its scope, as occurs in Lisp closures and C static local variables; when execution passes back into the variable's scope, the variable may once again be used. A variable whose scope begins before its extent does is said to be uninitialized and often has an undefined, arbitrary value if accessed (see wild pointer), since it has yet to be explicitly given a particular value. A variable whose extent ends before its scope may become a dangling pointer and deemed uninitialized once more since its value has been destroyed. Variables described by the previous two cases may be said to be out of extent or unbound. In many languages, it is an error to try to use the value of a variable when it is out of extent. In other languages, doing so may yield unpredictable results. Such a variable may, however, be assigned a new value, which gives it a new extent.

For space efficiency, the memory needed for a variable may be allocated only when the variable is first used and freed when it is no longer needed. A variable is only needed while it is in scope, but beginning each variable's lifetime as soon as it enters scope may allocate space to variables that are never actually used. To avoid wasting such space, compilers often warn programmers if a variable is declared but not used.

It is considered good programming practice to make the scope of variables as narrow as feasible so that different parts of a program do not accidentally interact with each other by modifying each other's variables. Doing so also prevents action at a distance. Common techniques for doing so are to have different sections of a program use different name spaces, or to make individual variables "private" through either dynamic variable scoping or lexical variable scoping.

Many programming languages employ a reserved value (often named null or nil) to indicate an invalid or uninitialized variable.

Typing

In statically typed languages such as Go or ML, a variable also has a type, meaning that only certain kinds of values can be stored in it. For example, a variable of type "integer" is prohibited from storing text values.

In dynamically typed languages such as Python, a variable's type is determined by its current value and can change as its value changes. In Common Lisp, both situations exist simultaneously: a variable is given a type (if undeclared, it is assumed to be T, the universal supertype) which exists at compile time. Values also have types, which can be checked and queried at run time.
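
For example, in Python the following is legal, because the type travels with the value rather than with the name (the name x here is arbitrary):

>>> x = 42
>>> type(x)
<class 'int'>
>>> x = "forty-two"                # the same name now refers to a string
>>> type(x)
<class 'str'>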

Typing of variables also allows polymorphisms to be resolved at compile time. However, this is different from the polymorphism used in object-oriented function calls (referred to as virtual functions in C++) which resolves the call based on the value type as opposed to the supertypes the variable is allowed to have.

Variables often store simple data, like integers and literal strings, but some programming languages allow a variable to store values of other datatypes as well. Such languages may also enable functions to be parametrically polymorphic. These functions operate like variables in that they can represent data of multiple types. For example, a function named length may determine the length of a list. Such a length function may be parametrically polymorphic by including a type variable in its type signature, since the number of elements in the list is independent of the elements' types.
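
A rough sketch of such a function in Python, using the standard typing module: the type variable T in the signature records that length works for sequences of any element type. (The name length and the loop-based implementation are illustrative; Python code would normally use the built-in len.)

>>> from typing import Sequence, TypeVar
>>> T = TypeVar("T")                       # type variable: element type does not matter
>>> def length(items: Sequence[T]) -> int:
...     n = 0
...     for _ in items:                    # count elements regardless of their type
...         n += 1
...     return n
...
>>> length([1, 2, 3]), length("abc")
(3, 3)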

Parameters

The formal parameters (or formal arguments) of functions are also referred to as variables. For instance, in this Python code segment,

>>> def addtwo(x):
...     return x + 2
...
>>> addtwo(5)
7

the variable named x is a parameter because it is given a value when the function is called. The integer 5 is the argument which gives x its value. In most languages, function parameters have local scope. This specific variable named x can only be referred to within the addtwo function (though of course other functions can also have variables called x).

Memory allocation

The specifics of variable allocation and the representation of their values vary widely, both among programming languages and among implementations of a given language. Many language implementations allocate space for local variables, whose extent lasts for a single function call on the call stack, and whose memory is automatically reclaimed when the function returns. More generally, in name binding, the name of a variable is bound to the address of some particular block (contiguous sequence) of bytes in memory, and operations on the variable manipulate that block. Referencing is more common for variables whose values have large or unknown sizes when the code is compiled. Such variables reference the location of the value instead of storing the value itself, which is allocated from a pool of memory called the heap.
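
In Python, for instance, every variable holds a reference to a heap-allocated object, so copying a variable copies only the reference, not the underlying block of memory (the names big and alias are hypothetical):

>>> big = [0] * 1_000_000          # the list object itself lives on the heap
>>> alias = big                    # copies the reference, not the million elements
>>> alias is big                   # both names refer to the same object
True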

Bound variables have values. A value, however, is an abstraction, an idea; in implementation, a value is represented by some data object, which is stored somewhere in computer memory. The program, or the runtime environment, must set aside memory for each data object and, since memory is finite, ensure that this memory is yielded for reuse when the object is no longer needed to represent some variable's value.

Objects allocated from the heap must be reclaimed, especially when they are no longer needed. In a garbage-collected language (such as C#, Java, Python, Golang and Lisp), the runtime environment automatically reclaims objects when extant variables can no longer refer to them. In non-garbage-collected languages, such as C, the program (and the programmer) must explicitly allocate memory and later free it to reclaim it. Failure to do so leads to memory leaks, in which the heap is depleted as the program runs, risking eventual failure from exhausting available memory.
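
A small sketch of automatic reclamation in CPython, where reference counting reclaims an object as soon as no variable refers to it (the class Blob and the weak reference probe are illustrative):

>>> import weakref
>>> class Blob:
...     pass
...
>>> b = Blob()
>>> probe = weakref.ref(b)         # a weak reference does not keep the object alive
>>> probe() is b
True
>>> del b                          # no variable refers to the object any more
>>> probe() is None                # CPython reclaims it via reference counting
True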

When a variable refers to a data structure created dynamically, some of its components may be only indirectly accessed through the variable. In such circumstances, garbage collectors (or analogous program features in languages that lack garbage collectors) must deal with a case where only a portion of the memory reachable from the variable needs to be reclaimed.

Naming conventions

Unlike their mathematical counterparts, programming variables and constants commonly take multiple-character names, e.g. COST or total. Single-character names are most commonly used only for auxiliary variables; for instance, i, j, k for array index variables.

Some naming conventions are enforced at the language level as part of the language syntax that governs the format of valid identifiers. In almost all languages, variable names cannot start with a digit (0–9) and cannot contain whitespace characters. Whether punctuation marks are permitted in variable names varies from language to language; many languages only permit the underscore ("_") in variable names and forbid all other punctuation. In some programming languages, sigils (symbols or punctuation) are affixed to variable identifiers to indicate the variable's datatype or scope.

Case-sensitivity of variable names also varies between languages, and some languages require the use of a certain case when naming certain entities. Most modern languages are case-sensitive; some older languages are not. Some languages reserve certain forms of variable names for their own internal use; in many languages, names beginning with two underscores ("__") often fall under this category.

However, beyond the basic restrictions imposed by a language, the naming of variables is largely a matter of style. At the machine code level, variable names are not used, so the exact names chosen do not matter to the computer. Variable names exist purely for programmers: they are a tool to make programs easier to write and understand. Poorly chosen, non-descriptive variable names can make code more difficult to review, so clear and descriptive names are often encouraged.

Programmers often create and adhere to code style guidelines that offer guidance on naming variables or impose a precise naming scheme. Shorter names are faster to type but are less descriptive; longer names often make programs easier to read and the purpose of variables easier to understand. However, extreme verbosity in variable names can also lead to less comprehensible code.

Variable types (based on lifetime)

Variables can also be classified by their lifetime into four categories: static, stack-dynamic, explicit heap-dynamic, and implicit heap-dynamic.

  • Static variables are bound to a memory cell before execution begins and remain bound to the same memory cell until termination; they are commonly used as global variables. Typical examples are static variables in C and C++.
  • Stack-dynamic variables, commonly known as local variables, are bound when their declaration statement is executed and deallocated when the procedure returns. The main examples are local variables in C subprograms and Java methods.
  • Explicit heap-dynamic variables are nameless (abstract) memory cells that are allocated and deallocated by explicit run-time instructions specified by the programmer. The main examples are dynamic objects in C++ (via new and delete) and all objects in Java.
  • Implicit heap-dynamic variables are bound to heap storage only when they are assigned values; allocation and release occur each time a value is reassigned. As a result, implicit heap-dynamic variables have the highest degree of flexibility. The main examples are some variables in JavaScript and PHP, and all variables in APL.
