
Thursday, July 8, 2021

Genetic drift

Genetic drift (allelic drift or the Sewall Wright effect) is the change in the frequency of an existing gene variant (allele) in a population due to random sampling of organisms. The alleles in the offspring are a sample of those in the parents, and chance has a role in determining whether a given individual survives and reproduces. A population's allele frequency is the fraction of the copies of one gene that share a particular form.

Genetic drift may cause gene variants to disappear completely and thereby reduce genetic variation. It can also cause initially rare alleles to become much more frequent and even fixed.

When there are few copies of an allele, the effect of genetic drift is larger, and when there are many copies the effect is smaller. In the middle of the 20th century, vigorous debates occurred over the relative importance of natural selection versus neutral processes, including genetic drift. Ronald Fisher, who explained natural selection using Mendelian genetics, held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. In 1968, population geneticist Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most instances where a genetic change spreads across a population (although not necessarily changes in phenotypes) are caused by genetic drift acting on neutral mutations.

Analogy with marbles in a jar

The process of genetic drift can be illustrated using 20 marbles in a jar to represent 20 organisms in a population. Consider this jar of marbles as the starting population. Half of the marbles in the jar are red and half are blue, with each colour corresponding to a different allele of one gene in the population. In each new generation the organisms reproduce at random. To represent this reproduction, randomly select a marble from the original jar and deposit a new marble with the same colour into a new jar. This is the "offspring" of the original marble, meaning that the original marble remains in its jar. Repeat this process until there are 20 new marbles in the second jar. The second jar will now contain 20 "offspring", or marbles of various colours. Unless the second jar contains exactly 10 red marbles and 10 blue marbles, a random shift has occurred in the allele frequencies.

If this process is repeated a number of times, the numbers of red and blue marbles picked each generation will fluctuate. Sometimes a jar will have more red marbles than its "parent" jar and sometimes more blue. This fluctuation is analogous to genetic drift – a change in the population's allele frequency resulting from a random variation in the distribution of alleles from one generation to the next.

It is even possible that in any one generation no marbles of a particular colour are chosen, meaning they have no offspring. In this example, if no red marbles are selected, the jar representing the new generation contains only blue offspring. If this happens, the red allele has been lost permanently in the population, while the remaining blue allele has become fixed: all future generations are entirely blue. In small populations, fixation can occur in just a few generations.

In this simulation, each black dot on a marble signifies that it has been chosen for copying (reproduction) one time; the blue "allele" reaches fixation within five generations.
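
The marble-drawing procedure described above can be sketched in a few lines of Python; the jar size, colours, and helper name here are illustrative choices, and the output will differ from run to run because the draws are random.

```python
import random

def next_generation(jar):
    """Each offspring copies the colour of a marble drawn (with replacement)
    from the parent jar, so the new jar has the same size as the old one."""
    return [random.choice(jar) for _ in range(len(jar))]

# Starting population: 10 red and 10 blue marbles (two alleles at equal frequency).
jar = ["red"] * 10 + ["blue"] * 10

generation = 0
while len(set(jar)) > 1:          # stop once a single colour has become fixed
    jar = next_generation(jar)
    generation += 1
    print(generation, jar.count("red"), "red /", jar.count("blue"), "blue")

print("Fixed colour:", jar[0], "after", generation, "generations")
```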

Probability and allele frequency

The mechanisms of genetic drift can be illustrated with a simplified example. Consider a very large colony of bacteria isolated in a drop of solution. The bacteria are genetically identical except for a single gene with two alleles labeled A and B. A and B are neutral alleles meaning that they do not affect the bacteria's ability to survive and reproduce; all bacteria in this colony are equally likely to survive and reproduce. Suppose that half the bacteria have allele A and the other half have allele B. Thus A and B each have allele frequency 1/2.

The drop of solution then shrinks until it has only enough food to sustain four bacteria. All other bacteria die without reproducing. Among the four who survive, there are sixteen possible combinations for the A and B alleles:

(A-A-A-A), (B-A-A-A), (A-B-A-A), (B-B-A-A),
(A-A-B-A), (B-A-B-A), (A-B-B-A), (B-B-B-A),
(A-A-A-B), (B-A-A-B), (A-B-A-B), (B-B-A-B),
(A-A-B-B), (B-A-B-B), (A-B-B-B), (B-B-B-B).

Since all bacteria in the original solution are equally likely to survive when the solution shrinks, the four survivors are a random sample from the original colony. The probability that each of the four survivors has a given allele is 1/2, and so the probability that any particular allele combination occurs when the solution shrinks is

(1/2) × (1/2) × (1/2) × (1/2) = (1/2)^4 = 1/16.

(The original population size is so large that the sampling effectively happens with replacement.) In other words, each of the sixteen possible allele combinations is equally likely to occur, with probability 1/16.

Counting the combinations with the same number of A and B, we get the following table.

Copies of A  Copies of B  Combinations  Probability
     4            0             1           1/16
     3            1             4           4/16
     2            2             6           6/16
     1            3             4           4/16
     0            4             1           1/16

As shown in the table, the total number of combinations that have the same number of A alleles as B alleles is six, and the probability of this outcome is 6/16. The total number of other combinations is ten, so the probability of an unequal number of A and B alleles is 10/16. Thus, although the original colony began with an equal number of A and B alleles, it is quite possible that the number of alleles in the remaining population of four members will not be equal. The equal outcome is actually less likely than an unequal one. In the latter case, genetic drift has occurred because the population's allele frequencies have changed due to random sampling. In this example the population contracted to just four random survivors, a phenomenon known as a population bottleneck.

The probability for the number of copies of allele A (or B) that survive (given in the last column of the above table) can be calculated directly from the binomial distribution, where the "success" probability (the probability that a given survivor carries the allele) is 1/2. The probability that there are k copies of allele A (or B) among the four survivors is

P(k) = C(n, k) × (1/2)^k × (1/2)^(n − k) = C(n, k)/16,

where n = 4 is the number of surviving bacteria and C(n, k) is the binomial coefficient ("n choose k").
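
A quick way to check these numbers is to evaluate the binomial formula directly; this short Python sketch assumes n = 4 survivors and a success probability of 1/2, as in the example.

```python
from math import comb

n = 4      # surviving bacteria
p = 0.5    # probability that any one survivor carries allele A

# P(k) = C(n, k) * p**k * (1 - p)**(n - k): probability of exactly k copies of A
for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"{k} copies of A: {comb(n, k)} combinations, probability {comb(n, k)}/16 = {prob}")
```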

Mathematical models

Mathematical models of genetic drift can be designed using either branching processes or a diffusion equation describing changes in allele frequency in an idealised population.

Wright–Fisher model

Consider a gene with two alleles, A or B. In diploid populations consisting of N individuals there are 2N copies of each gene. An individual can have two copies of the same allele or two different alleles. We can call the frequency of one allele p and the frequency of the other q. The Wright–Fisher model (named after Sewall Wright and Ronald Fisher) assumes that generations do not overlap (for example, annual plants have exactly one generation per year) and that each copy of the gene found in the new generation is drawn independently at random from all copies of the gene in the old generation. The formula to calculate the probability of obtaining k copies of an allele that had frequency p in the last generation is then

P(k) = [(2N)! / (k! (2N − k)!)] p^k q^(2N − k),

where the symbol "!" signifies the factorial function. This expression can also be formulated using the binomial coefficient,

P(k) = C(2N, k) p^k q^(2N − k).
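
As a rough illustration, Wright–Fisher sampling amounts to drawing the 2N gene copies of the next generation binomially from the current allele frequency. The Python sketch below does exactly that; the population size, starting frequency, and number of generations are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng()

def wright_fisher(N, p0, generations):
    """Frequency trajectory of one allele in a diploid Wright-Fisher population:
    each generation, the 2N gene copies are drawn binomially from the parents."""
    p, freqs = p0, [p0]
    for _ in range(generations):
        k = rng.binomial(2 * N, p)   # copies of the allele sampled for the next generation
        p = k / (2 * N)
        freqs.append(p)
    return freqs

# Example run: 50 diploid individuals, starting allele frequency 0.5, 100 generations.
print(wright_fisher(N=50, p0=0.5, generations=100)[-1])
```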

Moran model

The Moran model assumes overlapping generations. At each time step, one individual is chosen to reproduce and one individual is chosen to die. So in each time step, the number of copies of a given allele can go up by one, go down by one, or stay the same. The transition matrix is therefore tridiagonal, which makes mathematical solutions easier for the Moran model than for the Wright–Fisher model. On the other hand, computer simulations are usually easier to perform using the Wright–Fisher model, because fewer time steps need to be calculated. In the Moran model, it takes N time steps to get through one generation, where N is the effective population size. In the Wright–Fisher model, it takes just one.

In practice, the Moran and Wright–Fisher models give qualitatively similar results, but genetic drift runs twice as fast in the Moran model.
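
A comparable sketch of a single Moran step, with illustrative numbers; note that roughly N time steps correspond to one generation, as described above.

```python
import random

def moran_step(k, N):
    """One Moran step for a haploid population of N individuals carrying k copies
    of allele A: one random individual reproduces and one random individual dies."""
    birth_is_A = random.random() < k / N   # the offspring copies a random parent
    death_is_A = random.random() < k / N   # a random individual is removed
    return k + int(birth_is_A) - int(death_is_A)

N, k, steps = 20, 10, 0
while 0 < k < N:                           # run until allele A is lost or fixed
    k = moran_step(k, N)
    steps += 1

print("Allele A", "fixed" if k == N else "lost", "after", steps, "steps",
      f"(about {steps / N:.1f} generations)")
```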

Other models of drift

If the variance in the number of offspring is much greater than that given by the binomial distribution assumed by the Wright–Fisher model, then given the same overall speed of genetic drift (the variance effective population size), genetic drift is a less powerful force compared to selection. Even for the same variance, if higher moments of the offspring number distribution exceed those of the binomial distribution then again the force of genetic drift is substantially weakened.

Random effects other than sampling error

Random changes in allele frequencies can also be caused by effects other than sampling error, for example random changes in selection pressure.

One important alternative source of stochasticity, perhaps more important than genetic drift, is genetic draft. Genetic draft is the effect on a locus by selection on linked loci. The mathematical properties of genetic draft are different from those of genetic drift. The direction of the random change in allele frequency is autocorrelated across generations.

Drift and fixation

The Hardy–Weinberg principle states that within sufficiently large populations, the allele frequencies remain constant from one generation to the next unless the equilibrium is disturbed by migration, genetic mutations, or selection.

However, in finite populations, no new alleles are gained from the random sampling of alleles passed to the next generation, but the sampling can cause an existing allele to disappear. Because random sampling can remove, but not replace, an allele, and because random declines or increases in allele frequency influence expected allele distributions for the next generation, genetic drift drives a population towards genetic uniformity over time. When an allele reaches a frequency of 1 (100%) it is said to be "fixed" in the population and when an allele reaches a frequency of 0 (0%) it is lost. Smaller populations achieve fixation faster, whereas in the limit of an infinite population, fixation is not achieved. Once an allele becomes fixed, genetic drift comes to a halt, and the allele frequency cannot change unless a new allele is introduced in the population via mutation or gene flow. Thus even while genetic drift is a random, directionless process, it acts to eliminate genetic variation over time.

Rate of allele frequency change due to drift

Ten simulations of random genetic drift of a single given allele with an initial frequency of 0.5, followed over the course of 50 generations and repeated in three reproductively synchronous populations of different sizes. In these simulations, alleles drift to loss or fixation (frequency of 0.0 or 1.0) only in the smallest population.

Assuming genetic drift is the only evolutionary force acting on an allele, after t generations in many replicated populations, starting with allele frequencies of p and q, the variance in allele frequency across those populations is

V_t = pq [1 − (1 − 1/(2Ne))^t] ≈ pq (1 − e^(−t/(2Ne))).
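
This prediction can be checked by simulation; the sketch below reuses binomial Wright–Fisher sampling across many replicate populations, with N standing in for Ne and arbitrary choices of p and t.

```python
import numpy as np

rng = np.random.default_rng(1)

def replicate_frequencies(N, p0, t, replicates):
    """Run independent Wright-Fisher populations for t generations and return
    the final allele frequency of each replicate."""
    p = np.full(replicates, p0)
    for _ in range(t):
        p = rng.binomial(2 * N, p) / (2 * N)
    return p

N, p0, t = 100, 0.5, 20
freqs = replicate_frequencies(N, p0, t, replicates=100_000)
predicted = p0 * (1 - p0) * (1 - (1 - 1 / (2 * N)) ** t)
print(f"simulated variance {freqs.var():.5f}  vs  predicted {predicted:.5f}")
```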

Time to fixation or loss

Assuming genetic drift is the only evolutionary force acting on an allele, at any given time the probability that an allele will eventually become fixed in the population is simply its frequency in the population at that time. For example, if the frequency p for allele A is 75% and the frequency q for allele B is 25%, then given unlimited time the probability A will ultimately become fixed in the population is 75% and the probability that B will become fixed is 25%.

The expected number of generations for fixation to occur is proportional to the population size, such that fixation is predicted to occur much more rapidly in smaller populations. Normally the effective population size, which is smaller than the total population, is used to determine these probabilities. The effective population size (Ne) takes into account factors such as the level of inbreeding, the stage of the lifecycle in which the population is the smallest, and the fact that some neutral genes are genetically linked to others that are under selection. The effective population size may not be the same for every gene in the same population.

One forward-looking formula used for approximating the expected time before a neutral allele becomes fixed through genetic drift, according to the Wright–Fisher model, is

T_fixed = −4Ne (1 − p) ln(1 − p) / p,

where T is the number of generations, Ne is the effective population size, and p is the initial frequency for the given allele. The result is the number of generations expected to pass before fixation occurs for a given allele in a population with given size (Ne) and allele frequency (p).

The expected time for the neutral allele to be lost through genetic drift can be calculated as

T_lost = −4Ne (p / (1 − p)) ln(p).

When a mutation appears only once in a population large enough for the initial frequency to be negligible, the formulas can be simplified to

T_fixed = 4Ne

for the average number of generations expected before fixation of a neutral mutation, and

T_lost = 2 (Ne / N) ln(2N)

for the average number of generations expected before the loss of a neutral mutation, where N is the total (census) population size.
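
These expressions are easy to evaluate directly; the Python sketch below implements them and checks the new-mutation limits quoted above (the population sizes chosen are arbitrary).

```python
import math

def time_to_fixation(Ne, p):
    """Expected generations until fixation of a neutral allele at frequency p,
    given that it eventually fixes (Wright-Fisher diffusion approximation)."""
    return -4 * Ne * (1 - p) * math.log(1 - p) / p

def time_to_loss(Ne, p):
    """Expected generations until loss of a neutral allele at frequency p,
    given that it is eventually lost."""
    return -4 * Ne * (p / (1 - p)) * math.log(p)

Ne = N = 1000
p_new = 1 / (2 * N)                          # a brand-new mutation: one copy among 2N
print(round(time_to_fixation(Ne, p_new)))    # close to 4*Ne = 4000 generations
print(round(time_to_loss(Ne, p_new), 1))     # close to 2*(Ne/N)*ln(2N), about 15.2 generations
```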

Time to loss with both drift and mutation

The formulae above apply to an allele that is already present in a population, and which is subject to neither mutation nor natural selection. If an allele is lost by mutation much more often than it is gained by mutation, then mutation, as well as drift, may influence the time to loss. If the allele prone to mutational loss begins as fixed in the population, and is lost by mutation at rate m per replication, then the expected time in generations until its loss in a haploid population is approximately

T_loss ≈ 1/m,  if m·Ne ≪ 1,
T_loss ≈ (ln(m·Ne) + γ) / m,  if m·Ne ≫ 1,

where γ is Euler's constant. The first approximation represents the waiting time until the first mutant destined for loss, with loss then occurring relatively rapidly by genetic drift, taking time Ne ≪ 1/m. The second approximation represents the time needed for deterministic loss by mutation accumulation. In both cases, the time to loss is dominated by mutation via the term 1/m, and is less affected by the effective population size.

Versus natural selection

In natural populations, genetic drift and natural selection do not act in isolation; both phenomena are always at play, together with mutation and migration. Neutral evolution is the product of both mutation and drift, not of drift alone. Similarly, even when selection overwhelms genetic drift, it can only act on variation that mutation provides.

While natural selection has a direction, guiding evolution towards heritable adaptations to the current environment, genetic drift has no direction and is guided only by the mathematics of chance. As a result, drift acts upon the genotypic frequencies within a population without regard to their phenotypic effects. In contrast, selection favors the spread of alleles whose phenotypic effects increase survival and/or reproduction of their carriers, lowers the frequencies of alleles that cause unfavorable traits, and ignores those that are neutral.

The law of large numbers predicts that when the absolute number of copies of the allele is small (e.g., in small populations), the magnitude of drift on allele frequencies per generation is larger. The magnitude of drift is large enough to overwhelm selection at any allele frequency when the selection coefficient is less than 1 divided by the effective population size. Non-adaptive evolution resulting from the product of mutation and genetic drift is therefore considered to be a consequential mechanism of evolutionary change primarily within small, isolated populations. The mathematics of genetic drift depend on the effective population size, but it is not clear how this is related to the actual number of individuals in a population. Genetic linkage to other genes that are under selection can reduce the effective population size experienced by a neutral allele. With a higher recombination rate, linkage decreases and with it this local effect on effective population size. This effect is visible in molecular data as a correlation between local recombination rate and genetic diversity, and negative correlation between gene density and diversity at noncoding DNA regions. Stochasticity associated with linkage to other genes that are under selection is not the same as sampling error, and is sometimes known as genetic draft in order to distinguish it from genetic drift.

When the allele frequency is very small, drift can also overpower selection even in large populations. For example, while disadvantageous mutations are usually eliminated quickly in large populations, new advantageous mutations are almost as vulnerable to loss through genetic drift as are neutral mutations. Only once the allele frequency of an advantageous mutation reaches a certain threshold does genetic drift cease to have an appreciable effect.
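
One way to see this interplay is to simulate a haploid Wright–Fisher population with a selection coefficient s. In the sketch below (population size and coefficients are illustrative), a new allele with s much smaller than 1/N fixes at close to the neutral rate of 1/N, while a strongly favoured allele fixes far more often.

```python
import numpy as np

rng = np.random.default_rng(2)

def fixation_rate(N, s, p0, replicates=10_000):
    """Fraction of replicate haploid Wright-Fisher populations in which an allele
    with selection coefficient s, starting at frequency p0, reaches fixation."""
    p = np.full(replicates, p0)
    while not np.all((p == 0) | (p == 1)):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection reweights the sampling probability
        p = rng.binomial(N, p_sel) / N                   # drift: binomial resampling of N copies
    return (p == 1).mean()

N = 100
print(fixation_rate(N, s=0.001, p0=1 / N))   # s << 1/N: close to the neutral value 1/N = 0.01
print(fixation_rate(N, s=0.20, p0=1 / N))    # s >> 1/N: selection dominates, fixation is far more likely
```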

Population bottleneck

Changes in a population's allele frequency following a population bottleneck: the rapid and radical decline in population size has reduced the population's genetic variation.

A population bottleneck is when a population contracts to a significantly smaller size over a short period of time due to some random environmental event. In a true population bottleneck, the odds for survival of any member of the population are purely random, and are not improved by any particular inherent genetic advantage. The bottleneck can result in radical changes in allele frequencies, completely independent of selection.

The impact of a population bottleneck can be sustained, even when the bottleneck is caused by a one-time event such as a natural catastrophe. An interesting example of a bottleneck causing unusual genetic distribution is the relatively high proportion of individuals with total rod cell color blindness (achromatopsia) on Pingelap atoll in Micronesia. After a bottleneck, inbreeding increases. This increases the damage done by recessive deleterious mutations, in a process known as inbreeding depression. The worst of these mutations are selected against, leading to the loss of other alleles that are genetically linked to them, in a process of background selection. For recessive harmful mutations, this selection can be enhanced as a consequence of the bottleneck, due to genetic purging. This leads to a further loss of genetic diversity. In addition, a sustained reduction in population size increases the likelihood of further allele fluctuations from drift in generations to come.

A population's genetic variation can be greatly reduced by a bottleneck, and even beneficial adaptations may be permanently eliminated. The loss of variation leaves the surviving population vulnerable to any new selection pressures such as disease, climatic change or shift in the available food source, because adapting in response to environmental changes requires sufficient genetic variation in the population for natural selection to take place.

There have been many known cases of population bottleneck in the recent past. Prior to the arrival of Europeans, North American prairies were habitat for millions of greater prairie chickens. In Illinois alone, their numbers plummeted from about 100 million birds in 1900 to about 50 birds in the 1990s. The declines in population resulted from hunting and habitat destruction, but a consequence has been a loss of most of the species' genetic diversity. DNA analysis comparing birds from the mid-20th century to birds in the 1990s documents a steep decline in genetic variation in just the latter few decades. Currently the greater prairie chicken is experiencing low reproductive success.

However, the genetic loss caused by bottleneck and genetic drift can increase fitness, as in Ehrlichia.

Over-hunting also caused a severe population bottleneck in the northern elephant seal in the 19th century. Their resulting decline in genetic variation can be deduced by comparing it to that of the southern elephant seal, which was not hunted so aggressively.

Founder effect

When very few members of a population migrate to form a separate new population, the founder effect occurs. For a period after the foundation, the small population experiences intensive drift. In the figure this results in fixation of the red allele.

The founder effect is a special case of a population bottleneck, occurring when a small group in a population splinters off from the original population and forms a new one. The random sample of alleles in the just formed new colony is expected to grossly misrepresent the original population in at least some respects. It is even possible that the number of alleles for some genes in the original population is larger than the number of gene copies in the founders, making complete representation impossible. When a newly formed colony is small, its founders can strongly affect the population's genetic make-up far into the future.

A well-documented example is found in the Amish migration to Pennsylvania in 1744. Two members of the new colony shared the recessive allele for Ellis–Van Creveld syndrome. Members of the colony and their descendants tend to be religious isolates and remain relatively insular. As a result of many generations of inbreeding, Ellis–Van Creveld syndrome is now much more prevalent among the Amish than in the general population.

The difference in gene frequencies between the original population and colony may also trigger the two groups to diverge significantly over the course of many generations. As the difference, or genetic distance, increases, the two separated populations may become distinct, both genetically and phenetically, although not only genetic drift but also natural selection, gene flow, and mutation contribute to this divergence. This potential for relatively rapid changes in the colony's gene frequency led most scientists to consider the founder effect (and by extension, genetic drift) a significant driving force in the evolution of new species. Sewall Wright was the first to attach this significance to random drift and small, newly isolated populations with his shifting balance theory of speciation. Following Wright, Ernst Mayr created many persuasive models to show that the decline in genetic variation and small population size following the founder effect were critically important for new species to develop. However, there is much less support for this view today since the hypothesis has been tested repeatedly through experimental research and the results have been equivocal at best.

History

The role of random chance in evolution was first outlined by Arend L. Hagedoorn and A. C. Hagedoorn-Vorstheuvel La Brand in 1921. They highlighted that random survival plays a key role in the loss of variation from populations. Fisher (1922) responded to this with the first, albeit marginally incorrect, mathematical treatment of the 'Hagedoorn effect'. Notably, he expected that many natural populations were too large (N ~ 10,000) for the effects of drift to be substantial, and thought drift would have an insignificant effect on the evolutionary process.

The corrected mathematical treatment was later provided by Sewall Wright, a founder of population genetics, who also coined the term "genetic drift". His first use of the term "drift" was in 1929, though at the time he was using it in the sense of a directed process of change, or natural selection. Random drift by means of sampling error came to be known as the "Sewall Wright effect", though he was never entirely comfortable seeing his name given to it. Wright referred to all changes in allele frequency as either "steady drift" (e.g., selection) or "random drift" (e.g., sampling error). "Drift" came to be adopted as a technical term in the stochastic sense exclusively. Today it is usually defined still more narrowly, in terms of sampling error, although this narrow definition is not universal. Wright wrote that the "restriction of 'random drift' or even 'drift' to only one component, the effects of accidents of sampling, tends to lead to confusion." Wright considered the process of random genetic drift by means of sampling error equivalent to that by means of inbreeding, but later work has shown them to be distinct.

In the early days of the modern evolutionary synthesis, scientists were beginning to blend the new science of population genetics with Charles Darwin's theory of natural selection. Within this framework, Wright focused on the effects of inbreeding on small, relatively isolated populations. He introduced the concept of an adaptive landscape in which phenomena such as cross breeding and genetic drift in small populations could push them away from adaptive peaks, which would in turn allow natural selection to push them towards new adaptive peaks. Wright thought smaller populations were more suited for natural selection because "inbreeding was sufficiently intense to create new interaction systems through random drift but not intense enough to cause random nonadaptive fixation of genes."

Wright's views on the role of genetic drift in the evolutionary scheme were controversial almost from the very beginning. One of the most vociferous and influential critics was his colleague Ronald Fisher. Fisher conceded that genetic drift played some role in evolution, but an insignificant one. Fisher has been accused of misunderstanding Wright's views because in his criticisms Fisher seemed to argue that Wright had rejected selection almost entirely. To Fisher, viewing the process of evolution as a long, steady, adaptive progression was the only way to explain the ever-increasing complexity from simpler forms. But the debates have continued between the "gradualists" and those who lean more toward the Wright model of evolution, in which selection and drift together play an important role.

In 1968, Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most of the genetic changes are caused by genetic drift acting on neutral mutations.

The role of genetic drift by means of sampling error in evolution has been criticized by John H. Gillespie and William B. Provine, who argue that selection on linked sites is a more important stochastic force.

Wednesday, July 7, 2021

History of climate change science


The history of the scientific discovery of climate change began in the early 19th century when ice ages and other natural changes in paleoclimate were first suspected and the natural greenhouse effect was first identified. In the late 19th century, scientists first argued that human emissions of greenhouse gases could change the climate. Many other theories of climate change were advanced, involving forces from volcanism to solar variation. In the 1960s, the evidence for the warming effect of carbon dioxide gas became increasingly convincing. Some scientists also pointed out that human activities that generated atmospheric aerosols (e.g., "pollution") could have cooling effects as well.

During the 1970s, scientific opinion increasingly favored the warming viewpoint. By the 1990s, as a result of the improving fidelity of computer models and observational work confirming the Milankovitch theory of the ice ages, a consensus position formed: greenhouse gases were deeply involved in most climate changes and human-caused emissions were bringing discernible global warming. Since the 1990s, scientific research on climate change has included multiple disciplines and has expanded, improving our understanding of causal relations, links with historical data, and the ability to model climate change numerically. Research during this period has been summarized in the Assessment Reports of the Intergovernmental Panel on Climate Change.

Climate change, broadly interpreted, is a significant and lasting change in the statistical distribution of weather patterns over periods ranging from decades to millions of years. It may be a change in average weather conditions, or in the distribution of weather around the average conditions (such as more or fewer extreme weather events). Climate change is caused by factors that include oceanic processes (such as oceanic circulation), biotic processes (e.g., plants), variations in solar radiation received by Earth, plate tectonics and volcanic eruptions, and human-induced alterations of the natural world. The latter effect is currently causing global warming, and "climate change" is often used to describe human-specific impacts.

Regional changes, antiquity through 19th century

From ancient times, people suspected that the climate of a region could change over the course of centuries. For example, Theophrastus, a pupil of Aristotle, told how the draining of marshes had made a particular locality more susceptible to freezing, and speculated that lands became warmer when the clearing of forests exposed them to sunlight. Renaissance and later scholars saw that deforestation, irrigation, and grazing had altered the lands around the Mediterranean since ancient times; they thought it plausible that these human interventions had affected the local weather. Vitruvius, in the first century BC, wrote about climate in relation to housing architecture and how to choose locations for cities.

The 18th and 19th-century conversion of Eastern North America from forest to croplands brought obvious change within a human lifetime. From the early 19th century, many believed the transformation was altering the region's climate—probably for the better. When farmers in America, dubbed "sodbusters", took over the Great Plains, they held that "rain follows the plow." Other experts disagreed, and some argued that deforestation caused rapid rainwater run-off and flooding, and could even result in reduced rainfall. European academics, convinced of the superiority of their own civilization, said that the Orientals of the Ancient Near East had heedlessly converted their once lush lands into impoverished deserts.

Meanwhile, national weather agencies had begun to compile masses of reliable observations of temperature, rainfall, and the like. When these figures were analyzed, they showed many rises and dips, but no steady long-term change. By the end of the 19th century, scientific opinion had turned decisively against any belief in a human influence on climate. And whatever the regional effects, few imagined that humans could affect the climate of the planet as a whole.

Paleo-climate change and theories of its causes, 19th century

Erratics, boulders deposited by glaciers far from any existing glaciers, led geologists to the conclusion that climate had changed in the past.

From the mid-17th century, naturalists attempted to reconcile mechanical philosophy with theology, initially within a Biblical timescale. By the late 18th century, there was increasing acceptance of prehistoric epochs. Geologists found evidence of a succession of geological ages with changes in climate. There were various competing theories about these changes; Buffon proposed that the Earth had begun as an incandescent globe and was very gradually cooling. James Hutton, whose ideas of cyclic change over huge periods of time were later dubbed uniformitarianism, was among those who found signs of past glacial activity in places too warm for glaciers in modern times.

In 1815 Jean-Pierre Perraudin described for the first time how glaciers might be responsible for the giant boulders seen in alpine valleys. As he hiked in the Val de Bagnes, he noticed giant granite rocks that were scattered around the narrow valley. He knew that it would take an exceptional force to move such large rocks. He also noticed how glaciers left stripes on the land and concluded that it was the ice that had carried the boulders down into the valleys.

His idea was initially met with disbelief. Jean de Charpentier wrote, "I found his hypothesis so extraordinary and even so extravagant that I considered it as not worth examining or even considering." Despite Charpentier's initial rejection, Perraudin eventually convinced Ignaz Venetz that it might be worth studying. Venetz convinced Charpentier, who in turn convinced the influential scientist Louis Agassiz that the glacial theory had merit.

Agassiz developed a theory of what he termed "Ice Age"—when glaciers covered Europe and much of North America. In 1837 Agassiz was the first to scientifically propose that the Earth had been subject to a past ice age. William Buckland had been a leading proponent in Britain of flood geology, later dubbed catastrophism, which accounted for erratic boulders and other "diluvium" as relics of the Biblical flood. This was strongly opposed by Charles Lyell's version of Hutton's uniformitarianism and was gradually abandoned by Buckland and other catastrophist geologists. A field trip to the Alps with Agassiz in October 1838 convinced Buckland that features in Britain had been caused by glaciation, and both he and Lyell strongly supported the ice age theory which became widely accepted by the 1870s.

Joseph Fourier

Before the concept of ice ages was proposed, Joseph Fourier in 1824 reasoned on the basis of physics that Earth's atmosphere kept the planet warmer than would be the case in a vacuum. Fourier recognized that the atmosphere transmitted visible light waves efficiently to the earth's surface. The earth then absorbed visible light and emitted infrared radiation in response, but the atmosphere did not transmit infrared efficiently, which therefore increased surface temperatures. He also suspected that human activities could influence climate, although he focused primarily on land-use changes. In an 1827 paper, Fourier stated, "The establishment and progress of human societies, the action of natural forces, can notably change, and in vast regions, the state of the surface, the distribution of water and the great movements of the air. Such effects are able to make to vary, in the course of many centuries, the average degree of heat; because the analytic expressions contain coefficients relating to the state of the surface and which greatly influence the temperature." Fourier's work built on previous discoveries: in 1681 Edme Mariotte noted that glass, though transparent to sunlight, obstructs radiant heat. Around 1774 Horace Bénédict de Saussure showed that non-luminous warm objects emit infrared heat, and used a glass-topped insulated box to trap and measure heat from sunlight.

The physicist Claude Pouillet proposed in 1838 that water vapor and carbon dioxide might trap infrared and warm the atmosphere, but there was still no experimental evidence of these gases absorbing heat from thermal radiation.

The warming effect of electromagnetic radiation on different gases was examined in 1856 by Eunice Newton Foote, who described her experiments using glass tubes exposed to sunlight. The warming effect of the sun was greater for compressed air than for an evacuated tube, and greater for moist air than for dry air. "Thirdly, the highest effect of the sun's rays I have found to be in carbonic acid gas" (carbon dioxide). She continued: "An atmosphere of that gas would give to our earth a high temperature; and if, as some suppose, at one period of its history, the air had mixed with it a larger proportion than at present, an increased temperature from its own action, as well as from an increased weight, must have necessarily resulted." Her work was presented by Prof. Joseph Henry at the American Association for the Advancement of Science meeting in August 1856 and described in a brief note written by the then journalist David Ames Wells; her paper was published later that year in the American Journal of Science and Arts.

John Tyndall took Fourier's work one step further in 1859 when he investigated the absorption of infrared radiation in different gases. He found that water vapor, hydrocarbons like methane (CH4), and carbon dioxide (CO2) strongly block the radiation.

Some scientists suggested that ice ages and other great climate changes were due to changes in the amount of gases emitted in volcanism. But that was only one of many possible causes. Another obvious possibility was solar variation. Shifts in ocean currents also might explain many climate changes. For changes over millions of years, the raising and lowering of mountain ranges would change patterns of both winds and ocean currents. Or perhaps the climate of a continent had not changed at all, but it had grown warmer or cooler because of polar wander (the North Pole shifting to where the Equator had been or the like). There were dozens of theories.

For example, in the mid-19th century, James Croll published calculations of how the gravitational pulls of the Sun, Moon, and planets subtly affect the Earth's motion and orientation. The inclination of the Earth's axis and the shape of its orbit around the Sun oscillate gently in cycles lasting tens of thousands of years. During some periods the Northern Hemisphere would get slightly less sunlight during the winter than it would get during other centuries. Snow would accumulate, reflecting sunlight and leading to a self-sustaining ice age. Most scientists, however, found Croll's ideas—and every other theory of climate change—unconvincing.

First calculations of greenhouse effect, 1896

In 1896 Svante Arrhenius calculated the effect of a doubling of atmospheric carbon dioxide to be an increase in surface temperatures of 5–6 degrees Celsius.
 

By the late 1890s, Samuel Pierpont Langley along with Frank W. Very had attempted to determine the surface temperature of the Moon by measuring infrared radiation leaving the Moon and reaching the Earth. The angle of the Moon in the sky when a scientist took a measurement determined how much CO2 and water vapor the Moon's radiation had to pass through to reach the Earth's surface, resulting in weaker measurements when the Moon was low in the sky. This result was unsurprising given that scientists had known about infrared radiation absorption for decades.

In 1896 Svante Arrhenius used Langley's observations of increased infrared absorption where Moon rays pass through the atmosphere at a low angle, encountering more carbon dioxide (CO2), to estimate an atmospheric cooling effect from a future decrease of CO2. He realized that the cooler atmosphere would hold less water vapor (another greenhouse gas) and calculated the additional cooling effect. He also realized the cooling would increase snow and ice cover at high latitudes, making the planet reflect more sunlight and thus further cool down, as James Croll had hypothesized. Overall Arrhenius calculated that cutting CO2 in half would suffice to produce an ice age. He further calculated that a doubling of atmospheric CO2 would give a total warming of 5–6 degrees Celsius.

Further, Arrhenius' colleague Arvid Högbom, who was quoted at length in Arrhenius' 1896 study On the Influence of Carbonic Acid in the Air upon the Temperature of the Earth, had been attempting to quantify natural sources of emissions of CO2 for purposes of understanding the global carbon cycle. Högbom found that estimated carbon production from industrial sources in the 1890s (mainly coal burning) was comparable with the natural sources. Arrhenius saw that this human emission of carbon would eventually lead to warming. However, because of the relatively low rate of CO2 production in 1896, Arrhenius thought the warming would take thousands of years, and he expected it would be beneficial to humanity.

In 1899 Thomas Chrowder Chamberlin developed at length the idea that changes in climate could result from changes in the concentration of atmospheric carbon dioxide. Chamberlin wrote in his 1899 book, An Attempt to Frame a Working Hypothesis of the Cause of Glacial Periods on an Atmospheric Basis:

Previous advocacy of an atmospheric hypothesis, — The general doctrine that the glacial periods may have been due to a change in the atmospheric content of carbon dioxide is not new. It was urged by Tyndall a half century ago and has been urged by others since. Recently it has been very effectively advocated by Dr. Arrhenius, who has taken a great step in advance of his predecessors in reducing his conclusions to definite quantitative terms deduced from observational data. [..] The functions of carbon dioxide. — By the investigations of Tyndall, Lecher and Pretner, Keller, Roentgen, and Arrhenius, it has been shown that the carbon dioxide and water vapor of the atmosphere have remarkable power of absorbing and temporarily retaining heat rays, while the oxygen, nitrogen, and argon of the atmosphere possess this power in a feeble degree only. It follows that the effect of the carbon dioxide and water vapor is to blanket the earth with a thermally absorbent envelope. [..] The general results assignable to a greatly increased or a greatly reduced quantity of atmospheric carbon dioxide and water may be summarized as follows:

  • a. An increase, by causing a larger absorption of the sun's radiant energy, raises the average temperature, while a reduction lowers it. The estimate of Dr. Arrhenius, based upon an elaborate mathematical discussion of the observations of Professor Langley, is that an increase of the carbon dioxide to the amount of two or three times the present content would elevate the average temperature 8° or 9° C. and would bring on a mild climate analogous to that which prevailed in the Middle Tertiary age. On the other hand, a reduction of the quantity of carbon dioxide in the atmosphere to an amount ranging from 55 to 62 per cent, of the present content, would reduce the average temperature 4 or 5 C, which would bring on a glaciation comparable to that of the Pleistocene period.
  • b. A second effect of increase and decrease in the amount of atmospheric carbon dioxide is the equalization, on the one hand, of surface temperatures, or their differentiation on the other. The temperature of the surface of the earth varies with latitude, altitude, the distribution of land and water, day and night, the seasons, and some other elements that may here be neglected. It is postulated that an increase in the thermal absorption of the atmosphere equalizes the temperature, and tends to eliminate the variations attendant on these contingencies. Conversely, a reduction of thermal atmospheric absorption tends to intensify all of these variations. A secondary effect of intensification of differences of temperature is an increase of atmospheric movements in the effort to restore equilibrium. Increased atmospheric movements, which are necessarily convectional, carry the warmer air to the surface of the atmosphere, and facilitate the discharge of the heat and thus intensify the primary effect. [..]

In the case of the outgoing rays, which are absorbed in much larger proportions than the incoming rays because they are more largely long-wave rays, the tables of Arrhenius show that the absorption is augmented by increase of carbonic acid in greater proportions in high latitudes than in low; for example, the increase of temperature for three times the present content of carbonic acid is 21.5 per cent, greater between 60° and 70° N. latitude than at the equator.

It now becomes necessary to assign agencies capable of removing carbon dioxide from the atmosphere at a rate sufficiently above the normal rate of supply, at certain times, to produce glaciation; and on the other hand, capable of restoring it to the atmosphere at certain other times in sufficient amounts to produce mild climates.

When the temperature is rising after a glacial episode, dissociation is promoted, and the ocean gives forth its carbon dioxide at an increased rate, and thereby assists in accelerating the amelioration of climate.

A study of the life of the geological periods seems to indicate that there were very notable fluctuations in the total mass of living matter. To be sure there was a reciprocal relation between the life of the land and that of the sea, so that when the latter was extended upon the continental platforms and greatly augmented, the former was contracted, but notwithstanding this it seems clear that the sum of life activity fluctuated notably during the ages. It is believed that on the whole it was greatest at the periods of sea extension and mild climates, and least at the times of disruption and climatic intensification. This factor then acted antithetically to the carbonic acid freeing previously noted, and, so far as it went, tended to offset its effects.

In periods of sea extension and of land reduction (base-level periods in particular), the habitat of shallow water lime-secreting life is concurrently extended, giving to the agencies that set carbon dioxide free accelerated activity, which is further aided by the consequent rising temperature which reduces the absorptive power of the ocean and increases dissociation. At the same time, the area of the land being diminished, a low consumption of carbon dioxide both in original decomposition of the silicates and in the solution of the limestones and dolomites obtains.

Thus the reciprocating agencies again conjoin, but now to increase the carbon dioxide of the air. These are the great and essential factors. They are modified by several subordinate agencies already mentioned, but the quantitative effect of these is thought to be quite insufficient to prevent very notable fluctuations in the atmospheric constitution.

As a result, it is postulated that geological history has been accentuated by an alternation of climatic episodes embracing, on the one hand, periods of mild, equable, moist climate nearly uniform for the whole globe; and on the other, periods when there were extremes of aridity and precipitation, and of heat and cold; these last denoted by deposits of salt and gypsum, of subaerial conglomerates, of red sandstones and shales, of arkose deposits, and occasionally by glaciation in low latitudes.

The term "greenhouse effect" for this warming was introduced by John Henry Poynting in 1909, in a commentary discussing the effect of the atmosphere on the temperature of the Earth and Mars.

Paleoclimates and sunspots, early 1900s to 1950s

Arrhenius's calculations were disputed and subsumed into a larger debate over whether atmospheric changes had caused the ice ages. Experimental attempts to measure infrared absorption in the laboratory seemed to show that little difference resulted from increasing CO2 levels, and also found significant overlap between absorption by CO2 and absorption by water vapor, all of which suggested that increasing carbon dioxide emissions would have little climatic effect. These early experiments were later found to be insufficiently accurate, given the instrumentation of the time. Many scientists also thought that the oceans would quickly absorb any excess carbon dioxide.

Other theories of the causes of climate change fared no better. The principal advances were in observational paleoclimatology, as scientists in various fields of geology worked out methods to reveal ancient climates. Wilmot H. Bradley found that annual varves of clay laid down in lake beds showed climate cycles. Andrew Ellicott Douglass saw strong indications of climate change in tree rings. Noting that the rings were thinner in dry years, he reported climate effects from solar variations, particularly in connection with the 17th-century dearth of sunspots (the Maunder Minimum) noticed previously by William Herschel and others. Other scientists, however, found good reason to doubt that tree rings could reveal anything beyond random regional variations. The value of tree rings for climate study was not solidly established until the 1960s.

Through the 1930s the most persistent advocate of a solar-climate connection was astrophysicist Charles Greeley Abbot. By the early 1920s, he had concluded that the solar "constant" was misnamed: his observations showed large variations, which he connected with sunspots passing across the face of the Sun. He and a few others pursued the topic into the 1960s, convinced that sunspot variations were a main cause of climate change. Other scientists were skeptical. Nevertheless, attempts to connect the solar cycle with climate cycles were popular in the 1920s and 1930s. Respected scientists announced correlations that they insisted were reliable enough to make predictions. Sooner or later, every prediction failed, and the subject fell into disrepute.

Meanwhile, Milutin Milankovitch, building on James Croll's theory, improved the tedious calculations of the varying distances and angles of the Sun's radiation as the Sun and Moon gradually perturbed the Earth's orbit. Some observations of varves (layers seen in the mud covering the bottom of lakes) matched the prediction of a Milankovitch cycle lasting about 21,000 years. However, most geologists dismissed the astronomical theory. For they could not fit Milankovitch's timing to the accepted sequence, which had only four ice ages, all of them much longer than 21,000 years.

In 1938 Guy Stewart Callendar attempted to revive Arrhenius's greenhouse-effect theory. Callendar presented evidence that both temperature and the CO2 level in the atmosphere had been rising over the past half-century, and he argued that newer spectroscopic measurements showed that the gas was effective in absorbing infrared in the atmosphere. Nevertheless, most scientific opinion continued to dispute or ignore the theory.

Increasing concern, 1950s–1960s

Charles Keeling, receiving the National Medal of Science from George W. Bush, in 2001

Better spectrography in the 1950s showed that CO2 and water vapor absorption lines did not overlap completely. Climatologists also realized that little water vapor was present in the upper atmosphere. Both developments showed that the CO2 greenhouse effect would not be overwhelmed by water vapor.

In 1955 Hans Suess's carbon-14 isotope analysis showed that CO2 released from fossil fuels was not immediately absorbed by the ocean. In 1957, a better understanding of ocean chemistry led Roger Revelle to realize that the ocean surface layer had limited ability to absorb carbon dioxide; he also predicted the rise in CO2 levels that Charles David Keeling later confirmed. By the late 1950s, more scientists were arguing that carbon dioxide emissions could be a problem, with some projecting in 1959 that CO2 would rise 25% by the year 2000, with potentially "radical" effects on climate. At the centennial of the American oil industry in 1959, organized by the American Petroleum Institute and the Columbia Graduate School of Business, Edward Teller said "It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. [....] At present the carbon dioxide in the atmosphere has risen by 2 per cent over normal. By 1970, it will be perhaps 4 per cent, by 1980, 8 per cent, by 1990, 16 per cent if we keep on with our exponential rise in the use of purely conventional fuels." In 1960 Charles David Keeling demonstrated that the level of CO2 in the atmosphere was in fact rising. Concern mounted year by year along with the rise of the "Keeling Curve" of atmospheric CO2.

Another clue to the nature of climate change came in the mid-1960s from analysis of deep-sea cores by Cesare Emiliani and analysis of ancient corals by Wallace Broecker and collaborators. Rather than four long ice ages, they found a large number of shorter ones in a regular sequence. It appeared that the timing of ice ages was set by the small orbital shifts of the Milankovitch cycles. While the matter remained controversial, some began to suggest that the climate system is sensitive to small changes and can readily be flipped from a stable state into a different one.

Scientists meanwhile began using computers to develop more sophisticated versions of Arrhenius's calculations. In 1967, taking advantage of the ability of digital computers to integrate absorption curves numerically, Syukuro Manabe and Richard Wetherald made the first detailed calculation of the greenhouse effect incorporating convection (the "Manabe-Wetherald one-dimensional radiative-convective model"). They found that, in the absence of unknown feedbacks such as changes in clouds, a doubling of carbon dioxide from the current level would result in approximately 2 °C increase in global temperature.

By the 1960s, aerosol pollution ("smog") had become a serious local problem in many cities, and some scientists began to consider whether the cooling effect of particulate pollution could affect global temperatures. Scientists were unsure whether the cooling effect of particulate pollution or warming effect of greenhouse gas emissions would predominate, but regardless, began to suspect that human emissions could be disruptive to climate in the 21st century if not sooner. In his 1968 book The Population Bomb, Paul R. Ehrlich wrote, "the greenhouse effect is being enhanced now by the greatly increased level of carbon dioxide... [this] is being countered by low-level clouds generated by contrails, dust, and other contaminants... At the moment we cannot predict what the overall climatic results will be of our using the atmosphere as a garbage dump."

Efforts to establish a global temperature record that began in 1938 culminated in 1963, when J. Murray Mitchell presented one of the first up-to-date temperature reconstructions. His study involved data from over 200 weather stations, collected by the World Weather Records, which was used to calculate latitudinal average temperatures. In his presentation, Mitchell showed that, beginning in 1880, global temperatures increased steadily until 1940. After that, a multi-decade cooling trend emerged. Mitchell's work contributed to the overall acceptance of a possible global cooling trend.

In 1965, the landmark report, "Restoring the Quality of Our Environment" by U.S. President Lyndon B. Johnson’s Science Advisory Committee warned of the harmful effects of fossil fuel emissions:

The part that remains in the atmosphere may have a significant effect on climate; carbon dioxide is nearly transparent to visible light, but it is a strong absorber and back radiator of infrared radiation, particularly in the wave lengths from 12 to 18 microns; consequently, an increase of atmospheric carbon dioxide could act, much like the glass in a greenhouse, to raise the temperature of the lower air.

The committee used the recently available global temperature reconstructions and carbon dioxide data from Charles David Keeling and colleagues to reach their conclusions. They declared the rise of atmospheric carbon dioxide levels to be the direct result of fossil fuel burning. The committee concluded that human activities were sufficiently large to have significant, global impact—beyond the area the activities take place. “Man is unwittingly conducting a vast geophysical experiment,” the committee wrote.

Nobel Prize winner Glenn T. Seaborg, Chairperson of the United States Atomic Energy Commission warned of the climate crisis in 1966: "At the rate we are currently adding carbon dioxide to our atmosphere (six billion tons a year), within the next few decades the heat balance of the atmosphere could be altered enough to produce marked changes in the climate--changes which we might have no means of controlling even if by that time we have made great advances in our programs of weather modification."

A 1968 study by the Stanford Research Institute for the American Petroleum Institute noted:

If the earth's temperature increases significantly, a number of events might be expected to occur, including the melting of the Antarctic ice cap, a rise in sea levels, warming of the oceans, and an increase in photosynthesis. [..] Revelle makes the point that man is now engaged in a vast geophysical experiment with his environment, the earth. Significant temperature changes are almost certain to occur by the year 2000 and these could bring about climatic changes.

In 1969, NATO became the first international body to attempt to deal with climate change at an international level. The plan was to establish a hub of research and initiatives for the organization in the civil area, dealing with environmental topics such as acid rain and the greenhouse effect. The suggestion of US President Richard Nixon was not very successful with the administration of German Chancellor Kurt Georg Kiesinger, but the topics and the preparation work done on the NATO proposal by the German authorities gained international momentum (see, e.g., the Stockholm United Nations Conference on the Human Environment of 1972) as the government of Willy Brandt started to apply them in the civil sphere instead.

Also in 1969, Mikhail Budyko published a theory on the ice–albedo feedback, a foundational element of what is today known as Arctic amplification. The same year, a similar model was published by William D. Sellers. Both studies attracted significant attention, since they hinted at the possibility of a runaway positive feedback within the global climate system.
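To illustrate the kind of behaviour these studies pointed to, the following minimal zero-dimensional energy-balance sketch in Python lets surface albedo rise as the planet cools. It is not Budyko's or Sellers's actual model, and the parameter values (solar constant, effective emissivity, albedo ramp) are illustrative assumptions only. Scanning for temperatures where absorbed sunlight balances outgoing radiation typically reveals a cold, ice-covered stable state and a warm stable state separated by an unstable one, which is why a modest perturbation can, in principle, tip the system.

import numpy as np

SOLAR = 1361.0      # solar constant, W/m^2 (assumed round value)
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.61   # crude effective emissivity standing in for the greenhouse effect (assumed)

def albedo(temp_k):
    # Ice-albedo feedback: a colder planet is icier and reflects more sunlight.
    # Smooth ramp from 0.6 (ice-covered) down to 0.3 (largely ice-free).
    return 0.6 - 0.3 * 0.5 * (1.0 + np.tanh((temp_k - 275.0) / 5.0))

def net_flux(temp_k):
    # Absorbed shortwave minus outgoing longwave, in W/m^2.
    return SOLAR / 4.0 * (1.0 - albedo(temp_k)) - EMISSIVITY * SIGMA * temp_k**4

# Scan a range of temperatures and report equilibria (sign changes of the net flux).
temps = np.linspace(220.0, 320.0, 2001)
flux = net_flux(temps)
for i in np.where(np.sign(flux[:-1]) != np.sign(flux[1:]))[0]:
    kind = "stable" if flux[i] > 0 else "unstable"  # warming below, cooling above => stable
    print(f"equilibrium near {temps[i]:.1f} K ({kind})")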

Scientists increasingly predict warming, 1970s

Mean temperature anomalies during the period 1965 to 1975 with respect to the average temperatures from 1937 to 1946. This dataset was not available at the time.

In the early 1970s, evidence that aerosols were increasing worldwide and that the global temperature series showed cooling encouraged Reid Bryson and some others to warn of the possibility of severe cooling. The questions and concerns raised by Bryson and others launched a new wave of research into the factors behind such global cooling. Meanwhile, new evidence that the timing of ice ages was set by predictable orbital cycles suggested that the climate would gradually cool over thousands of years. Several scientific panels from this period concluded that more research was needed to determine whether warming or cooling was likely, indicating that the scientific literature had not yet reached a consensus. For the century ahead, however, a survey of the scientific literature from 1965 to 1979 found 7 articles predicting cooling and 44 predicting warming (many other articles on climate made no prediction); the warming articles were cited much more often in subsequent scientific literature. With nearly six times as many studies predicting warming as cooling, research emphasis and scientific concern were directed largely at warming and the greenhouse effect.

In 1972, John Sawyer published the study Man-made Carbon Dioxide and the “Greenhouse” Effect. He summarized the scientific knowledge of the time: the anthropogenic attribution of the carbon dioxide greenhouse gas, its distribution, and its exponential rise, findings which still hold today. He also accurately predicted the rate of global warming for the period between 1972 and 2000.

The increase of 25% CO2 expected by the end of the century therefore corresponds to an increase of 0.6°C in the world temperature – an amount somewhat greater than the climatic variation of recent centuries. – John Sawyer, 1972
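As a rough check of this figure, assuming warming depends roughly logarithmically on CO2 concentration and a sensitivity of about 2 °C per doubling of CO2 (the figure the models discussed below were producing), a 25% rise gives:

ΔT ≈ 2 °C × ln(1.25)/ln(2) ≈ 2 °C × 0.32 ≈ 0.6 °C

This back-of-envelope relation is an illustration, not necessarily the calculation Sawyer performed, but it is consistent with the value he quoted.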

The first satellite records compiled in the early 1970s showed snow and ice cover over the Northern Hemisphere to be increasing, prompting further scrutiny into the possibility of global cooling. J. Murray Mitchell updated his global temperature reconstruction in 1972, which continued to show cooling. However, scientists determined that the cooling observed by Mitchell was not a global phenomenon. Global averages were changing, in large part due to unusually severe winters experienced by Asia and some parts of North America in 1972 and 1973, but these changes were mostly constrained to the Northern Hemisphere. In the Southern Hemisphere, the opposite trend was observed. The severe winters, however, pushed the issue of global cooling into the public eye.

The mainstream news media at the time exaggerated the warnings of the minority who expected imminent cooling. For example, in 1975, Newsweek magazine published a story titled “The Cooling World” that warned of "ominous signs that the Earth's weather patterns have begun to change." The article drew on studies documenting increasing snow and ice cover in parts of the Northern Hemisphere and on claims by Reid Bryson that cooling caused by aerosols would dominate carbon dioxide warming. The article continued by stating that evidence of global cooling was so strong that meteorologists were having "a hard time keeping up with it." On 23 October 2006, Newsweek issued an update stating that it had been "spectacularly wrong about the near-term future". Nevertheless, this article and others like it had long-lasting effects on public perception of climate science.

Such media coverage heralding the coming of a new ice age led to the belief that this was the consensus among scientists, even though it was not reflected in the scientific literature. As it became apparent that scientific opinion was in favor of global warming, the public began to express doubt over how trustworthy the science was. The argument that scientists were wrong about global cooling, and so may be wrong about global warming, has been called the “Ice Age Fallacy” by TIME author Bryan Walsh.

In the first two "Reports for the Club of Rome" in 1972 and 1974, the anthropogenic climate changes by CO
2
increase as well as by waste heat were mentioned. About the latter John Holdren wrote in a study cited in the 1st report, “… that global thermal pollution is hardly our most immediate environmental threat. It could prove to be the most inexorable, however, if we are fortunate enough to evade all the rest.” Simple global-scale estimates that recently have been actualized and confirmed by more refined model calculations show noticeable contributions from waste heat to global warming after the year 2100, if its growth rates are not strongly reduced (below the averaged 2% p.a. which occurred since 1973).

Evidence for warming accumulated. By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature. Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased.

In a separate development, an analysis of deep-sea cores published in 1976 by Nicholas Shackleton and colleagues showed that the dominating influence on ice age timing came from a 100,000-year Milankovitch orbital change. This was unexpected, since the change in sunlight in that cycle was slight. The result emphasized that the climate system is driven by feedbacks, and thus is strongly susceptible to small changes in conditions.

The 1979 World Climate Conference (12 to 23 February) of the World Meteorological Organization concluded "it appears plausible that an increased amount of carbon dioxide in the atmosphere can contribute to a gradual warming of the lower atmosphere, especially at higher latitudes....It is possible that some effects on a regional and global scale may be detectable before the end of this century and become significant before the middle of the next century."

In July 1979 the United States National Research Council published a report, concluding (in part):

When it is assumed that the CO2 content of the atmosphere is doubled and statistical thermal equilibrium is achieved, the more realistic of the modeling efforts predict a global surface warming of between 2°C and 3.5°C, with greater increases at high latitudes. ... we have tried but have been unable to find any overlooked or underestimated physical effects that could reduce the currently estimated global warmings due to a doubling of atmospheric CO2 to negligible proportions or reverse them altogether.

Consensus begins to form, 1980–1988

James Hansen during his 1988 testimony to Congress, which alerted the public to the dangers of global warming

By the early 1980s, the slight cooling trend from 1945 to 1975 had stopped. Aerosol pollution had decreased in many areas due to environmental legislation and changes in fuel use, and it became clear that the cooling effect from aerosols was not going to increase substantially while carbon dioxide levels were progressively increasing.

Hansen and others published the 1981 study Climate impact of increasing atmospheric carbon dioxide, and noted:

It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980s. Potential effects on climate in the 21st century include the creation of drought-prone regions in North America and central Asia as part of a shifting of climatic zones, erosion of the West Antarctic ice sheet with a consequent worldwide rise in sea level, and opening of the fabled Northwest Passage.

In 1982, Greenland ice cores drilled by Hans Oeschger, Willi Dansgaard, and collaborators revealed dramatic temperature oscillations in the space of a century in the distant past. The most prominent of the changes in their record corresponded to the violent Younger Dryas climate oscillation seen in shifts in types of pollen in lake beds all over Europe. Evidently drastic climate changes were possible within a human lifetime.

In 1973 James Lovelock speculated that chlorofluorocarbons (CFCs) could have a global warming effect. In 1975 V. Ramanathan found that a CFC molecule could be 10,000 times more effective in absorbing infrared radiation than a carbon dioxide molecule, making CFCs potentially important despite their very low concentrations in the atmosphere. While most early work on CFCs focused on their role in ozone depletion, by 1985 Ramanathan and others showed that CFCs together with methane and other trace gases could have nearly as important a climate effect as increases in CO2. In other words, global warming would arrive twice as fast as had been expected.

In 1985 a joint UNEP/WMO/ICSU Conference on the "Assessment of the Role of Carbon Dioxide and Other Greenhouse Gases in Climate Variations and Associated Impacts" concluded that greenhouse gases "are expected" to cause significant warming in the next century and that some warming is inevitable.

Meanwhile, ice cores drilled by a Franco-Soviet team at the Vostok Station in Antarctica showed that CO2 and temperature had gone up and down together in wide swings through past ice ages. This confirmed the CO2-temperature relationship in a manner entirely independent of computer climate models, strongly reinforcing the emerging scientific consensus. The findings also pointed to powerful biological and geochemical feedbacks.

In June 1988, James E. Hansen made one of the first assessments that human-caused warming had already measurably affected global climate. Shortly after, a "World Conference on the Changing Atmosphere: Implications for Global Security" gathered hundreds of scientists and others in Toronto. They concluded that the changes in the atmosphere due to human pollution "represent a major threat to international security and are already having harmful consequences over many parts of the globe," and declared that by 2005 the world would be well-advised to push its emissions some 20% below the 1988 level.

The 1980s saw important breakthroughs with regard to global environmental challenges. Ozone depletion was mitigated by the Vienna Convention (1985) and the Montreal Protocol (1987). Acid rain was mainly regulated on national and regional levels.

Modern period: 1988 to present

2015 – Warmest Global Year on Record (since 1880) – Colors indicate temperature anomalies (NASA/NOAA; 20 January 2016).

In 1988 the WMO established the Intergovernmental Panel on Climate Change (IPCC) with the support of the UNEP. The IPCC continues its work through the present day, issuing Assessment Reports and supplemental reports that describe the state of scientific understanding at the time each report is prepared. Scientific developments during this period have been summarized roughly every five to six years in the Assessment Reports, published in 1990 (First Assessment Report), 1995 (Second Assessment Report), 2001 (Third Assessment Report), 2007 (Fourth Assessment Report), and 2013/2014 (Fifth Assessment Report).

Since the 1990s, research on climate change has expanded and diversified, linking many fields such as atmospheric science, numerical modeling, behavioral science, geology, economics, and security studies.

Discovery of other climate changing factors

Methane: In 1859, John Tyndall determined that coal gas, a mix of methane and other gases, strongly absorbed infrared radiation. Methane was subsequently detected in the atmosphere in 1948, and in the 1980s scientists realized that human emissions were having a substantial impact.

Chlorofluorocarbon: In 1973, British scientist James Lovelock speculated that chlorofluorocarbons (CFCs) could have a global warming effect. In 1975, V. Ramanathan found that a CFC molecule could be 10,000 times more effective in absorbing infrared radiation than a carbon dioxide molecule, making CFCs potentially important despite their very low concentrations in the atmosphere. While most early work on CFCs focused on their role in ozone depletion, by 1985 scientists had concluded that CFCs together with methane and other trace gases could have nearly as important a climate effect as increases in CO2.
