
Friday, November 13, 2015

False vacuum



From Wikipedia, the free encyclopedia


A scalar field φ in a false vacuum. Note that the energy E is higher than that in the true vacuum or ground state, but there is a barrier preventing the field from classically rolling down to the true vacuum. Therefore, the transition to the true vacuum must be stimulated by the creation of high-energy particles or through quantum-mechanical tunneling.

In quantum field theory, a false vacuum is a metastable sector of space that appears to be a perturbative vacuum, but is unstable due to instanton effects that may tunnel to a lower energy state. This tunneling can be caused by quantum fluctuations or the creation of high-energy particles. The false vacuum is a local minimum, but not the lowest energy state, even though it may remain stable for some time.

Stability and instability of the vacuum


Diagram showing the Higgs boson and top quark masses, which could indicate whether our universe is stable, or a long-lived 'bubble'. The outer dotted line shows the current measurement uncertainties; the inner ones show the uncertainties expected after completion of future physics programs, though their central values could lie anywhere inside the outer contour.[1]

For decades, scientific models of our universe have included the possibility that it exists as a long-lived, but not completely stable, sector of space, which could potentially at some time be destroyed by 'toppling' into a more stable vacuum state.[2][3][4][5][6] If the universe were indeed in such a false vacuum state, a catastrophic bubble of more stable "true vacuum" could theoretically nucleate at any time or place and expand outward at nearly the speed of light.[2][7] The Standard Model of particle physics opens the possibility of calculating, from the masses of the Higgs boson and the top quark, whether the universe's present electroweak vacuum state is likely to be stable or merely long-lived[8][9] (this was sometimes misreported as the Higgs boson "ending" the universe[13]). A 125–127 GeV Higgs mass seems to be extremely close to the boundary for stability (estimated in 2012 as 123.8–135.0 GeV[1]). However, a definitive answer requires much more precise measurements of the top quark's pole mass,[1] and new physics beyond the Standard Model could drastically change this picture.[14]

Implications

If measurements of these particles suggest that our universe lies within a false vacuum of this kind, then it would imply that the universe could, more than likely in many billions of years,[15][Note 1] cease to exist as we know it, if a true vacuum happened to nucleate.[15]

This is because, if the Standard Model is correct, the particles and forces we observe in our universe exist as they do because of underlying quantum fields. Quantum fields can have states of differing stability, including 'stable', 'unstable', or 'metastable' (meaning, long-lived but capable of being "toppled" in the right circumstances). If a more stable vacuum state were able to arise, then existing particles and forces would no longer arise as they do in the universe's present state. Different particles or forces would arise from (and be shaped by) whatever new quantum states arose. The world we know depends upon these particles and forces, so if this happened, everything around us, from subatomic particles to galaxies, and all fundamental forces, would be reconstituted into new fundamental particles and forces and structures. The universe would lose all of its present structures and become inhabited by new ones (depending upon the exact states involved) based upon the same quantum fields.[citation needed]

It would also have implications for other aspects of physics, and would suggest that the Higgs self-coupling λ and its β-function βλ could be very close to zero at the Planck scale, with "intriguing" implications for theories of gravity and Higgs-based inflation.[1]:218 A future electron-positron collider would be able to provide the precise measurements of the top quark needed for such calculations.[1]

A study posted on the arXiv in March 2015 pointed out that the vacuum decay rate could be vastly increased in the vicinity of black holes, which would serve as nucleation seeds.[1] According to this study, a potentially catastrophic vacuum decay could be triggered at any time by primordial black holes, should they exist. The authors also discussed whether tiny black holes potentially produced at the LHC could trigger such a vacuum decay event, but their results were not conclusive.

Vacuum metastability event

A hypothetical vacuum metastability event would be theoretically possible if our universe were part of a metastable (false) vacuum in the first place, an issue that was highly theoretical and far from resolved in 1982.[2] A false vacuum is one that appears stable, and is stable within certain limits and conditions, but is capable of being disrupted and entering a different state which is more stable. If this were the case, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all that we know without forewarning.[3] Chaotic Inflation theory suggests that the universe may be in either a false vacuum or a true vacuum state.
A paper by Coleman and de Luccia, which attempted to include simple gravitational assumptions in these theories, noted that if this were an accurate representation of nature, then the resulting universe "inside the bubble" would appear to be extremely unstable and would almost immediately collapse:[3]

In general, gravitation makes the probability of vacuum decay smaller; in the extreme case of very small energy-density difference, it can even stabilize the false vacuum, preventing vacuum decay altogether. We believe we understand this. For the vacuum to decay, it must be possible to build a bubble of total energy zero. In the absence of gravitation, this is no problem, no matter how small the energy-density difference; all one has to do is make the bubble big enough, and the volume/surface ratio will do the job. In the presence of gravitation, though, the negative energy density of the true vacuum distorts geometry within the bubble with the result that, for a small enough energy density, there is no bubble with a big enough volume/surface ratio. Within the bubble, the effects of gravitation are more dramatic. The geometry of space-time within the bubble is that of anti-de Sitter space, a space much like conventional de Sitter space except that its group of symmetries is O(3, 2) rather than O(4, 1). Although this space-time is free of singularities, it is unstable under small perturbations, and inevitably suffers gravitational collapse of the same sort as the end state of a contracting Friedmann universe. The time required for the collapse of the interior universe is on the order of ... microseconds or less.

The possibility that we are living in a false vacuum has never been a cheering one to contemplate. Vacuum decay is the ultimate ecological catastrophe; in the new vacuum there are new constants of nature; after vacuum decay, not only is life as we know it impossible, so is chemistry as we know it. However, one could always draw stoic comfort from the possibility that perhaps in the course of time the new vacuum would sustain, if not life as we know it, at least some structures capable of knowing joy. This possibility has now been eliminated.

The second special case is decay into a space of vanishing cosmological constant, the case that applies if we are now living in the debris of a false vacuum which decayed at some early cosmic epoch. This case presents us with less interesting physics and with fewer occasions for rhetorical excess than the preceding one. It is now the interior of the bubble that is ordinary Minkowski space...

Sidney Coleman & F. de Luccia

Such an event would be one possible doomsday event. It has been used as a plot device in science fiction: in 1988 by Geoffrey A. Landis,[16] in 2000 by Stephen Baxter,[17] in 2002 by Greg Egan,[18] and in 2015 by Alastair Reynolds in his novel Poseidon's Wake.

In theory, either high enough energy concentrations or random chance could trigger the tunneling needed to set this event in motion. However, an immense number of ultra-high-energy particles and events have occurred in the history of our universe, dwarfing by many orders of magnitude anything at human disposal. Hut and Rees[19] note that, because we have observed cosmic ray collisions at much higher energies than those produced in terrestrial particle accelerators, these experiments should not, at least for the foreseeable future, pose a threat to our current vacuum. Particle accelerators have reached energies of only approximately eight teraelectronvolts (8×10¹² eV). Cosmic ray collisions have been observed at and beyond energies of 10¹⁸ eV – around the so-called Greisen–Zatsepin–Kuzmin limit, more than a hundred thousand times more powerful – and other cosmic events may be more powerful yet. Against this, John Leslie has argued[20] that if present trends continue, particle accelerators will exceed the energy given off in naturally occurring cosmic ray collisions by the year 2150. Fears of this kind were raised by critics of both the Relativistic Heavy Ion Collider and the Large Hadron Collider at the time they were proposed, and were determined to be unfounded by scientific inquiry.

On the other hand, if the many-worlds interpretation of quantum mechanics is correct, the explanation for why no vacuum decay has been observed despite many high-energy particle collisions changes entirely. In this view, such collisions may indeed have triggered vacuum decay, but we do not observe it simply because every such event excludes any observers from its causal (light-cone) future, and there are always worlds identical to ours in everything except the decay event and its future light-cone. More generally, the same applies to any set of future light-cones, that is, any causally closed patch of spacetime, as long as its contents either destroy all observers or are eventually unremarkable. An observer's transition across the boundary of such a patch, such as the (potential) light-cone of a decay event, would have quantitatively altered probability distributions, but formally and subjectively it would be the same quantum branching, qualitatively indistinguishable from any other ordinary moment of existence. If this is the case, fine-tuning is an active process, and a vacuum metastability event will never be observed.[21]

Bubble nucleation

In the theoretical physics of the false vacuum, the system moves to a lower energy state – either the true vacuum, or another, lower energy vacuum – through a process known as bubble nucleation.[4][5][22][23][24][25] In this process, instanton effects cause a bubble to appear in which the fields have their true vacuum values; the interior of the bubble therefore has a lower energy. The walls of the bubble (or domain walls) have a surface tension, as energy is expended as the fields roll over the potential barrier to the lower energy vacuum. The most likely size of the bubble is determined in the semi-classical approximation to be such that the bubble has zero total change in the energy: the decrease in energy from the true vacuum in the interior is compensated by the tension of the walls.
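This critical size can be made explicit in the thin-wall approximation (a standard textbook treatment sketched here for illustration; the symbols σ for the wall surface tension and ε for the energy-density difference between the vacua are introduced here and are not used elsewhere in this article). The energy of a bubble of radius r, and the zero-total-energy condition fixing the size of the nucleated bubble, are

    E(r) = 4\pi\sigma r^{2} - \tfrac{4}{3}\pi\epsilon r^{3}, \qquad E(R) = 0 \;\Rightarrow\; R = \frac{3\sigma}{\epsilon}.

The nucleated bubble therefore becomes larger as the two vacua become more nearly degenerate (ε → 0), which is precisely the regime in which, as Coleman and de Luccia note in the passage quoted above, gravitation can stabilize the false vacuum altogether.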

Joseph Lykken has said that study of the exact properties of the Higgs boson could shed light on the possibility of vacuum collapse.[26]

Expansion of bubble

Any increase in size of the bubble will decrease its potential energy, as the energy of the wall increases as the area of a sphere (4πr²) but the negative contribution of the interior increases more quickly, as the volume of a sphere ((4/3)πr³). Therefore, after the bubble is nucleated, it quickly begins expanding at very nearly the speed of light. The excess energy contributes to the very large kinetic energy of the walls. If two bubbles are nucleated and they eventually collide, it is thought that particle production would occur where the walls collide.
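A minimal numerical sketch of this energy balance follows (arbitrary illustrative values in natural units; nothing here is taken from the article):

    import math

    # Bubble energy E(r) = 4*pi*sigma*r**2 - (4/3)*pi*eps*r**3, with illustrative
    # values of the wall tension (sigma) and the energy-density difference (eps).
    sigma, eps = 1.0, 0.5

    def bubble_energy(r):
        return 4 * math.pi * sigma * r**2 - (4.0 / 3.0) * math.pi * eps * r**3

    r_barrier = 2 * sigma / eps  # dE/dr = 0: top of the energy barrier
    r_zero = 3 * sigma / eps     # E(r) = 0: the zero-total-energy bubble

    print(bubble_energy(r_barrier))  # positive maximum of E
    print(bubble_energy(r_zero))     # ~0; beyond this radius E only decreases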

The tunnelling rate is increased by increasing the energy difference between the two vacua and decreased by increasing the height or width of the barrier.
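In Coleman's semiclassical treatment the decay probability per unit volume per unit time has the form Γ/V = A·e^(−B/ħ); in the thin-wall limit (using the same illustrative symbols σ for the wall tension and ε for the energy difference between the vacua as above) the exponent is

    B = \frac{27\pi^{2}\sigma^{4}}{2\,\epsilon^{3}},

which makes the stated dependence explicit: B grows steeply with the wall tension σ (set by the height and width of the barrier) and falls steeply as the energy difference ε between the vacua grows, so the tunnelling rate is suppressed or enhanced accordingly.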

Gravitational effects

The addition of gravity to the story leads to a considerably richer variety of phenomena. The key insight is that a false vacuum with positive potential energy density is a de Sitter vacuum, in which the potential energy acts as a cosmological constant and the Universe is undergoing the exponential expansion of de Sitter space. This leads to a number of interesting effects, first studied by Coleman and de Luccia.[3]

Development of theories

Alan Guth, in his original proposal for cosmic inflation,[27] proposed that inflation could end through quantum mechanical bubble nucleation of the sort described above. See History of Chaotic inflation theory. It was soon understood that a homogeneous and isotropic universe could not be preserved through the violent tunneling process. This led Andrei Linde[28] and, independently, Andreas Albrecht and Paul Steinhardt,[29] to propose "new inflation" or "slow roll inflation" in which no tunnelling occurs, and the inflationary scalar field instead rolls down a gentle slope.

Wednesday, November 11, 2015

(Possible) attributions of recent climate change


From Wikipedia, the free encyclopedia


This graph is known as the Keeling Curve and shows the long-term increase of atmospheric carbon dioxide (CO2) concentrations from 1958–2015. Monthly CO2 measurements display seasonal oscillations in an upward trend. Each year's maximum occurs during the Northern Hemisphere's late spring, and declines during its growing season as plants remove some atmospheric CO2.
Global annual average temperature (as measured over both land and oceans). Red bars indicate temperatures above and blue bars indicate temperatures below the average temperature for the period 1901–2000. The black line shows atmospheric carbon dioxide (CO2) concentration in parts per million (ppm). While there is a clear long-term global warming trend, each individual year does not show a temperature increase relative to the previous year, and some years show greater changes than others. These year-to-year fluctuations in temperature are due to natural processes, such as the effects of El Niños, La Niñas, and the eruption of large volcanoes.[1]
This image shows three examples of internal climate variability measured between 1950 and 2012: the El Niño–Southern oscillation, the Arctic oscillation, and the North Atlantic oscillation.[2]

Attribution of recent climate change is the effort to scientifically ascertain mechanisms responsible for recent changes observed in the Earth's climate, commonly known as 'global warming'. The effort has focused on changes observed during the period of instrumental temperature record, when records are most reliable; particularly in the last 50 years, when human activity has grown fastest and observations of the troposphere have become available. The dominant mechanisms (to which recent climate change has been attributed) are anthropogenic, i.e., the result of human activity. They are:[3]
  • increasing atmospheric concentrations of greenhouse gases;
  • global changes to the land surface, such as deforestation;
  • increasing atmospheric concentrations of aerosols.
There are also natural mechanisms for variation, including climate oscillations, changes in solar activity, and volcanic activity.

According to the Intergovernmental Panel on Climate Change (IPCC), it is "extremely likely" that human influence was the dominant cause of global warming between 1951 and 2010.[4] The IPCC defines "extremely likely" as indicating a probability of 95 to 100%, based on an expert assessment of all the available evidence.[5]

Multiple lines of evidence support attribution of recent climate change to human activities:[6]
  • A basic physical understanding of the climate system: greenhouse gas concentrations have increased and their warming properties are well-established.[6]
  • Historical estimates of past climate changes suggest that the recent changes in global surface temperature are unusual.[6]
  • Computer-based climate models are unable to replicate the observed warming unless human greenhouse gas emissions are included.[6]
  • Natural forces alone (such as solar and volcanic activity) cannot explain the observed warming.[6]
The IPCC's attribution of recent global warming to human activities is a view shared by most scientists,[7][8]:2[9] and is also supported by 196 other scientific organizations worldwide[10] (see also: scientific opinion on climate change).

Background

This section introduces some concepts in climate science that are used in the following sections.

Factors affecting Earth's climate can be broken down into feedbacks and forcings.[8]:7 A forcing is something that is imposed externally on the climate system. External forcings include natural phenomena such as volcanic eruptions and variations in the sun's output.[11] Human activities can also impose forcings, for example, through changing the composition of the atmosphere.
Radiative forcing is a measure of how various factors alter the energy balance of the Earth's atmosphere.[12] A positive radiative forcing will tend to increase the energy of the Earth-atmosphere system, leading to a warming of the system. Between the start of the Industrial Revolution in 1750, and the year 2005, the increase in the atmospheric concentration of carbon dioxide (chemical formula: CO2) led to a positive radiative forcing, averaged over the Earth's surface area, of about 1.66 watts per square metre (abbreviated W m−2).[13]
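As a rough check (not part of the article), the 1.66 W m−2 figure can be reproduced with the widely used simplified expression ΔF = 5.35 ln(C/C0) W m−2 for CO2 forcing, assuming a pre-industrial concentration of roughly 278 ppm and a 2005 concentration of roughly 379 ppm:

    import math

    # Simplified CO2 forcing expression dF = 5.35 * ln(C / C0) in W m^-2.
    # The concentrations below are assumed for illustration, not quoted above.
    C0, C = 278.0, 379.0  # ppm in 1750 and in 2005
    delta_F = 5.35 * math.log(C / C0)
    print(round(delta_F, 2))  # ~1.66 W m^-2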

Climate feedbacks can either amplify or dampen the response of the climate to a given forcing.[8]:7 There are many feedback mechanisms in the climate system that can either amplify (a positive feedback) or diminish (a negative feedback) the effects of a change in climate forcing.
Aspects of the climate system will show variation in response to changes in forcings.[14] In the absence of forcings imposed on it, the climate system will still show internal variability (see images opposite). This internal variability is a result of complex interactions between components of the climate system, such as the coupling between the atmosphere and ocean (see also the later section on Internal climate variability and global warming).[15] An example of internal variability is the El Niño-Southern Oscillation.

Detection vs. attribution

In detection and attribution, the natural factors considered usually include changes in the Sun's output and volcanic eruptions, as well as natural modes of variability such as El Niño and La Niña. Human factors include the emissions of heat-trapping "greenhouse" gases and particulates as well as clearing of forests and other land-use changes. Figure source: NOAA NCDC.[16]

"Detection and attribution" of climate signals, in addition to its common-sense meaning, has a more precise definition within the climate change literature, as expressed by the IPCC.[17] Detection of a signal requires demonstrating that an observed change is statistically significantly different from what can be explained by natural internal variability. Detection does not always imply significant attribution: the IPCC's Fourth Assessment Report says "it is extremely likely that human activities have exerted a substantial net warming influence on climate since 1750," where "extremely likely" indicates a probability greater than 95%.[3] A minimal numerical sketch of the detection step is given after the list below.

Attribution requires demonstrating that a signal is:
  • unlikely to be due entirely to internal variability;
  • consistent with the estimated responses to the given combination of anthropogenic and natural forcing; and
  • not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of the given combination of forcings.
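A minimal sketch of the detection step (entirely synthetic numbers, not data from any study cited here) is to compare an observed trend with the spread of trends that unforced internal variability alone can produce, for example in segments of control-run simulations:

    import numpy as np

    rng = np.random.default_rng(0)

    observed_trend = 0.17  # hypothetical observed warming trend, degC per decade
    # Hypothetical distribution of trends produced by internal variability alone:
    control_trends = rng.normal(loc=0.0, scale=0.05, size=10_000)

    # Fraction of unforced segments with a trend at least as large as observed.
    p_value = float(np.mean(control_trends >= observed_trend))
    print(p_value, p_value < 0.05)  # a small p-value means the change is "detected"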

Key attributions

Greenhouse gases

Carbon dioxide is the primary greenhouse gas that is contributing to recent climate change.[18] CO2 is absorbed and emitted naturally as part of the carbon cycle, through animal and plant respiration, volcanic eruptions, and ocean-atmosphere exchange.[18] Human activities, such as the burning of fossil fuels and changes in land use (see below), release large amounts of carbon to the atmosphere, causing CO2 concentrations in the atmosphere to rise.[18][19]

The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere.[20] These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere.[20]

Along with CO2, methane and nitrous oxide are also major forcing contributors to the greenhouse effect. The Kyoto Protocol lists these together with hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulphur hexafluoride (SF6),[21] entirely artificial (i.e. anthropogenic) gases that also contribute to radiative forcing in the atmosphere. The chart at right attributes anthropogenic greenhouse gas emissions to eight main economic sectors, of which the largest contributors are power stations (many of which burn coal or other fossil fuels), industrial processes, transportation fuels (generally fossil fuels), and agricultural by-products (mainly methane from enteric fermentation and nitrous oxide from fertilizer use).[22]

Water vapor

Emission Database for Global Atmospheric Research version 3.2, fast track 2000 project

Water vapor is the most abundant greenhouse gas and also the most important in terms of its contribution to the natural greenhouse effect, despite having a short atmospheric lifetime[18] (about 10 days).[23] Some human activities can influence local water vapor levels. However, on a global scale, the concentration of water vapor is controlled by temperature, which influences overall rates of evaporation and precipitation.[18] Therefore, the global concentration of water vapor is not substantially affected by direct human emissions.[18]

Land use

Climate change is attributed to land use for two main reasons. Between 1750 and 2007, about two-thirds of anthropogenic CO2 emissions were produced from burning fossil fuels, and about one-third of emissions from changes in land use,[24] primarily deforestation.[25] Deforestation both reduces the amount of carbon dioxide absorbed by deforested regions and releases greenhouse gases directly, together with aerosols, through biomass burning that frequently accompanies it.
A second reason that climate change has been attributed to land use is that the terrestrial albedo is often altered by land use, which leads to radiative forcing. This effect is more significant locally than globally.[25]

Livestock and land use

Worldwide, livestock production occupies 70% of all land used for agriculture, or 30% of the ice-free land surface of the Earth.[26] More than 18% of anthropogenic greenhouse gas emissions are attributed to livestock and livestock-related activities such as deforestation and increasingly fuel-intensive farming practices.[26] Specific attributions to the livestock sector include:

Aerosols

With virtual certainty, scientific consensus has attributed various forms of climate change, chiefly cooling effects, to aerosols, which are small particles or droplets suspended in the atmosphere.[27]

Key sources to which anthropogenic aerosols are attributed[28] include:

Attribution of 20th century climate change

One global climate model's reconstruction of temperature change during the 20th century as the result of five studied forcing factors and the amount of temperature change attributed to each.

Over the past 150 years human activities have released increasing quantities of greenhouse gases into the atmosphere. This has led to increases in mean global temperature, or global warming. Other human effects are relevant—for example, sulphate aerosols are believed to have a cooling effect. Natural factors also contribute. According to the historical temperature record of the last century, the Earth's near-surface air temperature has risen around 0.74 ± 0.18 °Celsius (1.3 ± 0.32 °Fahrenheit).[29]

A historically important question in climate change research has regarded the relative importance of human activity and non-anthropogenic causes during the period of instrumental record. In the 1995 Second Assessment Report (SAR), the IPCC made the widely quoted statement that "The balance of evidence suggests a discernible human influence on global climate". The phrase "balance of evidence" suggested the (English) common-law standard of proof required in civil as opposed to criminal courts: not as high as "beyond reasonable doubt". In 2001 the Third Assessment Report (TAR) refined this, saying "There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities".[30] The 2007 Fourth Assessment Report (AR4) strengthened this finding:
  • "Anthropogenic warming of the climate system is widespread and can be detected in temperature observations taken at the surface, in the free atmosphere and in the oceans. Evidence of the effect of external influences, both anthropogenic and natural, on the climate system has continued to accumulate since the TAR."[31]
Other findings of the IPCC Fourth Assessment Report include:
  • "It is extremely unlikely (<5 class="reference" id="cite_ref-ar4_uncertainty_32-0" sup="">[32]
that the global pattern of warming during the past half century can be explained without external forcing (i.e., it is inconsistent with being the result of internal variability), and very unlikely[32] that it is due to known natural external causes alone. The warming occurred in both the ocean and the atmosphere and took place at a time when natural external forcing factors would likely have produced cooling."[33]
  • "From new estimates of the combined anthropogenic forcing due to greenhouse gases, aerosols, and land surface changes, it is extremely likely (>95%)[32] that human activities have exerted a substantial net warming influence on climate since 1750."[34]
  • "It is virtually certain[32] that anthropogenic aerosols produce a net negative radiative forcing (cooling influence) with a greater magnitude in the Northern Hemisphere than in the Southern Hemisphere."[34]
  • Over the past five decades there has been a global warming of approximately 0.65 °C (1.17 °F) at the Earth's surface (see historical temperature record). Among the possible factors that could produce changes in global mean temperature are internal variability of the climate system, external forcing, an increase in concentration of greenhouse gases, or any combination of these. Current studies indicate that the increase in greenhouse gases, most notably CO2, is mostly responsible for the observed warming. Evidence for this conclusion includes:
    • Estimates of internal variability from climate models, and reconstructions of past temperatures, indicate that the warming is unlikely to be entirely natural.
    • Climate models forced by natural factors and increased greenhouse gases and aerosols reproduce the observed global temperature changes; those forced by natural factors alone do not.[30]
    • "Fingerprint" methods (see below) indicate that the pattern of change is closer to that expected from greenhouse gas-forced change than from natural change.[35]
    • The plateau in warming from the 1940s to 1960s can be attributed largely to sulphate aerosol cooling.[36]

    Details on attribution

    For Northern Hemisphere temperature, recent decades appear to be the warmest since at least about 1000AD, and the warming since the late 19th century is unprecedented over the last 1000 years.[37] Older data are insufficient to provide reliable hemispheric temperature estimates.[37]

    Recent scientific assessments find that most of the warming of the Earth's surface over the past 50 years has been caused by human activities (see also the section on scientific literature and opinion). This conclusion rests on multiple lines of evidence. Like the warming "signal" that has gradually emerged from the "noise" of natural climate variability, the scientific evidence for a human influence on global climate has accumulated over the past several decades, from many hundreds of studies. No single study is a "smoking gun." Nor has any single study or combination of studies undermined the large body of evidence supporting the conclusion that human activity is the primary driver of recent warming.[1]

    The first line of evidence is based on a physical understanding of how greenhouse gases trap heat, how the climate system responds to increases in greenhouse gases, and how other human and natural factors influence climate. The second line of evidence is from indirect estimates of climate changes over the last 1,000 to 2,000 years. These records are obtained from living things and their remains (like tree rings and corals) and from physical quantities (like the ratio between lighter and heavier isotopes of oxygen in ice cores), which change in measurable ways as climate changes. The lesson from these data is that global surface temperatures over the last several decades are clearly unusual, in that they were higher than at any time during at least the past 400 years. For the Northern Hemisphere, the recent temperature rise is clearly unusual in at least the last 1,000 years (see graph opposite).[1]

    The third line of evidence is based on the broad, qualitative consistency between observed changes in climate and the computer model simulations of how climate would be expected to change in response to human activities. For example, when climate models are run with historical increases in greenhouse gases, they show gradual warming of the Earth and ocean surface, increases in ocean heat content and the temperature of the lower atmosphere, a rise in global sea level, retreat of sea ice and snow cover, cooling of the stratosphere, an increase in the amount of atmospheric water vapor, and changes in large-scale precipitation and pressure patterns. These and other aspects of modelled climate change are in agreement with observations.[1]

    "Fingerprint" studies

    Reconstructions of global temperature that include greenhouse gas increases and other human influences (red line, based on many models) closely match measured temperatures (dashed line).[38] Those that only include natural influences (blue line, based on many models) show a slight cooling, which has not occurred.[38] The ability of models to generate reasonable histories of global temperature is verified by their response to four 20th-century volcanic eruptions: each eruption caused brief cooling that appeared in observed as well as modeled records.[38]

    Finally, there is extensive statistical evidence from so-called "fingerprint" studies. Each factor that affects climate produces a unique pattern of climate response, much as each person has a unique fingerprint. Fingerprint studies exploit these unique signatures, and allow detailed comparisons of modelled and observed climate change patterns. Scientists rely on such studies to attribute observed changes in climate to a particular cause or set of causes. In the real world, the climate changes that have occurred since the start of the Industrial Revolution are due to a complex mixture of human and natural causes. The importance of each individual influence in this mixture changes over time. Of course, there are not multiple Earths, which would allow an experimenter to change one factor at a time on each Earth, thus helping to isolate different fingerprints. Therefore, climate models are used to study how individual factors affect climate. For example, a single factor (like greenhouse gases) or a set of factors can be varied, and the response of the modelled climate system to these individual or combined changes can thus be studied.[1]
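    The following toy calculation (synthetic numbers only, not data or code from the studies cited here) illustrates the regression idea behind fingerprinting: observations are modelled as a sum of scaled, model-derived response patterns plus internal variability, and the estimated scaling factors measure how strongly each fingerprint is present.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200                                    # hypothetical space-time points

        x_ghg = rng.normal(size=n)                 # model-derived greenhouse-gas pattern
        x_nat = rng.normal(size=n)                 # model-derived natural-forcing pattern
        noise = 0.5 * rng.normal(size=n)           # internal variability
        y_obs = 1.0 * x_ghg + 0.2 * x_nat + noise  # synthetic "observations"

        X = np.column_stack([x_ghg, x_nat])
        beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
        print(beta)  # scaling factors; a value near 1 means the pattern is fully present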

    Two fingerprints of human activities on the climate are that land areas will warm more than the oceans, and that high latitudes will warm more than low latitudes.[39] These projections have been confirmed by observations (shown above).[39]

    For example, when climate model simulations of the last century include all of the major influences on climate, both human-induced and natural, they can reproduce many important features of observed climate change patterns. When human influences are removed from the model experiments, results suggest that the surface of the Earth would actually have cooled slightly over the last 50 years (see graph, opposite). The clear message from fingerprint studies is that the observed warming over the last half-century cannot be explained by natural factors, and is instead caused primarily by human factors.[1]

    Another fingerprint of human effects on climate has been identified by looking at a slice through the layers of the atmosphere, and studying the pattern of temperature changes from the surface up through the stratosphere (see the section on solar activity). The earliest fingerprint work focused on changes in surface and atmospheric temperature. Scientists then applied fingerprint methods to a whole range of climate variables, identifying human-caused climate signals in the heat content of the oceans, the height of the tropopause (the boundary between the troposphere and stratosphere, which has shifted upward by hundreds of feet in recent decades), the geographical patterns of precipitation, drought, surface pressure, and the runoff from major river basins.[1]

    Studies published after the appearance of the IPCC Fourth Assessment Report in 2007 have also found human fingerprints in the increased levels of atmospheric moisture (both close to the surface and over the full extent of the atmosphere), in the decline of Arctic sea ice extent, and in the patterns of changes in Arctic and Antarctic surface temperatures.[1]

    The message from this entire body of work is that the climate system is telling a consistent story of increasingly dominant human influence – the changes in temperature, ice extent, moisture, and circulation patterns fit together in a physically consistent way, like pieces in a complex puzzle.[1]
    Increasingly, this type of fingerprint work is shifting its emphasis. As noted, clear and compelling scientific evidence supports the case for a pronounced human influence on global climate. Much of the recent attention is now on climate changes at continental and regional scales, and on variables that can have large impacts on societies. For example, scientists have established causal links between human activities and the changes in snowpack, maximum and minimum (diurnal) temperature, and the seasonal timing of runoff over mountainous regions of the western United States. Human activity is likely to have made a substantial contribution to ocean surface temperature changes in hurricane formation regions. Researchers are also looking beyond the physical climate system, and are beginning to tie changes in the distribution and seasonal behaviour of plant and animal species to human-caused changes in temperature and precipitation.[1]

    For over a decade, one aspect of the climate change story seemed to show a significant difference between models and observations. In the tropics, all models predicted that with a rise in greenhouse gases, the troposphere would be expected to warm more rapidly than the surface. Observations from weather balloons, satellites, and surface thermometers seemed to show the opposite behaviour (more rapid warming of the surface than the troposphere). This issue was a stumbling block in understanding the causes of climate change. It is now largely resolved. Research showed that there were large uncertainties in the satellite and weather balloon data. When uncertainties in models and observations are properly accounted for, newer observational data sets (with better treatment of known problems) are in agreement with climate model results.[1]

    This set of graphs shows the estimated contribution of various natural and human factors to changes in global mean temperature between 1889 and 2006.[40] Estimated contributions are based on multivariate analysis rather than model simulations.[41] The graphs show that human influence on climate has eclipsed the magnitude of natural temperature changes over the past 120 years.[42] Natural influences on temperature (El Niño, solar variability, and volcanic aerosols) have varied by approximately plus and minus 0.2 °C (0.4 °F), averaging to about zero, while human influences have contributed roughly 0.8 °C (1 °F) of warming since 1889.[42]

    This does not mean, however, that all remaining differences between models and observations have been resolved. The observed changes in some climate variables, such as Arctic sea ice, some aspects of precipitation, and patterns of surface pressure, appear to be proceeding much more rapidly than models have projected. The reasons for these differences are not well understood. Nevertheless, the bottom-line conclusion from climate fingerprinting is that most of the observed changes studied to date are consistent with each other, and are also consistent with our scientific understanding of how the climate system would be expected to respond to the increase in heat-trapping gases resulting from human activities.[1]

    Extreme weather events

    Frequency of occurrence (vertical axis) of local June–July–August temperature anomalies (relative to the 1951–1980 mean) for Northern Hemisphere land, in units of the local standard deviation (horizontal axis).[43] According to Hansen et al. (2012),[43] the distribution of anomalies has shifted to the right as a consequence of global warming, meaning that unusually hot summers have become more common. This is analogous to the rolling of a loaded die: cool summers now cover only half of one side of a six-sided die, average (white) summers cover one side, hot (red) summers cover four sides, and extremely hot (red-brown) summers cover half of one side.[43]

    One of the subjects discussed in the literature is whether or not extreme weather events can be attributed to human activities. Seneviratne et al. (2012)[44] stated that attributing individual extreme weather events to human activities was challenging. They were, however, more confident about attributing changes in long-term trends of extreme weather. For example, Seneviratne et al. (2012)[45] concluded that human activities had likely led to a warming of extreme daily minimum and maximum temperatures at the global scale.

    Another way of viewing the problem is to consider the effects of human-induced climate change on the probability of future extreme weather events. Stott et al. (2003),[46] for example, considered whether or not human activities had increased the risk of severe heat waves in Europe, like the one experienced in 2003. Their conclusion was that human activities had very likely more than doubled the risk of heat waves of this magnitude.[46]

    An analogy can be made between an athlete on steroids and human-induced climate change.[47] In the same way that an athlete's performance may increase from using steroids, human-induced climate change increases the risk of some extreme weather events.

    Hansen et al. (2012)[48] suggested that human activities have greatly increased the risk of summertime heat waves. According to their analysis, the land area of the Earth affected by very hot summer temperature anomalies has greatly increased over time (refer to graphs on the left). In the base period 1951-1980, these anomalies covered a few tenths of 1% of the global land area.[49] In recent years, this has increased to around 10% of the global land area. With high confidence, Hansen et al. (2012)[49] attributed the 2010 Moscow and 2011 Texas heat waves to human-induced global warming.
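    The kind of increase Hansen et al. (2012) report can be illustrated (with purely illustrative parameters, not their fitted distributions) by shifting and widening a normal distribution of summer temperature anomalies, expressed in units of the 1951–1980 standard deviation, and computing the area beyond +3σ:

        from scipy.stats import norm

        base = norm(loc=0.0, scale=1.0)    # 1951-1980 baseline anomalies
        recent = norm(loc=1.0, scale=1.5)  # assumed recent shift and widening

        threshold = 3.0  # "extremely hot": more than 3 sigma of the baseline
        print(base.sf(threshold))    # ~0.001, i.e. about a tenth of one percent
        print(recent.sf(threshold))  # ~0.09, i.e. roughly ten percent of the area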

    An earlier study by Dole et al. (2011)[50] concluded that the 2010 Moscow heatwave was mostly due to natural weather variability. While not directly citing Dole et al. (2011),[50] Hansen et al. (2012)[49] rejected this type of explanation. Hansen et al. (2012)[49] stated that a combination of natural weather variability and human-induced global warming was responsible for the Moscow and Texas heat waves.

    Scientific literature and opinion

    There are a number of examples of published and informal support for the consensus view. As mentioned earlier, the IPCC has concluded that most of the observed increase in globally averaged temperatures since the mid-20th century is "very likely" due to human activities.[51] The IPCC's conclusions are consistent with those of several reports produced by the US National Research Council.[7][52][53] A report published in 2009 by the U.S. Global Change Research Program concluded that "[global] warming is unequivocal and primarily human-induced."[54] A number of scientific organizations have issued statements that support the consensus view. Two examples include:

    Detection and attribution studies

    The IPCC Fourth Assessment Report (2007), concluded that attribution was possible for a number of observed changes in the climate (see effects of global warming). However, attribution was found to be more difficult when assessing changes over smaller regions (less than continental scale) and over short time periods (less than 50 years).[33] Over larger regions, averaging reduces natural variability of the climate, making detection and attribution easier.
    • In 1996, in a paper in Nature titled "A search for human influences on the thermal structure of the atmosphere", Benjamin D. Santer et al. wrote: "The observed spatial patterns of temperature change in the free atmosphere from 1963 to 1987 are similar to those predicted by state-of-the-art climate models incorporating various combinations of changes in carbon dioxide, anthropogenic sulphate aerosol and stratospheric ozone concentrations. The degree of pattern similarity between models and observations increases through this period. It is likely that this trend is partially due to human activities, although many uncertainties remain, particularly relating to estimates of natural variability."
    • A 2002 paper in the Journal of Geophysical Research says "Our analysis suggests that the early twentieth century warming can best be explained by a combination of warming due to increases in greenhouse gases and natural forcing, some cooling due to other anthropogenic forcings, and a substantial, but not implausible, contribution from internal variability. In the second half of the century we find that the warming is largely caused by changes in greenhouse gases, with changes in sulphates and, perhaps, volcanic aerosol offsetting approximately one third of the warming."[57][58]
    • A 2005 review of detection and attribution studies by the International Ad Hoc Detection and Attribution Group[59] found that "natural drivers such as solar variability and volcanic activity are at most partially responsible for the large-scale temperature changes observed over the past century, and that a large fraction of the warming over the last 50 yr can be attributed to greenhouse gas increases. Thus, the recent research supports and strengthens the IPCC Third Assessment Report conclusion that 'most of the global warming over the past 50 years is likely due to the increase in greenhouse gases.'"
    • Barnett and colleagues (2005) say that the observed warming of the oceans "cannot be explained by natural internal climate variability or solar and volcanic forcing, but is well simulated by two anthropogenically forced climate models," concluding that "it is of human origin, a conclusion robust to observational sampling and model differences".[60]
    • Two papers in the journal Science in August 2005[61][62] resolved the problem, evident at the time of the TAR, of tropospheric temperature trends (see also the section on "fingerprint" studies). The UAH version of the record contained errors, and there is evidence of spurious cooling trends in the radiosonde record, particularly in the tropics. See satellite temperature measurements for details, and the 2006 US CCSP report.[63]
    • Multiple independent reconstructions of the temperature record of the past 1000 years confirm that the late 20th century is probably the warmest period in that time (see the preceding section, Details on attribution).

    Reviews of scientific opinion

    • An essay in Science surveyed 928 abstracts related to climate change, and concluded that most journal reports accepted the consensus.[64] This is discussed further in scientific opinion on climate change.
    • A 2010 paper in the Proceedings of the National Academy of Sciences found that among a pool of roughly 1,000 researchers who work directly on climate issues and publish the most frequently on the subject, 97% agree that anthropogenic climate change is happening.[65]
    • A 2011 paper from George Mason University published in the International Journal of Public Opinion Research, "The Structure of Scientific Opinion on Climate Change," collected the opinions of scientists in the earth, space, atmospheric, oceanic or hydrological sciences.[66] The 489 survey respondents, representing nearly half of all those eligible according to the survey's specific standards, work in academia, government, and industry, and are members of prominent professional organizations.[66] The study found that 97% of the 489 scientists surveyed agreed that global temperatures have risen over the past century.[66] Moreover, 84% agreed that "human-induced greenhouse warming" is now occurring.[66] Only 5% disagreed with the idea that human activity is a significant cause of global warming.[66]
    As described above, a small minority of scientists do disagree with the consensus: see list of scientists opposing global warming consensus. For example, Willie Soon and Richard Lindzen[67] say that there is insufficient proof for anthropogenic attribution. Generally this position requires new physical mechanisms to explain the observed warming.[68]

    Solar activity

    Solar radiation at the top of our atmosphere, and global temperature

    Modelled simulation of the effect of various factors (including GHGs, Solar irradiance) singly and in combination, showing in particular that solar activity produces a small and nearly uniform warming, unlike what is observed.

    Solar sunspot maximum occurs when the magnetic field of the sun collapses and reverses as part of its average 11-year solar cycle (22 years for a complete north-to-north restoration).
    The role of the sun in recent climate change has been examined by climate scientists. Since 1978, output from the Sun has been measured by satellites[8]:6 significantly more accurately than was previously possible from the surface. These measurements indicate that the Sun's total solar irradiance has not increased since 1978, so the warming during the past 30 years cannot be directly attributed to an increase in total solar energy reaching the Earth (see graph above, left). In the three decades since 1978, the combination of solar and volcanic activity probably had a slight cooling influence on the climate.[69]

    Climate models have been used to examine the role of the sun in recent climate change.[70] Models are unable to reproduce the rapid warming observed in recent decades when they only take into account variations in total solar irradiance and volcanic activity. Models are, however, able to simulate the observed 20th century changes in temperature when they include all of the most important external forcings, including human influences and natural forcings. As has already been stated, Hegerl et al. (2007) concluded that greenhouse gas forcing had "very likely" caused most of the observed global warming since the mid-20th century. In making this conclusion, Hegerl et al. (2007) allowed for the possibility that climate models had underestimated the effect of solar forcing.[71]

    The role of solar activity in climate change has also been calculated over longer time periods using "proxy" datasets, such as tree rings.[72] Models indicate that solar and volcanic forcings can explain periods of relative warmth and cold between A.D. 1000 and 1900, but human-induced forcings are needed to reproduce the late-20th century warming.[73]

    Another line of evidence against the sun having caused recent climate change comes from looking at how temperatures at different levels in the Earth's atmosphere have changed.[74] Models and observations (see figure above, middle) show that greenhouse gases result in warming of the lower atmosphere (the troposphere) but cooling of the upper atmosphere (the stratosphere).[75] Depletion of the ozone layer by chemical refrigerants has also resulted in a cooling effect in the stratosphere. If the sun were responsible for the observed warming, warming of both the troposphere and the top of the stratosphere would be expected, as increased solar activity would replenish ozone and oxides of nitrogen.[76] The stratosphere has a temperature gradient opposite to that of the troposphere: whereas the temperature of the troposphere falls with altitude, the temperature of the stratosphere rises with altitude. Hadley cells are the mechanism by which ozone generated in the tropics (the region of highest UV irradiance in the stratosphere) is moved poleward. Global climate models suggest that climate change may widen the Hadley cells and push the jet stream northward, thereby expanding the tropics and resulting in warmer, drier conditions in those areas overall.[77]

    Non-consensus views

    Contribution of natural factors and human activities to radiative forcing of climate change.[13] Radiative forcing values are for the year 2005, relative to the pre-industrial era (1750).[13] The contribution of solar irradiance to radiative forcing is about 5% of the value of the combined radiative forcing due to increases in the atmospheric concentrations of carbon dioxide, methane and nitrous oxide.[78]

    Habibullo Abdussamatov (2004), head of space research at St. Petersburg's Pulkovo Astronomical Observatory in Russia, has argued that the sun is responsible for recently observed climate change.[79] Journalists for news sources canada.com (Solomon, 2007b),[80] National Geographic News (Ravillious, 2007),[81] and LiveScience (Than, 2007)[82] reported on the story of warming on Mars. In these articles, Abdussamatov was quoted. He stated that warming on Mars was evidence that global warming on Earth was being caused by changes in the sun.

    Ravillious (2007)[81] quoted two scientists who disagreed with Abdussamatov: Amato Evan, a climate scientist at the University of Wisconsin-Madison, in the US, and Colin Wilson, a planetary physicist at Oxford University in the UK. According to Wilson, "Wobbles in the orbit of Mars are the main cause of its climate change in the current era" (see also orbital forcing).[83] Than (2007) quoted Charles Long, a climate physicist at Pacific Northwest National Laboratories in the US, who disagreed with Abdussamatov.[82]

    Than (2007) pointed to the view of Benny Peiser, a social anthropologist at Liverpool John Moores University in the UK.[82] In his newsletter, Peiser had cited a blog that had commented on warming observed on several planetary bodies in the Solar system. These included Neptune's moon Triton,[84] Jupiter,[85] Pluto[86] and Mars. In an e-mail interview with Than (2007), Peiser stated that:
    "I think it is an intriguing coincidence that warming trends have been observed on a number of very diverse planetary bodies in our solar system, (...) Perhaps this is just a fluke."
    Than (2007) provided alternative explanations of why warming had occurred on Triton, Pluto, Jupiter and Mars.

    The US Environmental Protection Agency (US EPA, 2009) responded to public comments on climate change attribution.[78] A number of commenters had argued that recent climate change could be attributed to changes in solar irradiance. According to the US EPA (2009), this attribution was not supported by the bulk of the scientific literature. Citing the work of the IPCC (2007), the US EPA pointed to the low contribution of solar irradiance to radiative forcing since the start of the Industrial Revolution in 1750. Over this time period (1750 to 2005),[87] the estimated contribution of solar irradiance to radiative forcing was about 5% of the value of the combined radiative forcing due to increases in the atmospheric concentrations of carbon dioxide, methane and nitrous oxide (see graph opposite).
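    As a rough consistency check on that 5% figure (the individual forcing values below are IPCC AR4 best estimates assumed here for illustration; they are not listed in this article):

        # Assumed AR4 (2007) best-estimate radiative forcings for 1750-2005, in W m^-2.
        forcing = {"CO2": 1.66, "CH4": 0.48, "N2O": 0.16, "solar": 0.12}

        combined_ghg = forcing["CO2"] + forcing["CH4"] + forcing["N2O"]
        print(forcing["solar"] / combined_ghg)  # ~0.05, i.e. about 5%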

    Effect of cosmic rays

    Henrik Svensmark has suggested that the magnetic activity of the sun deflects cosmic rays, and that this may influence the generation of cloud condensation nuclei, and thereby have an effect on the climate.[88] The website ScienceDaily reported on a 2009 study that looked at how past changes in climate have been affected by the Earth's magnetic field.[89] Geophysicist Mads Faurschou Knudsen, who co-authored the study, stated that the study's results supported Svensmark's theory. The authors of the study also acknowledged that CO2 plays an important role in climate change.

    Consensus view on cosmic rays

    The view that cosmic rays could provide the mechanism by which changes in solar activity affect climate is not supported by the literature.[90] Solomon et al. (2007)[91] state:
    [..] the cosmic ray time series does not appear to correspond to global total cloud cover after 1991 or to global low-level cloud cover after 1994. Together with the lack of a proven physical mechanism and the plausibility of other causal factors affecting changes in cloud cover, this makes the association between galactic cosmic ray-induced changes in aerosol and cloud formation controversial
    Studies by Lockwood and Fröhlich (2007)[92] and Sloan and Wolfendale (2008)[93] found no relation between warming in recent decades and cosmic rays. Pierce and Adams (2009)[94] used a model to simulate the effect of cosmic rays on cloud properties. They concluded that the hypothesized effect of cosmic rays was too small to explain recent climate change.[94] Pierce and Adams (2009)[95] noted that their findings did not rule out a possible connection between cosmic rays and climate change, and recommended further research.

    Erlykin et al. (2009)[96] found that the evidence showed that connections between solar variation and climate were more likely to be mediated by direct variation of insolation rather than cosmic rays, and concluded: "Hence within our assumptions, the effect of varying solar activity, either by direct solar irradiance or by varying cosmic ray rates, must be less than 0.07 °C since 1956, i.e. less than 14% of the observed global warming." Carslaw (2009)[97] and Pittock (2009)[98] review the recent and historical literature in this field and continue to find that the link between cosmic rays and climate is tenuous, though they encourage continued research. US EPA (2009)[90] commented on research by Duplissy et al. (2009):[99]
    The CLOUD experiments at CERN are interesting research but do not provide conclusive evidence that cosmic rays can serve as a major source of cloud seeding. Preliminary results from the experiment (Duplissy et al., 2009) suggest that though there was some evidence of ion mediated nucleation, for most of the nucleation events observed the contribution of ion processes appeared to be minor. These experiments also showed the difficulty in maintaining sufficiently clean conditions and stable temperatures to prevent spurious aerosol bursts. There is no indication that the earlier Svensmark experiments could even have matched the controlled conditions of the CERN experiment. We find that the Svensmark results on cloud seeding have not yet been shown to be robust or sufficient to materially alter the conclusions of the assessment literature, especially given the abundance of recent literature that is skeptical of the cosmic ray-climate linkage
