
Wednesday, November 11, 2015

(Possible) attributions of recent climate change


From Wikipedia, the free encyclopedia


This graph is known as the Keeling Curve and shows the long-term increase of atmospheric carbon dioxide (CO2) concentrations from 1958 to 2015. Monthly CO2 measurements display seasonal oscillations superimposed on an upward trend. Each year's maximum occurs during the Northern Hemisphere's late spring; concentrations then decline during the Northern Hemisphere's growing season as plants remove some CO2 from the atmosphere.
Global annual average temperature (as measured over both land and oceans). Red bars indicate temperatures above, and blue bars temperatures below, the average temperature for the period 1901–2000. The black line shows atmospheric carbon dioxide (CO2) concentration in parts per million (ppm). While there is a clear long-term global warming trend, not every individual year shows a temperature increase relative to the previous year, and some years show greater changes than others. These year-to-year fluctuations in temperature are due to natural processes, such as the effects of El Niños, La Niñas, and the eruptions of large volcanoes.[1]
This image shows three examples of internal climate variability measured between 1950 and 2012: the El Niño–Southern oscillation, the Arctic oscillation, and the North Atlantic oscillation.[2]

Attribution of recent climate change is the effort to scientifically ascertain the mechanisms responsible for recent changes observed in the Earth's climate, commonly known as 'global warming'. The effort has focused on changes observed during the period of the instrumental temperature record, when records are most reliable, and particularly on the last 50 years, when human activity has grown fastest and observations of the troposphere have become available. The dominant mechanisms to which recent climate change has been attributed are anthropogenic, i.e., the result of human activity: chiefly rising concentrations of greenhouse gases, along with aerosols and changes in land use (see the key attributions below).[3]
There are also natural mechanisms for variation, including climate oscillations, changes in solar activity, and volcanic activity.

According to the Intergovernmental Panel on Climate Change (IPCC), it is "extremely likely" that human influence was the dominant cause of global warming between 1951 and 2010.[4] The IPCC defines "extremely likely" as indicating a probability of 95 to 100%, based on an expert assessment of all the available evidence.[5]

Multiple lines of evidence support attribution of recent climate change to human activities:[6]
  • A basic physical understanding of the climate system: greenhouse gas concentrations have increased and their warming properties are well-established.[6]
  • Historical estimates of past climate changes suggest that the recent changes in global surface temperature are unusual.[6]
  • Computer-based climate models are unable to replicate the observed warming unless human greenhouse gas emissions are included.[6]
  • Natural forces alone (such as solar and volcanic activity) cannot explain the observed warming.[6]
The IPCC's attribution of recent global warming to human activities is a view shared by most scientists,[7][8]:2[9] and is also supported by 196 other scientific organizations worldwide[10] (see also: scientific opinion on climate change).

Background

This section introduces some concepts in climate science that are used in the following sections.
Factors affecting Earth's climate can be broken down into feedbacks and forcings.[8]:7 A forcing is something that is imposed externally on the climate system. External forcings include natural phenomena such as volcanic eruptions and variations in the sun's output.[11] Human activities can also impose forcings, for example, through changing the composition of the atmosphere.
Radiative forcing is a measure of how various factors alter the energy balance of the Earth's atmosphere.[12] A positive radiative forcing will tend to increase the energy of the Earth-atmosphere system, leading to a warming of the system. Between the start of the Industrial Revolution in 1750, and the year 2005, the increase in the atmospheric concentration of carbon dioxide (chemical formula: CO2) led to a positive radiative forcing, averaged over the Earth's surface area, of about 1.66 watts per square metre (abbreviated W m−2).[13]
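As a rough check of that figure, the simplified expression often used for CO2 forcing (the Myhre et al. 1998 fit, an outside formula not quoted in this article) reproduces it from the approximate pre-industrial and 2005 concentrations of 278 and 379 ppm:

    \Delta F \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}} = 5.35\,\ln\!\left(\frac{379}{278}\right) \approx 1.66\ \mathrm{W\,m^{-2}}.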

Climate feedbacks can either amplify or dampen the response of the climate to a given forcing.[8]:7 There are many feedback mechanisms in the climate system that can either amplify (a positive feedback) or diminish (a negative feedback) the effects of a change in climate forcing.
Aspects of the climate system will show variation in response to changes in forcings.[14] In the absence of forcings imposed on it, the climate system will still show internal variability (see images opposite). This internal variability is a result of complex interactions between components of the climate system, such as the coupling between the atmosphere and ocean (see also the later section on Internal climate variability and global warming).[15] An example of internal variability is the El Niño-Southern Oscillation.
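How strongly feedbacks modify a forced response can be illustrated with a standard zero-dimensional energy-balance relation (a textbook sketch, not taken from this article): if the response of the climate system with no feedbacks would be \Delta T_0, and the net feedback factor is f, the equilibrium response is

    \Delta T = \frac{\Delta T_0}{1 - f},

so f > 0 (net positive feedback) amplifies the response and f < 0 (net negative feedback) diminishes it, with internal variability superimposed on whatever forced response results.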

Detection vs. attribution

In detection and attribution, the natural factors considered usually include changes in the Sun's output and volcanic eruptions, as well as natural modes of variability such as El Niño and La Niña. Human factors include the emissions of heat-trapping "greenhouse" gases and particulates as well as clearing of forests and other land-use changes. Figure source: NOAA NCDC.[16]

Detection and attribution of climate signals, beyond their common-sense meanings, have more precise definitions within the climate change literature, as expressed by the IPCC.[17] Detection of a climate signal does not always imply significant attribution. The IPCC's Fourth Assessment Report says "it is extremely likely that human activities have exerted a substantial net warming influence on climate since 1750," where "extremely likely" indicates a probability greater than 95%.[3] Detection of a signal requires demonstrating that an observed change is statistically significantly different from what can be explained by natural internal variability.

Attribution requires demonstrating that a signal is:
  • unlikely to be due entirely to internal variability;
  • consistent with the estimated responses to the given combination of anthropogenic and natural forcing; and
  • not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of the given combination of forcings. (A toy numerical sketch of the detection step and the first of these requirements follows below.)
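The sketch below illustrates, with synthetic data only, what detection and the first attribution requirement mean in practice: an observed trend is compared against the distribution of trends that internal variability alone can produce. The noise model, series length, and threshold are illustrative assumptions, not those of any actual study.

    # Toy illustration of "detection": is the observed change statistically
    # distinguishable from internal variability alone? Synthetic data throughout.
    import numpy as np

    rng = np.random.default_rng(0)
    n_years = 60

    def smoothed_noise():
        """Crude stand-in for internal variability (5-yr smoothed white noise)."""
        raw = rng.normal(0.0, 0.15, n_years + 4)
        return np.convolve(raw, np.ones(5) / 5, mode="valid")   # length n_years

    def linear_trend(series):
        """Least-squares trend per year."""
        return np.polyfit(np.arange(len(series)), series, 1)[0]

    # "Observed" record: an imposed warming trend plus internal variability.
    observed = 0.015 * np.arange(n_years) + smoothed_noise()

    # Null distribution: trends expected from internal variability alone.
    null_trends = np.array([linear_trend(smoothed_noise()) for _ in range(2000)])

    obs_trend = linear_trend(observed)
    p_value = np.mean(np.abs(null_trends) >= abs(obs_trend))
    print(f"observed trend: {obs_trend:.4f} per yr, p-value: {p_value:.3f}")
    # A small p-value means the change is detected, i.e. it is unlikely to be
    # due entirely to internal variability (the first requirement listed above).

Attribution then additionally requires comparing the observed change against model-simulated responses to specific forcings, as in the "fingerprint" sketch later in this article.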

Key attributions

Greenhouse gases

Carbon dioxide is the primary greenhouse gas that is contributing to recent climate change.[18] CO2 is absorbed and emitted naturally as part of the carbon cycle, through animal and plant respiration, volcanic eruptions, and ocean-atmosphere exchange.[18] Human activities, such as the burning of fossil fuels and changes in land use (see below), release large amounts of carbon to the atmosphere, causing CO2 concentrations in the atmosphere to rise.[18][19]

The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere.[20] These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere.[20]
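As a minimal sketch of how the long-term rise and the seasonal oscillation in such a record can be separated, the following fits a quadratic trend plus an annual harmonic by least squares. The file name and column layout are hypothetical placeholders, not the actual Scripps or NOAA data format.

    # Decompose a monthly CO2 record into a smooth trend plus an annual cycle.
    # "co2_monthly.csv" with columns (year, month, co2_ppm) is a hypothetical
    # placeholder for whichever Mauna Loa data file is actually downloaded.
    import numpy as np

    data = np.genfromtxt("co2_monthly.csv", delimiter=",",
                         names=["year", "month", "co2_ppm"])
    t = data["year"] + (data["month"] - 0.5) / 12.0      # decimal years
    y = data["co2_ppm"]

    tc = t - t.mean()
    X = np.column_stack([np.ones_like(t), tc, tc**2,                      # quadratic trend
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])   # annual cycle
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    print("growth rate at mid-record (ppm/yr):", coef[1])
    print("seasonal peak-to-trough amplitude (ppm):", 2 * np.hypot(coef[3], coef[4]))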

Along with CO2, methane and nitrous oxide are also major forcing contributors to the greenhouse effect. The Kyoto Protocol lists these together with hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulphur hexafluoride (SF6),[21] which are entirely artificial (i.e. anthropogenic) gases that also contribute to radiative forcing in the atmosphere. The accompanying chart attributes anthropogenic greenhouse gas emissions to eight main economic sectors, of which the largest contributors are power stations (many of which burn coal or other fossil fuels), industrial processes, transportation fuels (generally fossil fuels), and agricultural by-products (mainly methane from enteric fermentation and nitrous oxide from fertilizer use).[22]

Water vapor

Chart: anthropogenic greenhouse gas emissions by economic sector. Source: Emission Database for Global Atmospheric Research version 3.2, fast track 2000 project

Water vapor is the most abundant greenhouse gas and also the most important in terms of its contribution to the natural greenhouse effect, despite having a short atmospheric lifetime[18] (about 10 days).[23] Some human activities can influence local water vapor levels. However, on a global scale, the concentration of water vapor is controlled by temperature, which influences overall rates of evaporation and precipitation.[18] Therefore, the global concentration of water vapor is not substantially affected by direct human emissions.[18]
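The temperature control described here is commonly quantified with the Clausius-Clapeyron relation (a standard thermodynamic result, not quoted in this article): the saturation vapour pressure e_s grows nearly exponentially with temperature,

    \frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L_v}{R_v T^2} \approx 0.07\ \mathrm{K^{-1}} \quad \text{near } T \approx 288\ \mathrm{K},

i.e. roughly 7% more water vapour can be held per kelvin of warming, while the short residence time keeps direct human emissions of vapour from accumulating globally.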

Land use

Climate change is attributed to land use for two main reasons. Between 1750 and 2007, about two-thirds of anthropogenic CO2 emissions were produced from burning fossil fuels, and about one-third of emissions from changes in land use,[24] primarily deforestation.[25] Deforestation both reduces the amount of carbon dioxide absorbed by deforested regions and releases greenhouse gases directly, together with aerosols, through biomass burning that frequently accompanies it.
A second reason that climate change has been attributed to land use is that land use often alters the terrestrial albedo, which leads to radiative forcing. This effect is more significant locally than globally.[25]

Livestock and land use

Worldwide, livestock production occupies 70% of all land used for agriculture, or 30% of the ice-free land surface of the Earth.[26] More than 18% of anthropogenic greenhouse gas emissions are attributed to livestock and livestock-related activities such as deforestation and increasingly fuel-intensive farming practices.[26] Specific attributions to the livestock sector include methane from enteric fermentation and manure, nitrous oxide from manure and fertilizer use, and CO2 from associated land-use change and fossil-fuel use.

Aerosols

With virtual certainty, scientific consensus has attributed various forms of climate change, chiefly cooling effects, to aerosols, which are small particles or droplets suspended in the atmosphere.[27]

Key sources to which anthropogenic aerosols are attributed[28] include the burning of fossil fuels (a major source of sulphate and black-carbon particles) and biomass burning.

Attribution of 20th century climate change

One global climate model's reconstruction of temperature change during the 20th century as the result of five studied forcing factors and the amount of temperature change attributed to each.

Over the past 150 years human activities have released increasing quantities of greenhouse gases into the atmosphere. This has led to increases in mean global temperature, or global warming. Other human effects are relevant—for example, sulphate aerosols are believed to have a cooling effect. Natural factors also contribute. According to the historical temperature record of the last century, the Earth's near-surface air temperature has risen by around 0.74 ± 0.18 °C (1.33 ± 0.32 °F).[29]
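The Fahrenheit figures follow from the usual conversion for temperature differences, which has no offset term because these are changes rather than absolute temperatures:

    \Delta T_{\mathrm{F}} = \tfrac{9}{5}\,\Delta T_{\mathrm{C}}: \qquad 0.74 \times 1.8 \approx 1.33, \qquad 0.18 \times 1.8 \approx 0.32.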

A historically important question in climate change research has regarded the relative importance of human activity and non-anthropogenic causes during the period of instrumental record. In the 1995 Second Assessment Report (SAR), the IPCC made the widely quoted statement that "The balance of evidence suggests a discernible human influence on global climate". The phrase "balance of evidence" suggested the (English) common-law standard of proof required in civil as opposed to criminal courts: not as high as "beyond reasonable doubt". In 2001 the Third Assessment Report (TAR) refined this, saying "There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities".[30] The 2007 Fourth Assessment Report (AR4) strengthened this finding:
  • "Anthropogenic warming of the climate system is widespread and can be detected in temperature observations taken at the surface, in the free atmosphere and in the oceans. Evidence of the effect of external influences, both anthropogenic and natural, on the climate system has continued to accumulate since the TAR."[31]
Other findings of the IPCC Fourth Assessment Report include:
  • "It is extremely unlikely (<5 class="reference" id="cite_ref-ar4_uncertainty_32-0" sup="">[32]
that the global pattern of warming during the past half century can be explained without external forcing (i.e., it is inconsistent with being the result of internal variability), and very unlikely[32] that it is due to known natural external causes alone. The warming occurred in both the ocean and the atmosphere and took place at a time when natural external forcing factors would likely have produced cooling."[33]
  • "From new estimates of the combined anthropogenic forcing due to greenhouse gases, aerosols, and land surface changes, it is extremely likely (>95%)[32] that human activities have exerted a substantial net warming influence on climate since 1750."[34]
  • "It is virtually certain[32] that anthropogenic aerosols produce a net negative radiative forcing (cooling influence) with a greater magnitude in the Northern Hemisphere than in the Southern Hemisphere."[34]
  • Over the past five decades there has been a global warming of approximately 0.65 °C (1.17 °F) at the Earth's surface (see historical temperature record). Among the possible factors that could produce changes in global mean temperature are internal variability of the climate system, external forcing, an increase in concentration of greenhouse gases, or any combination of these. Current studies indicate that the increase in greenhouse gases, most notably CO2, is mostly responsible for the observed warming. Evidence for this conclusion includes:
    • Estimates of internal variability from climate models, and reconstructions of past temperatures, indicate that the warming is unlikely to be entirely natural.
    • Climate models forced by natural factors and increased greenhouse gases and aerosols reproduce the observed global temperature changes; those forced by natural factors alone do not.[30]
    • "Fingerprint" methods (see below) indicate that the pattern of change is closer to that expected from greenhouse gas-forced change than from natural change.[35]
    • The plateau in warming from the 1940s to 1960s can be attributed largely to sulphate aerosol cooling.[36]

    Details on attribution

    For Northern Hemisphere temperature, recent decades appear to be the warmest since at least about 1000 AD, and the warming since the late 19th century is unprecedented over the last 1000 years.[37] Older data are insufficient to provide reliable hemispheric temperature estimates.[37]

    Recent scientific assessments find that most of the warming of the Earth's surface over the past 50 years has been caused by human activities (see also the section on scientific literature and opinion). This conclusion rests on multiple lines of evidence. Like the warming "signal" that has gradually emerged from the "noise" of natural climate variability, the scientific evidence for a human influence on global climate has accumulated over the past several decades, from many hundreds of studies. No single study is a "smoking gun." Nor has any single study or combination of studies undermined the large body of evidence supporting the conclusion that human activity is the primary driver of recent warming.[1]

    The first line of evidence is based on a physical understanding of how greenhouse gases trap heat, how the climate system responds to increases in greenhouse gases, and how other human and natural factors influence climate. The second line of evidence is from indirect estimates of climate changes over the last 1,000 to 2,000 years. These records are obtained from living things and their remains (like tree rings and corals) and from physical quantities (like the ratio between lighter and heavier isotopes of oxygen in ice cores), which change in measurable ways as climate changes. The lesson from these data is that global surface temperatures over the last several decades are clearly unusual, in that they were higher than at any time during at least the past 400 years. For the Northern Hemisphere, the recent temperature rise is clearly unusual in at least the last 1,000 years (see graph opposite).[1]

    The third line of evidence is based on the broad, qualitative consistency between observed changes in climate and the computer model simulations of how climate would be expected to change in response to human activities. For example, when climate models are run with historical increases in greenhouse gases, they show gradual warming of the Earth and ocean surface, increases in ocean heat content and the temperature of the lower atmosphere, a rise in global sea level, retreat of sea ice and snow cover, cooling of the stratosphere, an increase in the amount of atmospheric water vapor, and changes in large-scale precipitation and pressure patterns. These and other aspects of modelled climate change are in agreement with observations.[1]

    "Fingerprint" studies

    Reconstructions of global temperature that include greenhouse gas increases and other human influences (red line, based on many models) closely match measured temperatures (dashed line).[38] Those that only include natural influences (blue line, based on many models) show a slight cooling, which has not occurred.[38] The ability of models to generate reasonable histories of global temperature is verified by their response to four 20th-century volcanic eruptions: each eruption caused brief cooling that appeared in observed as well as modeled records.[38]

    Finally, there is extensive statistical evidence from so-called "fingerprint" studies. Each factor that affects climate produces a unique pattern of climate response, much as each person has a unique fingerprint. Fingerprint studies exploit these unique signatures, and allow detailed comparisons of modelled and observed climate change patterns. Scientists rely on such studies to attribute observed changes in climate to a particular cause or set of causes. In the real world, the climate changes that have occurred since the start of the Industrial Revolution are due to a complex mixture of human and natural causes. The importance of each individual influence in this mixture changes over time. Of course, there are not multiple Earths, which would allow an experimenter to change one factor at a time on each Earth, thus helping to isolate different fingerprints. Therefore, climate models are used to study how individual factors affect climate. For example, a single factor (like greenhouse gases) or a set of factors can be varied, and the response of the modelled climate system to these individual or combined changes can thus be studied.[1]
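    A heavily simplified numerical sketch of the regression idea behind fingerprint studies follows. The "patterns" here are synthetic time series rather than real spatial fingerprints, and actual studies use optimal fingerprinting with full fields and model-derived noise covariances; the sketch only illustrates the logic of estimating scaling factors.

        # Toy fingerprint regression: write the observations as scaled model responses
        # to individual forcings plus internal variability, then estimate the scalings.
        # All numbers below are synthetic and purely illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 120                                              # e.g. 120 years of global means

        t = np.arange(n)
        ghg_response = 0.008 * t                             # model response to greenhouse gases
        nat_response = 0.10 * np.sin(2 * np.pi * t / 11.0)   # solar-cycle-like natural response
        observations = 1.0 * ghg_response + 0.5 * nat_response + rng.normal(0, 0.1, n)

        X = np.column_stack([ghg_response, nat_response])
        beta, *_ = np.linalg.lstsq(X, observations, rcond=None)
        print("estimated scaling factors (GHG, natural):", beta)
        # A greenhouse-gas scaling factor consistent with 1 (and inconsistent with 0)
        # means that fingerprint is detected at roughly the model-simulated amplitude;
        # fitting the natural pattern alone would leave the long-term trend unexplained.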

    Two fingerprints of human activities on the climate are that land areas will warm more than the oceans, and that high latitudes will warm more than low latitudes.[39] These projections have been confirmed by observations (shown above).[39]

    For example, when climate model simulations of the last century include all of the major influences on climate, both human-induced and natural, they can reproduce many important features of observed climate change patterns. When human influences are removed from the model experiments, results suggest that the surface of the Earth would actually have cooled slightly over the last 50 years (see graph, opposite). The clear message from fingerprint studies is that the observed warming over the last half-century cannot be explained by natural factors, and is instead caused primarily by human factors.[1]

    Another fingerprint of human effects on climate has been identified by looking at a slice through the layers of the atmosphere, and studying the pattern of temperature changes from the surface up through the stratosphere (see the section on solar activity). The earliest fingerprint work focused on changes in surface and atmospheric temperature. Scientists then applied fingerprint methods to a whole range of climate variables, identifying human-caused climate signals in the heat content of the oceans, the height of the tropopause (the boundary between the troposphere and stratosphere, which has shifted upward by hundreds of feet in recent decades), the geographical patterns of precipitation, drought, surface pressure, and the runoff from major river basins.[1]

    Studies published after the appearance of the IPCC Fourth Assessment Report in 2007 have also found human fingerprints in the increased levels of atmospheric moisture (both close to the surface and over the full extent of the atmosphere), in the decline of Arctic sea ice extent, and in the patterns of changes in Arctic and Antarctic surface temperatures.[1]

    The message from this entire body of work is that the climate system is telling a consistent story of increasingly dominant human influence – the changes in temperature, ice extent, moisture, and circulation patterns fit together in a physically consistent way, like pieces in a complex puzzle.[1]
    Increasingly, this type of fingerprint work is shifting its emphasis. As noted, clear and compelling scientific evidence supports the case for a pronounced human influence on global climate. Much of the recent attention is now on climate changes at continental and regional scales, and on variables that can have large impacts on societies. For example, scientists have established causal links between human activities and the changes in snowpack, maximum and minimum (diurnal) temperature, and the seasonal timing of runoff over mountainous regions of the western United States. Human activity is likely to have made a substantial contribution to ocean surface temperature changes in hurricane formation regions. Researchers are also looking beyond the physical climate system, and are beginning to tie changes in the distribution and seasonal behaviour of plant and animal species to human-caused changes in temperature and precipitation.[1]

    For over a decade, one aspect of the climate change story seemed to show a significant difference between models and observations. In the tropics, all models predicted that with a rise in greenhouse gases, the troposphere would be expected to warm more rapidly than the surface. Observations from weather balloons, satellites, and surface thermometers seemed to show the opposite behaviour (more rapid warming of the surface than the troposphere). This issue was a stumbling block in understanding the causes of climate change. It is now largely resolved. Research showed that there were large uncertainties in the satellite and weather balloon data. When uncertainties in models and observations are properly accounted for, newer observational data sets (with better treatment of known problems) are in agreement with climate model results.[1]

    This set of graphs shows the estimated contribution of various natural and human factors to changes in global mean temperature between 1889 and 2006.[40] Estimated contributions are based on multivariate analysis rather than model simulations.[41] The graphs show that human influence on climate has eclipsed the magnitude of natural temperature changes over the past 120 years.[42] Natural influences on temperature—El Niño, solar variability, and volcanic aerosols—have varied by approximately plus and minus 0.2 °C (0.4 °F) (averaging to about zero), while human influences have contributed roughly 0.8 °C (1.4 °F) of warming since 1889.[42]

    This does not mean, however, that all remaining differences between models and observations have been resolved. The observed changes in some climate variables, such as Arctic sea ice, some aspects of precipitation, and patterns of surface pressure, appear to be proceeding much more rapidly than models have projected. The reasons for these differences are not well understood. Nevertheless, the bottom-line conclusion from climate fingerprinting is that most of the observed changes studied to date are consistent with each other, and are also consistent with our scientific understanding of how the climate system would be expected to respond to the increase in heat-trapping gases resulting from human activities.[1]

    Extreme weather events

    Frequency of occurrence (vertical axis) of local June–July–August temperature anomalies (relative to the 1951–1980 mean) for Northern Hemisphere land in units of local standard deviation (horizontal axis).[43] According to Hansen et al. (2012),[43] the distribution of anomalies has shifted to the right as a consequence of global warming, meaning that unusually hot summers have become more common. This is analogous to the rolling of a die: cool summers now cover only half of one side of a six-sided die, white covers one side, red covers four sides, and an extremely hot (red-brown) anomaly covers half of one side.[43]

    One of the subjects discussed in the literature is whether or not extreme weather events can be attributed to human activities. Seneviratne et al. (2012)[44] stated that attributing individual extreme weather events to human activities was challenging. They were, however, more confident over attributing changes in long-term trends of extreme weather. For example, Seneviratne et al. (2012)[45] concluded that human activities had likely led to a warming of extreme daily minimum and maximum temperatures at the global scale.

    Another way of viewing the problem is to consider the effects of human-induced climate change on the probability of future extreme weather events. Stott et al. (2003),[46] for example, considered whether or not human activities had increased the risk of severe heat waves in Europe, like the one experienced in 2003. Their conclusion was that human activities had very likely more than doubled the risk of heat waves of this magnitude.[46]
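    Statements of this kind are usually expressed through the fraction of attributable risk (FAR), a standard quantity in event attribution though not defined in this article. With P_nat the probability of the event in a climate without human influence and P_ant the probability with it,

        \mathrm{FAR} = 1 - \frac{P_{\mathrm{nat}}}{P_{\mathrm{ant}}},

    so "more than doubled the risk" (P_ant > 2 P_nat) corresponds to FAR > 0.5: more than half of the risk of such a heat wave is attributable to human influence.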

    An analogy can be made between an athlete on steroids and human-induced climate change.[47] In the same way that an athlete's performance may increase from using steroids, human-induced climate change increases the risk of some extreme weather events.

    Hansen et al. (2012)[48] suggested that human activities have greatly increased the risk of summertime heat waves. According to their analysis, the land area of the Earth affected by very hot summer temperature anomalies has greatly increased over time (refer to graphs on the left). In the base period 1951-1980, these anomalies covered a few tenths of 1% of the global land area.[49] In recent years, this has increased to around 10% of the global land area. With high confidence, Hansen et al. (2012)[49] attributed the 2010 Moscow and 2011 Texas heat waves to human-induced global warming.
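    The growth in area covered by extreme heat can be illustrated with a toy calculation on a shifted normal distribution; the shift (+1 sigma) and widening (x1.5) used below are illustrative assumptions, not Hansen et al.'s fitted values.

        # If local summer anomalies were standard-normal in the 1951-1980 base period,
        # only ~0.1% of the land area would lie above +3 standard deviations. A modest
        # warm shift and widening of the distribution raises that tail area sharply.
        from scipy.stats import norm

        base_area = norm.sf(3.0, loc=0.0, scale=1.0)      # ~0.0013, a few tenths of 1%
        shifted_area = norm.sf(3.0, loc=1.0, scale=1.5)   # ~0.09, i.e. around 10%
        print(f"base-period area above +3 sigma:    {base_area:.2%}")
        print(f"shifted distribution above +3 sigma: {shifted_area:.2%}")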

    An earlier study by Dole et al. (2011)[50] concluded that the 2010 Moscow heatwave was mostly due to natural weather variability. While not directly citing Dole et al. (2011),[50] Hansen et al. (2012)[49] rejected this type of explanation. Hansen et al. (2012)[49] stated that a combination of natural weather variability and human-induced global warming was responsible for the Moscow and Texas heat waves.

    Scientific literature and opinion

    There are a number of examples of published and informal support for the consensus view. As mentioned earlier, the IPCC has concluded that most of the observed increase in globally averaged temperatures since the mid-20th century is "very likely" due to human activities.[51] The IPCC's conclusions are consistent with those of several reports produced by the US National Research Council.[7][52][53] A report published in 2009 by the U.S. Global Change Research Program concluded that "[global] warming is unequivocal and primarily human-induced."[54] A number of scientific organizations have issued statements that support the consensus view (see also: scientific opinion on climate change).

    Detection and attribution studies

    The IPCC Fourth Assessment Report (2007) concluded that attribution was possible for a number of observed changes in the climate (see effects of global warming). However, attribution was found to be more difficult when assessing changes over smaller regions (less than continental scale) and over short time periods (less than 50 years).[33] Over larger regions, averaging reduces natural variability of the climate, making detection and attribution easier.
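    The benefit of averaging over larger regions can be made explicit with a standard statistical argument (not spelled out in the article): if a region contains roughly N_eff independently varying sub-regions, each with internal variance \sigma^2, the internal variance of the regional mean is only about

        \operatorname{Var}(\bar{T}) \approx \frac{\sigma^2}{N_{\mathrm{eff}}},

    so the forced signal stands out more clearly against internal variability as the averaging area, and hence N_eff, grows.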
    • In 1996, in a paper in Nature titled "A search for human influences on the thermal structure of the atmosphere", Benjamin D. Santer et al. wrote: "The observed spatial patterns of temperature change in the free atmosphere from 1963 to 1987 are similar to those predicted by state-of-the-art climate models incorporating various combinations of changes in carbon dioxide, anthropogenic sulphate aerosol and stratospheric ozone concentrations. The degree of pattern similarity between models and observations increases through this period. It is likely that this trend is partially due to human activities, although many uncertainties remain, particularly relating to estimates of natural variability."
    • A 2002 paper in the Journal of Geophysical Research says "Our analysis suggests that the early twentieth century warming can best be explained by a combination of warming due to increases in greenhouse gases and natural forcing, some cooling due to other anthropogenic forcings, and a substantial, but not implausible, contribution from internal variability. In the second half of the century we find that the warming is largely caused by changes in greenhouse gases, with changes in sulphates and, perhaps, volcanic aerosol offsetting approximately one third of the warming."[57][58]
    • A 2005 review of detection and attribution studies by the International Ad Hoc Detection and Attribution Group[59] found that "natural drivers such as solar variability and volcanic activity are at most partially responsible for the large-scale temperature changes observed over the past century, and that a large fraction of the warming over the last 50 yr can be attributed to greenhouse gas increases. Thus, the recent research supports and strengthens the IPCC Third Assessment Report conclusion that 'most of the global warming over the past 50 years is likely due to the increase in greenhouse gases.'"
    • Barnett and colleagues (2005) say that the observed warming of the oceans "cannot be explained by natural internal climate variability or solar and volcanic forcing, but is well simulated by two anthropogenically forced climate models," concluding that "it is of human origin, a conclusion robust to observational sampling and model differences".[60]
    • Two papers in the journal Science in August 2005[61][62] resolved the problem, evident at the time of the TAR, of tropospheric temperature trends (see also the section on "fingerprint" studies). The UAH version of the record contained errors, and there is evidence of spurious cooling trends in the radiosonde record, particularly in the tropics. See satellite temperature measurements for details, and the 2006 US CCSP report.[63]
    • Multiple independent reconstructions of the temperature record of the past 1000 years confirm that the late 20th century is probably the warmest period in that time (see the preceding section, Details on attribution).

    Reviews of scientific opinion

    • An essay in Science surveyed 928 abstracts related to climate change, and concluded that most journal reports accepted the consensus.[64] This is discussed further in scientific opinion on climate change.
    • A 2010 paper in the Proceedings of the National Academy of Sciences found that among a pool of roughly 1,000 researchers who work directly on climate issues and publish the most frequently on the subject, 97% agree that anthropogenic climate change is happening.[65]
    • A 2011 paper from George Mason University published in the International Journal of Public Opinion Research, "The Structure of Scientific Opinion on Climate Change," collected the opinions of scientists in the earth, space, atmospheric, oceanic or hydrological sciences.[66] The 489 survey respondents—representing nearly half of all those eligible according to the survey's specific standards – work in academia, government, and industry, and are members of prominent professional organizations.[66] The study found that 97% of the 489 scientists surveyed agreed that global temperatures have risen over the past century.[66] Moreover, 84% agreed that "human-induced greenhouse warming" is now occurring.[66] Only 5% disagreed with the idea that human activity is a significant cause of global warming.[66]
    As described above, a small minority of scientists do disagree with the consensus: see list of scientists opposing global warming consensus. For example, Willie Soon and Richard Lindzen[67] say that there is insufficient proof for anthropogenic attribution. Generally this position requires new physical mechanisms to explain the observed warming.[68]

    Solar activity

    Solar radiation at the top of our atmosphere, and global temperature

    Modelled simulation of the effect of various factors (including greenhouse gases and solar irradiance), singly and in combination, showing in particular that solar activity produces a small and nearly uniform warming, unlike what is observed.

    Solar sunspot maximum occurs when the magnetic field of the sun collapses and reverses as part of its average 11-year solar cycle (22 years for a complete north-to-north restoration).
    The role of the sun in recent climate change has been examined by climate scientists. Since 1978, output from the Sun has been measured by satellites[8]:6 significantly more accurately than was previously possible from the surface. These measurements indicate that the Sun's total solar irradiance has not increased since 1978, so the warming during the past 30 years cannot be directly attributed to an increase in total solar energy reaching the Earth (see graph above, left). In the three decades since 1978, the combination of solar and volcanic activity probably had a slight cooling influence on the climate.[69]

    Climate models have been used to examine the role of the sun in recent climate change.[70] Models are unable to reproduce the rapid warming observed in recent decades when they only take into account variations in total solar irradiance and volcanic activity. Models are, however, able to simulate the observed 20th century changes in temperature when they include all of the most important external forcings, including human influences and natural forcings. As has already been stated, Hegerl et al. (2007) concluded that greenhouse gas forcing had "very likely" caused most of the observed global warming since the mid-20th century. In making this conclusion, Hegerl et al. (2007) allowed for the possibility that climate models had underestimated the effect of solar forcing.[71]

    The role of solar activity in climate change has also been calculated over longer time periods using "proxy" datasets, such as tree rings.[72] Models indicate that solar and volcanic forcings can explain periods of relative warmth and cold between A.D. 1000 and 1900, but human-induced forcings are needed to reproduce the late-20th century warming.[73]

    Another line of evidence against the sun having caused recent climate change comes from looking at how temperatures at different levels in the Earth's atmosphere have changed.[74] Models and observations (see figure above, middle) show that greenhouse gases result in warming of the lower atmosphere (the troposphere) but cooling of the upper atmosphere (the stratosphere).[75] Depletion of the ozone layer by chemical refrigerants has also resulted in a cooling effect in the stratosphere. If the sun were responsible for the observed warming, warming of both the troposphere and the top of the stratosphere would be expected, because increased solar activity would replenish ozone and oxides of nitrogen.[76] The stratosphere has a temperature gradient opposite to that of the troposphere: temperature falls with altitude in the troposphere but rises with altitude in the stratosphere. Hadley cells are the mechanism by which ozone generated in the tropics (the region of highest UV irradiance in the stratosphere) is moved poleward. Global climate models suggest that climate change may widen the Hadley cells and push the jet stream northward, thereby expanding the tropics and resulting in warmer, drier conditions in those areas overall.[77]

    Non-consensus views

    Contribution of natural factors and human activities to radiative forcing of climate change.[13] Radiative forcing values are for the year 2005, relative to the pre-industrial era (1750).[13] The contribution of solar irradiance to radiative forcing is about 5% of the value of the combined radiative forcing due to increases in the atmospheric concentrations of carbon dioxide, methane and nitrous oxide.[78]

    Habibullo Abdussamatov (2004), head of space research at St. Petersburg's Pulkovo Astronomical Observatory in Russia, has argued that the sun is responsible for recently observed climate change.[79] Journalists for news sources canada.com (Solomon, 2007b),[80] National Geographic News (Ravillious, 2007),[81] and LiveScience (Than, 2007)[82] reported on the story of warming on Mars. In these articles, Abdussamatov was quoted. He stated that warming on Mars was evidence that global warming on Earth was being caused by changes in the sun.

    Ravillious (2007)[81] quoted two scientists who disagreed with Abdussamatov: Amato Evan, a climate scientist at the University of Wisconsin-Madison, in the US, and Colin Wilson, a planetary physicist at Oxford University in the UK. According to Wilson, "Wobbles in the orbit of Mars are the main cause of its climate change in the current era" (see also orbital forcing).[83] Than (2007) quoted Charles Long, a climate physicist at Pacific Northwest National Laboratory in the US, who disagreed with Abdussamatov.[82]

    Than (2007) pointed to the view of Benny Peiser, a social anthropologist at Liverpool John Moores University in the UK.[82] In his newsletter, Peiser had cited a blog that had commented on warming observed on several planetary bodies in the Solar system. These included Neptune's moon Triton,[84] Jupiter,[85] Pluto[86] and Mars. In an e-mail interview with Than (2007), Peiser stated that:
    "I think it is an intriguing coincidence that warming trends have been observed on a number of very diverse planetary bodies in our solar system, (...) Perhaps this is just a fluke."
    Than (2007) provided alternative explanations of why warming had occurred on Triton, Pluto, Jupiter and Mars.

    The US Environmental Protection Agency (US EPA, 2009) responded to public comments on climate change attribution.[78] A number of commenters had argued that recent climate change could be attributed to changes in solar irradiance. According to the US EPA (2009), this attribution was not supported by the bulk of the scientific literature. Citing the work of the IPCC (2007), the US EPA pointed to the low contribution of solar irradiance to radiative forcing since the start of the Industrial Revolution in 1750. Over this time period (1750 to 2005),[87] the estimated contribution of solar irradiance to radiative forcing was about 5% of the value of the combined radiative forcing due to increases in the atmospheric concentrations of carbon dioxide, methane and nitrous oxide (see graph opposite).
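    That 5% figure is consistent with the AR4 best-estimate forcings (standard IPCC 2007 values; only the CO2 number appears explicitly earlier in this article): about +0.12 W m−2 for solar irradiance against +1.66, +0.48 and +0.16 W m−2 for carbon dioxide, methane and nitrous oxide respectively,

        \frac{0.12}{1.66 + 0.48 + 0.16} = \frac{0.12}{2.30} \approx 0.05.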

    Effect of cosmic rays

    Henrik Svensmark has suggested that the magnetic activity of the sun deflects cosmic rays, and that this may influence the generation of cloud condensation nuclei, and thereby have an effect on the climate.[88] The website ScienceDaily reported on a 2009 study that looked at how past changes in climate have been affected by the Earth's magnetic field.[89] Geophysicist Mads Faurschou Knudsen, who co-authored the study, stated that the study's results supported Svensmark's theory. The authors of the study also acknowledged that CO2 plays an important role in climate change.

    Consensus view on cosmic rays

    The view that cosmic rays could provide the mechanism by which changes in solar activity affect climate is not supported by the literature.[90] Solomon et al. (2007)[91] state:
    [..] the cosmic ray time series does not appear to correspond to global total cloud cover after 1991 or to global low-level cloud cover after 1994. Together with the lack of a proven physical mechanism and the plausibility of other causal factors affecting changes in cloud cover, this makes the association between galactic cosmic ray-induced changes in aerosol and cloud formation controversial
    Studies by Lockwood and Fröhlich (2007)[92] and Sloan and Wolfendale (2008)[93] found no relation between warming in recent decades and cosmic rays. Pierce and Adams (2009)[94] used a model to simulate the effect of cosmic rays on cloud properties. They concluded that the hypothesized effect of cosmic rays was too small to explain recent climate change.[94] Pierce and Adams (2009)[95] noted that their findings did not rule out a possible connection between cosmic rays and climate change, and recommended further research.

    Erlykin et al. (2009)[96] found that the evidence showed that connections between solar variation and climate were more likely to be mediated by direct variation of insolation rather than cosmic rays, and concluded: "Hence within our assumptions, the effect of varying solar activity, either by direct solar irradiance or by varying cosmic ray rates, must be less than 0.07 °C since 1956, i.e. less than 14% of the observed global warming." Carslaw (2009)[97] and Pittock (2009)[98] review the recent and historical literature in this field and continue to find that the link between cosmic rays and climate is tenuous, though they encourage continued research. US EPA (2009)[90] commented on research by Duplissy et al. (2009):[99]
    The CLOUD experiments at CERN are interesting research but do not provide conclusive evidence that cosmic rays can serve as a major source of cloud seeding. Preliminary results from the experiment (Duplissy et al., 2009) suggest that though there was some evidence of ion mediated nucleation, for most of the nucleation events observed the contribution of ion processes appeared to be minor. These experiments also showed the difficulty in maintaining sufficiently clean conditions and stable temperatures to prevent spurious aerosol bursts. There is no indication that the earlier Svensmark experiments could even have matched the controlled conditions of the CERN experiment. We find that the Svensmark results on cloud seeding have not yet been shown to be robust or sufficient to materially alter the conclusions of the assessment literature, especially given the abundance of recent literature that is skeptical of the cosmic ray-climate linkage

    Monday, November 9, 2015

    The Cassandra Effect



    From Wikipedia, the free encyclopedia


    Painting of Cassandra by Evelyn De Morgan

    The Cassandra metaphor (variously labelled the Cassandra 'syndrome', 'complex', 'phenomenon', 'predicament', 'dilemma', or 'curse') occurs when valid warnings or concerns are dismissed or disbelieved.

    The term originates in Greek mythology. Cassandra was a daughter of Priam, the King of Troy. Struck by her beauty, Apollo provided her with the gift of prophecy, but when Cassandra refused Apollo's romantic advances, he placed a curse ensuring that nobody would believe her warnings. Cassandra was left with the knowledge of future events, but could neither alter these events nor convince others of the validity of her predictions.

    The metaphor has been applied in a variety of contexts such as psychology, environmentalism, politics, science, cinema, the corporate world, and in philosophy, and has been in circulation since at least 1949 when French philosopher Gaston Bachelard coined the term 'Cassandra Complex' to refer to a belief that things could be known in advance.[1]

    Usage

    Psychology

    The Cassandra metaphor is applied by some psychologists to individuals who experience physical and emotional suffering as a result of distressing personal perceptions, and who are disbelieved when they attempt to share the cause of their suffering with others.

    Melanie Klein

    In 1963, psychologist Melanie Klein provided an interpretation of Cassandra as representing the human moral conscience whose main task is to issue warnings. Cassandra as moral conscience, "predicts ill to come and warns that punishment will follow and grief arise."[2] Cassandra's need to point out moral infringements and subsequent social consequences is driven by what Klein calls "the destructive influences of the cruel super-ego," which is represented in the Greek myth by the god Apollo, Cassandra's overlord and persecutor.[3] Klein's use of the metaphor centers on the moral nature of certain predictions, which tends to evoke in others "a refusal to believe what at the same time they know to be true, and expresses the universal tendency toward denial, [with] denial being a potent defence against persecutory anxiety and guilt."[2]

    Laurie Layton Schapira

    In a 1988 study Jungian analyst Laurie Layton Schapira explored what she called the "Cassandra Complex" in the lives of two of her analysands.[4]

    Based on clinical experience, she delineates three factors which constitute the Cassandra complex:
    1. dysfunctional relationships with the "Apollo archetype",
    2. emotional or physical suffering, including hysteria or ‘women’s problems’,
    3. and being disbelieved when attempting to relate the facticity of these experiences to others.[4]
    Layton Schapira views the Cassandra complex as resulting from a dysfunctional relationship with what she calls the "Apollo archetype", which refers to any individual's or culture's pattern that is dedicated to, yet bound by, order, reason, intellect, truth and clarity that disavows itself of anything occult or irrational.[5] The intellectual specialization of this archetype creates emotional distance and can predispose relationships to a lack of emotional reciprocity and consequent dysfunctions.[4] She further states that a 'Cassandra woman' is very prone to hysteria because she "feels attacked not only from the outside world but also from within, especially from the body in the form of somatic, often gynaecological, complaints."[6]

    Addressing the metaphorical application of the Greek Cassandra myth, Layton Schapira states that:
    What the Cassandra woman sees is something dark and painful that may not be apparent on the surface of things or that objective facts do not corroborate. She may envision a negative or unexpected outcome; or something which would be difficult to deal with; or a truth which others, especially authority figures, would not accept. In her frightened, ego-less state, the Cassandra woman may blurt out what she sees, perhaps with the unconscious hope that others might be able to make some sense of it. But to them her words sound meaningless, disconnected and blown out of all proportion.[6]

    Jean Shinoda Bolen

    In 1989, Jean Shinoda Bolen, Clinical Professor of Psychiatry at the University of California, published an essay on the god Apollo[7] in which she detailed a psychological profile of the ‘Cassandra woman’, whom she suggested was someone suffering, as in the mythological relationship between Cassandra and Apollo, from a dysfunctional relationship with an “Apollo man”. Bolen added that the Cassandra woman may exhibit “hysterical” overtones, and may be disbelieved when attempting to share what she knows.[8]

    According to Bolen, the archetypes of Cassandra and Apollo are not gender-specific. She states that "women often find that a particular [male] god exists in them as well, just as I found that when I spoke about goddesses men could identify a part of themselves with a specific goddess. Gods and goddesses represent different qualities in the human psyche. The pantheon of Greek deities together, male and female, exist as archetypes in us all… There are gods and goddesses in every person."[9]

    "As an archetype, Apollo personifies the aspect of the personality that wants clear definitions, is drawn to master a skill, values order and harmony, and prefers to look at the surface rather than at what underlies appearances. The Apollo archetype favors thinking over feeling, distance over closeness, objective assessment over subjective intuition."[10]

    Of what she describes as the negative Apollonic influence, Dr. Bolen writes:
    Individuals who resemble Apollo have difficulties that are related to emotional distance, such as communication problems, and the inability to be intimate… Rapport with another person is hard for the Apollo man. He prefers to access (or judge) the situation or the person from a distance, not knowing that he must "get close up" – be vulnerable and empathic – in order to truly know someone else…. But if the woman wants a deeper, more personal relationship, then there are difficulties… she may become increasingly irrational or hysterical.[8]
    Bolen suggests that a Cassandra woman (or man) may become increasingly hysterical and irrational when in a dysfunctional relationship with a negative Apollo, and may experience others' disbelief when describing her experiences.[8]

    Corporate world

    Foreseeing potential future directions for a corporation or company is sometimes called ‘visioning’.[11] Yet achieving a clear, shared vision in an organization is often difficult due to a lack of commitment to the new vision by some individuals in the organization, because it does not match reality as they see it. Those who support the new vision are termed ‘Cassandras’ – able to see what is going to happen, but not believed.[11] Sometimes the name Cassandra is applied to those who can predict rises, falls, and particularly crashes on the global stock market, as happened with Warren Buffett, who repeatedly warned that the 1990s stock market surge was a bubble, earning him the title of 'Wall Street Cassandra'.[12]

    Environmental movement

    Many environmentalists have predicted looming environmental catastrophes including climate change, rise in sea levels, irreversible pollution, and an impending collapse of ecosystems, including those of rainforests and ocean reefs.[13] Such individuals sometimes acquire the label of 'Cassandras', whose warnings of impending environmental disaster are disbelieved or mocked.[13]

    Environmentalist Alan Atkisson states that to understand that humanity is on a collision course with the laws of nature is to be stuck in what he calls the 'Cassandra dilemma', in which one can see the most likely outcome of current trends and can warn people about what is happening, but the vast majority cannot, or will not, respond; later, if catastrophe occurs, they may even blame you, as if your prediction set the disaster in motion.[14] Occasionally there may be a "successful" alert, though the succession of books, campaigns, organizations, and personalities that we think of as the environmental movement has more generally fallen toward the opposite side of this dilemma: a failure to "get through" to the people and avert disaster. In the words of Atkisson: "too often we watch helplessly, as Cassandra did, while the soldiers emerge from the Trojan horse just as foreseen and wreak their predicted havoc. Worse, Cassandra's dilemma has seemed to grow more inescapable even as the chorus of Cassandras has grown larger."[15]

    Other examples

    There are examples of the Cassandra metaphor being applied in the contexts of medical science,[16][17] the media,[18] feminist perspectives on 'reality',[19][20] in relation to Asperger’s Disorder (a 'Cassandra Syndrome' is sometimes said to arise when partners or family members of the Asperger individual seek help but are disbelieved),[21][22][23] and in politics.[24] There are also examples of the metaphor being used in popular music lyrics, such as the 1982 ABBA song "Cassandra"[25][26] and Star One's "Cassandra Complex". The five-part The Mars Volta song "Cassandra Gemini" may reference this syndrome,[27] as may the film 12 Monkeys and dead and divine's "cassandra syndrome".

    Novikov self-consistency principle



    From Wikipedia, the free encyclopedia

    The Novikov self-consistency principle, also known as the Novikov self-consistency conjecture, is a principle developed by Russian physicist Igor Dmitriyevich Novikov in the mid-1980s to solve the problem of paradoxes in time travel, which is theoretically permitted in certain solutions of general relativity (solutions containing what are known as closed timelike curves). The principle asserts that if an event exists that would give rise to a paradox, or to any "change" to the past whatsoever, then the probability of that event is zero. It would thus be impossible to create time paradoxes.

    History of the principle

    Physicists have long been aware that there are solutions to the theory of general relativity which contain closed timelike curves, or CTCs—see for example the Gödel metric. Novikov discussed the possibility of CTCs in books written in 1975 and 1983,[1] offering the opinion that only self-consistent trips back in time would be permitted.[2] In a 1990 paper by Novikov and several others, "Cauchy problem in spacetimes with closed timelike curves",[3] the authors state:
    The only type of causality violation that the authors would find unacceptable is that embodied in the science-fiction concept of going backward in time and killing one's younger self ("changing the past"). Some years ago one of us (Novikov10) briefly considered the possibility that CTCs might exist and argued that they cannot entail this type of causality violation: Events on a CTC are already guaranteed to be self-consistent, Novikov argued; they influence each other around a closed curve in a self-adjusted, cyclical, self-consistent way. The other authors recently have arrived at the same viewpoint.
    We shall embody this viewpoint in a principle of self-consistency, which states that the only solutions to the laws of physics that can occur locally in the real Universe are those which are globally self-consistent. This principle allows one to build a local solution to the equations of physics only if that local solution can be extended to a part of a (not necessarily unique) global solution, which is well defined throughout the nonsingular regions of the spacetime.
    Among the coauthors of this 1990 paper were Kip Thorne, Mike Morris, and Ulvi Yurtsever, who in 1988 had stirred up renewed interest in the subject of time travel in general relativity with their paper "Wormholes, Time Machines, and the Weak Energy Condition",[4] which showed that a new general relativity solution known as a traversable wormhole could lead to closed timelike curves, and unlike previous CTC-containing solutions it did not require unrealistic conditions for the universe as a whole. After discussions with another coauthor of the 1990 paper, John Friedman, they convinced themselves that time travel need not lead to unresolvable paradoxes, regardless of what type of object was sent through the wormhole.[5]:509

    In response, another physicist named Joseph Polchinski sent them a letter in which he argued that one could avoid questions of free will by considering a potentially paradoxical situation involving a billiard ball sent through a wormhole which sends it back in time. In this scenario, the ball is fired into a wormhole at an angle such that, if it continues along that path, it will exit the wormhole in the past at just the right angle to collide with its earlier self, thereby knocking it off course and preventing it from entering the wormhole in the first place. Thorne deemed this problem "Polchinski's paradox".[5]:510–511

    After considering the problem, two students at Caltech (where Thorne taught), Fernando Echeverria and Gunnar Klinkhammer, were able to find a solution beginning with the original billiard ball trajectory proposed by Polchinski which managed to avoid any inconsistencies. In this situation, the billiard ball emerges from the future at a different angle than the one used to generate the paradox, and delivers its younger self a glancing blow instead of knocking it completely away from the wormhole, a blow which changes its trajectory in just the right way so that it will travel back in time with the angle required to deliver its younger self this glancing blow. Echeverria and Klinkhammer actually found that there was more than one self-consistent solution, with slightly different angles for the glancing blow in each case. Later analysis by Thorne and Robert Forward showed that for certain initial trajectories of the billiard ball, there could actually be an infinite number of self-consistent solutions.[5]:511–513
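    The notion of a self-consistent solution can be illustrated with a deliberately crude one-dimensional toy model, entirely separate from the Echeverria-Klinkhammer calculation: the deflection the younger ball receives from its time-travelling self must equal the deflection that same ball later delivers, which numerically is a fixed-point problem.

        # Toy fixed-point search for a "self-consistent" trajectory. The map below is
        # an arbitrary smooth stand-in for the real collision dynamics, chosen only so
        # that a fixed point exists; it is not derived from the wormhole geometry.
        import math

        def delivered_deflection(received):
            """Deflection the older ball delivers, given the deflection it once received."""
            return 0.5 * math.sin(received) + 0.3

        x = 0.0                                   # initial guess: no deflection
        for _ in range(100):
            x_next = delivered_deflection(x)
            if abs(x_next - x) < 1e-12:
                break
            x = x_next

        print("self-consistent deflection:", x)
        print("residual:", delivered_deflection(x) - x)   # ~0 at a consistent solution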

    Echeverria, Klinkhammer and Thorne published a paper discussing these results in 1991;[6] in addition, they reported that they had tried to see if they could find any initial conditions for the billiard ball for which there were no self-consistent extensions, but were unable to do so. Thus it is plausible that there exist self-consistent extensions for every possible initial trajectory, although this has not been proven.[7]:184 This only applies to initial conditions which are outside of the chronology-violating region of spacetime,[7]:187 which is bounded by a Cauchy horizon.[8] This could mean that the Novikov self-consistency principle does not actually place any constraints on systems outside of the region of spacetime where time travel is possible, only inside it.

    Even if self-consistent extensions can be found for arbitrary initial conditions outside the Cauchy horizon, the finding that there can be multiple distinct self-consistent extensions for the same initial condition—indeed, Echeverria et al. found an infinite number of consistent extensions for every initial trajectory they analyzed[7]:184—can be seen as problematic, since classically there seems to be no way to decide which extension the laws of physics will choose. To get around this difficulty, Thorne and Klinkhammer analyzed the billiard ball scenario using quantum mechanics,[5]:514–515 performing a quantum-mechanical sum over histories (path integral) using only the consistent extensions, and found that this resulted in a well-defined probability for each consistent extension. The authors of "Cauchy problem in spacetimes with closed timelike curves" write:
    The simplest way to impose the principle of self-consistency in quantum mechanics (in a classical space-time) is by a sum-over-histories formulation in which one includes all those, and only those, histories that are self-consistent. It turns out that, at least formally (modulo such issues as the convergence of the sum), for every choice of the billiard ball's initial, nonrelativistic wave function before the Cauchy horizon, such a sum over histories produces unique, self-consistent probabilities for the outcomes of all sets of subsequent measurements. ... We suspect, more generally, that for any quantum system in a classical wormhole spacetime with a stable Cauchy horizon, the sum over all self-consistent histories will give unique, self-consistent probabilities for the outcomes of all sets of measurements that one might choose to make.
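    As a loose, purely illustrative rendering of that prescription, the sketch below assigns made-up complex amplitudes to a few discrete candidate "histories", discards the one flagged as paradoxical, and renormalizes the survivors into probabilities. The histories, amplitudes, and consistency flags are all invented for the example; only the bookkeeping (sum over the self-consistent histories only, then normalize) mirrors the quoted idea.

        # Toy "sum over self-consistent histories": keep only the histories flagged
        # as self-consistent and turn their (invented) amplitudes into probabilities.

        histories = [
            # (label, complex amplitude, self-consistent?) -- all values illustrative.
            ("glancing blow, angle A", 0.6 + 0.2j, True),
            ("glancing blow, angle B", 0.3 - 0.4j, True),
            ("ball knocks younger self away", 0.5 + 0.5j, False),  # paradoxical history
        ]

        consistent = [(label, amp) for label, amp, ok in histories if ok]
        weights = [abs(amp) ** 2 for _, amp in consistent]
        total = sum(weights)                      # renormalize over the restricted set

        for (label, _), w in zip(consistent, weights):
            print(f"{label}: probability {w / total:.3f}")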

    Assumptions of the Novikov self-consistency principle

    The Novikov consistency principle assumes certain conditions about what sort of time travel is possible. Specifically, it assumes either that there is only one timeline, or that any alternative timelines (such as those postulated by the many-worlds interpretation of quantum mechanics) are not accessible.

    Given these assumptions, the constraint that time travel must not lead to inconsistent outcomes could be seen merely as a tautology, a self-evident truth that cannot possibly be false, because assuming it to be false would lead to a logical paradox. However, the Novikov self-consistency principle is intended to go beyond just the statement that history must be consistent, making the additional nontrivial assumption that the universe obeys the same local laws of physics in situations involving time travel that it does in regions of spacetime that lack closed timelike curves. This is made clear in the above-mentioned "Cauchy problem in spacetimes with closed timelike curves",[3] where the authors write:
    That the principle of self-consistency is not totally tautological becomes clear when one considers the following alternative: The laws of physics might permit CTC's; and when CTC's occur, they might trigger new kinds of local physics which we have not previously met. ... The principle of self-consistency is intended to rule out such behavior. It insists that local physics is governed by the same types of physical laws as we deal with in the absence of CTC's: the laws that entail self-consistent single valuedness for the fields. In essence, the principle of self-consistency is a principle of no new physics. If one is inclined from the outset to ignore or discount the possibility of new physics, then one will regard self-consistency as a trivial principle.

    Implications for time travelers

    The assumptions of the self-consistency principle can be extended to hypothetical scenarios involving intelligent time travelers as well as unintelligent objects such as billiard balls. The authors of "Cauchy problem in spacetimes with closed timelike curves" commented on the issue in the paper's conclusion, writing:
    If CTC's are allowed, and if the above vision of theoretical physics' accommodation with them turns out to be more or less correct, then what will this imply about the philosophical notion of free will for humans and other intelligent beings? It certainly will imply that intelligent beings cannot change the past. Such change is incompatible with the principle of self-consistency. Consequently, any being who went through a wormhole and tried to change the past would be prevented by physical law from making the change; i.e., the "free will" of the being would be constrained. Although this constraint has a more global character than constraints on free will that follow from the standard, local laws of physics, it is not obvious to us that this constraint is more severe than those imposed by standard physical law.[3]
    Similarly, physicist and astronomer J. Craig Wheeler concludes that:
    According to the consistency conjecture, any complex interpersonal interactions must work themselves out self-consistently so that there is no paradox. That is the resolution. This means, if taken literally, that if time machines exist, there can be no free will. You cannot will yourself to kill your younger self if you travel back in time. You can coexist, take yourself out for a beer, celebrate your birthday together, but somehow circumstances will dictate that you cannot behave in a way that leads to a paradox in time. Novikov supports this point of view with another argument: physics already restricts your free will every day. You may will yourself to fly or to walk through a concrete wall, but gravity and condensed-matter physics dictate that you cannot. Why, Novikov asks, is the consistency restriction placed on a time traveler any different?[9]

    Time loop logic

    Time loop logic, coined by the roboticist and futurist Hans Moravec,[10] is the name of a hypothetical system of computation that exploits the Novikov self-consistency principle to compute answers much faster than is possible in the standard model of computational complexity based on Turing machines. In this system, a computer sends the result of a computation backwards through time and relies upon the self-consistency principle to force the sent result to be correct, provided that the machine can reliably receive information from the future and that the algorithm and the underlying mechanism are formally correct. An incorrect result, or no result at all, can still be produced if the time travel mechanism or the algorithm is not guaranteed to be accurate.

    A simple example is an iterative method algorithm. Moravec states:
    Make a computing box that accepts an input, which represents an approximate solution to some problem, and produces an output that is an improved approximation. Conventionally you would apply such a computation repeatedly a finite number of times, and then settle for the better, but still approximate, result. Given an appropriate negative delay something else is possible: [...] the result of each iteration of the function is brought back in time to serve as the "first" approximation. As soon as the machine is activated, a so-called "fixed-point" of F, an input which produces an identical output, usually signaling a perfect answer, appears (by an extraordinary coincidence!) immediately and steadily. [...] If the iteration does not converge, that is, if F has no fixed point, the computer outputs and inputs will shut down or hover in an unlikely intermediate state.
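    Without an actual negative delay, an ordinary computer can only emulate the scheme by iterating F until its output stops changing, which gains no speed-up but makes the notion of a fixed point concrete. In the sketch below the improvement function is a hypothetical stand-in (one Newton step toward the square root of 2), chosen only for illustration.

        # Classical emulation of Moravec's time-loop scheme: iterate the improvement
        # function F until its output equals its input (a fixed point). With a real
        # negative delay the fixed point would appear "immediately"; here we simply
        # loop, so there is no speed-up -- this only illustrates what a fixed point is.

        def F(x, target=2.0):
            # Hypothetical improvement step: one Newton iteration for sqrt(target).
            return 0.5 * (x + target / x)

        def run_time_loop(F, seed=1.0, tol=1e-12, max_iter=1000):
            x = seed
            for _ in range(max_iter):
                nxt = F(x)
                if abs(nxt - x) < tol:   # F(x) == x: the self-consistent answer
                    return nxt
                x = nxt
            # Corresponds to Moravec's non-converging case (F has no fixed point).
            raise RuntimeError("no fixed point found")

        if __name__ == "__main__":
            print(run_time_loop(F))      # ~1.41421356..., the fixed point sqrt(2)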
    Physicist David Deutsch showed in 1991 that this model of computation could solve NP problems in polynomial time,[11] and Scott Aaronson later extended this result to show that the model could also be used to solve PSPACE problems in polynomial time.[12][13]

    Wormhole



    From Wikipedia, the free encyclopedia

    A wormhole, or Einstein–Rosen bridge, is a hypothetical topological feature of spacetime that would fundamentally be a shortcut between two separate points in spacetime. A wormhole could connect extremely distant regions (a billion light-years or more), nearby regions (a few feet apart), different universes, or, in theory, different points in time. A wormhole is much like a tunnel with two ends, each at a separate point in spacetime.
    For a simplified notion of a wormhole, space can be visualized as a two-dimensional (2D) surface. In this case, a wormhole would appear as a hole in that surface, lead into a 3D tube (the inside surface of a cylinder), then re-emerge at another location on the 2D surface with a hole similar to the entrance. An actual wormhole would be analogous to this, but with the spatial dimensions raised by one. For example, instead of circular holes on a 2D plane, the entry and exit points could be visualized as spheres in 3D space.

    Overview

    Researchers have no observational evidence for wormholes, but the equations of the theory of general relativity have valid solutions that contain wormholes. The first type of wormhole solution discovered was the Schwarzschild wormhole, which would be present in the Schwarzschild metric describing an eternal black hole, but it was found that it would collapse too quickly for anything to cross from one end to the other. Wormholes that could be crossed in both directions, known as traversable wormholes, would only be possible if exotic matter with negative energy density could be used to stabilize them. Wormholes are also a very powerful mathematical metaphor for teaching general relativity.

    The Casimir effect shows that quantum field theory allows the energy density in certain regions of space to be negative relative to the ordinary vacuum energy, and it has been shown theoretically that quantum field theory allows states where energy can be arbitrarily negative at a given point.[1] Many physicists, such as Stephen Hawking,[2] Kip Thorne[3] and others,[4][5][6] therefore argue that such effects might make it possible to stabilize a traversable wormhole. Physicists have not found any natural process that would be predicted to form a wormhole naturally in the context of general relativity, although the quantum foam hypothesis is sometimes used to suggest that tiny wormholes might appear and disappear spontaneously at the Planck scale,[7][8] and stable versions of such wormholes have been suggested as dark matter candidates.[9][10] It has also been proposed that, if a tiny wormhole held open by a negative mass cosmic string had appeared around the time of the Big Bang, it could have been inflated to macroscopic size by cosmic inflation.[11]
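    For concreteness, the standard idealized Casimir result for two perfectly conducting parallel plates separated by a distance d, quoted here only to show that the energy shift relative to the ordinary vacuum is negative, gives an energy per unit plate area and an attractive pressure of

    E/A = -\frac{\pi^2 \hbar c}{720\, d^3}, \qquad F/A = -\frac{\pi^2 \hbar c}{240\, d^4}.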

    The American theoretical physicist John Archibald Wheeler coined the term wormhole in 1957; the German mathematician Hermann Weyl, however, had already proposed a wormhole hypothesis in 1921, in connection with his analysis of mass in terms of electromagnetic field energy.[12]
    This analysis forces one to consider situations... where there is a net flux of lines of force, through what topologists would call "a handle" of the multiply-connected space, and what physicists might perhaps be excused for more vividly terming a "wormhole".
    — John Wheeler in Annals of Physics

    "Embedding diagram" of a Schwarzschild wormhole (see below)

    Definition

    The basic notion of an intra-universe wormhole is that it is a compact region of spacetime whose boundary is topologically trivial, but whose interior is not simply connected. Formalizing this idea leads to definitions such as the following, taken from Matt Visser's Lorentzian Wormholes.
    If a Minkowski spacetime contains a compact region Ω, and if the topology of Ω is of the form Ω ~ ℝ × Σ, where Σ is a three-manifold of nontrivial topology, whose boundary has topology of the form ∂Σ ~ S², and if, furthermore, the hypersurfaces Σ are all spacelike, then the region Ω contains a quasipermanent intra-universe wormhole.
    Wormholes have been defined geometrically, as opposed to topologically,[clarification needed] as regions of spacetime that constrain the incremental deformation of closed surfaces. For example, in Enrico Rodrigo’s The Physics of Stargates, a wormhole is defined informally as:
    a region of spacetime containing a "world tube" (the time evolution of a closed surface) that cannot be continuously deformed (shrunk) to a world line (the time evolution of a point).

    Schwarzschild wormholes


    An artist's impression of a wormhole as seen by an observer crossing the event horizon of a Schwarzschild wormhole that bridges two different universes. The observer originates from the right, and another universe becomes visible in the center of the wormhole's shadow once the horizon is crossed; the observer then sees light that has fallen into the black hole interior region from the other universe. However, this other universe is unreachable in the case of a Schwarzschild wormhole, as the bridge always collapses before the observer has time to cross it, and everything that has fallen through the event horizon of either universe is inevitably crushed in the singularity.

    Lorentzian wormholes known as Schwarzschild wormholes or Einstein–Rosen bridges are connections between areas of space that can be modeled as vacuum solutions to the Einstein field equations, and that are now understood to be intrinsic parts of the maximally extended version of the Schwarzschild metric describing an eternal black hole with no charge and no rotation. Here, "maximally extended" refers to the idea that the spacetime should not have any "edges": for any possible trajectory of a free-falling particle (following a geodesic in the spacetime), it should be possible to continue this path arbitrarily far into the particle's future or past, unless the trajectory hits a gravitational singularity like the one at the center of the black hole's interior. In order to satisfy this requirement, it turns out that in addition to the black hole interior region that particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region that allows us to extrapolate the trajectories of particles that an outside observer sees rising up away from the event horizon. And just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. This means that the interior black hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see light that fell in from the other one), and likewise particles from the interior white hole region can escape into either universe. All four regions can be seen in a spacetime diagram that uses Kruskal–Szekeres coordinates.
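    For reference, one standard textbook convention for the Kruskal–Szekeres coordinates (in units with G = c = 1 and Schwarzschild radius r_s = 2M; normalizations differ between texts) maps the exterior region r > r_s via

    T = \left(\frac{r}{r_s} - 1\right)^{1/2} e^{r/(2 r_s)} \sinh\!\left(\frac{t}{2 r_s}\right), \qquad X = \left(\frac{r}{r_s} - 1\right)^{1/2} e^{r/(2 r_s)} \cosh\!\left(\frac{t}{2 r_s}\right),

    in which the metric takes the form

    ds^2 = \frac{4 r_s^3}{r}\, e^{-r/r_s} \left(-dT^2 + dX^2\right) + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right),

    a form that is regular at the horizon and displays all four regions in a single diagram.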

    In this spacetime, it is possible to come up with coordinate systems such that if you pick a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') and draw an "embedding diagram" depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein–Rosen bridge". Note that the Schwarzschild metric describes an idealized black hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. When the infalling stellar matter is added to a diagram of a black hole's history, it removes the part of the diagram corresponding to the white hole interior region, along with the part of the diagram corresponding to the other universe.[13]
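    The surface drawn in such an embedding diagram for the Schwarzschild exterior is Flamm's paraboloid; for a constant-time equatorial slice, the standard expression for the height of the embedded surface above the plane is

    z(r) = 2\sqrt{r_s\,(r - r_s)}, \qquad r \ge r_s,

    and gluing two copies of this surface along the circle r = r_s produces the tube-like bridge connecting the two exterior regions.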

    The Einstein–Rosen bridge was discovered by Ludwig Flamm[14] in 1916, a few months after Schwarzschild published his solution, and was rediscovered (although it is hard to imagine that Einstein had not seen Flamm's paper when it came out) by Albert Einstein and his colleague Nathan Rosen, who published their result in 1935. However, in 1962, John A. Wheeler and Robert W. Fuller published a paper showing that this type of wormhole is unstable if it connects two parts of the same universe, and that it will pinch off too quickly for light (or any particle moving slower than light) that falls in from one exterior region to make it to the other exterior region.

    According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular Schwarzschild black hole. In the Einstein–Cartan–Sciama–Kibble theory of gravity, however, it forms a regular Einstein–Rosen bridge. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. The minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity.[clarification needed] Instead, the collapsing matter reaches an enormous but finite density and rebounds, forming the other side of the bridge.[15]

    Before the stability problems of Schwarzschild wormholes were apparent, it was proposed that quasars were white holes forming the ends of wormholes of this type.[citation needed]

    Although Schwarzschild wormholes are not traversable in both directions, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the "throat" of a Schwarzschild wormhole open with exotic matter (material that has negative mass/energy).

    Traversable wormholes


    Image of a simulated traversable wormhole that connects the square in front of the physics institutes of the University of Tübingen with the sand dunes near Boulogne-sur-Mer in northern France. The image is calculated with 4D raytracing in a Morris–Thorne wormhole metric, but the gravitational effects on the wavelength of light have not been simulated.[16]

    Lorentzian traversable wormholes would allow travel in both directions from one part of the universe to another part of that same universe very quickly, or would allow travel from one universe to another. The possibility of traversable wormholes in general relativity was first demonstrated by Kip Thorne and his graduate student Mike Morris in a 1988 paper. For this reason, the type of traversable wormhole they proposed, held open by a spherical shell of exotic matter, is referred to as a Morris–Thorne wormhole. Later, other types of traversable wormholes were discovered as allowable solutions to the equations of general relativity, including a variety analyzed in a 1989 paper by Matt Visser, in which a path through the wormhole can be made that does not pass through a region of exotic matter. However, in pure Gauss–Bonnet gravity (a modification to general relativity involving extra spatial dimensions which is sometimes studied in the context of brane cosmology), exotic matter is not needed in order for wormholes to exist—they can exist even with no matter.[17] A type held open by negative-mass cosmic strings was put forth by Visser in collaboration with Cramer et al.,[11] in which it was proposed that such wormholes could have been naturally created in the early universe.

    Wormholes connect two points in spacetime, which means that they would in principle allow travel in time, as well as in space. In 1988, Morris, Thorne and Yurtsever worked out explicitly how to convert a wormhole traversing space into one traversing time.[3] However, according to general relativity, it would not be possible to use a wormhole to travel back to a time earlier than when the wormhole was first converted into a time machine by accelerating one of its two mouths.[18]

    Raychaudhuri's theorem and exotic matter

    To see why exotic matter is required, consider an incoming light front traveling along geodesics, which then crosses the wormhole and re-expands on the other side. The expansion goes from negative to positive. As the wormhole neck is of finite size, we would not expect caustics to develop, at least within the vicinity of the neck. According to the optical Raychaudhuri's theorem, this requires a violation of the averaged null energy condition. Quantum effects such as the Casimir effect cannot violate the averaged null energy condition in any neighborhood of space with zero curvature,[19] but calculations in semiclassical gravity suggest that quantum effects may be able to violate this condition in curved spacetime.[20] Although it was hoped recently that quantum effects could not violate an achronal version of the averaged null energy condition,[21] violations have nevertheless been found,[22] so it remains an open possibility that quantum effects might be used to support a wormhole.
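    The averaged null energy condition invoked above is the requirement that the stress–energy tensor, integrated along a complete null geodesic with tangent vector k^\mu and affine parameter \lambda, be non-negative:

    \int_{-\infty}^{+\infty} T_{\mu\nu}\, k^{\mu} k^{\nu}\, d\lambda \;\ge\; 0.

    A traversable wormhole threaded by such a light front therefore needs null geodesics along which this integral is negative, which is why some form of exotic matter or quantum violation of the condition is required.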

    Faster-than-light travel

    The impossibility of faster-than-light relative speed only applies locally. Wormholes might allow superluminal (faster-than-light) travel by ensuring that the speed of light is not exceeded locally at any time. While traveling through a wormhole, subluminal (slower-than-light) speeds are used. If two points are connected by a wormhole whose length is shorter than the distance between them outside the wormhole, the time taken to traverse it could be less than the time it would take a light beam to make the journey if it took a path through the space outside the wormhole. However, a light beam traveling through the wormhole would of course beat the traveler.
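    A toy calculation with invented numbers (the distances and speed below are purely illustrative, not taken from any source) makes the point: the traveler never exceeds c locally, yet arrives long before light that takes the outside route.

        # Illustrative numbers only: compare travel times through a short wormhole
        # against light taking the long way around through ordinary space.
        c = 1.0                   # work in light-years per year
        outside_distance = 10.0   # assumed separation of the mouths through normal space
        wormhole_length = 1.0     # assumed internal length of the tunnel
        traveler_speed = 0.5 * c  # subluminal speed used inside the wormhole

        t_traveler = wormhole_length / traveler_speed   # 2 years through the tunnel
        t_light_outside = outside_distance / c          # 10 years the long way around
        t_light_inside = wormhole_length / c            # 1 year: light through the tunnel still wins

        print(t_traveler, t_light_outside, t_light_inside)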

    Time travel

    The theory of general relativity predicts that if traversable wormholes exist, they could allow time travel.[3] This would be accomplished by accelerating one end of the wormhole to a high velocity relative to the other, and then sometime later bringing it back; relativistic time dilation would result in the accelerated wormhole mouth aging less than the stationary one as seen by an external observer, similar to what is seen in the twin paradox. However, time connects differently through the wormhole than outside it, so that synchronized clocks at each mouth will remain synchronized to someone traveling through the wormhole itself, no matter how the mouths move around.[23] This means that anything which entered the accelerated wormhole mouth would exit the stationary one at a point in time prior to its entry.

    For example, consider two clocks, one at each mouth, both showing the date as 2000. After being taken on a trip at relativistic velocities, the accelerated mouth is brought back to the same region as the stationary mouth with the accelerated mouth's clock reading 2004 while the stationary mouth's clock reads 2012. A traveler who entered the accelerated mouth at this moment would exit the stationary mouth when its clock also read 2004, in the same region but now eight years in the past. Such a configuration of wormholes would allow a particle's world line to form a closed loop in spacetime, known as a closed timelike curve. An object traveling through a wormhole could carry energy or charge from one time to another, but this would not violate conservation of energy or charge in each time, because the energy/charge of the wormhole mouth itself would change to compensate for the object that fell into it or emerged from it.[24][25]
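    A back-of-the-envelope check of those numbers, treating the accelerated mouth as if it moved at a single constant speed for the whole trip (a simplification that ignores the acceleration phases):

        from math import sqrt

        proper_time = 2004 - 2000   # years elapsed on the travelling mouth's own clock
        coord_time = 2012 - 2000    # years elapsed at the stationary mouth
        gamma = coord_time / proper_time       # required time-dilation factor: 3.0
        beta = sqrt(1.0 - 1.0 / gamma ** 2)    # required speed as a fraction of c: ~0.943

        print(f"gamma = {gamma:.1f}, v = {beta:.3f} c, "
              f"offset between the mouths = {coord_time - proper_time} years")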

    It is thought that it may not be possible to convert a wormhole into a time machine in this manner; the predictions are made in the context of general relativity, but general relativity does not include quantum effects. Analyses using the semiclassical approach to incorporating quantum effects into general relativity have sometimes indicated that a feedback loop of virtual particles would circulate through the wormhole and pile up on themselves, driving the energy density in the region very high and possibly destroying the wormhole before any information could be passed through it, in keeping with the chronology protection conjecture. The debate on this matter is described by Kip S. Thorne in the book Black Holes and Time Warps, and a more technical discussion can be found in The quantum physics of chronology protection by Matt Visser.[26] There is also the Roman ring, which is a configuration of more than one wormhole. This ring seems to allow a closed time loop with stable wormholes when analyzed using semiclassical gravity, although without a full theory of quantum gravity it is uncertain whether the semiclassical approach is reliable in this case.

    Interuniversal travel

    A possible resolution to the paradoxes resulting from wormhole-enabled time travel rests on the many-worlds interpretation of quantum mechanics. In 1991 David Deutsch showed that quantum theory is fully consistent (in the sense that the so-called density matrix can be made free of discontinuities) in spacetimes with closed timelike curves.[27] However, it was later shown that such a model of closed timelike curves can have internal inconsistencies, as it leads to strange phenomena such as distinguishing non-orthogonal quantum states and distinguishing proper and improper mixtures.[28][29] Accordingly, the destructive positive feedback loop of virtual particles circulating through a wormhole time machine, a result indicated by semiclassical calculations, is averted. A particle returning from the future does not return to its universe of origination but to a parallel universe. This suggests that a wormhole time machine with an exceedingly short time jump is a theoretical bridge between contemporaneous parallel universes.[30] Because a wormhole time machine introduces a type of nonlinearity into quantum theory, this sort of communication between parallel universes is consistent with Joseph Polchinski's discovery of an "Everett phone" in Steven Weinberg's formulation of nonlinear quantum mechanics.[31] Such a possibility is depicted in the 2014 science-fiction movie Interstellar.

    Metrics

    Theories of wormhole metrics describe the spacetime geometry of a wormhole and serve as theoretical models for time travel. An example of a (traversable) wormhole metric is the following:[clarification needed]
    ds^2= - c^2 dt^2 + dl^2 + (k^2 + l^2)(d \theta^2 + \sin^2 \theta \, d\phi^2).
    One type of non-traversable wormhole metric is the Schwarzschild solution (see the first diagram):
    ds^2= - c^2 \left(1 - \frac{2GM}{rc^2}\right)dt^2 + \frac{dr^2}{1 - \frac{2GM}{rc^2}} + r^2(d \theta^2 + \sin^2 \theta \, d\phi^2).
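    In the traversable metric above, l is the proper radial distance and \sqrt{k^2 + l^2} is the circumferential radius, which is smallest at l = 0; that minimum value, k, is the radius of the throat. A minimal Python sketch simply tabulates this radius (the value of k is an arbitrary illustrative choice):

        # Circumferential radius r(l) = sqrt(k^2 + l^2) for the traversable wormhole
        # metric quoted above: it is smallest at l = 0 (the throat, radius k) and
        # grows without bound on either side of the throat.
        from math import sqrt

        k = 1.0   # throat radius in arbitrary length units (illustrative value)

        for l in (-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0):
            r = sqrt(k**2 + l**2)
            print(f"l = {l:+.1f}   circumferential radius = {r:.3f}")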

    In fiction

    Wormholes are a common element in science fiction as they allow interstellar, intergalactic, and sometimes interuniversal travel within human timescales. They have also served as a method for time travel.

    Memory and trauma

    From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Memory_and_trauma ...