
Monday, January 12, 2015

World's top climate scientists confess: Global warming is just QUARTER what we thought - and computers got the effects of greenhouse gases wrong

  • Leaked report reveals the world has warmed at quarter the rate claimed by IPCC in 2007
  • Scientists accept their computers may have exaggerated


The Intergovernmental Panel on Climate Change has changed its story after issuing stern warnings about climate change for years

A leaked copy of the world’s most authoritative climate study reveals scientific forecasts of imminent doom were drastically wrong.

The Mail on Sunday has obtained the final draft of a report to be published later this month by the UN Intergovernmental Panel on Climate Change (IPCC), the ultimate watchdog whose massive, six-yearly ‘assessments’ are accepted by environmentalists, politicians and experts as the gospel of climate science. 

They are cited worldwide to justify swingeing fossil fuel taxes and subsidies for ‘renewable’ energy.

Yet the leaked report makes the extraordinary concession that over the past 15 years, recorded world temperatures have increased at only a quarter of the rate the IPCC claimed when it published its last assessment in 2007.

Back then, it said observed warming over the 15 years from 1990-2005 had taken place at a rate of 0.2C per decade, and it predicted this would continue for the following 20 years, on the basis of forecasts made by computer climate models.

But the new report says the observed warming over the more recent 15 years to 2012 was just 0.05C per decade - below almost all computer predictions.

The 31-page ‘summary for policymakers’ is based on a more technical 2,000-page analysis which will be issued at the same time. It also surprisingly reveals that:

  • IPCC scientists accept their forecast computers may have exaggerated the effect of increased carbon emissions on world temperatures – and not taken enough notice of natural variability.

  • They recognise the global warming ‘pause’ first reported by The Mail on Sunday last year is real – and concede that their computer models did not predict it. But they cannot explain why world average temperatures have not shown any statistically significant increase since 1997.

  • They admit large parts of the world were as warm as they are now for decades at a time between 950 and 1250 AD – centuries before the Industrial Revolution, and when the population and CO2 levels were both much lower.

  • The IPCC admits that while computer models forecast a decline in Antarctic sea ice, it has actually grown to a new record high. Again, the IPCC cannot say why.

  • A forecast in the 2007 report that hurricanes would become more intense has simply been dropped, without mention.

This year has been one of the quietest hurricane seasons in history and the US is currently enjoying its longest-ever period – almost eight years – without a single hurricane of Category 3 or above making landfall.


One of the report’s own authors, Professor Myles Allen, the director of Oxford University’s Climate Research Network, last night said this should be the last IPCC assessment – accusing its cumbersome production process of ‘misrepresenting how science works’.

Despite the many scientific uncertainties disclosed by the leaked report, it nonetheless draws familiar, apocalyptic conclusions – insisting that the IPCC is more confident than ever that global warming is mainly humans’ fault.

It says the world will continue to warm catastrophically unless there is drastic action to curb greenhouse gases – with big rises in sea level, floods, droughts and the disappearance of the Arctic icecap.

Last night Professor Judith Curry, head of climate science at Georgia Institute of Technology in Atlanta, said the leaked summary showed that ‘the science is clearly not settled, and is in a state of flux’. She said it therefore made no sense that the IPCC was claiming that its confidence in its forecasts and conclusions has increased.

For example, in the new report, the IPCC says it is ‘extremely likely’ – 95 per cent certain – that human influence caused more than half the temperature rises from 1951 to 2010, up from ‘very likely’ – 90 per cent certain – in 2007.

Prof Curry said: ‘This is incomprehensible to me’ – adding that the IPCC projections are ‘overconfident’, especially given the report’s admitted areas of doubt.



Starting a week tomorrow, about 40 of the 250 authors who contributed to the report – and supposedly produced a definitive scientific consensus – will hold a four-day meeting in Stockholm, together with representatives of most of the 195 governments that fund the IPCC, established in 1988 by the World Meteorological Organisation (WMO) and the United Nations Environment Programme (UNEP).

The governments have tabled 1,800 questions and are demanding major revisions, starting with the failure to account for the pause.

Prof Curry said she hoped that  the ‘inconsistencies will be pointed out’ at the meeting, adding: ‘The consensus-seeking process used by the IPCC creates and amplifies biases in the science. It should be abandoned in favour of a more traditional review that presents arguments for and against – which would  better support scientific progress, and be more useful for policy makers.’ Others agree that the unwieldy and expensive IPCC assessment process has now run its course. 

Prof Allen said: ‘The idea of producing a document of near-biblical infallibility is a misrepresentation of how science works, and we need to look very carefully about what the IPCC does in future.’

Climate change sceptics are more outspoken. Dr Benny Peiser, of the Global Warming Policy Foundation, described the leaked report as a ‘staggering concoction of confusion, speculation and sheer ignorance’. 

As for the pause, he said ‘it would appear that the IPCC is running out of answers... to explain why there is a widening gap between predictions and reality’.

The Mail on Sunday has also seen an earlier draft of the report, dated October last year. There are many striking differences between it and the current, ‘final’ version.

The 2012 draft makes no mention of the pause and, far from admitting that the  Middle Ages were unusually warm, it states that today’s temperatures are the highest for at least 1,300 years, as it did in 2007. Prof Allen said the change ‘reflects greater uncertainty about what was happening around the last millennium but one’.

A further change in the new version is the first-ever scaling down of a crucial yardstick, the ‘equilibrium climate sensitivity’ – the extent to which the world is meant to warm each time CO2 levels double. 

As things stand, the atmosphere is expected to have twice as much CO2 as in pre-industrial times by about 2050. In 2007, the IPCC said the ‘likeliest’ figure was 3C, with up to 4.5C still ‘likely’.

Now it does not give a ‘likeliest’ value and admits it is ‘likely’ it may be as little as 1.5C – so giving the world many more decades to work out how to reduce carbon emissions before temperatures rise to dangerous levels. 

As a result of the warming pause, several recent peer-reviewed scientific studies have  suggested that the true figure for the sensitivity is much lower than anyone – the IPCC included – previously thought: probably less than 2C.
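To make the yardstick concrete: under the standard logarithmic-forcing approximation, equilibrium warming scales with the base-2 logarithm of the CO2 ratio, so the ECS is exactly the warming realised at a doubling. A minimal sketch in Python (the 280 ppm pre-industrial CO2 level is a standard reference value assumed here, not a figure from the report):

    import math

    def equilibrium_warming(ecs_c, co2_ppm, co2_preindustrial_ppm=280.0):
        """Equilibrium warming in C under the standard logarithmic
        approximation: dT = ECS * log2(C / C0)."""
        return ecs_c * math.log2(co2_ppm / co2_preindustrial_ppm)

    # At a full doubling (560 ppm, expected around 2050 per the report),
    # the warming equals the ECS itself:
    for ecs_c in (1.5, 3.0, 4.5):   # new low end; 2007 'likeliest'; 2007 upper bound
        print(f"ECS {ecs_c}C -> {equilibrium_warming(ecs_c, 560.0):.1f}C at doubling")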

Last night IPCC communications chief Jonathan Lynn refused to comment, saying the leaked report was ‘still a work in progress’.

MET OFFICE'S COMPUTER 'FUNDAMENTALLY FLAWED' SAYS NEW ANALYSIS

The British Met Office has issued ‘erroneous statements  and misrepresentations’ about  the pause in global warming  – and its climate computer model is fundamentally flawed, says  a new analysis by a leading independent researcher.

Nic Lewis, a climate scientist and accredited ‘expert reviewer’ for the IPCC, also points out that the Met Office’s flagship climate model suggests the world will warm by twice as much in response to CO2 as models from some other leading institutes, such as Nasa’s climate centre in America.

The Met Office model’s current value for the ‘equilibrium climate sensitivity’ (ECS) – how much hotter the world will get each time CO2 doubles – is 4.6C. This is above the IPCC’s own ‘likely’ range and the ‘95 per cent certainty’ level established by recent peer-reviewed research.

Lewis’s paper is scathing about the ‘future warming’ document issued by the Met Office in July, which purported to explain why the current 16-year global warming ‘pause’ is unimportant, and does not mean the ECS is lower than previously thought. 

Lewis says the document made misleading claims about other scientists’ work – for example, misrepresenting important details of a study by a team that included Lewis and 14 other  IPCC experts. The team’s paper, published in the prestigious journal Nature Geoscience in May, said the best estimate of the ECS was 2C or less – well under half the Met Office estimate.

He also gives evidence that another key Met Office model is inherently skewed. The result is that it will always produce  high values for CO2-induced warming, no matter how its control knobs are tweaked, because its computation of the  cooling effect of smoke and dust  pollution – what scientists call ‘aerosol forcing’ – is simply incompatible with the real world.
This has serious implications,  because the Met Office’s HadCM3 model is used to determine the Government’s climate projections, which influence policy.

Mr Lewis concludes that the Met Office modelling is ‘fundamentally unsatisfactory, because it effectively rules out from the start the possibility that both aerosol forcing and climate sensitivity are modest’. Yet this, he writes, ‘is the combination that recent observations support’.

The Met Office said it would examine the paper and respond in due course.



The Mail on Sunday’s report last week that Arctic ice has had a massive rebound this year from its 2012 record low was followed up around the world – and recorded 174,200 Facebook ‘shares’, by some distance a record for an article on the MailOnline website.

But the article and its author  also became the object of extraordinarily vitriolic attacks from climate commentators  who refuse to accept any evidence that may unsettle  their view of the science.

A Guardian website article claimed our report was ‘delusional’ because it ignored what it called an ‘Arctic death spiral’ caused by global warming.

Beneath this, some readers who made comments had their posts removed by the site moderator, because they ‘didn’t abide by our community standards’.

But among those that still remain on the site is one which likens the work of David Rose – who is Jewish – to Adolf Hitler’s anti-Semitic rant Mein Kampf.

Another suggests it would be reasonable if he were to be murdered by his own children.  A comment under the name DavidFTA read: ‘In a few years, self-defence is going to be made  a valid defence for parricide [killing one’s own father], so Rose’s children will have this article to present in their defence at the trial.’

Critics of the article entirely ignored its equally accurate statement that there is mounting evidence the Arctic sea ice retreat has in the past been cyclical: there were huge melts in the 1920s, followed by later advances.
David Rose’s article in The Mail on Sunday last week attracted worldwide interest.

Some scientists believe that  this may happen again, and may already be under way – delaying the date when the ice cap  might vanish by decades or  even centuries.

Another assault was mounted by Bob Ward, spokesman for the Grantham Institute for Climate Change at the London School of Economics. Mr Ward tweeted that the article was ‘error-strewn’.

The eminent US expert Professor Judith Curry, who unlike Mr Ward is a climate scientist with a long list of  peer-reviewed publications to  her name, disagreed.

On her blog Climate Etc she defended The Mail on Sunday, saying the article contained ‘good material’, and issued a tweet which challenged Mr Ward to say what these ‘errors’ were.
He has yet to reply.

'A REFLECTION OF EVIDENCE FROM NEW STUDIES'... THE IPCC CHANGES ITS STORY


Power house: The IPCC’s headquarters in Geneva, Switzerland.


What they say: ‘The rate of warming over the past 15 years [at 0.05C per decade] is smaller than the trend since 1951.'

What this means: In their last hugely influential report in 2007, the IPCC claimed the world had warmed at a rate of 0.2C per decade 1990-2005, and that this would continue for the following 20 years.

The unexpected 'pause' means that at just 0.05C per decade, the rate 1998-2012 is less than half the long-term trend since 1951, 0.12C per decade, and just a quarter of the 2007-2027 prediction.

Some scientists - such as Oxford's Myles Allen - argue that it is misleading to focus on this 'linear trend', and that one should only compare averages taken from decade-long blocks.
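For readers who want to check the ratios, the arithmetic uses only the three per-decade rates quoted above (a sketch of the comparison, not of how the IPCC fits its trends):

    recent = 0.05      # C per decade, 1998-2012 (leaked draft)
    long_term = 0.12   # C per decade, trend since 1951
    predicted = 0.20   # C per decade, rate forecast in the 2007 report

    print(recent / predicted)   # 0.25 -> a quarter of the predicted rate
    print(recent / long_term)   # ~0.42 -> less than half the post-1951 trend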

What they say: ‘Surface temperature reconstructions show multi-decadal intervals during the Medieval Climate Anomaly  (950-1250) that were in some regions as warm as in the late 20th Century.’

What this means: As recently as October 2012, in an earlier draft of this report, the IPCC was adamant that the world is warmer than at any time for at least 1,300 years. Their new inclusion  of the ‘Medieval Warm Period’ – long before the Industrial Revolution and  its associated fossil fuel burning – is a concession that its earlier statement  is highly questionable.

What they say: ‘Models do not generally reproduce the observed reduction in surface warming trend over the last 10 – 15 years.’

What this means: The ‘models’ are computer forecasts, which the IPCC admits failed to ‘see... a reduction in the warming trend’. In fact, there has been no statistically significant warming at all for almost 17 years – as first reported by this newspaper last October, when the Met Office tried to deny this ‘pause’ existed. In its 2012 draft, the IPCC didn’t mention it either. Now it not only accepts it is real, it admits that its climate models totally failed to predict it.

What they say: ‘There is medium confidence that this difference between models and observations is to a substantial degree caused by unpredictable climate variability, with possible contributions from inadequacies in the solar, volcanic, and aerosol forcings used by the models and, in some models, from too strong a response to increasing greenhouse-gas forcing.’


What this means: The IPCC knows the pause is  real, but has no idea what is causing it. It could be natural climate variability, the sun, volcanoes – and crucially, that the computers have been allowed to give too much weight to the effect carbon dioxide emissions (greenhouse gases) have on temperature change.

What they say: ‘Climate models now include more cloud and aerosol processes, but there remains low confidence in the representation and quantification of these processes in models.’

What this means: Its models don’t accurately forecast the impact of fundamental aspects of the atmosphere – clouds, smoke and dust.

What they say: ‘Most models simulate a small decreasing trend in Antarctic sea ice extent, in contrast  to the small increasing trend in observations... There is low confidence in the scientific understanding of the small observed increase in Antarctic sea ice extent.’

What this means: The models said Antarctic ice would decrease. It’s actually increased, and the IPCC doesn’t know why.

What they say: ‘ECS is likely in the range 1.5C to 4.5C... The lower limit of the assessed likely range is thus less than the 2C in the [2007 report], reflecting the evidence from new studies.’

What this means: ECS – ‘equilibrium climate sensitivity’ – is an estimate of how much the world will warm every time carbon dioxide levels double. A high value means we’re heading for disaster. Many recent studies say that previous IPCC claims, derived from the computer models, have been way too high. It looks as if they’re starting to take notice, and so are scaling down their estimate for the first time.

Clarification

An original version of this article sought to make the fairest updated comparison with the 0.2C warming rate stated by the IPCC in 2007.

It drew on the following sentence in the draft 2013 summary: ‘The rate of warming over the past 15 years… of 0.05C per decade is smaller than the trend since 1951, 0.12C per decade.’ This would represent a reduction in the rate of warming to a little under one half.

But critics argued that the 0.2C warming rate in the 2007 report relates only to the previous 15 years whereas the 0.12C figure in the forthcoming report relates to the half-century since 1951. They pointed out that the equivalent figure in the 2007 report was 0.13C.

This amended article compares the 0.05C per decade observed in the past 15 years with the 0.2C per decade observed in the period 1990-2005 and with the prediction that this rate per decade would continue for a further 20 years.

A  sentence saying that the IPCC now projects warming by 2035 to be between 0.4  and 1.0C, which was reproduced accurately from the leaked document, has been deleted, following representations that these figures were an IPCC typographic error.

Cosmic ray

From Wikipedia, the free encyclopedia
 
Cosmic ray flux versus particle energy

Cosmic rays are immensely high-energy radiation, mainly originating outside the Solar System.[1] They may produce showers of secondary particles that penetrate and impact the Earth's atmosphere and sometimes even reach the surface. Composed primarily of high-energy protons and atomic nuclei, they are of mysterious origin. Data from the Fermi space telescope (2013)[2] have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernovae of massive stars.[3] However, this is not thought to be their only source. Active galactic nuclei probably also produce cosmic rays.

The term ray is a historical accident, as cosmic rays were at first, and wrongly, thought to be mostly electromagnetic radiation. In common scientific usage[4] high-energy particles with intrinsic mass are known as "cosmic" rays, and photons, which are quanta of electromagnetic radiation (and so have no intrinsic mass) are known by their common names, such as "gamma rays" or "X-rays", depending on their frequencies.

Cosmic rays attract great interest practically, due to the damage they inflict on microelectronics and life outside the protection of an atmosphere and magnetic field, and scientifically, because the energies of the most energetic ultra-high-energy cosmic rays (UHECRs) have been observed to approach 3 × 10²⁰ eV,[5] about 40 million times the energy of particles accelerated by the Large Hadron Collider.[6] At 50 J,[7] the highest-energy ultra-high-energy cosmic rays have energies comparable to the kinetic energy of a 90-kilometre-per-hour (56 mph) baseball. As a result of these discoveries, there has been interest in investigating cosmic rays of even greater energies.[8] Most cosmic rays, however, do not have such extreme energies; the energy distribution of cosmic rays peaks at 0.3 gigaelectronvolts (4.8 × 10⁻¹¹ J).[9]
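The baseball comparison is easy to verify: convert 3 × 10²⁰ eV to joules and compare with the kinetic energy of a regulation baseball (about 0.145 kg, a value assumed here rather than given in the text) at 90 km/h:

    EV_TO_J = 1.602176634e-19        # joules per electron-volt

    uhecr_j = 3e20 * EV_TO_J         # ~48 J for the most energetic UHECR observed
    print(uhecr_j)

    mass_kg = 0.145                  # regulation baseball (assumed)
    v_ms = 90.0 / 3.6                # 90 km/h -> 25 m/s
    print(0.5 * mass_kg * v_ms**2)   # ~45 J -- the same order as the UHECR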

Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the nuclei (stripped of their electron shells) of well-known atoms, and about 1% are solitary electrons (similar to beta particles). Of the nuclei, about 90% are simple protons, i.e., hydrogen nuclei; 9% are alpha particles, and 1% are the nuclei of heavier elements, called HZE ions.[10] A very small fraction are stable particles of antimatter, such as positrons or antiprotons. The precise nature of this remaining fraction is an area of active research. An active search from Earth orbit for anti-alpha particles has failed to detect them.

History

After the discovery of radioactivity by Henri Becquerel in 1896, it was generally believed that atmospheric electricity, ionization of the air, was caused only by radiation from radioactive elements in the ground or the radioactive gases or isotopes of radon they produce. Measurements of ionization rates at increasing heights above the ground during the decade from 1900 to 1910 showed a decrease that could be explained as due to absorption of the ionizing radiation by the intervening air.

Discovery

In 1909 Theodor Wulf developed an electrometer, a device to measure the rate of ion production inside a hermetically sealed container, and used it to show higher levels of radiation at the top of the Eiffel Tower than at its base. However, his paper published in Physikalische Zeitschrift was not widely accepted. In 1911 Domenico Pacini observed simultaneous variations of the rate of ionization over a lake, over the sea, and at a depth of 3 meters from the surface. Pacini concluded from the decrease of radioactivity underwater that a certain part of the ionization must be due to sources other than the radioactivity of the Earth.[11]
Pacini makes a measurement in 1910.

Then, in 1912, Victor Hess carried three enhanced-accuracy Wulf electrometers[12] to an altitude of 5300 meters in a free balloon flight. He found the ionization rate increased approximately fourfold over the rate at ground level.[12] Hess also ruled out the Sun as the radiation's source by making a balloon ascent during a near-total eclipse. With the moon blocking much of the Sun's visible radiation, Hess still measured rising radiation at rising altitudes.[12] He concluded "The results of my observation are best explained by the assumption that a radiation of very great penetrating power enters our atmosphere from above." In 1913–1914, Werner Kolhörster confirmed Victor Hess' earlier results by measuring the increased ionization rate at an altitude of 9 km.
Increase of ionization with altitude as measured by Hess in 1912 (left) and by Kolhörster (right)

Hess received the Nobel Prize in Physics in 1936 for his discovery.[13][14]

The Hess balloon flight took place on 7 August 1912. By sheer coincidence, exactly 100 years later on 7 August 2012, the Mars Science Laboratory rover used its Radiation Assessment Detector (RAD) instrument to begin measuring the radiation levels on another planet for the first time. On 31 May 2013, NASA scientists reported that a possible manned mission to Mars may involve a greater radiation risk than previously believed, based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011-2012.[15][16][17]
Hess lands after his balloon flight in 1912.

Identification

In the 1920s the term "cosmic rays" was coined by Robert Millikan who made measurements of ionization due to cosmic rays from deep under water to high altitudes and around the globe. Millikan believed that his measurements proved that the primary cosmic rays were gamma rays, i.e., energetic photons. And he proposed a theory that they were produced in interstellar space as by-products of the fusion of hydrogen atoms into the heavier elements, and that secondary electrons were produced in the atmosphere by Compton scattering of gamma rays. But then, in 1927, J. Clay found evidence,[18] later confirmed in many experiments, of a variation of cosmic ray intensity with latitude, which indicated that the primary cosmic rays are deflected by the geomagnetic field and must therefore be charged particles, not photons. In 1929, Bothe and Kolhörster discovered charged cosmic-ray particles that could penetrate 4.1 cm of gold.[19] Charged particles of such high energy could not possibly be produced by photons from Millikan's proposed interstellar fusion process.

In 1930, Bruno Rossi predicted a difference between the intensities of cosmic rays arriving from the east and the west that depends upon the charge of the primary particles - the so-called "east-west effect."[20] Three independent experiments[21][22][23] found that the intensity is, in fact, greater from the west, proving that most primaries are positive. During the years from 1930 to 1945, a wide variety of investigations confirmed that the primary cosmic rays are mostly protons, and the secondary radiation produced in the atmosphere is primarily electrons, photons and muons. In 1948, observations with nuclear emulsions carried by balloons to near the top of the atmosphere showed that approximately 10% of the primaries are helium nuclei (alpha particles) and 1% are heavier nuclei of elements such as carbon, iron, and lead.[24]

During a test of his equipment for measuring the east-west effect, Rossi observed that the rate of near-simultaneous discharges of two widely separated Geiger counters was larger than the expected accidental rate. In his report on the experiment, Rossi wrote "... it seems that once in a while the recording equipment is struck by very extensive showers of particles, which causes coincidences between the counters, even placed at large distances from one another."[23] In 1937 Pierre Auger, unaware of Rossi's earlier report, detected the same phenomenon and investigated it in some detail. He concluded that high-energy primary cosmic-ray particles interact with air nuclei high in the atmosphere, initiating a cascade of secondary interactions that ultimately yield a shower of electrons, and photons that reach ground level.

Soviet physicist Sergey Vernov was the first to use radiosondes to perform cosmic ray readings with an instrument carried to high altitude by a balloon. On 1 April 1935, he took measurements at heights up to 13.6 kilometers using a pair of Geiger counters in an anti-coincidence circuit to avoid counting secondary ray showers.[25][26]

Homi J. Bhabha derived an expression for the probability of scattering positrons by electrons, a process now known as Bhabha scattering. His classic paper, jointly with Walter Heitler, published in 1937 described how primary cosmic rays from space interact with the upper atmosphere to produce particles observed at the ground level. Bhabha and Heitler explained the cosmic ray shower formation by the cascade production of gamma rays and positive and negative electron pairs.

Energy distribution

Measurements of the energy and arrival directions of the ultra-high energy primary cosmic rays by the techniques of "density sampling" and "fast timing" of extensive air showers were first carried out in 1954 by members of the Rossi Cosmic Ray Group at the Massachusetts Institute of Technology.[27] The experiment employed eleven scintillation detectors arranged within a circle 460 meters in diameter on the grounds of the Agassiz Station of the Harvard College Observatory. From that work, and from many other experiments carried out all over the world, the energy spectrum of the primary cosmic rays is now known to extend beyond 10²⁰ eV. A huge air shower experiment called the Auger Project is currently operated at a site on the pampas of Argentina by an international consortium of physicists, led by James Cronin, winner of the 1980 Nobel Prize in Physics from the University of Chicago, and Alan Watson of the University of Leeds. Their aim is to explore the properties and arrival directions of the very highest-energy primary cosmic rays.[28] The results are expected to have important implications for particle physics and cosmology, due to a theoretical Greisen–Zatsepin–Kuzmin limit to the energies of cosmic rays from long distances (about 160 million light years) which occurs above 10²⁰ eV because of interactions with the remnant photons from the big bang origin of the universe.

High-energy gamma rays (>50 MeV photons) were finally discovered in the primary cosmic radiation by an MIT experiment carried on the OSO-3 satellite in 1967.[29] Components of both galactic and extra-galactic origins were separately identified at intensities much less than 1% of the primary charged particles. Since then, numerous satellite gamma-ray observatories have mapped the gamma-ray sky. The most recent is the Fermi Observatory, which has produced a map showing a narrow band of gamma ray intensity produced in discrete and diffuse sources in our galaxy, and numerous point-like extra-galactic sources distributed over the celestial sphere.

Sources of cosmic rays

Early speculation on the sources of cosmic rays included a 1934 proposal by Baade and Zwicky suggesting cosmic rays originating from supernovae.[30] A 1948 proposal by Horace W. Babcock suggested that magnetic variable stars could be a source of cosmic rays.[31] Subsequently in 1951, Y. Sekido et al. identified the Crab Nebula as a source of cosmic rays.[32] Since then, a wide variety of potential sources for cosmic rays began to surface, including supernovae, active galactic nuclei, quasars, and gamma-ray bursts.[33]
Sources of Ionizing Radiation in Interplanetary Space.

Later experiments have helped to identify the sources of cosmic rays with greater certainty. In 2009, a paper presented at the International Cosmic Ray Conference (ICRC) by scientists at the Pierre Auger Observatory showed ultra-high energy cosmic rays (UHECRs) originating from a location in the sky very close to the radio galaxy Centaurus A, although the authors specifically stated that further investigation would be required to confirm Cen A as a source of cosmic rays.[34] However, no correlation was found between the incidence of gamma-ray bursts and cosmic rays, causing the authors to set an upper limit of 10⁻⁶ erg cm⁻² on the flux of 1 GeV–1 TeV cosmic rays from gamma-ray bursts.[35]

In 2009, supernovae were said to have been "pinned down" as a source of cosmic rays, a discovery made by a group using data from the Very Large Telescope.[36] This analysis, however, was disputed in 2011 with data from PAMELA, which revealed that "spectral shapes of [hydrogen and helium nuclei] are different and cannot be described well by a single power law", suggesting a more complex process of cosmic ray formation.[37] In February 2013, though, research analyzing data from Fermi revealed through an observation of neutral pion decay that supernovae were indeed a source of cosmic rays, with each explosion producing roughly 3 × 10⁴² – 3 × 10⁴³ J of cosmic rays.[2][3] However, supernovae do not produce all cosmic rays, and the proportion of cosmic rays that they do produce is a question which cannot be answered without further study.[38]

Types

Primary cosmic particle collides with a molecule of atmosphere.

Cosmic rays originate as primary cosmic rays, which are those originally produced in various astrophysical processes. Primary cosmic rays are composed primarily of protons and alpha particles (99%), with a small amount of heavier nuclei (~1%) and an extremely minute proportion of positrons and antiprotons.[10] Secondary cosmic rays, caused by a decay of primary cosmic rays as they impact an atmosphere, include neutrons, pions, positrons, and muons. Of these four, the latter three were first detected in cosmic rays.

Primary cosmic rays

Primary cosmic rays primarily originate from outside the Solar System and sometimes even the Milky Way. When they interact with Earth's atmosphere, they are converted to secondary particles. The mass ratio of helium to hydrogen nuclei, 28%, is similar to the primordial elemental abundance ratio of these elements, 24%.[39] The remaining fraction is made up of the other heavier nuclei that are end products of nuclear reactions, primarily lithium, beryllium, and boron. These nuclei appear in cosmic rays in much greater abundance (~1%) than in the solar atmosphere, where they are only about 10⁻¹¹ as abundant as helium. Cosmic rays made up of charged nuclei heavier than helium are called HZE ions. Due to the high charge and heavy nature of HZE ions, their contribution to an astronaut's radiation dose in space is significant even though they are relatively scarce.

This abundance difference is a result of the way secondary cosmic rays are formed. Carbon and oxygen nuclei collide with interstellar matter to form lithium, beryllium and boron in a process termed cosmic ray spallation. Spallation is also responsible for the abundances of scandium, titanium, vanadium, and manganese ions in cosmic rays produced by collisions of iron and nickel nuclei with interstellar matter.[40]

Primary cosmic ray antimatter

Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe. Rather, they appear to consist of only these two elementary particles, newly made in energetic processes.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters.[41][42] A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV.[43] These results have been suggested to be due to positron production in annihilation events of massive dark matter particles.[44]

Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.[45]

There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1 × 10⁻⁶ for the antihelium to helium flux ratio.[46]

The Moon in cosmic rays: the Moon's cosmic-ray shadow, as seen in secondary muons detected 700 m below ground at the Soudan 2 detector; and the Moon as seen by the Compton Gamma Ray Observatory in gamma rays with energies greater than 20 MeV, produced by cosmic-ray bombardment of its surface.[47]

Secondary cosmic rays

When cosmic rays enter the Earth's atmosphere they collide with molecules, mainly oxygen and nitrogen. The interaction produces a cascade of lighter particles, a so-called air shower of secondary radiation that rains down, including x-rays, muons, protons, alpha particles, pions, electrons, and neutrons.[48] All of the produced particles stay within about one degree of the primary particle's path.

Typical particles produced in such collisions are neutrons and charged mesons such as positive or negative pions and kaons. Some of these subsequently decay into muons, which are able to reach the surface of the Earth, and even penetrate for some distance into shallow mines. The muons can be easily detected by many types of particle detectors, such as cloud chambers, bubble chambers or scintillation detectors. The observation of a secondary shower of particles in multiple detectors at the same time is an indication that all of the particles came from that event.

Cosmic rays impacting other planetary bodies in the Solar System are detected indirectly by observing high-energy gamma-ray emissions with gamma-ray telescopes. These are distinguished from radioactive decay processes by their higher energies, above about 10 MeV.

Cosmic-ray flux

An overview of the space environment shows the relationship between the solar activity and galactic cosmic rays.[49]

The flux of incoming cosmic rays at the upper atmosphere is dependent on the solar wind, the Earth's magnetic field, and the energy of the cosmic rays. At distances of ~94 AU from the Sun, the solar wind undergoes a transition, called the termination shock, from supersonic to subsonic speeds. The region between the termination shock and the heliopause acts as a barrier to cosmic rays, decreasing the flux at lower energies (≤ 1 GeV) by about 90%. However, the strength of the solar wind is not constant, and hence it has been observed that cosmic ray flux is correlated with solar activity.

In addition, the Earth's magnetic field acts to deflect cosmic rays from its surface, giving rise to the observation that the flux is apparently dependent on latitude, longitude, and azimuth angle. The magnetic field lines deflect the cosmic rays towards the poles, giving rise to the aurorae.

The combined effects of all of the factors mentioned contribute to the flux of cosmic rays at Earth's surface. For 1 GeV particles, the rate of arrival is about 10,000 per square meter per second. At 1 TeV the rate is 1 particle per square meter per second. At 10 PeV there are only a few particles per square meter per year. Particles above 10 EeV arrive only at a rate of about one particle per square kilometer per year, and above 100 EeV at a rate of about one particle per square kilometer per century.[50]
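Those arrival rates fall off roughly as a power law in energy. A toy sketch that fits a single power law through the two lowest-energy rates quoted above and extrapolates upward (illustrative only; the real spectrum steepens at the 'knee', which is exactly what the extrapolation's failure at 10 EeV shows):

    import math

    # Integral rates quoted in the text: (energy in eV, particles per m^2 per s)
    e1, r1 = 1e9, 1e4     # 1 GeV: ~10,000 per square meter per second
    e2, r2 = 1e12, 1.0    # 1 TeV: ~1 per square meter per second

    # Fit rate(E) = k * E**(-g) through the two quoted points
    g = math.log(r1 / r2) / math.log(e2 / e1)   # spectral index ~1.33 here
    k = r1 * e1**g

    # Extrapolate to 10 EeV, converted to particles per km^2 per year
    rate = k * (1e19)**(-g) * 1e6 * 3.156e7
    print(rate)   # ~1e4 -- far above the quoted ~1 per km^2 per year,
                  # because the true spectrum steepens beyond the 'knee'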

In the past, it was believed that the cosmic ray flux remained fairly constant over time. However, recent research suggests 1.5 to 2-fold millennium-timescale changes in the cosmic ray flux in the past forty thousand years.[51]

The energy density of the cosmic-ray flux in interstellar space is comparable to other deep-space energy densities: it averages about one electron-volt per cubic centimeter (~1 eV/cm³), which is comparable to the energy density of visible starlight at 0.3 eV/cm³, the galactic magnetic field energy density (assumed 3 microgauss) at ~0.25 eV/cm³, and the cosmic microwave background (CMB) radiation energy density at ~0.25 eV/cm³.[52]

Detection methods

The VERITAS array of air Cherenkov telescopes.

There are several ground-based methods of detecting cosmic rays currently in use. The first detection method is called the air Cherenkov telescope, designed to detect low-energy (below about 200 GeV) cosmic rays by analyzing their Cherenkov radiation, which for cosmic rays are gamma rays emitted as they travel faster than the speed of light in their medium, the atmosphere.[53] While these telescopes are extremely good at distinguishing between background radiation and that of cosmic-ray origin, they can only function well on clear nights without the Moon shining, have very small fields of view, and are only active for a few percent of the time. Another Cherenkov telescope uses water as a medium through which particles pass and produce Cherenkov radiation to make them detectable.[54]
Comparison of Radiation Doses - includes the amount detected on the trip from Earth to Mars by the RAD on the MSL (2011 - 2013).[15][16][17]

Extensive air shower (EAS) arrays, a second detection method, measure the charged particles which pass through them. EAS arrays measure much higher-energy cosmic rays than air Cherenkov telescopes, and can observe a broad area of the sky and can be active about 90% of the time. However, they are less able to segregate background effects from cosmic rays than can air Cherenkov telescopes. EAS arrays employ plastic scintillators in order to detect particles.

Another method was developed by Robert Fleischer, P. Buford Price, and Robert M. Walker for use in high-altitude balloons.[55] In this method, sheets of clear plastic, like 0.25 mm Lexan polycarbonate, are stacked together and exposed directly to cosmic rays in space or high altitude. The nuclear charge causes chemical bond breaking or ionization in the plastic. At the top of the plastic stack the ionization is less, due to the high cosmic ray speed. As the cosmic ray speed decreases due to deceleration in the stack, the ionization increases along the path. The resulting plastic sheets are "etched" or slowly dissolved in warm caustic sodium hydroxide solution, that removes the surface material at a slow, known rate. The caustic sodium hydroxide dissolves the plastic at a faster rate along the path of the ionized plastic. The net result is a conical etch pit in the plastic. The etch pits are measured under a high-power microscope (typically 1600x oil-immersion), and the etch rate is plotted as a function of the depth in the stacked plastic.

This technique yields a unique curve for each atomic nucleus from 1 to 92, allowing identification of both the charge and energy of the cosmic ray that traverses the plastic stack. The more extensive the ionization along the path, the higher the charge. In addition to its uses for cosmic-ray detection, the technique is also used to detect nuclei created as products of nuclear fission.
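A rough sketch of the etch-pit geometry just described. Track-detector work conventionally distinguishes the bulk etch rate of undamaged plastic from the faster etch rate along the ion's damage trail; the pit deepens at their difference, and more heavily ionizing nuclei etch faster, which is what makes charge identification possible (the rates below are invented for illustration):

    def etch_pit_depth_um(v_track, v_bulk, hours):
        """Depth of the conical pit below the receding surface, in microns.

        The pit tip advances at v_track while the whole surface dissolves
        at v_bulk, so the pit deepens at (v_track - v_bulk); no pit opens
        unless the damaged track etches faster than the bulk plastic."""
        return max(0.0, (v_track - v_bulk) * hours)

    # Heavier ionization -> faster track etching -> deeper pit in a fixed time
    print(etch_pit_depth_um(v_track=5.0, v_bulk=1.0, hours=6))  # 24.0 um
    print(etch_pit_depth_um(v_track=1.5, v_bulk=1.0, hours=6))  # 3.0 um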

A fourth method involves the use of cloud chambers[56] to detect the secondary muons created when a pion decays. Cloud chambers in particular can be built from widely available materials and can be constructed even in a high-school laboratory. A fifth method, involving bubble chambers, can be used to detect cosmic ray particles.[57]

Another method detects the light from nitrogen fluorescence caused by the excitation of nitrogen in the atmosphere by the shower of particles moving through the atmosphere. This method allows for accurate detection of the direction from which the cosmic ray came.[58]

Finally, the CMOS devices in pervasive smartphone cameras have been proposed as a practical distributed network for detecting air showers from ultra-high-energy cosmic rays (UHECRs), with a collecting area at least comparable to that of conventional cosmic-ray detectors.[59] The app, which is currently in beta and accepting applications, is CRAYFIS (Cosmic RAYs Found In Smartphones).[60][61]

Effects

Changes in atmospheric chemistry

Cosmic rays ionize the nitrogen and oxygen molecules in the atmosphere, which leads to a number of chemical reactions. One of the reactions results in ozone depletion. Cosmic rays are also responsible for the continuous production of a number of unstable isotopes in the Earth's atmosphere, such as carbon-14, via the reaction:
n + ¹⁴N → p + ¹⁴C
Cosmic rays kept the level of carbon-14[62] in the atmosphere roughly constant (70 tons) for at least the past 100,000 years, until the beginning of above-ground nuclear weapons testing in the early 1950s. This fact is the basis of the radiocarbon dating used in archaeology.
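Since the cosmic-ray-maintained ¹⁴C level was roughly constant, the fraction of ¹⁴C surviving in a dead sample dates it by simple exponential decay, using the 5730-year half-life listed below (the standard textbook relation, not something stated in this article):

    import math

    HALF_LIFE_C14 = 5730.0   # years

    def radiocarbon_age_years(fraction_remaining):
        """Age from the surviving 14C fraction: N/N0 = 2**(-t / half-life)."""
        return HALF_LIFE_C14 * math.log(1.0 / fraction_remaining) / math.log(2.0)

    print(radiocarbon_age_years(0.5))    # 5730.0  -- one half-life
    print(radiocarbon_age_years(0.25))   # 11460.0 -- two half-lives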
Reaction products of primary cosmic rays, radioisotope half-life, and production reaction:[63]
  • Tritium (12.3 years): ¹⁴N(n, ³H)¹²C (spallation)
  • Beryllium-7 (53.3 days)
  • Beryllium-10 (1.39 million years): ¹⁴N(n, pα)¹⁰Be (spallation)
  • Carbon-14 (5730 years): ¹⁴N(n, p)¹⁴C (neutron activation)
  • Sodium-22 (2.6 years)
  • Sodium-24 (15 hours)
  • Magnesium-28 (20.9 hours)
  • Silicon-31 (2.6 hours)
  • Silicon-32 (101 years)
  • Phosphorus-32 (14.3 days)
  • Sulfur-35 (87.5 days)
  • Sulfur-38 (2.8 hours)
  • Chlorine-34 m (32 minutes)
  • Chlorine-36 (300,000 years)
  • Chlorine-38 (37.2 minutes)
  • Chlorine-39 (56 minutes)
  • Argon-39 (269 years)
  • Krypton-85 (10.7 years)

Role in ambient radiation

Cosmic rays constitute a fraction of the annual radiation exposure of human beings on the Earth, averaging 0.39 mSv out of a total of 3 mSv per year (13% of total background) for the Earth's population. However, the background radiation from cosmic rays increases with altitude, from 0.3 mSv per year for sea-level areas to 1.0 mSv per year for higher-altitude cities, raising cosmic radiation exposure to a quarter of total background radiation exposure for populations of said cities. Airline crews flying long distance high-altitude routes can be exposed to 2.2 mSv of extra radiation each year due to cosmic rays, nearly doubling their total ionizing radiation exposure.
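The fractions quoted above follow directly from the dose figures (a sketch of the arithmetic; the high-altitude case assumes the non-cosmic components stay fixed, which the text implies but does not state):

    total = 3.0                       # mSv per year, world-average background
    cosmic_sea_level = 0.39
    print(cosmic_sea_level / total)   # ~0.13 -> the 13% quoted

    cosmic_high = 1.0                 # mSv per year in higher-altitude cities
    other = total - cosmic_sea_level  # ~2.6 mSv of non-cosmic background (assumed fixed)
    print(cosmic_high / (cosmic_high + other))   # ~0.28 -> roughly a quarter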
Average annual radiation exposure (millisieverts)

Type        Source        World avg.  Typical range   USA (Princeton)  USA (Wa State)  Japan (MEXT)  Remark
Natural     Air           1.26        0.2-10.0 (a)    2.29             2.00            0.40          Primarily from radon; (a) depends on indoor accumulation of radon gas.
            Internal      0.29        0.2-1.0 (b)     0.16             0.40            0.40          Mainly from radioisotopes in food (⁴⁰K, ¹⁴C, etc.); (b) depends on diet.
            Terrestrial   0.48        0.3-1.0 (c)     0.19             0.29            0.40          (c) Depends on soil composition and building material of structures.
            Cosmic        0.39        0.3-1.0 (d)     0.31             0.26            0.30          (d) Generally increases with elevation.
            Subtotal      2.40        1.0-13.0        2.95             2.95            1.50
Artificial  Medical       0.60        0.03-2.0        3.00             0.53            2.30
            Fallout       0.007       0-1+            -                -               0.01          Peaked in 1963 with a spike in 1986; still high near nuclear test and accident sites. For the United States, fallout is incorporated into the other categories.
            Others        0.0052      0-20            0.25             0.13            0.001         Average annual occupational exposure is 0.7 mSv; mining workers have higher exposure. Populations near nuclear plants have an additional ~0.02 mSv of exposure annually.
            Subtotal      0.6         0 to tens       3.25             0.66            2.311
Total                     3.00        0 to tens       6.20             3.61            3.81

World average and typical range are UNSCEAR figures;[64][65] the USA columns are from Princeton[66] and Washington State,[67] and the Japan column from MEXT.[68] Figures are for the time before the Fukushima Daiichi nuclear disaster. Human-made values by UNSCEAR are from the Japanese National Institute of Radiological Sciences, which summarized the UNSCEAR data.

Effect on electronics

Cosmic rays have sufficient energy to alter the states of circuit components in electronic integrated circuits, causing transient errors to occur, such as corrupted data in electronic memory devices, or incorrect performance of CPUs, often referred to as "soft errors" (not to be confused with software errors caused by programming mistakes/bugs). This has been a problem in extremely high-altitude electronics, such as in satellites, but with transistors becoming smaller and smaller, this is becoming an increasing concern in ground-level electronics as well.[69] Studies by IBM in the 1990s suggest that computers typically experience about one cosmic-ray-induced error per 256 megabytes of RAM per month.[70] To alleviate this problem, the Intel Corporation has proposed a cosmic ray detector that could be integrated into future high-density microprocessors, allowing the processor to repeat the last command following a cosmic-ray event.[71]
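Taken at face value, that 1990s IBM figure scales linearly with memory size; a toy estimate (the old rate certainly does not transfer directly to modern DRAM processes, so treat the numbers as illustration only):

    ERRORS_PER_256MB_MONTH = 1.0   # the 1990s IBM estimate quoted above

    def expected_soft_errors(ram_gb, months=1.0):
        """Linear scaling of the quoted rate to a given amount of RAM."""
        return ERRORS_PER_256MB_MONTH * (ram_gb * 1024.0 / 256.0) * months

    print(expected_soft_errors(16))      # 64.0 per month for 16 GB at that rate
    print(expected_soft_errors(16, 12))  # 768.0 per year -- one reason servers use ECC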

Cosmic rays are suspected as a possible cause of an in-flight incident in 2008 where an Airbus A330 airliner of Qantas twice plunged hundreds of feet after an unexplained malfunction in its flight control system. Many passengers and crew members were injured, some seriously. After this incident, the accident investigators determined that the airliner's flight control system had received a data spike that could not be explained, and that all systems were in perfect working order. This has prompted a software upgrade to all A330 and A340 airliners, worldwide, so that any data spikes in this system are filtered out electronically.[72]

Significance to space travel

Galactic cosmic rays are one of the most important barriers standing in the way of plans for interplanetary travel by crewed spacecraft. Cosmic rays also pose a threat to electronics placed aboard outgoing probes. In 2010, a malfunction aboard the Voyager 2 space probe was credited to a single flipped bit, probably caused by a cosmic ray. Strategies such as physical or magnetic shielding for spacecraft have been considered in order to minimize the damage to electronics and human beings caused by cosmic rays.[73][74]

Role in lightning

Cosmic rays have been implicated in the triggering of electrical breakdown in lightning. It has been proposed that essentially all lightning is triggered through a relativistic process, "runaway breakdown", seeded by cosmic ray secondaries. Subsequent development of the lightning discharge then occurs through "conventional breakdown" mechanisms.[75]

Postulated role in climate change


A role of cosmic rays directly or via solar-induced modulations in climate change was suggested by Edward P. Ney in 1959[76] and by Robert E. Dickinson in 1975.[77] Despite the opinion of over 97% of climate scientists against this notion,[78] the idea has been revived in recent years, most notably by Henrik Svensmark, who has argued that because solar variations modulate the cosmic ray flux on Earth, they would consequently affect the rate of cloud formation and hence the climate.[79] Nevertheless, climate scientists actively publishing in the field have noted that Svensmark has inconsistently altered data in most of his published work on the subject, one example being an adjustment of cloud data that understates the error in low-cloud data but not in high-cloud data.[80]

The 2007 IPCC synthesis report, however, strongly attributes a major role in the ongoing global warming to human-produced gases such as carbon dioxide, nitrous oxide, and halocarbons, and has stated that models including natural forcings only (including aerosol forcings, which cosmic rays are considered by some to contribute to) would result in far less warming than has actually been observed or predicted in models including anthropogenic forcings.[81]

Svensmark, being one of several scientists outspokenly opposed to the mainstream scientific assessment of global warming, has found prominence in the popular movement that denies the scientific consensus. Despite this, Svensmark's work exaggerating the magnitude of the effect of GCRs on global warming continues to be refuted in mainstream science.[82] For instance, a November 2013 study showed that less than 14 percent of global warming since the 1950s could be attributed to the cosmic-ray rate, and that while the models showed a small correlation every 22 years, the cosmic-ray rate did not match the changes in temperature, indicating that the relationship was not causal.[83]
