
Wednesday, January 28, 2015

Theory of everything


From Wikipedia, the free encyclopedia

A theory of everything (ToE) or final theory, ultimate theory, or master theory is a hypothetical single, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe.[1]:6 Finding a ToE is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, as a whole, most closely resemble a ToE. The two theories upon which all modern physics rests are general relativity (GR) and quantum field theory (QFT). GR is a theoretical framework that focuses only on the force of gravity for understanding the universe in regions of large scale and high mass: stars, galaxies, clusters of galaxies, etc. QFT, on the other hand, is a theoretical framework that focuses only on the three non-gravitational forces for understanding the universe in regions of small scale and low mass: sub-atomic particles, atoms, molecules, etc. Within QFT, physicists have constructed the Standard Model, which describes the three non-gravitational forces (the weak, strong, and electromagnetic forces); proposed extensions known as Grand Unified Theories aim to unify these three forces fully.[2]:122

Through years of research, physicists have experimentally confirmed with tremendous accuracy virtually every prediction made by these two theories when in their appropriate domains of applicability. In accordance with their findings, scientists also learned that GR and QFT, as they are currently formulated, are mutually incompatible - they cannot both be right. Since the usual domains of applicability of GR and QFT are so different, most situations require that only one of the two theories be used.[3][4]:842–844 As it turns out, this incompatibility between GR and QFT only becomes apparent in regions of extremely small scale and high mass, such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve this conflict, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of GR and QFT into a seamless whole: a single theory that, in principle, is capable of describing all phenomena. In pursuit of this goal, quantum gravity has recently become an area of active research.

Over the past few decades, a single explanatory framework, called "string theory", has emerged that may turn out to be the ultimate theory of the universe. Many physicists believe that, at the beginning of the universe (up to 10^−43 seconds after the Big Bang), the four fundamental forces were once a single fundamental force. Unlike most (if not all) other theories, string theory may be on its way to successfully incorporating each of the four fundamental forces into a unified whole. According to string theory, every particle in the universe, at its most microscopic level (Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up-quark is a type of string vibrating another way, and so forth).

Initially, the term theory of everything was used with an ironic connotation to refer to various overgeneralized theories. For example, a grandfather of Ijon Tichy — a character from a cycle of Stanisław Lem's science fiction stories of the 1960s — was known to work on the "General Theory of Everything". Physicist John Ellis[5] claims to have introduced the term into the technical literature in an article in Nature in 1986.[6] Over time, the term stuck in popularizations of theoretical physics research.

Historical antecedents

From ancient Greece to Einstein

Archimedes was possibly the first scientist known to have described nature with axioms (or principles) and then to have deduced new results from them.[7] He thus tried to describe "everything" starting from a few axioms. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them.[8]:340

The concept of 'atom', introduced by Democritus, unified all phenomena observed in nature as the motion of atoms. In ancient Greek times philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the collisions of atoms. Following atomism, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles.[9]:184[10]

In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Principia dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation.[11]

In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time:[12]:ch 7
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Essai philosophique sur les probabilités, Introduction. 1814
Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything.
Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics.
In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter.

In his experiments of 1849–50, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism.[13] However, he found no connection.

In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything.[14]

In the late 1920s, the new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known".[15]

After 1915, when Albert Einstein published his theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with renewed interest. In Einstein's day, the strong and weak forces had not yet been discovered, yet he found the prospect of unifying the two known distinct forces, gravity and electromagnetism, far more alluring. This launched his thirty-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand underlying principle. During the last few decades of his life, this quixotic quest isolated Einstein from the mainstream of physics.

Understandably, the mainstream was instead far more excited about the newly emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, Theodor Kaluza, Oskar Klein, and most notably, Albert Einstein and his collaborators. Einstein intensely searched for, but ultimately failed to find, a unifying theory.[16]:ch 17 (But see: Einstein–Maxwell–Dirac equations.) More than half a century later, Einstein's dream of discovering a unified theory has become the Holy Grail of modern physics.

Twentieth century and the nuclear interactions

In the twentieth century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces (or interactions), which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a ToE, quantum mechanics had to be incorporated from the start, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.

Gravity and electromagnetism could always peacefully coexist as entries in a list of classical forces, but for many years it seemed that gravity could not even be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the twentieth century, focused on understanding the three "quantum" forces: electromagnetism and the weak and strong forces. The first two were combined in 1967–68 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the "electroweak" force.[17] Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses of 80.4 GeV/c2 and 91.2 GeV/c2, whereas the photon, which carries the electromagnetic force, is massless. At higher energies Ws and Zs can be created easily and the unified nature of the force becomes apparent.

While the strong and electroweak forces peacefully coexist in the Standard Model of particle physics, they remain distinct. So far, the quest for a theory of everything is thus unsuccessful on two points: neither a unification of the strong and electroweak forces – which Laplace would have called "contact forces" – has been achieved, nor has a unification of these forces with gravitation been achieved.

Modern physics

Conventional sequence of theories

A Theory of Everything would unify all the fundamental interactions of nature: gravitation, strong interaction, weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the ToE should also yield a deep understanding of the various different kinds of possible particles. The usual assumed path of theories is given in the following graph, where each unification step leads one level up:

Theory of everything
    Quantum gravity
        Gravitation
    Electronuclear force (GUT)
        Strong interaction: SU(3)
        Electroweak interaction: SU(2) × U(1)_Y
            Weak interaction
            Electromagnetism: U(1)_EM
                Electricity
                Magnetism

(The strong and electroweak interactions together constitute the standard model of particle physics; combined with gravitation, they underlie the standard model of cosmology.)

In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10^16 GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10^19 GeV.

Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10^16 GeV, far greater than could be reached by any possible Earth-based particle accelerator. Although the simplest GUTs have been experimentally ruled out, the general idea, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric GUTs seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to GUT physics (although it does not seem to form an inevitable part of the theory). Yet GUTs are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies.[3]

The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity. Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity – and thus no accepted theory of everything – has emerged yet. It is usually assumed that the ToE will also solve the remaining problems of GUTs.

In addition to explaining the forces listed in the graph, a ToE may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven yet.

String theory and M-theory

Since the 1990s, many physicists have believed that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric 11-dimensional supergravity, is the theory of everything. However, there is no widespread consensus on this issue.

A surprising property of string/M-theory is that extra dimensions are required for the theory's consistency. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a five-dimensional universe (with one of them small and curled up) looks from the four-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but didn't address the detailed experimental requirements. Another important property of string theory is its supersymmetry, which together with extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a four-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms.[18]

Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory[19] or (sometimes equivalently) in F-theory.[20][21] String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations.[22] On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes[23][24] and allowing for topology-changing processes.[25][26][27] It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality.

In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible four-dimensional universes is incredibly large. The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10^500), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape.[8]:347

One proposed solution is that many or all of these possibilities are realised in one or another of a huge number of universes, but that only a small number of them are habitable, and hence the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory,[28] arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience. Others disagree,[29] and string theory remains an extremely active topic of investigation in theoretical physics.

Loop quantum gravity

Current research on loop quantum gravity may eventually play a fundamental role in a ToE, but that is not its primary aim.[30] Loop quantum gravity also introduces a lower bound on the possible length scales.

There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks), with correct parity properties, has been modelled by Sundance Bilson-Thompson using preons constituted of braids of spacetime as the building blocks.[31] However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Utilization of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations.[32]

This model leads to an interpretation of electric and colour charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge).

Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, colour, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin.[33]

Other attempts

Any ToE must include general relativity and the Standard Model of particle physics.

A recent and very prolific attempt is called causal sets. Like some of the approaches mentioned above, its direct goal is not necessarily to achieve a ToE but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a ToE. Its founding principle is that spacetime is fundamentally discrete and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events, distinguishing their relative past and future.

Besides the previously mentioned attempts there is Garrett Lisi's E8 proposal. This theory attempts to identify general relativity and the standard model within the Lie group E8. The theory does not provide a novel quantization procedure, and the author suggests its quantization might follow the loop quantum gravity approach mentioned above.[34]

Christoph Schiller's Strand Model attempts to account for the gauge symmetry of the Standard Model of particle physics, U(1)×SU(2)×SU(3), with the three Reidemeister moves of knot theory by equating each elementary particle to a different tangle of one, two, or three strands (respectively a long prime knot or unknotted curve, a rational tangle, or a braided tangle).

Present status

At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity. For example, no candidate theory is able to calculate the fine structure constant or the mass of the electron. Most particle physicists expect that the outcomes of the ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – are needed in order to provide further input for a ToE.

Theory of everything and philosophy

The philosophical implications of a physical ToE are frequently debated. For example, if philosophical physicalism is true, a physical ToE will coincide with a philosophical theory of everything.

The "system building" style of metaphysics attempts to answer all the important questions in a coherent way, providing a complete picture of the world. Plato and Aristotle could be said to have created early examples of comprehensive systems. In the early modern period (17th and 18th centuries), the system-building scope of philosophy is often linked to the rationalist method of philosophy, which is the technique of deducing the nature of the world by pure a priori reason. Examples from the early modern period include the Leibniz's Monadology, Descarte's Dualism, and Spinoza's Monism. Hegel's Absolute idealism and Whitehead's Process philosophy were later systems.

Arguments against a theory of everything

In parallel to the intense search for a ToE, various scholars have seriously debated the possibility of its discovery.

Gödel's incompleteness theorem

A number of scholars claim that Gödel's incompleteness theorem suggests that any attempt to construct a ToE is bound to fail. Gödel's theorem, informally stated, asserts that any formal theory expressive enough for elementary arithmetical facts to be expressed and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory.

Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because any "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything.[35] In a later reflection, Jaki states that it is wrong to say that a final theory is impossible, but rather that "when it is on hand one cannot know rigorously that it is a final theory."[36]

Freeman Dyson has stated that Gödel's theorem implies that, just as pure mathematics is inexhaustible, physics is inexhaustible too: no finite set of physical laws could answer every question about the behavior of physical systems.

Stephen Hawking was originally a believer in the Theory of Everything but, after considering Gödel's Theorem, concluded that one was not obtainable.

Jürgen Schmidhuber (1997) has argued against this view; he points out that Gödel's theorems are irrelevant for computable physics.[37] In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not at all prevent formal ToEs describable by very few bits of information.[38]

A related critique was offered by Solomon Feferman,[39] among others. Douglas S. Robertson offers Conway's game of life as an example:[40] the underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws.
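
To make Robertson's example concrete, the complete rules of Conway's Game of Life fit in a few lines of code. The following minimal Python sketch (with the grid stored as a set of live-cell coordinates, an implementation choice made here for brevity) applies them, yet general questions such as whether a given starting pattern ever dies out remain formally undecidable.

    from collections import Counter

    def step(live):
        """Apply one generation of Conway's Game of Life to a set of live cells."""
        # Count the live neighbours of every cell adjacent to a live cell.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive next generation if it has 3 live neighbours,
        # or 2 live neighbours and is alive now.
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: it keeps moving forever, one cell diagonally every 4 generations.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))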

Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a ToE cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question.[41] This definitional discrepancy may explain some of the disagreement among researchers.
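
Hawking's block-arranging example can likewise be phrased as a small computation: n blocks can be laid out as a rectangle with at least two rows and two columns exactly when n is composite, so a physical question about arranging blocks encodes primality. The function below is an illustrative sketch, not taken from Hawking's text.

    def can_form_nontrivial_rectangle(n):
        """True if n blocks can be arranged into an a x b rectangle with a, b >= 2,
        i.e. if n is composite."""
        return any(n % a == 0 for a in range(2, int(n ** 0.5) + 1))

    print(can_form_nontrivial_rectangle(7))   # False: 7 blocks only form a 1 x 7 line, so 7 is prime
    print(can_form_nontrivial_rectangle(12))  # True: 12 blocks form a 3 x 4 rectangle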

Fundamental limits in accuracy

No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasion.[42] Following this view, we may reasonably hope for a theory of everything which self-consistently incorporates all currently known forces, but we should not expect it to be the final answer.

On the other hand, it is often claimed that, despite the apparently ever-increasing complexity of the mathematics of each new theory, in a deep sense associated with their underlying gauge symmetry and the number of fundamental physical constants, the theories are becoming simpler. If this is the case, the process of simplification cannot continue indefinitely, and would eventually have to terminate in a simplest, final theory.

Lack of fundamental laws

There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe.[43] One view is the hard reductionist position that the ToE is the fundamental law and that all other theories that apply within the universe are a consequence of the ToE. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems, are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a ToE.

These debates do not make the point at issue clear. Possibly the only issue at stake is the right to apply the high-status term "fundamental" to the respective subjects of research. A well-known debate of this kind took place between Steven Weinberg and Philip Anderson.[citation needed]

Impossibility of being "of everything"

Although the name "theory of everything" suggests the determinism of Laplace's quotation, this gives a very misleading impression. Determinism is frustrated by the probabilistic nature of quantum mechanical predictions, by the extreme sensitivity to initial conditions that leads to mathematical chaos, by the limitations due to event horizons, and by the extreme mathematical difficulty of applying the theory. Thus, although the current standard model of particle physics "in principle" predicts almost all known non-gravitational phenomena, in practice only a few quantitative results have been derived from the full theory (e.g., the masses of some of the simplest hadrons), and these results (especially the particle masses which are most relevant for low-energy physics) are less accurate than existing experimental measurements. The ToE would almost certainly be even harder to apply for the prediction of experimental results, and thus might be of limited use.
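
The sensitivity to initial conditions mentioned above can be illustrated with the logistic map, a standard textbook example that is not discussed in this article: a perfectly deterministic rule amplifies a difference of one part in a billion in the starting value until the two trajectories bear no relation to each other.

    # Two runs of the deterministic logistic map x -> r*x*(1 - x) with r = 4,
    # starting from values that differ by one part in a billion.
    r = 4.0
    x, y = 0.300000000, 0.300000001

    for _ in range(60):
        x = r * x * (1 - x)
        y = r * y * (1 - y)

    # After a few dozen iterations the two trajectories are typically completely
    # decorrelated, even though the rule itself contains no randomness.
    print(x, y, abs(x - y))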

A motive for seeking a ToE,[citation needed] apart from the pure intellectual satisfaction of completing a centuries-long quest, is that prior examples of unification have predicted new phenomena, some of which (e.g., electrical generators) have proved of great practical importance. As in these prior examples of unification, the ToE would probably allow us to confidently define the domain of validity and the residual error of low-energy approximations to the full theory.

Infinite number of onion layers

Lee Smolin regularly argues that the layers of nature may be like the layers of an onion, and that the number of layers might be infinite.[citation needed] This would imply an infinite sequence of physical theories.

The argument is not universally accepted, because it is not obvious that infinity is a concept that applies to the foundations of nature.

Impossibility of calculation

Weinberg[44] points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a ToE must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics.

Pests invade Europe after neonicotinoids ban, with no benefit to bee health

| January 27, 2015 |
 
Original link:  http://geneticliteracyproject.org/2015/01/27/pests-invade-europe-after-neonicotinoids-ban-with-no-benefit-to-bee-health/
 
[Image: Colorado potato beetle on eggplant]

This month, more than 100 natural food brands, including Clif Bar and Stonyfield, joined together in a drive to encourage the Obama Administration to ban pesticides linked to bee deaths. The culprits, they say, are neonicotinoids, a class of chemicals commonly called neonics, introduced in the 1990s, that are mostly coated onto seeds to help farmers control insects.

“(Neonicotinoids) poison the whole treated plant including the nectar and pollen that bees eat – and they are persistent, lasting months or even years in the plant, soil, and waterways,” writes Jennifer Sass, a scientist with the Natural Resources Defense Council, which has been pressing the Environmental Protection Agency to conduct a one-year review of neonics to determine if a ban is necessary. “Traditional best management practices for bee protection, such as not spraying during the day or on bloom, doesn’t work for neonics,” she claims.

Last November, the NRDC submitted signatures from almost 275,000 of its members urging EPA to respond to its legal petition to expedite the review of neonics.
While there are a number of factors contributing to the dramatic die-off of bees – both honey bees and native bees – there is now a wealth of science that demonstrates that pesticides are a big part of the problem. In particular, the neonic pesticides (imidacloprid, clothianidin, and others) have been linked to impaired bee health, making it more difficult for the colony to breed, to fight off disease and pathogens, and to survive winter. What makes neonics so harmful to bees is that they are systemic — meaning they poison the whole treated plant including the nectar and pollen that bees eat — and they are persistent, lasting months or even years in the plant, soil, and waterways they contaminate. Traditional best management practices for bee protection, such as not spraying during the day or on bloom, doesn’t work for neonics.
Yet, as activists continue to campaign to get neonics banned, news from Europe, where a two-year moratorium went into effect last year, suggests that farmers are unable to control pests without them.
Partly in desperation, they are replacing neonics with pesticides that are older, less effective and demonstrably more harmful to humans and social insects, and farm yields are dropping.

The European Commission banned the use of neonics despite the fact that the scientific community is sharply split as to whether neonics play a significant role in bee deaths. The causes of colony collapse disorder (CCD) and subsequent winter-related problems have remained a mystery, and a heated controversy.

Bees play an integral role in agriculture, helping to pollinate roughly one-third of crop species in the US, including many fruits, vegetables, nuts and livestock feed such as alfalfa and clover. In 2006, as much as 80 percent of the hives in California, the center of the world almond industry, died in what was dubbed Colony Collapse Disorder (CCD). More recently, overwinter deaths of bees in the United States have hovered well above the 19 percent loss level that is common and considered acceptable, sometimes reaching as high as 30 percent. Europe has faced similar overwinter die-offs.

But there is no bee crisis, say most mainstream entomologists. Globally, beehive counts have increased by 45 percent in the last 50 years, according to a United Nations report. Neonics are widely used in Australia, where there have been no mass bee deaths, and in Western Canada, where bees are thriving. Over the past two winters, bee losses have moderated considerably throughout Europe, and beehive numbers have gone up steadily over the past two decades as the use of neonics has risen.
[Chart: European Union beehive totals]
That did not stop the EU ban from being instituted. In North America, despite the 2006 CCD crisis, beehive numbers have held steady since the time neonics were introduced, challenging one of the central claims of environmental critics.
[Chart: North American beehive totals]

While many environmental activists, and some scientists, have coalesced around neonics as a likely culprit, most mainstream entomologists disagree. May Berenbaum, the renowned University of Illinois entomologist and chairwoman of a major National Academy of Sciences study on the loss of pollinators, has said that she is “extremely dubious” that banning neonics, as many greens are demanding, would have any positive effect.

Jeff Pettis, research leader of the Bee Research Laboratory in Beltsville, Maryland, who formerly headed the USDA’s Agricultural Research Service’s research on bee Colony Collapse Disorder, said the bee problem has been perplexing:
We know more now than we did a few years ago, but CCD has really been a 1,000-piece jigsaw puzzle, and the best I can say is that a lot of pieces have been turned over. The problem is that they have almost all been blue-sky pieces—frame but no center picture.
Last summer, President Barack Obama issued a memorandum asking agencies to address steps to protect pollinators; however, the report is not expected to be released until next year. The panel is being headed by Illinois entomologist May Berenbaum. In a controversial move, the National Wildlife Refuge System announced a ban on both neonics and genetically modified organisms last August.

Cities, states, and Canadian provinces, egged on by environmental activists, are beginning to act unilaterally. Ontario voted to ban the chemicals, as have several cities or counties, including Vancouver; Seattle; Thurston County, Wash.; Spokane, Wash.; Cannon Beach, Ore.; and Shorewood, Minn. Oregon held a hearing recently to consider a policy that would limit neonics use.

European fallout

While pressures on politicians increase, farmers in Europe say they are already seeing the fallout on crop yields from the ban, which many claim is a politically driven policy. This is the first season for growing oilseed rape following the EU ban, and there has been a noticeable rise in beetle damage.
[Image: Flea beetle larvae]
Last autumn saw beetle numbers swell in areas of eastern England and the damage from their larvae could leave crops open to other pest damage and lodging.

Ryan Hudson, agronomist with distribution group Farmacy, says that growers in the beetle hotspot areas are seeing some fields “riddled” with the larvae.

“They could do a lot of damage – arguably more than the adults because we cannot control them now and I think we will find out the true extent this season,” he explains.

Near Cambridge, England, farmer Martin Jenkins found flea beetles for the first time in almost a decade on his 750 acres of rapeseed (commonly called canola in the U.S.). He told Bloomberg:
When we remove a tool from the box, that puts even more pressure on the tools we’ve got left. More pesticides are being used, and even more ridiculous is there will be massively less rapeseed.
There is little growers can do now, as the only option for tackling the larvae is an autumn spraying of pyrethroid, an older chemical phased out for multiple reasons, so they must now focus on stimulating growth. The infestation may cause a 15 percent drop in canola yields in Europe this year, and some areas are even worse off. Last fall, some canola fields in Germany were so damaged that farmers plowed them under and replanted winter cereals. Nick von Westenholz, chief executive of the UK's Crop Protection Association, an industry group, explained:
Farmers have had to go back to older chemistry and chemistry that is increasingly less effective. Companies would like to innovate and bring newer stuff, but the neonicotinoid example is not a tempting one.
Bringing new chemicals to market is expensive and takes time to move through the regulatory system. Meanwhile, canola farmers are spraying almost twice as much of the alternative chemicals from the pyrethroid class, said Manuela Specht from the German oilseed trade group UFOP in Berlin.
Last fall, UK farmer Peter Kendall said he sprayed his crop with pyrethroids three times before giving up, replanting and spraying again.

This increased spraying with harsher chemicals may harm the honeybees that the neonics ban was intended to protect in the first place. A 2014 study by researchers at the University of London found that exposure to pyrethroids can reduce bee size.

“There is a strong feeling among farmers that we are worse off and the environment is worse off,” said Kendall.

Rebecca Randall is a journalist focusing on international relations and global food issues. Follow her @beccawrites.


Climate sensitivity

From Wikipedia, the free encyclopedia

Figure: Frequency distribution of climate sensitivity, based on model simulations.[1] Few of the simulations result in less than 2 °C of warming, near the low end of estimates by the Intergovernmental Panel on Climate Change (IPCC).[1] Some simulations result in significantly more than 4 °C, the high end of the IPCC estimates.[1] This pattern (statisticians call it a "right-skewed distribution") suggests that if carbon dioxide concentrations double, the probability of very large increases in temperature is greater than the probability of very small increases.[1]

Climate sensitivity is the equilibrium temperature change in response to changes of the radiative forcing.[2] Therefore climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Slow climate feedbacks, especially changes of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered.[3]

Although climate sensitivity is usually used in the context of radiative forcing by carbon dioxide (CO2), it is thought of as a general property of the climate system: the change in surface air temperature (ΔTs) following a unit change in radiative forcing (RF), and thus is expressed in units of °C/(W/m2). For this to be useful, the measure must be independent of the nature of the forcing (e.g. from greenhouse gases or solar variation); to first order this is indeed found to be so[citation needed].

The climate sensitivity specifically due to CO2 is often expressed as the temperature change in °C associated with a doubling of the concentration of carbon dioxide in Earth's atmosphere.

For coupled atmosphere-ocean global climate models (e.g. CMIP5) the climate sensitivity is an emergent property: it is not a model parameter, but rather a result of a combination of model physics and parameters. By contrast, simpler energy-balance models may have climate sensitivity as an explicit parameter.

    ΔTs = λ · RF

The equation relates radiative forcing (RF) to a linear change in global surface temperature (ΔTs) via the climate sensitivity λ.

It is also possible to estimate climate sensitivity from observations; however, this is difficult due to uncertainties in the forcing and temperature histories.

Equilibrium and transient climate sensitivity

The equilibrium climate sensitivity (ECS) refers to the equilibrium change in global mean near-surface air temperature that would result from a sustained doubling of the atmospheric (equivalent) carbon dioxide concentration (ΔTx2). As estimated by the IPCC Fifth Assessment Report (AR5) "there is high confidence that ECS is extremely unlikely less than 1°C and medium confidence that the ECS is likely between 1.5°C and 4.5°C and very unlikely greater than 6°C."[4] This is a change from the IPCC Fourth Assessment Report (AR4), which said it was likely to be in the range 2 to 4.5 °C with a best estimate of about 3 °C, and is very unlikely to be less than 1.5 °C. Values substantially higher than 4.5 °C cannot be excluded, but agreement of models with observations is not as good for those values.[5] The IPCC Third Assessment Report (TAR) said it was "likely to be in the range of 1.5 to 4.5 °C".[6] Other estimates of climate sensitivity are discussed later on.

A model estimate of equilibrium sensitivity thus requires a very long model integration; fully equilibrating ocean temperatures requires integrations of thousands of model years. A measure requiring shorter integrations is the transient climate response (TCR), which is defined as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year.[7] The transient response is lower than the equilibrium sensitivity, due to the "inertia" of ocean heat uptake.
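
A short side calculation makes the TCR definition concrete: at 1% growth per year the CO2 concentration doubles after roughly 70 years, so the TCR is the warming averaged over approximately model years 60 to 80 of such a run (an illustrative computation based only on the definition above).

    import math

    # Years for CO2 to double when it grows at 1% per year: solve (1.01)**t = 2.
    t_double = math.log(2) / math.log(1.01)
    print(round(t_double, 1))                            # about 69.7 years

    # The TCR is the warming averaged over the 20 years centred on that point,
    # i.e. roughly model years 60 to 80 of the 1%-per-year run.
    print(round(t_double) - 10, round(t_double) + 10)    # 60 80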

Over the 50–100 year timescale, the climate response to forcing is likely to follow the TCR; for considerations of climate stabilization, the ECS is more useful.

An estimate of the equilibrium climate sensitivity may be made by combining the observed transient climate response with the known properties of the ocean reservoirs and the surface heat fluxes; the result is known as the effective climate sensitivity. This "may vary with forcing history and climate state".[8][9]

A less commonly used concept, the Earth system sensitivity (ESS), can be defined which includes the effects of slower feedbacks, such as the albedo change from melting the large ice sheets that covered much of the northern hemisphere during the last glacial maximum. These extra feedbacks make the ESS larger than the ECS — possibly twice as large — but also mean that it may well not apply to current conditions.[10]

Sensitivity to carbon dioxide forcing

Climate sensitivity is often evaluated in terms of the change in equilibrium temperature due to radiative forcing due to the greenhouse effect. According to the Arrhenius relation,[11] the radiative forcing (and hence the change in temperature) is proportional to the logarithm of the concentration of infrared-absorbing gases in the atmosphere. Thus, the sensitivity of temperature to gases in the atmosphere (most notably carbon dioxide) is often expressed in terms of the change in temperature per doubling of the concentration of the gas.
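
The logarithmic dependence is often written using a simplified expression of the form ΔF ≈ 5.35 ln(C/C0) W/m2; the coefficient 5.35 is a commonly quoted approximation that is not given in this article, but it reproduces the figure of about 3.7 W/m2 per doubling used later on.

    import math

    def co2_forcing(c, c0, alpha=5.35):
        """Radiative forcing in W/m2 for a CO2 change from c0 to c, using the
        commonly quoted simplified logarithmic expression dF = alpha * ln(c / c0)."""
        return alpha * math.log(c / c0)

    print(round(co2_forcing(560, 280), 2))    # doubling from 280 ppm: about 3.7 W/m2
    # Because the relation is logarithmic, every doubling adds the same forcing:
    print(round(co2_forcing(1120, 560), 2))   # also about 3.7 W/m2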

Radiative forcing due to doubled CO2

CO2 climate sensitivity has a component directly due to radiative forcing by CO2, and a further contribution arising from climate feedbacks, both positive and negative. "Without any feedbacks, a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming, which is easy to calculate and is undisputed. The remaining uncertainty is due entirely to feedbacks in the system, namely, the water vapor feedback, the ice-albedo feedback, the cloud feedback, and the lapse rate feedback";[12] addition of these feedbacks leads to a value of the sensitivity to CO2 doubling of approximately 3 °C ± 1.5 °C, which corresponds to a value of λ of 0.8 K/(W/m2).
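
The numbers in the paragraph above fit together as a one-line check (a sketch of the arithmetic only): the no-feedback response of 1 °C corresponds to roughly 0.27 K/(W/m2), while λ ≈ 0.8 K/(W/m2) applied to the 3.7 W/m2 doubling forcing gives about 3 °C.

    forcing_2xco2 = 3.7                         # W/m2, forcing from doubled CO2 (from the text)

    lambda_no_feedback = 1.0 / forcing_2xco2    # about 0.27 K/(W/m2), from "1 C without feedbacks"
    lambda_with_feedbacks = 0.8                 # K/(W/m2), value quoted above

    print(round(lambda_no_feedback * forcing_2xco2, 1))     # 1.0 C, no feedbacks
    print(round(lambda_with_feedbacks * forcing_2xco2, 1))  # about 3.0 C, with feedbacks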

In the earlier 1979 NAS report[13] (p. 7), the radiative forcing due to doubled CO2 is estimated to be 4 W/m2, as calculated (for example) in Ramanathan et al. (1979).[14] In 2001 the IPCC adopted the revised value of 3.7 W/m2, the difference attributed to a "stratospheric temperature adjustment".[15] More recently an intercomparison of radiative transfer codes (Collins et al., 2006)[16] showed discrepancies among climate models and between climate models and more exact radiation codes in the forcing attributed to doubled CO2 even in cloud-free sky; presumably the differences would be even greater if forcing were evaluated in the presence of clouds because of differences in the treatment of clouds in different models. Undoubtedly the difference in forcing attributed to doubled CO2 in different climate models contributes to differences in apparent sensitivities of the models, although this effect is thought to be small relative to the intrinsic differences in sensitivities of the models themselves.[17]


Consensus estimates

A committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney[13] estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2 °C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4 °C.
"According to Manabe, Charney chose 0.5 °C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5 °C-to-4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since..."[18]

Chapter 4 of the "Charney report" compares the predictions of the models: "We conclude that the predictions ... are basically consistent and mutually supporting. The differences in model results are relatively small and may be accounted for by differences in model characteristics and simplifying assumptions."[13]

In 2008 climatologist Stefan Rahmstorf wrote, regarding the Charney report's original range of uncertainty: "At that time, this range was on very shaky ground. Since then, many vastly improved models have been developed by a number of climate research centers around the world. Current state-of-the-art climate models span a range of 2.6–4.1 °C, most clustering around 3 °C."[12]

Intergovernmental Panel on Climate Change

The 1990 IPCC First Assessment Report estimated that equilibrium climate sensitivity to CO2 doubling lay between 1.5 and 4.5 °C, with a "best guess in the light of current knowledge" of 2.5 °C.[19] This used models with strongly simplified representations of the ocean dynamics. The 1992 IPCC supplementary report, which used full ocean GCMs, nonetheless saw "no compelling reason to warrant changing" from this estimate,[20] and the IPCC Second Assessment Report found that "No strong reasons have emerged to change" these estimates,[21] with much of the uncertainty attributed to cloud processes. As noted above, the IPCC TAR retained the likely range 1.5 to 4.5 °C.[6]

Authors of the IPCC Fourth Assessment Report (Meehl et al., 2007)[22] stated that confidence in estimates of equilibrium climate sensitivity had increased substantially since the TAR. AR4's assessment was based on a combination of several independent lines of evidence, including observed climate change and the strength of known "feedbacks" simulated in general circulation models.[23] IPCC authors concluded that the global mean equilibrium warming for doubling CO2 (to a concentration of 560 ppmv), or equilibrium climate sensitivity, very likely is greater than 1.5 °C (2.7 °F) and likely to lie in the range 2 to 4.5 °C (4 to 8.1 °F), with a most likely value of about 3 °C (5 °F). For fundamental physical reasons, as well as data limitations, the IPCC states a climate sensitivity higher than 4.5 °C (8.1 °F) cannot be ruled out, but that agreement for these values with observations and "proxy" climate data is generally worse compared to values in the 2 to 4.5 °C (4 to 8.1 °F) range.[23]

The TAR uses the word "likely" in a qualitative sense to describe the likelihood of the 1.5 to 4.5 °C range being correct.[22] AR4, however, quantifies the probable range of climate sensitivity estimates:[24]
  • 2 to 4.5 °C is "likely" = greater than 66% chance of being correct
  • less than 1.5 °C is "very unlikely" = less than 10% chance
The IPCC Fifth Assessment Report stated: "Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence)."

These are Bayesian probabilities, which are based on an expert assessment of the available evidence.[24]

Calculations of CO2 sensitivity from observational data

Sample calculation using industrial-age data

Rahmstorf (2008)[12] provides an informal example of how climate sensitivity might be estimated empirically, from which the following is modified. Denote the sensitivity, i.e. the equilibrium increase in global mean temperature including the effects of feedbacks due to a sustained forcing by doubled CO2 (taken as 3.7 W/m2), as x °C. If Earth were to experience an equilibrium temperature change of ΔT (°C) due to a sustained forcing of ΔF (W/m2), then one might say that x/(ΔT) = (3.7 W/m2)/(ΔF), i.e. that x = ΔT * (3.7 W/m2)/ΔF. The global temperature increase since the beginning of the industrial period (taken as 1750) is about 0.8 °C, and the radiative forcing due to CO2 and other long-lived greenhouse gases (mainly methane, nitrous oxide, and chlorofluorocarbons) emitted since that time is about 2.6 W/m2. Neglecting other forcings and considering the temperature increase to be an equilibrium increase would lead to a sensitivity of about 1.1 °C. However, ΔF also contains contributions due to solar activity (+0.3 W/m2), aerosols (-1 W/m2), ozone (0.3 W/m2) and other lesser influences, bringing the total forcing over the industrial period to 1.6 W/m2 according to the best estimate of the IPCC AR4, albeit with substantial uncertainty. Additionally, the fact that the climate system is not at equilibrium must be accounted for; this is done by subtracting the planetary heat uptake rate H from the forcing; i.e., x = ΔT * (3.7 W/m2)/(ΔF - H). Taking the planetary heat uptake rate as the rate of ocean heat uptake, estimated by the IPCC AR4 as 0.2 W/m2, yields a value for x of 2.1 °C. (All numbers are approximate and quite uncertain.)
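
The two estimates above can be reproduced step by step; the following sketch simply restates Rahmstorf's informal calculation using the numbers quoted in the paragraph.

    f_2xco2 = 3.7        # W/m2, forcing for doubled CO2
    delta_t = 0.8        # C, warming since 1750
    f_ghg = 2.6          # W/m2, forcing from long-lived greenhouse gases alone
    f_total = 1.6        # W/m2, best-estimate total forcing (solar, aerosols, ozone included)
    heat_uptake = 0.2    # W/m2, planetary (ocean) heat uptake rate

    # Naive estimate: greenhouse-gas forcing only, equilibrium assumed.
    print(round(delta_t * f_2xco2 / f_ghg, 1))                    # about 1.1 C

    # Total forcing, minus the heat still flowing into the ocean.
    print(round(delta_t * f_2xco2 / (f_total - heat_uptake), 1))  # about 2.1 C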

Sample calculation using ice-age data

In 2008, Farley wrote: "... examine the change in temperature and solar forcing between glaciation (ice age) and interglacial (no ice age) periods. The change in temperature, revealed in ice core samples, is 5 °C, while the change in solar forcing is 7.1 W/m2. The computed climate sensitivity is therefore 5/7.1 = 0.7 K (W/m2)^−1. We can use this empirically derived climate sensitivity to predict the temperature rise from a forcing of 4 W/m2, arising from a doubling of the atmospheric CO2 from pre-industrial levels. The result is a predicted temperature increase of 3 °C."[25]
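
Farley's arithmetic can be checked in the same way (a sketch using only the numbers quoted above).

    delta_t_glacial = 5.0    # C, glacial-interglacial temperature change
    delta_f_glacial = 7.1    # W/m2, corresponding change in forcing
    f_2xco2 = 4.0            # W/m2, doubled-CO2 forcing used by Farley

    sensitivity = delta_t_glacial / delta_f_glacial   # K per (W/m2)
    print(round(sensitivity, 2))                      # 0.7
    print(round(sensitivity * f_2xco2, 1))            # 2.8, quoted as roughly 3 C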

Based on analysis of uncertainties in total forcing, in Antarctic cooling, and in the ratio of global to Antarctic cooling of the last glacial maximum relative to the present, Ganopolski and Schneider von Deimling (2008) infer a range of 1.3 to 6.8 °C for climate sensitivity determined by this approach.[26]

A lower figure was calculated in a 2011 Science paper by Schmittner et al., who combined temperature reconstructions of the Last Glacial Maximum with climate model simulations to suggest a rate of global warming from doubling of atmospheric carbon dioxide of a median of 2.3 °C and uncertainty 1.7–2.6 °C (66% probability range), less than the earlier estimates of 2 to 4.5 °C as the 66% probability range. Schmittner et al. said their "results imply less probability of extreme climatic change than previously thought." Their work suggests that climate sensitivities >6 °C "cannot be reconciled with paleoclimatic and geologic evidence, and hence should be assigned near-zero probability."[27][28]

Other experimental estimates

Idso (1998)[29] calculated, based on eight natural experiments, a λ of 0.1 °C/(W/m2), resulting in a climate sensitivity of only 0.4 °C for a doubling of the concentration of CO2 in the atmosphere.

Andronova and Schlesinger (2001) found that the climate sensitivity could lie between 1 and 10 °C, with a 54 percent likelihood that it lies outside the IPCC range.[30] The exact range depends on which factors are most important during the instrumental period: "At present, the most likely scenario is one that includes anthropogenic sulfate aerosol forcing but not solar variation. Although the value of the climate sensitivity in that case is most uncertain, there is a 70 percent chance that it exceeds the maximum IPCC value. This is not good news," said Schlesinger.

Forest, et al. (2002)[31] using patterns of change and the MIT EMIC estimated a 95% confidence interval of 1.4–7.7 °C for the climate sensitivity, and a 30% probability that sensitivity was outside the 1.5 to 4.5 °C range.

Gregory, et al. (2002)[32] estimated a lower bound of 1.6 °C by estimating the change in Earth's radiation budget and comparing it to the global warming observed over the 20th century.

Shaviv (2005)[33] carried out a similar analysis for 6 different time scales, ranging from the 11-yr solar cycle to the climate variations over geological time scales. He found a typical sensitivity of 0.54±0.12 K/(W/m2) or 2.1 °C (ranging between 1.6 °C and 2.5 °C at 99% confidence) if there is no cosmic-ray climate connection, or a typical sensitivity of 0.35±0.09 K/(W/m2) or 1.3 °C (between 1.0 °C and 1.7 °C at 99% confidence) if the cosmic-ray climate link is real. (Note that Shaviv quotes a radiative forcing equivalent of 3.8 W/m2 for CO2 doubling, i.e. ΔTx2 = λ × 3.8 W/m2.)

Frame, et al. (2005)[34] noted that the range of the confidence limits is dependent on the nature of the prior assumptions made.

Annan and Hargreaves (2006)[35] presented an estimate that resulted from combining prior estimates based on analyses of paleoclimate, responses to volcanic eruptions, and the temperature change in response to forcings over the twentieth century. They also introduced a triad notation (L, C, H) to convey the probability distribution function (pdf) of the sensitivity, where the central value C indicates the maximum likelihood estimate in degrees Celsius and the outer values L and H represent the limits of the 95% confidence interval for a pdf, or 95% of the area under the curve for a likelihood function. In this notation their estimate of sensitivity was (1.7, 2.9, 4.9) °C.

Forster and Gregory (2006)[36] presented a new independent estimate based on the slope of a plot of calculated greenhouse gas forcing minus top-of-atmosphere energy imbalance, as measured by satellite borne radiometers, versus global mean surface temperature. In the triad notation of Annan and Hargreaves their estimate of sensitivity was (1.0, 1.6, 4.1) °C.
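
The general idea behind this kind of regression estimate can be sketched with synthetic data; the numbers below are made up for illustration and are not the actual satellite analysis. Regressing the forcing minus the top-of-atmosphere imbalance against surface temperature yields the climate feedback parameter, and dividing the doubled-CO2 forcing by that slope gives a sensitivity estimate.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "observations": forcing minus top-of-atmosphere imbalance equals
    # alpha * T plus noise. alpha = 2.3 W/m2 per K is a made-up feedback parameter
    # chosen so that the recovered sensitivity lands near the 1.6 C central value
    # quoted above.
    alpha_true = 2.3
    temperature = rng.uniform(0.0, 0.8, size=200)            # K, global-mean anomaly
    forcing_minus_imbalance = alpha_true * temperature + rng.normal(0.0, 0.3, size=200)

    # The slope of the regression is the estimated climate feedback parameter.
    alpha_est = np.polyfit(temperature, forcing_minus_imbalance, 1)[0]

    f_2xco2 = 3.7                                             # W/m2, doubled-CO2 forcing
    print(round(f_2xco2 / alpha_est, 2))                      # close to 3.7 / 2.3 = 1.6 C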

Royer, et al. (2007)[37] determined climate sensitivity within a major part of the Phanerozoic. The range of values—1.5 °C minimum, 2.8 °C best estimate, and 6.2 °C maximum—is, given various uncertainties, consistent with sensitivities of current climate models and with other determinations.[38]

Lindzen and Choi (2011) find the equilibrium climate sensitivity to be 0.7 °C, implying a negative feedback from clouds.[39]

Ring et al. (2012) find the equilibrium climate sensitivity to be in the range 1.45 °C to 2.01 °C, depending on the data set used as an input in model simulations.[40]

Skeie et al. (2013) use a Bayesian analysis of ocean heat content (OHC) data and conclude that the equilibrium climate sensitivity is 1.8 °C, far lower than the previous best estimate relied upon by the IPCC.[41]

Aldrin et al. (2012) use a simple deterministic climate model, modelling yearly hemispheric surface temperature and global ocean heat content as a function of historical radiative forcing, and combine it with an empirical, stochastic model. Using a Bayesian framework, they estimate the equilibrium climate sensitivity to be 1.98 °C.[42]

Lewis (2013), using a Bayesian framework, estimates that the equilibrium climate sensitivity is 1.6 K, with a likely range (90% confidence level) of 1.2-2.2 K.[43]

ScienceDaily reported on a study by Fasullo and Trenberth (2012),[44] who tested model estimates of climate sensitivity based on their ability to reproduce observed relative humidity in the tropics and subtropics. The best performing models tended to project relatively high climate sensitivities, of around 4 °C.[44]

Previdi et al. (2013) reviewed the 2×CO2 Earth system sensitivity and concluded that it is higher, about 4–6 °C, if the ice sheet and vegetation albedo feedbacks are included in addition to the fast feedbacks, and higher still if climate–GHG feedbacks are also included.[45]

Lewis and Curry (2014) estimated that equilibrium climate sensitivity was 1.64 °C, based on the 1750-2011 time series and "the uncertainty ranges for forcing components" in the IPCC's Fifth Assessment Report.[46]

Literature reviews

A literature review by Knutti and Hegerl (2008)[47] concluded that "various observations favour a climate sensitivity value of about 3 °C, with a likely range of about 2-4.5 °C. However, the physics of the response and uncertainties in forcing lead to difficulties in ruling out higher values."

Radiative forcing functions

A number of different inputs can give rise to radiative forcing. In addition to the downwelling radiation due to the greenhouse effect, the IPCC First Scientific Assessment Report listed solar radiation variability due to orbital changes, variability due to changes in solar irradiance, direct aerosol effects (e.g., changes in albedo due to cloud cover), indirect aerosol effects, and surface characteristics.[48]

Sensitivity to solar forcing

Solar irradiance is about 0.9 W/m2 brighter during solar maximum than during solar minimum.
Analysis by Camp and Tung shows that this correlates with a variation of ±0.1°C in measured average global temperature between the peak and minimum of the 11-year solar cycle.[49] From this data (incorporating the Earth's albedo and the fact that the solar absorption cross-section is 1/4 of the surface area of the Earth), Tung, Zhou and Camp (2008) derive a transient sensitivity value of 0.69 to 0.97 °C/(W/m2).[50] This would correspond to a transient climate sensitivity to carbon dioxide doubling of 2.5 to 3.6 K, similar to the range of the current scientific consensus. However, they note that this is the transient response to a forcing with an 11 year cycle; due to lag effects, they estimate the equilibrium response to forcing would be about 1.5 times higher.
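
The conversion from these solar-cycle numbers to per-doubling figures is straightforward arithmetic (a sketch using only the values quoted in this paragraph and a doubling forcing of 3.7 W/m2, assumed here).

    transient_sensitivity = (0.69, 0.97)   # C per (W/m2), Tung, Zhou and Camp (2008)
    f_2xco2 = 3.7                          # W/m2, forcing from doubled CO2 (assumed here)
    equilibrium_factor = 1.5               # their estimated equilibrium/transient ratio

    for s in transient_sensitivity:
        transient_per_doubling = s * f_2xco2
        # about 2.6 and 3.6 C: the transient range quoted above as 2.5 to 3.6 K
        print(round(transient_per_doubling, 1),
              round(transient_per_doubling * equilibrium_factor, 1))  # rough equilibrium values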

Magnet school

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Magnet_sc...