
Wednesday, January 28, 2015

Antimatter


From Wikipedia, the free encyclopedia

In particle physics, antimatter is material composed of antiparticles, which have the same mass as particles of ordinary matter but have opposite charge and other particle properties such as lepton and baryon number, quantum spin, etc. Encounters between particles and antiparticles lead to the annihilation of both, giving rise to varying proportions of high-energy photons (gamma rays), neutrinos, and lower-mass particle–antiparticle pairs. Setting aside the mass of any product neutrinos, which represent released energy that generally continues to be unavailable, the end result of annihilation is a release of energy available to do work, proportional to the total matter and antimatter mass, in accord with the mass–energy equivalence equation, E = mc².[1]
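
As a rough check on the mass–energy bookkeeping above, the following sketch (plain Python; the one-kilogram figure is an assumption chosen purely for illustration) evaluates E = mc² for a given annihilated mass, ignoring the neutrino losses mentioned in the text.

    # Sketch: energy released by annihilating a given combined matter + antimatter mass,
    # per E = m c^2 (neutrino losses ignored). The 1 kg value is an illustrative assumption.
    c = 2.998e8            # speed of light, m/s
    m_total = 1.0          # combined matter + antimatter mass, kg (assumed)
    energy_joules = m_total * c**2
    print(f"{energy_joules:.2e} J")   # ~9.0e16 J, roughly 21 megatons of TNT equivalent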

Antiparticles bind with each other to form antimatter just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton can form an antihydrogen atom. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. To date, however, anti-atoms more complex than antihelium have neither been artificially produced nor observed in nature. Studies of cosmic rays have identified both positrons and antiprotons, presumably produced by high-energy collisions between particles of ordinary matter.

There is considerable speculation as to why the observable universe is apparently composed almost entirely of ordinary matter, as opposed to a more symmetric combination of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the greatest unsolved problems in physics.[2] The process by which this asymmetry between particles and antiparticles developed is called baryogenesis.

Antimatter in the form of anti-atoms is one of the most difficult materials to produce. Antimatter in the form of individual anti-particles, however, is commonly produced by particle accelerators and in some types of radioactive decay.
(Figure: There are some 500 terrestrial gamma-ray flashes daily; the red dots show those the Fermi Gamma-ray Space Telescope spotted through 2010. An accompanying video shows how scientists used Fermi's gamma-ray detector to uncover bursts of antimatter from thunderstorms.)

History of the concept

The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts" [3] and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into.[4]

The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898,[5] in which he coined the term. He hypothesized antiatoms, as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity.[6]

The modern theory of antimatter began in 1928, with a paper[7] by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. These were discovered by Carl D. Anderson in 1932 and named positrons (a contraction of "positive electrons"). Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc.[8] A complete periodic table of antimatter was envisaged by Charles Janet in 1929.[9]

Notation

One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̄, respectively. The same rule applies if one were to address a particle by its constituent components. A proton is made up of uud quarks, so an antiproton must therefore be formed from u̅u̅d̅ antiquarks. Another convention is to distinguish particles by their electric charge. Thus, the electron and positron are denoted simply as e⁻ and e⁺, respectively. However, to prevent confusion, the two conventions are never mixed.
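
For readers who want the notation typeset properly, here is a minimal LaTeX (math-mode) sketch of the two conventions just described; it adds nothing beyond the examples in the paragraph above.

    % Convention 1: a bar over the particle symbol
    p \;(\text{proton}) \qquad \bar{p} \;(\text{antiproton}) \qquad uud \;\to\; \bar{u}\bar{u}\bar{d}
    % Convention 2: the electric-charge superscript
    e^{-} \;(\text{electron}) \qquad e^{+} \;(\text{positron})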

Origin and asymmetry

Almost all matter observable from the Earth seems to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable.[10]

Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays impacting Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (i.e., the rest mass of an electron multiplied by c²).
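
The 511 keV figure follows directly from the electron rest mass; a minimal check (plain Python, with rounded textbook values for the constants) is sketched below.

    # Sketch: photon energy from electron-positron annihilation at rest.
    # Each of the two photons carries one electron rest energy, m_e * c^2.
    m_e = 9.109e-31        # electron mass, kg
    c = 2.998e8            # speed of light, m/s
    eV = 1.602e-19         # joules per electronvolt
    rest_energy_keV = m_e * c**2 / eV / 1e3
    print(f"{rest_energy_keV:.0f} keV")   # ~511 keV per photon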

Recent observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant cloud of antimatter surrounding the galactic center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the galactic center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains tremendous energy while falling into a stellar remnant.[11][12]

Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish.[13] NASA is trying to determine if such galaxies exist by looking for X-ray and gamma-ray signatures of annihilation events in colliding superclusters.[14]

Natural production

Positrons are produced naturally in β⁺ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β⁻ decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research presented to the American Astronomical Society reported antimatter (positrons) originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds.[15] Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module.[16][17]

Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter,[18] also called baryon asymmetry, is attributed to CP-violation: a violation of the CP-symmetry relating matter to antimatter. The exact mechanism of this violation during baryogenesis remains a mystery.

Positrons can be produced by radioactive β⁺ decay, a mechanism that occurs both naturally and artificially.

Observation in cosmic rays

Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe. Rather, they appear to consist of only these two elementary particles, newly made in energetic processes.[citation needed]

Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters.[19][20] A new measurement of the positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV.[21] One suggested interpretation of these results is that the positrons arise from annihilation events of massive dark matter particles.[22]

Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.[23]

There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10⁻⁶ for the antihelium to helium flux ratio.[24]

Artificial production

Positrons

Positrons were reported[25] in November 2008 to have been generated by Lawrence Livermore National Laboratory in larger numbers than by any previous synthetic process. A laser drove electrons through a millimeter-radius gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; however, new simulations showed that short, ultra-intense lasers and millimeter-thick gold are a far more effective source.[26]

Antiprotons, antineutrons, and antinuclei

The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics.[27] An antiproton consists of two up antiquarks and one down antiquark (u̅u̅d̅). The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.[28]

In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN.[29] At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.[30]

Antihydrogen atoms

In 1995, CERN announced that it had successfully brought into existence nine antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri.[citation needed] Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic ("hot") and were not well suited to study. To resolve this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP. In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The primary goal of these collaborations is the creation of less energetic ("cold") antihydrogen, better suited to study.[citation needed]

In 1999, CERN activated the Antiproton Decelerator, a device capable of decelerating antiprotons from 3.5 GeV to 5.3 MeV, still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen.[31] The ATRAP project released similar results very shortly thereafter.[32] The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap.[33] The overall cooling process is workable, but highly inefficient; approximately 25 million antiprotons leave the Antiproton Decelerator and roughly 25,000 make it to the Penning–Malmberg trap, about 1 in 1,000, or 0.1%, of the original amount.
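
The quoted efficiency is just the ratio of the two counts; a one-line check (plain Python) is sketched below.

    # Sketch: fraction of antiprotons surviving deceleration, the foil, and capture.
    delivered = 25e6        # antiprotons leaving the Antiproton Decelerator
    trapped = 25e3          # antiprotons reaching the Penning-Malmberg trap
    print(f"{trapped / delivered:.1%}")   # 0.1%, i.e. about 1 in 1,000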

The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons with energies less than 100 meV.[34] While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator.[35] This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some combine with positrons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.

Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields.[36] Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second.[37][38] This was the first time that neutral antimatter had been trapped.

On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before.[39][40] ALPHA has used these trapped atoms to initiate research into the spectral properties of the antihydrogen.[41]

The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, their facilities are capable of producing ten million antiprotons per minute.[42] Assuming a 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately 6.02×10²³ atoms of antihydrogen).
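
That 100-billion-year figure is reproducible with a few lines of arithmetic; the sketch below (plain Python) assumes the quoted rate of ten million antiprotons per minute and perfect conversion to antihydrogen.

    # Sketch: time to accumulate one mole of antihydrogen at the quoted rate,
    # assuming every antiproton ends up in an antihydrogen atom.
    avogadro = 6.022e23          # atoms per mole
    rate_per_minute = 1e7        # antiprotons per minute (quoted CERN figure)
    minutes = avogadro / rate_per_minute
    years = minutes / (60 * 24 * 365.25)
    print(f"{years:.1e} years")  # ~1.1e11 years, on the order of 100 billion years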

Antihelium

Antihelium-3 nuclei (³H̅e̅) were first observed in the 1970s in proton–nucleus collision experiments[43] and later created in nucleus–nucleus collision experiments.[44] Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles, ⁴H̅e̅) from such collisions.[45]

Preservation

Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.[citation needed]

In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes.[46]

Cost

Scientists claim that antimatter is the costliest material to make.[47] In 2006, Gerald Smith estimated that $250 million could produce 10 milligrams of positrons[48] (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen.[47] This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators) and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions).[49] By way of comparison, the cost of the Manhattan Project to produce the first atomic weapon was estimated at $23 billion at 2007 prices.[50]
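
The per-gram figure quoted for positrons is a straightforward unit conversion from that 2006 estimate; a minimal sketch (plain Python) follows.

    # Sketch: converting "$250 million for 10 milligrams of positrons" into a cost per gram.
    cost_usd = 250e6          # estimated cost, US dollars
    mass_grams = 10e-3        # 10 milligrams expressed in grams
    print(f"${cost_usd / mass_grams:,.0f} per gram")   # $25,000,000,000, i.e. $25 billion per gram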

Several studies funded by the NASA Institute for Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants, like Jupiter, hopefully at a lower cost per gram.[51]

Uses

Medical

Matter-antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use.

Antiprotons have also been shown in laboratory experiments to have the potential to treat certain cancers, in a method similar to that currently used for ion (proton) therapy.[52]

Fuel

Isolated and stored anti-matter could be used as a fuel for interplanetary or interstellar travel[53] as part of an antimatter catalyzed nuclear pulse propulsion or other antimatter rocketry, such as the redshift rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft.

If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (9×10¹⁶ J/kg) is about 10 orders of magnitude greater than chemical energies,[54] about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about 200 MeV per fission reaction,[55] or 8×10¹³ J/kg), and about 2 orders of magnitude greater than the best possible results expected from fusion (about 6.3×10¹⁴ J/kg for the proton–proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10¹⁷ J (180 petajoules) of energy (by the mass–energy equivalence formula, E = mc²), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
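
These figures can be checked with a short calculation; the sketch below (plain Python) assumes, as the paragraph does, that all of the released energy could be captured, which the following paragraphs explain is not achievable in practice.

    # Sketch: energy from annihilating 1 kg of antimatter with 1 kg of matter,
    # and its rough TNT equivalent.
    c = 2.998e8                     # speed of light, m/s
    total_mass = 2.0                # 1 kg matter + 1 kg antimatter
    energy_j = total_mass * c**2    # ~1.8e17 J
    megaton_tnt = 4.184e15          # joules per megaton of TNT
    print(f"{energy_j:.2e} J ~= {energy_j / megaton_tnt:.0f} Mt TNT")   # ~43 Mt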

Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron-positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a half-life of 84 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a half-life of 26 nanoseconds) and can be deflected magnetically to produce thrust.

Note that charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons, meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about 0.22 + (2/3)*0.78 = 0.74).[56]
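
The roughly 74% figure is just the weighted sum of the two neutrino channels quoted above; a minimal sketch of the arithmetic (plain Python) follows.

    # Sketch: fraction of charged-pion energy ultimately carried away by neutrinos,
    # using the approximate fractions given in the text.
    direct_to_neutrinos = 0.22           # pion -> muon + neutrino step
    to_muons = 0.78                      # energy passed to the unstable muons
    muon_share_to_neutrinos = 2.0 / 3.0  # of the muon energy, lost to neutrinos in muon decay
    lost = direct_to_neutrinos + to_muons * muon_share_to_neutrinos
    print(f"{lost:.2f}")                 # ~0.74, i.e. roughly three quarters unusable for thrust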

Weapons

Antimatter has been considered as a trigger mechanism for nuclear weapons.[57] A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible.[58] However, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.[59]

Theory of everything


From Wikipedia, the free encyclopedia

A theory of everything (ToE), final theory, ultimate theory, or master theory is a hypothetical single, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe.[1]:6 Finding a ToE is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, together, most closely resemble a ToE. The two theories upon which all modern physics rests are general relativity (GR) and quantum field theory (QFT). GR is a theoretical framework that focuses only on the force of gravity for understanding the universe in regions of large scale and high mass: stars, galaxies, clusters of galaxies, etc. QFT, on the other hand, is a theoretical framework that focuses only on the three non-gravitational forces for understanding the universe in regions of small scale and low mass: sub-atomic particles, atoms, molecules, etc. QFT underpins the Standard Model, which describes the three non-gravitational forces (the weak, strong, and electromagnetic forces) and whose further unification is the aim of so-called Grand Unified Theories.[2]:122

Through years of research, physicists have experimentally confirmed with tremendous accuracy virtually every prediction made by these two theories within their appropriate domains of applicability. In accordance with their findings, scientists also learned that GR and QFT, as they are currently formulated, are mutually incompatible: they cannot both be right. Since the usual domains of applicability of GR and QFT are so different, most situations require that only one of the two theories be used.[3][4]:842–844 As it turns out, this incompatibility only becomes a practical issue in regions of extremely small scale and high mass, such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve this conflict, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of GR and QFT into a seamless whole: a single theory that, in principle, is capable of describing all phenomena. In pursuit of this goal, quantum gravity has become an area of active research.

Over the past few decades, a single explanatory framework, called "string theory", has emerged that may turn out to be the ultimate theory of the universe. Many physicists believe that, at the beginning of the universe (up to 10⁻⁴³ seconds after the Big Bang), the four fundamental forces were once a single fundamental force. Unlike most (if not all) other theories, string theory may be on its way to successfully incorporating each of the four fundamental forces into a unified whole. According to string theory, every particle in the universe, at its most microscopic level (Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up-quark is a type of string vibrating another way, and so forth).

Initially, the term theory of everything was used with an ironic connotation to refer to various overgeneralized theories. For example, a grandfather of Ijon Tichy — a character from a cycle of Stanisław Lem's science fiction stories of the 1960s — was known to work on the "General Theory of Everything". Physicist John Ellis[5] claims to have introduced the term into the technical literature in an article in Nature in 1986.[6] Over time, the term stuck in popularizations of theoretical physics research.

Historical antecedents

From ancient Greece to Einstein

Archimedes was possibly the first scientist known to have described nature with axioms (or principles) and then to have deduced new results from them.[7] He thus tried to describe "everything" starting from a few axioms. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them.[8]:340

The concept of 'atom', introduced by Democritus, unified all phenomena observed in nature as the motion of atoms. In ancient Greek times philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the collisions of atoms. Following atomism, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles.[9]:184[10]

In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Principia dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation.[11]

In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time:[12]:ch 7
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Essai philosophique sur les probabilités, Introduction. 1814
Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything.

Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics.

In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter.

In his experiments of 1849–50, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism.[13] However, he found no connection.

In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything.[14]

In the late 1920s, the new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known".[15]

After 1915, when Albert Einstein published his theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with renewed interest. In Einstein's day, the strong and the weak forces had not yet been discovered, yet he found the existence of two seemingly distinct forces, gravity and electromagnetism, far more alluring. This launched his thirty-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand underlying principle. During these last few decades of his life, this quixotic quest isolated Einstein from the mainstream of physics.

Understandably, the mainstream was instead far more excited about the newly emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors to unified field theories were Gunnar Nordström, Hermann Weyl, Arthur Eddington, Theodor Kaluza, Oskar Klein, and most notably, Albert Einstein and his collaborators. Einstein intensely searched for, but ultimately failed to find, a unifying theory.[16]:ch 17 (But see: Einstein–Maxwell–Dirac equations.) More than half a century later, Einstein's dream of discovering a unified theory has become the Holy Grail of modern physics.

Twentieth century and the nuclear interactions

In the twentieth century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces (or interactions), which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a ToE, quantum mechanics had to be incorporated from the start, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.

Gravity and electromagnetism could always peacefully coexist as entries in a list of classical forces, but for many years it seemed that gravity could not even be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the twentieth century, focused on understanding the three "quantum" forces: electromagnetism and the weak and strong forces. The first two were combined in 1967–68 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the "electroweak" force.[17] Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses of 80.4 GeV/c² and 91.2 GeV/c², whereas the photon, which carries the electromagnetic force, is massless. At higher energies Ws and Zs can be created easily and the unified nature of the force becomes apparent.

While the strong and electroweak forces peacefully coexist in the Standard Model of particle physics, they remain distinct. So far, the quest for a theory of everything is thus unsuccessful on two points: neither a unification of the strong and electroweak forces (which Laplace would have called "contact forces") has been achieved, nor has a unification of these forces with gravitation been achieved.

Modern physics

Conventional sequence of theories

A Theory of Everything would unify all the fundamental interactions of nature: gravitation, strong interaction, weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the ToE should also yield a deep understanding of the various different kinds of possible particles. The usual assumed path of theories is given in the following graph, where each unification step leads one level up:

Theory of everything
    Quantum gravity
        Gravitation
    Electronuclear force (GUT)
        Strong interaction  [SU(3)]
        Electroweak interaction  [SU(2) × U(1)_Y]
            Weak interaction
            Electromagnetism  [U(1)_EM]
                Electricity
                Magnetism

(In this scheme, gravitation underlies the standard model of cosmology, while the strong and electroweak interactions together constitute the standard model of particle physics.)

In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10¹⁶ GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10¹⁹ GeV.
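
The Planck energy quoted above can be recovered from the fundamental constants as E_P = sqrt(ħc⁵/G); a minimal order-of-magnitude sketch (plain Python, with rounded constant values) follows.

    # Sketch: the Planck energy, sqrt(hbar * c^5 / G), expressed in GeV.
    hbar = 1.055e-34      # reduced Planck constant, J s
    c = 2.998e8           # speed of light, m/s
    G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
    eV = 1.602e-19        # joules per electronvolt
    planck_energy_gev = (hbar * c**5 / G) ** 0.5 / eV / 1e9
    print(f"{planck_energy_gev:.2e} GeV")   # ~1.2e19 GeV, i.e. roughly 10^19 GeV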

Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10¹⁶ GeV, far greater than could be reached by any possible Earth-based particle accelerator. Although the simplest GUTs have been experimentally ruled out, the general idea, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric GUTs seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to GUT physics (although it does not seem to form an inevitable part of the theory). Yet GUTs are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies.[3]

The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity. Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity – and thus no accepted theory of everything – has emerged yet. It is usually assumed that the ToE will also solve the remaining problems of GUTs.

In addition to explaining the forces listed in the graph, a ToE may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven yet.

String theory and M-theory

Since the 1990s, many physicists believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric 11-dimensional supergravity, is the theory of everything. However, there is no widespread consensus on this issue.

A surprising property of string/M-theory is that extra dimensions are required for the theory's consistency. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a five-dimensional universe (with one dimension small and curled up) looks from the four-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry; supersymmetry and extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a four-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms.[18]

Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory[19] or (sometimes equivalently) in F-theory.[20][21] String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations.[22] On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes[23][24] and allowing for topology-changing processes.[25][26][27] It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality.

In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible four-dimensional universes is incredibly large. The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10⁵⁰⁰), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape.[8]:347

One proposed solution is that many or all of these possibilities are realised in one or another of a huge number of universes, but that only a small number of them are habitable, and hence the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory,[28] arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience. Others disagree,[29] and string theory remains an extremely active topic of investigation in theoretical physics.

Loop quantum gravity

Current research on loop quantum gravity may eventually play a fundamental role in a ToE, but that is not its primary aim.[30] Loop quantum gravity also introduces a lower bound on the possible length scales.

There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks), with correct parity properties, has been modelled by Sundance Bilson-Thompson, using preons constituted of braids of spacetime as the building blocks.[31] However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Utilization of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations.[32]

This model leads to an interpretation of electric and colour charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge).

Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, colour, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin.[33]

Other attempts

Any ToE must include general relativity and the Standard Model of particle physics.

A recent and very prolific attempt is called causal set theory. Like some of the approaches mentioned above, its direct goal is not necessarily to achieve a ToE but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a ToE. Its founding principle is that spacetime is fundamentally discrete and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events: it distinguishes which events lie in the relative past or future of which others.

Outside the previously mentioned attempts there is Garrett Lisi's E8 proposal. This theory attempts to identify general relativity and the standard model within the Lie group E8. The theory does not provide a novel quantization procedure, and the author suggests its quantization might follow the loop quantum gravity approach mentioned above.[34]

Christoph Schiller's Strand Model attempts to account for the gauge symmetry of the Standard Model of particle physics, U(1)×SU(2)×SU(3), with the three Reidemeister moves of knot theory, by equating each elementary particle to a different tangle of one, two, or three strands (a long prime knot or unknotted curve, a rational tangle, or a braided tangle, respectively).

Present status

At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity. For example, no candidate theory is able to calculate the fine-structure constant or the mass of the electron. Most particle physicists expect that the outcomes of the ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – will be needed in order to provide further input for a ToE.

Theory of everything and philosophy

The philosophical implications of a physical ToE are frequently debated. For example, if philosophical physicalism is true, a physical ToE will coincide with a philosophical theory of everything.

The "system building" style of metaphysics attempts to answer all the important questions in a coherent way, providing a complete picture of the world. Plato and Aristotle could be said to have created early examples of comprehensive systems. In the early modern period (17th and 18th centuries), the system-building scope of philosophy is often linked to the rationalist method of philosophy, which is the technique of deducing the nature of the world by pure a priori reason. Examples from the early modern period include the Leibniz's Monadology, Descarte's Dualism, and Spinoza's Monism. Hegel's Absolute idealism and Whitehead's Process philosophy were later systems.

Arguments against a theory of everything

In parallel to the intense search for a ToE, various scholars have seriously debated the possibility of its discovery.

Gödel's incompleteness theorem

A number of scholars claim that Gödel's incompleteness theorem suggests that any attempt to construct a ToE is bound to fail. Gödel's theorem, informally stated, asserts that any formal theory expressive enough for elementary arithmetical facts to be expressed and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory.

Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because any "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything.[35] In a later reflection, Jaki states that it is wrong to say that a final theory is impossible, but rather that "when it is on hand one cannot know rigorously that it is a final theory."[36]

Freeman Dyson has argued that, because of Gödel's theorem, physics is inexhaustible in the same way that pure mathematics is: the laws of physics are a finite set of rules that include the rules of arithmetic, so there will always be questions about physical systems that cannot be settled within those laws.

Stephen Hawking was originally a believer in the Theory of Everything but, after considering Gödel's Theorem, concluded that one was not obtainable.


Jürgen Schmidhuber (1997) has argued against this view; he points out that Gödel's theorems are irrelevant for computable physics.[37] In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not at all prevent formal ToEs describable by very few bits of information.[38]

Related critique was offered by Solomon Feferman,[39] among others. Douglas S. Robertson offers Conway's game of life as an example:[40] The underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws.

Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a ToE cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question.[41] This definitional discrepancy may explain some of the disagreement among researchers.

Fundamental limits in accuracy

No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasion.[42] Following this view, we may reasonably hope for a theory of everything which self-consistently incorporates all currently known forces, but we should not expect it to be the final answer.

On the other hand, it is often claimed that, despite the apparently ever-increasing complexity of the mathematics of each new theory, in a deep sense associated with their underlying gauge symmetry and the number of fundamental physical constants, the theories are becoming simpler. If this is the case, the process of simplification cannot continue indefinitely.

Lack of fundamental laws

There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe.[43] One view is the hard reductionist position that the ToE is the fundamental law and that all other theories that apply within the universe are a consequence of the ToE. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems, are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a ToE.

The debates do not make the point at issue clear. Possibly the only issue at stake is the right to apply the high-status term "fundamental" to the respective subjects of research. A well-known debate of this kind took place between Steven Weinberg and Philip Anderson.[citation needed]

Impossibility of being "of everything"

Although the name "theory of everything" suggests the determinism of Laplace's quotation, this gives a very misleading impression. Determinism is frustrated by the probabilistic nature of quantum mechanical predictions, by the extreme sensitivity to initial conditions that leads to mathematical chaos, by the limitations due to event horizons, and by the extreme mathematical difficulty of applying the theory. Thus, although the current standard model of particle physics "in principle" predicts almost all known non-gravitational phenomena, in practice only a few quantitative results have been derived from the full theory (e.g., the masses of some of the simplest hadrons), and these results (especially the particle masses which are most relevant for low-energy physics) are less accurate than existing experimental measurements. The ToE would almost certainly be even harder to apply for the prediction of experimental results, and thus might be of limited use.

A motive for seeking a ToE,[citation needed] apart from the pure intellectual satisfaction of completing a centuries-long quest, is that prior examples of unification have predicted new phenomena, some of which (e.g., electrical generators) have proved of great practical importance. As in these prior examples of unification, the ToE would probably also allow us to confidently define the domain of validity and residual error of low-energy approximations to the full theory.

Infinite number of onion layers

Lee Smolin regularly argues that the layers of nature may be like the layers of an onion, and that the number of layers might be infinite.[citation needed] This would imply an infinite sequence of physical theories.

The argument is not universally accepted, because it is not obvious that infinity is a concept that applies to the foundations of nature.

Impossibility of calculation

Weinberg[44] points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a ToE must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics.
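
Weinberg's projectile example can be made concrete: even a crude drag model has no simple closed-form solution and is handled by successive numerical approximation. The sketch below (plain Python; the launch speed, angle, and drag coefficient are arbitrary assumed values) compares the idealized vacuum range with a simple Euler integration that includes air drag.

    # Sketch: projectile range in vacuum vs. with a quadratic drag term,
    # integrated by a crude Euler scheme. All parameter values are illustrative.
    import math

    g = 9.81                 # gravitational acceleration, m/s^2
    v0, angle = 100.0, 45.0  # launch speed (m/s) and angle (degrees), assumed
    k = 0.001                # drag constant per unit mass, 1/m, assumed

    # Idealized closed-form range (no atmosphere).
    vacuum_range = v0**2 * math.sin(2 * math.radians(angle)) / g

    # Numerical approximation with drag acceleration ~ -k * |v| * v.
    x, y = 0.0, 0.0
    vx = v0 * math.cos(math.radians(angle))
    vy = v0 * math.sin(math.radians(angle))
    dt = 0.001
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= k * speed * vx * dt
        vy -= (g + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt

    print(f"vacuum range: {vacuum_range:.0f} m, with drag: {x:.0f} m")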

Pests invade Europe after neonicotinoids ban, with no benefit to bee health

January 27, 2015
 
Original link:  http://geneticliteracyproject.org/2015/01/27/pests-invade-europe-after-neonicotinoids-ban-with-no-benefit-to-bee-health/
 
(Image: Colorado potato beetle on eggplant.)

This month, more than 100 natural food brands, including Clif Bar and Stonyfield, joined together in a drive to encourage the Obama Administration to ban pesticides linked to bee deaths. The culprits, they say, are neonicotinoids, a class of chemicals commonly called neonics, introduced in the 1990s and mostly coated onto seeds to help farmers control insects.

“(Neonicotinoids) poison the whole treated plant including the nectar and pollen that bees eat – and they are persistent, lasting months or even years in the plant, soil, and waterways,” writes Jennifer Sass, a scientist with the Natural Resources Defense Council, which has been pressing the Environmental Protection Agency to conduct a one-year review of neonics to determine if a ban is necessary. “Traditional best management practices for bee protection, such as not spraying during the day or on bloom, doesn’t work for neonics,” she claims.

Last November, the NRDC submitted signatures from almost 275,000 of its members urging the EPA to respond to its legal petition to expedite the review of neonics.

While there are a number of factors contributing to the dramatic die-off of bees – both honey bees and native bees – there is now a wealth of science that demonstrates that pesticides are a big part of the problem. In particular, the neonic pesticides (imidacloprid, clothianidin, and others) have been linked to impaired bee health, making it more difficult for the colony to breed, to fight off disease and pathogens, and to survive winter. What makes neonics so harmful to bees is that they are systemic — meaning they poison the whole treated plant including the nectar and pollen that bees eat — and they are persistent, lasting months or even years in the plant, soil, and waterways they contaminate. Traditional best management practices for bee protection, such as not spraying during the day or on bloom, doesn’t work for neonics.

Yet, as activists continue to campaign to get neonics banned, news from Europe, where a two-year moratorium went into effect last year, suggests that farmers are unable to control pests without them. Partly in desperation, they are replacing neonics with pesticides that are older, less effective and demonstrably more harmful to humans and social insects, and farm yields are dropping.

The European Commission banned the use of neonics despite the fact that the scientific community is sharply split as to whether neonics play a significant role in bee deaths. The causes of colony collapse disorder (CCD) and subsequent winter-related problems have since remained a mystery, and a heated controversy.

Bees play an integral role in agriculture, helping to pollinate roughly one-third of crop species in the US, including many fruits, vegetables, nuts and livestock feed such as alfalfa and clover. In 2006, as much as 80 percent of the hives in California, the center of the world almond industry, died in what was dubbed colony collapse disorder. More recently, overwinter deaths of bees in the United States have hovered well above the 19 percent loss level that is common and considered acceptable, sometimes reaching as high as 30 percent. Europe has faced similar overwinter die-offs.

But there is no bee crisis, say most mainstream entomologists. Globally, beehive counts have increased by 45 percent in the last 50 years, according to a United Nations report. Neonics are widely used in Australia, where there have been no mass bee deaths, and in Western Canada, where bees are thriving. Over the past two winters, bee losses have moderated considerably throughout Europe, and beehive numbers have risen steadily over the past two decades as the use of neonics has grown.
(Chart: European Union beehive totals.)
That did not stop the EU ban from being instituted. In North America, despite the 2006 CCD crisis, beehive numbers have held steady since the time neonics were introduced, challenging one of the central claims of environmental critics. (Chart: North American beehive totals.)

While many environmental activists, and some scientists, have coalesced around neonics as a likely culprit, most mainstream entomologists disagree. May Berenbaum, the renowned University of Illinois entomologist and chairwoman of a major National Academy of Sciences study on the loss of pollinators, has said that she is “extremely dubious” that banning neonics, as many greens are demanding, would have any positive effect.

Jeff Pettis, research leader of the Bee Research Laboratory in Beltsville, Maryland, who formerly headed the USDA’s Agricultural Research Service’s research on bee Colony Collapse Disorder, said the bee problem has been perplexing:
We know more now than we did a few years ago, but CCD has really been a 1,000-piece jigsaw puzzle, and the best I can say is that a lot of pieces have been turned over. The problem is that they have almost all been blue-sky pieces—frame but no center picture.
Last summer, President Barack Obama issued a memorandum asking agencies to outline steps to protect pollinators; however, the report is not expected to be released until next year. The panel is being headed by Illinois entomologist May Berenbaum. In a controversial move, the National Wildlife Refuge System announced a ban on both neonics and genetically modified organisms last August.

Cities, states and provinces in Canada, egged on by environmental activists, are beginning to act unilaterally. Ontario voted to ban the chemicals, as have several cities or counties, including Vancouver; Seattle, Thurston County, Wash.; Spokane, Wash.; Cannon Beach, Ore.; and Shorewood, MN. Oregon held a hearing recently to consider a policy that would limit neonics use.

European fallout

While pressures on politicians increase, farmers in Europe say they are already seeing the fallout on crop yields from the ban, which many claim is a politically driven policy. This is the first season for growing oilseed rape following the EU ban, and there has been a noticeable rise in beetle damage.
(Image: Flea beetle larvae.)
Last autumn saw beetle numbers swell in areas of eastern England, and the damage from their larvae could leave crops open to other pest damage and lodging.

Ryan Hudson, agronomist with distribution group Farmacy, says that growers in the beetle hotspot areas are seeing some fields “riddled” with the larvae.

“They could do a lot of damage – arguably more than the adults because we cannot control them now and I think we will find out the true extent this season,” he explains.

Near Cambridge, England, farmer Martin Jenkins found flea beetles for the first time in almost a decade on his 750 acres of rapeseed (commonly called canola in the U.S.). He told Bloomberg:
When we remove a tool from the box, that puts even more pressure on the tools we’ve got left. More pesticides are being used, and even more ridiculous is there will be massively less rapeseed.
There is little growers can do now, as the only option for tackling the larvae was an autumn spraying of pyrethroids, an older class of chemicals phased out for multiple reasons, so they must now focus on stimulating growth. The infestation may cause a 15 percent drop in canola yields in Europe this year, and some areas are even worse off. Last fall, some canola fields in Germany were so damaged that farmers plowed them under and replanted winter cereals. Nick von Westenholz, chief executive of the UK’s Crop Protection Association, an industry group, explained:
Farmers have had to go back to older chemistry and chemistry that is increasingly less effective. Companies would like to innovate and bring newer stuff, but the neonicotinoid example is not a tempting one.
Bringing new chemicals to market is expensive and takes time to move through the regulatory system. Meanwhile, canola farmers are spraying almost twice as much of the alternative chemicals from the pyrethroid class, said Manuela Specht from the German oilseed trade group UFOP in Berlin.
Last fall, UK farmer Peter Kendall said he sprayed his crop with pyrethroids three times last year before giving up, replanting and spraying again.

This increased spraying with harsher chemicals may harm the honeybees that the neonics ban was intended to protect in the first place. A 2014 study by researchers at the University of London found that exposure to pyrethroids can reduce bee size.

“There is a strong feeling among farmers that we are worse off and the environment is worse off,” said Kendall.

Rebecca Randall is a journalist focusing on international relations and global food issues. Follow her @beccawrites.

