
Sunday, August 31, 2014

Nucleosynthesis

From Wikipedia, the free encyclopedia

Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons, primarily protons and neutrons. The first nuclei were formed about three minutes after the Big Bang, through the process called Big Bang nucleosynthesis. The hydrogen and helium formed then became the content of the first stars, and this primordial process is responsible for the present hydrogen/helium ratio of the cosmos.

With the formation of stars, heavier nuclei were created from hydrogen and helium by stellar nucleosynthesis, a process that continues today. Some of these elements, particularly those lighter than iron, continue to be delivered to the interstellar medium when low mass stars eject their outer envelope before they collapse to form white dwarfs. The remains of their ejected mass form the planetary nebulae observable throughout our galaxy.

Supernova nucleosynthesis within exploding stars by fusing carbon and oxygen is responsible for the abundances of elements between magnesium (atomic number 12) and nickel (atomic number 28).[1] Supernova nucleosynthesis is also thought to be responsible for the creation of rarer elements heavier than iron and nickel, in the last few seconds of a type II supernova event. The synthesis of these heavier elements is endothermic: their creation absorbs part of the energy produced during the supernova explosion. Some of those elements are created from the absorption of multiple neutrons (the r-process) in the period of a few seconds during the explosion. The elements formed in supernovas include the heaviest elements known, such as the long-lived elements uranium and thorium.

Cosmic ray spallation, caused when cosmic rays impact the interstellar medium and fragment larger atomic species, is a significant source of the lighter nuclei, particularly 3He, 9Be and 10,11B, that are not created by stellar nucleosynthesis.

In addition to the fusion processes responsible for the growing abundances of elements in the universe, a few minor natural processes continue to produce very small numbers of new nuclides on Earth. These nuclides contribute little to overall abundances, but may account for the presence of specific new nuclei. They are produced via radiogenesis (decay) of long-lived, heavy, primordial radionuclides such as uranium and thorium. Cosmic ray bombardment of elements on Earth also contributes to the presence of rare, short-lived atomic species called cosmogenic nuclides.

Timeline

It is thought that the primordial nucleons themselves were formed from the quark–gluon plasma during the Big Bang as it cooled below two trillion degrees. A few minutes afterward, starting with only protons and neutrons, nuclei up to lithium and beryllium (both with mass number 7) were formed, but the abundances of other elements dropped sharply with growing atomic mass. Some boron may have been formed at this time, but the process stopped before significant carbon could be formed, as this element requires a far higher product of helium density and time than were present in the short nucleosynthesis period of the Big Bang. That fusion process essentially shut down at about 20 minutes, due to drops in temperature and density as the universe continued to expand. This first process, Big Bang nucleosynthesis, was the first type of nucleogenesis to occur in the universe.

The subsequent nucleosynthesis of the heavier elements requires the extreme temperatures and pressures of stars and supernovas. These processes began as hydrogen and helium from the Big Bang collapsed into the first stars about 500 million years after the Big Bang. Star formation has occurred continuously in the galaxy since that time. The elements found on Earth, the so-called primordial elements, were created prior to Earth's formation by stellar nucleosynthesis and by supernova nucleosynthesis. They range in atomic number from Z=6 (carbon) to Z=94 (plutonium). Synthesis of these elements occurred either by nuclear fusion (including both rapid and slow multiple neutron capture) or, to a lesser degree, by nuclear fission followed by beta decay.

A star gains heavier elements by fusing the lighter nuclei (hydrogen, deuterium, beryllium, lithium, and boron) found in its initial composition. Interstellar gas therefore contains declining abundances of these light elements, which are present only by virtue of their nucleosynthesis during the Big Bang. Larger quantities of these lighter elements in the present universe are therefore thought to have been restored through billions of years of cosmic ray (mostly high-energy proton) mediated breakup of heavier elements in interstellar gas and dust. The fragments of these cosmic-ray collisions include the light elements Li, Be and B.

History of nucleosynthesis theory

The first ideas on nucleosynthesis were simply that the chemical elements were created at the beginning of the universe, but no rational physical scenario for this could be identified. Gradually it became clear that hydrogen and helium are much more abundant than any of the other elements. All the rest constitute less than 2% of the mass of the solar system, and of other star systems as well. At the same time it was clear that oxygen and carbon were the next two most common elements, and also that there was a general trend toward high abundance of the light elements, especially those composed of whole numbers of helium-4 nuclei.

Arthur Stanley Eddington first suggested in 1920 that stars obtain their energy by fusing hydrogen into helium. This idea was not generally accepted, as the nuclear mechanism was not understood. In the years immediately before World War II, Hans Bethe first elucidated those nuclear mechanisms by which hydrogen is fused into helium. However, neither of these early works on stellar power addressed the origin of the elements heavier than helium.

Fred Hoyle's original work on nucleosynthesis of heavier elements in stars, occurred just after World War II.[2] His work explained the production of all heavier elements, starting from hydrogen. Hoyle proposed that hydrogen is continuously created in the universe from vacuum and energy, without need for universal beginning.

Hoyle's work explained how the abundances of the elements increased with time as the galaxy aged. Subsequently, Hoyle's picture was expanded during the 1960s by contributions from William A. Fowler, Alastair G. W. Cameron, and Donald D. Clayton, followed by many others. The creative 1957 review paper by E. M. Burbidge, G. R. Burbidge, Fowler and Hoyle (see Ref. list) is a well-known summary of the state of the field in 1957. That paper defined new processes for changing one heavy nucleus into others within stars, processes that could be documented by astronomers.

The Big Bang itself had been proposed in 1931, long before this period, by Georges Lemaître, a Belgian physicist and Roman Catholic priest, who suggested that the evident expansion of the Universe in time required that the Universe, if contracted backwards in time, would continue to do so until it could contract no further. This would bring all the mass of the Universe to a single point, a "primeval atom", to a state before which time and space did not exist. Hoyle later gave Lemaître's model the derisive term of Big Bang, not realizing that Lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present, not only in stars but also in interstellar gas. As it happened, both Lemaître and Hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.

The goal of the theory of nucleosynthesis is to understand the vastly differing abundances of the chemical elements and their several isotopes from the perspective of natural processes. The primary stimulus to the development of this theory was the shape of a plot of the abundances versus the atomic number of the elements. Those abundances, when plotted on a graph as a function of atomic number, have a jagged sawtooth structure that varies by factors of up to ten million. A very influential stimulus to nucleosynthesis research was an abundance table created by Hans Suess and Harold Urey that was based on the unfractionated abundances of the non-volatile elements found within unevolved meteorites.[3] Such a graph of the abundances is displayed on a logarithmic scale below, where the dramatically jagged structure is visually suppressed by the many powers of ten spanned in this graph. See Handbook of Isotopes in the Cosmos for more data and discussion of abundances of the isotopes.[4]
Abundances of the chemical elements in the Solar System. Hydrogen and helium are most common, residuals within the paradigm of the Big Bang.[5] The next three elements (Li, Be, B) are rare because they are poorly synthesized in the Big Bang and also in stars. The two general trends in the remaining stellar-produced elements are: (1) an alternation of abundance of elements according to whether they have even or odd atomic numbers, and (2) a general decrease in abundance as elements become heavier. Within this trend is a peak at the abundances of iron and nickel, which is especially visible on a logarithmic graph spanning fewer powers of ten, say between logA=2 (A=100) and logA=6 (A=1,000,000).
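
The scale of this sawtooth curve can be made concrete with a few round numbers. The values below are order-of-magnitude illustrations on the conventional scale where silicon = 10^6 atoms; they are rounded for this sketch, not a data source:

```python
import math

# Rough solar-system abundances (atoms per 1e6 Si atoms).
# Illustrative round numbers only, chosen to show the dynamic range
# of the curve described above.
abundance = {
    "H":  2.8e10,   # hydrogen dominates
    "He": 2.7e9,    # helium next
    "Li": 57,       # the Li-Be-B trough
    "C":  1.0e7,    # CNO peak
    "Fe": 9.0e5,    # iron-nickel peak
    "U":  0.009,    # heaviest long-lived element
}

span = max(abundance.values()) / min(abundance.values())
print(f"dynamic range ≈ 10^{math.log10(span):.1f}")
for el, a in abundance.items():
    print(f"{el:2s}  log10(abundance) = {math.log10(a):7.2f}")
```

Printing the logarithms is exactly what the logarithmic plot does: a span of roughly twelve powers of ten collapses into a modest vertical range, which is why the jagged structure looks visually suppressed.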

Processes

There are a number of astrophysical processes which are believed to be responsible for nucleosynthesis. The majority of these occur in shells within stars, and the chain of nuclear fusion processes involved is known as hydrogen burning (via the proton-proton chain or the CNO cycle), helium burning, carbon burning, neon burning, oxygen burning and silicon burning. These processes are able to create elements up to and including iron and nickel. This is the region of nucleosynthesis within which the isotopes with the highest binding energy per nucleon are created. Heavier elements can be assembled within stars by a neutron capture process known as the s-process, or in explosive environments, such as supernovae, by a number of other processes. Some of those other processes include the r-process, which involves rapid neutron captures, the rp-process, and the p-process (sometimes known as the gamma process), which involves photodisintegration of existing nuclei.

The major types of nucleosynthesis

Periodic table showing the origin of elements

Big Bang nucleosynthesis

Big Bang nucleosynthesis occurred within the first three minutes of the beginning of the universe and is responsible for much of the abundance of 1H (protium), 2H (D, deuterium), 3He (helium-3), and 4He (helium-4) in the universe. Although 4He continues to be produced by stellar fusion and alpha decays, and trace amounts of 1H continue to be produced by spallation and certain types of radioactive decay, most of the mass of these isotopes in the universe is thought to have been produced in the Big Bang. The nuclei of these elements, along with some 7Li and 7Be, are considered to have been formed between 100 and 300 seconds after the Big Bang, when the primordial quark–gluon plasma froze out to form protons and neutrons. Because of the very short period in which nucleosynthesis occurred before it was stopped by expansion and cooling (about 20 minutes), no elements heavier than beryllium (or possibly boron) could be formed. Elements formed during this time were in the plasma state, and did not cool to the state of neutral atoms until much later.
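
The roughly 25% primordial helium mass fraction can be recovered from a simple bookkeeping argument: essentially every neutron surviving at the start of fusion ends up bound in 4He. A minimal sketch, assuming the standard freeze-out ratio n/p ≈ 1/7:

```python
# Primordial helium mass fraction from the neutron-to-proton ratio.
# Assumption: n/p had fallen to about 1/7 (freeze-out plus some free
# neutron decay) by the time deuterium, and then helium, could form.
n_over_p = 1 / 7

# Each 4He nucleus locks up 2 neutrons and 2 protons, so the mass
# fraction bound into helium is Y_p = 2(n/p) / (1 + n/p).
Y_p = 2 * n_over_p / (1 + n_over_p)
print(f"predicted helium mass fraction Y_p ≈ {Y_p:.2f}")  # ≈ 0.25
```

The result, 0.25, matches the observed ~25% helium by mass that this article attributes to the Big Bang.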

 
Chief nuclear reactions responsible for the relative abundances of light atomic nuclei observed throughout the universe.

Stellar nucleosynthesis

Stellar nucleosynthesis is the nuclear process by which new nuclei are produced. It occurs naturally in stars during stellar evolution. It is responsible for the galactic abundances of elements from carbon to iron. Stars are thermonuclear furnaces in which H and He are fused into heavier nuclei by increasingly high temperatures as the composition of the core evolves.[6] Of particular importance is carbon, because its formation from He is a bottleneck in the entire process. Carbon is produced by the triple-alpha process in all stars. Carbon is also the main element that causes the release of free neutrons within stars, giving rise to the s-process, in which the slow absorption of neutrons converts iron into elements heavier than iron and nickel.[7]
The products of stellar nucleosynthesis are generally dispersed into the interstellar gas through mass loss episodes and the stellar winds of low mass stars. The mass loss events can be witnessed in the planetary nebulae phase of low-mass star evolution, and in the explosive ending, called a supernova, of stars with more than eight times the mass of the sun.

The first direct proof that nucleosynthesis occurs in stars was the astronomical observation that interstellar gas has become enriched with heavy elements as time passed. As a result, stars that were born from it late in the galaxy formed with much higher initial heavy element abundances than those that had formed earlier. The detection of technetium in the atmosphere of a red giant star in 1952,[8] by spectroscopy, provided the first evidence of nuclear activity within stars. Because technetium is radioactive, with a half-life much less than the age of the star, its abundance must reflect its recent creation within that star. Equally convincing evidence of the stellar origin of heavy elements is the large overabundances of specific stable elements found in the stellar atmospheres of asymptotic giant branch stars. Observation of barium abundances some 20-50 times greater than those found in unevolved stars is evidence of the operation of the s-process within such stars. Many modern proofs of stellar nucleosynthesis are provided by the isotopic compositions of stardust, solid grains that have condensed from the gases of individual stars and which have been extracted from meteorites. Stardust is one component of cosmic dust, and is frequently called presolar grains. The measured isotopic compositions in stardust grains demonstrate many aspects of nucleosynthesis within the stars from which the grains condensed during the star's late-life mass-loss episodes.[9]

Explosive nucleosynthesis

Supernova nucleosynthesis occurs in the energetic environment in supernovae, in which the elements between silicon and nickel are synthesized in quasiequilibrium[10] established during fast fusion that attaches by reciprocating balanced nuclear reactions to 28Si. Quasiequilibrium can be thought of as almost equilibrium except for a high abundance of the 28Si nuclei in the feverishly burning mix. This concept[11] was the most important discovery in nucleosynthesis theory of the intermediate-mass elements since Hoyle's 1954 paper because it provided an overarching understanding of the abundant and chemically important elements between silicon (A=28) and nickel (A=60). It replaced the incorrect although much cited alpha process of the B2FH paper, which inadvertently obscured Hoyle's better 1954 theory.[12] Further nucleosynthesis processes can occur, in particular the r-process (rapid process) described by the B2FH paper and first calculated by Seeger, Fowler and Clayton,[13] in which the most neutron-rich isotopes of elements heavier than nickel are produced by rapid absorption of free neutrons. The creation of free neutrons by electron capture during the rapid compression of the supernova core along with assembly of some neutron-rich seed nuclei makes the r-process a primary process, and one that can occur even in a star of pure H and He. This is in contrast to the B2FH designation of the process as a secondary process. This promising scenario, though generally supported by supernova experts, has yet to achieve a totally satisfactory calculation of r-process abundances. The primary r-process has been confirmed by astronomers who have observed old stars born when galactic metallicity was still small, that nonetheless contain their complement of r-process nuclei; thereby demonstrating that the metallicity is a product of an internal process. 
The r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
The rp-process (rapid proton) involves the rapid absorption of free protons as well as neutrons, but its role and its existence are less certain.

Explosive nucleosynthesis occurs too rapidly for radioactive decay to decrease the number of neutrons, so that many abundant isotopes with equal and even numbers of protons and neutrons are synthesized by the silicon quasiequilibrium process.[14] During this process, the burning of oxygen and silicon fuses nuclei that themselves have equal numbers of protons and neutrons to produce nuclides which consist of whole numbers of helium nuclei, up to 15 (representing 60Zn). Such multiple-alpha-particle nuclides are totally stable up to 40Ca (made of 10 helium nuclei), but heavier nuclei with equal and even numbers of protons and neutrons are tightly bound but unstable. The quasiequilibrium produces radioactive isobars 44Ti, 48Cr, 52Fe, and 56Ni, which (except 44Ti) are created in abundance but decay after the explosion and leave the most stable isotope of the corresponding element at the same atomic weight. The most abundant and extant isotopes of elements produced in this way are 48Ti, 52Cr, and 56Fe. These decays are accompanied by the emission of gamma rays (radiation from the nucleus), whose spectroscopic lines can be used to identify the isotope created by the decay. The detection of these emission lines was an important early product of gamma-ray astronomy.[15]

The most convincing proof of explosive nucleosynthesis in supernovae occurred in 1987 when those gamma-ray lines were detected emerging from supernova 1987A. Gamma-ray lines identifying 56Co and 57Co nuclei, whose radioactive half-lives limit their age to about a year, proved that these radioactive nuclei had been freshly created in the explosion. This nuclear astronomy observation was predicted in 1969[16] as a way to confirm explosive nucleosynthesis of the elements, and that prediction played an important role in the planning for NASA's Compton Gamma-Ray Observatory.

Other proofs of explosive nucleosynthesis are found within the stardust grains that condensed within the interiors of supernovae as they expanded and cooled. Stardust grains are one component of cosmic dust. In particular, radioactive 44Ti was measured to be very abundant within supernova stardust grains at the time they condensed during the supernova expansion.[17] This confirmed a 1975 prediction of the identification of supernova stardust (SUNOCONs), which became part of the pantheon of presolar grains. Other unusual isotopic ratios within these grains reveal many specific aspects of explosive nucleosynthesis.

Cosmic ray spallation

The cosmic ray spallation process reduces the atomic weight of interstellar matter through impacts with cosmic rays, producing some of the lightest elements present in the universe (though not a significant amount of deuterium). Most notably, spallation is believed to be responsible for the generation of almost all of the 3He and the elements lithium, beryllium, and boron, although some 7Li and 7Be are thought to have been produced in the Big Bang. The spallation process results from the impact of cosmic rays (mostly fast protons) against the interstellar medium. These impacts fragment the carbon, nitrogen, and oxygen nuclei present. The process leaves the light elements beryllium, boron, and lithium at much greater abundances in the cosmos than within solar atmospheres. The light nuclei 1H and 4He are not products of spallation and are represented in the cosmos with approximately primordial abundance.
Beryllium and boron are not significantly produced by stellar fusion processes, due to the instability of any 8Be formed from two 4He nuclei.

Empirical evidence

Theories of nucleosynthesis are tested by calculating isotope abundances and comparing those results with observed abundances. Isotope abundances are typically calculated from the transition rates between isotopes in a network. Often these calculations can be simplified, because a few key reactions control the rate of all the others.
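
As a toy instance of such a network, the two-step decay chain 56Ni → 56Co → 56Fe, which powers supernova light curves, has a closed-form (Bateman) solution. The half-lives used are the measured values; starting from pure 56Ni is an idealization for this sketch:

```python
import math

# Minimal two-isotope "network": the radioactive chain 56Ni -> 56Co -> 56Fe.
# Half-lives (days) are measured values; the pure-56Ni initial condition
# is an idealization of supernova ejecta.
t_half = {"56Ni": 6.08, "56Co": 77.2}
lam = {k: math.log(2) / v for k, v in t_half.items()}  # decay constants, 1/day

def abundances(t, n0=1.0):
    """Analytic (Bateman) solution of the chain at time t in days."""
    l1, l2 = lam["56Ni"], lam["56Co"]
    ni = n0 * math.exp(-l1 * t)
    co = n0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    fe = n0 - ni - co          # stable end product; nuclei are conserved
    return ni, co, fe

ni, co, fe = abundances(100.0)  # 100 days after the explosion
print(f"t=100 d: 56Ni={ni:.4f}  56Co={co:.4f}  56Fe={fe:.4f}")
```

At 100 days nearly all the nickel is gone and iron dominates, which is why supernova light curves settle onto the 77-day 56Co decay timescale.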

Minor mechanisms and processes

Very small amounts of certain nuclides are produced on Earth by artificial means. Those are our primary source, for example, of technetium. However, some nuclides are also produced by a number of natural means that have continued after primordial elements were in place. These often act to produce new elements in ways that can be used to date rocks or to trace the source of geological processes. Although these processes do not produce the nuclides in abundance, they are the entire source of the existing natural supply of those nuclides.

These mechanisms include:
  • Radioactive decay may lead to radiogenic daughter nuclides. The nuclear decay of many long-lived primordial isotopes, especially uranium-235, uranium-238, and thorium-232 produce many intermediate daughter nuclides, before they too finally decay to isotopes of lead. The Earth's natural supply of elements like radon and polonium is via this mechanism. The atmosphere's supply of argon-40 is due mostly to the radioactive decay of potassium-40 in the time since the formation of the Earth. Little of the atmospheric argon is primordial. Helium-4 is produced by alpha-decay, and the helium trapped in Earth's crust is also mostly non-primordial. In other types of radioactive decay, such as cluster decay, larger species of nuclei are ejected (for example, neon-20), and these eventually become newly formed stable atoms.
  • Radioactive decay may lead to spontaneous fission. This is not cluster decay, as the fission products may be split among nearly any type of atom. Uranium-235 and uranium-238 are both primordial isotopes that undergo spontaneous fission. Natural technetium and promethium are produced in this manner.
  • Nuclear reactions. Naturally-occurring nuclear reactions powered by radioactive decay give rise to so-called nucleogenic nuclides. This process happens when an energetic particle from a radioactive decay, often an alpha particle, reacts with a nucleus of another atom to change the nucleus into another nuclide. This process may also cause the production of further subatomic particles, such as neutrons. Neutrons can also be produced in spontaneous fission and by neutron emission. These neutrons can then go on to produce other nuclides via neutron-induced fission, or by neutron capture. For example, some stable isotopes such as neon-21 and neon-22 are produced by several routes of nucleogenic synthesis, and thus only part of their abundance is primordial.
  • Nuclear reactions due to cosmic rays. By convention, these reaction-products are not termed "nucleogenic" nuclides, but rather cosmogenic nuclides. Cosmic rays continue to produce new elements on Earth by the same cosmogenic processes discussed above that produce primordial beryllium and boron. One important example is carbon-14, produced from nitrogen-14 in the atmosphere by cosmic rays. Iodine-129 is another example.
In addition to these natural processes, it is postulated that the collision of neutron stars is the main source of elements heavier than iron.[18]
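
The rock-dating use of radiogenic decay mentioned above can be sketched with the 40K → 40Ar system. The half-life and branching fraction are measured constants; the Ar/K ratio below is a hypothetical sample value:

```python
import math

# Potassium-argon dating sketch. The 40K half-life (1.248 Gyr) and the
# branching fraction of decays that yield 40Ar (~10.72%) are measured
# constants; the sample ratio passed in is a made-up illustration.
t_half_k40 = 1.248e9                 # years
lam = math.log(2) / t_half_k40       # decay constant, 1/yr
branch_ar = 0.1072                   # fraction of 40K decays producing 40Ar

def k_ar_age(ar40_over_k40):
    """Closure age (years) from radiogenic 40Ar over remaining 40K."""
    return (1 / lam) * math.log(1 + ar40_over_k40 / branch_ar)

print(f"Ar/K = 0.1  ->  age ≈ {k_ar_age(0.1) / 1e9:.2f} Gyr")
```

A measured ratio of radiogenic 40Ar to remaining 40K near 0.1 corresponds to a closure age of roughly 1.2 billion years, illustrating why the atmospheric argon budget grows over geologic time.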

High-efficiency spray-on solar power tech can turn any surface into a cheap solar cell

  • August 2, 2014 at 9:02 am
  • Original link:  http://www.extremetech.com/extreme/187416-high-efficiency-spray-on-solar-power-tech-can-turn-any-surface-into-a-cheap-solar-cell
Solar Cells
Solar panels suffer from two fundamental problems that have persisted even after decades of research: they’re not very efficient, and they cost a lot to produce. At least one of these problems has to be solved before solar power can overtake cheap energy sources like fossil fuels, and some scientists have had their hopes pinned on a common mineral called perovskite. This is an organometal with peculiar light-absorbing properties, and a team of researchers from the University of Sheffield say they’ve figured out how to create high-efficiency perovskite solar cells with a spray painting process. Yes, spray-on solar panels might actually happen.

Perovskite is a crystalline organometal made mostly of calcium titanate, and is found in deposits all over the world. It was first discovered over 150 years ago, but only recently have scientists started investigating its use as a solar panel semiconductor replacement for silicon. It certainly makes sense if we can work out the kinks. Perovskite is considerably cheaper to obtain and process than silicon, and the light absorbing layer can be incredibly thin — about 1 micrometer at minimum versus at least 180 micrometers for silicon. That’s why the spray-on solar panel tech demonstrated by the University of Sheffield is plausible as a real-world solution.

Nozzle

That raises the question, how efficient is this spray-on solar cell? Right now the researchers have managed to eke out 11% efficiency from a thin layer of perovskite. Traditionally manufactured solar cells based on the mineral have reached as high as 19%, and the spray-on variety is expected to reach similar levels eventually. That might not sound very impressive, but nearly 20% efficiency is rather good for an experimental solar panel. The best silicon cells top out at only about 25% efficiency, after all. Other materials claim higher numbers, but they aren’t nearly ready for use.

Spray-on

The breakthrough here is in the process of applying perovskite in a thin uniform layer so it can efficiently absorb light on almost any surface. A layer of this material could be used as the basis for solar panels on cars or mobile devices that don’t have completely flat surfaces for mounting standard solar panels — the structure and properties of crystalline silicon simply don’t allow for very much flexibility. A solar panel on your phone? Sure, why not? However, the University of Sheffield team cautions the efficiency of spray-on perovskite will decrease a bit on curved surfaces. [DOI: 10.1039/C4EE01546K - "Efficient planar heterojunction mixed-halide perovskite solar cells deposited via spray-deposition"]

The spray-on process has several key benefits in addition to the obvious non-flat solar cells. Most importantly, it should be incredibly easy to scale up — or down for that matter. The same nozzle can be used to manufacture a small solar panel for personal electronics and a large one for a car. It’s just about the number of passes it takes to coat the surface. The perovskite solution used can also be mass produced cheaply and is easier to handle than silicon. This all combines to lower the potential cost of solar power considerably.

Perovskite is nearing the point that it could actually supplant silicon as the standard for solar panel tech. In just a few years these panels have gone from low single digit efficiencies to nearly matching silicon. This might finally be the breakthrough we’ve been waiting for to move renewable energy forward.

Cosmic microwave background

From Wikipedia, the free encyclopedia

The cosmic microwave background (CMB) is the thermal radiation assumed to be left over from the "Big Bang" of cosmology. In older literature, the CMB is also variously known as cosmic microwave background radiation (CMBR) or "relic radiation." The CMB is a cosmic background radiation that is fundamental to observational cosmology because it is the oldest light in the universe, dating to the epoch of recombination. With a traditional optical telescope, the space between stars and galaxies (the background) is completely dark. However, a sufficiently sensitive radio telescope shows a faint background glow, almost exactly the same in all directions, that is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the radio spectrum. The accidental discovery of CMB in 1964 by American radio astronomers Arno Penzias and Robert Wilson[1][2] was the culmination of work initiated in the 1940s, and earned the discoverers the 1978 Nobel Prize.
The CMB is a snapshot of the oldest light in our Universe, imprinted on the sky when the Universe was just 380,000 years old. It shows tiny temperature fluctuations that correspond to regions of slightly different densities, representing the seeds of all future structure: the stars and galaxies of today.[3]
The CMB is well explained as radiation left over from an early stage in the development of the universe, and its discovery is considered a landmark test of the Big Bang model of the universe.
When the universe was young, before the formation of stars and planets, it was denser, much hotter, and filled with a uniform glow from a white-hot fog of hydrogen plasma. As the universe expanded, both the plasma and the radiation filling it grew cooler. When the universe cooled enough, protons and electrons combined to form neutral atoms. These atoms could no longer absorb the thermal radiation, and so the universe became transparent instead of being an opaque fog. Cosmologists refer to the time period when neutral atoms first formed as the recombination epoch, and the event shortly afterwards when photons started to travel freely through space rather than constantly being scattered by electrons and protons in plasma is referred to as photon decoupling. The photons that existed at the time of photon decoupling have been propagating ever since, though growing fainter and less energetic, since the expansion of space causes their wavelength to increase over time (and wavelength is inversely proportional to energy according to Planck's relation). This is the source of the alternative term relic radiation. The surface of last scattering refers to the set of points in space at the right distance from us so that we are now receiving photons originally emitted from those points at the time of photon decoupling.
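
The fading described here can be quantified. For blackbody radiation the temperature scales as 1/(1+z), so the ratio of the recombination temperature (about 3000 K, as quoted later in this article) to today's 2.725 K fixes the redshift of the surface of last scattering:

```python
# Redshift of the surface of last scattering from the two temperatures
# given in this article: ~3000 K at recombination, 2.725 K today.
T_rec = 3000.0   # K, plasma temperature at recombination (approximate)
T_now = 2.725    # K, CMB temperature today

stretch = T_rec / T_now        # factor by which wavelengths have grown
z = stretch - 1                # since 1 + z = T_rec / T_now
print(f"z ≈ {z:.0f}; wavelengths stretched by ~{stretch:.0f}x")
```

The stretch factor of about 1100 is why light emitted as a white-hot glow now arrives as millimetre-wave "relic radiation".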

Precise measurements of the CMB are critical to cosmology, since any proposed model of the universe must explain this radiation. The CMB has a thermal black body spectrum at a temperature of 2.72548±0.00057 K.[4] The spectral radiance dEν/dν peaks at 160.2 GHz, in the microwave range of frequencies. (Alternatively if spectral radiance is defined as dEλ/dλ then the peak wavelength is 1.063 mm.) The glow is very nearly uniform in all directions, but the tiny residual variations show a very specific pattern, the same as that expected of a fairly uniformly distributed hot gas that has expanded to the current size of the universe. In particular, the spectral radiance at different angles of observation in the sky contains small anisotropies, or irregularities, which vary with the size of the region examined. They have been measured in detail, and match what would be expected if small thermal variations, generated by quantum fluctuations of matter in a very tiny space, had expanded to the size of the observable universe we see today. This is a very active field of study, with scientists seeking both better data (for example, the Planck spacecraft) and better interpretations of the initial conditions of expansion. Although many different processes might produce the general form of a black body spectrum, no model other than the Big Bang has yet explained the fluctuations. As a result, most cosmologists consider the Big Bang model of the universe to be the best explanation for the CMB.
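
The quoted 160.2 GHz peak is a direct consequence of Wien's displacement law applied to the measured temperature. In the frequency form of the law, the peak of dEν/dν sits near 58.79 GHz per kelvin (the standard value of the frequency-form Wien constant):

```python
# Check of the quoted blackbody peak using Wien's displacement law
# in its frequency form: nu_peak = (58.789 GHz/K) * T.
T_cmb = 2.72548              # K, the FIRAS-fitted CMB temperature
wien_freq_ghz_per_k = 58.789 # frequency-form Wien displacement constant

nu_peak_ghz = wien_freq_ghz_per_k * T_cmb
print(f"peak of dE/dnu ≈ {nu_peak_ghz:.1f} GHz")  # ≈ 160.2 GHz, as stated
```

The agreement with the stated 160.2 GHz is one small consistency check on the thermal, blackbody nature of the CMB.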

The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM model in particular. Moreover, the WMAP[5] and BICEP[6] experiments have observed coherence of these fluctuations on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred.[7][8]

On 17 March 2014, astronomers from the California Institute of Technology, the Harvard-Smithsonian Center for Astrophysics, Stanford University, and the University of Minnesota announced their detection of signature patterns of polarized light in the CMB, attributed to gravitational waves in the early universe, which if confirmed would provide strong evidence of cosmic inflation and the Big Bang.[9][10][11][12] However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported.[13][14][15]

Features

Graph of the cosmic microwave background spectrum measured by the FIRAS instrument on COBE, the most precisely measured black body spectrum in nature.[16] The error bars are too small to be seen even in an enlarged image, and it is impossible to distinguish the observed data from the theoretical curve.

The cosmic microwave background radiation is an emission of uniform, black body thermal energy coming from all parts of the sky. The radiation is isotropic to roughly one part in 100,000: the root mean square variations are only 18 µK,[17] after subtracting out a dipole anisotropy from the Doppler shift of the background radiation. The latter is caused by the peculiar velocity of the Earth relative to the comoving cosmic rest frame as the planet moves at some 371 km/s towards the constellation Leo. The CMB dipole as well as aberration at higher multipoles have been measured, consistent with galactic motion.[18]

In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10⁻³⁷ seconds[19] the nascent universe underwent exponential growth that smoothed out nearly all inhomogeneities. The remaining inhomogeneities were caused by quantum fluctuations in the inflaton field that caused the inflation event.[20] After 10⁻⁶ seconds, the early universe was made up of a hot, interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K, when the universe was approximately 379,000 years old.[21] At this point, the photons no longer interacted with the now electrically neutral atoms and began to travel freely through space, resulting in the decoupling of matter and radiation.[22]

The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.7260 ± 0.0013 K,[4] it will continue to drop as the universe expands. The intensity of the radiation also corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred,[23] at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background,[24] making up a fraction of roughly 6×10⁻⁵ of the total density of the universe.[25]

Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.[16]

History

The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman.[38][39][40] Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K, though two years later they re-estimated it at 28 K. This high estimate was due to a mis-estimate of the Hubble constant by Alfred Behr, which could not be replicated and was later abandoned for the earlier estimate. Although there were several previous estimates of the temperature of space, these suffered from two flaws. First, they were measurements of the effective temperature of space and did not suggest that space was filled with a thermal Planck spectrum. Second, they depended on our being at a special spot at the edge of the Milky Way galaxy, and they did not suggest the radiation is isotropic. The estimates would yield very different predictions if Earth happened to be located elsewhere in the Universe.[41]
The 1948 results of Alpher and Herman were discussed in many physics settings through about 1955, when both left the Applied Physics Laboratory at Johns Hopkins University. The mainstream astronomical community, however, was not intrigued at the time by cosmology.

Alpher and Herman's prediction was rediscovered by Yakov Zel'dovich in the early 1960s, and independently predicted by Robert Dicke at the same time. The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964.[42] In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background.[43] Also in 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background,[44] with their instrument having an excess 4.2 K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke famously quipped: "Boys, we've been scooped."[1][45][46] A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery.[47]

The interpretation of the cosmic microwave background was a controversial issue in the 1960s with some proponents of the steady state theory arguing that the microwave background was the result of scattered starlight from distant galaxies.[48] Using this model, and based on the study of narrow absorption line features in the spectra of stars, the astronomer Andrew McKellar wrote in 1941: "It can be calculated that the 'rotational temperature' of interstellar space is 2 K."[26] However, during the 1970s the consensus was established that the cosmic microwave background is a remnant of the big bang. This was largely because new measurements at a range of frequencies showed that the spectrum was a thermal, black body spectrum, a result that the steady state model was unable to reproduce.[49]
The Holmdel Horn Antenna on which Penzias and Wilson discovered the cosmic microwave background.

Harrison, Peebles, Yu and Zel'dovich realized that the early universe would have to have inhomogeneities at the level of 10⁻⁴ or 10⁻⁵.[50][51][52] Rashid Sunyaev later calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background.[53] Increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave upper limits on the large-scale anisotropy. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992.[54][55] The team received the Nobel Prize in Physics for 2006 for this discovery.

Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma.[56] The first peak in the anisotropy was tentatively detected by the Toco experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments.[57][58][59] These measurements demonstrated that the geometry of the Universe is approximately flat, rather than curved.[60] They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation.[61]

The second peak was tentatively detected by several experiments before being definitively detected by WMAP, which has also tentatively detected the third peak.[62] As of 2010, several experiments to improve measurements of the polarization and the microwave background on small angular scales are ongoing. These include DASI, WMAP, BOOMERanG, QUaD, Planck spacecraft, Atacama Cosmology Telescope, South Pole Telescope and the QUIET telescope.

Relationship to the Big Bang

The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang theory. Measurements of the CMB have made the inflationary Big Bang theory the Standard Model of Cosmology.[63] The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory.[64]

The CMB essentially confirms the Big Bang theory. In the late 1940s Alpher and Herman reasoned that if there was a big bang, the expansion of the Universe would have stretched and cooled the high-energy radiation of the very early Universe into the microwave region and down to a temperature of about 5 K. They were slightly off with their estimate, but they had exactly the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to stumble into discovering that the microwave background was actually there.[65]

The CMB gives a snapshot of the universe when, according to standard cosmology, the temperature dropped enough to allow electrons and protons to form hydrogen atoms, thus making the universe transparent to radiation. When it originated some 380,000 years after the Big Bang—this time is generally known as the "time of last scattering" or the period of recombination or decoupling—the temperature of the universe was about 3000 K. This corresponds to an energy of about 0.25 eV, which is much less than the 13.6 eV ionization energy of hydrogen.[66]

Since decoupling, the temperature of the background radiation has dropped by a factor of roughly 1,100[67] due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, making the radiation's temperature inversely proportional to the universe's scale factor. The temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the temperature of the CMB as observed in the present day (2.725 K or 0.235 meV):[68]
Tr = 2.725 K × (1 + z)
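
The relation above is simple to apply. For example, at the redshift of last scattering (taking z ≈ 1089 as an illustrative value) it recovers the roughly 3000 K decoupling temperature quoted earlier:

```python
T0 = 2.725  # present-day CMB temperature, K

def cmb_temperature(z):
    """CMB temperature at redshift z: T(z) = T0 * (1 + z)."""
    return T0 * (1.0 + z)

print(cmb_temperature(0))      # 2.725 K today
print(cmb_temperature(1089))   # ≈ 2970 K at last scattering
```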

Primary anisotropy

The power spectrum of the cosmic microwave background radiation temperature anisotropy in terms of the angular scale (or multipole moment). The data shown come from the WMAP (2006), Acbar (2004) Boomerang (2005), CBI (2004), and VSA (2004) instruments. Also shown is a theoretical model (solid line).

The anisotropy of the cosmic microwave background is divided into two types: primary anisotropy, due to effects which occur at the last scattering surface and before; and secondary anisotropy, due to effects such as interactions of the background radiation with hot gas or gravitational potentials, which occur between the last scattering surface and the observer.

The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon-baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons—moving at speeds much slower than light—makes them tend to collapse to form dense haloes. These two effects compete to create acoustic oscillations which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude.

The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The second peak (more precisely, the ratio of the odd peaks to the even peaks) determines the reduced baryon density.[69] The third peak can be used to get information about the dark matter density.[70]

The locations of the peaks also give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations—called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures.
  • Adiabatic density perturbations
the fractional additional density of each type of particle (baryons, photons ...) is the same. That is, if at one place there is 1% more energy in baryons than average, then at that place there is also 1% more energy in photons (and 1% more energy in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic.
  • Isocurvature density perturbations
in each place the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Cosmic strings would produce mostly isocurvature primordial perturbations.
The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (l-values of the peaks) are roughly in the ratio 1:3:5:..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1:2:3:...[71] Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings.

Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down:
  • the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe
  • the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring.
These effects contribute about equally to the suppression of anisotropies at small scales, and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies.
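
The damping tail can be sketched numerically. In this toy model (the damping multipole l_d used here is illustrative, not a fitted value), small-scale power is suppressed by a roughly Gaussian envelope exp(−(l/l_d)²):

```python
import numpy as np

# Toy sketch of diffusion (Silk) damping: anisotropy power at multipole l is
# suppressed by roughly exp(-(l/l_d)^2), where the damping scale l_d is set
# by the photon mean free path and the thickness of the last scattering
# surface. The value below is illustrative only.
l = np.arange(2, 3000)
l_d = 1500.0
envelope = np.exp(-(l / l_d) ** 2)

print(envelope[-1])   # at l ≈ 3000, power is suppressed to ~2% of its large-scale value
```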

The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the Universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t+dt is given by P(t)dt.

The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) is maximum as 372,000 years.[72] This is often taken as the "time" at which the CMB formed. However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximum value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and when it was complete, the universe was roughly 487,000 years old.
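
The FWHM quoted by the WMAP team can be illustrated with a toy density. This sketch models the PVF as a Gaussian (the real PVF is skewed, so this is purely illustrative) and reads the peak time and width back off the sampled curve:

```python
import numpy as np

# Illustrative Gaussian stand-in for the photon visibility function P(t),
# built from the quoted numbers: peak near 372,000 yr, FWHM near 115,000 yr.
t_peak = 372_000.0                        # yr, time at which P(t) is maximal
fwhm = 115_000.0                          # yr, full width at half maximum
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

t = np.linspace(0.0, 800_000.0, 400_001)  # yr, grid spacing of 2 yr
P = np.exp(-0.5 * ((t - t_peak) / sigma) ** 2)

# Recover the FWHM numerically from the sampled density
above_half = t[P >= 0.5 * P.max()]
width = above_half[-1] - above_half[0]
print(width)                              # ≈ 115,000 yr
```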

Late time anisotropy

Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions.

The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the Universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB:
  1. Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.)
  2. The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation.
Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift of more than 17. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes.

The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the dark age, and is a period which is under intense study by astronomers (See 21 centimeter radiation).

Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zel'dovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields.

Polarization

The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-modes and B-modes. This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from Thomson scattering in a heterogeneous plasma. The B-modes are not sourced by standard scalar-type perturbations; instead they can be sourced by two mechanisms. The first is gravitational lensing of E-modes, which was measured by the South Pole Telescope in 2013.[73] The second is gravitational waves arising from cosmic inflation.
Detecting the B-modes is extremely difficult, particularly as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal.[74]

Microwave background observations

Subsequent to the discovery of the CMB, hundreds of cosmic microwave background experiments have been conducted to measure and characterize the signatures of the radiation. The most famous experiment is probably the NASA Cosmic Background Explorer (COBE) satellite that orbited in 1989–1996 and which detected and quantified the large scale anisotropies at the limit of its detection capabilities. Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the Universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum.
In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers to minimize non-sky signal noise.[67] The first results from this mission, disclosed in 2003, were detailed measurements of the angular power spectrum at a scale of less than one degree, tightly constraining various cosmological parameters. The results are broadly consistent with those expected from cosmic inflation as well as various other competing theories, and are available in detail at NASA's data bank for Cosmic Microwave Background (CMB) (see links below). Although WMAP provided very accurate measurements of the large scale angular fluctuations in the CMB (structures about as broad in the sky as the moon), it did not have the angular resolution to measure the smaller scale fluctuations which had been observed by former ground-based interferometers.

All-sky map

All-sky map of the CMB, created from 9 years of WMAP data

A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and is currently performing an even more detailed investigation. Planck employs both HEMT radiometers and bolometer technology and will measure the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope.
Comparison of CMB results from COBE, WMAP and Planck – March 21, 2013.

On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background.[75][76] The map suggests the universe is slightly older than researchers thought. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. According to the team, the universe is 13.798 ± 0.037 billion years old,[77] and contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. The Hubble constant was measured to be 67.80 ± 0.77 (km/s)/Mpc.[75][78][79][80]

Additional ground-based instruments such as the South Pole Telescope in Antarctica and the proposed Clover Project, Atacama Cosmology Telescope and the QUIET telescope in Chile will provide additional data not available from satellite observations, possibly including the B-mode polarization.

Data reduction and analysis

Raw CMBR data from the space vehicle (i.e. WMAP) contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR.

The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. Although computing a power spectrum from a map is in principle a simple Fourier transform, decomposing the map of the sky into spherical harmonics, in practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short scale structure of the CMB power spectrum.

Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov Chain Monte Carlo sampling techniques.
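
As a concrete illustration of the Markov Chain Monte Carlo step, the following sketch fits a one-parameter toy spectrum to mock band-powers with a Metropolis-Hastings sampler. The model, the data, and every number here are invented for illustration; a real analysis samples many cosmological parameters against a full CMB likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "theory": C_l = A / (l(l+1)); toy "data": that spectrum plus noise.
ls = np.arange(2, 50)
A_true = 5000.0
sigma = 1.0
data = A_true / (ls * (ls + 1)) + rng.normal(0.0, sigma, ls.size)

def log_like(A):
    model = A / (ls * (ls + 1))
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

chain = []
A = 4000.0                                   # deliberately poor starting guess
lp = log_like(A)
for _ in range(20_000):
    prop = A + rng.normal(0.0, 10.0)         # symmetric random-walk proposal
    lp_prop = log_like(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance rule
        A, lp = prop, lp_prop
    chain.append(A)

posterior = np.array(chain[5_000:])          # discard burn-in
print(posterior.mean())                      # ≈ A_true = 5000
```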

CMBR dipole anisotropy

From the CMB data it is seen that our local group of galaxies (the galactic cluster that includes the Solar System's Milky Way Galaxy) appears to be moving at 369±0.9 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB) in the direction of galactic longitude l = 263.99±0.14°, b = 48.26±0.03°.[81][82] This motion results in an anisotropy of the data (CMB appearing slightly warmer in the direction of movement than in the opposite direction).[83] The standard interpretation of this temperature variation is a simple velocity red shift and blue shift due to motion relative to the CMB, but alternative cosmological models can explain some fraction of the observed dipole temperature distribution in the CMB.[84]
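
The dipole's amplitude follows from the motion quoted above: to first order in v/c, the temperature modulation is ΔT = T₀ · v/c. A quick check with the quoted speed:

```python
T0 = 2.725           # present-day CMB temperature, K
v = 369e3            # m/s, Local Group speed relative to the CMB rest frame
c = 2.99792458e8     # m/s, speed of light

dT = T0 * v / c      # leading-order (v/c) Doppler dipole amplitude, K
print(dT * 1e3)      # ≈ 3.35 mK
```

This few-millikelvin dipole is roughly a hundred times larger than the intrinsic anisotropies, which is why it must be subtracted before the primordial fluctuations can be studied.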

Low multipoles and other anomalies

With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions.[85][86][87][88] The most longstanding of these is the low-l multipole controversy. Even in the COBE map, it was observed that the quadrupole (the l = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (l = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes,[89][90][91] an alignment sometimes referred to as the axis of evil.[86] A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data.[92][93][94] Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others.[62][67][95] Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable.[96] Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%.[97][98][99][100]
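
The cosmic variance limitation mentioned above is quantitative: each multipole l offers only 2l + 1 independent modes on our one sky, so even a perfect experiment measures C_l with fractional uncertainty √(2/(2l+1)). A toy simulation (using a simplified real-Gaussian convention for the modes, not a real CMB likelihood) reproduces this scatter:

```python
import numpy as np

rng = np.random.default_rng(1)

def cl_estimates(l, C_true, n_skies):
    # Draw (2l+1) real Gaussian mode amplitudes per simulated sky.
    # (Toy convention; real a_lm are complex with reality constraints.)
    alm = rng.normal(0.0, np.sqrt(C_true), size=(n_skies, 2 * l + 1))
    return (alm ** 2).mean(axis=1)          # \hat{C}_l for each sky

stds = {}
for l in (2, 200):
    est = cl_estimates(l, C_true=1.0, n_skies=20_000)
    stds[l] = est.std()
    print(l, stds[l])   # scatter ≈ sqrt(2/(2l+1)): ~0.63 at l=2, ~0.07 at l=200
```

This is why the quadrupole will never be pinned down as precisely as the high-l modes: the sample of available modes is irreducibly small.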

Recent observations with the Planck telescope, which is much more sensitive than WMAP and has higher angular resolution, confirm the observation of the axis of evil. Since two different instruments recorded the same anomaly, instrumental error (though not foreground contamination) appears to be ruled out.[101] Coincidence remains a possible explanation: Charles L. Bennett, chief scientist of WMAP, suggested that coincidence and human psychology were involved, saying "I do think there is a bit of a psychological effect; people want to find unusual things."[102]

In popular culture

  • In the Stargate Universe TV series, an Ancient spaceship, Destiny, was built to study patterns in the CMBR which indicate that the universe as we know it might have been created by some form of sentient intelligence.[103]
  • In Wheelers, a novel by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently-observed age of the universe.

Ferguson isn’t about black rage against cops. It’s white rage against progress.


Original link:  http://www.washingtonpost.com/opinions/ferguson-wasnt-black-rage-against-copsit-was-white-rage-against-progress/2014/08/29/3055e3f4-2d75-11e4-bb9b-997ae96fad33_story.html?tid=pm_pop

August 29
 
Carol Anderson is an associate professor of African American studies and history at Emory University and a public voices fellow with the Op-Ed Project. She is the author of “Bourgeois Radicals: The NAACP and the Struggle for Colonial Liberation, 1941-1960.”

When we look back on what happened in Ferguson, Mo., during the summer of 2014, it will be easy to think of it as yet one more episode of black rage ignited by yet another police killing of an unarmed African American male. But that has it precisely backward. What we’ve actually seen is the latest outbreak of white rage. Sure, it is cloaked in the niceties of law and order, but it is rage nonetheless.
 
Protests and looting naturally capture attention. But the real rage smolders in meetings where officials redraw precincts to dilute African American voting strength or seek to slash the government payrolls that have long served as sources of black employment. It goes virtually unnoticed, however, because white rage doesn’t have to take to the streets and face rubber bullets to be heard. Instead, white rage carries an aura of respectability and has access to the courts, police, legislatures and governors, who cast its efforts as noble, though they are actually driven by the most ignoble motivations.

White rage recurs in American history. It exploded after the Civil War, erupted again to undermine the Supreme Court’s Brown v. Board of Education decision and took on its latest incarnation with Barack Obama’s ascent to the White House. For every action of African American advancement, there’s a reaction, a backlash.

The North’s victory in the Civil War did not bring peace. Instead, emancipation brought white resentment that the good ol’ days of black subjugation were over. Legislatures throughout the South scrambled to reinscribe white supremacy and restore the aura of legitimacy that the anti-slavery campaign had tarnished. Lawmakers in several states created the Black Codes, which effectively criminalized blackness, sanctioned forced labor and undermined every tenet of democracy. Even the federal authorities’ promise of 40 acres — land seized from traitors who had tried to destroy the United States of America — crumbled like dust.

Influential white legislators such as Rep. Thaddeus Stevens (R-Pa.) and Sen. Charles Sumner (R-Mass.) tried to make this nation live its creed, but they were no match for the swelling resentment that neutralized the 13th, 14th and 15th amendments, and welcomed the Supreme Court’s 1876 United States v. Cruikshank decision, which undercut a law aimed at stopping the terror of the Ku Klux Klan.

Nearly 80 years later, Brown v. Board of Education seemed like another moment of triumph — with the ruling on the unconstitutionality of separate public schools for black and white students affirming African Americans’ rights as citizens. But black children, hungry for quality education, ran headlong into more white rage. Bricks and mobs at school doors were only the most obvious signs. In March 1956, 101 members of Congress issued the Southern Manifesto, declaring war on the Brown decision. Governors in Virginia, Arkansas, Alabama, Georgia and elsewhere then launched “massive resistance.” They created a legal doctrine, interposition, that supposedly nullified any federal law or court decision with which a state disagreed. They passed legislation to withhold public funding from any school that abided by Brown. They shut down public school systems and used tax dollars to ensure that whites could continue their education at racially exclusive private academies. Black children were left to rot with no viable option.

A little more than half a century after Brown, the election of Obama gave hope to the country and the world that a new racial climate had emerged in America, or that it would. But such audacious hopes would be short-lived. A rash of voter-suppression legislation, a series of unfathomable Supreme Court decisions, the rise of stand-your-ground laws and continuing police brutality make clear that Obama’s election and reelection have unleashed yet another wave of fear and anger.

It’s more subtle — less overtly racist — than in 1865 or even 1954. It’s a remake of the Southern Strategy, crafted in the wake of the civil rights movement to exploit white resentment against African Americans, and deployed with precision by Presidents Richard Nixon and Ronald Reagan. As Reagan’s key political strategist, Lee Atwater, explained in a 1981 interview: “You start out in 1954 by saying, ‘N-----, n-----, n-----.’ By 1968 you can’t say ‘n-----’ — that hurts you. Backfires. So you say stuff like ‘forced busing,’ ‘states’ rights’ and all that stuff. You’re getting so abstract now you’re talking about cutting taxes, and all these things you’re talking about are totally economic things, and a byproduct of them is blacks get hurt worse than whites. And subconsciously maybe that is part of it. I’m not saying that.” (The interview was originally published anonymously, and only years later did it emerge that Atwater was the subject.)

Now, under the guise of protecting the sanctity of the ballot box, conservatives have devised measures — such as photo ID requirements — to block African Americans’ access to the polls. A joint report by the NAACP Legal Defense and Educational Fund and the NAACP emphasized that the ID requirements would adversely affect more than 6 million African American voters. (Twenty-five percent of black Americans lack a government-issued photo ID, the report noted, compared with only 8 percent of white Americans.) The Supreme Court sanctioned this discrimination in Shelby County v. Holder, which gutted the Voting Rights Act and opened the door to 21st-century versions of 19th-century literacy tests and poll taxes.

The economic devastation of the Great Recession also shows African Americans under siege. The foreclosure crisis hit black Americans harder than any other group in the United States. A 2013 report by researchers at Brandeis University calculated that “half the collective wealth of African-American families was stripped away during the Great Recession,” in large part because of the impact on home equity. In the process, the wealth gap between blacks and whites grew: Right before the recession, white Americans had four times more wealth than black Americans, on average; by 2010, the gap had increased to six times. This was a targeted hit. Communities of color were far more likely to have riskier, higher-interest-rate loans than white communities, with good credit scores often making no difference.

Add to this the tea party movement’s assault on so-called Big Government, which despite the sanitized language of fiscal responsibility constitutes an attack on African American jobs. Public-sector employment, where there is less discrimination in hiring and pay, has traditionally been an important venue for creating a black middle class.

So when you think of Ferguson, don’t just think of black resentment at a criminal justice system that allows a white police officer to put six bullets into an unarmed black teen. Consider the economic dislocation of black America. Remember a Florida judge instructing a jury to focus only on the moment when George Zimmerman and Trayvon Martin interacted, thus transforming a 17-year-old, unarmed kid into a big, scary black guy, while the grown man who stalked him through the neighborhood with a loaded gun becomes a victim. Remember the assault on the Voting Rights Act. Look at Connick v. Thompson, a partisan 5-4 Supreme Court decision in 2011 that ruled it was legal for a city prosecutor’s staff to hide evidence that exonerated a black man who was rotting on death row for 14 years. And think of a recent study by Stanford University psychology researchers concluding that, when white people were told that black Americans are incarcerated in numbers far beyond their proportion of the population, “they reported being more afraid of crime and more likely to support the kinds of punitive policies that exacerbate the racial disparities,” such as three-strikes or stop-and-frisk laws.

Only then does Ferguson make sense. It’s about white rage.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...