Sunday, August 31, 2014

Stratosphere

From Wikipedia, the free encyclopedia

Space Shuttle Endeavour appears to straddle the stratosphere and mesosphere in this photo. "The orange layer is the troposphere, where all of the weather and clouds which we typically watch and experience are generated and contained. This orange layer gives way to the whitish Stratosphere and then into the Mesosphere."[1] (The shuttle is actually orbiting at more than 200 miles in altitude, far above this transition layer.)
Atmosphere diagram showing stratosphere. The layers are to scale: from Earth's surface to the top of the stratosphere (50 km) is just under 1% of Earth's radius.
This image shows the temperature trend in the Lower Stratosphere as measured by a series of satellite-based instruments between January 1979 and December 2005. The Lower Stratosphere is centered around 18 kilometers above Earth's surface. The stratosphere image is dominated by blues and greens, which indicates a cooling over time. Source: [1]

The stratosphere /ˈstrætəsfɪər/ is the second major layer of Earth's atmosphere, just above the troposphere, and below the mesosphere. It is stratified in temperature, with warmer layers higher up and cooler layers farther down. This is in contrast to the troposphere near the Earth's surface, which is cooler higher up and warmer farther down. The border of the troposphere and stratosphere, the tropopause, is marked by where this inversion begins, which in terms of atmospheric thermodynamics is the equilibrium level. At moderate latitudes the stratosphere is situated between about 10–13 km (33,000–43,000 ft; 6.2–8.1 mi) and 50 km (160,000 ft; 31 mi) altitude above the surface, while at the poles it starts at about 8 km (26,000 ft; 5.0 mi) altitude, and near the equator it may start at altitudes as high as 18 km (59,000 ft; 11 mi).

Ozone and temperature

Within this layer, temperature increases with altitude (see temperature inversion); the top of the stratosphere has a temperature of about 270 K (−3°C or 26.6°F), just slightly below the freezing point of water.[2] The stratosphere is layered in temperature because ozone (O3) here absorbs high-energy UV-B and UV-C radiation from the Sun and is broken down into atomic oxygen (O) and diatomic oxygen (O2). Atomic oxygen is prevalent in the upper stratosphere because of the intense UV bombardment there, which destroys both ozone and diatomic oxygen. The mid-stratosphere receives less UV light, so O and O2 are able to combine, and it is where the majority of natural ozone is produced. It is when these two forms of oxygen recombine to form ozone that they release the heat found in the stratosphere. The lower stratosphere receives very little UV-C, so atomic oxygen is scarce there and little ozone (with its accompanying heat release) is formed. This vertical stratification, with warmer layers above and cooler layers below, makes the stratosphere dynamically stable: there is no regular convection and associated turbulence in this part of the atmosphere. The top of the stratosphere is called the stratopause, above which the temperature decreases with height.
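The temperature structure described above can be sketched numerically. The following is a minimal sketch using the layer base heights and lapse rates of the U.S. Standard Atmosphere (1976), an external reference the article does not cite; the function name and layer table are illustrative, not part of the article.

```python
# Sketch of the stratospheric temperature profile using the layer
# lapse rates of the U.S. Standard Atmosphere, 1976 (an assumed
# external reference, not taken from the article itself).
# Each tuple: (base altitude in km, base temperature in K, lapse rate in K/km).
LAYERS = [
    (11.0, 216.65, 0.0),   # tropopause / lower stratosphere: isothermal
    (20.0, 216.65, 1.0),   # mid-stratosphere: temperature begins to rise
    (32.0, 228.65, 2.8),   # upper stratosphere: temperature rises faster
    (47.0, 270.65, 0.0),   # stratopause: ~270 K, near water's freezing point
]

def temperature_k(z_km: float) -> float:
    """Temperature at altitude z_km (valid roughly 11-51 km)."""
    t = None
    for base, t_base, lapse in LAYERS:
        if z_km >= base:
            t = t_base + lapse * (z_km - base)
    if t is None:
        raise ValueError("altitude below the model's 11 km floor")
    return t
```

Evaluating `temperature_k(50.0)` gives about 270.65 K, matching the "about 270 K" quoted for the top of the stratosphere.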

Methane (CH4), while not a direct cause of ozone destruction in the stratosphere, does lead to the formation of compounds that destroy ozone. Monoatomic oxygen (O) in the upper stratosphere reacts with methane (CH4) to form a hydroxyl radical (OH·). This hydroxyl radical can then interact with insoluble compounds such as chlorofluorocarbons, from which UV light breaks off chlorine radicals (Cl·). A chlorine radical strips an oxygen atom from an ozone molecule, creating an oxygen molecule (O2) and a chlorine monoxide radical (ClO·). The chlorine monoxide radical then reacts with atomic oxygen, creating another oxygen molecule and regenerating the chlorine radical, thereby preventing the reaction of monoatomic oxygen with O2 that would create natural ozone.

Aircraft flight

Commercial airliners typically cruise at altitudes of 9–12 km (30,000–39,000 ft) in temperate latitudes, in the lower reaches of the stratosphere.[3] This optimizes fuel burn, mostly thanks to the low temperatures encountered near the tropopause and the low air density, which reduces parasitic drag on the airframe. (Stated another way, it allows the airliner to fly faster for the same amount of drag.) It also lets them stay above severe weather and its extreme turbulence.

Concorde cruised at Mach 2 at about 18,000 m (59,000 ft), and the SR-71 cruised at Mach 3 at 26,000 m (85,000 ft), both still within the stratosphere.
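The Mach numbers above can be turned into airspeeds, since the speed of sound in air depends only on temperature. A minimal sketch, assuming a standard-atmosphere temperature of about 216.65 K near 18 km (a value not stated in the text):

```python
import math

GAMMA = 1.4          # heat capacity ratio of air
R_SPECIFIC = 287.05  # specific gas constant of dry air, J/(kg*K)

def speed_of_sound(temp_k: float) -> float:
    """Speed of sound in air (m/s) at the given temperature,
    a = sqrt(gamma * R_specific * T)."""
    return math.sqrt(GAMMA * R_SPECIFIC * temp_k)

# Near 18 km the standard-atmosphere temperature is about 216.65 K:
a = speed_of_sound(216.65)   # roughly 295 m/s
concorde_cruise = 2.0 * a    # Mach 2: roughly 590 m/s (~2120 km/h)
```

Note that sound travels noticeably slower in the cold stratosphere than at sea level (about 340 m/s), so a stratospheric Mach number corresponds to a lower true airspeed than the same Mach number near the ground.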

Because the temperature in the tropopause and lower stratosphere remains constant (or slightly decreases) with increasing altitude, very little convective turbulence occurs at these altitudes. Most turbulence at this altitude is instead caused by variations in the jet stream and other local wind shears, although areas of significant convective activity (thunderstorms) in the troposphere below may produce convective overshoot.

Although a few gliders have achieved great altitudes in the powerful thermals in thunderstorms,[citation needed] this is dangerous. Most high altitude flights by gliders use lee waves from mountain ranges and were used to set the current record of 15,447 m (50,679 ft).

On October 14, 2012, Felix Baumgartner set records for both the highest manned balloon flight and the highest skydive, jumping from 39.04 km (128,100 ft).[4]

Circulation and mixing

The stratosphere is a region of intense interactions among radiative, dynamical, and chemical processes, in which the horizontal mixing of gaseous components proceeds much more rapidly than vertical mixing.

An interesting feature of stratospheric circulation is the quasi-biennial oscillation (QBO) in the tropical latitudes, which is driven by gravity waves that are convectively generated in the troposphere. The QBO induces a secondary circulation that is important for the global stratospheric transport of tracers, such as ozone[5] or water vapor.

In northern hemispheric winter, sudden stratospheric warmings, caused by the absorption of Rossby waves in the stratosphere, can be observed in approximately half of winters, when easterly winds develop in the stratosphere. These events often precede unusual winter weather[6] and may even have been responsible for the cold European winters of the 1960s.[7]

Life

Bacteria

Bacterial life survives in the stratosphere, making it a part of the biosphere.[8] In 2001 an Indian experiment, involving a high-altitude balloon, was carried out at a height of 41 kilometres and a sample of dust was collected with bacterial material inside.[9]

Birds

Also, some bird species have been reported to fly at the lower levels of the stratosphere. On November 29, 1975, a Rüppell's Vulture was ingested into a jet engine 11,552 m (37,900 ft) above the Ivory Coast, and Bar-headed geese reportedly overfly Mount Everest's summit, which is 8,848 m (29,029 ft).[10][11]

Troposphere

From Wikipedia, the free encyclopedia

Space Shuttle Endeavour silhouetted against the atmosphere. The orange layer is the troposphere, the white layer is the stratosphere and the blue layer the mesosphere.[1] (The shuttle is actually orbiting at an altitude of more than 200 miles, far above all three layers.)
Earth's atmosphere diagram showing the exosphere and other layers. The layers are to scale. From Earth's surface to the top of the stratosphere (50 km) is just under 1% of Earth's radius.

The troposphere is the lowest portion of Earth's atmosphere. It contains approximately 80% of the atmosphere's mass and 99% of its water vapour and aerosols.[2] The average depth of the troposphere is approximately 17 km (11 mi) in the middle latitudes. It is deeper in the tropics, up to 20 km (12 mi), and shallower near the polar regions, approximately 7 km (4.3 mi) in winter. The lowest part of the troposphere, where friction with the Earth's surface influences air flow, is the planetary boundary layer. This layer is typically a few hundred metres to 2 km (1.2 mi) deep depending on the landform and time of day. The border between the troposphere and stratosphere, called the tropopause, is a temperature inversion.[3]

The word troposphere derives from the Greek tropos, meaning "change," reflecting the fact that turbulent mixing plays an important role in the troposphere's structure and behaviour. Most of the phenomena we associate with day-to-day weather occur in the troposphere.[3]

Pressure and temperature structure

A view of Earth's troposphere from an airplane.
Atmospheric circulation shown with three large cells.

Composition

The chemical composition of the troposphere is essentially uniform, with the notable exception of water vapor. Water vapour enters the atmosphere at the surface through evaporation and transpiration. Furthermore, the temperature of the troposphere decreases with height, and saturation vapor pressure decreases strongly as temperature drops, so the amount of water vapor that can exist in the atmosphere decreases strongly with height. Thus the proportion of water vapour is normally greatest near the surface and decreases with height.

Pressure

The pressure of the atmosphere is greatest at sea level and decreases with altitude. This is because the atmosphere is very nearly in hydrostatic equilibrium, so that the pressure equals the weight of air above a given point. The change in pressure with height can therefore be related to the density through the hydrostatic equation:[4]
 \frac{dp}{dz} = -\rho g_n = - \frac {mpg}{RT}
where p is the pressure, z is the altitude, \rho is the air density, g_n (or g) is the standard acceleration of gravity, m is the mean molar mass of dry air, R is the universal gas constant, and T is the temperature.
Since temperature in principle also depends on altitude, a second equation is needed to determine the pressure as a function of height, as discussed in the next section.
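If temperature is held constant, the hydrostatic equation can be integrated directly, giving exponential pressure decay with height. A minimal sketch under that (admittedly crude) isothermal assumption; the function name and default values are illustrative:

```python
import math

M = 0.0289644  # mean molar mass of dry air, kg/mol
G = 9.80665    # standard gravity, m/s^2
R = 8.31446    # universal gas constant, J/(mol*K)

def pressure_isothermal(z_m: float, p0: float = 101325.0,
                        temp_k: float = 288.15) -> float:
    """Pressure at height z_m (metres), from integrating
    dp/dz = -m p g / (R T) with T held constant:
    p(z) = p0 * exp(-m g z / (R T))."""
    return p0 * math.exp(-M * G * z_m / (R * temp_k))
```

With these numbers, pressure falls to roughly half its sea-level value by about 5.5 km, which is why the scale height of the lower atmosphere is often quoted as around 8 km.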

Temperature

This image shows the temperature trend in the Middle Troposphere as measured by a series of satellite-based instruments between January 1979 and December 2005. The middle troposphere is centered around 5 kilometers above the surface. Oranges and yellows dominate the troposphere image, indicating that the air nearest the Earth's surface warmed during the period. Source: [1]

The temperature of the troposphere generally decreases as altitude increases. The rate at which the temperature decreases, -dT/dz, is called the environmental lapse rate (ELR). Averaged over the whole layer, the ELR is simply the difference in temperature between the surface and the tropopause divided by the height of the tropopause. The reason for this temperature difference is that most absorption of the Sun's energy occurs at the ground, which then heats the lower levels of the atmosphere, while the radiation of heat to space occurs mainly from the top of the atmosphere; this process maintains the overall heat balance of the Earth.

As parcels of air in the atmosphere rise and fall, they also undergo changes in temperature for reasons described below. The rate of change of the temperature in the parcel may be less than or more than the ELR. When a parcel of air rises, it expands, because the pressure is lower at higher altitudes. As the air parcel expands, it pushes on the air around it, doing work; but generally it does not gain heat in exchange from its environment, because its thermal conductivity is low (such a process is called adiabatic). Since the parcel does work and gains no heat, it loses energy, and so its temperature decreases. (The reverse, of course, will be true for a sinking parcel of air.) [3]

Since the heat exchanged dQ is related to the entropy change dS by dQ=T dS, the equation governing the temperature as a function of height for a thoroughly mixed atmosphere is
 \frac{dS}{dz} = 0
where S is the entropy. The rate at which temperature decreases with height under such conditions is called the adiabatic lapse rate.

For dry air, which is approximately an ideal gas, we can proceed further. The adiabatic equation for an ideal gas is [5]
 p(z)T(z)^{-\frac{\gamma}{\gamma-1}}=constant
where \gamma is the heat capacity ratio (\gamma=7/5, for air). Combining with the equation for the pressure, one arrives at the dry adiabatic lapse rate,[6]

\frac{dT}{dz}=- \frac{mg}{R} \frac{\gamma-1}{\gamma}=-9.8^{\circ}\mathrm{C}/\mathrm{km}
If the air contains water vapor, then cooling of the air can cause the water to condense, and the behavior is no longer that of an ideal gas. If the air is at the saturated vapor pressure, then the rate at which temperature drops with height is called the saturated adiabatic lapse rate. More generally, the actual rate at which the temperature drops with altitude is called the environmental lapse rate. In the troposphere, the average environmental lapse rate is a drop of about 6.5 °C for every 1 km (1,000 meters) in increased height.[3]
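The dry adiabatic lapse rate formula above can be evaluated numerically; a minimal sketch (the variable names are illustrative):

```python
M = 0.0289644  # mean molar mass of dry air, kg/mol
G = 9.80665    # standard gravity, m/s^2
R = 8.31446    # universal gas constant, J/(mol*K)
GAMMA = 7 / 5  # heat capacity ratio for (diatomic) air

# dT/dz = -(m g / R) * (gamma - 1) / gamma, from the formula above.
# Evaluated in K per km (drop the sign to quote it as a positive rate):
dalr = (M * G / R) * (GAMMA - 1) / GAMMA * 1000.0
```

This evaluates to roughly 9.8 K per km, matching the value quoted in the formula, and is noticeably steeper than the average environmental lapse rate of about 6.5 °C per km, because real tropospheric air carries condensing water vapour.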

The environmental lapse rate (the actual rate at which temperature drops with height, dT/dz) is not usually equal to the adiabatic lapse rate (or correspondingly, dS/dz \ne 0). If the upper air is warmer than predicted by the adiabatic lapse rate (dS/dz > 0), then when a parcel of air rises and expands, it will arrive at the new height at a lower temperature than its surroundings. In this case, the air parcel is denser than its surroundings, so it sinks back to its original height, and the air is stable against being lifted. If, on the contrary, the upper air is cooler than predicted by the adiabatic lapse rate, then when the air parcel rises to its new height it will have a higher temperature and a lower density than its surroundings, and will continue to accelerate upward.[3][4]
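The stability argument in the paragraph above amounts to a comparison of two numbers. A minimal sketch for dry air (the function name is illustrative):

```python
DALR = 9.8  # dry adiabatic lapse rate, degC per km (from the text)

def stability(elr: float, dalr: float = DALR) -> str:
    """Classify a dry atmosphere by comparing the environmental lapse
    rate (degC per km) with the adiabatic one. If elr < dalr, a rising
    parcel cools faster than its surroundings, arrives colder and
    denser, and sinks back: the air is stable against lifting."""
    if elr < dalr:
        return "stable"
    if elr > dalr:
        return "unstable"
    return "neutral"
```

For the average tropospheric value quoted above, `stability(6.5)` returns "stable", while a steep lapse rate such as 11 °C per km would be classified "unstable" and favour convection.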

The troposphere is heated from below by latent heat, longwave radiation, and sensible heat. Surplus heating and vertical expansion of the troposphere occur in the tropics. At middle latitudes, tropospheric temperatures decrease from an average of 15 °C at sea level to about −55 °C at the tropopause. At the poles, tropospheric temperatures only decrease from an average of 0 °C at sea level to about −45 °C at the tropopause. At the equator, tropospheric temperatures decrease from an average of 20 °C at sea level to about −70 to −75 °C at the tropopause. The troposphere is thus thinner at the poles and thicker at the equator: on average, the tropical tropopause lies roughly 7 kilometres higher than the tropopause at the poles.[7]

Tropopause

The tropopause is the boundary region between the troposphere and the stratosphere.
Measuring the temperature change with height through the troposphere and the stratosphere identifies the location of the tropopause. In the troposphere, temperature decreases with altitude. In the stratosphere, however, the temperature remains constant for a while and then increases with altitude. The region of the atmosphere where the lapse rate changes from positive (in the troposphere) to negative (in the stratosphere), is defined as the tropopause.[3] Thus, the tropopause is an inversion layer, and there is little mixing between the two layers of the atmosphere.

Atmospheric flow

The flow of the atmosphere generally moves from west to east. This flow, however, is often interrupted, creating a more north-to-south or south-to-north flow. In meteorology these scenarios are described as zonal or meridional, respectively. These terms, however, tend to be used in reference to localised areas of the atmosphere (at a synoptic scale). A fuller explanation of the flow of the atmosphere around the Earth as a whole can be found in the three-cell model.

Zonal Flow

A zonal flow regime is the meteorological term meaning that the general flow pattern is west to east along the Earth's latitude lines, with weak shortwaves embedded in the flow.[7] The use of the word "zone" refers to the flow being along the Earth's latitudinal "zones". This pattern can buckle and thus become a meridional flow.

Meridional flow

Meridional Flow pattern of October 23, 2003. Note the amplified troughs and ridges in this 500 hPa height pattern.

When the zonal flow buckles, the atmosphere can flow in a more longitudinal (or meridional) direction, and thus the term "meridional flow" arises. Meridional flow patterns feature strong, amplified troughs and ridges, with more north-south flow in the general pattern than west-to-east flow.[8]

Three-cell model

The three-cell model attempts to describe the actual flow of the Earth's atmosphere as a whole. It divides the Earth into the tropical (Hadley cell), mid-latitude (Ferrel cell), and polar (polar cell) regions, dealing with energy flow and global circulation. Its fundamental principle is balance: the energy that the Earth absorbs from the Sun each year equals the energy it loses back into space. This balance, however, is not precisely maintained at each latitude, because the strength of the sunlight reaching each "cell" varies with the tilt of the Earth's axis relative to its orbit. The result mirrors the pattern in the ocean: the tropics do not keep getting warmer, because the atmosphere transports warm air poleward and cold air equatorward, distributing heat and moisture around the planet.[9]

Synoptic scale observations and concepts

Forcing

Forcing is a term used by meteorologists to describe the situation where a change or an event in one part of the atmosphere causes a strengthening change in another part of the atmosphere. It is usually used to describe connections between upper, middle or lower levels (such as upper-level divergence causing lower level convergence in cyclone formation), but can sometimes also be used to describe such connections over distance rather than height alone. In some respects, teleconnections could be considered a type of forcing.

Divergence and convergence

An area of convergence is one in which the total mass of air is increasing with time, resulting in an increase in pressure at locations below the convergence level (recall that atmospheric pressure is just the total weight of air above a given point). Divergence is the opposite of convergence - an area where the total mass of air is decreasing with time, resulting in falling pressure in regions below the area of divergence. Where divergence is occurring in the upper atmosphere, there will be air coming in to try to balance the net loss of mass (this is called the principle of mass conservation), and there is a resulting upward motion (positive vertical velocity). Another way to state this is to say that regions of upper air divergence are conducive to lower level convergence, cyclone formation, and positive vertical velocity. Therefore, identifying regions of upper air divergence is an important step in forecasting the formation of a surface low pressure area.
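The mass-conservation link between upper-level divergence and rising motion can be sketched with the incompressible continuity equation, dw/dz = -(horizontal divergence). The numbers below are hypothetical but of typical synoptic magnitude; the function name is illustrative:

```python
def w_below_divergence(div_s: float, depth_m: float,
                       w_top: float = 0.0) -> float:
    """Vertical velocity (m/s) at the base of a layer with uniform
    horizontal divergence div_s (1/s), integrating dw/dz = -div
    downward from the layer top (w_top ~ 0 near the tropopause)."""
    return w_top + div_s * depth_m

# A hypothetical upper-level divergence of 1e-5 per second acting
# over a 3 km deep layer implies rising air feeding it from below:
w = w_below_divergence(1e-5, 3000.0)  # positive w: upward motion
```

A few centimetres per second of sustained ascent is ample to drive the lower-level convergence and surface pressure falls described above, which is why forecasters look for upper-air divergence when anticipating cyclone formation.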

Nucleosynthesis

From Wikipedia, the free encyclopedia

Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons, primarily protons and neutrons. The first nuclei were formed about three minutes after the Big Bang, through the process called Big Bang nucleosynthesis. The hydrogen and helium formed then became the raw material of the first stars, and this era is responsible for the present hydrogen/helium ratio of the cosmos.

With the formation of stars, heavier nuclei were created from hydrogen and helium by stellar nucleosynthesis, a process that continues today. Some of these elements, particularly those lighter than iron, continue to be delivered to the interstellar medium when low mass stars eject their outer envelope before they collapse to form white dwarfs. The remains of their ejected mass form the planetary nebulae observable throughout our galaxy.

Supernova nucleosynthesis within exploding stars, by the fusion of carbon and oxygen, is responsible for the abundances of elements between magnesium (atomic number 12) and nickel (atomic number 28).[1] Supernova nucleosynthesis is also thought to be responsible for the creation of rarer elements heavier than iron and nickel, in the last few seconds of a type II supernova event. The synthesis of these heavier elements is endothermic: it absorbs energy drawn from that produced during the supernova explosion. Some of those elements are created by the absorption of multiple neutrons (the r-process) within a period of a few seconds during the explosion. The elements formed in supernovas include the heaviest elements known, such as the long-lived elements uranium and thorium.

Cosmic ray spallation, caused when cosmic rays impact the interstellar medium and fragment larger atomic species, is a significant source of the lighter nuclei, particularly 3He, 9Be and 10,11B, that are not created by stellar nucleosynthesis.

In addition to the fusion processes responsible for the growing abundances of elements in the universe, a few minor natural processes continue to produce very small numbers of new nuclides on Earth. These nuclides contribute little to their abundances, but may account for the presence of specific new nuclei. These nuclides are produced via radiogenesis (decay) of long-lived, heavy, primordial radionuclides such as uranium and thorium. Cosmic ray bombardment of elements on Earth also contributes to the presence of rare, short-lived atomic species called cosmogenic nuclides.

Timeline

It is thought that the primordial nucleons themselves were formed from the quark–gluon plasma during the Big Bang as it cooled below two trillion degrees. A few minutes afterward, starting with only protons and neutrons, nuclei up to lithium and beryllium (both with mass number 7) were formed, but the abundances of other elements dropped sharply with growing atomic mass. Some boron may have been formed at this time, but the process stopped before significant carbon could be formed, as this element requires a far higher product of helium density and time than were present in the short nucleosynthesis period of the Big Bang. That fusion process essentially shut down at about 20 minutes, due to drops in temperature and density as the universe continued to expand. This first process, Big Bang nucleosynthesis, was the first type of nucleogenesis to occur in the universe.

The subsequent nucleosynthesis of the heavier elements requires the extreme temperatures and pressures of stars and supernovas. These processes began as hydrogen and helium from the Big Bang collapsed into the first stars, about 500 million years after the Big Bang. Star formation has occurred continuously in the galaxy since that time. The elements found on Earth, the so-called primordial elements, were created prior to Earth's formation by stellar nucleosynthesis and by supernova nucleosynthesis. They range in atomic number from Z=6 (carbon) to Z=94 (plutonium). Synthesis of these elements occurred either by nuclear fusion (including both rapid and slow multiple neutron capture) or, to a lesser degree, by nuclear fission followed by beta decay.

A star gains heavier elements by combining its lighter nuclei, hydrogen, deuterium, beryllium, lithium, and boron, which were found in the initial compositions of the star. Interstellar gas therefore contains declining abundances of these light elements, which are present only by virtue of their nucleosynthesis during the Big Bang. Larger quantities of these lighter elements in the present universe are therefore thought to have been restored through billions of years of cosmic ray (mostly high-energy proton) mediated breakup of heavier elements in interstellar gas and dust. The fragments of these cosmic-ray collisions include the light elements Li, Be and B.

History of nucleosynthesis theory

The first ideas on nucleosynthesis were simply that the chemical elements were created at the beginning of the universe, but no rational physical scenario for this could be identified. Gradually it became clear that hydrogen and helium are much more abundant than any of the other elements. All the rest constitute less than 2% of the mass of the solar system, and of other star systems as well. At the same time it was clear that oxygen and carbon were the next two most common elements, and also that there was a general trend toward high abundance of the light elements, especially those composed of whole numbers of helium-4 nuclei.

Arthur Stanley Eddington first suggested in 1920 that stars obtain their energy by fusing hydrogen into helium. This idea was not generally accepted at first, as the nuclear mechanism was not understood. In the years immediately before World War II, Hans Bethe first elucidated the nuclear mechanisms by which hydrogen is fused into helium. However, neither of these early works on stellar power addressed the origin of the elements heavier than helium.

Fred Hoyle's original work on the nucleosynthesis of heavier elements in stars occurred just after World War II.[2] His work explained the production of all heavier elements, starting from hydrogen. Hoyle proposed that hydrogen is continuously created in the universe from vacuum and energy, without need for a universal beginning.

Hoyle's work explained how the abundances of the elements increased with time as the galaxy aged. Subsequently, Hoyle's picture was expanded during the 1960s by contributions from William A. Fowler, Alastair G. W. Cameron, and Donald D. Clayton, followed by many others. The creative 1957 review paper by E. M. Burbidge, G. R. Burbidge, Fowler and Hoyle (see Ref. list) is a well-known summary of the state of the field in 1957. That paper defined new processes for changing one heavy nucleus into others within stars, processes that could be documented by astronomers.

The Big Bang itself had been proposed in 1931, long before this period, by Georges Lemaître, a Belgian physicist and Roman Catholic priest, who suggested that the evident expansion of the Universe in time required that the Universe, if contracted backwards in time, would continue to do so until it could contract no further. This would bring all the mass of the Universe to a single point, a "primeval atom", to a state before which time and space did not exist. Hoyle later gave Lemaître's model the derisive term of Big Bang, not realizing that Lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present, not only in stars but also in interstellar gas. As it happened, both Lemaître and Hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.

The goal of the theory of nucleosynthesis is to understand the vastly differing abundances of the chemical elements and their several isotopes from the perspective of natural processes. The primary stimulus to the development of this theory was the shape of a plot of the abundances versus the atomic number of the elements. Those abundances, when plotted on a graph as a function of atomic number, have a jagged sawtooth structure that varies by factors of up to ten million. A very influential stimulus to nucleosynthesis research was an abundance table created by Hans Suess and Harold Urey that was based on the unfractionated abundances of the non-volatile elements found within unevolved meteorites.[3] Such a graph of the abundances is displayed on a logarithmic scale below, where the dramatically jagged structure is visually suppressed by the many powers of ten spanned in the graph. See Handbook of Isotopes in the Cosmos for more data and discussion of abundances of the isotopes.[4]
Abundances of the chemical elements in the Solar system. Hydrogen and helium are most common, residuals within the paradigm of the Big Bang.[5] The next three elements (Li, Be, B) are rare because they are poorly synthesized in the Big Bang and also in stars. The two general trends in the remaining stellar-produced elements are: (1) an alternation of abundance of elements according to whether they have even or odd atomic numbers, and (2) a general decrease in abundance as elements become heavier. Within this trend is a peak at the abundances of iron and nickel, which is especially visible on a logarithmic graph spanning fewer powers of ten, say between logA=2 (A=100) and logA=6 (A=1,000,000).

Processes

There are a number of astrophysical processes which are believed to be responsible for nucleosynthesis. The majority of these occur in shells within stars, and the chain of nuclear fusion processes is known as hydrogen burning (via the proton–proton chain or the CNO cycle), helium burning, carbon burning, neon burning, oxygen burning and silicon burning. These processes can create elements up to and including iron and nickel, the region of nucleosynthesis within which the isotopes with the highest binding energy per nucleon are created. Heavier elements can be assembled within stars by a neutron capture process known as the s-process, or in explosive environments, such as supernovae, by a number of other processes. Some of those other processes include the r-process, which involves rapid neutron captures, the rp-process, and the p-process (sometimes known as the gamma process), which involves photodisintegration of existing nuclei.

The major types of nucleosynthesis

Periodic table showing the origin of elements

Big Bang nucleosynthesis

Big Bang nucleosynthesis occurred within the first three minutes of the beginning of the universe and is responsible for much of the abundance of 1H (protium), 2H (D, deuterium), 3He (helium-3), and 4He (helium-4) in the universe. Although 4He continues to be produced by stellar fusion and alpha decays, and trace amounts of 1H continue to be produced by spallation and certain types of radioactive decay, most of the mass of these isotopes in the universe is thought to have been produced in the Big Bang. The nuclei of these elements, along with some 7Li and 7Be, are considered to have been formed between 100 and 300 seconds after the Big Bang, when the primordial quark–gluon plasma froze out to form protons and neutrons. Because of the very short period in which nucleosynthesis occurred before it was stopped by expansion and cooling (about 20 minutes), no elements heavier than beryllium (or possibly boron) could be formed. Elements formed during this time were in the plasma state, and did not cool to the state of neutral atoms until much later.[citation needed]
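The dominance of 4He among the Big Bang products can be estimated with a standard back-of-envelope calculation (not taken from this article): if essentially all the neutrons that survive to the end of nucleosynthesis end up bound in 4He, the helium mass fraction follows directly from the neutron-to-proton ratio.

```python
def helium_mass_fraction(n_to_p: float) -> float:
    """Primordial 4He mass fraction, assuming essentially all surviving
    neutrons are bound into 4He: each 4He uses 2 neutrons and 2 protons,
    so Y = 2(n/p) / (1 + n/p) by mass (neutron and proton masses taken
    as equal)."""
    return 2.0 * n_to_p / (1.0 + n_to_p)

# With the commonly quoted freeze-out-plus-decay ratio n/p of about 1/7:
Y = helium_mass_fraction(1.0 / 7.0)   # about 0.25
```

The result, roughly 25% helium by mass, agrees well with the helium abundance observed in stars and interstellar gas, one of the key successes of Big Bang nucleosynthesis.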

Chief nuclear reactions responsible for the relative abundances of light atomic nuclei observed throughout the universe.

Stellar nucleosynthesis

Stellar nucleosynthesis is the nuclear process by which new nuclei are produced. It occurs naturally in stars during stellar evolution. It is responsible for the galactic abundances of elements from carbon to iron. Stars are thermonuclear furnaces in which H and He are fused into heavier nuclei by increasingly high temperatures as the composition of the core evolves.[6] Of particular importance is carbon, because its formation from He is a bottleneck in the entire process. Carbon is produced by the triple-alpha process in all stars. Carbon is also the main element that causes the release of free neutrons within stars, giving rise to the s-process, in which the slow absorption of neutrons converts iron into elements heavier than iron and nickel.[7]
The products of stellar nucleosynthesis are generally dispersed into the interstellar gas through mass-loss episodes and the stellar winds of low-mass stars. The mass-loss events can be witnessed in the planetary-nebula phase of low-mass star evolution, and in the explosive ending, called a supernova, of stars with more than eight times the mass of the Sun.

The first direct proof that nucleosynthesis occurs in stars was the astronomical observation that interstellar gas has become enriched with heavy elements as time passed. As a result, stars that were born from it late in the galaxy formed with much higher initial heavy-element abundances than those that had formed earlier. The detection of technetium in the atmosphere of a red giant star in 1952,[8] by spectroscopy, provided the first evidence of nuclear activity within stars. Because technetium is radioactive, with a half-life much less than the age of the star, its abundance must reflect its recent creation within that star. Equally convincing evidence of the stellar origin of heavy elements is the large overabundance of specific stable elements found in the atmospheres of asymptotic giant branch stars. Observation of barium abundances some 20–50 times greater than found in unevolved stars is evidence of the operation of the s-process within such stars. Many modern proofs of stellar nucleosynthesis are provided by the isotopic compositions of stardust, solid grains that have condensed from the gases of individual stars and which have been extracted from meteorites. Stardust is one component of cosmic dust, and is frequently called presolar grains. The measured isotopic compositions in stardust grains demonstrate many aspects of nucleosynthesis within the stars from which the grains condensed during the stars' late-life mass-loss episodes.[9]

Explosive nucleosynthesis

Supernova nucleosynthesis occurs in the energetic environment of supernovae, in which the elements between silicon and nickel are synthesized in a quasiequilibrium[10] established during fast fusion that attaches, by reciprocating balanced nuclear reactions, to 28Si. Quasiequilibrium can be thought of as almost equilibrium except for a high abundance of the 28Si nuclei in the feverishly burning mix. This concept[11] was the most important discovery in nucleosynthesis theory of the intermediate-mass elements since Hoyle's 1954 paper, because it provided an overarching understanding of the abundant and chemically important elements between silicon (A=28) and nickel (A=60). It replaced the incorrect, although much cited, alpha process of the B2FH paper, which inadvertently obscured Hoyle's better 1954 theory.[12] Further nucleosynthesis processes can occur, in particular the r-process (rapid process) described by the B2FH paper and first calculated by Seeger, Fowler and Clayton,[13] in which the most neutron-rich isotopes of elements heavier than nickel are produced by rapid absorption of free neutrons. The creation of free neutrons by electron capture during the rapid compression of the supernova core, along with the assembly of some neutron-rich seed nuclei, makes the r-process a primary process: one that can occur even in a star of pure H and He. This is in contrast to the B2FH designation of the process as a secondary process. This promising scenario, though generally supported by supernova experts, has yet to achieve a totally satisfactory calculation of r-process abundances. The primary r-process has been confirmed by astronomers who have observed old stars, born when galactic metallicity was still small, that nonetheless contain their complement of r-process nuclei, thereby demonstrating that the metallicity is a product of an internal process.
The r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
The rp-process (rapid proton) involves the rapid absorption of free protons as well as neutrons, but its role and its existence are less certain.

Explosive nucleosynthesis occurs too rapidly for radioactive decay to decrease the number of neutrons, so that many abundant isotopes with equal and even numbers of protons and neutrons are synthesized by the silicon quasiequilibrium process.[14] During this process, the burning of oxygen and silicon fuses nuclei that themselves have equal numbers of protons and neutrons to produce nuclides which consist of whole numbers of helium nuclei, up to 15 (representing 60Ni). Such multiple-alpha-particle nuclides are totally stable up to 40Ca (made of 10 helium nuclei), but heavier nuclei with equal and even numbers of protons and neutrons are tightly bound but unstable. The quasiequilibrium produces radioactive isobars 44Ti, 48Cr, 52Fe, and 56Ni, which (except 44Ti) are created in abundance but decay after the explosion and leave the most stable isotope of the corresponding element at the same atomic weight. The most abundant and extant isotopes of elements produced in this way are 48Ti, 52Cr, and 56Fe. These decays are accompanied by the emission of gamma-rays (radiation from the nucleus), whose spectroscopic lines can be used to identify the isotope created by the decay. The detection of these emission lines was an important early product of gamma-ray astronomy.[15]

The most convincing proof of explosive nucleosynthesis in supernovae occurred in 1987 when those gamma-ray lines were detected emerging from supernova 1987A. Gamma-ray lines identifying the decays of 56Co and 57Co nuclei, whose radioactive half-lives limit their age to about a year, proved that these radioactive nuclei had been freshly synthesized in the explosion. This nuclear astronomy observation was predicted in 1969[16] as a way to confirm explosive nucleosynthesis of the elements, and that prediction played an important role in the planning for NASA's Compton Gamma-Ray Observatory.

Other proofs of explosive nucleosynthesis are found within the stardust grains that condensed within the interiors of supernovae as they expanded and cooled. Stardust grains are one component of cosmic dust. In particular, radioactive 44Ti was measured to be very abundant within supernova stardust grains at the time they condensed during the supernova expansion.[17] This confirmed a 1975 prediction of the identification of supernova stardust (SUNOCONs), which became part of the pantheon of presolar grains. Other unusual isotopic ratios within these grains reveal many specific aspects of explosive nucleosynthesis.

Cosmic ray spallation

The cosmic-ray spallation process reduces the atomic weight of interstellar matter through impacts with cosmic rays, producing some of the lightest elements present in the universe (though not a significant amount of deuterium). Most notably, spallation is believed to be responsible for the generation of almost all of the 3He and the elements lithium, beryllium, and boron, although some 7Li and 7Be are thought to have been produced in the Big Bang. The spallation process results from the impact of cosmic rays (mostly fast protons) against the interstellar medium. These impacts fragment the carbon, nitrogen, and oxygen nuclei present. The process results in the light elements beryllium, boron, and lithium being present in the cosmos at much greater abundances than they are within solar atmospheres. The light elements 1H and 4He are not products of spallation and are represented in the cosmos with approximately their primordial abundance.
Beryllium and boron are not significantly produced by stellar fusion processes, due to the instability of any 8Be formed from two 4He nuclei.

Empirical evidence

Theories of nucleosynthesis are tested by calculating isotope abundances and comparing them with observed abundances. Isotope abundances are typically calculated from the transition rates between isotopes in a network. Often these calculations can be simplified, as a few key reactions control the rate of the others.
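Such a network calculation can be sketched in miniature. The chain and its rates below are hypothetical, chosen only to illustrate the bookkeeping; a real network couples hundreds of isotopes through measured reaction rates.

```python
# Toy nuclear reaction network: a two-step decay chain
#   parent -> daughter -> stable
# integrated with explicit Euler steps. The rates l1, l2 are invented
# for illustration, not taken from any nuclear data table.
l1, l2 = 1e-3, 5e-4          # decay rates (1/s), hypothetical
parent, daughter, stable = 1.0, 0.0, 0.0
dt, t_end = 10.0, 200_000.0
for _ in range(int(t_end / dt)):
    d_parent = -l1 * parent
    d_daughter = l1 * parent - l2 * daughter
    d_stable = l2 * daughter
    parent += dt * d_parent
    daughter += dt * d_daughter
    stable += dt * d_stable

# Total abundance is conserved; after many half-lives nearly everything
# has flowed into the stable end of the chain.
total = parent + daughter + stable
```

Because the transition rates appear with opposite signs in the source and destination nuclides, the total abundance is conserved at every step, which is a useful sanity check on any such network.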

Minor mechanisms and processes

Very small amounts of certain nuclides are produced on Earth by artificial means. These are our primary source of, for example, technetium. However, some nuclides are also produced by a number of natural means that have continued to operate after the primordial elements were in place. These often act to produce new elements in ways that can be used to date rocks or to trace the source of geological processes. Although these processes do not produce the nuclides in abundance, they are the entire source of the existing natural supply of those nuclides.

These mechanisms include:
  • Radioactive decay may lead to radiogenic daughter nuclides. The nuclear decay of many long-lived primordial isotopes, especially uranium-235, uranium-238, and thorium-232 produce many intermediate daughter nuclides, before they too finally decay to isotopes of lead. The Earth's natural supply of elements like radon and polonium is via this mechanism. The atmosphere's supply of argon-40 is due mostly to the radioactive decay of potassium-40 in the time since the formation of the Earth. Little of the atmospheric argon is primordial. Helium-4 is produced by alpha-decay, and the helium trapped in Earth's crust is also mostly non-primordial. In other types of radioactive decay, such as cluster decay, larger species of nuclei are ejected (for example, neon-20), and these eventually become newly formed stable atoms.
  • Radioactive decay may lead to spontaneous fission. This is not cluster decay, as the fission products may be split among nearly any type of atom. Uranium-235 and uranium-238 are both primordial isotopes that undergo spontaneous fission. Natural technetium and promethium are produced in this manner.
  • Nuclear reactions. Naturally-occurring nuclear reactions powered by radioactive decay give rise to so-called nucleogenic nuclides. This process happens when an energetic particle from a radioactive decay, often an alpha particle, reacts with a nucleus of another atom to change the nucleus into another nuclide. This process may also cause the production of further subatomic particles, such as neutrons. Neutrons can also be produced in spontaneous fission and by neutron emission. These neutrons can then go on to produce other nuclides via neutron-induced fission, or by neutron capture. For example, some stable isotopes such as neon-21 and neon-22 are produced by several routes of nucleogenic synthesis, and thus only part of their abundance is primordial.
  • Nuclear reactions due to cosmic rays. By convention, these reaction-products are not termed "nucleogenic" nuclides, but rather cosmogenic nuclides. Cosmic rays continue to produce new elements on Earth by the same cosmogenic processes discussed above that produce primordial beryllium and boron. One important example is carbon-14, produced from nitrogen-14 in the atmosphere by cosmic rays. Iodine-129 is another example.
Beyond these mechanisms, it is postulated that neutron star collisions are a main source of elements heavier than iron.[18]

High-efficiency spray-on solar power tech can turn any surface into a cheap solar cell


  • Posted August 2, 2014 at 9:02 am
  • Original link:  http://www.extremetech.com/extreme/187416-high-efficiency-spray-on-solar-power-tech-can-turn-any-surface-into-a-cheap-solar-cell
Solar Cells
Solar panels suffer from two fundamental problems that have persisted even after decades of research: they’re not very efficient, and they cost a lot to produce. At least one of these problems has to be solved before solar power can overtake cheap energy sources like fossil fuels, and some scientists have pinned their hopes on a common mineral called perovskite. This is an organometal with peculiar light-absorbing properties, and a team of researchers from the University of Sheffield say they’ve figured out how to create high-efficiency perovskite solar cells with a spray-painting process. Yes, spray-on solar panels might actually happen.

Perovskite is a crystalline organometal made mostly of calcium titanate, and is found in deposits all over the world. It was first discovered over 150 years ago, but only recently have scientists started investigating its use as a solar panel semiconductor replacement for silicon. It certainly makes sense if we can work out the kinks. Perovskite is considerably cheaper to obtain and process than silicon, and the light absorbing layer can be incredibly thin — about 1 micrometer at minimum versus at least 180 micrometers for silicon. That’s why the spray-on solar panel tech demonstrated by the University of Sheffield is plausible as a real-world solution.

Nozzle

That raises the question, how efficient is this spray-on solar cell? Right now the researchers have managed to eke out 11% efficiency from a thin layer of perovskite. Traditionally manufactured solar cells based on the mineral have reached as high as 19%, and the spray-on variety is expected to reach similar levels eventually. That might not sound very impressive, but nearly 20% efficiency is rather good for an experimental solar panel. The best silicon cells manage only about 25% efficiency, after all. Other materials claim higher numbers, but they aren’t nearly ready for use.

Spray-on

The breakthrough here is in the process of applying perovskite in a thin uniform layer so it can efficiently absorb light on almost any surface. A layer of this material could be used as the basis for solar panels on cars or mobile devices that don’t have completely flat surfaces for mounting standard solar panels — the structure and properties of crystalline silicon simply don’t allow for very much flexibility. A solar panel on your phone? Sure, why not? However, the University of Sheffield team cautions the efficiency of spray-on perovskite will decrease a bit on curved surfaces. [DOI: 10.1039/C4EE01546K - "Efficient planar heterojunction mixed-halide perovskite solar cells deposited via spray-deposition"]

The spray-on process has several key benefits in addition to the obvious non-flat solar cells. Most importantly, it should be incredibly easy to scale up — or down for that matter. The same nozzle can be used to manufacture a small solar panel for personal electronics and a large one for a car. It’s just about the number of passes it takes to coat the surface. The perovskite solution used can also be mass produced cheaply and is easier to handle than silicon. This all combines to lower the potential cost of solar power considerably.

Perovskite is nearing the point that it could actually supplant silicon as the standard for solar panel tech. In just a few years these panels have gone from low single digit efficiencies to nearly matching silicon. This might finally be the breakthrough we’ve been waiting for to move renewable energy forward.

Cosmic microwave background


From Wikipedia, the free encyclopedia

The cosmic microwave background (CMB) is the thermal radiation assumed to be left over from the "Big Bang" of cosmology. In older literature, the CMB is also variously known as cosmic microwave background radiation (CMBR) or "relic radiation." The CMB is a cosmic background radiation that is fundamental to observational cosmology because it is the oldest light in the universe, dating to the epoch of recombination. With a traditional optical telescope, the space between stars and galaxies (the background) is completely dark. However, a sufficiently sensitive radio telescope shows a faint background glow, almost exactly the same in all directions, that is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the radio spectrum. The accidental discovery of CMB in 1964 by American radio astronomers Arno Penzias and Robert Wilson[1][2] was the culmination of work initiated in the 1940s, and earned the discoverers the 1978 Nobel Prize.
The CMB is a snapshot of the oldest light in our Universe, imprinted on the sky when the Universe was just 380,000 years old. It shows tiny temperature fluctuations that correspond to regions of slightly different densities, representing the seeds of all future structure: the stars and galaxies of today.[3]
The CMB is well explained as radiation left over from an early stage in the development of the universe, and its discovery is considered a landmark test of the Big Bang model of the universe.
When the universe was young, before the formation of stars and planets, it was denser, much hotter, and filled with a uniform glow from a white-hot fog of hydrogen plasma. As the universe expanded, both the plasma and the radiation filling it grew cooler. When the universe cooled enough, protons and electrons combined to form neutral atoms. These atoms could no longer absorb the thermal radiation, and so the universe became transparent instead of being an opaque fog. Cosmologists refer to the time period when neutral atoms first formed as the recombination epoch, and the event shortly afterwards when photons started to travel freely through space rather than constantly being scattered by electrons and protons in plasma is referred to as photon decoupling. The photons that existed at the time of photon decoupling have been propagating ever since, though growing fainter and less energetic, since the expansion of space causes their wavelength to increase over time (and wavelength is inversely proportional to energy according to Planck's relation). This is the source of the alternative term relic radiation. The surface of last scattering refers to the set of points in space at the right distance from us so that we are now receiving photons originally emitted from those points at the time of photon decoupling.

Precise measurements of the CMB are critical to cosmology, since any proposed model of the universe must explain this radiation. The CMB has a thermal black body spectrum at a temperature of 2.72548±0.00057 K.[4] The spectral radiance dEν/dν peaks at 160.2 GHz, in the microwave range of frequencies. (Alternatively if spectral radiance is defined as dEλ/dλ then the peak wavelength is 1.063 mm.) The glow is very nearly uniform in all directions, but the tiny residual variations show a very specific pattern, the same as that expected of a fairly uniformly distributed hot gas that has expanded to the current size of the universe. In particular, the spectral radiance at different angles of observation in the sky contains small anisotropies, or irregularities, which vary with the size of the region examined. They have been measured in detail, and match what would be expected if small thermal variations, generated by quantum fluctuations of matter in a very tiny space, had expanded to the size of the observable universe we see today. This is a very active field of study, with scientists seeking both better data (for example, the Planck spacecraft) and better interpretations of the initial conditions of expansion. Although many different processes might produce the general form of a black body spectrum, no model other than the Big Bang has yet explained the fluctuations. As a result, most cosmologists consider the Big Bang model of the universe to be the best explanation for the CMB.
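The two quoted peak values can be cross-checked with Wien's displacement law. The constants below are standard CODATA values; the factor 2.821439 is the root of the transcendental equation x = 3(1 − e⁻ˣ), which locates the peak of the frequency form of the Planck spectrum.

```python
# Cross-check of the quoted CMB spectral peaks via Wien's displacement law.
T = 2.72548                        # CMB temperature from the text, K
h = 6.62607015e-34                 # Planck constant, J s
k_B = 1.380649e-23                 # Boltzmann constant, J/K
nu_peak = 2.821439 * k_B * T / h   # peak of dE/d(nu), Hz (about 160.2e9)

# The dE/d(lambda) form peaks at a different point, via Wien's constant b,
# which is why the peak wavelength is not simply c / nu_peak.
b = 2.897772e-3                    # Wien displacement constant, m K
lam_peak = b / T                   # peak wavelength, m (about 1.063e-3)
```

That the two "peaks" disagree by nearly a factor of two in implied wavelength is not an error; it reflects the two different spectral-density conventions mentioned in the text.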

The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM model in particular. Moreover, the WMAP[5] and BICEP[6] experiments have observed coherence of these fluctuations on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred.[7][8]

On 17 March 2014, astronomers from the California Institute of Technology, the Harvard-Smithsonian Center for Astrophysics, Stanford University, and the University of Minnesota announced their detection of signature patterns of polarized light in the CMB, attributed to gravitational waves in the early universe, which if confirmed would provide strong evidence of cosmic inflation and the Big Bang.[9][10][11][12] However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported.[13][14][15]

Features

Graph of the cosmic microwave background spectrum measured by the FIRAS instrument on COBE, the most precisely measured black body spectrum in nature.[16] The error bars are too small to be seen even in an enlarged image, and it is impossible to distinguish the observed data from the theoretical curve.

The cosmic microwave background radiation is an emission of uniform, black body thermal energy coming from all parts of the sky. The radiation is isotropic to roughly one part in 100,000: the root mean square variations are only 18 µK,[17] after subtracting out a dipole anisotropy from the Doppler shift of the background radiation. The latter is caused by the peculiar velocity of the Earth relative to the comoving cosmic rest frame as the planet moves at some 371 km/s towards the constellation Leo. The CMB dipole as well as aberration at higher multipoles have been measured, consistent with galactic motion.[18]
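The size of the dipole follows directly from the quoted motion: to first order in v/c, the Doppler shift gives ΔT/T ≈ v/c. The resulting ~3.4 mK amplitude is the well-known dipole value, stated here as a check rather than taken from the text.

```python
# First-order Doppler estimate of the CMB dipole amplitude: Delta T ≈ T0 * v / c.
v = 371.0e3          # solar-system velocity through the CMB rest frame, m/s (from text)
c = 299_792_458.0    # speed of light, m/s
T0 = 2.725           # mean CMB temperature, K
dT = T0 * v / c      # dipole amplitude, K (about 3.4e-3 K, i.e. ~3.4 mK)
```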

In the Big Bang model for the formation of the universe, Inflationary Cosmology predicts that after about 10−37 seconds[19] the nascent universe underwent exponential growth that smoothed out nearly all inhomogeneities. The remaining inhomogeneities were caused by quantum fluctuations in the inflaton field that caused the inflation event.[20] After 10−6 seconds, the early universe was made up of a hot, interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old.[21] At this point, the photons no longer interacted with the now electrically neutral atoms and began to travel freely through space, resulting in the decoupling of matter and radiation.[22]

The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.7260 ± 0.0013 K,[4] it will continue to drop as the universe expands. The intensity of the radiation also corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred[23] and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background,[24] making up a fraction of roughly 6×10−5 of the total density of the universe.[25]

Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.[16]

History

The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman.[38][39][40] Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K, though two years later they re-estimated it at 28 K. This high estimate was due to a mis-estimate of the Hubble constant by Alfred Behr, which could not be replicated and was later abandoned in favor of the earlier estimate. Although there were several previous estimates of the temperature of space, these suffered from two flaws. First, they were measurements of the effective temperature of space and did not suggest that space was filled with a thermal Planck spectrum. Second, they depended on our being at a special spot at the edge of the Milky Way galaxy, and they did not suggest the radiation is isotropic. The estimates would have yielded very different predictions if Earth happened to be located elsewhere in the Universe.[41]
The 1948 results of Alpher and Herman were discussed in many physics settings through about 1955, when both left the Applied Physics Laboratory at Johns Hopkins University. The mainstream astronomical community, however, was not intrigued at the time by cosmology. Alpher and Herman's prediction was rediscovered by Yakov Zel'dovich in the early 1960s, and independently predicted by Robert Dicke at the same time. The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964.[42] In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background.[43] In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey, had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background,[44] with their instrument having an excess 4.2 K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke famously quipped: "Boys, we've been scooped."[1][45][46] A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery.[47]

The interpretation of the cosmic microwave background was a controversial issue in the 1960s with some proponents of the steady state theory arguing that the microwave background was the result of scattered starlight from distant galaxies.[48] Using this model, and based on the study of narrow absorption line features in the spectra of stars, the astronomer Andrew McKellar wrote in 1941: "It can be calculated that the 'rotational temperature' of interstellar space is 2 K."[26] However, during the 1970s the consensus was established that the cosmic microwave background is a remnant of the big bang. This was largely because new measurements at a range of frequencies showed that the spectrum was a thermal, black body spectrum, a result that the steady state model was unable to reproduce.[49]
The Holmdel Horn Antenna on which Penzias and Wilson discovered the cosmic microwave background.

Harrison, Peebles, Yu and Zel'dovich realized that the early universe would have to have inhomogeneities at the level of 10−4 or 10−5.[50][51][52] Rashid Sunyaev later calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background.[53] Increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983) gave upper limits on the large-scale anisotropy. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992.[54][55] The team received the Nobel Prize in physics for 2006 for this discovery.

Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma.[56] The first peak in the anisotropy was tentatively detected by the Toco experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments.[57][58][59] These measurements demonstrated that the geometry of the Universe is approximately flat, rather than curved.[60] They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation.[61]

The second peak was tentatively detected by several experiments before being definitively detected by WMAP, which has also tentatively detected the third peak.[62] As of 2010, several experiments to improve measurements of the polarization and the microwave background on small angular scales are ongoing. These include DASI, WMAP, BOOMERanG, QUaD, Planck spacecraft, Atacama Cosmology Telescope, South Pole Telescope and the QUIET telescope.

Relationship to the Big Bang

The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang theory. Measurements of the CMB have made the inflationary Big Bang theory the Standard Model of Cosmology.[63] The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory.[64]

The CMB essentially confirms the Big Bang theory. In the late 1940s Alpher and Herman reasoned that if there was a big bang, the expansion of the Universe would have stretched and cooled the high-energy radiation of the very early Universe into the microwave region and down to a temperature of about 5 K. They were slightly off with their estimate, but they had exactly the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to stumble into discovering that the microwave background was actually there.[65]

The CMB gives a snapshot of the universe when, according to standard cosmology, the temperature dropped enough to allow electrons and protons to form hydrogen atoms, thus making the universe transparent to radiation. When it originated some 380,000 years after the Big Bang—this time is generally known as the "time of last scattering" or the period of recombination or decoupling—the temperature of the universe was about 3000 K. This corresponds to an energy of about 0.25 eV, which is much less than the 13.6 eV ionization energy of hydrogen.[66]
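The quoted energy scale is simply the thermal energy k_B·T at the recombination temperature; multiplying out gives ≈ 0.26 eV, of the same order as the 0.25 eV quoted in the text and far below hydrogen's ionization energy.

```python
# Check of the quoted energy scale at recombination: E ≈ k_B * T.
k_B_eV = 8.617333e-5         # Boltzmann constant, eV/K
T_rec = 3000.0               # recombination temperature from the text, K
E_thermal = k_B_eV * T_rec   # about 0.26 eV, far below hydrogen's 13.6 eV
```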

Since decoupling, the temperature of the background radiation has dropped by a factor of roughly 1,100[67] due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, making the radiation's temperature inversely proportional to a parameter called the universe's scale factor. The temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the temperature of the CMB as observed in the present day (2.725 K or 0.235 meV):[68]
Tr = 2.725 K × (1 + z)
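Plugging the quoted factor-of-1,100 drop into this scaling recovers the ~3000 K recombination temperature mentioned earlier:

```python
# The factor-of-1,100 temperature drop corresponds to redshift z ≈ 1,099,
# since T scales as T0 * (1 + z).
T0 = 2.725                 # present-day CMB temperature, K
z_dec = 1099               # so that 1 + z = 1,100, the quoted drop factor
T_dec = T0 * (1 + z_dec)   # = 2997.5 K, i.e. about 3000 K
```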

Primary anisotropy

The power spectrum of the cosmic microwave background radiation temperature anisotropy in terms of the angular scale (or multipole moment). The data shown come from the WMAP (2006), Acbar (2004), Boomerang (2005), CBI (2004), and VSA (2004) instruments. Also shown is a theoretical model (solid line).

The anisotropy of the cosmic microwave background is divided into two types: primary anisotropy, due to effects which occur at the last scattering surface and before; and secondary anisotropy, due to effects such as interactions of the background radiation with hot gas or gravitational potentials, which occur between the last scattering surface and the observer.

The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon-baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons—moving at speeds much slower than light—makes them tend to collapse to form dense haloes. These two effects compete to create acoustic oscillations which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude.

The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The second peak (more precisely, the ratio of the odd peaks to the even peaks) determines the reduced baryon density.[69] The third peak can be used to get information about the dark matter density.[70]

The locations of the peaks also give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations—called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures.
  • Adiabatic density perturbations
the fractional additional density of each type of particle (baryons, photons ...) is the same. That is, if at one place there is 1% more energy in baryons than average, then at that place there is also 1% more energy in photons (and 1% more energy in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic.
  • Isocurvature density perturbations
in each place the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Cosmic strings would produce mostly isocurvature primordial perturbations.
The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (l-values of the peaks) are roughly in the ratio 1:3:5:..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1:2:3:...[71] Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings.

Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down:
  • the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe
  • the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring.
These effects contribute about equally to the suppression of anisotropies at small scales, and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies.

The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the Universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t+dt is given by P(t)dt.

The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) is maximum as 372,000 years.[72] This is often taken as the "time" at which the CMB formed. However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximum value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and when it was complete, the universe was roughly 487,000 years old.
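
The relationship between the quoted FWHM and the spread of the PVF can be illustrated with a toy model. The snippet below stands in a Gaussian for P(t), which is an assumption made here for illustration (the true visibility function is asymmetric); it uses the figures quoted above, a maximum at 372,000 years and an FWHM of 115,000 years.

```python
# Toy Gaussian stand-in for the photon visibility function P(t).
# The Gaussian shape is an illustrative assumption; only the peak time
# and FWHM are taken from the WMAP numbers quoted in the text.
import math

T_PEAK = 372_000.0  # yr, time at which P(t) is maximum
FWHM = 115_000.0    # yr, full width at half maximum
# For a Gaussian, FWHM = 2*sqrt(2 ln 2) * sigma:
SIGMA = FWHM / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def pvf(t: float) -> float:
    """Unnormalized Gaussian model of the visibility function."""
    return math.exp(-0.5 * ((t - T_PEAK) / SIGMA) ** 2)

# By construction, P falls to half its maximum FWHM/2 on either side of the peak:
half_lo = pvf(T_PEAK - FWHM / 2)
half_hi = pvf(T_PEAK + FWHM / 2)
```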

Late time anisotropy

Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions.

The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the Universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB:
  1. Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.)
  2. The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation.
Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift greater than 17. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (Population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes.

The time following the emission of the cosmic microwave background, and before the observation of the first stars, is semi-humorously referred to by cosmologists as the dark ages, and is a period which is under intense study by astronomers (see 21 centimeter radiation).

Two other effects that occurred between reionization and our observations of the cosmic microwave background, and that appear to cause anisotropies, are the Sunyaev–Zel'dovich effect, in which a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, in which photons from the cosmic microwave background are gravitationally redshifted or blueshifted by changing gravitational fields.

Polarization

The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-modes and B-modes, in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from Thomson scattering in a heterogeneous plasma. The B-modes are not sourced by standard scalar-type perturbations; instead they can be sourced by two mechanisms: gravitational lensing of E-modes, which was measured by the South Pole Telescope in 2013,[73] and gravitational waves arising from cosmic inflation.
Detecting the B-modes is extremely difficult, particularly as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal.[74]
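
The E/B split itself is a rotation of the Stokes Q/U Fourier modes. The sketch below shows a minimal flat-sky (small-patch) version of this decomposition; real analyses work on the sphere with spin-2 spherical harmonics, so this is an illustrative approximation, not a production pipeline.

```python
# Flat-sky E/B decomposition of Stokes Q/U maps (a minimal sketch under
# the small-patch approximation; full-sky analyses use spin-2 harmonics).
import numpy as np

def eb_decompose(Q: np.ndarray, U: np.ndarray):
    """Rotate the Fourier modes of (Q, U) by 2*phi_l to obtain E and B maps."""
    ny, nx = Q.shape
    lx = np.fft.fftfreq(nx)[None, :]
    ly = np.fft.fftfreq(ny)[:, None]
    phi = np.arctan2(ly, lx)  # angle of the Fourier wavevector
    Qf, Uf = np.fft.fft2(Q), np.fft.fft2(U)
    Ef = Qf * np.cos(2 * phi) + Uf * np.sin(2 * phi)
    Bf = -Qf * np.sin(2 * phi) + Uf * np.cos(2 * phi)
    return np.fft.ifft2(Ef).real, np.fft.ifft2(Bf).real

# Sanity check: build a pure E-mode map and verify that B comes out ~0.
rng = np.random.default_rng(0)
n = 64
Ef = np.fft.fft2(rng.standard_normal((n, n)))  # Hermitian by construction
lx = np.fft.fftfreq(n)[None, :]
ly = np.fft.fftfreq(n)[:, None]
phi = np.arctan2(ly, lx)
Q = np.fft.ifft2(Ef * np.cos(2 * phi)).real
U = np.fft.ifft2(Ef * np.sin(2 * phi)).real
E_rec, B_rec = eb_decompose(Q, U)
```

The recovered B map is zero to machine precision for this pure-E input, which is the property that makes B-modes a clean channel for lensing and gravitational-wave signals.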

Microwave background observations

Subsequent to the discovery of the CMB, hundreds of cosmic microwave background experiments have been conducted to measure and characterize the signatures of the radiation. The most famous experiment is probably the NASA Cosmic Background Explorer (COBE) satellite, which operated from 1989 to 1996 and which detected and quantified the large scale anisotropies at the limit of its detection capabilities. Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the Universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum.
In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers to minimize non-sky signal noise.[67] The first results from this mission, released in 2003, were detailed measurements of the angular power spectrum at a scale of less than one degree, tightly constraining various cosmological parameters. The results are broadly consistent with those expected from cosmic inflation as well as various other competing theories, and are available in detail at NASA's data bank for Cosmic Microwave Background (CMB) (see links below). Although WMAP provided very accurate measurements of the large scale angular fluctuations in the CMB (structures about as broad in the sky as the moon), it did not have the angular resolution to measure the smaller scale fluctuations which had been observed by earlier ground-based interferometers.

All-sky map

All-sky map of the CMB, created from 9 years of WMAP data

A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and is currently performing an even more detailed investigation. Planck employs both HEMT radiometers and bolometer technology and will measure the CMB on smaller angular scales than WMAP. Its detectors were trialled in the Antarctic Viper telescope as the ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment, which has produced the most precise measurements at small angular scales to date, and in the Archeops balloon telescope.
Comparison of CMB results from COBE, WMAP and Planck – March 21, 2013.

On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background.[75][76] The map suggests the universe is slightly older than researchers thought. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. According to the team, the universe is 13.798 ± 0.037 billion years old,[77] and contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Also, the Hubble constant was measured to be 67.80 ± 0.77 (km/s)/Mpc.[75][78][79][80]

Additional ground-based instruments such as the South Pole Telescope in Antarctica and the proposed Clover Project, Atacama Cosmology Telescope and the QUIET telescope in Chile will provide additional data not available from satellite observations, possibly including the B-mode polarization.

Data reduction and analysis

Raw CMBR data from a space vehicle (e.g. WMAP) contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR.

The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. Although computing a power spectrum from a map is in principle a simple Fourier transform, decomposing the map of the sky into spherical harmonics, in practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short scale structure of the CMB power spectrum.

Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov Chain Monte Carlo sampling techniques.
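
The sampling step can be illustrated with a deliberately tiny example. The sketch below runs a Metropolis–Hastings chain on a toy one-parameter Gaussian likelihood; the amplitude parameter, its "true" value, and the noise level are all hypothetical stand-ins, not a real CMB likelihood, but the accept/reject structure is the same one used at scale.

```python
# Minimal Metropolis-Hastings sketch of MCMC parameter estimation.
# The likelihood, parameter, and numbers here are toy assumptions chosen
# for illustration only; real analyses sample many parameters against
# the full power-spectrum likelihood.
import math
import random

random.seed(1)

A_TRUE, NOISE = 1.0, 0.1  # assumed "true" amplitude and measurement error

def log_likelihood(a: float) -> float:
    """Gaussian log-likelihood of amplitude a given one mock datum."""
    return -0.5 * ((a - A_TRUE) / NOISE) ** 2

def metropolis(n_steps: int = 20_000, step: float = 0.05):
    chain, a, logl = [], 0.5, log_likelihood(0.5)
    for _ in range(n_steps):
        a_new = a + random.gauss(0.0, step)      # propose a move
        logl_new = log_likelihood(a_new)
        if math.log(random.random()) < logl_new - logl:  # accept/reject
            a, logl = a_new, logl_new
        chain.append(a)
    return chain[n_steps // 2:]  # discard the first half as burn-in

samples = metropolis()
mean = sum(samples) / len(samples)
```

After burn-in the chain samples the posterior, so the sample mean recovers the input amplitude to within its statistical uncertainty; marginal constraints on each parameter are read off from the chain in the same way.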

CMBR dipole anisotropy

From the CMB data it is seen that our local group of galaxies (the galactic cluster that includes the Solar System's Milky Way Galaxy) appears to be moving at 369 ± 0.9 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB) in the direction of galactic longitude l = 263.99° ± 0.14° and galactic latitude b = 48.26° ± 0.03°.[81][82] This motion results in an anisotropy of the data (the CMB appears slightly warmer in the direction of motion than in the opposite direction).[83] The standard interpretation of this temperature variation is a simple velocity redshift and blueshift due to motion relative to the CMB, but alternative cosmological models can explain some fraction of the observed dipole temperature distribution in the CMB.[84]
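
To first order in v/c, the dipole amplitude implied by this velocity is ΔT = T₀·v/c. A back-of-envelope check with the velocity quoted above and the mean CMB temperature of about 2.725 K:

```python
# Back-of-envelope dipole amplitude: Delta T = T0 * v / c to first order in v/c.
T0 = 2.725          # K, mean CMB temperature
V = 369.0e3         # m/s, velocity quoted above
C = 299_792_458.0   # m/s, speed of light

delta_T = T0 * V / C  # on the order of a few millikelvin
```

This gives roughly 3.4 mK, about a thousand times larger than the primordial anisotropies, which is why the dipole must be subtracted before the fine-scale structure can be studied.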

Low multipoles and other anomalies

With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions.[85][86][87][88] The most longstanding of these is the low-l multipole controversy. Even in the COBE map, it was observed that the quadrupole (the l = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (l = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes,[89][90][91] an alignment sometimes referred to as the axis of evil.[86] A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data.[92][93][94] Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others.[62][67][95] Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable.[96] Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%.[97][98][99][100]

Recent observations with the Planck telescope, which is much more sensitive than WMAP and has finer angular resolution, confirm the observation of the axis of evil. Since two different instruments recorded the same anomaly, instrumental error (but not foreground contamination) appears to be ruled out.[101] Coincidence is a possible explanation: WMAP chief scientist Charles L. Bennett suggested that coincidence and human psychology were involved, "I do think there is a bit of a psychological effect; people want to find unusual things."[102]

In popular culture

  • In the Stargate Universe TV series, an Ancient spaceship, Destiny, was built to study patterns in the CMBR which indicate that the universe as we know it might have been created by some form of sentient intelligence.[103]
  • In Wheelers, a novel by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently-observed age of the universe.
