
Wednesday, September 17, 2014

Type Ia supernova

From Wikipedia, the free encyclopedia

Type Ia supernovae occur in binary systems (two stars orbiting one another) in which one of the stars is a white dwarf while the other can vary from a giant star to an even smaller white dwarf.[1] A white dwarf is the remnant of a star that has completed its normal life cycle and has ceased nuclear fusion. However, white dwarfs of the common carbon-oxygen variety are capable of further fusion reactions that release a great deal of energy if their temperatures rise high enough.

Physically, carbon-oxygen white dwarfs with a low rate of rotation are limited to below 1.38 solar masses.[2][3] Beyond this, they re-ignite and in some cases trigger a supernova explosion. Somewhat confusingly, this limit is often referred to as the Chandrasekhar mass, despite being marginally different from the absolute Chandrasekhar limit where electron degeneracy pressure is unable to prevent catastrophic collapse. If a white dwarf gradually accretes mass from a binary companion, the general hypothesis is that its core will reach the ignition temperature for carbon fusion as it approaches the limit. If the white dwarf merges with another star (a very rare event), it will momentarily exceed the limit and begin to collapse, again raising its temperature past the nuclear fusion ignition point. Within a few seconds of initiation of nuclear fusion, a substantial fraction of the matter in the white dwarf undergoes a runaway reaction, releasing enough energy (1–2×10^44 J)[4] to unbind the star in a supernova explosion.[5]

This category of supernovae produces consistent peak luminosity because of the uniform mass of white dwarfs that explode via the accretion mechanism. The stability of this value allows these explosions to be used as standard candles to measure the distance to their host galaxies because the visual magnitude of the supernovae depends primarily on the distance.

Consensus model

Spectrum of SN1998aq, a Type Ia supernova, one day after maximum light in the B band[6]

The Type Ia supernova is a sub-category in the Minkowski-Zwicky supernova classification scheme, which was devised by American astronomer Rudolph Minkowski and Swiss astronomer Fritz Zwicky.[7] There are several means by which a supernova of this type can form, but they share a common underlying mechanism. When a slowly-rotating[2] carbon-oxygen white dwarf accretes matter from a companion, it can exceed the Chandrasekhar limit of about 1.44 solar masses, beyond which it can no longer support its weight with electron degeneracy pressure.[8] In the absence of a countervailing process, the white dwarf would collapse to form a neutron star,[9] as normally occurs in the case of a white dwarf that is primarily composed of magnesium, neon, and oxygen.[10]
The current view among astronomers who model Type Ia supernova explosions, however, is that this limit is never actually attained and collapse is never initiated. Instead, the increase in pressure and density due to the increasing weight raises the temperature of the core,[3] and as the white dwarf approaches about 99% of the limit,[11] a period of convection ensues, lasting approximately 1,000 years.[12] At some point in this simmering phase, a deflagration flame front is born, powered by carbon fusion. The details of the ignition are still unknown, including the location and number of points where the flame begins.[13] Oxygen fusion is initiated shortly thereafter, but this fuel is not consumed as completely as carbon.[14]

Once fusion has begun, the temperature of the white dwarf starts to rise. A main sequence star supported by thermal pressure would expand and cool in order to counterbalance an increase in thermal energy. However, degeneracy pressure is independent of temperature; the white dwarf is unable to regulate the burning process in the manner of normal stars, so it is vulnerable to a runaway fusion reaction. The flame accelerates dramatically, in part due to the Rayleigh–Taylor instability and interactions with turbulence. It is still a matter of considerable debate whether this flame transforms into a supersonic detonation from a subsonic deflagration.[12][15]

Regardless of the exact details of nuclear burning, it is generally accepted that a substantial fraction of the carbon and oxygen in the white dwarf is burned into heavier elements within a period of only a few seconds,[14] raising the internal temperature to billions of degrees. This energy release from thermonuclear burning (1–2×10^44 J[4]) is more than enough to unbind the star; that is, the individual particles making up the white dwarf gain enough kinetic energy to fly apart from each other. The star explodes violently and releases a shock wave in which matter is typically ejected at speeds on the order of 5,000–20,000 km/s, roughly 6% of the speed of light. The energy released in the explosion also causes an extreme increase in luminosity. The typical visual absolute magnitude of Type Ia supernovae is Mv = −19.3 (about 5 billion times brighter than the Sun), with little variation.[12]

The theory of this type of supernova is similar to that of novae, in which a white dwarf accretes matter more slowly and does not approach the Chandrasekhar limit. In the case of a nova, the infalling matter causes a hydrogen fusion surface explosion that does not disrupt the star.[12] This type of supernova differs from a core-collapse supernova, which is caused by the cataclysmic explosion of the outer layers of a massive star as its core implodes.[16]

Formation

Formation process
Gas is being stripped from a giant star to form an accretion disc around a compact companion (such as a white dwarf star). NASA image
Four images of a simulation of Type Ia supernova
Simulation of the explosion phase of the deflagration-to-detonation model of supernova formation, run on a scientific supercomputer. Argonne National Laboratory image

Single degenerate progenitors

One model for the formation of this category of supernova is a close binary star system. The progenitor binary system consists of main sequence stars, with the primary possessing more mass than the secondary. Being greater in mass, the primary is the first of the pair to evolve onto the asymptotic giant branch, where the star's envelope expands considerably. If the two stars share a common envelope then the system can lose significant amounts of mass, reducing the angular momentum, orbital radius and period. After the primary has degenerated into a white dwarf, the secondary star later evolves into a red giant and the stage is set for mass accretion onto the primary.
During this final shared-envelope phase, the two stars spiral in closer together as angular momentum is lost. The resulting orbit can have a period as brief as a few hours.[17][18] If the accretion continues long enough, the white dwarf may eventually approach the Chandrasekhar limit.

The white dwarf could also accrete matter from other types of companions, including a subgiant or (if the orbit is sufficiently close) even a main sequence star. The actual evolutionary process during this accretion stage remains uncertain, as it can depend both on the rate of accretion and on the transfer of angular momentum to the white dwarf.[19]

It has been estimated that single degenerate progenitors account for no more than 20% of all Type Ia supernovae.[20]

Double degenerate progenitors

A second possible mechanism for triggering a Type Ia supernova is the merger of two white dwarfs whose combined mass exceeds the Chandrasekhar limit. The resulting merged object is sometimes called a super-Chandrasekhar mass white dwarf.[21][22] In such a case, the total mass is not constrained by the Chandrasekhar limit.

Collisions of solitary stars within the Milky Way occur only once every 10^7 to 10^13 years, far less frequently than the appearance of novae.[23] Collisions occur with greater frequency in the dense core regions of globular clusters.[24] (Cf. blue stragglers) A likely scenario is a collision with a binary star system, or between two binary systems containing white dwarfs. This collision can leave behind a close binary system of two white dwarfs. Their orbit decays and they merge through their shared envelope.[25] However, a study based on SDSS spectra found 15 double systems among the 4,000 white dwarfs tested, implying a double white dwarf merger every 100 years in the Milky Way.
Conveniently, this rate matches the number of Type Ia supernovae detected in our neighborhood.[26]

A double degenerate scenario is one of several explanations proposed for the anomalously massive (2 solar mass) progenitor of SN 2003fg.[27][28] It is the only possible explanation for SNR 0509-67.5, as all possible models with only one white dwarf have been ruled out.[29] It has also been strongly suggested for SN 1006, given that no companion star remnant has been found there.[20]

Observations made with NASA's Swift space telescope ruled out existing supergiant or giant companion stars for every Type Ia supernova studied. A supergiant companion's blown-off outer shell should emit X-rays, but no such glow was detected by Swift's XRT (X-ray Telescope) in the 53 closest supernova remnants. For 12 Type Ia supernovae observed within 10 days of the explosion, the satellite's UVOT (Ultraviolet/Optical Telescope) showed no ultraviolet radiation originating from the surface of a heated companion star hit by the supernova shock wave, meaning there were no red giants or larger stars orbiting those supernova progenitors. In the case of SN 2011fe, the companion star, if it existed, must have been smaller than the Sun.[30] The Chandra X-ray Observatory revealed that the X-ray radiation of five elliptical galaxies and the bulge of the Andromeda galaxy is 30–50 times fainter than expected. X-ray radiation should be emitted by the accretion discs of Type Ia supernova progenitors; the missing radiation indicates that few white dwarfs possess accretion discs, ruling out the common, accretion-based model of Type Ia supernovae.[31] Inward-spiraling white dwarf pairs must be strong sources of gravitational waves, but these had not been detected as of 2012.
Double degenerate scenarios raise questions about the applicability of Type Ia supernovae as standard candles, since the total mass of the two merging white dwarfs varies significantly, and hence so does the luminosity.

Type Iax

It has been proposed that a group of sub-luminous supernovae that occur when helium accretes onto a white dwarf should be classified as type Iax.[32][33] This type of supernova may not always completely destroy the white dwarf progenitor.[34]

Observation

Unlike the other types of supernovae, Type Ia supernovae generally occur in all types of galaxies, including ellipticals. They show no preference for regions of current stellar formation.[35] As white dwarf stars form at the end of a star's main sequence evolutionary period, such a long-lived star system may have wandered far from the region where it originally formed. Thereafter a close binary system may spend another million years in the mass transfer stage (possibly forming persistent nova outbursts) before the conditions are ripe for a Type Ia supernova to occur.[36]

A long-standing problem in astronomy has been the identification of supernova progenitors. Direct observation of a progenitor would provide useful constraints on supernova models. As of 2006, the search for such a progenitor had been ongoing for longer than a century.[37] Observation of the supernova SN 2011fe has provided useful constraints. Previous observations with the Hubble Space Telescope did not show a star at the position of the event, thereby excluding a red giant as the source. The expanding plasma from the explosion was found to contain carbon and oxygen, making it likely the progenitor was a white dwarf primarily composed of these elements.[38] Similarly, observations of the nearby SN PTF 11kx,[39] discovered January 16, 2011 (UT) by the Palomar Transient Factory (PTF), led to the conclusion that this explosion arose from a single-degenerate progenitor with a red giant companion, suggesting there is no single progenitor path to Type Ia supernovae. Direct observations of the progenitor of PTF 11kx, reported in the August 24 edition of Science, confirm this conclusion and also show that the progenitor star experienced periodic nova eruptions before the supernova, another surprising discovery.[40][41]

Light curve

This plot of luminosity (relative to the Sun, L0) versus time shows the characteristic light curve for a Type Ia supernova. The peak is primarily due to the decay of nickel (Ni), while the later stage is powered by cobalt (Co).

Type Ia supernovae have a characteristic light curve, a graph of luminosity as a function of time after the explosion. Near the time of maximum luminosity, the spectrum contains lines of intermediate-mass elements from oxygen to calcium; these are the main constituents of the outer layers of the star. Months after the explosion, when the outer layers have expanded to the point of transparency, the spectrum is dominated by light emitted by material near the core of the star: heavy elements synthesized during the explosion, most prominently isotopes close to the mass of iron (the iron peak elements). The radioactive decay of nickel-56 through cobalt-56 to iron-56 produces high-energy photons which dominate the energy output of the ejecta at intermediate to late times.[12]

The use of Type Ia supernovae to measure precise distances was pioneered by a collaboration of Chilean and US astronomers, the Calán/Tololo Supernova Survey.[42] In a series of papers in the 1990s the survey showed that while Type Ia supernovae do not all reach the same peak luminosity, a single parameter measured from the light curve can be used to correct unreddened Type Ia supernovae to standard candle values. The original correction to standard candle value is known as the Phillips relationship[43] and was shown by this group to be able to measure relative distances to 7% accuracy.[44] The cause of this uniformity in peak brightness is related to the amount of 56Ni produced in white dwarfs presumably exploding near the Chandrasekhar limit.[45]

The similarity in the absolute luminosity profiles of nearly all known Type Ia supernovae has led to their use as a secondary standard candle in extragalactic astronomy.[46] Improved calibrations of the Cepheid variable distance scale[47] and direct geometric distance measurements to NGC 4258 from the dynamics of maser emission,[48] when combined with the Hubble diagram of Type Ia supernova distances, have led to an improved value of the Hubble constant.

In 1998, observations of distant Type Ia supernovae indicated the unexpected result that the Universe seems to undergo an accelerating expansion.[49][50]

Cosmic distance ladder

From Wikipedia, the free encyclopedia

The cosmic distance ladder (also known as the extragalactic distance scale) is the succession of methods by which astronomers determine the distances to celestial objects. A real direct distance measurement of an astronomical object is possible only for those objects that are "close enough" (within about a thousand parsecs) to Earth. The techniques for determining distances to more distant objects are all based on various measured correlations between methods that work at close distances with methods that work at larger distances. Several methods rely on a standard candle, which is an astronomical object that has a known luminosity.

The ladder analogy arises because no one technique can measure distances at all ranges encountered in astronomy. Instead, one method can be used to measure nearby distances, a second can be used to measure nearby to intermediate distances, and so on. Each rung of the ladder provides information that can be used to determine the distances at the next higher rung.

Direct measurement

Statue of an astronomer and the concept of the cosmic distance ladder by the parallax method, made from the azimuth ring and other parts of the Yale–Columbia Refractor (c. 1925), wrecked by the 2003 Canberra bushfires which burned out the Mount Stromlo Observatory; at Questacon, Canberra, Australian Capital Territory

At the base of the ladder are fundamental distance measurements, in which distances are determined directly, with no physical assumptions about the nature of the object in question. The precise measurement of stellar positions is part of the discipline of astrometry.

Astronomical unit

Direct distance measurements are based upon precise determination of the distance between the Earth and the Sun, which is called the Astronomical Unit (AU). Historically, observations of transits of Venus were crucial in determining the AU; in the first half of the 20th century, observations of asteroids were also important. Presently the orbit of Earth is determined with high precision using radar measurements of Venus and other nearby planets and asteroids,[1] and by tracking interplanetary spacecraft in their orbits around the Sun through the Solar System. Kepler's Laws provide precise ratios of the sizes of the orbits of objects revolving around the Sun, but not a real measure of the orbits themselves. Radar provides a value in kilometers for the difference in two orbits' sizes, and from that and the ratio of the two orbit sizes, the size of Earth's orbit comes directly. The orbit is known with a precision of a few meters.

Parallax

The most important fundamental distance measurements come from trigonometric parallax. As the Earth orbits the Sun, the positions of nearby stars appear to shift slightly against the more distant background. These shifts are angles in an isosceles triangle, with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) forming the short leg of the triangle and the distance to the star the long legs. The amount of shift is quite small, measuring 1 arcsecond for an object at a distance of 1 parsec (3.26 light-years), thereafter decreasing in angular amount as the reciprocal of the distance. Astronomers usually express distances in units of parsecs (parallax arcseconds); light-years are used in popular media, but almost invariably values in light-years have been converted from numbers tabulated in parsecs in the original source.
Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars whose parallax is larger than the precision of the measurement. Parallax measurements typically have an accuracy measured in milliarcseconds.[2] In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond,[3] providing useful distances for stars out to a few hundred parsecs.
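
Since the distance in parsecs is just the reciprocal of the parallax in arcseconds, the conversion is a one-liner. The following Python sketch (hypothetical numbers, first-order error propagation) illustrates it:

def parallax_distance_pc(p_mas, sigma_mas=1.0):
    # d [pc] = 1 / p [arcsec]; input is in milliarcseconds.
    p_arcsec = p_mas / 1000.0
    d_pc = 1.0 / p_arcsec
    d_err = d_pc * (sigma_mas / p_mas)   # fractional error ~ sigma_p / p
    return d_pc, d_err

# A star with a 10 mas parallax measured to Hipparcos-like 1 mas
# precision lies at 100 pc with a ~10% uncertainty.
print(parallax_distance_pc(10.0))        # (100.0, 10.0)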

Stars can have a velocity relative to the Sun that causes proper motion and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift in their spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and giant variable stars, including Cepheids and the RR Lyrae variables.[4]

The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of other stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size.[5]

Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful. In particular the distance obtained for the Hyades has been an important step in the distance ladder.

Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula, can be observed over time, then an expansion parallax distance to that cloud can be estimated. Binary stars which are both visual and spectroscopic binaries also can have their distance estimated by similar means. The common characteristic to these is that a measurement of angular motion is combined with a measurement of the absolute velocity (usually obtained via the Doppler effect). The distance estimate comes from computing how far away the object must be to make its observed absolute velocity appear with the observed angular motion.

Expansion parallaxes in particular can give fundamental distance estimates for objects that are very far away, because supernova ejecta have large expansion velocities and large sizes (compared to stars). Further, they can be observed with radio interferometers which can measure very small angular motions. These combine to mean that some supernovae in other galaxies have fundamental distance estimates.[6] Though valuable, such cases are quite rare, so they serve as important consistency checks on the distance ladder rather than workhorse steps by themselves.

Standard candles

Almost all physical distance indicators are standard candles: objects that belong to some class with a known luminosity. By comparing this known luminosity to an object's observed brightness, the distance to the object can be computed using the inverse-square law.

In astronomy, the brightness of an object is given in terms of its absolute magnitude. This quantity is derived from the logarithm of its luminosity as seen from a distance of 10 parsecs. The apparent magnitude, or the magnitude as seen by the observer, can be used to determine the distance D to the object in kiloparsecs (where 1 kpc equals 1000 parsecs) as follows:
5 \log_{10} D = m - M - 10,
where m is the apparent magnitude and M the absolute magnitude. For this to be accurate, both magnitudes must be in the same frequency band and there can be no relative motion in the radial direction.
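
Solving the relation above for D gives D = 10^((m − M − 10)/5) kpc. A minimal Python sketch with hypothetical numbers:

def distance_kpc(m, M):
    # 5*log10(D) = m - M - 10, with D in kiloparsecs.
    return 10.0 ** ((m - M - 10.0) / 5.0)

# A standard candle with M = -19.3 seen at apparent magnitude m = 10.7:
# m - M = 30, so D = 10**4 kpc = 10 Mpc.
print(distance_kpc(10.7, -19.3))         # 10000.0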

Some means of accounting for interstellar extinction, which also makes objects appear fainter and more red, is also needed, especially if the object lies within a dusty or gaseous region.[7] The difference between absolute and apparent magnitudes is called the distance modulus, and astronomical distances, especially intergalactic ones, are sometimes tabulated in this way.

Problems

Two problems exist for any class of standard candle. The principal one is calibration, determining exactly what the absolute magnitude of the candle is. This includes defining the class well enough that members can be recognized, and finding enough members with well-known distances that their true absolute magnitude can be determined with enough accuracy. The second lies in recognizing members of the class, and not mistakenly using the standard candle calibration upon an object which does not belong to the class. At extreme distances, which is where one most wishes to use a distance indicator, this recognition problem can be quite serious.

A significant issue with standard candles is the recurring question of how standard they are. For example, all observations seem to indicate that Type Ia supernovae that are of known distance have the same brightness (corrected by the shape of the light curve). The basis for this closeness in brightness is discussed below; however, the possibility exists that the distant Type Ia supernovae have different properties than nearby Type Ia supernovae. The use of Type Ia supernovae is crucial in determining the correct cosmological model. If indeed the properties of Type Ia supernovae are different at large distances, i.e. if the extrapolation of their calibration to arbitrary distances is not valid, ignoring this variation can dangerously bias the reconstruction of the cosmological parameters, in particular the reconstruction of the matter density parameter.[8]

That this is not merely a philosophical issue can be seen from the history of distance measurements using Cepheid variables. In the 1950s, Walter Baade discovered that the nearby Cepheid variables used to calibrate the standard candle were of a different type than the ones used to measure distances to nearby galaxies. The nearby Cepheid variables were population I stars with much higher metal content than the distant population II stars. As a result, the population II stars were actually much brighter than believed, and this had the effect of doubling the distances to the globular clusters, the nearby galaxies, and the diameter of the Milky Way.

(Another class of physical distance indicator is the standard ruler. In 2008, galaxy diameters have been proposed as a possible standard ruler for cosmological parameter determination.[9])

Galactic distance indicators

With few exceptions, distances based on direct measurements are available only out to about a thousand parsecs, which is a modest portion of our own Galaxy. For distances beyond that, measures depend upon physical assumptions, that is, the assertion that one recognizes the object in question, and the class of objects is homogeneous enough that its members can be used for meaningful estimation of distance.
Physical distance indicators, used on progressively larger distance scales, include:

Main sequence fitting

When the absolute magnitude for a group of stars is plotted against the spectral classification of the star, in a Hertzsprung–Russell diagram, evolutionary patterns are found that relate to the mass, age and composition of the star. In particular, during their hydrogen burning period, stars lie along a curve in the diagram called the main sequence. By measuring these properties from a star's spectrum, the position of a main sequence star on the H–R diagram can be determined, and thereby the star's absolute magnitude estimated. A comparison of this value with the apparent magnitude allows the approximate distance to be determined, after correcting for interstellar extinction of the luminosity because of gas and dust.

In a gravitationally-bound star cluster such as the Hyades, the stars formed at approximately the same age and lie at the same distance. This allows relatively accurate main sequence fitting, providing both age and distance determination.

Extragalactic distance scale

Extragalactic distance indicators[13]

Method                                   Uncertainty for single galaxy (mag)   Distance to Virgo Cluster (Mpc)   Range (Mpc)
Classical Cepheids                       0.16                                  15–25                             29
Novae                                    0.4                                   21.1 ± 3.9                        20
Planetary Nebula Luminosity Function     0.3                                   15.4 ± 1.1                        50
Globular Cluster Luminosity Function     0.4                                   18.8 ± 3.8                        50
Surface Brightness Fluctuations          0.3                                   15.9 ± 0.9                        50
D–σ relation                             0.5                                   16.8 ± 2.4                        > 100
Type Ia Supernovae                       0.10                                  19.4 ± 5.0                        > 1000
The extragalactic distance scale is a series of techniques used today by astronomers to determine the distances of cosmological bodies beyond our own galaxy, which are not easily obtained with traditional methods. Some procedures utilize properties of these objects, such as stars, globular clusters, nebulae, and galaxies as a whole.
Other methods are based more on the statistics and probabilities of things such as entire galaxy clusters.

Wilson–Bappu effect

Discovered in 1956 by Olin Wilson and M. K. Vainu Bappu, the Wilson–Bappu effect is a form of spectroscopic parallax. Certain stars have features in their emission/absorption spectra that allow a relatively easy absolute magnitude calculation; certain spectral lines, such as the K absorption line of calcium, are directly related to an object's absolute magnitude. The distance to the star then follows from the distance modulus:
m - M = 5 \log_{10}(d/10\,\mathrm{pc}),
where d is the distance in parsecs.
Though in theory this method has the ability to provide reliable distance calculations to stars roughly 7 megaparsecs (Mpc) away, it is generally only used for stars hundreds of kiloparsecs (kpc) away.

This method is valid only for stars brighter than about 15th magnitude.

Classical Cepheids

Beyond the reach of the Wilson–Bappu effect, the next method relies on the period-luminosity relation of classical Cepheid variable stars, first discovered by Henrietta Leavitt. The following relation can be used to calculate the distance to Galactic and extragalactic classical Cepheids:
5 \log_{10} d = V + 3.34 \log_{10} P - 2.45 (V - I) + 7.52 [14]
5 \log_{10} d = V + 3.37 \log_{10} P - 2.55 (V - I) + 7.48 [15]
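
As an illustration, the first relation above can be inverted for the distance d. The sketch below assumes, as is conventional for these relations, that the period P is in days and d comes out in parsecs; the input values are hypothetical:

import math

def cepheid_distance_pc(V, I, P_days):
    # 5*log10(d) = V + 3.34*log10(P) - 2.45*(V - I) + 7.52
    log_d = (V + 3.34 * math.log10(P_days) - 2.45 * (V - I) + 7.52) / 5.0
    return 10.0 ** log_d

# A hypothetical Cepheid with V = 4.0, V - I = 0.9, P = 5.4 days
# comes out at roughly 230 pc.
print(cepheid_distance_pc(4.0, 3.1, 5.4))
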
Several actively debated problems complicate the use of Cepheids as standard candles, chief among them: the nature and linearity of the period-luminosity relation in various passbands; the impact of metallicity on both the zero-point and the slope of those relations; and the effects of photometric contamination (blending) and of a changing (typically unknown) extinction law on Cepheid distances.[16][17][18][19][20][21][22][23][24]

These unresolved matters have resulted in cited values for the Hubble Constant ranging between 60 km/s/Mpc and 80 km/s/Mpc. Resolving this discrepancy is one of the foremost problems in astronomy since the cosmological parameters of the Universe may be constrained by supplying a precise value of the Hubble constant.[25][26]

Cepheid variable stars were the key instrument in Edwin Hubble's 1923 conclusion that M31 (Andromeda) was an external galaxy, as opposed to a smaller nebula within the Milky Way. He calculated the distance of M31 as 285 kpc; today's value is about 770 kpc.

NGC 3370, a spiral galaxy in the constellation Leo, contains the farthest Cepheids yet found, at a distance of 29 Mpc. Cepheid variable stars are in no way perfect distance markers: at nearby galaxies they have an error of about 7%, and up to a 15% error for the most distant.

Supernovae

SN 1994D (bright spot on the lower left) in the NGC 4526 galaxy. Image by NASA, ESA, The Hubble Key Project Team, and The High-Z Supernova Search Team

There are several methods by which supernovae can be used to measure extragalactic distances; the most widely used are covered here.

Measuring a supernova's photosphere

We can assume that a supernova expands in a spherically symmetric manner. If the supernova is close enough that we can measure the angular extent, θ(t), of its photosphere, we can use the equation
\omega = \Delta\theta / \Delta t,
where ω is the angular expansion rate. Obtaining an accurate measurement requires two observations separated by a time Δt. Subsequently, we can use
d = V_{ej} / \omega,
where d is the distance to the supernova and V_ej is the radial velocity of the ejecta (which can be assumed to equal the tangential velocity V_θ if the expansion is spherically symmetric).
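
Combining the two relations gives d = V_ej Δt / Δθ. A minimal sketch with hypothetical numbers (the ejecta velocity would come from Doppler-shifted spectral lines, the angular growth from interferometry):

import math

MICROARCSEC = math.radians(1.0 / 3600.0) / 1.0e6   # 1 microarcsecond in radians
DAY = 86400.0                                      # seconds
MPC = 3.086e22                                     # metres

def photosphere_distance_m(dtheta_rad, dt_s, v_ej_m_s):
    omega = dtheta_rad / dt_s       # angular expansion rate [rad/s]
    return v_ej_m_s / omega         # d = V_ej / omega

# Ejecta at 10,000 km/s, with the angular radius growing by
# 17 microarcseconds over 30 days, gives a distance of about 10 Mpc.
print(photosphere_distance_m(17 * MICROARCSEC, 30 * DAY, 1.0e7) / MPC)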

This method works only if the supernova is close enough for the photosphere to be measured accurately. Moreover, the expanding shell of gas is in fact neither perfectly spherical nor a perfect blackbody, and interstellar extinction can hinder accurate measurements of the photosphere. This problem is further exacerbated by core-collapse supernovae. All of these factors contribute to a distance error of up to 25%.

Type Ia light curves

Type Ia supernovae are some of the best ways to determine extragalactic distances. Ia's occur when a white dwarf in a binary system begins to accrete matter from its companion. As the white dwarf gains matter, it eventually approaches the Chandrasekhar limit of 1.4 M_{\odot}.

Once reached, the star becomes unstable and undergoes a runaway nuclear fusion reaction. Because all Type Ia supernovae explode at about the same mass, their absolute magnitudes are all the same. This makes them very useful as standard candles. All Type Ia supernovae have a standard blue and visual magnitude of
M_B \approx M_V \approx -19.3 \pm 0.3.
Therefore, when observing a Type Ia supernova, if it is possible to determine what its peak magnitude was, then its distance can be calculated. It is not intrinsically necessary to capture the supernova directly at its peak magnitude; using the multicolor light curve shape method (MLCS), the shape of the light curve (taken at any reasonable time after the initial explosion) is compared to a family of parameterized curves that will determine the absolute magnitude at the maximum brightness. This method also takes into effect interstellar extinction/dimming from dust and gas.
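
In the simplest case (peak magnitude known, extinction neglected), the distance follows directly from the distance modulus µ = m − M with M ≈ −19.3. A Python sketch with hypothetical numbers:

def sn_ia_distance_mpc(m_peak, M=-19.3):
    # mu = m - M; d [pc] = 10**(mu/5 + 1); this ignores extinction and
    # the light-curve-shape correction described above.
    mu = m_peak - M
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6

# A Type Ia peaking at apparent magnitude 19.0 has mu = 38.3,
# placing it at roughly 460 Mpc.
print(sn_ia_distance_mpc(19.0))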

Similarly, the stretch method fits a particular supernova's magnitude light curve to a template light curve. This template, as opposed to being several light curves at different wavelengths (MLCS), is just a single light curve that has been stretched (or compressed) in time. By using this stretch factor, the peak magnitude can be determined.[citation needed]

Using Type Ia supernovae is one of the most accurate methods, particularly since supernova explosions can be visible at great distances (their luminosities rival that of the galaxy in which they are situated), much farther than Cepheid variables (500 times farther). Much time has been devoted to the refining of this method. The current uncertainty approaches a mere 5%, corresponding to an uncertainty of just 0.1 magnitudes.

Novae in distance determinations

Novae can be used in much the same way as supernovae to derive extragalactic distances. There is a direct relation between a nova's maximum magnitude and the time for its visible light to decline by two magnitudes. This relation is shown to be:
M_V^{\max} = -9.96 - 2.31 \log_{10} \dot{x},
where \dot{x} is the time derivative of the nova's magnitude, describing the average rate of decline over the first 2 magnitudes.
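
If a nova takes t2 days to fade by two magnitudes, the mean decline rate is ẋ = 2/t2 mag/day, which gives the peak absolute magnitude directly. A small sketch (hypothetical t2):

import math

def nova_absolute_mag(t2_days):
    # x_dot is the mean decline rate (mag/day) over the first 2 magnitudes.
    x_dot = 2.0 / t2_days
    return -9.96 - 2.31 * math.log10(x_dot)

# A fast nova fading 2 magnitudes in 10 days: x_dot = 0.2 mag/day,
# so M_V(max) ~ -8.3; with an observed peak m, mu = m - M gives distance.
print(nova_absolute_mag(10.0))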

After novae fade, they are about as bright as the most luminous Cepheid variable stars; therefore both these techniques have about the same maximum distance: ~20 Mpc. The error in this method produces an uncertainty in magnitude of about ±0.4.

Globular cluster luminosity function

Based on the method of comparing the luminosities of globular clusters (located in galactic halos) in distant galaxies to those of the Virgo cluster, the globular cluster luminosity function carries an uncertainty of distance of about 20% (or 0.4 magnitudes).

US astronomer William Alvin Baum first attempted to use globular clusters to measure distant elliptical galaxies. He compared the brightest globular clusters in the Virgo A galaxy with those in Andromeda, assuming the luminosities of the clusters were the same in both. Knowing the distance to Andromeda, Baum assumed a direct correlation and estimated the distance of Virgo A.

Baum used just a single globular cluster, but individual formations are often poor standard candles. Canadian astronomer René Racine assumed the use of the globular cluster luminosity function (GCLF) would lead to a better approximation. The number of globular clusters as a function of magnitude is given by:
\Phi(m) = A e^{-(m - m_0)^2 / 2\sigma^2},
where m0 is the apparent turnover magnitude, M0 the corresponding absolute turnover magnitude used in the calibration, and σ the dispersion, about 1.4 mag.
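
In practice one fits this Gaussian to the observed counts, then takes the distance modulus µ = m0 − M0 using a calibrated absolute turnover magnitude (M0 ≈ −7.4 in V is an assumed value here, not from the text). A self-contained sketch on synthetic data:

import numpy as np
from scipy.optimize import curve_fit

def gclf(m, A, m0, sigma):
    # Gaussian luminosity function: clusters per magnitude bin.
    return A * np.exp(-(m - m0) ** 2 / (2.0 * sigma ** 2))

# Synthetic "observed" GCLF peaking at m0 = 23.9 with sigma = 1.4 mag.
mags = np.linspace(20.0, 28.0, 33)
counts = gclf(mags, 40.0, 23.9, 1.4) + np.random.default_rng(0).normal(0.0, 1.0, mags.size)

(A, m0, sigma), _ = curve_fit(gclf, mags, counts, p0=[30.0, 24.0, 1.5])
M0 = -7.4                           # assumed absolute turnover magnitude (V band)
mu = m0 - M0                        # distance modulus
print(10.0 ** (mu / 5.0 + 1.0) / 1.0e6, "Mpc")   # ~18 Mpc, a Virgo-like distance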

It is important to remember that it is assumed that globular clusters all have roughly the same luminosities within the universe. There is no universal globular cluster luminosity function that applies to all galaxies.

Planetary nebula luminosity function

Like the GCLF method, a similar numerical analysis can be used for planetary nebulae (note the use of more than one!) within far-off galaxies. The planetary nebula luminosity function (PNLF) was first proposed in the late 1970s by Holland Cole and David Jenner. They suggested that all planetary nebulae might have a similar maximum intrinsic brightness, now calculated to be M = −4.53. This would therefore make them potential standard candles for determining extragalactic distances.

Astronomer George Howard Jacoby and his colleagues later proposed that the PNLF function equaled:
N(M) \propto e^{0.307 M} (1 - e^{3(M^{*} - M)}),
where N(M) is the number of planetary nebulae with absolute magnitude M, and M* is the absolute magnitude of the brightest nebula.
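
The function has a sharp bright-end cutoff at M*, so measuring the apparent magnitude m* of that cutoff yields the distance modulus µ = m* − M*. A sketch of the quoted form (the observed cutoff value below is hypothetical):

import numpy as np

M_STAR = -4.53                      # bright-end cutoff quoted above

def pnlf(M):
    # N(M) ∝ e^{0.307 M} (1 - e^{3 (M* - M)}); zero brighter than M*.
    N = np.exp(0.307 * M) * (1.0 - np.exp(3.0 * (M_STAR - M)))
    return np.where(M >= M_STAR, N, 0.0)

m_star_obs = 26.3                   # hypothetical observed cutoff magnitude
mu = m_star_obs - M_STAR            # distance modulus
print(10.0 ** (mu / 5.0 + 1.0) / 1.0e6, "Mpc")   # ~14.6 Mpc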

Surface brightness fluctuation method


The following methods deal with the overall inherent properties of galaxies. Though their error percentages vary, they have the ability to make distance estimates beyond 100 Mpc, though they are usually applied more locally.

The surface brightness fluctuation (SBF) method takes advantage of the use of CCD cameras on telescopes. Because of spatial fluctuations in a galaxy's surface brightness, some pixels on these cameras will pick up more stars than others. However, as distance increases the picture will become increasingly smoother. Analysis of this smoothing yields the magnitude of the pixel-to-pixel variation, which is directly related to a galaxy's distance.

D–σ relation

The D–σ relation, used in elliptical galaxies, relates the angular diameter (D) of the galaxy to its velocity dispersion. It is important to describe exactly what D represents, in order to understand this method. It is, more precisely, the galaxy’s angular diameter out to the surface brightness level of 20.75 B-mag arcsec−2. This surface brightness is independent of the galaxy’s actual distance from us. Instead, D is inversely proportional to the galaxy’s distance, represented as d. Thus, this relation does not employ standard candles. Rather, D provides a standard ruler. This relation between D and σ is
\log_{10}(D) = 1.333 \log_{10}(\sigma) + C,
where C is a constant which depends on the distance to the galaxy cluster.[citation needed]
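
Since D is the angular size of a fixed physical scale, it falls off as the reciprocal of distance: a target galaxy with the same σ as a calibrator but a smaller measured D is proportionally farther away. A hedged sketch (all values hypothetical):

import math

def predicted_log_D(sigma_km_s, C):
    # log10(D) = 1.333 * log10(sigma) + C; C comes from clusters of
    # known distance (a hypothetical calibration constant here).
    return 1.333 * math.log10(sigma_km_s) + C

def distance_from_ruler(d_cal_mpc, D_cal, D_target):
    # Standard ruler: angular diameter D scales as 1/distance.
    return d_cal_mpc * (D_cal / D_target)

# A calibrator at 16.8 Mpc showing D = 30 arcsec implies that a target
# with the same sigma but D = 10 arcsec lies at ~50 Mpc.
print(distance_from_ruler(16.8, 30.0, 10.0))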

This method has the potential to become one of the strongest methods of galactic distance calculators, perhaps exceeding the range of even the Tully–Fisher method. As of today, however, elliptical galaxies are not bright enough to provide a calibration for this method through the use of techniques such as Cepheids. Instead, calibration is done using cruder methods.

Overlap and scaling

A succession of distance indicators, which is the distance ladder, is needed for determining distances to other galaxies. The reason is that objects bright enough to be recognized and measured at such distances are so rare that few or none are present nearby, so there are too few examples close enough with reliable trigonometric parallax to calibrate the indicator. For example, Cepheid variables, one of the best indicators for nearby spiral galaxies, cannot be satisfactorily calibrated by parallax alone.
The situation is further complicated by the fact that different stellar populations generally do not have all types of stars in them. Cepheids in particular are massive stars, with short lifetimes, so they will only be found in places where stars have very recently been formed. Consequently, because elliptical galaxies usually have long ceased to have large-scale star formation, they will not have Cepheids. Instead, distance indicators whose origins are in an older stellar population (like novae and RR Lyrae variables) must be used. However, RR Lyrae variables are less luminous than Cepheids (so they cannot be seen as far away as Cepheids can), and novae are unpredictable and an intensive monitoring program – and luck during that program – is needed to gather enough novae in the target galaxy for a good distance estimate.

Because the more distant steps of the cosmic distance ladder depend upon the nearer ones, the more distant steps include the effects of errors in the nearer steps, both systematic and statistical ones. The result of these propagating errors means that distances in astronomy are rarely known to the same level of precision as measurements in the other sciences, and that the precision necessarily is poorer for more distant types of object.

Another concern, especially for the very brightest standard candles, is their "standardness": how homogeneous the objects are in their true absolute magnitude. For some of these different standard candles, the homogeneity is based on theories about the formation and evolution of stars and galaxies, and is thus also subject to uncertainties in those aspects. For the most luminous of distance indicators, the Type Ia supernovae, this homogeneity is known to be poor;[citation needed] however, no other class of object is bright enough to be detected at such large distances, so the class is useful simply because there is no real alternative.

The observational result of Hubble's Law, the proportional relationship between distance and the speed with which a galaxy is moving away from us (usually referred to as redshift), is a product of the cosmic distance ladder. Hubble observed that fainter galaxies are more redshifted. Finding the value of the Hubble constant was the result of decades of work by many astronomers, both in amassing the measurements of galaxy redshifts and in calibrating the steps of the distance ladder. Hubble's Law is the primary means we have for estimating the distances of quasars and distant galaxies in which individual distance indicators cannot be seen.

Dark matter

From Wikipedia, the free encyclopedia
 
Estimated distribution of matter and energy in the universe, today (top) and when the CMB was released (bottom)

Dark matter is a kind of matter hypothesized in astronomy and cosmology to account for gravitational effects that appear to be the result of invisible mass. Dark matter cannot be seen directly with telescopes; evidently it neither emits nor absorbs light or other electromagnetic radiation at any significant level. Alternatively, it is hypothesized simply to be matter that does not interact with light.[1] Instead, the existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. According to the Planck mission team, and based on the standard model of cosmology, the total mass–energy of the known universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy.[2][3] Thus, dark matter is estimated to constitute 84.5% of the total matter in the universe, while dark energy plus dark matter constitute 95.1% of the total content of the universe.[4][5]

Astrophysicists hypothesized dark matter because of discrepancies between the mass of large astronomical objects determined from their gravitational effects and the mass calculated from the "luminous matter" they contain: stars, gas, and dust. It was first postulated by Jan Oort in 1932 to account for the orbital velocities of stars in the Milky Way and by Fritz Zwicky in 1933 to account for evidence of "missing mass" in the orbital velocities of galaxies in clusters. Subsequently, many other observations have indicated the presence of dark matter in the universe, including the rotational speeds of galaxies by Vera Rubin[6] in the 1960s–1970s, gravitational lensing of background objects by galaxy clusters such as the Bullet Cluster, the temperature distribution of hot gas in galaxies and clusters of galaxies, and more recently the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle.[7][8] The search for this particle, by a variety of means, is one of the major efforts in particle physics today.[9]

Although the existence of dark matter is generally accepted by the mainstream scientific community, some alternative theories of gravity have been proposed, such as MOND and TeVeS, which try to account for the anomalous observations without requiring additional matter.

Overview

Dark matter's existence is inferred from gravitational effects on visible matter and gravitational lensing of background radiation, and was originally hypothesized to account for discrepancies between calculations of the mass of galaxies, clusters of galaxies and the entire universe made through dynamical and general relativistic means, and calculations based on the mass of the visible "luminous" matter these objects contain: stars and the gas and dust of the interstellar and intergalactic medium.[1]

The most widely accepted explanation for these phenomena is that dark matter exists and that it is most probably[7] composed of weakly interacting massive particles (WIMPs) that interact only through gravity and the weak force. Alternative explanations have been proposed, and there is not yet sufficient experimental evidence to determine whether any of them are correct. Many experiments to detect proposed dark matter particles through non-gravitational means are under way.[9]
According to observations of structures larger than star systems, as well as Big Bang cosmology interpreted under the Friedmann equations and the Friedmann–Lemaître–Robertson–Walker metric, dark matter accounts for 26.8% of the mass-energy content of the observable universe. In comparison, ordinary (baryonic) matter accounts for only 4.9% of the mass-energy content of the observable universe, with the remainder being attributable to dark energy.[3] From these figures, matter accounts for 31.7% of the mass-energy content of the universe, and 84.5% of the matter is dark matter.[4]

Dark matter plays a central role in state-of-the-art modeling of cosmic structure formation and galaxy formation and evolution, and has measurable effects on the anisotropies observed in the cosmic microwave background. All these lines of evidence suggest that galaxies, clusters of galaxies, and the universe as a whole contain far more matter than that which interacts with electromagnetic radiation.[10]

Important as dark matter is thought to be in the cosmos, direct evidence of its existence and a concrete understanding of its nature have remained elusive. Though the theory of dark matter remains the most widely accepted theory to explain the anomalies in observed galactic rotation, some alternative theoretical approaches have been developed which broadly fall into the categories of modified gravitational laws and quantum gravitational laws.[11]

Baryonic and nonbaryonic dark matter

Fermi-LAT observations of dwarf galaxies provide new insights on dark matter.

There are three separate lines of evidence that the majority of dark matter is not made of baryons (ordinary matter including protons and neutrons):
  • The theory of Big Bang nucleosynthesis, which very accurately predicts the observed abundance of the chemical elements,[12] predicts that baryonic matter accounts for around 4–5 percent of the critical density of the Universe. In contrast, evidence from large-scale structure and other observations indicates that the total matter density is about 30% of the critical density.
  • Large astronomical searches for gravitational microlensing, including the MACHO, EROS and OGLE projects, have shown that only a small fraction of the dark matter in the Milky Way can be hiding in dark compact objects; the excluded range of object masses, from about half the Earth's mass up to 30 solar masses, covers nearly all the plausible candidates.
  • Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background observed by WMAP and Planck shows that around five-sixths of the total matter is in a form which does not interact significantly with ordinary matter or photons.
A small proportion of dark matter may be baryonic dark matter: astronomical bodies, such as massive compact halo objects, that are composed of ordinary matter but which emit little or no electromagnetic radiation. Study of nucleosynthesis in the Big Bang produces an upper bound on the amount of baryonic matter in the universe,[13] which indicates that the vast majority of dark matter in the universe cannot be baryons, and thus does not form atoms. It also cannot interact with ordinary matter via electromagnetic forces; in particular, dark matter particles do not carry any electric charge.

Candidates for nonbaryonic dark matter are hypothetical particles such as axions, or supersymmetric particles; neutrinos can only form a small fraction of the dark matter, due to limits from large-scale structure and high-redshift galaxies. Unlike baryonic dark matter, nonbaryonic dark matter does not contribute to the formation of the elements in the early universe ("Big Bang nucleosynthesis")[7] and so its presence is revealed only via its gravitational attraction. In addition, if the particles of which it is composed are supersymmetric, they can undergo annihilation interactions with themselves, possibly resulting in observable by-products such as gamma rays and neutrinos ("indirect detection").[14]

Nonbaryonic dark matter is classified in terms of the mass of the particle(s) that is assumed to make it up, and/or the typical velocity dispersion of those particles (since more massive particles move more slowly). There are three prominent hypotheses on nonbaryonic dark matter, called cold dark matter (CDM), warm dark matter (WDM), and hot dark matter (HDM); some combination of these is also possible. The most widely discussed models for nonbaryonic dark matter are based on the cold dark matter hypothesis, and the corresponding particle is most commonly assumed to be a weakly interacting massive particle (WIMP). Hot dark matter may include (massive) neutrinos, but observations imply that only a small fraction of dark matter can be hot. Cold dark matter leads to a "bottom-up" formation of structure in the universe while hot dark matter would result in a "top-down" formation scenario; since the late 1990s, the latter has been ruled out by observations of high-redshift galaxies such as the Hubble Ultra-Deep Field.[9]

Observational evidence

This artist’s impression shows the expected distribution of dark matter in the Milky Way galaxy as a blue halo of material surrounding the galaxy.[15]

The first person to interpret evidence and infer the presence of dark matter was Dutch astronomer Jan Oort, a pioneer in radio astronomy, in 1932.[16] Oort was studying stellar motions in the local galactic neighbourhood and found that the mass in the galactic plane must be more than the material that could be seen, but this measurement was later determined to be essentially erroneous.[17] In 1933, the Swiss astrophysicist Fritz Zwicky, who studied clusters of galaxies while working at the California Institute of Technology, made a similar inference.[18][19] Zwicky applied the virial theorem to the Coma cluster of galaxies and obtained evidence of unseen mass. Zwicky estimated the cluster's total mass based on the motions of galaxies near its edge and compared that estimate to one based on the number of galaxies and total brightness of the cluster. He found that there was about 400 times more estimated mass than was visually observable. The gravity of the visible galaxies in the cluster would be far too small for such fast orbits, so something extra was required. This is known as the "missing mass problem". Based on these conclusions, Zwicky inferred that there must be some non-visible form of matter which would provide enough of the mass and gravity to hold the cluster together.
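
Zwicky's virial estimate amounts, up to a geometry factor of order unity, to M ≈ σ² R / G, where σ is the line-of-sight velocity dispersion of the cluster galaxies and R the cluster radius. An order-of-magnitude sketch with Coma-like but hypothetical inputs:

G = 6.674e-11                       # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30                    # solar mass [kg]
MPC = 3.086e22                      # megaparsec [m]

def virial_mass_msun(sigma_km_s, R_mpc):
    # M ~ sigma^2 * R / G, dropping the order-unity geometry factor.
    sigma = sigma_km_s * 1.0e3
    return sigma ** 2 * (R_mpc * MPC) / G / M_SUN

# sigma ~ 1000 km/s and R ~ 3 Mpc give ~7e14 solar masses, far more
# than the luminous matter of a rich cluster can account for.
print(f"{virial_mass_msun(1000.0, 3.0):.1e}")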

Much of the evidence for dark matter comes from the study of the motions of galaxies.[20] Many of these appear to be fairly uniform, so by the virial theorem, the total kinetic energy should be half the total gravitational binding energy of the galaxies. Observationally, however, the total kinetic energy is found to be much greater: in particular, assuming the gravitational mass is due to only the visible matter of the galaxy, stars far from the center of galaxies have much higher velocities than predicted by the virial theorem. Galactic rotation curves, which illustrate the velocity of rotation versus the distance from the galactic center, cannot be explained by only the visible matter. Assuming that the visible material makes up only a small part of the cluster is the most straightforward way of accounting for this. Galaxies show signs of being composed largely of a roughly spherically symmetric, centrally concentrated halo of dark matter with the visible matter concentrated in a disc at the center. Low surface brightness dwarf galaxies are important sources of information for studying dark matter, as they have an uncommonly low ratio of visible matter to dark matter, and have few bright stars at the center which would otherwise impair observations of the rotation curve of outlying stars.
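
For a circular orbit, the enclosed mass implied by a rotation speed v at radius r is M(<r) = v² r / G in the spherical approximation, so a flat rotation curve means the enclosed mass keeps growing linearly with radius. A small sketch with illustrative, Milky Way-like numbers:

G = 6.674e-11                       # m^3 kg^-1 s^-2
M_SUN = 1.989e30                    # kg
KPC = 3.086e19                      # kiloparsec [m]

def enclosed_mass_msun(v_km_s, r_kpc):
    # M(<r) = v^2 * r / G for a circular orbit (spherical approximation).
    v = v_km_s * 1.0e3
    return v ** 2 * (r_kpc * KPC) / G / M_SUN

# A flat 220 km/s curve: doubling r from 15 to 30 kpc doubles the
# implied mass, even where little starlight is seen.
print(enclosed_mass_msun(220.0, 15.0))   # ~1.7e11
print(enclosed_mass_msun(220.0, 30.0))   # ~3.4e11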

Gravitational lensing observations of galaxy clusters allow direct estimates of the gravitational mass based on its effect on light from background galaxies, since large collections of matter (dark or otherwise) will gravitationally deflect light. In clusters such as Abell 1689, lensing observations confirm the presence of considerably more mass than is indicated by the clusters' light alone. In the Bullet Cluster, lensing observations show that much of the lensing mass is separated from the X-ray-emitting baryonic mass. In July 2012, lensing observations were used to identify a "filament" of dark matter between two clusters of galaxies, as cosmological simulations have predicted.[21]

Galaxy rotation curves

Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). Dark matter can explain the 'flat' appearance of the velocity curve out to a large radius

After Zwicky's initial observations, the first indication that the mass-to-light ratio was anything other than unity came from measurements made by Horace W. Babcock. In 1939, Babcock reported in his PhD thesis measurements of the rotation curve for the Andromeda nebula which suggested that the mass-to-luminosity ratio increases radially.[22] He, however, attributed it to either absorption of light within the galaxy or modified dynamics in the outer portions of the spiral and not to any form of missing matter. In the late 1960s and early 1970s, Vera Rubin, a young astronomer at the Department of Terrestrial Magnetism at the Carnegie Institution of Washington, worked with a new sensitive spectrograph that could measure the velocity curve of edge-on spiral galaxies to a greater degree of accuracy than had ever before been achieved.[23] Together with fellow staff member Kent Ford, Rubin announced at a 1975 meeting of the American Astronomical Society the discovery that most stars in spiral galaxies orbit at roughly the same speed, which implied that the mass densities of the galaxies were uniform well beyond the regions containing most of the stars (the galactic bulge), a result independently found in 1978.[24] An influential paper presented Rubin's results in 1980.[25] Rubin's observations and calculations showed that most galaxies must contain about six times as much “dark” mass as can be accounted for by the visible stars. Eventually other astronomers began to corroborate her work and it soon became well established that most galaxies were dominated by "dark matter":
  • Low Surface Brightness (LSB) galaxies.[26] LSBs are probably everywhere dark matter-dominated, with the observed stellar populations making only a small contribution to rotation curves. Such a property is extremely important because it allows one to avoid the difficulties associated with the deprojection and disentanglement of the dark and visible contributions to the rotation curves.[9]
  • Spiral Galaxies.[27] Rotation curves of both low and high surface luminosity galaxies appear to suggest a universal density profile, which can be expressed as the sum of an exponential thin stellar disk and a spherical dark matter halo with a flat core of radius r0 and density ρ0 = 4.5 × 10^−2 (r0/kpc)^−2/3 M☉ pc^−3.
  • Elliptical galaxies. Some elliptical galaxies show evidence for dark matter via strong gravitational lensing,[28] while X-ray evidence reveals the presence of extended atmospheres of hot gas that fill the dark haloes of isolated ellipticals and whose hydrostatic support provides evidence for dark matter. Other ellipticals have low velocities in their outskirts (tracked for example by planetary nebulae) and were interpreted as not having dark matter haloes.[9] However, simulations of disk-galaxy mergers indicate that stars were torn by tidal forces from their original galaxies during the first close passage and put on outgoing trajectories, explaining the low velocities even with a DM halo.[29] More research is needed to clarify this situation.
Simulated dark matter haloes have significantly steeper density profiles (with central cusps) than are inferred from observations, which, as of 2008, is a problem for cosmological models with dark matter at the smallest scale of galaxies.[9] This may only be a problem of resolution: star-forming regions which might alter the dark matter distribution via outflows of gas have been too small to resolve and model simultaneously with larger dark matter clumps. A recent simulation[30] of a dwarf galaxy that resolved these star-forming regions reported that strong outflows from supernovae remove low-angular-momentum gas, which inhibits the formation of a galactic bulge and decreases the dark matter density to less than half of what it would have been in the central kiloparsec. The simulated dwarfs, bulgeless and with shallow central dark matter profiles, correspond closely to observed dwarf galaxies. There are no such discrepancies at the larger scales of clusters of galaxies and above, or in the outer regions of galaxy haloes.
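
The reasoning behind the flat-curve argument can be made concrete. For a circular orbit, v² = GM(r)/r, so a rotation speed that stays constant with radius requires the enclosed mass to grow linearly, M(r) ∝ r, far beyond the visible starlight. A minimal Python sketch (the 220 km/s speed and the radii are illustrative, not measurements of any particular galaxy):

    # Enclosed mass implied by a circular orbit: v^2 = G*M(r)/r  =>  M(r) = v^2 * r / G
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg
    KPC = 3.086e19       # kiloparsec, m

    def enclosed_mass(v_kms, r_kpc):
        """Mass in solar masses enclosed within radius r for circular speed v."""
        return (v_kms * 1e3) ** 2 * (r_kpc * KPC) / G / M_SUN

    # A curve that stays flat at ~220 km/s implies M(r) growing linearly with r,
    # long after the starlight has faded.
    for r in (5, 10, 20, 40):
        print(f"{r} kpc: {enclosed_mass(220, r):.1e} Msun")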

Exceptions to this general picture of dark matter haloes appear to be galaxies with mass-to-light ratios close to that of stars.[citation needed] Since then, numerous observations have indicated the presence of dark matter in various parts of the cosmos: observations of the cosmic microwave background, of supernovas used as distance measures, of gravitational lensing at various scales, and many types of sky survey. Together with Rubin's findings for spiral galaxies and Zwicky's work on galaxy clusters, the observational evidence for dark matter accumulated over the decades, to the point that by the 1980s most astrophysicists accepted its existence.[31] As a unifying concept, dark matter is one of the dominant features considered in the analysis of structures at galactic scales and larger.

Velocity dispersions of galaxies

In astronomy, the velocity dispersion σ is the spread of velocities about the mean velocity for a group of objects, such as a cluster of stars about a galaxy.
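
Operationally, σ is just the standard deviation of the measured line-of-sight velocities, and a rough virial mass follows from it. A minimal Python sketch (the velocities, the radius, and the order-unity virial prefactor are illustrative assumptions, not data):

    import numpy as np

    G = 6.674e-11; M_SUN = 1.989e30; MPC = 3.086e22

    # Hypothetical line-of-sight velocities (km/s) of galaxies in a cluster
    v = np.array([1250.0, 980.0, 1410.0, 1100.0, 870.0, 1520.0, 1190.0, 1330.0])
    sigma = v.std()                       # velocity dispersion about the mean

    # Order-of-magnitude virial mass, M ~ sigma^2 * R / G (a prefactor of order
    # unity depends on the assumed mass profile); R is an assumed cluster radius.
    R = 1.5 * MPC
    M = (sigma * 1e3) ** 2 * R / G / M_SUN
    print(f"sigma = {sigma:.0f} km/s, M ~ {M:.1e} Msun")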

Rubin's pioneering work has stood the test of time. Measurements of velocity curves in spiral galaxies were soon followed by velocity dispersions of elliptical galaxies.[32] Though ellipticals sometimes exhibit lower mass-to-light ratios, the measurements still indicate a relatively high dark matter content. Likewise, measurements of the diffuse interstellar gas found at the edges of galaxies indicate not only dark matter distributions that extend beyond the visible limits of the galaxies, but also that the galaxies are virialized (i.e. gravitationally bound, with velocities corresponding to the orbital velocities predicted by general relativity) out to ten times their visible radii.[citation needed] This pushes the dark matter fraction of the total gravitating matter from the 50% measured by Rubin up to the now-accepted value of nearly 95%.

There are places where dark matter seems to be a small component or totally absent. Globular clusters show little evidence that they contain dark matter,[33] though their orbital interactions with galaxies do show evidence for galactic dark matter.[citation needed] For some time, measurements of the velocity profile of stars seemed to indicate a concentration of dark matter in the disk of the Milky Way. It now appears, however, that the high concentration of baryonic matter in the disk of the galaxy (especially in the interstellar medium) can account for this motion. Galaxy mass profiles are thought to look very different from the light profiles: the typical model is a smooth, spherical distribution of dark matter in virialized halos. Such a distribution would have to be the case to avoid small-scale (stellar) dynamical effects. Research reported in January 2006 from the University of Massachusetts Amherst explained the previously mysterious warp in the disk of the Milky Way by the interaction of the Large and Small Magellanic Clouds, once the roughly twentyfold increase in the Milky Way's mass implied by dark matter is taken into account.[34]

In 2005, astronomers from Cardiff University claimed to have discovered a galaxy made almost entirely of dark matter, 50 million light-years away in the Virgo Cluster, which was named VIRGOHI21.[35] Unusually, VIRGOHI21 does not appear to contain any visible stars: it was detected through radio-frequency observations of hydrogen. Based on rotation profiles, the scientists estimate that this object contains approximately 1000 times more dark matter than hydrogen and has a total mass of about 1/10 that of the Milky Way. For comparison, the Milky Way is estimated to have roughly 10 times as much dark matter as ordinary matter. Models of the Big Bang and structure formation have suggested that such dark galaxies should be very common in the universe[citation needed], but none had previously been detected. If the existence of this dark galaxy is confirmed, it would provide strong evidence for the theory of galaxy formation and pose problems for alternative explanations of dark matter.

There are some galaxies whose velocity profile indicates an absence of dark matter, such as NGC 3379.[36]

Galaxy clusters and gravitational lensing

Strong gravitational lensing as observed by the Hubble Space Telescope in Abell 1689 indicates the presence of dark matter; note the lensing arcs.

Galaxy clusters are especially important for dark matter studies since their masses can be estimated in three independent ways:
  • From the scatter in radial velocities of the galaxies within them (as in Zwicky's early observations, with much larger modern samples).
  • From X-rays emitted by very hot gas within the clusters. The temperature and density of the gas can be estimated from the energy and flux of the X-rays, and hence the gas pressure; assuming that pressure and gravity balance, the mass profile of the cluster can be derived (a worked sketch of this estimate follows the list). Many of the observations of the Chandra X-ray Observatory use this technique to independently determine the mass of clusters. These observations generally indicate a ratio of baryonic to total mass of approximately 12–15 percent, in reasonable agreement with the Planck spacecraft cosmic average of 15.5–16 percent.[37]
  • From their gravitational lensing effects on background objects, usually more distant galaxies. This is observed as "strong lensing" (multiple images) near the cluster core, and weak lensing (shape distortions) in the outer parts. Several large Hubble projects have used this method to measure cluster masses.
Generally, these three methods are in reasonable agreement: clusters contain far more matter than their visible galaxies and gas account for.
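
As a worked sketch of the X-ray method, hydrostatic equilibrium gives M(<r) = −(kT r / G μ m_p)(dln ρ/dln r + dln T/dln r), and representative values reproduce the characteristic 10¹⁴–10¹⁵ M☉ cluster masses. A minimal Python sketch (the temperature and logarithmic slopes are illustrative placeholders, not data):

    # Hydrostatic cluster mass from its X-ray-emitting gas:
    #   M(<r) = -(k*T*r / (G*mu*m_p)) * (dln(rho)/dln(r) + dln(T)/dln(r))
    G = 6.674e-11        # gravitational constant, SI
    M_P = 1.673e-27      # proton mass, kg
    M_SUN = 1.989e30; MPC = 3.086e22
    MU = 0.6             # mean molecular weight of the ionized intracluster gas

    def hydrostatic_mass(T_keV, r_mpc, dlnrho_dlnr, dlnT_dlnr=0.0):
        kT = T_keV * 1.602e-16                       # gas temperature as energy, J
        r = r_mpc * MPC
        return -(kT * r) / (G * MU * M_P) * (dlnrho_dlnr + dlnT_dlnr) / M_SUN

    # 7 keV gas whose density falls as r^-2, evaluated at 1 Mpc: ~5e14 solar masses
    print(f"{hydrostatic_mass(7.0, 1.0, -2.0):.1e} Msun")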

A gravitational lens is formed when the light from a more distant source (such as a quasar) is "bent" around a massive object (such as a cluster of galaxies) between the source object and the observer. The process is known as gravitational lensing.

The galaxy cluster Abell 2029 is composed of thousands of galaxies enveloped in a cloud of hot gas, and an amount of dark matter equivalent to more than 10¹⁴ M☉. At the center of this cluster is an enormous, elliptically shaped galaxy that is thought to have been formed from the mergers of many smaller galaxies.[38] The measured orbital velocities of galaxies within galaxy clusters have been found to be consistent with dark matter observations.

Another important tool for future dark matter observations is gravitational lensing. Lensing relies on the effects of general relativity to measure masses without any assumptions about the dynamics, and so is a completely independent probe of dark matter. Strong lensing, the observed distortion of background galaxies into arcs when their light passes through a gravitational lens, has been observed around a few distant clusters including Abell 1689 (pictured right).[39] By measuring the distortion geometry, the mass of the cluster causing the phenomenon can be obtained. In the dozens of cases where this has been done, the mass-to-light ratios obtained correspond to the dynamical dark matter measurements of clusters.[40]
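
The geometry can be sketched quantitatively: images form near the Einstein radius θ_E, where θ_E² = 4GM·D_ls / (c²·D_l·D_s), so a measured θ_E plus the angular-diameter distances yields the projected mass. A minimal Python sketch (the 45-arcsecond radius and the distances are illustrative placeholders, not measurements of any particular cluster):

    import math

    G = 6.674e-11; C = 2.998e8; M_SUN = 1.989e30; MPC = 3.086e22
    ARCSEC = math.pi / (180 * 3600)      # radians per arcsecond

    def einstein_mass(theta_arcsec, D_l_mpc, D_s_mpc, D_ls_mpc):
        """Mass inside the Einstein radius, from theta_E^2 = 4*G*M*D_ls/(c^2*D_l*D_s)."""
        theta = theta_arcsec * ARCSEC
        D_l, D_s, D_ls = (d * MPC for d in (D_l_mpc, D_s_mpc, D_ls_mpc))
        return theta**2 * C**2 * D_l * D_s / (4 * G * D_ls) / M_SUN

    # A ~45 arcsec Einstein radius with placeholder angular-diameter distances
    print(f"{einstein_mass(45.0, 950.0, 1700.0, 1200.0):.1e} Msun")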

Weak gravitational lensing uses statistical analyses of vast galaxy surveys to measure the minute distortions imposed on background galaxies by foreground objects. By examining the apparent shear deformation of adjacent background galaxies, astrophysicists can characterize the mean distribution of dark matter, and have found mass-to-light ratios that correspond to the dark matter densities predicted by other large-scale structure measurements.[41] The correspondence of the two gravitational lensing techniques with other dark matter measurements has convinced almost all astrophysicists that dark matter actually exists as a major component of the universe's composition.
The Bullet Cluster: HST image with overlays. The total projected mass distribution reconstructed from strong and weak gravitational lensing is shown in blue, while the X-ray emitting hot gas observed with Chandra is shown in red.

The most direct observational evidence to date for dark matter is in a system known as the Bullet Cluster. In most regions of the universe, dark matter and visible material are found together,[42] as expected because of their mutual gravitational attraction. In the Bullet Cluster, a collision between two galaxy clusters appears to have caused a separation of dark matter and baryonic matter. X-ray observations show that much of the baryonic matter (in the form of 10⁷–10⁸ K[43] gas or plasma) in the system is concentrated in the center of the system. Electromagnetic interactions between passing gas particles caused them to slow down and settle near the point of impact. However, weak gravitational lensing observations of the same system show that much of the mass resides outside of the central region of baryonic gas. Because dark matter does not interact by electromagnetic forces, it would not have been slowed in the same way as the X-ray visible gas, so the dark matter components of the two clusters passed through each other without slowing down substantially. This accounts for the separation. Unlike the galactic rotation curves, this evidence for dark matter is independent of the details of Newtonian gravity, so it is claimed to be direct evidence of the existence of dark matter.[43] Another galaxy cluster, known as the Train Wreck Cluster/Abell 520, appears to have an unusually massive and dark core containing few of the cluster's galaxies, which presents problems for standard dark matter models.[44]

This may be explained by the dark core actually being a long, low-density dark matter filament (containing few galaxies) along the line of sight, projected onto the cluster core.[45]

The observed behavior of dark matter in clusters constrains whether and how much dark matter scatters off other dark matter particles, quantified as its self-interaction cross section. More simply, the question is whether the dark matter has pressure, and thus can be described as a perfect fluid.[46] The distribution of mass (and thus dark matter) in galaxy clusters has been used to argue both for[47] and against[48] the existence of significant self-interaction in dark matter. Specifically, the distribution of dark matter in merging clusters such as the Bullet Cluster shows that dark matter scatters off other dark matter particles only very weakly if at all.[49]

Cosmic microwave background

Angular fluctuations in the cosmic microwave background (CMB) spectrum provide evidence for dark matter. Since the 1964 discovery and confirmation of the CMB radiation,[50] many measurements of the CMB have supported and constrained this theory. The NASA Cosmic Background Explorer (COBE) found that the CMB spectrum is a blackbody spectrum with a temperature of 2.726 K. In 1992, COBE detected fluctuations (anisotropies) in the CMB spectrum, at a level of about one part in 10⁵.[51] During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. The primary goal of these experiments was to measure the angular scale of the first acoustic peak of the power spectrum of the anisotropies, for which COBE had lacked sufficient resolution. During the 1990s, the first peak was measured with increasing sensitivity, and in 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree; together with other measurements in 2000–2001, this showed the Universe to be almost spatially flat, as inferred from the typical angular size (the size on the sky) of the anisotropies.[52] These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory.
A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, the Degree Angular Scale Interferometer (DASI) and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB,[53][54] and the CBI provided the first E-mode polarization spectrum, with compelling evidence that it is out of phase with the T-mode spectrum.[55] COBE's successor, the Wilkinson Microwave Anisotropy Probe (WMAP), provided the most detailed measurements of the (large-scale) anisotropies in the CMB as of 2009, with ESA's Planck spacecraft returning more detailed results in 2012–2014.[56] WMAP's measurements played the key role in establishing the current Standard Model of Cosmology, the Lambda-CDM model: a flat universe dominated by dark energy, supplemented by dark matter and atoms, with density fluctuations seeded by a Gaussian, adiabatic, nearly scale-invariant process. The basic properties of this universe are determined by five numbers: the density of matter, the density of atoms, the age of the universe (or equivalently, the Hubble constant today), the amplitude of the initial fluctuations, and their scale dependence.

A successful Big Bang cosmology must fit all available astronomical observations, including the CMB. In cosmology, the CMB is explained as relic radiation from shortly after the Big Bang. The anisotropies in the CMB are explained as acoustic oscillations in the photon-baryon plasma (before photons decoupled from baryons roughly 379,000 years after the Big Bang) whose restoring force is gravity.[57] Ordinary (baryonic) matter interacts strongly with radiation whereas, by definition, dark matter does not; both affect the oscillations through their gravity, so the two forms of matter have different effects. The typical angular scales of the oscillations, measured as the power spectrum of the CMB anisotropies, thus reveal the separate effects of baryonic matter and dark matter. The CMB power spectrum shows a large first peak and smaller successive peaks, with three peaks resolved as of 2009.[56] The first peak mostly constrains the density of baryonic matter, while the third peak mostly constrains the density of dark matter, thereby measuring both the density of matter and the density of atoms in the universe.
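
This dependence of the peak structure on the two matter densities is what Boltzmann codes compute. A minimal sketch using the publicly available camb Python package (the parameter values are illustrative Planck-like numbers, not fits):

    # Compare CMB power spectra for two cold dark matter densities with camb
    import camb

    def cmb_tt(ombh2, omch2):
        pars = camb.CAMBparams()
        pars.set_cosmology(H0=67.5, ombh2=ombh2, omch2=omch2)  # baryons / cold DM
        pars.InitPower.set_params(ns=0.965)
        pars.set_for_lmax(2500)
        results = camb.get_results(pars)
        return results.get_cmb_power_spectra(pars, CMB_unit='muK')['total'][:, 0]

    fiducial = cmb_tt(ombh2=0.022, omch2=0.120)   # Planck-like densities
    low_dm = cmb_tt(ombh2=0.022, omch2=0.060)     # same baryons, half the dark matter
    # The relative heights of the acoustic peaks differ between the two spectra;
    # matching the observed peaks is how the CMB pins down the dark matter density.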

Sky surveys and baryon acoustic oscillations

The acoustic oscillations in the early universe (see the previous section) leave their imprint in the visible matter by Baryon Acoustic Oscillation (BAO) clustering, in a way that can be measured with sky surveys such as the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.[58] These measurements are consistent with those of the CMB derived from the WMAP spacecraft and further constrain the Lambda CDM model and dark matter. Note that the CMB data and the BAO data measure the acoustic oscillations at very different distance scales.[57]

Type Ia supernovae distance measurements

Type Ia supernovae can be used as "standard candles" to measure extragalactic distances, and extensive data sets of these supernovae can be used to constrain cosmological models.[59] They yield a dark energy density of ΩΛ ≈ 0.713 for a flat Lambda CDM Universe, and constrain the parameter w of quintessence models. Once again, the values obtained are roughly consistent with those derived from the WMAP observations and further constrain the Lambda CDM model and (indirectly) dark matter.[57]
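
The underlying comparison is between observed supernova brightnesses and the luminosity distance each candidate cosmology predicts, d_L(z) = (1+z)(c/H0) ∫₀^z dz′/√(Ωm(1+z′)³ + ΩΛ) for a flat model. A minimal Python sketch of the forward calculation (H0 and the density values are illustrative assumptions):

    import math
    from scipy.integrate import quad

    C_KMS = 299792.458   # speed of light, km/s

    def luminosity_distance(z, H0=70.0, Om=0.287, OL=0.713):
        """d_L in Mpc for a flat Lambda-CDM cosmology."""
        integral, _ = quad(lambda zp: (Om * (1 + zp)**3 + OL) ** -0.5, 0.0, z)
        return (1 + z) * (C_KMS / H0) * integral

    def distance_modulus(z):
        d_pc = luminosity_distance(z) * 1e6
        return 5.0 * (math.log10(d_pc) - 1.0)    # mu = 5*log10(d / 10 pc)

    # Predicted distance moduli, to compare against observed supernova brightnesses
    for z in (0.1, 0.5, 1.0):
        print(f"z = {z}: mu = {distance_modulus(z):.2f} mag")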

Lyman-alpha forest

In astronomical spectroscopy, the Lyman-alpha forest is the sum of absorption lines arising from the Lyman-alpha transition of the neutral hydrogen in the spectra of distant galaxies and quasars. Observations of the Lyman-alpha forest can also be used to constrain cosmological models.[60] These constraints are again in agreement with those obtained from WMAP data.

Structure formation

3D map of the large-scale distribution of dark matter, reconstructed from measurements of weak gravitational lensing with the Hubble Space Telescope.[61]

Dark matter is crucial to the Big Bang model of cosmology as a component which corresponds directly to measurements of the parameters associated with Friedmann cosmology solutions to general relativity. In particular, measurements of the cosmic microwave background anisotropies correspond to a cosmology where much of the matter interacts with photons more weakly than the known forces that couple light interactions to baryonic matter. Likewise, a significant amount of non-baryonic, cold matter is necessary to explain the large-scale structure of the universe.

Observations suggest that structure formation in the universe proceeds hierarchically, with the smallest structures collapsing first, followed by galaxies and then clusters of galaxies. As structures collapse in the evolving universe, they begin to "light up" as their baryonic matter heats up through gravitational contraction and approaches hydrostatic pressure balance. Ordinary baryonic matter had too high a temperature and too much pressure left over from the Big Bang to collapse and form smaller structures, such as stars, via the Jeans instability; dark matter acts as a compactor of structure. This model not only corresponds with statistical surveys of the visible structure in the universe but also corresponds closely to the dark matter predictions from the cosmic microwave background.

This bottom-up model of structure formation requires something like cold dark matter to succeed. Large computer simulations of billions of dark matter particles have been used[62] to confirm that the cold dark matter model of structure formation is consistent with the structures observed in the universe through galaxy surveys such as the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey, as well as observations of the Lyman-alpha forest. These studies have been crucial in constructing the Lambda-CDM model, which constrains the cosmological parameters, including the fractions of the universe made up of baryons and dark matter.

There are, however, several points of tension between observations and simulations of structure formation driven by dark matter. There is evidence that there are 10 to 100 times fewer small galaxies than the dark matter theory of galaxy formation predicts.[63][64] This is known as the dwarf galaxy problem. In addition, the simulations predict dark matter distributions with a very dense cusp near the centers of galaxies, but the observed halos are smoother than predicted.

History of the search for its composition

An unsolved problem in physics: What is dark matter? How is it generated? Is it related to supersymmetry?
Although dark matter had historically been inferred from many astronomical observations, its composition long remained speculative. Early theories concentrated on hidden heavy normal objects (such as black holes, neutron stars, faint old white dwarfs, and brown dwarfs) as the possible candidates, collectively known as massive compact halo objects or MACHOs. Astronomical surveys for gravitational microlensing, including the MACHO, EROS and OGLE projects, along with Hubble telescope searches for ultra-faint stars, have not found enough of these hidden MACHOs.[65][66][67] Some hard-to-detect baryonic matter, such as MACHOs and some forms of gas, was additionally speculated to contribute to the overall dark matter content, but evidence indicated it would constitute only a small portion.[68][69][70]

Furthermore, a number of other lines of evidence, including galaxy rotation curves, gravitational lensing, structure formation, and the fraction of baryons in clusters, as well as the cluster abundance combined with independent evidence for the baryon density, indicated that 85–90% of the mass in the universe does not interact with the electromagnetic force. This "nonbaryonic dark matter" is evident only through its gravitational effects. Consequently, the most commonly held view was that dark matter is primarily non-baryonic, made of one or more elementary particles other than the usual electrons, protons, neutrons, and known neutrinos. The most commonly proposed particles then became WIMPs (Weakly Interacting Massive Particles, including neutralinos), axions, or sterile neutrinos, though many other possible candidates have been proposed.

The dark matter component has much more mass than the "visible" component of the universe.[71] Only about 4.6% of the mass-energy of the Universe is ordinary matter. About 23% is thought to be composed of dark matter. The remaining 72% is thought to consist of dark energy, an even stranger component, distributed almost uniformly in space and with energy density non-evolving or slowly evolving with time.[72] Determining the nature of this dark matter is one of the most important problems in modern cosmology and particle physics. It has been noted that the names "dark matter" and "dark energy" serve mainly as expressions of human ignorance, much like the marking of early maps with "terra incognita".[72]

Dark matter candidates can be approximately divided into three classes, called cold, warm and hot dark matter.[73]

These categories do not correspond to an actual temperature. Instead, they refer to how fast the particles were moving, and thus how far they traveled due to random motions in the early universe before the expansion of the Universe slowed them down; this distance is called the "free-streaming length". Primordial density fluctuations smaller than the free-streaming length are washed out as particles move from overdense to underdense regions, while fluctuations larger than it are unaffected; the free-streaming length therefore sets a minimum scale for structure formation.
  • Cold dark matter – objects with a free-streaming length much smaller than a protogalaxy.[74]
  • Warm dark matter – particles with a free-streaming length similar to a protogalaxy.
  • Hot dark matter – particles with a free-streaming length much larger than a protogalaxy.[75]
A fourth category, mixed dark matter, was considered early on, but was eliminated during the 1990s following the discovery of dark energy.

As an example, Davis et al. wrote in 1985:
Candidate particles can be grouped into three categories on the basis of their effect on the fluctuation spectrum (Bond et al. 1983). If the dark matter is composed of abundant light particles which remain relativistic until shortly before recombination, then it may be termed "hot". The best candidate for hot dark matter is a neutrino ... A second possibility is for the dark matter particles to interact more weakly than neutrinos, to be less abundant, and to have a mass of order 1 keV. Such particles are termed "warm dark matter", because they have lower thermal velocities than massive neutrinos ... there are at present few candidate particles which fit this description. Gravitinos and photinos have been suggested (Pagels and Primack 1982; Bond, Szalay and Turner 1982) ... Any particles which became nonrelativistic very early, and so were able to diffuse a negligible distance, are termed "cold" dark matter (CDM). There are many candidates for CDM including supersymmetric particles.[76]
The full calculations are quite technical, but an approximate dividing line is that "warm" dark matter particles became non-relativistic when the universe was approximately 1 year old and one millionth of its present size; standard hot big bang theory implies the universe was then in the radiation-dominated era (photons and neutrinos), with a photon temperature of 2.7 million K. Standard physical cosmology gives the particle horizon size as 2ct in the radiation-dominated era, thus 2 light-years, and a region of this size would expand to 2 million light-years today (absent structure formation). The actual free-streaming length is roughly 5 times larger, since it continues to grow slowly as particle velocities decrease inversely with the scale factor after the particles become non-relativistic; in this example the free-streaming length would therefore correspond to 10 million light-years, or 3 Mpc, today, roughly the scale containing on average the mass of a large galaxy.
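
The arithmetic of this example is simple enough to reproduce directly; a sketch using the rough factors quoted above:

    # Rough free-streaming scale for "warm" dark matter, following the text above
    horizon_ly = 2 * 1.0          # particle horizon ~ 2*c*t at t ~ 1 year, in light-years
    expansion = 1e6               # growth of the universe since that epoch
    comoving_ly = horizon_ly * expansion     # ~2 million light-years today
    free_streaming_ly = 5 * comoving_ly      # extra factor ~5 of slow growth afterwards
    print(free_streaming_ly / 1e6, "million light-years")   # ~10 Mly, i.e. ~3 Mpc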

The above temperature of 2.7 million K corresponds to a typical photon energy of 250 electron-volts, and this sets a typical mass scale for "warm" dark matter: particles much more massive than this, such as GeV–TeV mass WIMPs, would become non-relativistic much earlier than 1 year after the Big Bang and thus have a free-streaming length much smaller than a proto-galaxy, effectively negligible (thus cold dark matter). Conversely, much lighter particles (e.g. neutrinos with masses of a few eV) have a free-streaming length much larger than a proto-galaxy (thus hot dark matter).
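
The classification therefore amounts to comparing a candidate's mass with kT at that epoch; a sketch (the factor-of-ten bands are a crude illustrative convention, not a standard definition):

    # Thermal energy at the ~1 year epoch and a crude hot/warm/cold classification
    K_B_EV = 8.617e-5                    # Boltzmann constant, eV per kelvin
    kT = K_B_EV * 2.7e6                  # ~230 eV at T = 2.7 million K
    for name, m_eV in [("neutrino", 1.0), ("warm candidate", 1e3), ("WIMP", 100e9)]:
        if m_eV < 0.1 * kT:
            kind = "hot"                 # still relativistic long after this epoch
        elif m_eV > 10 * kT:
            kind = "cold"                # non-relativistic much earlier
        else:
            kind = "warm"
        print(f"{name}: m = {m_eV:g} eV -> {kind} dark matter")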

Cold dark matter

Today, cold dark matter is the simplest explanation for most cosmological observations. "Cold" dark matter is dark matter composed of constituents with a free-streaming length much smaller than the ancestor of a galaxy-scale perturbation. This is currently the area of greatest interest for dark matter research, as hot dark matter does not seem to be viable for galaxy and galaxy cluster formation, and most particle candidates become non-relativistic at very early times, hence are classified as cold.
The composition of the constituents of cold dark matter is currently unknown. Possibilities range from large objects like MACHOs (such as black holes[77]) or RAMBOs, to new particles like WIMPs and axions. Possibilities involving normal baryonic matter include brown dwarfs, other stellar remnants such as white dwarfs, or perhaps small, dense chunks of heavy elements.

Studies of big bang nucleosynthesis and gravitational lensing have convinced most scientists[9][78][79][80][81][82] that MACHOs of any type cannot be more than a small fraction of the total dark matter.[7][78] Black holes of nearly any mass are ruled out as a primary dark matter constituent by a variety of searches and constraints.[78][80] According to A. Peter: "...the only really plausible dark-matter candidates are new particles."[79]

The DAMA/NaI experiment and its successor DAMA/LIBRA have claimed to directly detect dark matter particles passing through the Earth, but many scientists remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results.

Many supersymmetric models naturally give rise to stable dark matter candidates in the form of the Lightest Supersymmetric Particle (LSP). Separately, heavy sterile neutrinos exist in non-supersymmetric extensions to the standard model that explain the small neutrino mass through the seesaw mechanism.

Warm dark matter

Warm dark matter refers to particles with a free-streaming length comparable to the size of a region which subsequently evolved into a dwarf galaxy. This leads to predictions which are very similar to cold dark matter on large scales, including the CMB, galaxy clustering and large galaxy rotation curves, but with less small-scale density perturbations. This reduces the predicted abundance of dwarf galaxies and may lead to lower density of dark matter in the central parts of large galaxies; some researchers consider this may be a better fit to observations. A challenge for this model is that there are no very well-motivated particle physics candidates with the required mass ~ 300 eV to 3000 eV.
No particles that can be categorized as warm dark matter have yet been discovered. The leading postulated candidate is the sterile neutrino: a heavier, slower form of neutrino that, unlike regular neutrinos, does not interact even through the weak force. Interestingly, some modified gravity theories, such as scalar-tensor-vector gravity, also require warm dark matter to exist to make their equations work.

Hot dark matter

Hot dark matter consists of particles that have a free-streaming length much larger than that of a proto-galaxy.
An example of hot dark matter is already known: the neutrino. Neutrinos were discovered quite separately from the search for dark matter, and long before it seriously began: they were first postulated in 1930 and first detected in 1956. Neutrinos have a very small mass: at least 100,000 times less than that of an electron. Other than gravity, neutrinos interact with normal matter only via the weak force, making them very difficult to detect (the weak force works only over a small distance, so a neutrino triggers a weak-force event only if it hits a nucleus head-on). This makes them weakly interacting light particles (WILPs), as opposed to cold dark matter's theoretical candidates, the weakly interacting massive particles (WIMPs).

There are three known flavors of neutrinos (the electron, muon, and tau neutrinos), and their masses are slightly different. The resolution of the solar neutrino problem demonstrated that these three types actually change and oscillate from one flavor to the others and back while in flight. It is hard to determine an exact upper bound on the collective average mass of the three neutrinos (let alone a mass for any of the three individually). If, for example, the average neutrino mass were over 50 eV/c² (still less than 1/10,000th of the mass of an electron), the sheer number of neutrinos in the universe would cause it to collapse under their mass. So other observations have served to estimate an upper bound for the neutrino mass. Using cosmic microwave background data and other methods, the current conclusion is that their average mass probably does not exceed 0.3 eV/c². Thus, the normal forms of neutrinos cannot be responsible for the measured dark matter component from cosmology.[83]
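
The collapse argument works through the standard relation between the summed neutrino masses and their relic density, commonly quoted as Ω_ν h² ≈ Σm_ν / 94 eV. A minimal sketch (h = 0.7 is an assumed value):

    # Relic neutrino density: Omega_nu * h^2 ~ sum(m_nu) / 94 eV
    h = 0.7                               # assumed dimensionless Hubble parameter

    def omega_nu(sum_m_eV):
        return sum_m_eV / 94.0 / h**2

    # An average mass of 50 eV (x3 flavors) would overclose the universe,
    # while 0.3 eV contributes only a small fraction of the matter density.
    for m_avg in (50.0, 0.3):
        print(f"m_avg = {m_avg} eV -> Omega_nu ~ {omega_nu(3 * m_avg):.3f}")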

Hot dark matter was popular for a time in the early 1980s, but it suffers from a severe problem: because all galaxy-size density fluctuations are washed out by free-streaming, the first objects that can form are huge supercluster-size pancakes, which were then theorized to somehow fragment into galaxies. Deep-field observations clearly show that galaxies formed at early times, with clusters and superclusters forming later as galaxies clump together, so any model dominated by hot dark matter is in serious conflict with observations.

Mixed dark matter

Mixed dark matter is a now-obsolete model with a specifically chosen mass ratio of 80% cold dark matter to 20% hot dark matter (neutrinos). Though hot dark matter presumably coexists with cold dark matter in any case, there was a very specific reason for choosing this particular ratio in this model. During the early 1990s it became steadily clearer that a Universe with a critical density of cold dark matter did not fit the COBE and large-scale galaxy clustering observations; either the 80/20 mixed dark matter model or Lambda-CDM could reconcile them. With the discovery of the accelerating universe from supernovae, and more accurate measurements of CMB anisotropy and galaxy clustering, the mixed dark matter model was essentially ruled out, while the concordance Lambda-CDM model remained a good fit.

Detection

If the dark matter within our galaxy is made up of Weakly Interacting Massive Particles (WIMPs), then millions, possibly billions, of WIMPs must pass through every square centimeter of the Earth each second.[84][85] Many experiments, current or planned, aim to test this hypothesis by searching for WIMPs. Although WIMPs have historically been the more popular dark matter candidate for searches,[9] there are also experiments searching for other particle candidates; the Axion Dark Matter eXperiment (ADMX) is currently searching for the dark matter axion, a well-motivated and well-constrained candidate. It is also possible that dark matter consists of very heavy hidden-sector particles which interact with ordinary matter only via gravity.

These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector, and indirect detection experiments, which look for the products of WIMP annihilations.[14]

An alternative approach to the detection of WIMPs in nature is to produce them in the laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect WIMPs produced in collisions of the LHC proton beams. Because a WIMP has negligible interactions with matter, it may be detected indirectly as (large amounts of) missing energy and momentum which escape the LHC detectors, provided all the other (non-negligible) collision products are detected.[86] These experiments could show that WIMPs can be created, but it would still require a direct detection experiment to show that they exist in sufficient numbers in the galaxy to account for dark matter.
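
The "missing momentum" inference is simply momentum conservation in the plane transverse to the beams: whatever transverse momentum the visible products fail to balance is attributed to particles that escaped undetected. A schematic Python sketch with invented momenta:

    import math

    # Missing transverse momentum: minus the vector sum of visible particles' pT.
    # (px, py) of the reconstructed visible products of one invented event, in GeV
    visible = [(45.0, 12.0), (-20.0, 33.0), (-60.0, -80.0), (15.0, 5.0)]

    mpx = -sum(px for px, py in visible)
    mpy = -sum(py for px, py in visible)
    print(f"missing pT = {math.hypot(mpx, mpy):.1f} GeV")   # large values hint at invisibles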

Direct detection experiments

Direct detection experiments typically operate in deep underground laboratories to reduce the background from cosmic rays. These include: the Soudan mine; the SNOLAB underground laboratory at Sudbury, Ontario (Canada); the Gran Sasso National Laboratory (Italy); the Canfranc Underground Laboratory (Spain); the Boulby Underground Laboratory (UK); and the Deep Underground Science and Engineering Laboratory, South Dakota (US).

The majority of present experiments use one of two detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium; noble liquid detectors detect the flash of scintillation light produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include CDMS, CRESST, EDELWEISS and EURECA. Noble liquid experiments include ZEPLIN, XENON, DEAP, ArDM, WARP, DarkSide and LUX (the Large Underground Xenon detector). Both of these techniques are capable of distinguishing background particles, which scatter off electrons, from dark matter particles, which scatter off nuclei. Other experiments include SIMPLE and PICASSO.
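
For a sense of the energy scales these detectors must resolve, elastic two-body kinematics gives a maximum nuclear recoil energy of E_R = 2μ²v²/m_N, with μ the WIMP-nucleus reduced mass. A minimal sketch (the 100 GeV mass and 230 km/s velocity are typical illustrative values, not measurements):

    # Maximum nuclear recoil energy for elastic WIMP-nucleus scattering:
    #   E_R(max) = 2 * mu^2 * v^2 / m_N,  mu = m_chi*m_N / (m_chi + m_N)
    GEV_KG = 1.783e-27        # kg per GeV/c^2
    KEV_J = 1.602e-16         # joules per keV

    def max_recoil_keV(m_chi_GeV, A, v_kms=230.0):
        m_chi = m_chi_GeV * GEV_KG
        m_N = A * 0.9315 * GEV_KG          # nuclear mass ~ A atomic mass units
        mu = m_chi * m_N / (m_chi + m_N)   # reduced mass
        return 2 * mu**2 * (v_kms * 1e3) ** 2 / m_N / KEV_J

    # A 100 GeV WIMP on xenon (A = 131) deposits at most a few tens of keV
    print(f"{max_recoil_keV(100.0, 131):.0f} keV")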

The DAMA/NaI and DAMA/LIBRA experiments have detected an annual modulation in their event rate,[87] which they claim is due to dark matter particles (as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo varies by a small amount over the year). This claim is so far unconfirmed and difficult to reconcile with the negative results of other experiments, assuming that the WIMP scenario is correct.[88]
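
The expected signature is a small sinusoid in the event rate, peaking when the Earth's orbital velocity adds most strongly to the Sun's motion through the halo, around June 2. A sketch of the usual parameterization (the baseline rate and few-percent amplitude are illustrative):

    import math

    # Annual modulation of the event rate:
    #   R(t) = R0 + Rm * cos(2*pi*(t - t0)/T), with T = 1 year and t0 ~ June 2
    def rate(day, R0=1.0, Rm=0.02, t0=152.0, T=365.25):
        return R0 + Rm * math.cos(2 * math.pi * (day - t0) / T)

    # The rate peaks in early June and dips in early December
    for day, label in [(152, "Jun 2"), (335, "Dec 1")]:
        print(f"{label}: {rate(day):.3f} (arbitrary units)")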

Directional detection of dark matter is a search strategy based on the motion of the Solar System around the galactic center.[89][90][91][92]

By using a low-pressure time projection chamber (TPC), it is possible to access information on recoiling tracks (with 3D reconstruction where possible) and to constrain the WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun is travelling (roughly toward the constellation Cygnus) may then be separated from background noise, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, NEWAGE and MIMAC.

On 17 December 2009 CDMS researchers reported two possible WIMP candidate events. They estimate that the probability that these events are due to a known background (neutrons or misidentified beta or gamma events) is 23%, and conclude "this analysis cannot be interpreted as significant evidence for WIMP interactions, but we cannot reject either event as signal."[93]

More recently, on 4 September 2011, researchers using the CRESST detectors presented evidence[94] of 67 events in their detector crystals from sub-atomic particles, calculating that there is a less than 1 in 10,000 chance that all of them were caused by known sources of interference or contamination. It is therefore quite possible that many of these events were caused by WIMPs or other unknown particles.

Indirect detection experiments

Indirect detection experiments search for the products of WIMP annihilation or decay. If WIMPs are Majorana particles (i.e., their own antiparticles), then two WIMPs could annihilate to produce gamma rays or Standard Model particle-antiparticle pairs. Additionally, if the WIMP is unstable, it could decay into Standard Model particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from regions of high dark matter density. The detection of such a signal is not conclusive evidence for dark matter, as the production of gamma rays from other sources is not fully understood.[9][14]

The EGRET gamma ray telescope observed more gamma rays than expected from the Milky Way, but scientists concluded that this was most likely due to a mis-estimation of the telescope's sensitivity.[95]

The Fermi Gamma-ray Space Telescope, launched 11 June 2008, is searching for gamma rays from dark matter annihilation and decay.[96] In April 2012, an analysis[97] of previously available data from its Large Area Telescope instrument produced strong statistical evidence of a 130 GeV line in the gamma radiation coming from the center of the Milky Way. At the time, WIMP annihilation was the most probable explanation for that line.[98]

At higher energies, ground-based gamma-ray telescopes have set limits on the annihilation of dark matter in dwarf spheroidal galaxies[99] and in clusters of galaxies.[100]

The PAMELA experiment (launched 2006) has detected a larger number of positrons than expected. These extra positrons could be produced by dark matter annihilation, but may also come from pulsars. No excess of anti-protons has been observed.[101] The Alpha Magnetic Spectrometer on the International Space Station is designed to directly measure the fraction of cosmic rays which are positrons. The first results, published in April 2013, indicate an excess of high-energy cosmic rays which could potentially be due to annihilation of dark matter.[102][103][104][105][106][107]

A few of the WIMPs passing through the Sun or Earth may scatter off atoms and lose energy. This way a large population of WIMPs may accumulate at the center of these bodies, increasing the chance that two will collide and annihilate. This could produce a distinctive signal in the form of high-energy neutrinos originating from the center of the Sun or Earth.[108] It is generally considered that the detection of such a signal would be the strongest indirect proof of WIMP dark matter.[9] High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal.

WIMP annihilation from the Milky Way Galaxy as a whole may also be detected in the form of various annihilation products.[109] The Galactic center is a particularly good place to look because the density of dark matter may be very high there.[110]

In 2014, two independent groups, one led by the Leiden astrophysicist Alexey Boyarsky and another from Harvard, reported an unidentified X-ray emission line around 3.5 keV in the spectra of clusters of galaxies; this could possibly be an indirect signal from dark matter, such as the decay of a massive sterile neutrino.[111]

Alternative theories

Numerous alternatives have been proposed to explain these observations without the need for a large amount of undetected matter. Most of these modify the laws of gravity established by Newton and Einstein in some way.

Modified gravity laws

The earliest modified gravity model to emerge was Mordehai Milgrom's Modified Newtonian Dynamics (MOND) in 1983, which adjusts Newton's laws to create a stronger gravitational field where gravitational acceleration becomes tiny (such as near the rim of a galaxy). It had some success explaining galactic-scale features, such as the rotational velocity curves of elliptical and dwarf elliptical galaxies, but did not successfully explain galaxy cluster gravitational lensing.
However, MOND was not relativistic: it was an adjustment of the older Newtonian account of gravitation, not of the newer account in Einstein's general relativity. Soon after 1983, attempts were made to bring MOND into conformity with general relativity; this is an ongoing process, and many competing hypotheses have emerged based on the original MOND model, including TeVeS, MOG (STV gravity), and a phenomenological covariant approach,[112] among others.

In 2007, John W. Moffat proposed a modified gravity hypothesis based on the Nonsymmetric Gravitational Theory (NGT) that claims to account for the behavior of colliding galaxies.[113] This model requires the presence of non-relativistic neutrinos, or other candidates for (cold) dark matter, to work.

Another proposal uses a gravitational backreaction from an emerging theoretical field that seeks to explain gravity between objects as an action, a reaction, and then a back-reaction: simply put, object A affects object B, and object B then re-affects object A, and so on, creating a feedback loop that strengthens gravity.[114]

Recently, another group has proposed a modification of large-scale gravity in a hypothesis named "dark fluid". In this formulation, the attractive gravitational effects attributed to dark matter are instead a side effect of dark energy. Dark fluid combines dark matter and dark energy in a single energy field that produces different effects at different scales. This treatment is a simplified approach to a previous fluid-like model, the generalized Chaplygin gas model, in which the whole of spacetime is a compressible gas.[115] Dark fluid can be compared to an atmospheric system: atmospheric pressure causes air to expand, but part of the air can collapse to form clouds. In the same way, the dark fluid might generally expand, but it could also collect around galaxies to help hold them together.[115]

Another set of proposals is based on the possibility of a double metric tensor for space-time.[116] It has been argued that time-reversed solutions in general relativity require such a double metric for consistency, and that both dark matter and dark energy can be understood in terms of time-reversed solutions of general relativity.[117]

Popular culture

Mention of dark matter is made in some video games and other works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties. Such descriptions are often inconsistent with the properties of dark matter proposed in physics and cosmology.
