
Wednesday, January 16, 2019

Dark matter (updated)

From Wikipedia, the free encyclopedia

Dark matter is a hypothetical form of matter that is thought to account for approximately 85% of the matter in the universe, and about a quarter of its total energy density. The majority of dark matter is thought to be non-baryonic in nature, possibly being composed of some as-yet undiscovered subatomic particles. Its presence is implied in a variety of astrophysical observations, including gravitational effects that cannot be explained unless more matter is present than can be seen. For this reason, most experts think that dark matter is ubiquitous in the universe and that it has had a strong influence on its structure and evolution. Dark matter is called dark because it does not appear to interact with observable electromagnetic radiation, such as light, and is thus invisible across the entire electromagnetic spectrum, making it extremely difficult to detect with standard astronomical instruments.
 
The primary evidence for dark matter is that calculations show that many galaxies would fly apart instead of rotating, or would not have formed or move as they do, if they did not contain a large amount of unseen matter. Other lines of evidence include observations in gravitational lensing, from the cosmic microwave background, from astronomical observations of the observable universe's current structure, from the formation and evolution of galaxies, from mass location during galactic collisions, and from the motion of galaxies within galaxy clusters. In the standard Lambda-CDM model of cosmology, the total mass–energy of the universe contains 5% ordinary matter and energy, 27% dark matter and 68% of an unknown form of energy known as dark energy. Thus, dark matter constitutes 85% of total mass, while dark energy plus dark matter constitute 95% of total mass–energy content.

Because dark matter has not yet been observed directly, it must barely interact with ordinary baryonic matter and radiation. The primary candidate for dark matter is some new kind of elementary particle that has not yet been discovered, in particular, weakly-interacting massive particles (WIMPs), or gravitationally-interacting massive particles (GIMPs). Many experiments to directly detect and study dark matter particles are being actively undertaken, but none has yet succeeded. Dark matter is classified as cold, warm, or hot according to its velocity (more precisely, its free streaming length). Current models favor a cold dark matter scenario, in which structures emerge by gradual accumulation of particles.

Although the existence of dark matter is generally accepted by the scientific community, some astrophysicists, intrigued by certain observations that do not fit the dark matter theory, argue for various modifications of the standard laws of general relativity, such as modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. These models attempt to account for all observations without invoking supplemental non-baryonic matter.

History

Early history

The hypothesis of dark matter has an elaborate history. In a talk given in 1884, Lord Kelvin estimated the number of dark bodies in the Milky Way from the observed velocity dispersion of the stars orbiting around the center of the galaxy. From these measurements he estimated the mass of the galaxy and found that it differs from the mass of the visible stars. Lord Kelvin thus concluded that "many of our stars, perhaps a great majority of them, may be dark bodies". In 1906 Henri Poincaré in "The Milky Way and Theory of Gases" used the term "dark matter" ("matière obscure" in French) in discussing Kelvin's work.

The first to suggest the existence of dark matter, using stellar velocities, was Dutch astronomer Jacobus Kapteyn in 1922. Fellow Dutchman and radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the local galactic neighborhood and found that the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be erroneous.

In 1933, Swiss astrophysicist Fritz Zwicky, who studied galaxy clusters while working at the California Institute of Technology, made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass that he called dunkle Materie ('dark matter'). Zwicky estimated the cluster's mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated that the cluster had about 400 times more mass than was visually observable. The gravitational effect of the visible galaxies was far too small to account for such fast orbits; the mass must therefore be hidden from view. Based on these conclusions, Zwicky inferred that some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. This was the first formal inference about the existence of dark matter. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. However, Zwicky did correctly infer that the bulk of the matter was dark.

Further indications that the mass-to-light ratio was not unity came from measurements of galaxy rotation curves. In 1939, Horace W. Babcock reported the rotation curve for the Andromeda nebula (known now as the Andromeda Galaxy), which suggested that the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral and not to the missing matter that he had uncovered. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda galaxy and a mass-to-light ratio of 50, in 1940 Jan Oort discovered and wrote about the large non-visible halo of NGC 3115.

1970s

Work by Vera Rubin, Kent Ford, and Ken Freeman in the 1960s and 1970s provided further strong evidence, also based on galaxy rotation curves. Rubin worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy. This result was confirmed in 1978. An influential paper presented Rubin's results in 1980. Rubin found that most galaxies must contain about six times as much dark mass as visible mass; thus, by around 1980 the apparent need for dark matter was widely recognized as a major unsolved problem in astronomy.

At the same time that Rubin and Ford were exploring optical rotation curves, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (HI) often extends to much larger galactic radii than those accessible by optical studies, extending the sampling of rotation curves—and thus of the total mass distribution—to a new dynamical regime. Early mapping of Andromeda with the 300-foot telescope at Green Bank and the 250-foot dish at Jodrell Bank already showed that the HI rotation curve did not trace the expected Keplerian decline. As more sensitive receivers became available, Morton Roberts and Robert Whitehurst were able to trace the rotational velocity of Andromeda to 30 kpc, much beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii, Figure 16 of that paper combines the optical data (the cluster of points at radii of less than 15 kpc with a single point further out) with the HI data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic HI spectroscopy was being developed. In 1972, David Rogstad and Seth Shostak published HI rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended HI disks.

A stream of observations in the 1980s supported the presence of dark matter, including gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics.

Technical definition

In standard cosmology, matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³. This is in contrast to radiation, which scales as the inverse fourth power of the scale factor (ρ ∝ a⁻⁴), and a cosmological constant, which is independent of a. These scalings can be understood intuitively: for an ordinary particle in a cubical box, doubling the length of the sides of the box decreases the density (and hence energy density) by a factor of eight (2³). For radiation, the decrease in energy density is larger because an increase in scale factor causes a proportional redshift. A cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration.

In principle, "dark matter" means all components of the universe that are not visible but still obey ρ ∝ a−3. In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons." Context will usually indicate which meaning is intended.

Observational evidence

This artist's impression shows the expected distribution of dark matter in the Milky Way galaxy as a blue halo of material surrounding the galaxy.

Galaxy rotation curves

Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). Dark matter can explain the 'flat' appearance of the velocity curve out to a large radius.
 
The arms of spiral galaxies rotate around the galactic center. The luminous mass density of a spiral galaxy decreases from the center to the outskirts. If luminous mass were all the matter, we could model the galaxy as a point mass at the center with test masses orbiting around it, similar to the Solar System. From Kepler's Third Law, the rotation velocities would then be expected to decrease with distance from the center, as in the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat as distance from the center increases.

If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude that the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there is a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.
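To see the discrepancy quantitatively, the following Python sketch computes the Keplerian prediction v(r) = sqrt(GM/r) for a point-mass model; the enclosed mass of ~5×10¹⁰ solar masses is an illustrative assumption, not a measured value:

    import math

    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M = 1e41         # assumed enclosed luminous mass, kg (~5e10 solar masses)
    KPC = 3.086e19   # metres per kiloparsec

    def keplerian_speed(r_m):
        """Circular speed if essentially all mass lies inside radius r."""
        return math.sqrt(G * M / r_m)

    for r_kpc in (5, 10, 20, 40):
        v = keplerian_speed(r_kpc * KPC) / 1000.0  # km/s
        print(f"r = {r_kpc:2d} kpc: predicted v = {v:5.1f} km/s")

    # The prediction falls as 1/sqrt(r); observed curves instead stay roughly
    # flat, implying the enclosed mass M(r) keeps growing with radius.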

Velocity dispersion

Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits.

As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter.
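A minimal sketch of such a virial mass estimate, assuming an order-unity structure factor of about 5 (the exact value depends on the mass profile) and illustrative numbers at the galaxy-cluster scale:

    G = 6.674e-11     # m^3 kg^-1 s^-2
    M_SUN = 1.989e30  # kg
    MPC = 3.086e22    # metres per megaparsec

    sigma = 1.0e6     # line-of-sight velocity dispersion, m/s (1000 km/s)
    R = 1.0 * MPC     # characteristic radius of the bound system

    # Virial theorem (2K + U = 0) gives M ~ sigma^2 * R / G up to a
    # profile-dependent factor of order unity, taken here as 5.
    M_virial = 5 * sigma**2 * R / G
    print(f"virial mass ~ {M_virial / M_SUN:.2e} solar masses")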

Galaxy clusters

Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways:
  • From the scatter in radial velocities of the galaxies within clusters
  • From X-rays emitted by hot gas in the clusters. From the X-ray energy spectrum and flux, the gas temperature and density can be estimated, hence giving the pressure; assuming pressure and gravity balance determines the cluster's mass profile.
  • Gravitational lensing (usually of more distant galaxies) can measure cluster masses without relying on observations of dynamics (e.g., velocity).
Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1.
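The second method above can be sketched as follows: under hydrostatic equilibrium, M(<r) = −(k_B·T·r)/(G·μ·m_p) · (dln n/dln r + dln T/dln r). The Python below evaluates this under strong simplifying assumptions (isothermal gas with a power-law density profile n ~ r⁻²); the temperature and radius are illustrative, not measurements of any particular cluster:

    K_B = 1.381e-23   # Boltzmann constant, J/K
    G = 6.674e-11     # m^3 kg^-1 s^-2
    M_P = 1.673e-27   # proton mass, kg
    MU = 0.6          # mean molecular weight of ionized intracluster gas
    M_SUN = 1.989e30
    MPC = 3.086e22

    T = 8.0e7         # gas temperature, K (~7 keV)
    r = 1.0 * MPC
    dln_n = -2.0      # logarithmic slope of the gas density profile
    dln_T = 0.0       # isothermal

    M = -K_B * T * r / (G * MU * M_P) * (dln_n + dln_T)
    print(f"M(<1 Mpc) ~ {M / M_SUN:.2e} solar masses")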

Gravitational lensing

Strong gravitational lensing as observed by the Hubble Space Telescope in Abell 1689 indicates the presence of dark matter (note the lensing arcs).
 
Dark matter map for a patch of sky based on gravitational lensing analysis of the Kilo-Degree Survey.
 
One of the consequences of general relativity is that massive objects (such as a cluster of galaxies) lying between a more distant source (such as a quasar) and an observer should act as a lens to bend the light from this source. The more massive an object, the more lensing is observed.

Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the dozens of cases where this has been done, the mass-to-light ratios obtained correspond to the dynamical dark matter measurements of clusters. Lensing can lead to multiple copies of an image. By analyzing the distribution of multiple image copies, scientists have been able to deduce and map the distribution of dark matter around the MACS J0416.1-2403 galaxy cluster.

Weak gravitational lensing investigates minute distortions of galaxies, using statistical analyses from vast galaxy surveys. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements. Dark matter does not bend light itself; mass (in this case the mass of the dark matter) bends spacetime. Light follows the curvature of spacetime, resulting in the lensing effect.
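The scale of strong lensing can be estimated from the Einstein radius of a compact lens, θ_E = sqrt(4GM/c² · D_ls/(D_l·D_s)), where the D's are angular-diameter distances. A sketch with illustrative cluster-scale numbers (the mass and distances are assumptions, not measurements of any particular cluster):

    import math

    G = 6.674e-11
    C = 2.998e8
    M_SUN = 1.989e30
    MPC = 3.086e22

    M = 1e15 * M_SUN    # assumed lens mass
    D_l = 1000 * MPC    # observer-to-lens distance
    D_s = 2000 * MPC    # observer-to-source distance
    D_ls = 1200 * MPC   # lens-to-source distance

    theta_e = math.sqrt(4 * G * M / C**2 * D_ls / (D_l * D_s))  # radians
    print(f"Einstein radius ~ {theta_e * 206265:.0f} arcsec")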

Cosmic microwave background

Estimated division of total energy in the universe into matter, dark matter and dark energy based on five years of WMAP data.
 
Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. Dark matter does not interact directly with radiation, but it does affect the CMB by its gravitational potential (mainly on large scales), and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the cosmic microwave background (CMB).

The cosmic microwave background is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights. The series of peaks can be predicted for any assumed set of cosmological parameters by modern computer codes such as CMBFast and CAMB, and matching theory to data therefore constrains cosmological parameters. The first peak mostly shows the density of baryonic matter, while the third peak relates mostly to the density of dark matter; together the peaks measure the density of matter and the density of atoms.
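A minimal sketch of this theory-to-data step, using the publicly available camb Python package (assuming it is installed, e.g. via pip install camb; the calls follow the package's documented quickstart, and the parameter values are illustrative):

    import camb

    # Predict a CMB temperature power spectrum for one assumed set of
    # cosmological parameters; fitting such predictions to the observed
    # peaks constrains the baryon (ombh2) and cold dark matter (omch2)
    # densities.
    pars = camb.CAMBparams()
    pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.122)
    pars.InitPower.set_params(As=2e-9, ns=0.965)
    pars.set_for_lmax(2500)

    results = camb.get_results(pars)
    powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')
    tt = powers['total'][:, 0]   # TT band powers in muK^2
    print(tt[200:221])           # values around the first acoustic peak (l ~ 220)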

The CMB anisotropy was first discovered by COBE in 1992, though this had too coarse a resolution to detect the acoustic peaks. After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the Planck spacecraft in 2013–2015. The results support the Lambda-CDM model.

The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the Lambda-CDM model, but difficult to reproduce with any competing model such as modified Newtonian dynamics (MOND).

Structure formation

3D map of the large-scale distribution of dark matter, reconstructed from measurements of weak gravitational lensing with the Hubble Space Telescope.
 
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant component of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.

Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process.

Bullet Cluster

If dark matter does not exist, then the next most likely explanation is that general relativity—the prevailing theory of gravity—is incorrect. The Bullet Cluster, the result of a recent collision of two galaxy clusters, provides a challenge for modified gravity theories because its apparent center of mass is far displaced from the baryonic center of mass. Standard dark matter theory can easily explain this observation, but modified gravity has a much harder time, especially since the observational evidence is model-independent.

Type Ia supernova distance measurements

Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. The data indicate that the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, it is expected that the total energy density of everything in the universe should sum to 1 (Ωtot ≈ 1). The measured dark energy density is ΩΛ ≈ 0.690; the observed ordinary (baryonic) matter energy density is Ωb ≈ 0.0482; and the energy density of radiation is negligible. This leaves a missing Ωdm ≈ 0.258 that nonetheless behaves like matter (see the technical definition section above): dark matter.
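The bookkeeping in this paragraph is simple enough to verify directly (values as quoted above; small rounding differences are expected):

    omega_total = 1.0       # flatness
    omega_lambda = 0.690    # dark energy
    omega_baryon = 0.0482   # ordinary matter
    # radiation neglected

    omega_dm = omega_total - omega_lambda - omega_baryon
    print(f"Omega_dm ~ {omega_dm:.3f}")  # ~0.26
    print(f"dark matter share of all matter: "
          f"{omega_dm / (omega_dm + omega_baryon):.0%}")  # about five-sixths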

Sky surveys and baryon acoustic oscillations

Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon-baryon fluid of the early universe, and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (~ 1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130 or 160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.

Redshift-space distortions

Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding but more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly too low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. The effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures, assuming Earth is not at a special location in the Universe.

The effect was predicted quantitatively by Nick Kaiser in 1987, and first decisively measured in 2001 by the 2dF Galaxy Redshift Survey. Results are in agreement with the Lambda-CDM model.
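Kaiser's linear-theory result can be stated compactly. Writing μ for the cosine of the angle between a Fourier mode and the line of sight, b for the galaxy bias, and f for the linear growth rate (standard notation, not defined in the text above), the redshift-space power spectrum P_s is enhanced relative to the real-space one P_r:

    P_s(k, μ) = (1 + β μ²)² P_r(k),  with  β = f/b  and  f ≈ Ωm^0.55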

Lyman-alpha forest

In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.

Composition of dark matter: baryonic vs. nonbaryonic

There are various hypotheses about what dark matter could consist of, as set out in the table below.

Some dark matter hypotheses:
  • Light bosons: quantum chromodynamics axions, axion-like particles, fuzzy cold dark matter
  • Neutrinos: Standard Model neutrinos, sterile neutrinos
  • Weak scale: supersymmetry, extra dimensions, little Higgs, effective field theory, simplified models
  • Other particles: WIMPzilla, self-interacting dark matter, superfluid vacuum theory
  • Macroscopic: primordial black holes, massive compact halo objects (MACHOs), macroscopic dark matter (Macros)
  • Modified gravity (MOG): modified Newtonian dynamics (MOND), tensor–vector–scalar gravity (TeVeS), entropic gravity

Dark matter can refer to any substance that interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. However, for the reasons outlined below, most scientists think the dark matter is dominated by a non-baryonic component, which is likely composed of a currently unknown fundamental particle (or similar exotic state). 

Fermi-LAT observations of dwarf galaxies provide new insights on dark matter.

Baryonic matter

Baryons (protons and neutrons) make up ordinary stars and planets. However, baryonic matter also encompasses less common black holes, neutron stars, faint old white dwarfs and brown dwarfs, collectively known as massive compact halo objects (MACHOs), which can be hard to detect.

However, multiple lines of evidence suggest the majority of dark matter is not made of baryons:
  • Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars.
  • The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements. If there are more baryons, then there should also be more helium, lithium and heavier elements synthesized during the Big Bang. Agreement with observed abundances requires that baryonic matter make up 4–5% of the universe's critical density. In contrast, large-scale structure and other observations indicate that the total matter density is about 30% of the critical density.
  • Astronomical searches for gravitational microlensing in the Milky Way found that at most a small fraction of the dark matter may be in dark, compact, conventional objects (MACHOs, etc.); the excluded range of object masses is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates.
  • Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background: observations by WMAP and Planck indicate that around five-sixths of the total matter is in a form that interacts with ordinary matter or photons only through gravitational effects.

Non-baryonic matter

Candidates for non-baryonic dark matter are hypothetical particles such as axions, sterile neutrinos, weakly interacting massive particles (WIMPs), gravitationally-interacting massive particles (GIMPs), or supersymmetric particles. The three neutrino types already observed are indeed abundant, and dark, and matter, but because their individual masses—however uncertain they may be—are almost certainly tiny, they can only supply a small fraction of dark matter, due to limits derived from large-scale structure and high-redshift galaxies.

Unlike baryonic matter, nonbaryonic matter did not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis) and so its presence is revealed only via its gravitational effects, or weak lensing. In addition, if the particles of which it is composed are supersymmetric, they can undergo annihilation interactions with themselves, possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection).

Dark matter aggregation and dense dark matter objects

If dark matter is as common as observations suggest, an obvious question is whether it can form objects equivalent to planets, stars, or black holes. The answer has historically been that it cannot, because of two factors:
  • It lacks an efficient means to lose energy: Ordinary matter forms dense objects because it has numerous ways to lose energy. Losing energy is essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase its velocity and momentum. Dark matter appears to lack a means to lose energy, simply because it does not interact through any channel other than gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object: as the object began to form and compact, the dark matter particles within it would speed up and tend to escape.
  • It lacks a range of interactions needed to form structures: Ordinary matter interacts in many different ways, which allows it to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through electromagnetic interaction. There is no evidence that dark matter is capable of such a wide variety of interactions, since it seems to interact only through gravity and possibly through some means no stronger than the weak interaction (although this is speculative until dark matter is better understood).
In 2015–2017 the idea that dense dark matter is composed of primordial black holes made a comeback following gravitational wave measurements that detected the merger of intermediate-mass black holes. Black holes with about 30 solar masses are not predicted to form by either stellar collapse (typically less than 15 solar masses) or by the merger of black holes in galactic centers (millions or billions of solar masses). It was proposed that the intermediate-mass black holes causing the detected merger formed in the hot dense early phase of the universe due to denser regions collapsing. However, this was later ruled out by a survey of about a thousand supernovae that detected no gravitational lensing events, although about eight would be expected if intermediate-mass primordial black holes accounted for the majority of dark matter. The possibility that atom-sized primordial black holes account for a significant fraction of dark matter was ruled out by measurements of positron and electron fluxes outside the Sun's heliosphere by the Voyager 1 spacecraft. Tiny black holes are theorized to emit Hawking radiation; however, the detected fluxes were too low and did not have the expected energy spectrum, suggesting that tiny primordial black holes are not widespread enough to account for dark matter. Nonetheless, research and theories proposing that dense dark matter objects account for dark matter continued as of 2018, including approaches to dark matter cooling, and the question remains unsettled.

Classification of dark matter: cold, warm or hot

Dark matter can be divided into cold, warm, and hot categories. These categories refer to velocity rather than an actual temperature, indicating how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion—this is an important distance called the free streaming length (FSL). Primordial density fluctuations smaller than this length get washed out as particles spread from overdense to underdense regions, while larger fluctuations are unaffected; therefore this length sets a minimum scale for later structure formation. The categories are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): dark matter particles are classified as cold, warm, or hot according to their FSL; much smaller (cold), similar to (warm), or much larger (hot) than a protogalaxy.

Mixtures of the above are also possible: a theory of mixed dark matter was popular in the mid-1990s, but was rejected following the discovery of dark energy.

Cold dark matter leads to a bottom-up formation of structure, with galaxies forming first and galaxy clusters at a later stage, while hot dark matter would result in a top-down formation scenario with large matter aggregations forming early and later fragmenting into separate galaxies; the latter is excluded by high-redshift galaxy observations.

Alternative definitions

These categories also correspond to fluctuation spectrum effects and the interval following the Big Bang at which each type became non-relativistic. Davis et al. wrote in 1985:
Candidate particles can be grouped into three categories on the basis of their effect on the fluctuation spectrum (Bond et al. 1983). If the dark matter is composed of abundant light particles which remain relativistic until shortly before recombination, then it may be termed "hot". The best candidate for hot dark matter is a neutrino ... A second possibility is for the dark matter particles to interact more weakly than neutrinos, to be less abundant, and to have a mass of order 1 keV. Such particles are termed "warm dark matter", because they have lower thermal velocities than massive neutrinos ... there are at present few candidate particles which fit this description. Gravitinos and photinos have been suggested (Pagels and Primack 1982; Bond, Szalay and Turner 1982) ... Any particles which became nonrelativistic very early, and so were able to diffuse a negligible distance, are termed "cold" dark matter (CDM). There are many candidates for CDM including supersymmetric particles.
— M. Davis, G. Efstathiou, C. S. Frenk, and S. D. M. White, The evolution of large-scale structure in a universe dominated by cold dark matter
Another approximate dividing line is that warm dark matter became non-relativistic when the universe was approximately 1 year old and 1 millionth of its present size, in the radiation-dominated era (photons and neutrinos), with a photon temperature of 2.7 million K. Standard physical cosmology gives the particle horizon size as 2ct (speed of light multiplied by time) in the radiation-dominated era, thus 2 light-years. A region of this size would expand to 2 million light-years today (absent structure formation). The actual FSL is approximately 5 times the above length, since it continues to grow slowly as particle velocities decrease inversely with the scale factor after they become non-relativistic. In this example the FSL would correspond to 10 million light-years, or 3 megaparsecs, today, around the size containing an average large galaxy.

The 2.7 million K photon temperature gives a typical photon energy of 250 electron-volts, thereby setting a typical mass scale for warm dark matter: particles much more massive than this, such as GeV–TeV mass WIMPs, would become non-relativistic much earlier than one year after the Big Bang and thus have FSLs much smaller than a protogalaxy, making them cold. Conversely, much lighter particles, such as neutrinos with masses of only a few eV, have FSLs much larger than a protogalaxy, thus qualifying them as hot.
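The arithmetic in the two paragraphs above can be checked with a few lines of Python (order-of-magnitude numbers, as in the text):

    # Free streaming length for the warm dark matter example above.
    expansion = 1.0e6             # universe was ~1 millionth its present size
    horizon_ly = 2.0              # particle horizon ~ 2ct at t = 1 year
    today_ly = horizon_ly * expansion   # comoving size today: 2 million ly
    fsl_ly = 5 * today_ly               # FSL keeps growing: ~10 million ly
    print(f"FSL ~ {fsl_ly / 1e6:.0f} million ly ~ {fsl_ly / 3.26e6:.0f} Mpc")

    # Typical photon energy at T = 2.7 million K: E ~ k_B * T.
    K_B_EV = 8.617e-5             # Boltzmann constant, eV/K
    print(f"k_B * T ~ {K_B_EV * 2.7e6:.0f} eV")  # ~230 eV, rounded to 250 above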

Cold dark matter

Cold dark matter offers the simplest explanation for most cosmological observations. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus of dark matter research, as hot dark matter does not seem capable of supporting galaxy or galaxy cluster formation, and most particle candidates became non-relativistic very early.

The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes and preon stars) or RAMBOs (such as clusters of brown dwarfs), to new particles such as WIMPs and axions.

Studies of Big Bang nucleosynthesis and gravitational lensing have convinced most cosmologists that MACHOs cannot make up more than a small fraction of dark matter. According to A. Peter: "... the only really plausible dark-matter candidates are new particles." One recent proposal, by Jamie Farnes, posits a particle with negative mass.

The 1997 DAMA/NaI experiment and its successor DAMA/LIBRA in 2013 claimed to directly detect dark matter particles passing through the Earth, but many researchers remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results.

Many supersymmetric models offer dark matter candidates in the form of the WIMPy Lightest Supersymmetric Particle (LSP). Separately, heavy sterile neutrinos exist in non-supersymmetric extensions to the standard model that explain the small neutrino mass through the seesaw mechanism.

Warm dark matter

Warm dark matter comprises particles with an FSL comparable to the size of a protogalaxy. Predictions based on warm dark matter are similar to those for cold dark matter on large scales, but with suppressed small-scale density perturbations. This reduces the predicted abundance of dwarf galaxies and may lead to lower density of dark matter in the central parts of large galaxies. Some researchers consider this a better fit to observations. A challenge for this model is the lack of particle candidates with the required mass of ~300 eV to 3000 eV.

No known particles can be categorized as warm dark matter. A postulated candidate is the sterile neutrino: a heavier, slower form of neutrino that does not interact through the weak force, unlike other neutrinos. Some modified gravity theories, such as scalar–tensor–vector gravity, require "warm" dark matter to make their equations work.

Hot dark matter

Hot dark matter consists of particles whose FSL is much larger than the size of a protogalaxy. The neutrino qualifies as such a particle. Neutrinos were discovered independently, long before the hunt for dark matter: they were postulated in 1930 and detected in 1956. A neutrino's mass is less than 10⁻⁶ that of an electron. Neutrinos interact with normal matter only via gravity and the weak force, making them difficult to detect (the weak force works only over a small distance, so a neutrino triggers a weak force event only if it hits a nucleus head-on). This makes them 'weakly interacting light particles' (WILPs), as opposed to WIMPs.

The three known flavours of neutrinos are the electron, muon, and tau. Their masses are slightly different. Neutrinos oscillate among the flavours as they move. It is hard to determine an exact upper bound on the collective average mass of the three neutrinos (or for any of the three individually). For example, if the average neutrino mass were over 50 eV/c² (less than 10⁻⁵ of the mass of an electron), the universe would collapse. CMB data and other methods indicate that their average mass probably does not exceed 0.3 eV/c². Thus, observed neutrinos cannot explain dark matter.
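A standard cosmological relation, Ων·h² = Σmν / 93.14 eV, turns this mass bound into an energy-density bound; the sketch below (taking h ≈ 0.678 and the ~0.3 eV bound per flavour, both assumptions for illustration) shows why neutrinos fall far short of the required Ωdm ≈ 0.26:

    h = 0.678            # dimensionless Hubble parameter (assumed)
    sum_m_nu = 3 * 0.3   # eV: three flavours at the ~0.3 eV bound

    omega_nu = sum_m_nu / 93.14 / h**2
    print(f"Omega_nu <~ {omega_nu:.3f}")  # ~0.02, a few percent at most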

Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies that the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies. Deep-field observations show instead that galaxies formed first, followed by clusters and superclusters as galaxies clump together.

Detection of dark matter particles

If dark matter is made up of sub-atomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs are popular search candidates, the Axion Dark Matter Experiment (ADMX) searches for axions. Another candidate is heavy hidden sector particles that only interact with ordinary matter via gravity. 

These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of dark matter particle annihilations or decays.

Direct detection

Direct detection experiments aim to observe low-energy recoils (typically a few keV) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil the nucleus will emit energy as, e.g., scintillation light or phonons, which is then detected by sensitive apparatus. To do this effectively, it is crucial to maintain a low background, and so such experiments operate deep underground to reduce the interference from cosmic rays. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory.

These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include CDMS, CRESST, EDELWEISS, and EURECA. Noble liquid experiments include ZEPLIN, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (which scatter off nuclei). Other experiments include SIMPLE and PICASSO.

To date there has been no well-established claim of dark matter detection from a direct detection experiment; instead, such experiments have placed strong upper limits on the cross section for interactions between dark matter particles and nucleons, as a function of the particle mass. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX and SuperCDMS.
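The expected annual-modulation signature is easy to sketch: a small cosine variation in the event rate peaking near June 2, when Earth's orbital velocity adds to the Sun's motion through the halo. The rate values below are illustrative, not real data:

    import math

    R0 = 1.00    # mean event rate (arbitrary units)
    RM = 0.02    # modulation amplitude: a few percent at most
    T = 365.25   # period, days
    T0 = 152.5   # phase: day of year near June 2

    def expected_rate(day_of_year):
        return R0 + RM * math.cos(2 * math.pi * (day_of_year - T0) / T)

    for day, label in ((152, "early June (max)"), (335, "early December (min)")):
        print(f"{label}: rate = {expected_rate(day):.3f}")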

A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.

Indirect detection

Collage of six cluster collisions with dark matter maps. The clusters were observed in a study of how dark matter in clusters of galaxies behaves when the clusters collide.
 
Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. For example, in regions of high dark matter density (e.g., the center of our galaxy) two dark matter particles could annihilate to produce gamma rays or Standard Model particle-antiparticle pairs. Alternatively if the dark matter particle is unstable, it could decay into standard model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in our galaxy or others. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery.
 
A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Thus dark matter may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. The detection by LIGO in September 2015 of gravitational waves opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes.

Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow. The Energetic Gamma Ray Experiment Telescope observed more gamma rays in 2008 than expected from the Milky Way, but scientists concluded that this was most likely due to incorrect estimation of the telescope's sensitivity.

The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In April 2012, an analysis of previously available data from its Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.

At higher energies, ground-based gamma-ray telescopes have set limits on the annihilation of dark matter in dwarf spheroidal galaxies and in clusters of galaxies.

The PAMELA experiment (launched in 2006) detected excess positrons. They could be from dark matter annihilation or from pulsars. No excess antiprotons were observed.

In 2013 results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays that could be due to dark matter annihilation.

Collider searches for dark matter

An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. Any discovery from collider searches must, however, be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, dark matter.

Alternative hypotheses

Because dark matter remains to be conclusively identified, many other hypotheses have emerged aiming to explain the observational phenomena that dark matter was conceived to explain. The most common method is to modify general relativity. General relativity is well-tested on solar system scales, but its validity on galactic or cosmological scales has not been well proven. A suitable modification to general relativity can conceivably eliminate the need for dark matter. The best-known theories of this class are MOND and its relativistic generalization tensor-vector-scalar gravity (TeVeS), f(R) gravity and entropic gravity. Alternative theories abound.

A problem with alternative hypotheses is that the observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible but explaining all of them is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity.

The prevailing opinion among most astrophysicists is that while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter.

In philosophy of science

In philosophy of science, dark matter is an example of an auxiliary hypothesis, an ad hoc postulate that is added to a theory in response to observations that falsify it. It has been argued that the dark matter hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper.

In popular culture

Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties. Such descriptions are often inconsistent with the hypothesized properties of dark matter in physics and cosmology.

Cosmic distance ladder (updated)

From Wikipedia, the free encyclopedia

The cosmic distance ladder (also known as the extragalactic distance scale) is the succession of methods by which astronomers determine the distances to celestial objects. A real direct distance measurement of an astronomical object is possible only for those objects that are "close enough" (within about a thousand parsecs) to Earth. The techniques for determining distances to more distant objects are all based on various measured correlations between methods that work at close distances and methods that work at larger distances. Several methods rely on a standard candle, which is an astronomical object that has a known luminosity.

The ladder analogy arises because no single technique can measure distances at all ranges encountered in astronomy. Instead, one method can be used to measure nearby distances, a second can be used to measure nearby to intermediate distances, and so on. Each rung of the ladder provides information that can be used to determine the distances at the next higher rung.

Direct measurement

Statue of an astronomer and the concept of the cosmic distance ladder by the parallax method, made from the azimuth ring and other parts of the Yale–Columbia Refractor (c. 1925) wrecked by the 2003 Canberra bushfires that burned out the Mount Stromlo Observatory; at Questacon, Canberra, Australian Capital Territory.
 
At the base of the ladder are fundamental distance measurements, in which distances are determined directly, with no physical assumptions about the nature of the object in question. The precise measurement of stellar positions is part of the discipline of astrometry.

Astronomical unit

Direct distance measurements are based upon the astronomical unit (AU), which is the distance between the Earth and the Sun. Kepler's laws provide precise ratios of the sizes of the orbits of objects orbiting the Sun, but provide no measurement of the overall scale of the orbit system. Radar is used to measure the distance between the orbits of the Earth and of a second body. From that measurement and the ratio of the two orbit sizes, the size of Earth's orbit is calculated. The Earth's orbit is known with an absolute precision of a few meters and a relative precision of a few parts in 10¹¹.

Historically, observations of transits of Venus were crucial in determining the AU; in the first half of the 20th century, observations of asteroids were also important. Presently the orbit of Earth is determined with high precision using radar measurements of distances to Venus and other nearby planets and asteroids, and by tracking interplanetary spacecraft in their orbits around the Sun through the Solar System.
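The logic of the radar method can be shown with rounded numbers; the radar distance below is an illustrative round figure, not a quoted measurement:

    # Kepler's third law fixes only the *ratio* of orbit sizes:
    # a_V / a_E = (T_V / T_E)^(2/3). One radar distance fixes the scale.
    t_ratio = 224.7 / 365.25          # Venus/Earth orbital periods
    a_ratio = t_ratio ** (2.0 / 3.0)  # ~0.723

    d_km = 41.4e6   # radar distance to Venus near inferior conjunction ~ a_E - a_V
    a_E = d_km / (1.0 - a_ratio)
    print(f"1 AU ~ {a_E:.3e} km")     # ~1.5e8 km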

Parallax

Stellar parallax motion from annual parallax. Half the apex angle is the parallax angle.
 
The most important fundamental distance measurements come from trigonometric parallax. As the Earth orbits the Sun, the position of nearby stars will appear to shift slightly against the more distant background. These shifts are angles in an isosceles triangle, with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) making the base leg of the triangle and the distance to the star being the long equal-length legs. The amount of shift is quite small, even for the nearest stars: 1 arcsecond for an object at a distance of 1 parsec (3.26 light-years), and it decreases in angular amount as the distance increases. Astronomers usually express distances in units of parsecs (parallax arcseconds); light-years are used in popular media.
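In these units the conversion is a one-liner, d (parsecs) = 1 / p (arcseconds); Proxima Centauri's well-known parallax of about 0.768 arcsec serves as a check:

    def parallax_distance_pc(p_arcsec):
        # Distance in parsecs from a parallax angle in arcseconds.
        return 1.0 / p_arcsec

    p = 0.768
    d_pc = parallax_distance_pc(p)
    print(f"p = {p} arcsec -> d = {d_pc:.2f} pc = {d_pc * 3.26:.2f} ly")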

Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars which are near enough to have a parallax larger than a few times the precision of the measurement. Parallax measurements typically have an accuracy measured in milliarcseconds. In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond, providing useful distances for stars out to a few hundred parsecs. The Hubble telescope's WFC3 now has the potential to provide a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to 5,000 parsecs (16,000 ly) for small numbers of stars. In 2018, Data Release 2 from the Gaia space mission provided similarly accurate distances to most stars brighter than 15th magnitude.

Stars have a velocity relative to the Sun that causes proper motion (transverse across the sky) and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift of the star's spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and giant variable stars, including Cepheids and the RR Lyrae variables.

Parallax measurements may be an important clue to understanding three of the universe's most elusive components: dark matter, dark energy and neutrinos.
 
Hubble precision stellar distance measurement has been extended 10 times further into the Milky Way.
 
The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of observed stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size.

Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful. In particular the distance obtained for the Hyades has historically been an important step in the distance ladder.

Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula, can be observed over time, then an expansion parallax distance to that cloud can be estimated. Those measurements, however, suffer from uncertainties in the deviation of the object from sphericity. Binary stars which are both visual and spectroscopic binaries can also have their distance estimated by similar means, and do not suffer from the above geometric uncertainty. Common to these methods is that a measurement of angular motion is combined with a measurement of absolute velocity (usually obtained via the Doppler effect). The distance estimate comes from computing how far away the object must be for its observed absolute velocity to appear with the observed angular motion.

Expansion parallaxes in particular can give fundamental distance estimates for objects that are very far, because supernova ejecta have large expansion velocities and large sizes (compared to stars). Further, they can be observed with radio interferometers which can measure very small angular motions. These combine to provide fundamental distance estimates to supernovae in other galaxies. Though valuable, such cases are quite rare, so they serve as important consistency checks on the distance ladder rather than workhorse steps by themselves.

Standard candles

Almost all astronomical objects used as physical distance indicators belong to a class that has a known brightness. By comparing this known luminosity to an object's observed brightness, the distance to the object can be computed using the inverse-square law. These objects of known brightness are termed standard candles.

The brightness of an object can be expressed in terms of its absolute magnitude. This quantity is derived from the logarithm of its luminosity as seen from a distance of 10 parsecs. The apparent magnitude, the magnitude as seen by the observer (an instrument called a bolometer is used), can be measured and used with the absolute magnitude to calculate the distance D to the object in kiloparsecs (where 1 kpc equals 1000 parsecs) as follows:

    5 · log₁₀(D) = m − M − 10

or

    D = 10^((m − M − 10)/5)

where m is the apparent magnitude and M the absolute magnitude. For this to be accurate, both magnitudes must be in the same frequency band and there can be no relative motion in the radial direction.

Some means of correcting for interstellar extinction, which also makes objects appear fainter and more red, is needed, especially if the object lies within a dusty or gaseous region. The difference between an object's absolute and apparent magnitudes is called its distance modulus, and astronomical distances, especially intergalactic ones, are sometimes tabulated in this way.
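A small sketch of the formula above; the apparent and absolute magnitudes are illustrative (roughly RR Lyrae-like), and extinction is ignored:

    def distance_kpc(m, M):
        # D in kiloparsecs from the distance modulus m - M (see formula above).
        return 10 ** ((m - M - 10) / 5.0)

    m, M = 15.6, 0.6
    print(f"m - M = {m - M:.1f} -> D = {distance_kpc(m, M):.1f} kpc")
    # Sanity check: m - M = 15 corresponds to exactly 10 kpc.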

Problems

Two problems exist for any class of standard candle. The principal one is calibration, that is the determination of exactly what the absolute magnitude of the candle is. This includes defining the class well enough that members can be recognized, and finding enough members of that class with well-known distances to allow their true absolute magnitude to be determined with enough accuracy. The second problem lies in recognizing members of the class, and not mistakenly using a standard candle calibration on an object which does not belong to the class. At extreme distances, which is where one most wishes to use a distance indicator, this recognition problem can be quite serious.

A significant issue with standard candles is the recurring question of how standard they are. For example, all observations seem to indicate that Type Ia supernovae that are of known distance have the same brightness (corrected by the shape of the light curve). The basis for this closeness in brightness is discussed below; however, the possibility exists that the distant Type Ia supernovae have different properties than nearby Type Ia supernovae. The use of Type Ia supernovae is crucial in determining the correct cosmological model. If indeed the properties of Type Ia supernovae are different at large distances, i.e. if the extrapolation of their calibration to arbitrary distances is not valid, ignoring this variation can dangerously bias the reconstruction of the cosmological parameters, in particular the reconstruction of the matter density parameter.

That this is not merely a philosophical issue can be seen from the history of distance measurements using Cepheid variables. In the 1950s, Walter Baade discovered that the nearby Cepheid variables used to calibrate the standard candle were of a different type than the ones used to measure distances to nearby galaxies. The nearby Cepheid variables were population I stars with much higher metal content than the distant population II stars. As a result, the population I Cepheids were actually much brighter than believed, and when corrected, this had the effect of doubling the distances to the globular clusters, the nearby galaxies, and the diameter of the Milky Way.

Standard siren

Gravitational waves originating from the inspiral phase of compact binary systems, such as neutron stars or black holes, have the useful property that both the amplitude and shape of the emitted gravitational radiation depend strongly on the chirp mass of the system. By observing the waveform, the chirp mass can be computed. With the chirp mass and the measured amplitude, the distance to the source can be determined. Further, gravitational waves are not subject to extinction due to an absorbing intervening medium. (They are subject to gravitational lensing, however.) Thus, such a gravitational wave source is a "standard siren" of known loudness.

The amplitude and shape of the detected gravitational radiation allows the distance to be computed. Therefore, a standard siren can be used as a distance indicator on a cosmic scale. When the collision can be observed optically as well (in the case of a kilonova such as GW170817), the Doppler shift can be measured and the Hubble constant computed.
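
As a back-of-the-envelope sketch of the idea: at leading (quadrupole) order, an optimally oriented inspiral has strain amplitude h ≈ 4(G·Mc)^(5/3)·(πf)^(2/3)/(c⁴·d), so a measured amplitude, frequency, and chirp mass yield a distance. The function and numbers below are illustrative assumptions only; real analyses fit full waveforms and marginalize over source orientation:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
MPC = 3.086e22       # m

def siren_distance_mpc(h, f_gw, m1, m2):
    """Rough luminosity distance (Mpc) from a compact-binary signal, using
    the leading-order amplitude for an optimally oriented source:
    h ~ 4*(G*Mc)^(5/3)*(pi*f)^(2/3) / (c^4 * d).
    h: strain amplitude, f_gw: GW frequency (Hz), m1/m2: masses in M_sun."""
    mc = ((m1 * m2) ** 0.6 / (m1 + m2) ** 0.2) * M_SUN   # chirp mass in kg
    d = 4 * (G * mc) ** (5 / 3) * (math.pi * f_gw) ** (2 / 3) / (c ** 4 * h)
    return d / MPC

# Toy numbers loosely inspired by a binary-neutron-star event:
print(siren_distance_mpc(h=1e-22, f_gw=100.0, m1=1.4, m2=1.4))   # a few tens of Mpc
```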

Standard ruler

Another class of physical distance indicator is the standard ruler. In 2008, galaxy diameters were proposed as a possible standard ruler for cosmological parameter determination. More recently the physical scale imprinted by baryon acoustic oscillations (BAO) in the early universe has been used. In the early universe (before recombination) the baryons and photons scatter off each other, and form a tightly-coupled fluid that can support sound waves. The waves are sourced by primordial density perturbations, and travel at a speed that can be predicted from the baryon density and other cosmological parameters. The total distance that these sound waves can travel before recombination determines a fixed scale, which simply expands with the universe after recombination. BAO therefore provide a standard ruler that can be measured in galaxy surveys from the effect of baryons on the clustering of galaxies. The method requires an extensive galaxy survey in order to make this scale visible, but has been measured with percent-level precision. The scale does depend on cosmological parameters like the baryon and matter densities, and the number of neutrinos, so distances based on BAO are more dependent on the cosmological model than those based on local measurements.
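
As a rough illustration of the "fixed scale" just described, the comoving sound horizon at the drag epoch can be estimated numerically. The sketch below uses approximate Planck-like parameter values and a simplified treatment of the radiation density; all numbers are assumptions for illustration, not values quoted in this article:

```python
import numpy as np
from scipy.integrate import quad

# Sound horizon at the drag epoch: r_s = integral_{z_drag}^{inf} c_s(z)/H(z) dz
c = 299792.458           # km/s
H0 = 67.4                # km/s/Mpc (assumed)
h = H0 / 100
Om = 0.315               # total matter density (assumed)
Ob_h2 = 0.0224           # baryon density * h^2 (assumed)
Og_h2 = 2.47e-5          # photon density * h^2
Or = 4.15e-5 / h**2      # radiation incl. massless neutrinos (approximation)
OL = 1 - Om - Or
z_drag = 1060            # approximate drag-epoch redshift

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + OL)

def c_s(z):
    R = (3 * Ob_h2 / (4 * Og_h2)) / (1 + z)   # baryon-to-photon momentum ratio
    return c / np.sqrt(3 * (1 + R))

r_s, _ = quad(lambda z: c_s(z) / H(z), z_drag, np.inf)
print(f"comoving sound horizon ~ {r_s:.0f} Mpc")   # roughly 150 Mpc
```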

Light echoes can also be used as standard rulers, although it is challenging to measure the source geometry correctly.

Galactic distance indicators

With few exceptions, distances based on direct measurements are available only out to about a thousand parsecs, which is a modest portion of our own Galaxy. For distances beyond that, measures depend upon physical assumptions, that is, on the assertion that one recognizes the object in question and that the class of objects is homogeneous enough that its members can be used for meaningful estimation of distance.

Physical distance indicators, used on progressively larger distance scales, include the methods described in the sections that follow.

Main sequence fitting

When the absolute magnitude for a group of stars is plotted against the spectral classification of the star, in a Hertzsprung–Russell diagram, evolutionary patterns are found that relate to the mass, age and composition of the star. In particular, during their hydrogen burning period, stars lie along a curve in the diagram called the main sequence. By measuring these properties from a star's spectrum, the position of a main sequence star on the H–R diagram can be determined, and thereby the star's absolute magnitude estimated. A comparison of this value with the apparent magnitude allows the approximate distance to be determined, after correcting for interstellar extinction of the luminosity because of gas and dust.

In a gravitationally-bound star cluster such as the Hyades, the stars formed at approximately the same time and lie at the same distance. This allows relatively accurate main sequence fitting, providing both age and distance determination.
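
A minimal sketch of main sequence fitting, assuming a calibrated main sequence (absolute magnitude versus color, e.g. anchored on a cluster like the Hyades with known parallaxes) and toy photometry for a target cluster; all numbers below are invented for illustration:

```python
import numpy as np

# Calibrated main sequence: absolute magnitude M_V vs. color (B-V). Toy values.
color_cal = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
M_cal     = np.array([2.0, 3.1, 4.3, 5.5, 6.6, 7.8])

# Observed cluster stars: colors and apparent magnitudes. Toy values.
color_obs = np.array([0.35, 0.55, 0.72, 0.95, 1.10])
m_obs     = np.array([11.0, 12.2, 13.1, 14.3, 15.1])

# Predicted absolute magnitudes at the observed colors
M_pred = np.interp(color_obs, color_cal, M_cal)

# The vertical offset between the observed and calibrated sequences is the
# distance modulus mu = m - M (interstellar extinction neglected here)
mu = np.mean(m_obs - M_pred)
d_pc = 10 ** (mu / 5 + 1)
print(f"distance modulus ~ {mu:.2f} mag, distance ~ {d_pc:.0f} pc")
```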

Extragalactic distance scale

Extragalactic distance indicators

Method                                  Uncertainty for single galaxy (mag)   Distance to Virgo Cluster (Mpc)   Range (Mpc)
Classical Cepheids                      0.16                                  15–25                             29
Novae                                   0.4                                   21.1 ± 3.9                        20
Planetary Nebula Luminosity Function    0.3                                   15.4 ± 1.1                        50
Globular Cluster Luminosity Function    0.4                                   18.8 ± 3.8                        50
Surface Brightness Fluctuations         0.3                                   15.9 ± 0.9                        50
D–σ relation                            0.5                                   16.8 ± 2.4                        > 100
Type Ia Supernovae                      0.10                                  19.4 ± 5.0                        > 1000
The extragalactic distance scale is a series of techniques used today by astronomers to determine the distances of cosmological bodies beyond our own galaxy, which are not easily obtained with traditional methods. Some procedures utilize properties of these objects, such as stars, globular clusters, nebulae, and galaxies as a whole. Other methods are based more on the statistics and probabilities of things such as entire galaxy clusters.

Wilson–Bappu effect

Discovered in 1956 by Olin Wilson and M.K. Vainu Bappu, the Wilson–Bappu effect utilizes the effect known as spectroscopic parallax. Many stars have features in their spectra, such as the calcium K-line, that indicate their absolute magnitude. The distance to the star can then be calculated from its apparent magnitude using the distance modulus.

There are major limitations to this method for finding stellar distances. The calibration of the spectral line strengths has limited accuracy and it requires a correction for interstellar extinction. Though in theory this method has the ability to provide reliable distance calculations to stars up to 7 megaparsecs (Mpc), it is generally only used for stars at hundreds of kiloparsecs (kpc).
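
For illustration, a sketch of a Wilson–Bappu distance estimate, assuming one published approximate calibration of the Ca II K emission-line width against absolute magnitude (M_V ≈ 33.2 − 18.0·log10 W0, with W0 in km/s; treat these coefficients, and all input values, as assumptions):

```python
import math

def wilson_bappu_distance_pc(W0_kms, m_V, A_V=0.0):
    """Distance (pc) from the Ca II K-line width W0 (km/s) and apparent
    magnitude m_V, using an approximate Wilson-Bappu calibration and the
    distance modulus. A_V is an optional extinction correction (mag)."""
    M_V = 33.2 - 18.0 * math.log10(W0_kms)   # absolute magnitude from line width
    mu = m_V - M_V - A_V                     # extinction-corrected distance modulus
    return 10 ** (mu / 5 + 1)

# Hypothetical giant star with a 60 km/s line width and m_V = 8.0:
print(f"{wilson_bappu_distance_pc(60.0, 8.0):.0f} pc")
```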

Classical Cepheids

Beyond the reach of the Wilson–Bappu effect, the next method relies on the period-luminosity relation of classical Cepheid variable stars. The following relation can be used to calculate the distance to Galactic and extragalactic classical Cepheids:

    5 · log10(d) = V + (3.34) · log10(P) − (2.45) · (V − I) + 7.52

where d is the distance in parsecs, P is the pulsation period in days, and V and I are the mean apparent visual and I-band magnitudes.
Several problems complicate the use of Cepheids as standard candles and are actively debated. Chief among them are: the nature and linearity of the period-luminosity relation in various passbands; the impact of metallicity on both the zero-point and the slope of those relations; and the effects of photometric contamination (blending) and a changing (typically unknown) extinction law on Cepheid distances.
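
A minimal sketch applying the period-luminosity relation quoted above; the input values are invented for illustration:

```python
import math

def cepheid_distance_pc(V, I, P_days):
    """Distance (pc) from mean apparent magnitudes V and I and the pulsation
    period, using the relation 5*log10(d) = V + 3.34*log10(P) - 2.45*(V-I) + 7.52."""
    log_d = (V + 3.34 * math.log10(P_days) - 2.45 * (V - I) + 7.52) / 5
    return 10 ** log_d

# A hypothetical 10-day Cepheid with V = 15.0 and I = 14.1:
print(f"{cepheid_distance_pc(15.0, 14.1, 10.0):.0f} pc")   # a few tens of kpc
```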

These unresolved matters have resulted in cited values for the Hubble constant ranging between 60 km/s/Mpc and 80 km/s/Mpc. Resolving this discrepancy is one of the foremost problems in astronomy, since a precise value of the Hubble constant would significantly tighten the constraints on several cosmological parameters.

Cepheid variable stars were the key instrument in Edwin Hubble's 1923 conclusion that M31 (Andromeda) was an external galaxy, as opposed to a smaller nebula within the Milky Way. He calculated the distance of M31 to be 285 kpc; today's value is 770 kpc.

NGC 3370, a spiral galaxy in the constellation Leo, contains the farthest Cepheids yet found, at a distance of 29 Mpc. Cepheid variable stars are by no means perfect distance markers: for nearby galaxies they have an error of about 7%, rising to 15% for the most distant.

Supernovae

SN 1994D (bright spot on the lower left) in the NGC 4526 galaxy. Image by NASA, ESA, The Hubble Key Project Team, and The High-Z Supernova Search Team

There are several different methods by which supernovae can be used to measure extragalactic distances.

Measuring a supernova's photosphere

We can assume that a supernova expands in a spherically symmetric manner. If the supernova is close enough that we can measure the angular extent, θ(t), of its photosphere, we can use the equation

    ω = Δθ / Δt

where ω is the angular velocity and θ the angular extent. In order to get an accurate measurement, it is necessary to make two observations separated by a time Δt. Subsequently, we can use

    d = V_ej / ω

where d is the distance to the supernova and V_ej is the radial velocity of the ejecta (it can be assumed that V_ej equals V_θ if the expansion is spherically symmetric).
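
A minimal numerical sketch of the two equations above, with hypothetical numbers for the angular sizes, time baseline, and ejecta velocity:

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # milliarcseconds to radians
KM_PER_PC = 3.086e13

# Two measurements of the photosphere's angular radius, separated by dt:
theta1_mas, theta2_mas = 0.50, 0.80          # hypothetical values
dt_days = 30.0
v_ej_kms = 10_000.0                          # ejecta velocity from spectral line widths

omega = (theta2_mas - theta1_mas) * MAS_TO_RAD / (dt_days * 86400)  # rad/s
d_km = v_ej_kms / omega                      # d = V_ej / omega
print(f"distance ~ {d_km / KM_PER_PC / 1e6:.2f} Mpc")
```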

This method works only if the supernova is close enough for its photosphere to be measured accurately. Moreover, the expanding shell of gas is in fact neither perfectly spherical nor a perfect blackbody, and interstellar extinction can hinder accurate measurements of the photosphere. This problem is further exacerbated by core-collapse supernovae. All of these factors contribute to a distance error of up to 25%.

Type Ia light curves

Type Ia supernovae are some of the best ways to determine extragalactic distances. Ia's occur when a binary white dwarf star begins to accrete matter from its companion star. As the white dwarf gains matter, eventually it reaches its Chandrasekhar limit of approximately 1.4 solar masses.

Once reached, the star becomes unstable and undergoes a runaway nuclear fusion reaction. Because all Type Ia supernovae explode at about the same mass, their absolute magnitudes are all the same. This makes them very useful as standard candles. All Type Ia supernovae have a standard blue and visual magnitude of

    M_B ≈ M_V ≈ −19.3 ± 0.3
Therefore, when observing a Type Ia supernova, if it is possible to determine what its peak magnitude was, then its distance can be calculated. It is not intrinsically necessary to capture the supernova directly at its peak magnitude; using the multicolor light curve shape method (MLCS), the shape of the light curve (taken at any reasonable time after the initial explosion) is compared to a family of parameterized curves that determine the absolute magnitude at maximum brightness. This method also takes into account interstellar extinction/dimming from dust and gas.

Similarly, the stretch method fits a particular supernova's magnitude light curve to a template light curve. This template, as opposed to being several light curves at different wavelengths (as in MLCS), is just a single light curve that has been stretched (or compressed) in time. By using this stretch factor, the peak magnitude can be determined.
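
A toy sketch of the stretch idea: a single made-up template light curve is stretched in time and shifted to fit synthetic photometry, and the fitted peak magnitude is then converted to a distance using the fiducial M ≈ −19.3 quoted above (no extinction or stretch-luminosity correction is applied; none of the numbers are real data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up template: magnitudes relative to peak vs. days from peak. A real
# analysis would use an empirical SN Ia template; this shape only
# illustrates the fitting machinery.
t_tmpl = np.linspace(-10.0, 40.0, 101)
m_tmpl = 0.03 * t_tmpl**2 / (1.0 + 0.04 * np.abs(t_tmpl))

def model(t, stretch, m_peak, t_peak):
    # Template stretched in time by 'stretch' and shifted to (t_peak, m_peak)
    return m_peak + np.interp((t - t_peak) / stretch, t_tmpl, m_tmpl)

# Synthetic "observations" generated from the model plus noise:
rng = np.random.default_rng(0)
t_obs = np.array([-4.0, 1.0, 6.0, 12.0, 20.0, 30.0])
m_obs = model(t_obs, 1.1, 14.6, 0.5) + rng.normal(0.0, 0.02, t_obs.size)

(stretch, m_peak, t_peak), _ = curve_fit(model, t_obs, m_obs, p0=(1.0, 14.5, 0.0))

# Distance modulus with a fiducial peak absolute magnitude M ~ -19.3:
d_pc = 10 ** ((m_peak + 19.3) / 5 + 1)
print(f"stretch ~ {stretch:.2f}, m_peak ~ {m_peak:.2f}, d ~ {d_pc / 1e6:.0f} Mpc")
```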

Using Type Ia supernovae is one of the most accurate methods, particularly since supernova explosions can be visible at great distances (their luminosities rival that of the galaxy in which they are situated), much farther than Cepheid Variables (500 times farther). Much time has been devoted to the refining of this method. The current uncertainty approaches a mere 5%, corresponding to an uncertainty of just 0.1 magnitudes.

Novae in distance determinations

Novae can be used in much the same way as supernovae to derive extragalactic distances. There is a direct relation between a nova's maximum magnitude and the time for its visible light to decline by two magnitudes. This relation is shown to be:

    M_V^max = −9.96 − 2.31 · log10(ẋ)

where ẋ is the time derivative of the nova's magnitude, describing the average rate of decline over the first 2 magnitudes.

After novae fade, they are about as bright as the most luminous Cepheid variable stars; both techniques therefore have about the same maximum distance, roughly 20 Mpc. The error in this method produces an uncertainty in magnitude of about ±0.4.
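
A minimal sketch applying the maximum-magnitude versus rate-of-decline relation quoted above; the nova parameters are invented for illustration:

```python
import math

def nova_distance_pc(m_V_max, decline_mag_per_day):
    """Distance (pc) from a nova's apparent peak magnitude and its mean
    decline rate (mag/day) over the first two magnitudes, using
    M_V_max = -9.96 - 2.31*log10(rate) and the distance modulus."""
    M_V_max = -9.96 - 2.31 * math.log10(decline_mag_per_day)
    mu = m_V_max - M_V_max
    return 10 ** (mu / 5 + 1)

# A nova peaking at m_V = 9.0 that fades 2 magnitudes in 20 days (0.1 mag/day):
print(f"{nova_distance_pc(9.0, 0.1):.0f} pc")
```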

Globular cluster luminosity function

Based on the method of comparing the luminosities of globular clusters (located in galactic halos) from distant galaxies to that of the Virgo Cluster, the globular cluster luminosity function carries an uncertainty of distance of about 20% (or 0.4 magnitudes). 

US astronomer William Alvin Baum first attempted to use globular clusters to measure distant elliptical galaxies. He compared the brightest globular clusters in the Virgo A galaxy with those in Andromeda, assuming the luminosities of the clusters were the same in both. Knowing the distance to Andromeda, Baum assumed a direct correlation and estimated Virgo A's distance.

Baum used just a single globular cluster, but individual formations are often poor standard candles. Canadian astronomer René Racine assumed the use of the globular cluster luminosity function (GCLF) would lead to a better approximation. The number of globular clusters as a function of magnitude is given by:

    Φ(m) = A e^(−(m − m0)² / (2σ²))

where m0 is the turnover magnitude, M0 is the magnitude of the Virgo cluster, and σ is the dispersion, ~1.4 mag.

It is important to remember that the method assumes globular clusters have roughly the same luminosity distribution throughout the universe; in reality, there is no universal globular cluster luminosity function that applies to all galaxies.
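
A sketch of a GCLF distance under the universality assumption just described: fit a Gaussian to (here, simulated) cluster magnitudes and compare the fitted turnover with an assumed absolute turnover magnitude (M0 ≈ −7.5 in V is a commonly quoted value; treat it, and all other numbers, as assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

M0 = -7.5                                        # assumed absolute turnover magnitude
rng = np.random.default_rng(1)
mags = rng.normal(23.5, 1.4, 500)                # simulated cluster apparent magnitudes

counts, edges = np.histogram(mags, bins=25)
centers = 0.5 * (edges[:-1] + edges[1:])

def gclf(m, A, m0, sigma):
    # Gaussian luminosity function, as in the relation quoted above
    return A * np.exp(-(m - m0) ** 2 / (2 * sigma ** 2))

(A, m0, sigma), _ = curve_fit(gclf, centers, counts, p0=(50.0, 23.0, 1.5))
d_pc = 10 ** ((m0 - M0) / 5 + 1)                 # distance modulus m0 - M0
print(f"turnover m0 ~ {m0:.2f}, distance ~ {d_pc / 1e6:.1f} Mpc")
```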

Planetary nebula luminosity function

Like the GCLF method, a similar numerical analysis can be used for planetary nebulae (more than one is required) within far-off galaxies. The planetary nebula luminosity function (PNLF) was first proposed in the late 1970s by Holland Cole and David Jenner. They suggested that planetary nebulae might all have similar maximum intrinsic brightness, now calculated to be M = −4.53. This would make them potential standard candles for determining extragalactic distances.

Astronomer George Howard Jacoby and his colleagues later proposed that the PNLF function equaled:

    N(M) ∝ e^(0.307 M) · (1 − e^(3(M* − M)))

where N(M) is the number of planetary nebulae having absolute magnitude M, and M* is the absolute magnitude of the brightest nebula.
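
In practice, the distance comes from locating the bright-end cutoff of the observed distribution. A minimal sketch, assuming the M* ≈ −4.53 calibration quoted above and an invented apparent cutoff magnitude:

```python
import math

def pnlf_distance_pc(m_cutoff, M_star=-4.53):
    """Distance (pc) from the apparent magnitude of the bright-end cutoff of
    the planetary nebula luminosity function, given the calibrated absolute
    cutoff M_star, via the distance modulus."""
    mu = m_cutoff - M_star
    return 10 ** (mu / 5 + 1)

# A hypothetical galaxy whose brightest planetary nebulae appear at m ~ 26.5:
print(f"{pnlf_distance_pc(26.5) / 1e6:.1f} Mpc")
```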

Surface brightness fluctuation method


The following methods deal with the overall inherent properties of galaxies. Though their error percentages vary, they have the ability to make distance estimates beyond 100 Mpc, though they are usually applied more locally.

The surface brightness fluctuation (SBF) method takes advantage of the use of CCD cameras on telescopes. Because of spatial fluctuations in a galaxy's surface brightness, some pixels on these cameras will pick up more stars than others. However, as distance increases the picture will become increasingly smooth. Analysis of this yields the magnitude of the pixel-to-pixel variation, which is directly related to a galaxy's distance.
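
A toy simulation of why the fluctuations encode distance: with a fixed angular pixel size, a galaxy twice as far away contributes four times as many stars per pixel, so the relative pixel-to-pixel scatter (roughly 1/√N) halves. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def relative_fluctuation(n_stars_per_pixel, n_pixels=100_000):
    # Poisson-sample the number of (equal-luminosity) stars in each pixel;
    # the relative scatter of the pixel counts is ~ 1/sqrt(N)
    counts = rng.poisson(n_stars_per_pixel, n_pixels)
    return counts.std() / counts.mean()

# Doubling the distance quadruples the stars per angular pixel, halving
# the relative fluctuation:
for n in (100, 400, 1600):
    print(n, round(relative_fluctuation(n), 4))
```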

D–σ relation

The D–σ relation, used in elliptical galaxies, relates the angular diameter (D) of the galaxy to its velocity dispersion (σ). To understand this method, it is important to describe exactly what D represents. It is, more precisely, the galaxy's angular diameter out to the surface brightness level of 20.75 B-mag arcsec−2. This surface brightness is independent of the galaxy's actual distance from us. Instead, D is inversely proportional to the galaxy's distance, represented as d. Thus, this relation does not employ standard candles; rather, D provides a standard ruler. The relation between D and σ is

    log(D) = 1.333 · log(σ) + C

where C is a constant which depends on the distance to the galaxy clusters.

This method has the potential to become one of the strongest methods for calculating galactic distances, perhaps exceeding the range of even the Tully–Fisher method. As of today, however, elliptical galaxies are not bright enough to allow this method to be calibrated through the use of techniques such as Cepheids. Instead, calibration is done using cruder methods.
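
A minimal sketch of the standard-ruler logic: once the relation fixes the physical diameter of galaxies at a given σ, distances scale inversely with the measured angular diameter. The calibrator distance and angular sizes below are invented for illustration:

```python
# Two elliptical galaxies with the same velocity dispersion sigma have (by
# the D-sigma relation) the same physical diameter, so their distances scale
# inversely with the angular diameter D measured at the fiducial isophote.
d_calibrator_mpc = 16.8        # assumed distance of a calibrating galaxy
D_calibrator_arcsec = 40.0     # its angular diameter at 20.75 B-mag/arcsec^2
D_target_arcsec = 8.0          # a target galaxy with the same sigma

d_target = d_calibrator_mpc * D_calibrator_arcsec / D_target_arcsec
print(f"{d_target:.0f} Mpc")   # -> 84 Mpc
```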

Overlap and scaling

A succession of distance indicators, which is the distance ladder, is needed for determining distances to other galaxies. The reason is that objects bright enough to be recognized and measured at such distances are so rare that few or none are present nearby, so there are too few examples close enough with reliable trigonometric parallax to calibrate the indicator. For example, Cepheid variables, one of the best indicators for nearby spiral galaxies, cannot yet be satisfactorily calibrated by parallax alone, though the Gaia space mission is expected to solve that specific problem. The situation is further complicated by the fact that different stellar populations generally do not have all types of stars in them. Cepheids in particular are massive stars, with short lifetimes, so they will only be found in places where stars have very recently been formed. Consequently, because elliptical galaxies usually have long ceased to have large-scale star formation, they will not have Cepheids. Instead, distance indicators whose origins are in an older stellar population (like novae and RR Lyrae variables) must be used. However, RR Lyrae variables are less luminous than Cepheids, and novae are unpredictable and an intensive monitoring program—and luck during that program—is needed to gather enough novae in the target galaxy for a good distance estimate. 

Because the more distant steps of the cosmic distance ladder depend upon the nearer ones, the more distant steps include the effects of errors in the nearer steps, both systematic and statistical ones. The result of these propagating errors means that distances in astronomy are rarely known to the same level of precision as measurements in the other sciences, and that the precision necessarily is poorer for more distant types of object. 

Another concern, especially for the very brightest standard candles, is their "standardness": how homogeneous the objects are in their true absolute magnitude. For some of these different standard candles, the homogeneity is based on theories about the formation and evolution of stars and galaxies, and is thus also subject to uncertainties in those aspects. For the most luminous of distance indicators, the Type Ia supernovae, this homogeneity is known to be poor; however, no other class of object is bright enough to be detected at such large distances, so the class is useful simply because there is no real alternative. 

The observational result of Hubble's Law, the proportional relationship between distance and the speed with which a galaxy is moving away from us (usually referred to as redshift), is a product of the cosmic distance ladder. Edwin Hubble observed that fainter galaxies are more redshifted. Finding the value of the Hubble constant was the result of decades of work by many astronomers, both in amassing the measurements of galaxy redshifts and in calibrating the steps of the distance ladder. Hubble's Law is the primary means we have for estimating the distances of quasars and distant galaxies in which individual distance indicators cannot be seen.
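
For small redshifts, the law reduces to d ≈ cz/H0; a one-line sketch, with an assumed H0 chosen inside the 60–80 km/s/Mpc range quoted earlier:

```python
# Hubble-law distance for small redshift z (recession velocity v ~ c*z)
c = 299792.458          # km/s
H0 = 70.0               # km/s/Mpc, an assumed value within the quoted range
z = 0.05
print(f"{c * z / H0:.0f} Mpc")   # ~ 214 Mpc
```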

Operator (computer programming)

From Wikipedia, the free encyclopedia