
Saturday, September 20, 2014

Degenerate matter

From Wikipedia, the free encyclopedia
 
Degenerate matter[1][2] in physics is a collection of free, non-interacting particles with a pressure and other physical characteristics determined by quantum mechanical effects. It is the analogue of an ideal gas in classical mechanics. The degenerate state of matter, in the sense of deviating from an ideal gas, arises at extraordinarily high density (in compact stars) or at extremely low temperatures in laboratories.[3][4] It occurs for matter particles such as electrons, neutrons, protons, and fermions in general and is referred to as electron-degenerate matter, neutron-degenerate matter, etc. In a mixture of particles, such as the ions and electrons in white dwarfs or metals, the electrons may be degenerate while the ions are not.

In a quantum mechanical description, free particles limited to a finite volume may take only a discrete set of energies, called quantum states. The Pauli exclusion principle prevents identical fermions from occupying the same quantum state. At lowest total energy (when the thermal energy of the particles is negligible), all the lowest energy quantum states are filled. This state is referred to as full degeneracy. The pressure (called degeneracy pressure or Fermi pressure) remains nonzero even near absolute zero temperature.[3][4] Adding particles or reducing the volume forces the particles into higher-energy quantum states. This requires a compression force, and is made manifest as a resisting pressure. The key feature is that this degeneracy pressure depends only on the density of the fermions, not on the temperature. It keeps dense stars in equilibrium independent of the thermal structure of the star.

Degenerate matter is also called a Fermi gas or a degenerate gas. A degenerate state with velocities of the fermions close to the speed of light (particle energy larger than its rest mass energy) is called relativistic degenerate matter.

Degenerate matter was first described for a mixture of ions and electrons in 1926 by Ralph H. Fowler,[5] who showed that at the densities observed in white dwarfs the electrons (which obey Fermi–Dirac statistics; the term "degenerate" was not yet in use) exert a pressure much higher than the partial pressure of the ions.

Concept

Imagine that a plasma is cooled and compressed repeatedly. Eventually, it will not be possible to compress the plasma any further, because the Pauli exclusion principle states that two fermions cannot share the same quantum state. When in this state, since there is no extra space for any particles, a particle's location is very precisely defined. Therefore, since (according to the Heisenberg uncertainty principle) ΔpΔx ≥ ħ/2, where Δp is the uncertainty in the particle's momentum and Δx is the uncertainty in position, the particles' momentum must be extremely uncertain, since they are confined to a very small space. Therefore, even though the plasma is cold, the particles must be moving very fast on average. This leads to the conclusion that compressing an object into a very small space requires tremendous force to control its particles' momentum.
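
To get a feel for the numbers, the following back-of-the-envelope sketch (Python) applies the uncertainty relation to an electron confined to a cell about 10^-12 m across; the confinement length is an illustrative white-dwarf-like spacing, not a measured value:

    hbar = 1.054571817e-34   # reduced Planck constant, J s
    m_e = 9.1093837015e-31   # electron mass, kg

    # Assume each electron is confined to a cell ~1e-12 m across,
    # roughly the inter-electron spacing deep inside a white dwarf
    # (illustrative assumption).
    dx = 1.0e-12             # m

    dp = hbar / (2 * dx)     # minimum momentum uncertainty, kg m/s
    v = dp / m_e             # corresponding (non-relativistic) speed, m/s
    E = dp ** 2 / (2 * m_e)  # corresponding kinetic energy, J

    print(f"dp ~ {dp:.2e} kg m/s, v ~ {v:.2e} m/s, E ~ {E / 1.602e-19:.0f} eV")

Even at zero temperature this gives speeds of order 10^7 m/s and kinetic energies of several keV, purely from confinement.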

Unlike a classical ideal gas, whose pressure is proportional to its temperature (P = nkT/V, where P is pressure, V is the volume, n is the number of particles, typically atoms or molecules, k is Boltzmann's constant, and T is temperature), the pressure exerted by degenerate matter depends only weakly on its temperature. In particular, the pressure remains nonzero even at absolute zero temperature. At relatively low densities, the pressure of a fully degenerate gas is given by P = K(n/V)^{5/3}, where K depends on the properties of the particles making up the gas. At very high densities, where most of the particles are forced into quantum states with relativistic energies, the pressure is given by P = K′(n/V)^{4/3}, where K′ again depends on the properties of the particles making up the gas.[6]
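
As an illustration of these two scalings, the sketch below evaluates the textbook constants for a fully degenerate electron gas, K = (3π²)^{2/3} ħ²/(5mₑ) and K′ = (3π²)^{1/3} ħc/4, at an assumed white-dwarf-like electron density; the density value is illustrative, not a measurement:

    import math

    hbar = 1.054571817e-34  # J s
    m_e = 9.1093837015e-31  # kg
    c = 2.99792458e8        # m/s

    def p_nonrel(n):
        """Non-relativistic degeneracy pressure, P = K n^(5/3)."""
        return (3 * math.pi ** 2) ** (2 / 3) * hbar ** 2 / (5 * m_e) * n ** (5 / 3)

    def p_rel(n):
        """Ultra-relativistic degeneracy pressure, P = K' n^(4/3)."""
        return (3 * math.pi ** 2) ** (1 / 3) * hbar * c / 4 * n ** (4 / 3)

    n = 1.0e36  # electrons per m^3, a white-dwarf-like density (illustrative)
    print(f"P (non-relativistic) ~ {p_nonrel(n):.2e} Pa")
    print(f"P (ultra-relativistic) ~ {p_rel(n):.2e} Pa")

At this density the two estimates are comparable (~10^22 Pa), which is roughly where the gas transitions from the 5/3 to the 4/3 regime.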

All matter experiences both normal thermal pressure and degeneracy pressure, but in commonly encountered gases, thermal pressure dominates so much that degeneracy pressure can be ignored. Likewise, degenerate matter still has normal thermal pressure, but at extremely high densities the degeneracy pressure usually dominates.

Exotic examples of degenerate matter include neutronium, strange matter, metallic hydrogen and white dwarf matter. Degeneracy pressure contributes to the pressure of conventional solids, but these are not usually considered to be degenerate matter because a significant contribution to their pressure is provided by electrical repulsion of atomic nuclei and the screening of nuclei from each other by electrons. In metals it is useful to treat the conduction electrons alone as a degenerate, free electron gas while the majority of the electrons are regarded as occupying bound quantum states. This contrasts with degenerate matter that forms the body of a white dwarf, where all the electrons would be treated as occupying free particle momentum states.

Degenerate gases

Degenerate gases are gases composed of fermions that have a particular configuration that usually forms at high densities. Fermions are particles with half-integer spin. Their behavior is regulated by a set of quantum mechanical rules called the Fermi–Dirac statistics. One particular rule is the Pauli exclusion principle, which states that there can be only one fermion occupying each quantum state, which also applies to electrons that are not bound to a nucleus but merely confined to a fixed volume, such as in the deep interior of a star. Such particles as electrons, protons, neutrons, and neutrinos are all fermions and obey Fermi–Dirac statistics.

A fermion gas in which all energy states below some energy level are filled is called a fully degenerate fermion gas. The difference between this energy level and the lowest energy level is known as the Fermi energy. The electron gas in ordinary metals and in the interior of white dwarf stars constitute two examples of a degenerate electron gas. Most stars are supported against their own gravitation by normal thermal gas pressure. White dwarf stars are supported by the degeneracy pressure of the electron gas in their interior, while for neutron stars the degenerate particles are neutrons.

Electron degeneracy

In an ordinary fermion gas in which thermal effects dominate, most of the available electron energy levels are unfilled and the electrons are free to move to these states. As particle density is increased, electrons progressively fill the lower energy states and additional electrons are forced to occupy states of higher energy even at low temperatures. Degenerate gases strongly resist further compression because the electrons cannot move to already filled lower energy levels due to the Pauli exclusion principle. Since electrons cannot give up energy by moving to lower energy states, no thermal energy can be extracted. The momentum of the fermions in the fermion gas nevertheless generates pressure, termed degeneracy pressure.

Under high densities the matter becomes a degenerate gas when the electrons are all stripped from their parent atoms. In the core of a star, once hydrogen burning in nuclear fusion reactions stops, it becomes a collection of positively charged ions, largely helium and carbon nuclei, floating in a sea of electrons which have been stripped from the nuclei. Degenerate gas is an almost perfect conductor of heat and does not obey the ordinary gas laws. White dwarfs are luminous not because they are generating any energy but rather because they have trapped a large amount of heat which is gradually radiated away.

Normal gas exerts higher pressure when it is heated and expands, but the pressure in a degenerate gas does not depend on the temperature. When gas becomes super-compressed, particles are pressed right up against each other, producing a degenerate gas that behaves more like a solid. In degenerate gases the kinetic energies of the electrons are quite high and the rate of collision between electrons and other particles is quite low, so degenerate electrons can travel great distances at velocities that approach the speed of light. Instead of temperature, the pressure in a degenerate gas depends only on the speed of the degenerate particles; however, adding heat does not increase the speed.

Pressure is increased only by the mass of the particles, which increases the gravitational force pulling the particles closer together. Therefore, the phenomenon is the opposite of that normally found in matter, where adding mass makes the object bigger. In a degenerate gas, when the mass is increased, the pressure is increased, and the particles become spaced closer together, so the object becomes smaller. Degenerate gas can be compressed to very high densities, typical values being in the range of 10,000 kilograms per cubic centimeter.

There is an upper limit to the mass of an electron-degenerate object, the Chandrasekhar limit, beyond which electron degeneracy pressure cannot support the object against collapse. The limit is approximately 1.44 solar masses for objects with compositions similar to the Sun. The mass cutoff changes with the chemical composition of the object, as this affects the ratio of mass to number of electrons present. Celestial objects below this limit are white dwarf stars, formed by the collapse of the cores of stars that run out of fuel. During collapse, an electron-degenerate gas forms in the core, providing sufficient degeneracy pressure as it is compressed to resist further collapse. Above this mass limit, a neutron star (supported by neutron degeneracy pressure) or a black hole may be formed instead.
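
The composition dependence can be sketched with the standard scaling of the limit as the inverse square of the mean molecular weight per electron, μ_e; the normalization of roughly 1.44 solar masses at μ_e = 2 and the sample compositions below are illustrative assumptions:

    # Minimal sketch: in the usual approximation the Chandrasekhar limit
    # scales as 1/mu_e^2, where mu_e is the mean molecular weight per
    # electron. The normalization and compositions are illustrative.

    def chandrasekhar_mass(mu_e, m_ref=1.44, mu_ref=2.0):
        """Approximate Chandrasekhar mass in solar masses."""
        return m_ref * (mu_ref / mu_e) ** 2

    for name, mu_e in [("helium/carbon/oxygen", 2.0), ("iron-56", 56 / 26)]:
        print(f"{name} (mu_e = {mu_e:.2f}): ~{chandrasekhar_mass(mu_e):.2f} M_sun")

An iron-rich object, with fewer electrons per unit mass, thus has a somewhat lower mass cutoff than a carbon-oxygen one.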

Proton degeneracy

Sufficiently dense matter containing protons experiences proton degeneracy pressure, in a manner similar to the electron degeneracy pressure in electron-degenerate matter: protons confined to a sufficiently small volume have a large uncertainty in their momentum due to the Heisenberg uncertainty principle. Because protons are much more massive than electrons, the same momentum represents a much smaller velocity for protons than for electrons. As a result, in matter with approximately equal numbers of protons and electrons, proton degeneracy pressure is much smaller than electron degeneracy pressure, and proton degeneracy is usually modeled as a correction to the equations of state of electron-degenerate matter.
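
The velocity claim is simple arithmetic: at equal momentum, velocity scales inversely with mass. A minimal sketch with an illustrative momentum value:

    m_e = 9.109e-31   # electron mass, kg
    m_p = 1.673e-27   # proton mass, kg

    p = 1.0e-22       # same momentum for both particles, kg m/s (illustrative)
    # Velocity ratio reduces to the mass ratio m_e / m_p ~ 1/1836.
    print(f"v_p / v_e = {(p / m_p) / (p / m_e):.5f}")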

Neutron degeneracy

Neutron degeneracy is analogous to electron degeneracy and is demonstrated in neutron stars, which are primarily supported by the pressure from a degenerate neutron gas.[7] This happens when a stellar core above 1.44 solar masses, the Chandrasekhar limit, collapses and is not halted by the degenerate electrons. As the star collapses, the Fermi energy of the electrons increases to the point where it is energetically favorable for them to combine with protons to produce neutrons (via inverse beta decay, also termed electron capture and "neutralization"). The result of this collapse is an extremely compact star composed of nuclear matter, which is predominantly a degenerate neutron gas, sometimes called neutronium, with a small admixture of degenerate proton and electron gases.

Neutrons in a degenerate neutron gas are spaced much more closely than electrons in an electron-degenerate gas, because the more massive neutron has a much shorter wavelength at a given energy. In the case of neutron stars and white dwarf stars, this is compounded by the fact that the pressures within neutron stars are much higher than those in white dwarfs. The pressure increase is caused by the fact that the compactness of a neutron star causes gravitational forces to be much higher than in a less compact body with similar mass. This results in a star with a diameter on the order of a thousandth that of a white dwarf.
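
The wavelength claim follows from the de Broglie relation λ = h/√(2mE): at a given (non-relativistic) energy, the more massive particle has the shorter wavelength. A minimal sketch, with an illustrative energy chosen low enough that the non-relativistic formula applies to the electron:

    import math

    h = 6.626e-34     # Planck constant, J s
    m_e = 9.109e-31   # electron mass, kg
    m_n = 1.675e-27   # neutron mass, kg
    E = 1.602e-16     # 1 keV, illustrative

    def de_broglie(m):
        """Non-relativistic de Broglie wavelength at energy E."""
        return h / math.sqrt(2 * m * E)

    # The ratio reduces to sqrt(m_n / m_e) ~ 43, independent of E.
    print(f"lambda_e / lambda_n = {de_broglie(m_e) / de_broglie(m_n):.0f}")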

There is an upper limit to the mass of a neutron-degenerate object, the Tolman–Oppenheimer–Volkoff limit, which is analogous to the Chandrasekhar limit for electron-degenerate objects. The precise limit is unknown, as it depends on the equations of state of nuclear matter, for which a highly accurate model is not yet available. Above this limit, a neutron star may collapse into a black hole, or into other, denser forms of degenerate matter (such as quark matter) if these forms exist and have suitable properties (mainly related to degree of compressibility, or "stiffness", described by the equations of state).

Quark degeneracy

At densities greater than those supported by neutron degeneracy, quark matter is expected to occur. Several variations of this have been proposed that represent quark-degenerate states. Strange matter is a degenerate gas of quarks that is often assumed to contain strange quarks in addition to the usual up and down quarks. Color superconductor materials are degenerate gases of quarks in which quarks pair up in a manner similar to Cooper pairing in electrical superconductors. The equations of state for the various proposed forms of quark-degenerate matter vary widely, and are usually also poorly defined, due to the difficulty of modeling strong force interactions.

Quark-degenerate matter may occur in the cores of neutron stars, depending on the equations of state of neutron-degenerate matter. It may also occur in hypothetical quark stars, formed by the collapse of objects above the Tolman–Oppenheimer–Volkoff mass limit for neutron-degenerate objects. Whether quark-degenerate matter forms at all in these situations depends on the equations of state of both neutron-degenerate matter and quark-degenerate matter, both of which are poorly known.

Preon degeneracy hypothesis

Preons are subatomic particles proposed to be the constituents of quarks, which become composite particles in preon-based models. If preons exist, preon-degenerate matter might occur at densities greater than that which can be supported by quark-degenerate matter. The expected properties of preon-degenerate matter depend very strongly on the model chosen to describe preons, and the existence of preons is not assumed by the majority of the scientific community, due to conflicts between the preon models originally proposed and experimental data from particle accelerators.

Singularity

At densities greater than those supported by any degeneracy, gravity overwhelms all other forces. To the best of our current understanding, the body collapses to form a black hole. In the frame of reference that is co-moving with the collapsing matter, all the matter ends up in an infinitely dense singularity at the center of the event horizon. In the frame of reference of an observer at infinity, the collapse asymptotically approaches the event horizon.

As a consequence of relativity, the extreme gravitational field and orbital velocity experienced by infalling matter around a black hole would "slow" time for that matter relative to a distant observer.

Wednesday, September 17, 2014

Type Ia supernova

From Wikipedia, the free encyclopedia

Type Ia supernovae occur in binary systems (two stars orbiting one another) in which one of the stars is a white dwarf while the other can vary from a giant star to an even smaller white dwarf.[1] A white dwarf is the remnant of a star that has completed its normal life cycle and has ceased nuclear fusion. However, white dwarfs of the common carbon-oxygen variety are capable of further fusion reactions that release a great deal of energy if their temperatures rise high enough.

Physically, carbon-oxygen white dwarfs with a low rate of rotation are limited to below 1.38 solar masses.[2][3] Beyond this, they re-ignite and in some cases trigger a supernova explosion. Somewhat confusingly, this limit is often referred to as the Chandrasekhar mass, despite being marginally different from the absolute Chandrasekhar limit where electron degeneracy pressure is unable to prevent catastrophic collapse. If a white dwarf gradually accretes mass from a binary companion, the general hypothesis is that its core will reach the ignition temperature for carbon fusion as it approaches the limit. If the white dwarf merges with another star (a very rare event), it will momentarily exceed the limit and begin to collapse, again raising its temperature past the nuclear fusion ignition point. Within a few seconds of initiation of nuclear fusion, a substantial fraction of the matter in the white dwarf undergoes a runaway reaction, releasing enough energy (1–2×10^44 J)[4] to unbind the star in a supernova explosion.[5]

This category of supernovae produces consistent peak luminosity because of the uniform mass of white dwarfs that explode via the accretion mechanism. The stability of this value allows these explosions to be used as standard candles to measure the distance to their host galaxies because the visual magnitude of the supernovae depends primarily on the distance.

Consensus model

Spectrum of SN1998aq, a Type Ia supernova, one day after maximum light in the B band[6]

The Type Ia supernova is a sub-category in the Minkowski–Zwicky supernova classification scheme, which was devised by American astronomer Rudolph Minkowski and Swiss astronomer Fritz Zwicky.[7] There are several means by which a supernova of this type can form, but they share a common underlying mechanism. When a slowly-rotating[2] carbon-oxygen white dwarf accretes matter from a companion, it can exceed the Chandrasekhar limit of about 1.44 solar masses, beyond which it can no longer support its weight with electron degeneracy pressure.[8] In the absence of a countervailing process, the white dwarf would collapse to form a neutron star,[9] as normally occurs in the case of a white dwarf that is primarily composed of magnesium, neon, and oxygen.[10]
The current view among astronomers who model Type Ia supernova explosions, however, is that this limit is never actually attained and collapse is never initiated. Instead, the increase in pressure and density due to the increasing weight raises the temperature of the core,[3] and as the white dwarf approaches about 99% of the limit,[11] a period of convection ensues, lasting approximately 1,000 years.[12] At some point in this simmering phase, a deflagration flame front is born, powered by carbon fusion. The details of the ignition are still unknown, including the location and number of points where the flame begins.[13] Oxygen fusion is initiated shortly thereafter, but this fuel is not consumed as completely as carbon.[14]

Once fusion has begun, the temperature of the white dwarf starts to rise. A main sequence star supported by thermal pressure would expand and cool in order to counterbalance an increase in thermal energy. However, degeneracy pressure is independent of temperature; the white dwarf is unable to regulate the burning process in the manner of normal stars, so it is vulnerable to a runaway fusion reaction. The flame accelerates dramatically, in part due to the Rayleigh–Taylor instability and interactions with turbulence. It is still a matter of considerable debate whether this flame transforms into a supersonic detonation from a subsonic deflagration.[12][15]

Regardless of the exact details of nuclear burning, it is generally accepted that a substantial fraction of the carbon and oxygen in the white dwarf is burned into heavier elements within a period of only a few seconds,[14] raising the internal temperature to billions of degrees. This energy release from thermonuclear burning (1–2×10^44 J[4]) is more than enough to unbind the star; that is, the individual particles making up the white dwarf gain enough kinetic energy to fly apart from each other. The star explodes violently and releases a shock wave in which matter is typically ejected at speeds on the order of 5,000–20,000 km/s, roughly 6% of the speed of light. The energy released in the explosion also causes an extreme increase in luminosity. The typical visual absolute magnitude of Type Ia supernovae is M_V = −19.3 (about 5 billion times brighter than the Sun), with little variation.[12]
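
As a rough consistency check on these numbers, the sketch below compares the quoted energy release with the gravitational self-energy scale GM²/R of a near-Chandrasekhar white dwarf and expresses the ejecta speed as a fraction of c; the mass and radius are illustrative assumptions, and the true binding energy is a structure-dependent fraction of GM²/R:

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30    # kg
    c = 2.998e8         # m/s

    M = 1.4 * M_sun     # near-Chandrasekhar mass
    R = 2.0e6           # m, ~2000 km white dwarf radius (illustrative)

    E_grav_scale = G * M ** 2 / R   # gravitational self-energy scale, J
    E_release = 1.5e44              # mid-range of the quoted 1-2 x 10^44 J

    print(f"GM^2/R ~ {E_grav_scale:.1e} J, release ~ {E_release:.1e} J")
    print(f"20,000 km/s is {2.0e7 / c:.1%} of c")

The release is of the same order as the self-energy scale, consistent with the star being unbound.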

The theory of this type of supernova is similar to that of novae, in which a white dwarf accretes matter more slowly and does not approach the Chandrasekhar limit. In the case of a nova, the infalling matter causes a hydrogen fusion surface explosion that does not disrupt the star.[12] This type of supernova differs from a core-collapse supernova, which is caused by the cataclysmic explosion of the outer layers of a massive star as its core implodes.[16]

Formation

Formation process
Gas is being stripped from a giant star to form an accretion disc around a compact companion (such as a white dwarf star). NASA image
Four images of a simulation of Type Ia supernova
Simulation of the explosion phase of the deflagration-to-detonation model of supernova formation, run on a scientific supercomputer. Argonne National Laboratory image

Single degenerate progenitors

One model for the formation of this category of supernova is a close binary star system. The progenitor binary system consists of main sequence stars, with the primary possessing more mass than the secondary. Being greater in mass, the primary is the first of the pair to evolve onto the asymptotic giant branch, where the star's envelope expands considerably. If the two stars share a common envelope then the system can lose significant amounts of mass, reducing the angular momentum, orbital radius and period. After the primary has degenerated into a white dwarf, the secondary star later evolves into a red giant and the stage is set for mass accretion onto the primary.
During this final shared-envelope phase, the two stars spiral in closer together as angular momentum is lost. The resulting orbit can have a period as brief as a few hours.[17][18] If the accretion continues long enough, the white dwarf may eventually approach the Chandrasekhar limit.

The white dwarf could also accrete matter from other types of companions, including a subgiant or (if the orbit is sufficiently close) even a main sequence star. The actual evolutionary process during this accretion stage remains uncertain, as it can depend both on the rate of accretion and on the transfer of angular momentum to the white dwarf.[19]

It has been estimated that single degenerate progenitors account for no more than 20% of all Type Ia supernovae.[20]

Double degenerate progenitors

A second possible mechanism for triggering a Type Ia supernova is the merger of two white dwarfs whose combined mass exceeds the Chandrasekhar limit. The merged object is sometimes called a super-Chandrasekhar-mass white dwarf.[21][22] In such a case, the total mass is not constrained by the Chandrasekhar limit.

Collisions of solitary stars within the Milky Way occur only once every 10^7 to 10^13 years, far less frequently than the appearance of novae.[23] Collisions occur with greater frequency in the dense core regions of globular clusters[24] (cf. blue stragglers). A likely scenario is a collision with a binary star system, or between two binary systems containing white dwarfs. Such a collision can leave behind a close binary system of two white dwarfs, whose orbit decays until they merge through their shared envelope.[25] However, a study based on SDSS spectra found 15 double systems among the 4,000 white dwarfs tested, implying a double white dwarf merger every 100 years in the Milky Way; conveniently, this rate matches the number of Type Ia supernovae detected in our neighborhood.[26]
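
The quoted rate can be reproduced, to order of magnitude, by simple scaling arithmetic; the Galactic population size and inspiral timescale below are illustrative assumptions, not values from the cited study:

    # Back-of-the-envelope sketch of the double-white-dwarf merger rate.
    n_wd_sampled = 4000          # white dwarfs with spectra (from the text)
    n_double = 15                # close double systems found (from the text)
    n_wd_galaxy = 1.0e10         # assumed total white dwarfs in the Milky Way
    t_merge = 1.0e10             # assumed typical inspiral time, years

    frac_double = n_double / n_wd_sampled
    rate = frac_double * n_wd_galaxy / t_merge   # mergers per year

    print(f"double fraction ~ {frac_double:.2%}")
    print(f"merger rate ~ 1 per {1 / rate:.0f} years")

With these assumptions the estimate lands within a factor of a few of the one-per-century figure.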

A double degenerate scenario is one of several explanations proposed for the anomalously massive (2 solar mass) progenitor of SN 2003fg.[27][28] It is the only possible explanation for SNR 0509-67.5, as all possible models with only one white dwarf have been ruled out.[29] It has also been strongly suggested for SN 1006, given that no companion star remnant has been found there.[20]

Observations made with NASA's Swift space telescope ruled out supergiant or giant companion stars for every Type Ia supernova studied. A supergiant companion's blown-off outer shell should emit X-rays, but no such glow was detected by Swift's XRT (X-ray Telescope) in the 53 closest supernova remnants. For 12 Type Ia supernovae observed within 10 days of the explosion, the satellite's UVOT (Ultraviolet/Optical Telescope) showed no ultraviolet radiation originating from the surface of a heated companion star hit by the supernova shock wave, meaning there were no red giants or larger stars orbiting those supernova progenitors. In the case of SN 2011fe, the companion star, if it existed, must have been smaller than the Sun.[30]

The Chandra X-ray Observatory revealed that the X-ray radiation of five elliptical galaxies and the bulge of the Andromeda galaxy is 30–50 times fainter than expected. X-ray radiation should be emitted by the accretion discs of Type Ia supernova progenitors; the missing radiation indicates that few white dwarfs possess accretion discs, ruling out the common, accretion-based model of Type Ia supernovae.[31] Inward-spiraling white dwarf pairs must be strong sources of gravitational waves, but these had not been detected as of 2012.
Double degenerate scenarios raise questions about the applicability of Type Ia supernovae as standard candles, since the total mass of the two merging white dwarfs varies significantly, which means the peak luminosity varies as well.

Type Iax

It has been proposed that a group of sub-luminous supernovae that occur when helium accretes onto a white dwarf should be classified as type Iax.[32][33] This type of supernova may not always completely destroy the white dwarf progenitor.[34]

Observation

Unlike the other types of supernovae, Type Ia supernovae generally occur in all types of galaxies, including ellipticals. They show no preference for regions of current stellar formation.[35] As white dwarf stars form at the end of a star's main sequence evolutionary period, such a long-lived star system may have wandered far from the region where it originally formed. Thereafter a close binary system may spend another million years in the mass transfer stage (possibly forming persistent nova outbursts) before the conditions are ripe for a Type Ia supernova to occur.[36]

A long-standing problem in astronomy has been the identification of supernova progenitors. Direct observation of a progenitor would provide useful constraints on supernova models. As of 2006, the search for such a progenitor had been ongoing for longer than a century.[37] Observation of the supernova SN 2011fe has provided useful constraints. Previous observations with the Hubble Space Telescope did not show a star at the position of the event, thereby excluding a red giant as the source. The expanding plasma from the explosion was found to contain carbon and oxygen, making it likely the progenitor was a white dwarf primarily composed of these elements.[38] Similarly, observations of the nearby SN PTF 11kx,[39] discovered January 16, 2011 (UT) by the Palomar Transient Factory (PTF), led to the conclusion that this explosion arose from a single-degenerate progenitor with a red giant companion, suggesting there is no single progenitor path to Type Ia supernovae. Direct observations of the progenitor of PTF 11kx, reported in the August 24 edition of Science, confirm this conclusion and also show that the progenitor star experienced periodic nova eruptions before the supernova, another surprising discovery.[40][41]

Light curve

This plot of luminosity (relative to the Sun, L☉) versus time shows the characteristic light curve for a Type Ia supernova. The peak is primarily due to the decay of nickel-56, while the later stage is powered by cobalt-56.

Type Ia supernovae have a characteristic light curve, their graph of luminosity as a function of time after the explosion. Near the time of maximum luminosity, the spectrum contains lines of intermediate-mass elements from oxygen to calcium; these are the main constituents of the outer layers of the star. Months after the explosion, when the outer layers have expanded to the point of transparency, the spectrum is dominated by light emitted by material near the core of the star, heavy elements synthesized during the explosion; most prominently isotopes close to the mass of iron (or iron peak elements). The radioactive decay of nickel-56 through cobalt-56 to iron-56 produces high-energy photons which dominate the energy output of the ejecta at intermediate to late times.[12]

The use of Type Ia supernovae to measure precise distances was pioneered by a collaboration of Chilean and US astronomers, the Calán/Tololo Supernova Survey.[42] In a series of papers in the 1990s the survey showed that while Type Ia supernovae do not all reach the same peak luminosity, a single parameter measured from the light curve can be used to correct unreddened Type Ia supernovae to standard candle values. The original correction to standard candle value is known as the Phillips relationship[43] and was shown by this group to be able to measure relative distances to 7% accuracy.[44] The cause of this uniformity in peak brightness is related to the amount of nickel-56 produced in white dwarfs presumably exploding near the Chandrasekhar limit.[45]

The similarity in the absolute luminosity profiles of nearly all known Type Ia supernovae has led to their use as a secondary standard candle in extragalactic astronomy.[46] Improved calibrations of the Cepheid variable distance scale[47] and direct geometric distance measurements to NGC 4258 from the dynamics of maser emission,[48] when combined with the Hubble diagram of the Type Ia supernova distances, have led to an improved value of the Hubble constant.

In 1998, observations of distant Type Ia supernovae indicated the unexpected result that the Universe seems to undergo an accelerating expansion.[49][50]

Cosmic distance ladder

From Wikipedia, the free encyclopedia

The cosmic distance ladder (also known as the extragalactic distance scale) is the succession of methods by which astronomers determine the distances to celestial objects. A direct distance measurement of an astronomical object is possible only for objects that are "close enough" (within about a thousand parsecs) to Earth. The techniques for determining distances to more distant objects are all based on measured correlations between methods that work at close distances and methods that work at larger distances. Several methods rely on a standard candle, an astronomical object of known luminosity.

The ladder analogy arises because no one technique can measure distances at all ranges encountered in astronomy. Instead, one method can be used to measure nearby distances, a second can be used to measure nearby to intermediate distances, and so on. Each rung of the ladder provides information that can be used to determine the distances at the next higher rung.

Direct measurement

Statue of an astronomer and the concept of the cosmic distance ladder by the parallax method, made from the azimuth ring and other parts of the Yale–Columbia Refractor (telescope) (c. 1925), wrecked by the 2003 Canberra bushfires which burned out the Mount Stromlo Observatory; at Questacon, Canberra, Australian Capital Territory

At the base of the ladder are fundamental distance measurements, in which distances are determined directly, with no physical assumptions about the nature of the object in question. The precise measurement of stellar positions is part of the discipline of astrometry.

Astronomical unit

Direct distance measurements are based upon precise determination of the distance between the Earth and the Sun, which is called the Astronomical Unit (AU). Historically, observations of transits of Venus were crucial in determining the AU; in the first half of the 20th century, observations of asteroids were also important. Presently the orbit of Earth is determined with high precision using radar measurements of Venus and other nearby planets and asteroids,[1] and by tracking interplanetary spacecraft in their orbits around the Sun through the Solar System. Kepler's Laws provide precise ratios of the sizes of the orbits of objects revolving around the Sun, but not a real measure of the orbits themselves. Radar provides a value in kilometers for the difference in two orbits' sizes, and from that and the ratio of the two orbit sizes, the size of Earth's orbit comes directly. The orbit is known with a precision of a few meters.

Parallax

The most important fundamental distance measurements come from trigonometric parallax. As the Earth orbits around the Sun, the position of nearby stars will appear to shift slightly against the more distant background. These shifts are angles in an isosceles triangle, with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) making the short leg of the triangle and the distance to the star being the long, nearly equal legs. The amount of shift is quite small, measuring 1 arcsecond for an object at a distance of 1 parsec (3.26 light-years), and decreasing in angular amount as the reciprocal of the distance thereafter. Astronomers usually express distances in units of parsecs (parallax arcseconds); light-years are used in popular media, but almost invariably values in light-years have been converted from numbers tabulated in parsecs in the original source.
Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars whose parallax is larger than the precision of the measurement. Parallax measurements typically have an accuracy measured in milliarcseconds.[2] In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond,[3] providing useful distances for stars out to a few hundred parsecs.
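
In practice the conversion is a one-liner: the distance in parsecs is the reciprocal of the parallax in arcseconds. A minimal sketch with an illustrative parallax:

    def parallax_distance_pc(parallax_arcsec):
        """Distance in parsecs from trigonometric parallax in arcseconds."""
        return 1.0 / parallax_arcsec

    # Example: a star with a measured parallax of 10 milliarcseconds
    p = 0.010  # arcsec
    d = parallax_distance_pc(p)
    print(f"distance ~ {d:.0f} pc (~{d * 3.26:.0f} light-years)")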

Stars can have a velocity relative to the Sun that causes proper motion and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift in their spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and giant variable stars, including Cepheids and the RR Lyrae variables.[4]

The motion of the Sun through space provides a longer baseline that increases the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of other stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size.[5]

Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful. In particular the distance obtained for the Hyades has been an important step in the distance ladder.

Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula, can be observed over time, then an expansion parallax distance to that cloud can be estimated. Binary stars which are both visual and spectroscopic binaries also can have their distance estimated by similar means. The common characteristic to these is that a measurement of angular motion is combined with a measurement of the absolute velocity (usually obtained via the Doppler effect). The distance estimate comes from computing how far away the object must be to make its observed absolute velocity appear with the observed angular motion.

Expansion parallaxes in particular can give fundamental distance estimates for objects that are very far away, because supernova ejecta have large expansion velocities and large sizes (compared to stars). Further, they can be observed with radio interferometers which can measure very small angular motions. These combine to mean that some supernovae in other galaxies have fundamental distance estimates.[6] Though valuable, such cases are quite rare, so they serve as important consistency checks on the distance ladder rather than workhorse steps by themselves.

Standard candles

Almost all physical distance indicators are standard candles: objects that belong to some class with a known luminosity. By comparing the known luminosity of such an object to its observed brightness, the distance to the object can be computed using the inverse-square law.

In astronomy, the brightness of an object is given in terms of its absolute magnitude. This quantity is derived from the logarithm of its luminosity as seen from a distance of 10 parsecs. The apparent magnitude, or the magnitude as seen by the observer, can be used to determine the distance D to the object in kiloparsecs (where 1 kpc equals 1000 parsecs) as follows:
5 \log_{10} D = m - M - 10,
where m is the apparent magnitude and M the absolute magnitude. For this to be accurate, both magnitudes must be in the same frequency band and there can be no relative motion in the radial direction.
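
Inverting this relation gives the distance directly from the two magnitudes. A minimal sketch, with illustrative values:

    def distance_kpc(m, M):
        """Distance in kiloparsecs from 5 log10 D = m - M - 10."""
        return 10 ** ((m - M - 10) / 5)

    # Illustrative example: a candle with absolute magnitude -19.3
    # observed at apparent magnitude 19.5
    d = distance_kpc(19.5, -19.3)
    print(f"D ~ {d:.3g} kpc (~{d / 1e3:.0f} Mpc)")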

Some means of accounting for interstellar extinction, which also makes objects appear fainter and more red, is also needed, especially if the object lies within a dusty or gaseous region.[7] The difference between absolute and apparent magnitudes is called the distance modulus, and astronomical distances, especially intergalactic ones, are sometimes tabulated in this way.

Problems

Two problems exist for any class of standard candle. The principal one is calibration, determining exactly what the absolute magnitude of the candle is. This includes defining the class well enough that members can be recognized, and finding enough members with well-known distances that their true absolute magnitude can be determined with enough accuracy. The second lies in recognizing members of the class, and not mistakenly using the standard candle calibration upon an object which does not belong to the class. At extreme distances, which is where one most wishes to use a distance indicator, this recognition problem can be quite serious.

A significant issue with standard candles is the recurring question of how standard they are. For example, all observations seem to indicate that Type Ia supernovae that are of known distance have the same brightness (corrected by the shape of the light curve). The basis for this closeness in brightness is discussed below; however, the possibility exists that the distant Type Ia supernovae have different properties than nearby Type Ia supernovae. The use of Type Ia supernovae is crucial in determining the correct cosmological model. If indeed the properties of Type Ia supernovae are different at large distances, i.e. if the extrapolation of their calibration to arbitrary distances is not valid, ignoring this variation can dangerously bias the reconstruction of the cosmological parameters, in particular the reconstruction of the matter density parameter.[8]

That this is not merely a philosophical issue can be seen from the history of distance measurements using Cepheid variables. In the 1950s, Walter Baade discovered that the nearby Cepheid variables used to calibrate the standard candle were of a different type than the ones used to measure distances to nearby galaxies. The nearby Cepheid variables were population I stars with much higher metal content than the distant population II stars. As a result, the population II stars were actually much brighter than believed, and this had the effect of doubling the distances to the globular clusters, the nearby galaxies, and the diameter of the Milky Way.

(Another class of physical distance indicator is the standard ruler. In 2008, galaxy diameters have been proposed as a possible standard ruler for cosmological parameter determination.[9])

Galactic distance indicators

With few exceptions, distances based on direct measurements are available only out to about a thousand parsecs, which is a modest portion of our own Galaxy. For distances beyond that, measures depend upon physical assumptions, that is, the assertion that one recognizes the object in question, and the class of objects is homogeneous enough that its members can be used for meaningful estimation of distance.
Physical distance indicators, used on progressively larger distance scales, include:

Main sequence fitting

When the absolute magnitude for a group of stars is plotted against the spectral classification of the star, in a Hertzsprung–Russell diagram, evolutionary patterns are found that relate to the mass, age and composition of the star. In particular, during their hydrogen burning period, stars lie along a curve in the diagram called the main sequence. By measuring these properties from a star's spectrum, the position of a main sequence star on the H–R diagram can be determined, and thereby the star's absolute magnitude estimated. A comparison of this value with the apparent magnitude allows the approximate distance to be determined, after correcting for interstellar extinction of the luminosity because of gas and dust.

In a gravitationally-bound star cluster such as the Hyades, the stars formed at approximately the same age and lie at the same distance. This allows relatively accurate main sequence fitting, providing both age and distance determination.

Extragalactic distance scale

Extragalactic distance indicators[13]

Method | Uncertainty for Single Galaxy (mag) | Distance to Virgo Cluster (Mpc) | Range (Mpc)
Classical Cepheids | 0.16 | 15–25 | 29
Novae | 0.4 | 21.1 ± 3.9 | 20
Planetary Nebula Luminosity Function | 0.3 | 15.4 ± 1.1 | 50
Globular Cluster Luminosity Function | 0.4 | 18.8 ± 3.8 | 50
Surface Brightness Fluctuations | 0.3 | 15.9 ± 0.9 | 50
D–σ relation | 0.5 | 16.8 ± 2.4 | > 100
Type Ia Supernovae | 0.10 | 19.4 ± 5.0 | > 1000

The extragalactic distance scale is a series of techniques used today by astronomers to determine the distances of cosmological bodies beyond our own galaxy, which are not easily obtained with traditional methods. Some procedures utilize properties of individual objects, such as stars, globular clusters, nebulae, and galaxies as a whole. Other methods are based more on the statistics and probabilities of things such as entire galaxy clusters.

Wilson–Bappu effect

Discovered in 1956 by Olin Wilson and M. K. Vainu Bappu, the Wilson–Bappu effect is a form of spectroscopic parallax. Certain stars have features in their emission/absorption spectra that allow relatively easy calculation of absolute magnitude; certain spectral lines, such as the K absorption line of calcium, are directly related to an object's magnitude. The distance to the star can then be calculated from its magnitudes via the relation
M - m = -2.5 \log_{10}(F_1/F_2).
Though in theory this method has the ability to provide reliable distance calculations to stars roughly 7 megaparsecs (Mpc) away, it is generally only used for stars hundreds of kiloparsecs (kpc) away.

This method is valid only for stars of magnitude greater than 15.

Classical Cepheids

Beyond the reach of the Wilson–Bappu effect, the next method relies on the period-luminosity relation of classical Cepheid variable stars, first discovered by Henrietta Leavitt. The following relation can be used to calculate the distance to Galactic and extragalactic classical Cepheids:
5 \log_{10} d = V + 3.34 \log_{10} P - 2.45 (V - I) + 7.52 [14]
5 \log_{10} d = V + 3.37 \log_{10} P - 2.55 (V - I) + 7.48 [15]
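
A small sketch shows how either calibration converts an observed mean magnitude, color, and period into a distance (taken here to be in parsecs, with the period in days); the Cepheid values are illustrative:

    import math

    def cepheid_distance_pc(V, I, P_days, a=3.34, b=2.45, zp=7.52):
        """Distance from 5 log10 d = V + a log10 P - b (V - I) + zp."""
        log_d = (V + a * math.log10(P_days) - b * (V - I) + zp) / 5
        return 10 ** log_d

    # Illustrative Cepheid: V = 15.0, I = 14.2, period 10 days
    d1 = cepheid_distance_pc(15.0, 14.2, 10.0)                    # [14]
    d2 = cepheid_distance_pc(15.0, 14.2, 10.0, 3.37, 2.55, 7.48)  # [15]
    print(f"d ~ {d1 / 1e3:.1f} kpc vs {d2 / 1e3:.1f} kpc")

The two calibrations differ by a few percent for this example, which is the level at which the debates below matter.
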
Several problems complicate the use of Cepheids as standard candles and are actively debated, chief among them: the nature and linearity of the period-luminosity relation in various passbands; the impact of metallicity on both the zero-point and slope of those relations; and the effects of photometric contamination (blending) and of a changing (typically unknown) extinction law on Cepheid distances.[16][17][18][19][20][21][22][23][24]

These unresolved matters have resulted in cited values for the Hubble Constant ranging between 60 km/s/Mpc and 80 km/s/Mpc. Resolving this discrepancy is one of the foremost problems in astronomy since the cosmological parameters of the Universe may be constrained by supplying a precise value of the Hubble constant.[25][26]

Cepheid variable stars were the key instrument in Edwin Hubble's 1923 conclusion that M31 (Andromeda) was an external galaxy, as opposed to a smaller nebula within the Milky Way. He calculated the distance of M31 to be 285 kpc; today's value is 770 kpc.

NGC 3370, a spiral galaxy in the constellation Leo, contains the farthest Cepheids yet found, at a distance of 29 Mpc. Cepheid variable stars are by no means perfect distance markers: at nearby galaxies they have an error of about 7%, rising to 15% for the most distant.

Supernovae

SN 1994D (bright spot on the lower left) in the NGC 4526 galaxy. Image by NASA, ESA, The Hubble Key Project Team, and The High-Z Supernova Search Team

There are several different methods by which supernovae can be used to measure extragalactic distances; here we cover the most widely used.

Measuring a supernova's photosphere

We can assume that a supernova expands in a spherically symmetric manner. If the supernova is close enough that we can measure the angular extent, θ(t), of its photosphere, we can use the equation
\omega = \frac{\Delta\theta}{\Delta t},
where ω is the angular velocity of expansion and Δθ the change in angular extent over a time interval Δt. To get an accurate measurement, it is necessary to make two observations separated by time Δt. Subsequently, we can use
d = \frac{V_{ej}}{\omega},
where d is the distance to the supernova and V_{ej} is the radial velocity of the ejecta (which can be assumed to equal the tangential velocity V_θ if the expansion is spherically symmetric).
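
A minimal sketch of this expansion-parallax computation, with illustrative angular sizes and an assumed ejecta velocity:

    import math

    ARCSEC_TO_RAD = math.pi / (180 * 3600)

    # Two measurements of the photosphere's angular extent (illustrative)
    theta1 = 0.10 * ARCSEC_TO_RAD   # rad, first epoch
    theta2 = 0.12 * ARCSEC_TO_RAD   # rad, 30 days later
    dt = 30 * 86400                 # s

    V_ej = 1.0e7                    # m/s, from Doppler shifts (assumed)

    omega = (theta2 - theta1) / dt  # angular expansion rate, rad/s
    d = V_ej / omega                # distance, m

    print(f"d ~ {d:.2e} m ~ {d / 3.086e16:.0f} pc")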

This method works only if the supernova is close enough for the photosphere to be measured accurately. Moreover, the expanding shell of gas is in fact neither perfectly spherical nor a perfect blackbody, and interstellar extinction can hinder accurate measurements of the photosphere. This problem is further exacerbated by core-collapse supernovae. All of these factors contribute to a distance error of up to 25%.

Type Ia light curves

Type Ia supernovae are some of the best ways to determine extragalactic distances. They occur when a white dwarf in a binary system begins to accrete matter from its companion star. As the white dwarf gains matter, it eventually reaches the Chandrasekhar limit of 1.4 M_\odot.

Once reached, the star becomes unstable and undergoes a runaway nuclear fusion reaction. Because all Type Ia supernovae explode at about the same mass, their absolute magnitudes are all the same. This makes them very useful as standard candles. All Type Ia supernovae have a standard blue and visual magnitude of
M_B \approx M_V \approx -19.3 \pm 0.3.
Therefore, when observing a Type Ia supernova, if it is possible to determine what its peak magnitude was, then its distance can be calculated. It is not intrinsically necessary to capture the supernova directly at its peak magnitude; using the multicolor light curve shape method (MLCS), the shape of the light curve (taken at any reasonable time after the initial explosion) is compared to a family of parameterized curves that will determine the absolute magnitude at the maximum brightness. This method also takes into account interstellar extinction/dimming from dust and gas.
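
Once the peak apparent magnitude has been recovered, the distance follows from the distance modulus with M ≈ −19.3. A minimal sketch, with an illustrative peak magnitude and no extinction correction:

    def sn_ia_distance_mpc(m_peak, M=-19.3):
        """Distance in Mpc from m - M = 5 log10(d_pc) - 5."""
        d_pc = 10 ** ((m_peak - M + 5) / 5)
        return d_pc / 1.0e6

    # Example: a Type Ia supernova peaking at apparent magnitude 14.0
    print(f"d ~ {sn_ia_distance_mpc(14.0):.0f} Mpc")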

Similarly, the stretch method fits the supernova's magnitude light curve to a template light curve. This template, as opposed to being several light curves at different wavelengths (MLCS), is just a single light curve that has been stretched (or compressed) in time. By using this stretch factor, the peak magnitude can be determined.[citation needed]

Using Type Ia supernovae is one of the most accurate methods, particularly since supernova explosions can be visible at great distances (their luminosities rival that of the galaxy in which they are situated), much farther than Cepheid variables (500 times farther). Much time has been devoted to refining this method. The current uncertainty approaches a mere 5%, corresponding to an uncertainty of just 0.1 magnitudes.

Novae in distance determinations

Novae can be used in much the same way as supernovae to derive extragalactic distances. There is a direct relation between a nova's maximum magnitude and the time for its visible light to decline by two magnitudes. This relation is shown to be:
M_V^{max} = -9.96 - 2.31 \log_{10} \dot{x},
where \dot{x} is the time derivative of the nova's magnitude, describing the average rate of decline over the first 2 magnitudes.
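
A minimal sketch of this maximum-magnitude versus rate-of-decline relation, with an illustrative decline rate:

    import math

    def nova_peak_absolute_mag(decline_rate_mag_per_day):
        """Peak absolute V magnitude from the relation quoted above."""
        return -9.96 - 2.31 * math.log10(decline_rate_mag_per_day)

    # Example: a nova fading 2 magnitudes in 20 days -> 0.1 mag/day
    print(f"M_V(max) ~ {nova_peak_absolute_mag(0.1):.2f}")

Combined with the observed peak apparent magnitude, the distance then follows from the distance modulus as for any standard candle.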

After novae fade, they are about as bright as the most luminous Cepheid variable stars; therefore both techniques have about the same maximum distance, ~20 Mpc. The error in this method produces an uncertainty in magnitude of about ±0.4.

Globular cluster luminosity function

Based on the method of comparing the luminosities of globular clusters (located in galactic halos) in distant galaxies to those of the Virgo cluster, the globular cluster luminosity function carries an uncertainty of distance of about 20% (or 0.4 magnitudes).

US astronomer William Alvin Baum first attempted to use globular clusters to measure distant elliptical galaxies. He compared the brightest globular clusters in the Virgo A galaxy with those in Andromeda, assuming the luminosities of the clusters were the same in both. Knowing the distance to Andromeda, Baum assumed a direct correlation and estimated Virgo A's distance.

Baum used just a single globular cluster, but individual formations are often poor standard candles. Canadian astronomer René Racine assumed the use of the globular cluster luminosity function (GCLF) would lead to a better approximation. The number of globular clusters as a function of magnitude is given by:
\Phi(m) = A e^{-(m - m_0)^2 / 2\sigma^2},
where m_0 is the turnover magnitude and σ is the dispersion, about 1.4 mag.
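
In practice the observed turnover magnitude m_0 is compared with a calibrated absolute turnover magnitude M_0 to yield a distance modulus. A minimal sketch; the fitted turnover and the absolute calibration (~ −7.5) are illustrative assumptions:

    import math

    def gclf(m, A, m0, sigma=1.4):
        """Gaussian globular cluster luminosity function (counts per mag)."""
        return A * math.exp(-(m - m0) ** 2 / (2 * sigma ** 2))

    # Counts peak at the turnover magnitude m0:
    print(f"relative counts: {gclf(23.7, 1.0, 23.7):.2f} at m0, "
          f"{gclf(25.1, 1.0, 23.7):.2f} one sigma fainter")

    # Distance modulus from the observed turnover versus an assumed
    # calibrated absolute turnover magnitude M0 (illustrative):
    m0_obs, M0 = 23.7, -7.5
    mu = m0_obs - M0
    print(f"distance modulus ~ {mu:.1f}, d ~ {10 ** ((mu + 5) / 5) / 1e6:.0f} Mpc")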

It is important to remember that the method assumes that globular clusters have roughly the same luminosity function throughout the universe; however, there is no universal globular cluster luminosity function that applies to all galaxies.

Planetary nebula luminosity function

Like the GCLF method, a similar numerical analysis can be used for planetary nebulae (more than one is needed) within far-off galaxies. The planetary nebula luminosity function (PNLF) was first proposed in the late 1970s by Holland Cole and David Jenner. They suggested that all planetary nebulae might have similar maximum intrinsic brightness, now calculated to be M = −4.53, which would make them potential standard candles for determining extragalactic distances.

Astronomer George Howard Jacoby and his colleagues later proposed that the PNLF function equaled:
N(M) \propto e^{0.307 M} (1 - e^{3(M^{*} - M)}),
where N(M) is the number of planetary nebulae with absolute magnitude M, and M* is the absolute magnitude of the brightest nebula.
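
The sketch below simply evaluates this function to show its shape: a rise toward fainter magnitudes with a sharp cutoff at M*; the sampled magnitudes are illustrative:

    import math

    def pnlf(M, M_star=-4.53):
        """Unnormalized planetary nebula luminosity function."""
        return math.exp(0.307 * M) * (1 - math.exp(3 * (M_star - M)))

    # Counts rise toward faint magnitudes and cut off sharply at M*:
    for M in [-4.5, -4.0, -3.0, -2.0]:
        print(f"M = {M:+.1f}: N ~ {pnlf(M):.3f}")

Fitting the observed cutoff against the calibrated M* gives the distance modulus, much as the turnover does for the GCLF.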

Surface brightness fluctuation method


The following methods deal with the overall inherent properties of galaxies. Though their error percentages vary, they have the ability to make distance estimates beyond 100 Mpc, although they are usually applied more locally.

The surface brightness fluctuation (SBF) method takes advantage of the use of CCD cameras on telescopes. Because of spatial fluctuations in a galaxy's surface brightness, some pixels on these cameras will pick up more stars than others. However, as distance increases the picture will become increasingly smoother. Analysis of this yields the magnitude of the pixel-to-pixel variation, which is directly related to the galaxy's distance.

D–σ relation

The D–σ relation, used in elliptical galaxies, relates the angular diameter (D) of the galaxy to its velocity dispersion. It is important to describe exactly what D represents, in order to understand this method. It is, more precisely, the galaxy’s angular diameter out to the surface brightness level of 20.75 B-mag arcsec−2. This surface brightness is independent of the galaxy’s actual distance from us. Instead, D is inversely proportional to the galaxy’s distance, represented as d. Thus, this relation does not employ standard candles. Rather, D provides a standard ruler. This relation between D and σ is
\log_{10}(D) = 1.333 \log_{10}(\sigma) + C,
where C is a constant that depends on the distance to the galaxy cluster.[citation needed]
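
Because D is a standard ruler, a galaxy's predicted isophotal diameter can be compared with its observed angular diameter, which scales inversely with distance. A minimal sketch; the dispersion, calibration constant, and observed diameter are all illustrative assumptions:

    import math

    # D-sigma relation: log10(D) = 1.333 log10(sigma) + C, where D is the
    # angular diameter out to the 20.75 B-mag arcsec^-2 isophote and C is
    # a calibration constant for the cluster (illustrative value below).
    def predicted_D_arcsec(sigma_kms, C=-1.0):
        return 10 ** (1.333 * math.log10(sigma_kms) + C)

    # Angular size scales as 1/d, so the ratio of predicted (at the
    # calibration distance) to observed diameter gives relative distance.
    D_pred = predicted_D_arcsec(200.0)  # arcsec, at calibration distance
    D_obs = 11.7                        # arcsec, observed (illustrative)
    print(f"relative distance ~ {D_pred / D_obs:.1f}x the calibration distance")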

This method has the potential to become one of the strongest methods of galactic distance calculators, perhaps exceeding the range of even the Tully–Fisher method. As of today, however, elliptical galaxies are not bright enough to provide a calibration for this method through the use of techniques such as Cepheids. Instead, calibration is done using cruder methods.

Overlap and scaling

A succession of distance indicators, which is the distance ladder, is needed for determining distances to other galaxies. The reason is that objects bright enough to be recognized and measured at such distances are so rare that few or none are present nearby, so there are too few examples close enough with reliable trigonometric parallax to calibrate the indicator. For example, Cepheid variables, one of the best indicators for nearby spiral galaxies, cannot be satisfactorily calibrated by parallax alone.
The situation is further complicated by the fact that different stellar populations generally do not have all types of stars in them. Cepheids in particular are massive stars, with short lifetimes, so they will only be found in places where stars have very recently been formed. Consequently, because elliptical galaxies usually have long ceased to have large-scale star formation, they will not have Cepheids. Instead, distance indicators whose origins are in an older stellar population (like novae and RR Lyrae variables) must be used. However, RR Lyrae variables are less luminous than Cepheids (so they cannot be seen as far away as Cepheids can), and novae are unpredictable and an intensive monitoring program – and luck during that program – is needed to gather enough novae in the target galaxy for a good distance estimate.

Because the more distant steps of the cosmic distance ladder depend upon the nearer ones, the more distant steps include the effects of errors in the nearer steps, both systematic and statistical ones. The result of these propagating errors means that distances in astronomy are rarely known to the same level of precision as measurements in the other sciences, and that the precision necessarily is poorer for more distant types of object.

Another concern, especially for the very brightest standard candles, is their "standardness": how homogeneous the objects are in their true absolute magnitude. For some of these different standard candles, the homogeneity is based on theories about the formation and evolution of stars and galaxies, and is thus also subject to uncertainties in those aspects. For the most luminous of distance indicators, the Type Ia supernovae, this homogeneity is known to be poor;[citation needed] however, no other class of object is bright enough to be detected at such large distances, so the class is useful simply because there is no real alternative.

The observational result of Hubble's Law, the proportional relationship between distance and the speed with which a galaxy is moving away from us (usually referred to as redshift), is a product of the cosmic distance ladder. Hubble observed that fainter galaxies are more redshifted. Finding the value of the Hubble constant was the result of decades of work by many astronomers, both in amassing the measurements of galaxy redshifts and in calibrating the steps of the distance ladder. Hubble's Law is the primary means we have for estimating the distances of quasars and distant galaxies in which individual distance indicators cannot be seen.
