The distribution of ionized hydrogen (known by astronomers as H II, from old spectroscopic terminology) in the parts of the Galactic interstellar medium visible from Earth's northern hemisphere, as observed with the Wisconsin Hα Mapper (Haffner et al. 2003).
The interstellar medium is composed of multiple phases,
distinguished by whether matter is ionic, atomic, or molecular, and the
temperature and density of the matter. The interstellar medium is composed primarily of hydrogen, followed by helium, with trace amounts of carbon, oxygen, and nitrogen. The thermal pressures of these phases are in rough equilibrium with one another. Magnetic fields and turbulent motions also provide pressure in the ISM, and are typically more important dynamically than the thermal pressure.
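The rough pressure balance can be checked with representative phase values of density n and temperature T (the numbers below are illustrative assumptions, not taken from this article); the thermal pressure P = nkT, so comparing the products nT compares pressures:

```python
# Representative (assumed) phase values: n in particles per cm^3, T in K.
phases = {
    "cold neutral medium": (30.0, 100.0),
    "warm neutral medium": (0.6, 6000.0),
    "hot ionized medium": (0.004, 1.0e6),
}

# Thermal pressure P = n*k*T, so comparing n*T compares pressures.
for name, (n, T) in phases.items():
    print(f"{name}: n*T = {n * T:,.0f} K cm^-3")
```

The products agree to within a factor of a few, which is what "rough equilibrium" means here.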
In all phases, the interstellar medium is extremely tenuous by
terrestrial standards. In cool, dense regions of the ISM, matter is
primarily in molecular form, and reaches number densities of 10⁶ molecules per cm³. In hot, diffuse regions of the ISM, matter is primarily ionized, and the density may be as low as 10⁻⁴ ions per cm³. Compare this with a number density of roughly 10¹⁹ molecules per cm³ for air at sea level, and 10¹⁰ molecules per cm³ for a laboratory high-vacuum chamber. By mass, 99% of the ISM is gas in any form, and 1% is dust. Of the gas in the ISM, by number 91% of atoms are hydrogen and 8.9% are helium, with 0.1% being atoms of elements heavier than hydrogen or helium, known as "metals"
in astronomical parlance. By mass this amounts to 70% hydrogen, 28%
helium, and 1.5% heavier elements. The hydrogen and helium are primarily
a result of primordial nucleosynthesis, while the heavier elements in the ISM are mostly a result of enrichment in the process of stellar evolution.
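The number fractions quoted above can be converted to approximate mass fractions; a minimal sketch, assuming a mean "metal" mass of 16 amu (oxygen-like), a value not stated in the text:

```python
m_H, m_He = 1.008, 4.0026   # atomic masses in amu
m_Z = 16.0                  # assumed mean mass of "metals" (oxygen-like)

n_frac = {"H": 0.91, "He": 0.089, "Z": 0.001}   # number fractions from the text
mass = {"H": n_frac["H"] * m_H,
        "He": n_frac["He"] * m_He,
        "Z": n_frac["Z"] * m_Z}
total = sum(mass.values())
for species, m in mass.items():
    print(f"{species}: {100 * m / total:.1f}% by mass")
```

This reproduces roughly the 70% hydrogen and 28% helium by mass quoted above.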
The ISM plays a crucial role in astrophysics
precisely because of its intermediate role between stellar and galactic
scales. Stars form within the densest regions of the ISM, molecular clouds, and replenish the ISM with matter and energy through planetary nebulae, stellar winds, and supernovae.
This interplay between stars and the ISM helps determine the rate at
which a galaxy depletes its gaseous content, and therefore its lifespan
of active star formation.
Voyager 1
reached the ISM on August 25, 2012, making it the first artificial
object from Earth to do so. Interstellar plasma and dust will be studied
until the mission's end in 2025.
Interstellar matter
Table 1 shows a breakdown of the properties of the components of the ISM of the Milky Way.
The three-phase model
Field, Goldsmith & Habing (1969) put forward the static two-phase equilibrium model to explain the observed properties of the ISM. Their modeled ISM consisted of a cold dense phase (T < 300 K), consisting of clouds of neutral and molecular hydrogen, and a warm intercloud phase (T ~ 10⁴ K), consisting of rarefied neutral and ionized gas. McKee & Ostriker (1977) added a dynamic third phase that represented the very hot (T ~ 10⁶ K) gas which had been shock-heated by supernovae
and constituted most of the volume of the ISM.
These phases correspond to temperature ranges in which heating and cooling can reach a stable equilibrium. Their paper formed the basis for further study over the past three decades. However, the relative proportions of the phases and their subdivisions are still not well known.
The atomic hydrogen model
This model takes into account only atomic hydrogen: temperatures above 3000 K dissociate molecules, and temperatures below 50,000 K leave atoms in their ground state. The influence of other atoms (He ...) is assumed to be negligible. The pressure is assumed to be very low, so that the free-flight times of the atoms are longer than the ~1 nanosecond duration of the light pulses that make up ordinary, temporally incoherent light.
In this collisionless gas, Einstein's theory of coherent light–matter interactions applies: all gas–light interactions are spatially coherent.
Suppose that monochromatic light is pulsed, then scattered by molecules having a quadrupole (Raman) resonance frequency. If the "length of light pulses is shorter than all involved time constants" (Lamb 1971), an "impulsive stimulated Raman scattering (ISRS)" (Yan, Gamble & Nelson 1985) occurs: whereas light generated by incoherent Raman scattering at a shifted frequency has a phase independent of the phase of the exciting light, and thus generates a new spectral line, coherence between the incident and scattered light allows them to interfere into a single frequency, which shifts the incident frequency.
Assume that a star radiates a continuous spectrum up to X-rays. Lyman frequencies are absorbed from this light and pump atoms mainly to the first excited state. In this state, the hyperfine periods are longer than 1 ns, so that an ISRS "may" redshift the light, populating high hyperfine levels. Another ISRS "may" transfer energy from the hyperfine levels to thermal electromagnetic waves, so that the redshift is permanent. The temperature of a light beam is defined from its frequency and spectral radiance by Planck's formula. As entropy must increase, "may" becomes "does".
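The statement about beam temperature refers to inverting Planck's law: given a frequency and a spectral radiance, there is a unique blackbody (brightness) temperature. A minimal sketch of this standard relation (not specific to the model above):

```python
import math

h, k_B, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants

def planck(nu, T):
    """Spectral radiance of a blackbody in W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / math.expm1(h * nu / (k_B * T))

def brightness_temperature(nu, B):
    """Invert Planck's formula: temperature of a beam of radiance B at frequency nu."""
    return h * nu / k_B / math.log1p(2 * h * nu**3 / (c**2 * B))

# Round trip: the inversion recovers the original temperature (about 5000 K).
print(brightness_temperature(5.0e14, planck(5.0e14, 5000.0)))
```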
However, where a previously absorbed line (first Lyman beta, ...) reaches the Lyman alpha frequency, the redshifting process stops and all hydrogen lines are strongly absorbed. The stop is not complete, however, if there is energy at the frequency shifted to the Lyman beta frequency, which produces a slow redshift. Successive redshifts separated by Lyman absorptions generate many absorption lines whose frequencies, deduced from the absorption process, obey a law more dependable than Karlsson's formula.
The previous process excites more and more atoms because de-excitation obeys Einstein's law of coherent interactions: the variation dI of the radiance I of a light beam along a path dx is dI = BIdx, where B is the Einstein amplification coefficient, which depends on the medium. I is the modulus of the Poynting vector of the field; absorption occurs for an opposed vector, which corresponds to a change of sign of B. The factor I in this formula shows that intense rays are amplified more than weak ones (competition of modes). Emission of a flare requires a sufficient radiance I, provided by the random zero-point field. After emission of a flare, the weak B increases by pumping while I remains close to zero: de-excitation by a coherent emission involves the stochastic parameters of the zero-point field, as observed close to quasars (and in polar auroras).
The ISM is turbulent and therefore full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM.
Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae
inject enormous amounts of energy into their surroundings, which leads
to hypersonic turbulence. The resultant structures – of varying sizes –
can be observed, such as stellar wind bubbles and superbubbles of hot gas, seen by X-ray satellite telescopes or turbulent flows observed in radio telescope maps.
Short, narrated video about IBEX's interstellar matter observations.
The interstellar medium begins where the interplanetary medium of the Solar System ends. The solar wind slows to subsonic velocities at the termination shock, 90–100 astronomical units from the Sun. In the region beyond the termination shock, called the heliosheath, interstellar matter interacts with the solar wind. Voyager 1, the farthest human-made object from the Earth (after 1998), crossed the termination shock December 16, 2004 and later entered interstellar space when it crossed the heliopause on August 25, 2012, providing the first direct probe of conditions in the ISM (Stone et al. 2005).
Interstellar extinction
The ISM is also responsible for extinction and reddening, the decreasing light intensity and shift in the dominant observable wavelengths of light from a star. These effects are caused by scattering and absorption of photons and allow the ISM to be observed with the naked eye in a dark sky. The apparent rifts that can be seen in the band of the Milky Way
– a uniform disk of stars – are caused by absorption of background
starlight by molecular clouds within a few thousand light years from
Earth.
Far ultraviolet light is absorbed effectively by the neutral components of the ISM. For example, a typical absorption wavelength of atomic hydrogen lies at about 121.5 nanometers, the Lyman-alpha
transition. Therefore, it is nearly impossible to see light emitted at
that wavelength from a star farther than a few hundred light years from
Earth, because most of it is absorbed during the trip to Earth by
intervening neutral hydrogen.
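The Lyman-alpha wavelength quoted above corresponds to a photon energy E = hc/λ of about 10.2 eV, just below the 13.6 eV ionization energy of hydrogen:

```python
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J per electronvolt
lam = 121.567e-9      # Lyman-alpha wavelength, m

E_photon = h * c / lam / eV
print(f"Lyman-alpha photon energy: {E_photon:.2f} eV")
```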
Heating and cooling
The ISM is usually far from thermodynamic equilibrium. Collisions establish a Maxwell–Boltzmann distribution
of velocities, and the 'temperature' normally used to describe
interstellar gas is the 'kinetic temperature', which describes the
temperature at which the particles would have the observed
Maxwell–Boltzmann velocity distribution in thermodynamic equilibrium.
However, the interstellar radiation field is typically much weaker than that of a medium in thermodynamic equilibrium; it is most often roughly that of an A star (surface temperature of ~10,000 K) highly diluted. Therefore, bound levels within an atom or molecule in the ISM are rarely populated according to the Boltzmann formula (Spitzer 1978, § 2.4).
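The kinetic temperature translates into particle speeds through the Maxwell–Boltzmann distribution; for instance, the rms speed v_rms = sqrt(3kT/m) of hydrogen atoms at a few temperatures chosen here for illustration:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
m_H = 1.6735575e-27   # mass of a hydrogen atom, kg

def v_rms(T, m=m_H):
    """RMS speed of particles at kinetic temperature T (Maxwell-Boltzmann)."""
    return math.sqrt(3 * k_B * T / m)

for T in (100, 8000, 1.0e6):   # illustrative cold / warm / hot values
    print(f"T = {T:>9.0f} K -> v_rms = {v_rms(T) / 1000:.1f} km/s")
```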
Depending on the temperature, density, and ionization state of a
portion of the ISM, different heating and cooling mechanisms determine
the temperature of the gas.
The first mechanism proposed for heating the ISM was heating by low-energy cosmic rays. Cosmic rays are an efficient heating source able to penetrate in the depths of molecular clouds. Cosmic rays transfer energy to gas through both ionization and excitation and to free electrons through Coulomb interactions. Low-energy cosmic rays (a few MeV) are more important because they are far more numerous than high-energy cosmic rays.
Photoelectric heating by grains
The ultraviolet radiation emitted by hot stars can remove electrons from dust grains. The photon is absorbed by the dust grain, and some of its energy is used to overcome the potential energy barrier and remove the electron from the grain. This potential barrier is due to the binding energy of the electron (the work function) and the charge of the grain. The remainder of the photon's energy gives the ejected electron kinetic energy, which heats the gas through collisions with other particles. A typical size distribution of dust grains is n(r) ∝ r^−3.5, where r is the radius of the dust particle. Assuming this, the projected grain surface area distribution is πr²n(r) ∝ r^−1.5. This indicates that the smallest dust grains dominate this method of heating[7].
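The claim that the smallest grains dominate can be checked by integrating the projected-area distribution r²n(r) ∝ r^−1.5 analytically; the grain size limits below are assumed (MRN-like values), not given in the text:

```python
def area_integral(a, b):
    """Integral of r^-1.5 dr from a to b (projected grain area per size interval)."""
    return 2.0 * (a**-0.5 - b**-0.5)

r_min, r_max = 0.005, 0.25                  # assumed grain radii, microns
total = area_integral(r_min, r_max)
small = area_integral(r_min, 10 * r_min)    # smallest decade of sizes
frac = small / total
print(f"fraction of grain surface area in the smallest decade: {frac:.0%}")
```

Under these assumptions roughly 80% of the grain surface area, and hence of the photoelectric heating, comes from the smallest decade of grain sizes.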
Photoionization
When an electron is freed from an atom (typically by absorption of a UV photon), it carries away kinetic energy of order E_photon − E_ionization. This heating mechanism dominates in H II regions, but is negligible in the diffuse ISM due to the relative lack of neutral carbon atoms.
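A toy calculation of the photoelectron energy E_photon − E_ionization; it also illustrates why carbon matters in the diffuse ISM, where photons above 13.6 eV have already been absorbed by hydrogen:

```python
E_ion = {"H": 13.598, "C": 11.260}   # ionization energies, eV

def photoelectron_energy(E_photon, species):
    """Kinetic energy carried away by the freed electron, or 0 if no ionization."""
    return max(E_photon - E_ion[species], 0.0)

# A 12 eV photon cannot ionize hydrogen, but frees a ~0.74 eV electron from carbon.
print(photoelectron_energy(12.0, "H"), photoelectron_energy(12.0, "C"))
```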
X-rays remove electrons from atoms and ions,
and those photoelectrons can provoke secondary ionizations. As the
intensity is often low, this heating is only efficient in warm, less
dense atomic medium (as the column density is small). For example, in
molecular clouds only hard x-rays can penetrate and x-ray heating can be ignored. This is assuming the region is not near an x-ray source such as a supernova remnant.
Chemical heating
Molecular hydrogen (H2) can be formed on the surface of dust grains when two H
atoms (which can travel over the grain) meet. This process yields
4.48 eV of energy distributed over the rotational and vibrational modes,
kinetic energy of the H2 molecule, as well as heating the
dust grain. This kinetic energy, as well as the energy transferred from
de-excitation of the hydrogen molecule through collisions, heats the
gas.
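To put the 4.48 eV released per H₂ formation in context, dividing by Boltzmann's constant expresses it as an equivalent temperature:

```python
eV = 1.602176634e-19   # J per electronvolt
k_B = 1.380649e-23     # Boltzmann constant, J/K
E_H2 = 4.48            # eV released per H2 molecule formed

T_equiv = E_H2 * eV / k_B
print(f"E/k_B = {T_equiv:.0f} K")
```

Roughly 5 × 10⁴ K per molecule formed, far above typical cloud temperatures, which is why the process heats the gas.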
Grain-gas heating
Collisions at high densities between gas atoms and molecules with
dust grains can transfer thermal energy. This is not important in HII
regions because UV radiation is more important. It is also not important
in diffuse ionized medium due to the low density. In the neutral
diffuse medium grains are always colder, but do not effectively cool the
gas due to the low densities.
Grain heating by thermal exchange is very important in supernova remnants where densities and temperatures are very high.
Gas heating via grain-gas collisions is dominant deep in giant molecular clouds (especially at high densities). Far infrared
radiation penetrates deeply due to the low optical depth. Dust grains
are heated via this radiation and can transfer thermal energy during
collisions with the gas. A measure of efficiency in the heating is given
by the accommodation coefficient:

α = (T2 − T) / (Td − T)

where T is the gas temperature, Td the dust temperature, and T2 the post-collision temperature of the gas atom or molecule. This coefficient was measured by Burke & Hollenbach (1983) as α = 0.35.
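Assuming the standard definition α = (T2 − T)/(Td − T), the measured α = 0.35 gives the effective post-collision temperature directly; a small sketch:

```python
def post_collision_T(T_gas, T_dust, alpha=0.35):
    """Post-collision gas particle temperature T2 from alpha = (T2 - T)/(Td - T)."""
    return T_gas + alpha * (T_dust - T_gas)

# 10 K gas colliding with 30 K grains rebounds at an effective 17 K.
print(post_collision_T(10.0, 30.0))
```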
Other heating mechanisms
A variety of macroscopic heating mechanisms are present including:
Fine structure cooling
The process of fine structure cooling is dominant in most regions of the interstellar medium, except regions of hot gas and regions deep in molecular clouds. It occurs most efficiently with abundant atoms
having fine structure levels close to the fundamental level such as:
C II and O I in the neutral medium and O II, O III, N II, N III, Ne II
and Ne III in H II regions. Collisions will excite these atoms to higher
levels, and they will eventually de-excite through photon emission,
which will carry the energy out of the region.
Cooling by permitted lines
At lower temperatures, more levels than fine structure levels can be
populated via collisions. For example, collisional excitation of the n = 2 level of hydrogen will release a Ly-α photon upon de-excitation. In molecular clouds, excitation of rotational lines of CO is important. Once a molecule is excited, it eventually returns to a lower energy state, emitting a photon which can leave the region, cooling the cloud.
Radiowave propagation
Atmospheric attenuation in dB/km as a function of frequency over the EHF band. Peaks in absorption at specific frequencies are a problem, due to atmospheric constituents such as water vapor (H₂O) and carbon dioxide (CO₂).
Radio waves from ≈10 kHz (very low frequency) to ≈300 GHz (extremely high frequency)
propagate differently in interstellar space than on the Earth's
surface. There are many sources of interference and signal distortion
that do not exist on Earth. A great deal of radio astronomy depends on compensating for the different propagation effects to uncover the desired signal.
History of knowledge of interstellar space
Herbig–Haro 110 object ejects gas through interstellar space.
The nature of the interstellar medium has received the attention of astronomers and scientists over the centuries, and understanding of the ISM has developed.
However, they first had to acknowledge the basic concept of
"interstellar" space. The term appears to have been first used in print
by Bacon (1626,
§ 354–5): "The Interstellar Skie.. hath .. so much Affinity with the
Starre, that there is a Rotation of that, as well as of the Starre."
Later, natural philosopher Robert Boyle (1674) discussed "The inter-stellar part of heaven, which several of the modern Epicureans would have to be empty."
Before modern electromagnetic theory, early physicists postulated that an invisible luminiferous aether existed as a medium to carry light waves. It was assumed that this aether extended into interstellar space, as Patterson (1862) wrote, "this efflux occasions a thrill, or vibratory motion, in the ether which fills the interstellar spaces."
The advent of deep photographic imaging allowed Edward Barnard to produce the first images of dark nebulae
silhouetted against the background star field of the galaxy, while the
first actual detection of cold diffuse matter in interstellar space was
made by Johannes Hartmann in 1904 through the use of absorption line spectroscopy. In his historic study of the spectrum and orbit of Delta Orionis,
Hartmann observed the light coming from this star and realized that
some of this light was being absorbed before it reached the Earth.
Hartmann reported that absorption from the "K" line of calcium
appeared "extraordinarily weak, but almost perfectly sharp" and also
reported the "quite surprising result that the calcium line at
393.4 nanometres does not share in the periodic displacements of the
lines caused by the orbital motion of the spectroscopic binary
star". The stationary nature of the line led Hartmann to conclude that
the gas responsible for the absorption was not present in the atmosphere
of Delta Orionis, but was instead located within an isolated cloud of
matter residing somewhere along the line-of-sight to this star. This
discovery launched the study of the interstellar medium.
In a series of investigations, Viktor Ambartsumian introduced the now commonly accepted notion that interstellar matter occurs in the form of clouds.
Following Hartmann's identification of interstellar calcium absorption, interstellar sodium was detected by Heger (1919)
through the observation of stationary absorption from the atom's "D"
lines at 589.0 and 589.6 nanometres towards Delta Orionis and Beta Scorpii.
Subsequent observations of the "H" and "K" lines of calcium by Beals (1936) revealed double and asymmetric profiles in the spectra of Epsilon and Zeta Orionis. These were the first steps in the study of the very complex interstellar sightline towards Orion.
Asymmetric absorption line profiles are the result of the superposition
of multiple absorption lines, each corresponding to the same atomic
transition (for example the "K" line of calcium), but occurring in
interstellar clouds with different radial velocities.
Because each cloud has a different velocity (either towards or away
from the observer/Earth) the absorption lines occurring within each
cloud are either blueshifted or redshifted (respectively) from the lines' rest wavelength through the Doppler effect.
These observations confirming that matter is not distributed
homogeneously were the first evidence of multiple discrete clouds within
the ISM.
This light-year-long knot of interstellar gas and dust resembles a caterpillar.
The growing evidence for interstellar material led Pickering (1912)
to comment that "While the interstellar absorbing medium may be simply
the ether, yet the character of its selective absorption, as indicated
by Kapteyn, is characteristic of a gas, and free gaseous molecules are certainly there, since they are probably constantly being expelled by the Sun and stars."
The same year Victor Hess's discovery of cosmic rays,
highly energetic charged particles that rain onto the Earth from space,
led others to speculate whether they also pervaded interstellar space.
The following year the Norwegian explorer and physicist Kristian Birkeland
wrote: "It seems to be a natural consequence of our points of view to
assume that the whole of space is filled with electrons and flying
electric ions
of all kinds. We have assumed that each stellar system in evolutions
throws off electric corpuscles into space. It does not seem unreasonable
therefore to think that the greater part of the material masses in the
universe is found, not in the solar systems or nebulae, but in 'empty' space" (Birkeland 1913).
Thorndike (1930)
noted that "it could scarcely have been believed that the enormous gaps
between the stars are completely void. Terrestrial aurorae are not
improbably excited by charged particles emitted by the Sun. If the millions of other stars are also ejecting ions, as is undoubtedly true, no absolute vacuum can exist within the galaxy."
Accelerating expansion of the universe
The accelerating expansion of the universe is the observation that the universe appears to be expanding at an increasing rate, so that the velocity at which a distant galaxy recedes from the observer is continuously increasing with time.
The accelerated expansion was discovered in 1998 by two independent projects, the Supernova Cosmology Project and the High-Z Supernova Search Team, which both used distant Type Ia supernovae to measure the acceleration. The idea was that these Type Ia supernovae all have almost the same intrinsic brightness (a standard candle).
Since objects that are further away appear dimmer, we can use the
observed brightness of these supernovae to measure the distance to them.
The distance can then be compared to the supernovae's cosmological redshift, which measures how fast the supernovae are receding from us.
The unexpected result was that the universe seems to be expanding at an
accelerating rate. Cosmologists at the time expected that the expansion
would be decelerating due to the gravitational attraction of the matter
in the universe. Three members of these two groups have subsequently
been awarded Nobel Prizes for their discovery. Confirmatory evidence has been found in baryon acoustic oscillations, and in analyses of the clustering of galaxies.
The expansion of the universe is thought to have been accelerating since the universe entered its dark-energy-dominated era roughly 5 billion years ago.
Within the framework of general relativity, an accelerating expansion can be accounted for by a positive value of the cosmological constant Λ, equivalent to the presence of a positive vacuum energy, dubbed "dark energy". While there are alternative possible explanations, the description assuming dark energy (positive Λ) is used in the current standard model of cosmology, which also includes cold dark matter (CDM) and is known as the Lambda-CDM model.
Background
In the decades since the detection of the cosmic microwave background (CMB) in 1965, the Big Bang model has become the most accepted model explaining the evolution of our universe. The Friedmann equation defines how the energy in the universe drives its expansion:

H² = H₀² (Ω_k a⁻² + Ω_M a⁻³ + Ω_r a⁻⁴ + Ω_Λ)

where H is the Hubble parameter, a is the scale factor, and the four currently hypothesized contributors to the energy density of the universe are curvature (Ω_k), matter (Ω_M), radiation (Ω_r) and dark energy (Ω_Λ).
Each of the components decreases with the expansion of the universe
(increasing scale factor), except perhaps the dark energy term. It is
the values of these cosmological parameters which physicists use to
determine the acceleration of the universe.
Physicists at one time were so assured of the deceleration of the universe's expansion that they introduced a so-called deceleration parameter q₀. Current observations point towards this deceleration parameter being negative.
Relation to inflation
According to the theory of cosmic inflation,
the very early universe underwent a period of very rapid,
quasi-exponential expansion. While the time-scale for this period of
expansion was far shorter than that of the current expansion, this was a
period of accelerated expansion with some similarities to the current
epoch.
Evidence for acceleration
To learn about the rate of expansion of the universe we look at the magnitude-redshift relationship of astronomical objects using standard candles, or their distance-redshift relationship using standard rulers. We can also look at the growth of large-scale structure,
and find that the observed values of the cosmological parameters are
best described by models which include an accelerating expansion.
Supernova observation
Artist's impression of a Type Ia supernova, as revealed by spectro-polarimetry observations
The first evidence for acceleration came from the observation of Type Ia supernovae, which are exploding white dwarfs that have exceeded their stability limit. Because they all have similar masses, their intrinsic luminosity
is standardizable. Repeated imaging of selected areas of the sky is
used to discover the supernovae, then follow-up observations give their
peak brightness, which is converted into a quantity known as luminosity
distance. Spectral lines of their light can be used to determine their redshift.
For supernovae at redshift less than around 0.1, or light travel
time less than 10 percent of the age of the universe, this gives a
nearly linear distance–redshift relation due to Hubble's law.
At larger distances, since the expansion rate of the universe has
changed over time, the distance-redshift relation deviates from
linearity, and this deviation depends on how the expansion rate has
changed over time. The full calculation requires computer integration of
the Friedmann equation, but a simple derivation can be given as
follows: the redshift z directly gives the cosmic scale factor at the time the supernova exploded.
So a supernova with a measured redshift z = 0.5 implies the universe was 1/(1 + 0.5) = 2/3
of its present size when the supernova exploded. In an accelerating
universe, the universe was expanding more slowly in the past than it is
today, which means it took a longer time to expand from two thirds its
present size to its present size, compared to a non-accelerating
universe with the same present-day value of the Hubble constant. This
results in a larger light-travel time, larger distance and fainter
supernovae, which corresponds to the actual observations. Adam Riesset al. found that "the distances of the high-redshift SNe Ia were, on average, 10% to 15% farther than expected in a low mass density ΩM = 0.2 universe without a cosmological constant". This means that the measured high-redshift distances were too large, compared to nearby ones, for a decelerating universe.
Baryon acoustic oscillations
In the early universe before recombination and decoupling took place, photons and matter existed in a primordial plasma.
Points of higher density in the photon-baryon plasma would contract,
being compressed by gravity until the pressure became too large and they
expanded again. This contraction and expansion created vibrations in the plasma analogous to sound waves. Since dark matter interacts only gravitationally,
it stayed at the centre of the sound wave, the origin of the original
overdensity. When decoupling occurred, approximately 380,000 years after
the Big Bang, photons separated from matter and were able to stream freely through the universe, creating the cosmic microwave background as we know it. This left shells of baryonic matter
at a fixed radius from the overdensities of dark matter, a distance
known as the sound horizon. As time passed and the universe expanded, it
was at these anisotropies of matter density where galaxies started to
form. So by looking at the distances at which galaxies at different
redshifts tend to cluster, it is possible to determine a standard angular diameter distance and use that to compare to the distances predicted by different cosmological models.
Peaks have been found in the correlation function (the probability that two galaxies will be a certain distance apart) at 100 h⁻¹ Mpc,
indicating that this is the size of the sound horizon today, and by
comparing this to the sound horizon at the time of decoupling (using the
CMB), we can confirm that the expansion of the universe is
accelerating.
Clusters of galaxies
Measuring the mass functions of galaxy clusters, which describe the number density of the clusters above a threshold mass, also provides evidence for dark energy. By comparing these mass functions at high and low redshifts to those predicted by different cosmological models, values for w and Ωm are obtained which confirm a low matter density and a nonzero amount of dark energy.
Age of the universe
Given a cosmological model with certain values of the cosmological
density parameters, it is possible to integrate the Friedmann equations
and derive the age of the universe.
By comparing this to actual measured values of the cosmological
parameters, we can confirm the validity of a model which is accelerating
now, and had a slower expansion in the past.
Gravitational waves as standard sirens
Recent discoveries of gravitational waves through LIGO and VIRGO not only confirmed Einstein's predictions but also opened a new window
into the universe. These gravitational waves can work as sort of standard sirens
to measure the expansion rate of the universe. Abbott et al. (2017)
measured the Hubble constant value to be approximately 70 kilometres per
second per megaparsec.
The amplitude of the strain h depends on the masses of the objects causing the waves, their distance from the observation point, and the gravitational-wave detection frequencies. The associated distance measures depend on cosmological parameters such as the Hubble constant for nearby objects, and on other cosmological parameters such as the dark energy density and matter density for distant sources.
Explanatory models
The expansion of the universe accelerating; time flows from bottom to top.
Dark energy
The most important property of dark energy is that it has negative pressure which is distributed relatively homogeneously in space. Its pressure p and energy density ρ are related through the equation-of-state parameter w:

p = w ρ c²

where c is the speed of light. Different theories of dark energy suggest different values of w, with w < −1/3 for cosmic acceleration (this leads to a positive value of ä in the acceleration equation).
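The sign of the acceleration follows from the acceleration equation ä/a ∝ −Σᵢ(ρᵢ + 3pᵢ/c²): with pressureless matter plus dark energy of equation of state p = wρc², the present-day universe accelerates when Ω_m + Ω_de(1 + 3w) < 0. A small sketch with assumed density parameters:

```python
def accelerates(w, Omega_de=0.7, Omega_m=0.3):
    """True if a-ddot > 0 today: a_ddot/a is proportional to
    -(Omega_m + Omega_de * (1 + 3*w)) / 2, radiation neglected."""
    return -(Omega_m + Omega_de * (1 + 3 * w)) > 0

print(accelerates(-1.0))   # cosmological constant: accelerating
print(accelerates(-0.2))   # w > -1/3: decelerating
```

Note that with Ω_m = 0.3 present, acceleration of the total requires w below roughly −0.48; w < −1/3 is the condition for the dark-energy component alone to have ρ + 3p/c² < 0.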
The simplest explanation for dark energy is that it is a cosmological constant or vacuum energy; in this case w = −1. This leads to the Lambda-CDM model,
which has generally been known as the Standard Model of Cosmology from
2003 through the present, since it is the simplest model in good
agreement with a variety of recent observations. Riess et al. found that their results from supernovae observations favoured expanding models with positive cosmological constant (Ωλ > 0) and a current acceleration of the expansion (q0 < 0).
Phantom energy
Current observations allow the possibility of a cosmological model containing a dark energy component with equation of state w < −1.
This phantom energy density would become infinite in finite time,
causing such a huge gravitational repulsion that the universe would lose
all structure and end in a Big Rip. For example, for w = −3/2 and H₀ = 70 km·s⁻¹·Mpc⁻¹, the time remaining before the universe ends in this Big Rip is 22 billion years.
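The 22-billion-year figure can be reproduced with the approximate Big Rip formula of Caldwell et al. (2003), t_rip − t₀ ≈ (2/3)|1 + w|⁻¹ H₀⁻¹ (1 − Ω_m)^(−1/2), here assuming Ω_m = 0.3:

```python
import math

H0 = 70.0                            # Hubble constant, km/s/Mpc
Mpc_km = 3.0857e19                   # kilometres per megaparsec
H0_inv_Gyr = Mpc_km / H0 / 3.156e16  # Hubble time 1/H0 in Gyr (3.156e16 s/Gyr)

w, Omega_m = -1.5, 0.3               # phantom equation of state; assumed Omega_m
t_rip = (2 / 3) / abs(1 + w) / math.sqrt(1 - Omega_m) * H0_inv_Gyr
print(f"time to Big Rip: {t_rip:.0f} Gyr")
```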
Alternative theories
There are many alternative explanations for the accelerating universe. One example is quintessence, a proposed form of dark energy with a non-constant equation of state, whose density decreases with time. Dark fluid
is an alternative explanation for accelerating expansion which attempts
to unite dark matter and dark energy into a single framework. Alternatively, some authors have argued that the universe expansion acceleration could be due to a repulsive gravitational interaction of antimatter
or a deviation of the gravitational laws from general relativity. The
measurement of the speed of gravity with the gravitational wave event GW170817 ruled out many modified gravity theories as alternative explanation to dark energy.
Another type of model, the backreaction conjecture, was proposed by cosmologist Syksy Räsänen: the rate of expansion is not homogeneous, but we are in a region where
expansion is faster than the background. Inhomogeneities in the early
universe cause the formation of walls and bubbles, where the inside of a
bubble has less matter than on average. According to general relativity, space is less curved inside the bubbles than on the walls, and thus appears to
have more volume and a higher expansion rate. In the denser regions, the
expansion is retarded by a higher gravitational attraction. Therefore,
the inward collapse of the denser regions looks the same as an
accelerating expansion of the bubbles, leading us to conclude that the
universe is expanding at an accelerating rate.
The benefit is that it does not require any new physics such as dark
energy. Räsänen does not consider the model likely, but without any
falsification, it must remain a possibility. It would require rather
large density fluctuations (20%) to work.
A final possibility is that dark energy is an illusion caused by
some bias in measurements. For example, if we are located in an
emptier-than-average region of space, the observed cosmic expansion rate
could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle
to show how space might appear to be expanding more rapidly in the
voids surrounding our local cluster. While weak, such effects considered
cumulatively over billions of years could become significant, creating
the illusion of cosmic acceleration, and making it appear as if we live
in a Hubble bubble.
Yet other possibilities are that the accelerated expansion of the
universe is an illusion caused by our motion relative to the rest
of the universe, or that the supernova sample size used was not large enough.
Theories for the consequences to the universe
As the universe expands, the density of radiation and of ordinary and dark matter declines more quickly than the density of dark energy (see equation of state)
and, eventually, dark energy dominates. Specifically, when the scale of
the universe doubles, the density of matter is reduced by a factor of
8, but the density of dark energy is nearly unchanged (it is exactly
constant if the dark energy is a cosmological constant).
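The scalings described here can be stated compactly: matter density falls as a^-3, radiation as a^-4 (one extra factor from the redshifting of each photon's energy), and a cosmological constant not at all. A minimal sketch:

```python
# How component densities scale with the scale factor a:
# matter ∝ a^-3, radiation ∝ a^-4, cosmological constant ∝ a^0.
def density(rho_0, a, n):
    """Density after the universe scales by factor a, for rho ∝ a^-n."""
    return rho_0 * a ** (-n)

a = 2.0  # the scale of the universe doubles
matter = density(1.0, a, 3)     # falls to 1/8, as stated in the text
radiation = density(1.0, a, 4)  # falls to 1/16 (extra redshift factor)
lam = density(1.0, a, 0)        # exactly unchanged
print(matter, radiation, lam)   # 0.125 0.0625 1.0
```

Because the dark-energy term alone refuses to dilute, it inevitably dominates at late times no matter how small it starts out.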
In models where dark energy is a cosmological constant, the
universe will expand exponentially with time from now on, coming closer
and closer to a de Sitter spacetime.
This will eventually lead to all evidence for the Big Bang
disappearing, as the cosmic microwave background is redshifted to lower
intensities and longer wavelengths. Eventually its frequency will be low
enough that it will be absorbed by the interstellar medium,
and so be screened from any observer within the galaxy. This will occur
when the universe is less than 50 times its current age, leading to the
end of cosmology as we know it as the distant universe turns dark.
A constantly expanding universe with non-zero cosmological
constant has mass density decreasing over time, to an undetermined point
when zero matter density is reached. All matter would ionize and disintegrate into its constituent particles (electrons, protons and neutrons), with objects dissipating away.
The expansion of the universe is the increase of the distance between two distant parts of the universe with time. It is an intrinsic expansion whereby the scale of space itself changes.
The universe does not expand "into" anything and does not require
space to exist "outside" it. Technically, neither space nor objects in
space move. Instead it is the metric governing the size and geometry of spacetime itself that changes in scale. Although light and objects within spacetime cannot travel faster than the speed of light, this limitation does not restrict the metric itself. To an observer it appears that space is expanding and all but the nearest galaxies are receding into the distance.
During the inflationary epoch, about 10−32 seconds after the Big Bang, the universe suddenly expanded, and its volume increased by a factor of at least 1078 (an expansion of distance by a factor of at least 1026 in each of the three dimensions), equivalent to expanding an object 1 nanometer (10−9 m, about half the width of a molecule of DNA) in length to one approximately 10.6 light years (about 1017 m
or 62 trillion miles) long. A much slower and gradual expansion of
space continued after this, until at around 9.8 billion years after the
Big Bang (4 billion years ago) it began to gradually expand more quickly, and is still doing so today.
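The inflation factors quoted above are mutually consistent, as a few lines of arithmetic confirm:

```python
# Checking the quoted inflation numbers: a linear stretch of at least
# 1e26 per dimension gives a volume factor (1e26)^3 = 1e78, and takes
# 1 nanometre to about 10.6 light years.
linear = 1e26                        # stretch factor per dimension
volume = linear ** 3                 # 1e78, the quoted volume factor
length_m = 1e-9 * linear             # 1 nm stretched: 1e17 m
LIGHT_YEAR_M = 9.4607e15             # metres in one light year
length_ly = length_m / LIGHT_YEAR_M  # ≈ 10.6 light years
print(volume, length_ly)
```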
The metric expansion of space is of a kind completely different from the expansions and explosions seen in daily life. It also seems to be a property of the universe as a whole rather than a phenomenon that applies just to one part of the universe or can be observed from "outside" it.
Metric expansion is a key feature of Big Bang cosmology, is modeled mathematically with the Friedmann-Lemaître-Robertson-Walker metric and is a generic property of the universe we inhabit. However, the model is valid only on large scales (roughly the scale of galaxy clusters and above), because gravitational attraction
binds matter together strongly enough on smaller scales that metric expansion cannot be observed there at this time. As such, the only galaxies receding from one another as a result of metric expansion are those separated by cosmologically relevant scales, larger than the length scales over which gravitational collapse has been possible in the age of the universe given the matter density and average expansion rate.
Physicists have postulated the existence of dark energy, appearing as a cosmological constant
in the simplest gravitational models as a way to explain the
acceleration. According to the simplest extrapolation of the
currently-favored cosmological model, the Lambda-CDM model, this acceleration becomes more dominant into the future. In June 2016, NASA and ESA scientists reported that the universe was found to be expanding 5% to 9% faster than thought earlier, based on studies using the Hubble Space Telescope.
While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity,
which allows the separation between two distant objects to increase
faster than the speed of light, although the definition of "separation"
is different from that used in an inertial frame. This can be seen when
observing distant galaxies more than the Hubble radius away from us (approximately 4.5 gigaparsecs or 14.7 billion light-years); these galaxies have a recession speed that is faster than the speed of light. Light that is emitted today from galaxies beyond the cosmological event horizon,
about 5 gigaparsecs or 16 billion light-years, will never reach us,
although we can still see the light that these galaxies emitted in the
past. Because of the high rate of expansion, it is also possible for a
distance between two objects to be greater than the value calculated by
multiplying the speed of light by the age of the universe. These details
are a frequent source of confusion among amateurs and even professional
physicists.
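The Hubble-radius figure quoted above follows directly from Hubble's law, v = H0 d; a sketch, assuming H0 ≈ 67.15 (km/s)/Mpc (the value cited elsewhere in this article):

```python
# Hubble's law v = H0 * d: beyond the Hubble radius c/H0 the recession
# speed exceeds c. H0 = 67.15 (km/s)/Mpc is an assumed value.
C_KM_S = 299_792.458                 # speed of light, km/s
H0 = 67.15                           # (km/s)/Mpc

hubble_radius_mpc = C_KM_S / H0      # ≈ 4.5 Gpc
hubble_radius_gly = hubble_radius_mpc * 3.2616e-3  # Mpc -> billion light years

def recession_speed(d_mpc):
    """Recession speed in km/s of a galaxy at distance d_mpc megaparsecs."""
    return H0 * d_mpc

print(f"{hubble_radius_mpc / 1000:.1f} Gpc, {hubble_radius_gly:.1f} Gly")
print(recession_speed(5000) > C_KM_S)  # a galaxy at 5 Gpc recedes faster than light
```

This superluminal recession is a property of the changing metric, not of motion through space, which is why it does not conflict with special relativity.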
Due to the non-intuitive nature of the subject and what has been
described by some as "careless" choices of wording, certain descriptions
of the metric expansion of space and the misconceptions to which such
descriptions can lead are an ongoing subject of discussion within education and communication of scientific concepts.
Cosmic inflation
In 1929, Edwin Hubble discovered that light from remote galaxies was redshifted;
the more remote a galaxy was, the more shifted the light
coming from it. This observation was quickly interpreted as galaxies receding from earth. If earth
is not in some special, privileged, central position in the universe,
then it would mean all galaxies are moving apart, and the further away,
the faster they are moving away. It is now understood that the universe
is expanding, carrying the galaxies with it, and causing this
observation. Many other observations agree, and also lead to the same
conclusion. However, for many years it was not clear why or how the
universe might be expanding, or what it might signify.
Based on a huge amount of experimental observation and
theoretical work, it is now believed that the reason for the observation
is that space itself is expanding, and that it expanded very rapidly within the first fraction of a second after the Big Bang. This kind of expansion is known as the "metric expansion". In mathematics and physics, a "metric" means a measure of distance, and the term implies that the sense of distance within the universe is itself changing, although at this time it is far too small an effect to see on less than an intergalactic scale.
The modern explanation for the metric expansion of space was proposed by physicist Alan Guth in 1979, while investigating the problem of why no magnetic monopoles are seen today. Guth found in his investigation that if the universe contained a field that has a positive-energy false vacuum state, then according to general relativity it would generate an exponential expansion of space.
It was very quickly realized that such an expansion would resolve many
other long-standing problems. These problems arise from the observation
that to look like it does today, the universe would have to have started
from very finely tuned,
or "special" initial conditions at the Big Bang. Inflation theory
largely resolves these problems as well, thus making a universe like
ours much more likely in the context of Big Bang theory.
No field responsible for cosmic inflation has been discovered. However, such a field, if found in the future, would be scalar. The first similar scalar field proven to exist was discovered only
in 2012-2013 and is still being researched. So it is not seen as
problematic that a field responsible for cosmic inflation and the metric
expansion of space has not yet been discovered.
The proposed field and its quanta (the subatomic particles related to it) have been named inflaton.
If this field did not exist, scientists would have to propose a
different explanation for all the observations that strongly suggest a
metric expansion of space has occurred, and is still occurring much more
slowly today.
Overview of metrics and comoving coordinates
To understand the metric expansion of the universe, it is helpful to
discuss briefly what a metric is, and how metric expansion works.
A metric defines the concept of distance, by stating in mathematical terms how distances between two nearby points in space are measured, in terms of the coordinate system. Coordinate systems locate points in a space (of whatever number of dimensions) by assigning unique positions on a grid, known as coordinates, to each point. GPS, latitude and longitude, and x-y graphs are common examples of coordinates. A metric is a formula which describes how a number known as "distance" is to be measured between two points.
It may seem obvious that distance is measured by a straight line, but in many cases it is not. For example, long-haul aircraft travel along a curve known as a "great circle"
and not a straight line, because that is a better metric for air
travel. (A straight line would go through the earth.) Another example is
planning a car journey, where one might want the shortest journey in
terms of travel time; in that case a straight line is a poor choice of
metric because the shortest distance by road is not normally a straight
line, and even the path nearest to a straight line will not necessarily
be the quickest. A final example is the internet,
where even for nearby towns, the quickest route for data can be via
major connections that go across the country and back again. In this
case the metric used will be the shortest time that data takes to travel
between two points on the network.
In cosmology, we cannot use a ruler to measure metric expansion,
because our ruler will also be expanding (extremely slowly). Also any
objects on or near earth that we might measure are being held together
or pushed apart by several forces which are far larger in their effects.
So even if we could measure the tiny expansion that is still happening,
we would not notice the change on a small scale or in everyday life. On
a large intergalactic scale, we can use other tests of distance and
these do show that space is expanding, even if a ruler on earth could not measure it.
Example: "Great Circle" metric for Earth's surface
For
example, consider the measurement of distance between two places on the
surface of the Earth. This is a simple, familiar example of spherical geometry.
Because the surface of the Earth is two-dimensional, points on the
surface of the Earth can be specified by two coordinates — for example,
the latitude and longitude. Specification of a metric requires that one
first specify the coordinates used. In our simple example of the surface
of the Earth, we could choose any kind of coordinate system we wish,
for example latitude and longitude, or X-Y-Z Cartesian coordinates.
Once we have chosen a specific coordinate system, the numerical values
of the coordinates of any two points are uniquely determined, and based
upon the properties of the space being discussed, the appropriate metric
is mathematically established too. On the curved surface of the Earth,
we can see this effect in long-haul airline flights where the distance between two points is measured based upon a great circle,
rather than the straight line one might plot on a two-dimensional map
of the Earth's surface. In general, such shortest-distance paths are
called "geodesics". In Euclidean geometry, the geodesic is a straight line, while in non-Euclidean geometry
such as on the Earth's surface, this is not the case. Indeed, even the
shortest-distance great circle path is always longer than the Euclidean
straight line path which passes through the interior of the Earth. The
difference between the straight line path and the shortest-distance
great circle path is due to the curvature
of the Earth's surface. While there is always an effect due to this
curvature, at short distances the effect is small enough to be
unnoticeable.
On plane maps, great circles of the Earth are mostly not shown as straight lines. Indeed, there is a seldom-used map projection, namely the gnomonic projection,
where all great circles are shown as straight lines, but in this
projection, the distance scale varies greatly in different areas.
There is no map projection in which the distance between any two points
on Earth, measured along the great circle geodesics, is directly
proportional to their distance on the map; such accuracy is possible
only with a globe.
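The great-circle metric discussed in this section can be written down explicitly. A minimal sketch using the standard haversine formula, assuming a spherical Earth of radius 6371 km (the city pair is illustrative):

```python
# Great-circle ("haversine") metric on the Earth's surface: distance
# from latitude/longitude coordinates, assuming a spherical Earth.
import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Shortest surface distance (km) between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# London to New York: roughly 5570 km along the geodesic, well short of
# what the "straight" line on a flat map projection would suggest.
print(round(great_circle_km(51.5074, -0.1278, 40.7128, -74.0060)))
```

The same idea carries over to cosmology: the metric, not the coordinate grid, decides what "distance" means.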
Metric tensors
In differential geometry, the backbone mathematics for general relativity, a metric tensor
can be defined which precisely characterizes the space being described
by explaining the way distances should be measured in every possible
direction. General relativity necessarily invokes a metric in four
dimensions (one of time, three of space) because, in general, different
reference frames will experience different intervals of time and space depending on their relative motion. This means that the metric tensor in general relativity relates precisely how two events in spacetime are separated. A metric expansion occurs when the metric tensor changes with time
(and, specifically, whenever the spatial part of the metric gets larger
as time goes forward). This kind of expansion is different from all
kinds of expansions and explosions commonly seen in nature in no small part because times and distances
are not the same in all reference frames, but are instead subject to
change. A useful visualization is to think not of objects in a fixed
"space" moving apart into "emptiness", but of space itself growing
between objects without any acceleration of the objects themselves. The space between objects shrinks or grows as the various geodesics converge or diverge.
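Concretely, in the Friedmann-Lemaître-Robertson-Walker (FLRW) form mentioned earlier in the article, this time-dependence is carried entirely by a scale factor a(t) multiplying the spatial part of the metric. A sketch (here k is the curvature sign, −1, 0 or +1):

```latex
ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1 - k r^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right]
```

Metric expansion is then simply the statement that a(t) increases with time: the coordinates (r, θ, φ) of comoving objects never change, yet the distances between them grow.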
Because this expansion is caused by relative changes in the
distance-defining metric, this expansion (and the resultant movement
apart of objects) is not restricted by the speed-of-light upper bound of special relativity.
Two reference frames that are globally separated can be moving apart
faster than light without violating special relativity, although
whenever two reference frames diverge from each other faster than the
speed of light, there will be observable effects associated with such
situations including the existence of various cosmological horizons.
Theory and observations suggest that very early in the history of the universe, there was an inflationary
phase where the metric changed very rapidly, and that the remaining
time-dependence of this metric is what we observe as the so-called Hubble expansion,
the moving apart of all gravitationally unbound objects in the
universe. The expanding universe is therefore a fundamental feature of
the universe we inhabit — a universe fundamentally different from the static universe Albert Einstein first considered when he developed his gravitational theory.
Comoving coordinates
In expanding space, proper distances are dynamical quantities which change with time. An easy way to correct for this is to use comoving coordinates
which remove this feature and allow for a characterization of different
locations in the universe without having to characterize the physics
associated with metric expansion. In comoving coordinates, the distances
between all objects are fixed and the instantaneous dynamics of matter and light are determined by the normal physics of gravity and electromagnetic radiation. Any time-evolution however must be accounted for by taking into account the Hubble law expansion in the appropriate equations in addition to any other effects that may be operating (gravity, dark energy, or curvature,
for example). Cosmological simulations that run through significant
fractions of the universe's history therefore must include such effects
in order to make applicable predictions for observational cosmology.
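A minimal sketch of the bookkeeping that comoving coordinates provide: the comoving coordinate χ of a galaxy is fixed once and for all, while all of the expansion is carried by the scale factor a(t). The values here are illustrative, not measured:

```python
# Proper distance vs comoving distance: the comoving coordinate chi of
# a galaxy is fixed, while the proper distance d(t) = a(t) * chi grows
# with the scale factor. Illustrative values only.
def proper_distance(a, chi):
    return a * chi

chi = 100.0                           # comoving distance (arbitrary units), fixed
d_then = proper_distance(0.5, chi)    # when the universe was half its present scale
d_now = proper_distance(1.0, chi)     # today (a = 1 by convention)
print(d_then, d_now)                  # 50.0 100.0 — proper distance doubled, chi did not
```

This is why simulations work in comoving coordinates: the grid never stretches, and the Hubble expansion is reinserted analytically where needed.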
Understanding the expansion of the universe
Measurement of expansion and change of rate of expansion
When an object is receding, its light gets stretched (redshifted). When the object is approaching, its light gets compressed (blueshifted).
In principle, the expansion of the universe could be measured by
taking a standard ruler and measuring the distance between two
cosmologically distant points, waiting a certain time, and then
measuring the distance again, but in practice, standard rulers are not
easy to find on cosmological scales and the timescales over which a
measurable expansion would be visible are too great to be observable
even by multiple generations of humans. The expansion of space is
measured indirectly. The theory of relativity predicts phenomena associated with the expansion, notably the redshift-versus-distance relationship known as Hubble's Law; functional forms for cosmological distance measurements that differ from what would be expected if space were not expanding; and an observable change in the matter and energy density of the universe seen at different lookback times.
The first measurement of the expansion of space occurred with the creation of the Hubble diagram. Using standard candles with known intrinsic brightness, the expansion of the universe has been measured using redshift to derive Hubble's constant: H0 = 67.15 ± 1.2 (km/s)/Mpc. For every million parsecs of distance from the observer, the recession velocity increases by about 67 kilometers per second.
The Hubble parameter is not thought to be constant through time.
There are dynamical forces acting on the particles in the universe which
affect the expansion rate. It was earlier expected that the Hubble
parameter would be decreasing as time went on due to the influence of
gravitational interactions in the universe, and thus there is an
additional observable quantity in the universe called the deceleration parameter
which cosmologists expected to be directly related to the matter
density of the universe. Surprisingly, the deceleration parameter was
measured by two different groups to be less than zero (actually,
consistent with −1) which implied that today the Hubble parameter is
converging to a constant value as time goes on. Some cosmologists have
whimsically called the effect associated with the "accelerating
universe" the "cosmic jerk". The 2011 Nobel Prize in Physics was given for the discovery of this phenomenon.
In October 2018, scientists presented a new, third way of determining the Hubble constant, essential in establishing the rate of expansion of the universe, using information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817); two earlier methods, one based on redshifts and another on the cosmic distance ladder, had given results that do not agree.
At cosmological scales the present universe is geometrically flat, which is to say that the rules of Euclidean geometry associated with Euclid's fifth postulate hold, though in the past spacetime could have been highly curved. In part to accommodate such different geometries, the expansion of the universe is inherently general relativistic; it cannot be modeled with special relativity alone: though such models exist, they are at fundamental odds with the observed interaction between matter and spacetime seen in our universe.
The images to the right show two views of spacetime diagrams that show the large-scale geometry of the universe according to the ΛCDM
cosmological model. Two of the dimensions of space are omitted, leaving
one dimension of space (the dimension that grows as the cone gets
larger) and one of time (the dimension that proceeds "up" the cone's
surface). The narrow circular end of the diagram corresponds to a cosmological time
of 700 million years after the big bang while the wide end is a
cosmological time of 18 billion years, where one can see the beginning
of the accelerating expansion
as a splaying outward of the spacetime, a feature which eventually
dominates in this model. The purple grid lines mark off cosmological
time at intervals of one billion years from the big bang. The cyan grid
lines mark off comoving distance
at intervals of one billion light years in the present era (less in the
past and more in the future). Note that the circular curling of the
surface is an artifact of the embedding with no physical significance
and is done purely to make the illustration viewable; space does not
actually curl around on itself. (A similar effect can be seen in the
tubular shape of the pseudosphere.)
The brown line on the diagram is the worldline
of the Earth (or, at earlier times, of the matter which condensed to
form the Earth). The yellow line is the worldline of the most distant
known quasar.
The red line is the path of a light beam emitted by the quasar about 13
billion years ago and reaching the Earth in the present day. The orange
line shows the present-day distance between the quasar and the Earth,
about 28 billion light years, which is, notably, a larger distance than
the age of the universe multiplied by the speed of light: ct.
According to the equivalence principle of general relativity, the rules of special relativity are locally valid in small regions of spacetime that are approximately flat. In particular, light always travels locally at the speed c;
in our diagram, this means, according to the convention of constructing
spacetime diagrams, that light beams always make an angle of 45° with
the local grid lines. It does not follow, however, that light travels a
distance ct in a time t, as the red worldline illustrates. While it always moves locally at c,
its time in transit (about 13 billion years) is not related to the
distance traveled in any simple way since the universe expands as the
light beam traverses space and time. In fact the distance traveled is
inherently ambiguous because of the changing scale of the universe. Nevertheless, we can single out two distances which appear to be
physically meaningful: the distance between the Earth and the quasar
when the light was emitted, and the distance between them in the present
era (taking a slice of the cone along the dimension that we've declared
to be the spatial dimension). The former distance is about 4 billion
light years, much smaller than ct: because the universe expanded
as the light traveled, the light had to "run against the
treadmill" and therefore went farther than the initial separation
between the Earth and the quasar. The latter distance (shown by the
orange line) is about 28 billion light years, much larger than ct.
If expansion could be instantaneously stopped today, it would take 28
billion years for light to travel between the Earth and the quasar while
if the expansion had stopped at the earlier time, it would have taken
only 4 billion years.
The light took much longer than 4 billion years to reach us
even though it was emitted from only 4 billion light years away. In
fact, the light emitted towards the Earth was actually moving away
from the Earth when it was first emitted, in the sense that the metric
distance to the Earth increased with cosmological time for the first few
billion years of its travel time, and also indicating that the
expansion of space between the Earth and the quasar at the early time
was faster than the speed of light. None of this surprising behavior
originates from a special property of metric expansion, but simply from
local principles of special relativity integrated over a curved surface.
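The three numbers in this example (roughly 13 billion years in transit, about 4 billion light years then, about 28 billion light years now) can be approximately reproduced by integrating a flat ΛCDM model. The parameters below (H0 = 70 (km/s)/Mpc, Ωm = 0.3, ΩΛ = 0.7, redshift z = 7) are illustrative assumptions, not the values used to draw the diagram:

```python
# Toy flat-LCDM integration reproducing the quasar numbers in the text.
# Assumed parameters: H0 = 70 (km/s)/Mpc, Omega_m = 0.3, Omega_L = 0.7,
# source redshift z = 7 (roughly the most distant known quasars).
import math

H0 = 70.0
OMEGA_M, OMEGA_L = 0.3, 0.7
HUBBLE_TIME_GYR = 977.8 / H0       # 1/H0 in Gyr (977.8 converts (km/s)/Mpc)
HUBBLE_DIST_GLY = HUBBLE_TIME_GYR  # c/H0 in Gly equals 1/H0 in Gyr

def E(z):  # dimensionless Hubble rate H(z)/H0
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def integrate(f, z_max, steps=100_000):
    """Trapezoid rule over [0, z_max]."""
    h = z_max / steps
    return h * (f(0) / 2 + sum(f(i * h) for i in range(1, steps)) + f(z_max) / 2)

z = 7.0
comoving_gly = HUBBLE_DIST_GLY * integrate(lambda zz: 1 / E(zz), z)
lookback_gyr = HUBBLE_TIME_GYR * integrate(lambda zz: 1 / ((1 + zz) * E(zz)), z)
emission_gly = comoving_gly / (1 + z)  # proper distance when the light set out

print(f"light travel time ≈ {lookback_gyr:.1f} Gyr")  # ≈ 13 Gyr
print(f"distance now      ≈ {comoving_gly:.0f} Gly")  # ≈ 28 Gly
print(f"distance then     ≈ {emission_gly:.1f} Gly")  # ≈ 3.5 Gly (the text quotes ~4)
```

The light-travel time lies between the "then" and "now" distances divided by c, precisely the treadmill effect described above.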
Topology of expanding space
A graphical representation of the expansion of the universe, with the inflationary epoch represented as the dramatic expansion of the metric seen on the left. This diagram can be confusing because the expansion of space looks like it is happening into an empty "nothingness". However, this is a choice made for convenience of visualization: it is not a part of the physical models which describe the expansion.
Over time, the space that makes up the universe is expanding. The words 'space' and 'universe',
sometimes used interchangeably, have distinct meanings in this context.
Here 'space' is a mathematical concept that stands for the
three-dimensional manifold
into which our respective positions are embedded while 'universe'
refers to everything that exists including the matter and energy in
space, the extra-dimensions that may be wrapped up in various strings,
and the time through which various events take place. The expansion of
space is in reference to this 3-D manifold only; that is, the
description involves no structures such as extra dimensions or an
exterior universe.
The ultimate topology of space is a posteriori
— something which in principle must be observed — as there are no
constraints that can simply be reasoned out (in other words there can
not be any a priori constraints) on how the space in which we live is connected or whether it wraps around on itself as a compact space. Though certain cosmological models such as Gödel's universe even permit bizarre worldlines that intersect with themselves, ultimately the question of whether we are in something like a "Pac-Man universe", where traveling far enough in one direction would simply bring one back to the same place, like going all the way around the surface of a balloon (or a planet like the Earth), is an observational question constrained by the universe's global geometry.
At present, observations are consistent with the universe being
infinite in extent and simply connected, though we are limited in
distinguishing between simple and more complicated proposals by cosmological horizons. The universe could be infinite in extent or it could be finite; but the evidence that leads to the inflationary model of the early universe also implies that the "total universe" is much larger than the observable universe,
and so any edges or exotic geometries or topologies would not be
directly observable, as light from the scales on which such
aspects of the universe, if they exist, would appear has not yet reached us. For all
intents and purposes, it is safe to assume that the universe is infinite
in spatial extent, without edge or strange connectedness.
Regardless of the overall shape of the universe, the question of
what the universe is expanding into is one which does not require an
answer according to the theories which describe the expansion; the way
we define space in our universe in no way requires additional exterior
space into which it can expand since an expansion of an infinite expanse
can happen without changing the infinite extent of the expanse. All
that is certain is that the manifold of space in which we live simply
has the property that the distances between objects are getting larger
as time goes on. This only implies the simple observational consequences
associated with the metric expansion explored below. No "outside" or
embedding in hyperspace is required for an expansion to occur. The
visualizations often seen of the universe growing as a bubble into
nothingness are misleading in that respect. There is no reason to
believe there is anything "outside" of the expanding universe into which
the universe expands.
Even if the overall spatial extent is infinite and thus the
universe cannot get any "larger", we still say that space is expanding
because, locally, the characteristic distance between objects is
increasing. As an infinite space grows, it remains infinite.
Density of universe during expansion
Despite being extremely dense when very young and during part of its early expansion (far denser than is usually required to form a black hole), the universe did not re-collapse into a black hole. This is because commonly used calculations for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang.
Effects of expansion on small scales
The
expansion of space is sometimes described as a force which acts to push
objects apart. Though this is an accurate description of the effect of
the cosmological constant,
it is not an accurate picture of the phenomenon of expansion in
general. For much of the universe's history the expansion has been due
mainly to inertia. The matter in the very early universe was flying apart for unknown reasons (most likely as a result of cosmic inflation) and has simply continued to do so, though at an ever-decreasing rate due to the attractive effect of gravity.
Animation of an expanding raisin bread model. As the bread doubles in width (depth and length), the distances between raisins also double.
In addition to slowing the overall expansion, gravity causes local
clumping of matter into stars and galaxies. Once objects are formed and
bound by gravity, they "drop out" of the expansion and do not
subsequently expand under the influence of the cosmological metric,
there being no force compelling them to do so.
There is no difference between the inertial expansion of the
universe and the inertial separation of nearby objects in a vacuum; the
former is simply a large-scale extrapolation of the latter.
Once objects are bound by gravity, they no longer recede from
each other. Thus, the Andromeda galaxy, which is bound to the Milky Way
galaxy, is actually falling towards us and is not expanding away. Within the Local Group,
the gravitational interactions have changed the inertial patterns of
objects such that there is no cosmological expansion taking place. Once
one goes beyond the Local Group, the inertial expansion is measurable,
though systematic gravitational effects imply that larger and larger
parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters
of galaxies. We can predict such future events by knowing the precise
way the Hubble Flow is changing as well as the masses of the objects to
which we are being gravitationally pulled. Currently, the Local Group is
being gravitationally pulled towards either the Shapley Supercluster or the "Great Attractor", with which, if dark energy were not acting, we would eventually merge and which we would no longer see expanding away from us.
A consequence of metric expansion being due to inertial motion is
that a uniform local "explosion" of matter into a vacuum can be locally
described by the FLRW geometry, the same geometry which describes the expansion of the universe as a whole and was also the basis for the simpler Milne universe which ignores the effects of gravity. In particular, general relativity predicts that light will move at the speed c with respect to the local motion of the exploding matter, a phenomenon analogous to frame dragging.
The situation changes somewhat with the introduction of dark
energy or a cosmological constant. A cosmological constant due to a vacuum energy
density has the effect of adding a repulsive force between objects
which is proportional (not inversely proportional) to distance. Unlike
inertia it actively "pulls" on objects which have clumped together under
the influence of gravity, and even on individual atoms. However, this
does not cause the objects to grow steadily or to disintegrate; unless
they are very weakly bound, they will simply settle into an equilibrium
state which is slightly (undetectably) larger than it would otherwise
have been. As the universe expands and the matter in it thins, the
gravitational attraction decreases (since it is proportional to the
density), while the cosmological repulsion increases; thus the ultimate
fate of the ΛCDM universe is a near vacuum expanding at an
ever-increasing rate under the influence of the cosmological constant.
However, the only locally visible effect of the accelerating expansion is the disappearance (by runaway redshift)
of distant galaxies; gravitationally bound objects like the Milky Way
do not expand and the Andromeda galaxy is moving fast enough towards us
that it will still merge with the Milky Way in about 3 billion years' time, and
it is also likely that the merged supergalaxy that forms will
eventually fall in and merge with the nearby Virgo Cluster.
However, galaxies lying farther away than this will recede at
ever-increasing speed and be redshifted out of our range of visibility.
While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity,
which allows the separation between two distant objects to increase
faster than the speed of light, although the definition of "distance"
here is somewhat different from that used in an inertial frame. The
definition of distance used here is the summation or integration of
local comoving distances, all done at constant local proper time. For example, galaxies that are more than the Hubble radius, approximately 4.5 gigaparsecs or 14.7 billion light-years, away from us have a recession speed that is faster than the speed of light.
Visibility of these objects depends on the exact expansion history of
the universe. Light that is emitted today from galaxies beyond the cosmological event horizon,
about 5 gigaparsecs or 16 billion light-years, will never reach us,
although we can still see the light that these galaxies emitted in the
past.
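The Hubble radius quoted above can be checked with a back-of-envelope calculation: it is the distance at which the recession speed v = H0·d formally equals c. The value of H0 below (67 km/s/Mpc, a Planck-like figure) is an illustrative assumption, not taken from this article's sources.

```python
# Back-of-envelope check of the Hubble radius c/H0, beyond which the
# recession speed v = H0 * d formally exceeds the speed of light.
# H0 = 67 km/s/Mpc is an assumed, Planck-like value.

C_KM_S = 299_792.458          # speed of light in km/s
H0 = 67.0                     # Hubble constant in km/s per Mpc (assumed)
LY_PER_MPC = 3.2616e6         # light-years per megaparsec

hubble_radius_mpc = C_KM_S / H0                      # distance where v = c
hubble_radius_gpc = hubble_radius_mpc / 1000.0
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9

print(f"Hubble radius ~ {hubble_radius_gpc:.1f} Gpc "
      f"(~ {hubble_radius_gly:.1f} billion light-years)")

# A galaxy twice as far away recedes at twice the speed of light:
d = 2 * hubble_radius_mpc
v = H0 * d
print(f"v / c at twice the Hubble radius = {v / C_KM_S:.1f}")
```

This reproduces the figures in the text: roughly 4.5 gigaparsecs, or about 14.6 billion light-years, for the assumed H0.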
Because of the high rate of expansion, it is also possible for a
distance between two objects to be greater than the value calculated by
multiplying the speed of light by the age of the universe. These details
are a frequent source of confusion among amateurs and even professional
physicists.
Due to the non-intuitive nature of the subject and what has been
described by some as "careless" choices of wording, certain descriptions
of the metric expansion of space and the misconceptions to which such
descriptions can lead are an ongoing subject of discussion in the realm
of pedagogy and communication of scientific concepts. In June 2016, NASA and ESA scientists reported that the universe was found to be expanding 5% to 9% faster than thought earlier, based on studies using the Hubble Space Telescope.
Scale factor
At
a fundamental level, the expansion of the universe is a property of
spatial measurement on the largest measurable scales of our universe.
The distances between cosmologically relevant points increase as time passes, leading to the observable effects outlined below. This feature of the
universe can be characterized by a single parameter that is called the scale factor which is a function
of time and a single value for all of space at any instant (if the
scale factor were a function of space, this would violate the cosmological principle).
By convention, the scale factor is set to be unity at the present time
and, because the universe is expanding, is smaller in the past and
larger in the future. Extrapolating back in time with certain
cosmological models will yield a moment when the scale factor was zero;
our current understanding of cosmology sets this time at 13.799 ± 0.021 billion years ago.
If the universe continues to expand forever, the scale factor will
approach infinity in the future. In principle, there is no reason that
the expansion of the universe must be monotonic
and there are models where at some time in the future the scale factor
decreases with an attendant contraction of space rather than an
expansion.
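The convention above (scale factor equal to unity now, smaller in the past) connects directly to the observed redshift of light: in standard FLRW cosmology, light emitted when the scale factor was a arrives with redshift 1 + z = 1/a. A minimal sketch of that relation, not specific to this article's sources:

```python
# Scale-factor convention: a(now) = 1, smaller in the past. Light emitted
# when the scale factor was a_emit is observed with redshift
# 1 + z = 1 / a_emit (standard FLRW relation).

def redshift_from_scale_factor(a_emit: float) -> float:
    """Redshift z of light emitted when the scale factor was a_emit."""
    if not 0 < a_emit <= 1:
        raise ValueError("expect 0 < a_emit <= 1 for light emitted in the past")
    return 1.0 / a_emit - 1.0

print(redshift_from_scale_factor(1.0))   # emitted now: z = 0
print(redshift_from_scale_factor(0.5))   # universe half its present size: z = 1
print(redshift_from_scale_factor(0.25))  # quarter of its present size: z = 3
```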
Other conceptual models of expansion
The
expansion of space is often illustrated with conceptual models which
show only the size of space at a particular time, leaving the dimension
of time implicit.
In the "ant on a rubber rope
model" one imagines an ant (idealized as pointlike) crawling at a
constant speed on a perfectly elastic rope which is constantly
stretching. If we stretch the rope in accordance with the ΛCDM scale
factor and think of the ant's speed as the speed of light, then this
analogy is numerically accurate — the ant's position over time will
match the path of the red line on the embedding diagram above.
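The ant-on-a-rope model can be simulated directly. The toy version below uses a linearly stretching rope rather than the ΛCDM scale factor the text mentions, and all the numbers (rope length, stretch rate, ant speed) are arbitrary illustration values. The classic result it demonstrates: because the integral of 1/L(t) diverges for linear stretching, the ant always reaches the far end eventually, no matter how fast the rope stretches.

```python
# Toy simulation of the ant on a rubber rope with linear stretching,
# L(t) = L0 * (1 + H*t). All parameter values are arbitrary illustrations.
# The ant's comoving fraction f obeys df/dt = v / L(t); analytically the
# ant reaches the end (f = 1) at t = (exp(L0*H/v) - 1) / H.

def time_to_reach_end(L0=3.0, H=1.0, v=1.0, dt=1e-4, t_max=1e4):
    f, t = 0.0, 0.0   # f = fraction of the rope covered (comoving position)
    while f < 1.0 and t < t_max:
        f += v / (L0 * (1.0 + H * t)) * dt   # Euler step for df/dt = v/L(t)
        t += dt
    return t if f >= 1.0 else None

t = time_to_reach_end()
print(f"ant reaches the end at t ~ {t:.1f}")
```

For these parameters the analytic answer is e³ − 1 ≈ 19.1, which the Euler integration matches closely.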
In the "rubber sheet model" one replaces the rope with a flat
two-dimensional rubber sheet which expands uniformly in all directions.
The addition of a second spatial dimension raises the possibility of
showing local perturbations of the spatial geometry by local curvature
in the sheet.
In the "balloon model" the flat sheet is replaced by a spherical
balloon which is inflated from an initial size of zero (representing the
big bang). A balloon has positive Gaussian curvature while observations
suggest that the real universe is spatially flat, but this
inconsistency can be eliminated by making the balloon very large so that
it is locally flat to within the limits of observation. This analogy is
potentially confusing since it wrongly suggests that the big bang took
place at the center of the balloon. In fact points off the surface of
the balloon have no meaning, even if they were occupied by the balloon
at an earlier time.
In the "raisin bread model" one imagines a loaf of raisin bread
expanding in the oven. The loaf (space) expands as a whole, but the
raisins (gravitationally bound objects) do not expand; they merely grow
farther away from each other.
Theoretical basis and first evidence
The expansion of the universe proceeds in all directions as determined by the Hubble constant.
However, the Hubble constant may have been different in the past and may change in the future, depending on the observed values of the density parameters (Ω). Before the discovery of dark energy, it was believed that the universe was matter-dominated, and so Ω on this graph corresponds to the ratio of the matter density to the critical density (Ω_m = ρ/ρ_c).
Hubble's law
Technically, the metric expansion of space is a feature of many solutions to the Einstein field equations of general relativity, and distance is measured using the Lorentz interval. This explains observations which indicate that galaxies that are more distant from us are receding faster than galaxies that are closer to us.
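The proportionality described above is Hubble's law. In its standard form (with v the recession velocity, D the proper distance, and H0 the present-day Hubble constant):

```latex
v = H_0 D
```

For small redshifts, z ≈ v/c, so the observed redshift of a nearby galaxy grows linearly with its distance, z ≈ H0 D / c.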
Cosmological constant and the Friedmann equations
The
first general relativistic models predicted that a universe which was
dynamical and contained ordinary gravitational matter would contract
rather than expand. Einstein's first proposal for a solution to this
problem involved adding a cosmological constant into his theories to balance out the contraction, in order to obtain a static universe solution. But in 1922 Alexander Friedmann derived a set of equations known as the Friedmann equations, showing that the universe might expand and presenting the expansion speed in this case. The observations of Edwin Hubble
in 1929 suggested that distant galaxies were all apparently moving away
from us, so that many scientists came to accept that the universe was
expanding.
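The dynamics Friedmann derived can be summarized by the first Friedmann equation in its standard form (quoted here for reference, not from this article's sources), relating the expansion rate to the matter density ρ, the spatial curvature k, and the cosmological constant Λ:

```latex
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}
```

The left-hand side is the square of the Hubble parameter, H = ȧ/a. Einstein's static universe corresponds to tuning Λ so that ȧ = ä = 0, a balance Friedmann's equations show to be unstable.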
Hubble's concerns over the rate of expansion
While
the metric expansion of space appeared to be implied by Hubble's 1929
observations, Hubble disagreed with the expanding-universe
interpretation of the data:
[...] if redshift are not primarily
due to velocity shift [...] the velocity-distance relation is linear,
the distribution of the nebula is uniform, there is no evidence of
expansion, no trace of curvature, no restriction of the time scale [...]
and we find ourselves in the presence of one of the principles of
nature that is still unknown to us today [...] whereas, if redshifts are
velocity shifts which measure the rate of expansion, the expanding
models are definitely inconsistent with the observations that have been
made [...] expanding models are a forced interpretation of the
observational results.
[If the redshifts are a Doppler
shift ...] the observations as they stand lead to the anomaly of a
closed universe, curiously small and dense, and, it may be added,
suspiciously young. On the other hand, if redshifts are not Doppler
effects, these anomalies disappear and the region observed appears as a
small, homogeneous, but insignificant portion of a universe extended
indefinitely both in space and time.
Hubble's skepticism about the universe being too small, dense, and
young turned out to be based on an observational error. Later
investigations appeared to show that Hubble had mistaken distant H II regions for Cepheid variables, and that the Cepheid variables themselves had been inappropriately lumped together with low-luminosity RR Lyrae stars, causing calibration errors that led to a value of the Hubble Constant of approximately 500 km/s/Mpc
instead of the true value of approximately 70 km/s/Mpc. The higher
value meant that an expanding universe would have an age of 2 billion
years (younger than the Age of the Earth)
and extrapolating the observed number density of galaxies to a rapidly
expanding universe implied a mass density that was too high by a similar
factor, enough to force the universe into a peculiar closed geometry which also implied an impending Big Crunch
that would occur on a similar time-scale. After fixing these errors in
the 1950s, the new lower values for the Hubble Constant accorded with
the expectations of an older universe and the density parameter was
found to be fairly close to a geometrically flat universe.
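The age problem in Hubble's era follows from a simple estimate: the "Hubble time" 1/H0 sets the rough age of an expanding universe. The sketch below reproduces the numbers in the text; note that 1/H0 is only the exact age for an empty, coasting universe, but it is a same-order estimate for realistic models.

```python
# Rough "Hubble time" estimate t ~ 1/H0, reproducing the text's numbers:
# H0 ~ 500 km/s/Mpc gives ~2 billion years, H0 ~ 70 gives ~14 billion.

KM_PER_MPC = 3.0857e19        # kilometers per megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(H0_km_s_mpc: float) -> float:
    """Approximate age of the universe in Gyr, estimated as 1/H0."""
    seconds = KM_PER_MPC / H0_km_s_mpc
    return seconds / SECONDS_PER_YEAR / 1e9

print(f"H0 = 500: ~{hubble_time_gyr(500):.1f} Gyr")  # Hubble's early value
print(f"H0 =  70: ~{hubble_time_gyr(70):.1f} Gyr")   # modern value
```

The ~2 Gyr figure for H0 = 500 is indeed younger than the age of the Earth, which is what made Hubble suspicious of the expanding-universe interpretation.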
However, recent measurements of the distances and velocities of faraway galaxies revealed a 9 percent discrepancy in the value of the Hubble constant, implying a universe that seems to be expanding too fast compared to previous measurements.
In 2001, Wendy Freedman determined that space expands at 72 kilometers per second per megaparsec (roughly 3.3 million light-years), meaning that for every 3.3 million light-years farther from Earth, galaxies are moving away about 72 kilometers a second faster.
In the summer of 2016, another measurement reported a value of 73 km/s/Mpc for the constant, contradicting the 2013 measurement by the European Planck mission, which gave a slower expansion value of 67 km/s/Mpc. The discrepancy opened new questions concerning the nature of dark energy, or of neutrinos.
Inflation as an explanation for the expansion
Until
the theoretical developments of the 1980s, no one had an explanation for
why the universe appeared to be expanding, but with the development of models of cosmic inflation, the expansion of the universe became a general feature resulting from vacuum decay.
Accordingly, the question "why is the universe expanding?" is now
answered by understanding the details of the inflation decay process
which occurred in the first 10−32 seconds of the existence of our universe. During inflation, the metric changed exponentially, causing any volume of space that was smaller than an atom to grow to around 100 million light years across in a time scale similar to the time when inflation occurred (10−32 seconds).
Measuring distance in a metric space
The
diagram depicts the expansion of the universe and the relative-observer
phenomenon. The blue galaxies have expanded further apart than the
white galaxies. Choosing an arbitrary reference point such as the
gold galaxy or the red galaxy, the pattern looks the same: the distance
to other galaxies increases in proportion to how far away they are. This
phenomenon of expansion indicates two things: there is no central point
in the universe, and the Milky Way Galaxy is not the center of the
universe. The appearance of centrality is an observer bias that is the
same no matter where an observer sits.
In expanding space, distance is a dynamic quantity which changes with
time. There are several different ways of defining distance in
cosmology, known as distance measures, but a common method used amongst modern astronomers is comoving distance.
The metric only defines the distance between nearby (so-called
"local") points. In order to define the distance between arbitrarily
distant points, one must specify both the points and a specific curve
connecting them. The distance between the points can then be found by
finding the length of this connecting curve through the three dimensions
of space. Comoving distance defines this connecting curve to be a curve
of constant cosmological time. Operationally, comoving distances cannot be directly measured by a
single Earth-bound observer. To determine the distance of distant
objects, astronomers generally measure luminosity of standard candles,
or the redshift factor 'z' of distant galaxies, and then convert these
measurements into distances based on some particular model of spacetime,
such as the Lambda-CDM model.
It is, indeed, by making such observations that it was determined that
there is no evidence for any 'slowing down' of the expansion in the
current epoch.
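The conversion from a measured redshift to a distance that the text describes can be sketched numerically. For a flat Lambda-CDM model the comoving distance is D_C = (c/H0) ∫₀ᶻ dz′/E(z′), with E(z) = √(Ω_m(1+z)³ + Ω_Λ). The parameter values below (H0 = 67 km/s/Mpc, Ω_m = 0.315) are illustrative Planck-like assumptions.

```python
# Sketch of converting a redshift z into a comoving distance under an
# assumed flat Lambda-CDM model. H0 and Omega_m are illustrative values.
# D_C = (c/H0) * integral_0^z dz'/E(z'), E(z) = sqrt(Om*(1+z)^3 + OL).

import math

C_KM_S = 299_792.458
H0 = 67.0                 # km/s/Mpc (assumed)
OMEGA_M = 0.315           # matter density parameter (assumed)
OMEGA_L = 1.0 - OMEGA_M   # flat universe: densities sum to critical

def comoving_distance_mpc(z: float, steps: int = 10_000) -> float:
    """Comoving distance in Mpc via trapezoidal integration of 1/E(z)."""
    E = lambda zz: math.sqrt(OMEGA_M * (1 + zz) ** 3 + OMEGA_L)
    dz = z / steps
    return (C_KM_S / H0) * sum(
        0.5 * (1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz for i in range(steps)
    )

for z in (0.1, 1.0, 3.0):
    print(f"z = {z}: D_C ~ {comoving_distance_mpc(z):.0f} Mpc")
```

For these assumed parameters a galaxy at z = 1 comes out near 3.4 Gpc; real analyses use the same idea with full likelihood machinery rather than this bare integral.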
Observational evidence
Theoretical cosmologists developing models of the universe
have drawn upon a small number of reasonable assumptions in their work.
These workings have led to models in which the metric expansion of
space is a likely feature of the universe. Chief among the underlying
principles that result in models including metric expansion as a feature
are:
the Cosmological Principle which demands that the universe looks the same way in all directions (isotropic) and has roughly the same smooth mixture of material (homogeneous).
the Copernican Principle which demands that no place in the universe is preferred (that is, the universe has no "starting point").
Scientists have tested carefully whether these assumptions are valid and borne out by observation. Observational cosmologists
have discovered evidence — very strong in some cases — that supports
these assumptions, and as a result, metric expansion of space is
considered by cosmologists to be an observed feature on the basis that
although we cannot see it directly, scientists have tested the
properties of the universe and observation provides compelling
confirmation. Sources of this confidence and confirmation include:
Hubble demonstrated that all galaxies and distant astronomical
objects were moving away from us, as predicted by a universal expansion. Using the redshift of their electromagnetic spectra
to determine the distance and speed of remote objects in space, he
showed that all objects are moving away from us, and that their speed is
proportional to their distance, a feature of metric expansion. Further
studies have since shown the expansion to be highly isotropic and homogeneous,
that is, it does not seem to have a special point as a "center", but
appears universal and independent of any fixed central point.
In studies of large-scale structure of the cosmos taken from redshift surveys a so-called "End of Greatness"
was discovered at the largest scales of the universe. Until these
scales were surveyed, the universe appeared "lumpy" with clumps of galaxy clusters, superclusters and filaments
which were anything but isotropic and homogeneous. This lumpiness
disappears into a smooth distribution of galaxies at the largest scales.
The isotropic distribution across the sky of distant gamma-ray bursts and supernovae is another confirmation of the Cosmological Principle.
The Copernican Principle was not truly tested on a cosmological scale until measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical systems were made. A group of astronomers at the European Southern Observatory
noticed, by measuring the temperature of a distant intergalactic cloud
in thermal equilibrium with the cosmic microwave background, that the
radiation from the Big Bang was demonstrably warmer at earlier times.
Uniform cooling of the cosmic microwave background over billions of
years is strong and direct observational evidence for metric expansion.
Taken together, these phenomena overwhelmingly support models that
rely on space expanding through a change in metric. It was not until the
discovery in the year 2000 of direct observational evidence for the
changing temperature of the cosmic microwave background that more
bizarre constructions could be ruled out. Until that time, ruling them out
rested purely on the assumption that the universe did not behave as one with the
Milky Way sitting at the middle of a fixed metric with a universal explosion of galaxies in all directions (as seen, for example, in an early model proposed by Milne). Yet even before this evidence, many rejected the Milne viewpoint on the basis of the mediocrity principle.