Thursday, May 31, 2018

Flatness problem

From Wikipedia, the free encyclopedia
The local geometry of the universe is determined by whether the relative density Ω is less than, equal to or greater than 1. From top to bottom: a spherical universe with greater than critical density (Ω > 1, k > 0); a hyperbolic, underdense universe (Ω < 1, k < 0); and a flat universe with exactly the critical density (Ω = 1, k = 0). Unlike the diagrams, the shape of the universe is four-dimensional.

The flatness problem is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time.

In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since the total density departs rapidly from the critical value over cosmic time,[1] the early universe must have had a density even closer to the critical density, departing from it by one part in 10^62 or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value.

The problem was first mentioned by Robert Dicke in 1969.[2]:62[3]:61 The most commonly accepted solution among cosmologists is cosmic inflation, the idea that the universe went through a brief period of extremely rapid expansion in the first fraction of a second after the Big Bang; along with the monopole problem and the horizon problem, the flatness problem is one of the three primary motivations for inflationary theory.[4]

Energy density and the Friedmann equation

According to Einstein's field equations of general relativity, the structure of spacetime is affected by the presence of matter and energy. On small scales space appears flat – as does the surface of the Earth if one looks at a small area. On large scales however, space is bent by the gravitational effect of matter. Since relativity indicates that matter and energy are equivalent, this effect is also produced by the presence of energy (such as light and other electromagnetic radiation) in addition to matter. The amount of bending (or curvature) of the universe depends on the density of matter/energy present.

This relationship can be expressed by the first Friedmann equation. In a universe without a cosmological constant, this is:
H^2 = \frac{8 \pi G}{3} \rho - \frac{kc^2}{a^2}
Here H is the Hubble parameter, a measure of the rate at which the universe is expanding. \rho is the total density of mass and energy in the universe, a is the scale factor (essentially the 'size' of the universe), and k is the curvature parameter — that is, a measure of how curved spacetime is. A positive, zero or negative value of k corresponds to a respectively closed, flat or open universe. The constants G and c are Newton's gravitational constant and the speed of light, respectively.

Cosmologists often simplify this equation by defining a critical density, \rho _{c}. For a given value of H, this is defined as the density required for a flat universe, i.e. k=0. Thus the above equation implies
\rho_c = \frac{3H^2}{8\pi G}.
Since the constant G is known and the expansion rate H can be measured by observing the speed at which distant galaxies are receding from us, \rho _{c} can be determined. Its value is currently around 10^−26 kg m^−3. The ratio of the actual density to this critical value is called Ω, and its difference from 1 determines the geometry of the universe: Ω > 1 corresponds to a greater than critical density, \rho > \rho_c, and hence a closed universe. Ω < 1 gives a low density open universe, and Ω equal to exactly 1 gives a flat universe.
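This calculation is easy to reproduce; a minimal sketch, assuming an illustrative Hubble parameter of 70 km/s/Mpc (a value near current measurements):

```python
import math

# Critical density rho_c = 3 H^2 / (8 pi G), evaluated for an assumed
# Hubble parameter of 70 km/s/Mpc (illustrative value).
G = 6.674e-11            # Newton's gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22           # metres per megaparsec
H = 70e3 / Mpc           # 70 km/s/Mpc converted to s^-1

rho_c = 3 * H**2 / (8 * math.pi * G)
print(f"rho_c = {rho_c:.2e} kg m^-3")   # ~9e-27, i.e. around 1e-26 as quoted
```

The result, roughly 9 × 10^−27 kg m^−3, is the "around 10^−26 kg m^−3" figure quoted in the text.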

The Friedmann equation,
\frac{3a^2}{8\pi G}H^2 = \rho a^2 - \frac{3kc^2}{8\pi G},
can be re-arranged into
\rho_c a^2 - \rho a^2 = -\frac{3kc^2}{8\pi G},
which after factoring out \rho a^2 and using \Omega = \rho/\rho_c leads to
(\Omega^{-1} - 1)\rho a^2 = \frac{-3kc^2}{8\pi G}.[5]
The right hand side of the last expression above contains constants only and therefore the left hand side must remain constant throughout the evolution of the universe.

As the universe expands the scale factor a increases, but the density \rho decreases as matter (or energy) becomes spread out. For the standard model of the universe, which contains mainly matter and radiation for most of its history, \rho decreases more quickly than a^{2} increases, and so the factor \rho a^2 will decrease. Since the time of the Planck era, shortly after the Big Bang, this term has decreased by a factor of around 10^{60},[5] and so (\Omega^{-1} - 1) must have increased by a similar amount to retain the constant value of their product.
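The conservation of (\Omega^{-1} - 1)\rho a^2 makes this growth easy to track: if \rho \propto a^{-n}, then \rho a^2 \propto a^{2-n} and the deviation |\Omega^{-1} - 1| grows as a^{n-2}. A sketch using the 10^{60} factor quoted above (the 10^{30} expansion factor is illustrative):

```python
# (Omega^-1 - 1) * rho * a^2 is conserved, so when rho ~ a^-n the deviation
# |Omega^-1 - 1| grows as a^(n-2): linearly in a for matter (n = 3),
# quadratically for radiation (n = 4).
def deviation_growth(initial_deviation, expansion_factor, n):
    """Scale |Omega^-1 - 1| when the scale factor grows by expansion_factor."""
    return initial_deviation * expansion_factor ** (n - 2)

# Radiation domination with an illustrative expansion factor of 1e30 gives the
# 1e60 growth quoted in the text: a deviation of 1e-62 then is ~0.01 now.
print(deviation_growth(1e-62, 1e30, 4))   # ~0.01
print(deviation_growth(1e-62, 1e30, 3))   # matter-dominated case grows only to ~1e-32
```

This is why the deviation must have been at most ~10^−62 at the Planck era to be at most ~0.01 today.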

Current value of Ω

The relative density Ω against cosmic time t (neither axis to scale). Each curve represents a possible universe: note that Ω diverges rapidly from 1. The blue curve is a universe similar to our own, which at the present time (right of the graph) has a small |Ω − 1| and therefore must have begun with Ω very close to 1 indeed. The red curve is a hypothetical different universe in which the initial value of Ω differed slightly too much from 1: by the present day it has diverged extremely and would not be able to support galaxies, stars or planets.

Measurement

The value of Ω at the present time is denoted Ω0. This value can be deduced by measuring the curvature of spacetime (since Ω = 1, or \rho=\rho_c, is defined as the density for which the curvature k = 0). The curvature can be inferred from a number of observations.

One such observation is that of anisotropies (that is, variations with direction - see below) in the Cosmic Microwave Background (CMB) radiation. The CMB is electromagnetic radiation which fills the universe, left over from an early stage in its history when it was filled with photons and a hot, dense plasma. This plasma cooled as the universe expanded, and when it cooled enough to form stable atoms it no longer absorbed the photons. The photons present at that stage have been propagating ever since, growing fainter and less energetic as they spread through the ever-expanding universe.

The temperature of this radiation is almost the same at all points on the sky, but there is a slight variation (around one part in 100,000) between the temperature received from different directions. The angular scale of these fluctuations - the typical angle between a hot patch and a cold patch on the sky[nb 1] - depends on the curvature of the universe which in turn depends on its density as described above. Thus, measurements of this angular scale allow an estimation of Ω0.[6][nb 2]

Another probe of Ω0 is the frequency of Type-Ia supernovae at different distances from Earth.[7][8] These supernovae, the explosions of degenerate white dwarf stars, are a type of standard candle; this means that the processes governing their intrinsic brightness are well understood so that a measure of apparent brightness when seen from Earth can be used to derive accurate distance measures for them (the apparent brightness decreasing in proportion to the square of the distance - see luminosity distance). Comparing this distance to the redshift of the supernovae gives a measure of the rate at which the universe has been expanding at different points in history. Since the expansion rate evolves differently over time in cosmologies with different total densities, Ω0 can be inferred from the supernovae data.
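The inverse-square relation underlying the standard-candle method can be illustrated with a short calculation; the luminosity and flux values below are purely hypothetical:

```python
import math

# Standard-candle distance: apparent brightness (flux F) falls as 1/d^2,
# so d = sqrt(L / (4 pi F)) for a known intrinsic luminosity L.
L_sn = 1e36          # W, hypothetical peak luminosity of a Type Ia supernova
F_obs = 1e-15        # W/m^2, hypothetical measured flux at Earth

d = math.sqrt(L_sn / (4 * math.pi * F_obs))   # distance in metres
ly = 9.461e15                                 # metres per light year
print(f"distance ~ {d / ly:.2e} light years")
```

Repeating this for many supernovae at different redshifts traces out the expansion history described in the text.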

Data from the Wilkinson Microwave Anisotropy Probe (measuring CMB anisotropies) combined with that from the Sloan Digital Sky Survey and observations of type-Ia supernovae constrain Ω0 to be 1 within 1%.[9] In other words, the term |Ω − 1| is currently less than 0.01, and therefore must have been less than 10^−62 at the Planck era.

Implication

This tiny value is the crux of the flatness problem. If the initial density of the universe could take any value, it would seem extremely surprising to find it so 'finely tuned' to the critical value \rho _{c}. Indeed, a very small departure of Ω from 1 in the early universe would have been magnified during billions of years of expansion to create a current density very far from critical. In the case of an overdensity (\rho > \rho_c) this would lead to a universe so dense it would cease expanding and collapse into a Big Crunch (an opposite to the Big Bang in which all matter and energy falls back into an extremely dense state) in a few years or less; in the case of an underdensity (\rho < \rho_c) it would expand so quickly and become so sparse it would soon seem essentially empty, and gravity would not be strong enough by comparison to cause matter to collapse and form galaxies. In either case the universe would contain no complex structures such as galaxies, stars, planets and any form of life.[10]

This problem with the Big Bang model was first pointed out by Robert Dicke in 1969,[11] and it motivated a search for some reason the density should take such a specific value.

Solutions to the problem

Some cosmologists agreed with Dicke that the flatness problem was a serious one, in need of a fundamental reason for the closeness of the density to criticality. But there was also a school of thought which denied that there was a problem to solve, arguing instead that since the universe must have some density it may as well have one close to \rho_c as far from it, and that speculating on a reason for any particular value was "beyond the domain of science".[11] Enough cosmologists saw the problem as a real one, however, for various solutions to be proposed.

Anthropic principle

One solution to the problem is to invoke the anthropic principle, which states that humans should take into account the conditions necessary for them to exist when speculating about causes of the universe's properties. If two types of universe seem equally likely but only one is suitable for the evolution of intelligent life, the anthropic principle suggests that finding ourselves in that universe is no surprise: if the other universe had existed instead, there would be no observers to notice the fact.

The principle can be applied to solve the flatness problem in two somewhat different ways. The first (an application of the 'strong anthropic principle') was suggested by C. B. Collins and Stephen Hawking,[12] who in 1973 considered the existence of an infinite number of universes such that every possible combination of initial properties was held by some universe. In such a situation, they argued, only those universes with exactly the correct density for forming galaxies and stars would give rise to intelligent observers such as humans: therefore, the fact that we observe Ω to be so close to 1 would be "simply a reflection of our own existence."[12]

An alternative approach, which makes use of the 'weak anthropic principle', is to suppose that the universe is infinite in size, but with the density varying in different places (i.e. an inhomogeneous universe). Thus some regions will be over-dense (Ω > 1) and some under-dense (Ω < 1). These regions may be extremely far apart - perhaps so far that light has not had time to travel from one to another during the age of the universe (that is, they lie outside one another's cosmological horizons). Therefore, each region would behave essentially as a separate universe: if we happened to live in a large patch of almost-critical density we would have no way of knowing of the existence of far-off under- or over-dense patches since no light or other signal has reached us from them. An appeal to the anthropic principle can then be made, arguing that intelligent life would only arise in those patches with Ω very close to 1, and that therefore our living in such a patch is unsurprising.[13]

This latter argument makes use of a version of the anthropic principle which is 'weaker' in the sense that it requires no speculation on multiple universes, or on the probabilities of various different universes existing instead of the current one. It requires only a single universe which is infinite - or merely large enough that many disconnected patches can form - and that the density varies in different regions (which is certainly the case on smaller scales, giving rise to galactic clusters and voids).

However, the anthropic principle has been criticised by many scientists.[14] For example, in 1979 Bernard Carr and Martin Rees argued that the principle “is entirely post hoc: it has not yet been used to predict any feature of the Universe.”[14][15] Others have objected to its philosophical basis, with Ernan McMullin writing in 1994 that "the weak Anthropic principle is trivial ... and the strong Anthropic principle is indefensible." Since many physicists and philosophers of science do not consider the principle to be compatible with the scientific method,[14] another explanation for the flatness problem was needed.

Inflation

The standard solution to the flatness problem invokes cosmic inflation, a process whereby the universe expands exponentially quickly (i.e. a grows as e^{\lambda t} with time t, for some constant \lambda ) during a short period in its early history. The theory of inflation was first proposed in 1979, and published in 1981, by Alan Guth.[16][17] His two main motivations for doing so were the flatness problem and the horizon problem, another fine-tuning problem of physical cosmology.
The proposed cause of inflation is a field which permeates space and drives the expansion. The field contains a certain energy density; but unlike the density of the matter or radiation present in the late universe, which decreases over time, the density of the inflationary field remains roughly constant as space expands. Therefore, the term \rho a^2 increases extremely rapidly as the scale factor a grows exponentially. Recalling the Friedmann equation
(\Omega^{-1} - 1)\rho a^2 = \frac{-3kc^2}{8\pi G},
and the fact that the right-hand side of this expression is constant, the term  | \Omega^{-1} - 1 | must therefore decrease with time.

Thus if  | \Omega^{-1} - 1 | initially takes any arbitrary value, a period of inflation can force it down towards 0 and leave it extremely small - around 10^{-62} as required above, for example. Subsequent evolution of the universe will cause the value to grow, bringing it to the currently observed value of around 0.01. Thus the sensitive dependence on the initial value of Ω has been removed: a large and therefore 'unsurprising' starting value need not become amplified and lead to a very curved universe with no opportunity to form galaxies and other structures.
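Since \rho is roughly constant during inflation, |\Omega^{-1} - 1| \propto a^{-2}, i.e. it falls by e^{-2N} over N e-folds of expansion. A rough estimate (treating the initial deviation as exactly 1, an illustrative choice) of the e-folds needed to reach the 10^{-62} level quoted above:

```python
import math

# |Omega^-1 - 1| scales as a^-2 = e^(-2N) during inflation, since rho is
# ~constant and a grows as e^(lambda t). E-folds needed to suppress an
# O(1) initial deviation down to 1e-62:
target = 1e-62
N = 0.5 * math.log(1.0 / target)
print(f"N ~ {N:.1f} e-folds")   # roughly 71
```

This is comparable to (slightly above) the ~60 e-folds usually invoked to solve the horizon problem as well.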

This success in solving the flatness problem is considered one of the major motivations for inflationary theory.[4][18]

Post inflation

Although inflationary theory is regarded as having had much success, and the evidence for it is compelling, it is not universally accepted: cosmologists recognize that there are still gaps in the theory and are open to the possibility that future observations will disprove it.[19][20] In particular, in the absence of any firm evidence for what the field driving inflation should be, many different versions of the theory have been proposed.[21] Many of these contain parameters or initial conditions which themselves require fine-tuning[21] in much the way that the early density does without inflation.

For these reasons work is still being done on alternative solutions to the flatness problem. These have included non-standard interpretations of the effect of dark energy[22] and gravity,[23] particle production in an oscillating universe,[24] and use of a Bayesian statistical approach to argue that the problem is non-existent. The latter argument, suggested for example by Evrard and Coles, maintains that the idea that Ω being close to 1 is 'unlikely' is based on assumptions about the likely distribution of the parameter which are not necessarily justified.[25] Despite this ongoing work, inflation remains by far the dominant explanation for the flatness problem.[1][4]

Einstein–Cartan theory

The flatness problem is naturally solved by the Einstein–Cartan–Sciama–Kibble theory of gravity, without an exotic form of matter required in inflationary theory.[26][27] This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. It has no free parameters. Including torsion gives the correct conservation law for the total (orbital plus intrinsic) angular momentum of matter in the presence of gravity. The minimal coupling between torsion and Dirac spinors obeying the nonlinear Dirac equation generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical big bang singularity, replacing it with a bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the big bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.

Baryon acoustic oscillations

From Wikipedia, the free encyclopedia

In cosmology, baryon acoustic oscillations (BAO) are regular, periodic fluctuations in the density of the visible baryonic matter (normal matter) of the universe. In the same way that supernovae provide a "standard candle" for astronomical observations,[1] BAO matter clustering provides a "standard ruler" for length scale in cosmology.[2] The length of this standard ruler (~490 million light years in today's universe[3]) can be measured by looking at the large scale structure of matter using astronomical surveys.[3] BAO measurements help cosmologists understand more about the nature of dark energy (which causes the apparent slight acceleration of the expansion of the universe) by constraining cosmological parameters.[2]

The early universe

The early universe consisted of a hot, dense plasma of electrons and baryons (protons and neutrons).  Photons (light particles) traveling in this universe were essentially trapped, unable to travel for any considerable distance before interacting with the plasma via Thomson scattering.[4] As the universe expanded, the plasma cooled to below 3000 K—a low enough energy such that the electrons and protons in the plasma could combine to form neutral hydrogen atoms. This recombination happened when the universe was around 379,000 years old, or at a redshift of z = 1089.[4] Photons interact to a much lesser degree with neutral matter, and therefore at recombination the universe became transparent to photons, allowing them to decouple from the matter and free-stream through the universe.[4] Technically speaking, the mean free path of the photons became of order the size of the universe. The cosmic microwave background (CMB) radiation is light that was emitted after recombination that is only now reaching our telescopes. Therefore, looking at, for example, Wilkinson Microwave Anisotropy Probe (WMAP) data, one is basically looking back in time to see an image of the universe when it was only 379,000 years old.[4]
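The figures above imply the present-day temperature of this light: photon wavelengths stretch with the expansion, so the radiation temperature scales as 1/(1 + z):

```python
T_recombination = 3000.0   # K, plasma temperature when neutral atoms formed (from the text)
z_recombination = 1089     # redshift of recombination (from the text)

# Radiation temperature redshifts as T_now = T_emitted / (1 + z):
T_today = T_recombination / (1 + z_recombination)
print(f"T_today ~ {T_today:.2f} K")   # close to the measured 2.725 K CMB temperature
```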


Figure 1: Temperature anisotropies of the CMB based on the nine year WMAP data (2012).[5][6][7]

WMAP indicates (Figure 1) a smooth, homogeneous universe with density anisotropies of 10 parts per million.[4] However, there are large structures and density fluctuations in the present universe. Galaxies, for instance, are a million times more dense than the universe's mean density.[2] The current belief is that the universe was built in a bottom-up fashion, meaning that the small anisotropies of the early universe acted as gravitational seeds for the structure observed today. Overdense regions attract more matter, whereas underdense regions attract less, and thus these small anisotropies, seen in the CMB, became the large scale structures in the universe today.

Cosmic sound

Imagine an overdense region of the primordial plasma. While this region of overdensity gravitationally attracts matter towards it, the heat of photon-matter interactions creates a large amount of outward pressure. These counteracting forces of gravity and pressure created oscillations, analogous to sound waves created in air by pressure differences.[3]

Consider a single wave originating from this overdense region from the center of the plasma. This region contains dark matter, baryons and photons. The pressure results in a spherical sound wave of both baryons and photons moving with a speed slightly over half the speed of light[8][9] outwards from the overdensity. The dark matter interacts only gravitationally, and so it stays at the center of the sound wave, the origin of the overdensity. Before decoupling, the photons and baryons moved outwards together. After decoupling the photons were no longer interacting with the baryonic matter and they diffused away. That relieved the pressure on the system, leaving behind a shell of baryonic matter at a fixed radius. This radius is often referred to as the sound horizon.[3] Without the photon-baryon pressure driving the system outwards, the only remaining force on the baryons was gravitational. Therefore, the baryons and dark matter (left behind at the center of the perturbation) formed a configuration which included overdensities of matter both at the original site of the anisotropy and in the shell at the sound horizon for that anisotropy.[3]

Many such anisotropies created the ripples in the density of space that attracted matter and eventually galaxies formed in a similar pattern. Therefore, one would expect to see a greater number of galaxies separated by the sound horizon than at other length scales.[3][clarification needed] This particular configuration of matter occurred at each anisotropy in the early universe, and therefore the universe is not composed of one sound ripple,[10] but many overlapping ripples.[11] As an analogy, imagine dropping many pebbles into a pond and watching the resulting wave patterns in the water.[2] It is not possible to observe this preferred separation of galaxies on the sound horizon scale by eye, but one can measure this artifact statistically by looking at the separations of large numbers of galaxies.

Standard ruler

The physics of the propagation of the baryon waves in the early universe is fairly simple; as a result cosmologists can predict the size of the sound horizon at the time of recombination. In addition the CMB provides a measurement of this scale to high accuracy.[3] However, in the time between recombination and present day, the universe has been expanding. This expansion is well supported by observations and is one of the foundations of the Big Bang Model. In the late 1990s, observations of supernovae[1] determined that not only is the universe expanding, it is expanding at an increasing rate. A better understanding of the acceleration of the universe, or dark energy, has become one of the most important questions in cosmology today. In order to understand the nature of the dark energy, it is important to have a variety of ways of measuring the acceleration. BAO can add to the body of knowledge about this acceleration by comparing observations of the sound horizon today (using clustering of galaxies) to that of the sound horizon at the time of recombination (using the CMB).[3] Thus BAO provides a measuring stick with which to better understand the nature of the acceleration, completely independent from the supernova technique.

BAO signal in the Sloan Digital Sky Survey

The Sloan Digital Sky Survey (SDSS) is a 2.5-metre wide-angle optical telescope at Apache Point Observatory in New Mexico. The goal of this five-year survey was to take images and spectra of millions of celestial objects. The result of compiling the SDSS data is a three-dimensional map of objects in the nearby universe: the SDSS catalog. The SDSS catalog provides a picture of the distribution of matter in a large enough portion of the universe that one can search for a BAO signal by noting whether there is a statistically significant overabundance of galaxies separated by the predicted sound horizon distance.

The SDSS team looked at a sample of 46,748 luminous red galaxies (LRGs), over 3,816 square-degrees of sky (approximately five billion light years in diameter) and out to a redshift of z = 0.47.[3] They analyzed the clustering of these galaxies by calculating a two-point correlation function on the data.[12] The correlation function (ξ) is a function of comoving galaxy separation distance (s) and describes the probability that one galaxy will be found within a given distance of another.[13] One would expect a high correlation of galaxies at small separation distances (due to the clumpy nature of galaxy formation) and a low correlation at large separation distances. The BAO signal would show up as a bump in the correlation function at a comoving separation equal to the sound horizon. This signal was detected by the SDSS team in 2005.[3][14] SDSS confirmed the WMAP results that the sound horizon is ~150 Mpc in today's universe.[2][3]

Detection in other galaxy surveys

The 2dFGRS collaboration and the SDSS collaboration reported a detection of the BAO signal in the power spectrum at around the same time in 2005.[15] Both teams are credited and recognized for the discovery by the community, as evidenced by the 2014 Shaw Prize in Astronomy,[16] which was awarded to both groups. Since then, further detections have been reported in the 6dF Galaxy Survey (6dFGS) in 2011,[17] WiggleZ in 2011,[18] and BOSS in 2012.[19]

BAO and dark energy formalism

BAO constrains dark energy parameters

The BAO in the radial and tangential directions provide measurements of the Hubble parameter and the angular diameter distance, respectively. The angular diameter distance and Hubble parameter can accommodate different functional forms describing dark energy behavior.[20][21] These functions have two parameters, w0 and w1, which can be constrained with the chi-squared technique.[22]

General relativity and dark energy

In general relativity, the expansion of the universe is parametrized by a scale factor a(t) which is related to redshift:[4]
a(t) \equiv (1 + z(t))^{-1}
The Hubble parameter, H, in terms of the scale factor is:
H(t) \equiv \frac{\dot{a}}{a}
where \dot{a} is the time-derivative of the scale factor. The Friedmann equations express the expansion of the universe in terms of Newton's gravitational constant, G, the mean pressure, P, the universe's density, \rho, the curvature, k, and the cosmological constant, \Lambda:[4]
H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}
\dot{H} + H^2 = \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3P}{c^2}\right) + \frac{\Lambda c^2}{3}
Observational evidence of the acceleration of the universe implies that (at the present time) \ddot{a} > 0. Therefore, the following are possible explanations:[23]
  • The universe is dominated by some field or particle that has negative pressure, such that the equation of state satisfies
w = \frac{P}{\rho} < -\frac{1}{3}
  • There is a non-zero cosmological constant, \Lambda.
  • The Friedmann equations are incorrect, since they contain oversimplifications made to render the general relativistic field equations easier to compute.
In order to differentiate between these scenarios, precise measurements of the Hubble parameter as a function of redshift are needed.

Measured observables of dark energy

The density parameter, \Omega, of the various components, x, of the universe can be expressed as the ratio of the density of x to the critical density, \rho_c:[23]
\rho_c = \frac{3H^2}{8\pi G}
\Omega_x \equiv \frac{\rho_x}{\rho_c} = \frac{8\pi G\rho_x}{3H^2}
The Friedmann equation can be rewritten in terms of the density parameters. For the current prevailing model of the universe, ΛCDM, this equation is as follows:[23]
H^2(a) = \left(\frac{\dot{a}}{a}\right)^2 = H_0^2\left[\Omega_m a^{-3} + \Omega_r a^{-4} + \Omega_k a^{-2} + \Omega_\Lambda a^{-3(1+w)}\right]
where m is matter, r is radiation, k is curvature, Λ is dark energy, and w is the equation of state. Measurements of the CMB from WMAP put tight constraints on many of these parameters; however, it is important to confirm and further constrain them using an independent method with different systematics.
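A minimal numerical sketch of the ΛCDM expression for H(a) above, assuming illustrative parameter values (Ω_m = 0.3, Ω_r ≈ 8 × 10^−5, flat geometry, w = −1):

```python
import math

# Hubble parameter H(a) in the LambdaCDM parametrization above.
# Parameter values are illustrative, not fitted.
def hubble(a, H0=70.0, Om=0.3, Or=8e-5, Ok=0.0, OL=None, w=-1.0):
    """H(a) in km/s/Mpc; if OL is not given, flatness closes the budget."""
    if OL is None:
        OL = 1.0 - Om - Or - Ok
    return H0 * math.sqrt(Om * a**-3 + Or * a**-4 + Ok * a**-2
                          + OL * a**(-3 * (1 + w)))

print(hubble(1.0))   # recovers H0 today (a = 1)
print(hubble(0.5))   # expansion was faster in the past
```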

The BAO signal is a standard ruler such that the length of the sound horizon can be measured as a function of cosmic time.[3] This measures two cosmological distances: the Hubble parameter, H(z), and the angular diameter distance, d_A(z), as a function of redshift z.[24] By measuring the angle \Delta\theta subtended by a ruler of length \Delta\chi, these parameters are determined as follows:[24]
\Delta\theta = \frac{\Delta\chi}{d_A(z)}
d_A(z) \propto \int_0^z \frac{dz'}{H(z')}
The redshift interval \Delta z spanned by the ruler can be measured from the data, determining the Hubble parameter as a function of redshift:
c\,\Delta z = H(z)\,\Delta\chi
Therefore, the BAO technique helps constrain cosmological parameters and provide further insight into the nature of dark energy.
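The distance integral above can be evaluated numerically. The sketch below assumes an illustrative flat matter-plus-Λ model (Ω_m = 0.3, H_0 = 70 km/s/Mpc) and the ~150 Mpc comoving sound horizon quoted above, and estimates the angle the BAO ruler subtends at the SDSS sample depth z = 0.47:

```python
import math

c = 299792.458             # speed of light, km/s
H0, Om = 70.0, 0.3         # illustrative flat LambdaCDM parameters

def H(z):
    """Hubble parameter H(z), km/s/Mpc, for a flat matter + Lambda universe."""
    return H0 * math.sqrt(Om * (1 + z) ** 3 + (1.0 - Om))

def comoving_distance(z, steps=10000):
    """Trapezoidal evaluation of c * integral_0^z dz'/H(z'), in Mpc."""
    dz = z / steps
    s = 0.5 * (c / H(0.0) + c / H(z))
    for i in range(1, steps):
        s += c / H(i * dz)
    return s * dz

z = 0.47                         # depth of the SDSS LRG sample quoted above
D_C = comoving_distance(z)       # comoving distance, Mpc
d_A = D_C / (1.0 + z)            # angular diameter distance in a flat universe
ruler = 150.0                    # Mpc, comoving sound horizon (from the text)
theta = ruler / D_C              # angle subtended by the comoving ruler, radians
print(f"D_C ~ {D_C:.0f} Mpc, BAO angle ~ {math.degrees(theta):.1f} degrees")
```

Equivalently, θ = (ruler/(1 + z))/d_A once the comoving ruler length is converted to a physical length, matching the Δθ = Δχ/d_A relation above.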

Void (astronomy)

From Wikipedia, the free encyclopedia
 
Structure of the Universe
Matter distribution in a cubic section of the universe. The blue fiber structures represent the matter (primarily dark matter) and the empty regions in between represent the cosmic voids.

Cosmic voids are vast spaces between filaments (the largest-scale structures in the universe), which contain very few or no galaxies. Voids typically have a diameter of 10 to 100 megaparsecs; particularly large voids, defined by the absence of rich superclusters, are sometimes called supervoids. Voids contain less than one-tenth of the mean matter density considered typical for the observable universe. They were first discovered in 1978 in a pioneering study by Stephen Gregory and Laird A. Thompson at the Kitt Peak National Observatory.[1]

Voids are believed to have been formed by baryon acoustic oscillations in the Big Bang, collapses of mass followed by implosions of the compressed baryonic matter. Starting from initially small anisotropies from quantum fluctuations in the early universe, the anisotropies grew larger in scale over time. Regions of higher density collapsed more rapidly under gravity, eventually resulting in the large-scale, foam-like structure or "cosmic web" of voids and galaxy filaments seen today. Voids located in high-density environments are smaller than voids situated in low-density spaces of the universe.[2]

Voids appear to correlate with the observed temperature of the cosmic microwave background (CMB) because of the Sachs–Wolfe effect. Colder regions correlate with voids and hotter regions correlate with filaments because of gravitational redshifting. As the Sachs–Wolfe effect is only significant if the universe is dominated by radiation or dark energy, the existence of voids is significant in providing physical evidence for dark energy.[3][4]

Large-scale structure

The structure of our Universe can be broken down into components that can help describe the characteristics of individual regions of the cosmos. These are the main structural components of the cosmic web:
  • Voids – vast, largely spherical[5] regions with very low cosmic mean densities, up to 100 megaparsecs (Mpc) in diameter.[6]
  • Walls – the regions that contain the typical cosmic mean density of matter abundance. Walls can be further broken down into two smaller structural features:
    • Clusters – highly concentrated zones where walls meet and intersect, adding to the effective size of the local wall.
    • Filaments – the branching arms of walls that can stretch for tens of megaparsecs.[7]
Voids have a mean density less than a tenth of the average density of the universe. This serves as a working definition even though there is no single agreed-upon definition of what constitutes a void. The matter density value used for describing the cosmic mean density is usually based on a ratio of the number of galaxies per unit volume rather than the total mass of the matter contained in a unit volume.[8]
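As an illustration of this number-density working definition, the sketch below flags a region as a void candidate when its galaxy count per unit volume falls below one-tenth of the cosmic mean. All numeric values and function names are hypothetical, chosen only to make the threshold concrete.

```python
import math

def number_density(galaxy_count, volume_mpc3):
    """Galaxies per cubic megaparsec."""
    return galaxy_count / volume_mpc3

def is_void_candidate(region_count, region_volume, mean_density, threshold=0.1):
    """True if the region's galaxy number density is below `threshold`
    times the cosmic mean (the < 1/10 working definition above)."""
    return number_density(region_count, region_volume) < threshold * mean_density

# Illustrative numbers only: a cosmic mean of 0.01 galaxies/Mpc^3 and a
# 30 Mpc-radius spherical region containing 40 galaxies.
mean = 0.01
volume = (4.0 / 3.0) * math.pi * 30.0 ** 3
print(is_void_candidate(40, volume, mean))  # ~3.5e-4 < 1e-3, so True
```

Note that the definition counts galaxies rather than weighing total mass, matching the galaxy-per-unit-volume convention described above.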

History and discovery

Cosmic voids as a topic of study in astrophysics began in the mid-1970s, when redshift surveys became more popular and led two separate teams of astrophysicists in 1978 to identify superclusters and voids in the distribution of galaxies and Abell clusters in a large region of space.[9][10] The new redshift surveys revolutionized the field of astronomy by adding depth to the two-dimensional maps of cosmological structure, which were often densely packed and overlapping,[6] allowing for the first three-dimensional mapping of the universe. In these surveys, depth was calculated from the individual redshifts of the galaxies due to the expansion of the universe according to Hubble's law.[11]

Timeline

A summarized timeline of important events in the field of cosmic voids from its beginning to recent times is listed below:
  • 1961 – Large-scale structural features such as "second-order clusters", a specific type of supercluster, were brought to the astronomical community's attention.[12]
  • 1978 – The first two papers on the topic of voids in the large-scale structure were published referencing voids found in the foreground of the Coma/A1367 clusters.[9][13]
  • 1981 – Discovery of a large void in the Boötes region of the sky that was nearly 50 h−1 Mpc in diameter (which was later recalculated to be about 34 h−1 Mpc).[14][15]
  • 1983 – Computer simulations sophisticated enough to provide relatively reliable results of growth and evolution of the large-scale structure emerged and yielded insight on key features of the large-scale galaxy distribution.[16][17]
  • 1985 – Details of the supercluster and void structure of the Perseus-Pisces region were surveyed.[18]
  • 1989 – The Center for Astrophysics Redshift Survey revealed that large voids, sharp filaments, and the walls that surround them dominate the large-scale structure of the universe.[19]
  • 1991 – The Las Campanas Redshift Survey confirmed the abundance of voids in the large-scale structure of the universe (Kirshner et al. 1991).[20]
  • 1995 – Comparisons of optically selected galaxy surveys indicate that the same voids are found regardless of the sample selection.[21]
  • 2001 – The completed two-degree-Field Galaxy Redshift Survey added a significant number of voids to the database of all known cosmic voids.[22]
  • 2009 – The Sloan Digital Sky Survey (SDSS) data, combined with previous large-scale surveys, provided the most complete view yet of the detailed structure of cosmic voids.[23][24][25]

Methods for finding

A number of methods exist for finding voids in the results of large-scale surveys of the universe. Of the many different algorithms, virtually all fall into one of three general categories.[26] The first class consists of void finders that try to find empty regions of space based on local galaxy density.[27] The second class comprises those which try to find voids via the geometrical structures in the dark matter distribution as suggested by the galaxies.[28] The third class is made up of finders which identify structures dynamically by using gravitationally unstable points in the distribution of dark matter.[29] Three of the most popular methods in the study of cosmic voids are described below:

VoidFinder algorithm

This first-class method uses each galaxy in a catalog as its target and then uses the nearest-neighbor approximation to calculate the cosmic density in the region contained within a spherical radius determined by the distance to the third-closest galaxy.[30] El Ad & Piran introduced this method in 1997 to provide a quick and effective way to standardize the cataloging of voids. Once the spherical cells are mined from all of the structure data, each cell is expanded until the underdensity returns to average expected wall density values.[31] One of the helpful features of void regions is that their boundaries are very distinct and well defined, with a cosmic mean density that starts at 10% in the body, quickly rises to 20% at the edge, and then reaches 100% in the walls directly outside the edges. The remaining walls and overlapping void regions are then gridded into, respectively, distinct and intertwining zones of filaments, clusters, and near-empty voids. Any overlap of more than 10% with already-known voids is considered to be a subregion within those known voids. All voids admitted to the catalog had a minimum radius of 10 Mpc in order to ensure that identified voids were not cataloged accidentally due to sampling errors.[30]
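The steps above can be sketched as follows. This is a simplified, hypothetical rendering of the VoidFinder idea, not El Ad & Piran's actual implementation: local density is estimated from the distance to the third-closest galaxy, and a candidate sphere is grown until the enclosed density climbs back toward wall values. The function names, the step size, and the 20% wall fraction are all illustrative.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def third_neighbour_distance(galaxy, catalog):
    """Distance to the 3rd-closest other galaxy (nearest-neighbour density proxy)."""
    d = sorted(dist(galaxy, g) for g in catalog if g is not galaxy)
    return d[2]

def local_density(galaxy, catalog):
    """Density estimate: 3 galaxies inside a sphere of the 3rd-neighbour radius."""
    r = third_neighbour_distance(galaxy, catalog)
    return 3.0 / ((4.0 / 3.0) * math.pi * r ** 3)

def grow_void(center, catalog, mean_density, step=1.0, wall_fraction=0.2):
    """Expand a sphere from `center` until the enclosed density returns
    to an assumed wall value (here, 20% of the cosmic mean)."""
    r = step
    while True:
        n = sum(1 for g in catalog if dist(center, g) <= r)
        density = n / ((4.0 / 3.0) * math.pi * r ** 3)
        if density >= wall_fraction * mean_density:
            return r  # sphere has hit the void's wall
        r += step
```

A driver over a real catalog would seed `grow_void` at the lowest-density galaxies, then merge spheres overlapping by more than 10% into single voids, as described above.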

Zone bordering on voidness (ZOBOV) algorithm

This particular second-class algorithm uses a Voronoi tessellation technique and mock border particles to categorize regions based on a high-density contrasting border with very little bias.[32] Neyrinck introduced this algorithm in 2008 with the purpose of providing a method that does not contain free parameters or presumed shape tessellations. Therefore, this technique can create more accurately shaped and sized void regions. Although this algorithm has advantages in shape and size, it has often been criticized for sometimes providing loosely defined results. Since it has no free parameters, it mostly finds small and trivial voids, although the algorithm places a statistical significance on each void it finds. A physical-significance parameter can be applied to reduce the number of trivial voids by requiring a minimum density-to-average-density ratio of at least 1:5. Subvoids are also identified by this process, which raises more philosophical questions about what qualifies as a void.[33] Void finders such as VIDE[34] are based on ZOBOV.
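A toy version of ZOBOV's parameter-free zoning step might look like the following, reduced to one dimension for clarity. Densities are taken as the inverse of (given) Voronoi cell volumes, and each cell is assigned to the basin of the local density minimum it reaches by steepest descent. Real ZOBOV does this on a full 3-D Voronoi tessellation, so everything here is an illustrative simplification.

```python
def voronoi_density(volumes):
    """One tracer per Voronoi cell, so density = 1 / cell volume."""
    return [1.0 / v for v in volumes]

def zone_of(i, density):
    """Follow the steepest descent to a local density minimum (the zone core)."""
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(density)]
        best = min(neighbours, key=lambda j: density[j])
        if density[best] >= density[i]:
            return i  # i is a local minimum: the core of a void zone
        i = best

def zones(volumes):
    """Map every cell index to the core of the density basin it belongs to."""
    density = voronoi_density(volumes)
    return [zone_of(i, density) for i in range(len(density))]
```

Because no density threshold or sphere shape is assumed, the basins take whatever shape the tessellation gives them, which mirrors why ZOBOV finds accurately shaped voids but also many small, trivial ones.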

Dynamical void analysis (DIVA) algorithm

This third-class method is drastically different from the previous two algorithms. Its most striking aspect is that it requires a different definition of what it means to be a void. Instead of the general notion of a void as a region of space with a low cosmic mean density (a hole in the distribution of galaxies), it defines voids as regions from which matter is escaping, a definition that corresponds to the dark energy equation of state, w. Void centers are then taken to be the maximal sources of the displacement field, denoted Sψ. The purpose of this change in definition, presented by Lavaux and Wandelt in 2009, was to yield cosmic voids on which exact analytical calculations of their dynamical and geometrical properties can be made. This allows DIVA to thoroughly explore the ellipticity of voids and how they evolve in the large-scale structure, leading to a classification of three distinct types of voids: true voids, pancake voids, and filament voids. Another notable quality is that even though DIVA contains a selection-function bias just as first-class methods do, it is devised so that this bias can be precisely calibrated, leading to much more reliable results. Multiple shortfalls of this Lagrangian-Eulerian hybrid approach exist. One example is that the resulting voids are intrinsically different from those found by other methods, which makes an all-inclusive comparison between the results of differing algorithms very difficult.[26]
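The dynamical definition can be caricatured in one dimension: where the divergence of the displacement field ψ is positive, matter is flowing outward, and void centres sit at local maxima of that divergence. The sketch below is a hypothetical simplification, not Lavaux and Wandelt's actual machinery; grid spacing and field values are purely illustrative.

```python
def divergence_1d(psi, dx=1.0):
    """Central-difference divergence of a 1-D displacement field,
    clamped at the grid edges."""
    n = len(psi)
    return [(psi[min(i + 1, n - 1)] - psi[max(i - 1, 0)]) / (2 * dx)
            for i in range(n)]

def diva_void_centres(psi, dx=1.0):
    """Interior grid indices where matter is escaping (positive divergence)
    and the source of the displacement field is locally maximal."""
    div = divergence_1d(psi, dx)
    return [i for i in range(1, len(div) - 1)
            if div[i] > 0 and div[i] >= div[i - 1] and div[i] >= div[i + 1]]
```

For a displacement field pointing away from a single expansion centre, such as `[-1, -1, -1, 0, 1, 1, 1]`, the divergence peaks at the middle index, which the sketch reports as the void centre.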

Robustness testing

Once an algorithm is presented to find what it deems to be cosmic voids, it is crucial that its findings approximately match what is expected from current simulations and models of large-scale structure. To do this, the number, size, and proportion, as well as other features, of the voids found by the algorithm are checked by running mock data through a smoothed-particle-hydrodynamics halo simulation, a ΛCDM model, or another reliable simulator. An algorithm is considered much more robust if its results are in concordance with the results of these simulations for a range of input criteria (Pan et al. 2011).[35]
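A robustness check of this kind can be sketched as a distributional comparison, for example between the void radii an algorithm reports and the radii found in trusted mock catalogs. The Kolmogorov–Smirnov statistic and the acceptance threshold below are illustrative choices, not those of Pan et al.

```python
def ks_statistic(sample_a, sample_b):
    """Maximum distance between the two samples' empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def cdf(s, x):
        return sum(1 for v in s if v <= x) / len(s)

    points = sorted(set(a) | set(b))
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

def algorithm_is_robust(found_radii, mock_radii, tolerance=0.2):
    """Accept the void finder if its radius distribution tracks the mock's.
    The 0.2 tolerance is an arbitrary illustrative threshold."""
    return ks_statistic(found_radii, mock_radii) <= tolerance
```

In practice the same comparison would be repeated over several input criteria (survey depth, sampling density, and so on), and an algorithm whose distributions agree across that whole range is the one deemed robust.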

Significance

Voids have contributed significantly to the modern understanding of the cosmos, with applications ranging from shedding light on the current understanding of dark energy, to refining and constraining cosmological evolution models.[4] Some popular applications are mentioned in detail below.

Dark energy

The simultaneous existence of the largest-known voids and galaxy clusters requires about 70% dark energy in the universe today, consistent with the latest data from the cosmic microwave background.[4] Voids act as bubbles in the universe that are sensitive to background cosmological changes. This means that the evolution of a void's shape is in part the result of the expansion of the universe. Since this acceleration is believed to be caused by dark energy, studying the changes of a void's shape over time can further refine the Quintessence + Cold Dark Matter (QCDM) model and provide a more accurate dark energy equation of state.[36] Additionally, the abundance of voids is a promising way to constrain the dark energy equation of state.[37]

Galactic formation and evolution models

Large-scale structure formation
A 43×43×43-megaparsec cube shows the evolution of the large-scale structure over a logarithmic period starting from a redshift of 30 and ending at redshift 0. The model clearly shows how the matter-dense regions contract under the collective gravitational force while simultaneously aiding in the expansion of cosmic voids as the matter flees to the walls and filaments.

Cosmic voids contain a mix of galaxies and matter that is slightly different from that in other regions of the universe. This unique mix supports the biased galaxy formation picture predicted in Gaussian adiabatic cold dark matter models. This phenomenon provides an opportunity to modify the morphology–density correlation, which holds discrepancies with these voids. Observations such as the morphology–density correlation can help uncover new facets of how galaxies form and evolve on large scales.[38] On a more local scale, galaxies that reside in voids have differing morphological and spectral properties from those located in the walls. One finding is that voids contain a significantly higher fraction of starburst galaxies of young, hot stars than comparable samples of galaxies in walls.[39]

Anomalies in anisotropies

Cold spots in the cosmic microwave background, such as the CMB cold spot found by the Wilkinson Microwave Anisotropy Probe (WMAP), could possibly be explained by an extremely large cosmic void with a radius of ~120 Mpc, provided the late integrated Sachs–Wolfe effect is accounted for in the proposed solution. Anomalies in CMB screenings may thus be explained by the existence of large voids located along the line of sight in which the cold spots lie.[40]

Cosmic microwave background screening of the universe.

Accelerating expansion of the universe

Although dark energy is currently the most popular explanation for the acceleration of the expansion of the universe, another theory explores the possibility that our galaxy is part of a very large, not-so-underdense cosmic void. According to this theory, such an environment could naively lead to the demand for dark energy to solve the problem of the observed acceleration. As more data on this topic have been released, the chances of it being a realistic solution in place of the current ΛCDM interpretation have been largely diminished, but the idea has not been altogether abandoned.[41]

Gravitational theories

The abundance of voids, particularly when combined with the abundance of clusters of galaxies, is a promising method for precision tests of deviations from general relativity on large scales and in low-density regions.[42]

The insides of voids often seem to adhere to cosmological parameters that differ from those of the known universe. Because of this unique feature, cosmic voids make great laboratories for studying the effects that gravitational clustering and growth rates have on local galaxies and structure when the cosmological parameters take different values from those of the outside universe. Because larger voids predominantly remain in a linear regime, with most structures within them exhibiting spherical symmetry in the underdense environment (the underdensity leads to near-negligible particle-particle gravitational interactions that would otherwise occur in a region of normal galactic density), models for voids can be tested with very high accuracy. The cosmological parameters that differ in these voids are Ωm, ΩΛ, and H0.[43]
