
Thursday, May 31, 2018

Baryon acoustic oscillations

From Wikipedia, the free encyclopedia

In cosmology, baryon acoustic oscillations (BAO) are regular, periodic fluctuations in the density of the visible baryonic matter (normal matter) of the universe. In the same way that supernovae provide a "standard candle" for astronomical observations,[1] BAO matter clustering provides a "standard ruler" for length scale in cosmology.[2] The length of this standard ruler (~490 million light years in today's universe[3]) can be measured by looking at the large scale structure of matter using astronomical surveys.[3] BAO measurements help cosmologists understand more about the nature of dark energy (which causes the apparent slight acceleration of the expansion of the universe) by constraining cosmological parameters.[2]

The early universe

The early universe consisted of a hot, dense plasma of electrons and baryons (protons and neutrons).  Photons (light particles) traveling in this universe were essentially trapped, unable to travel for any considerable distance before interacting with the plasma via Thomson scattering.[4] As the universe expanded, the plasma cooled to below 3000 K—a low enough energy such that the electrons and protons in the plasma could combine to form neutral hydrogen atoms. This recombination happened when the universe was around 379,000 years old, or at a redshift of z = 1089.[4] Photons interact to a much lesser degree with neutral matter, and therefore at recombination the universe became transparent to photons, allowing them to decouple from the matter and free-stream through the universe.[4] Technically speaking, the mean free path of the photons became of order the size of the universe. The cosmic microwave background (CMB) radiation is light that was emitted after recombination that is only now reaching our telescopes. Therefore, looking at, for example, Wilkinson Microwave Anisotropy Probe (WMAP) data, one is basically looking back in time to see an image of the universe when it was only 379,000 years old.[4]


Figure 1: Temperature anisotropies of the CMB based on the nine year WMAP data (2012).[5][6][7]

WMAP indicates (Figure 1) a smooth, homogeneous universe with density anisotropies of 10 parts per million.[4] However, there are large structures and density fluctuations in the present universe. Galaxies, for instance, are a million times more dense than the universe's mean density.[2] The current belief is that the universe was built in a bottom-up fashion, meaning that the small anisotropies of the early universe acted as gravitational seeds for the structure observed today. Overdense regions attract more matter, whereas underdense regions attract less, and thus these small anisotropies, seen in the CMB, became the large-scale structures in the universe today.

Cosmic sound

Imagine an overdense region of the primordial plasma. While this region of overdensity gravitationally attracts matter towards it, the heat of photon-matter interactions creates a large amount of outward pressure. These counteracting forces of gravity and pressure created oscillations, analogous to sound waves created in air by pressure differences.[3]

Consider a single wave originating from this overdense region at the center of the plasma. This region contains dark matter, baryons and photons. The pressure results in a spherical sound wave of both baryons and photons moving with a speed slightly over half the speed of light[8][9] outwards from the overdensity. The dark matter interacts only gravitationally, and so it stays at the center of the sound wave, the origin of the overdensity. Before decoupling, the photons and baryons moved outwards together. After decoupling the photons were no longer interacting with the baryonic matter and they diffused away. That relieved the pressure on the system, leaving behind a shell of baryonic matter at a fixed radius. This radius is often referred to as the sound horizon.[3] Without the photon-baryon pressure driving the system outwards, the only remaining force on the baryons was gravitational. Therefore, the baryons and dark matter (left behind at the center of the perturbation) formed a configuration which included overdensities of matter both at the original site of the anisotropy and in the shell at the sound horizon for that anisotropy.[3]

Many such anisotropies created the ripples in the density of space that attracted matter and eventually galaxies formed in a similar pattern. Therefore, one would expect to see a greater number of galaxies separated by the sound horizon than at other length scales.[3] This particular configuration of matter occurred at each anisotropy in the early universe, and therefore the universe is not composed of one sound ripple,[10] but many overlapping ripples.[11] As an analogy, imagine dropping many pebbles into a pond and watching the resulting wave patterns in the water.[2] It is not possible to observe this preferred separation of galaxies on the sound horizon scale by eye, but one can measure this artifact statistically by looking at the separations of large numbers of galaxies.

Standard ruler

The physics of the propagation of the baryon waves in the early universe is fairly simple; as a result cosmologists can predict the size of the sound horizon at the time of recombination. In addition the CMB provides a measurement of this scale to high accuracy.[3] However, in the time between recombination and present day, the universe has been expanding. This expansion is well supported by observations and is one of the foundations of the Big Bang Model. In the late 1990s, observations of supernovae[1] determined that not only is the universe expanding, it is expanding at an increasing rate. A better understanding of the acceleration of the universe, or dark energy, has become one of the most important questions in cosmology today. In order to understand the nature of the dark energy, it is important to have a variety of ways of measuring the acceleration. BAO can add to the body of knowledge about this acceleration by comparing observations of the sound horizon today (using clustering of galaxies) to that of the sound horizon at the time of recombination (using the CMB).[3] Thus BAO provides a measuring stick with which to better understand the nature of the acceleration, completely independent from the supernova technique.

BAO signal in the Sloan Digital Sky Survey

The Sloan Digital Sky Survey (SDSS) is a major astronomical survey conducted with a 2.5-metre wide-angle optical telescope at Apache Point Observatory in New Mexico. The goal of this five-year survey was to take images and spectra of millions of celestial objects. The result of compiling the SDSS data is a three-dimensional map of objects in the nearby universe: the SDSS catalog. The SDSS catalog provides a picture of the distribution of matter in a large enough portion of the universe that one can search for a BAO signal by noting whether there is a statistically significant overabundance of galaxies separated by the predicted sound horizon distance.

The SDSS team looked at a sample of 46,748 luminous red galaxies (LRGs), over 3,816 square degrees of sky (approximately five billion light years in diameter) and out to a redshift of z = 0.47.[3] They analyzed the clustering of these galaxies by calculating a two-point correlation function on the data.[12] The correlation function (ξ) is a function of comoving galaxy separation distance (s) and describes the probability that one galaxy will be found within a given distance of another.[13] One would expect a high correlation of galaxies at small separation distances (due to the clumpy nature of galaxy formation) and a low correlation at large separation distances. The BAO signal would show up as a bump in the correlation function at a comoving separation equal to the sound horizon. This signal was detected by the SDSS team in 2005.[3][14] SDSS confirmed the WMAP results that the sound horizon is ~150 Mpc in today's universe.[2][3]
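As an illustration of the method (not the SDSS team's actual pipeline), the correlation function can be estimated from pair counts with the simple "natural" estimator ξ = DD/RR − 1, where DD and RR are histograms of pairwise separations in the data and in an unclustered random catalogue. All positions and numbers below are mock values:

```python
# Sketch of a two-point correlation estimate from pair counts, using
# the simple "natural" estimator xi = DD/RR - 1.  The mock positions
# are unclustered, so xi should scatter around zero; clustered LRG
# data would instead show the BAO bump near s ~ 150 Mpc.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
box = 1000.0                                   # side of mock volume, Mpc
data = rng.uniform(0, box, size=(2000, 3))     # mock "galaxy" positions
rand = rng.uniform(0, box, size=(2000, 3))     # unclustered random catalogue

bins = np.linspace(20, 200, 19)                # separation bins, Mpc
dd, _ = np.histogram(pdist(data), bins=bins)   # data-data pair counts
rr, _ = np.histogram(pdist(rand), bins=bins)   # random-random pair counts

xi = dd / rr - 1.0                             # correlation function estimate
centers = 0.5 * (bins[:-1] + bins[1:])         # bin centers, Mpc
```

Real surveys use the more robust Landy–Szalay estimator and account for the survey mask, but the principle, a statistically significant excess of pairs near the sound-horizon scale, is the same.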

Detection in other galaxy surveys

The 2dFGRS collaboration and the SDSS collaboration reported a detection of the BAO signal in the power spectrum at around the same time in 2005.[15] Both teams are credited with the discovery, as recognized by the 2014 Shaw Prize in Astronomy, which was awarded to both groups.[16] Since then, further detections have been reported in the 6dF Galaxy Survey (6dFGS) in 2011,[17] WiggleZ in 2011[18] and BOSS in 2012.[19]

BAO and dark energy formalism

BAO constrains dark energy parameters

The BAO in the radial and tangential directions provide measurements of the Hubble parameter and the angular diameter distance, respectively. Both quantities depend on the behavior of dark energy, which can be modeled by functions with two parameters, w0 and w1;[20][21] these parameters can be constrained with a chi-square technique.[22]
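A minimal sketch of such a fit, assuming the common two-parameter form w(a) = w0 + w1(1 − a) and fake H(z) "measurements" generated from a pure cosmological constant (all numbers are illustrative, not survey data):

```python
# Hypothetical chi-square grid over the dark-energy parameters
# (w0, w1), with mock H(z) data generated from w = -1 (a pure
# cosmological constant); the fit should recover w0 = -1, w1 = 0.
import numpy as np

H0, om = 70.0, 0.3                             # assumed fiducial values

def hubble(z, w0, w1):
    a = 1.0 / (1.0 + z)
    # dark-energy density evolution for w(a) = w0 + w1*(1 - a)
    de = a ** (-3.0 * (1.0 + w0 + w1)) * np.exp(-3.0 * w1 * (1.0 - a))
    return H0 * np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om) * de)

z_obs = np.array([0.2, 0.35, 0.5])
h_obs = hubble(z_obs, -1.0, 0.0)               # mock "measurements"
sigma = 0.02 * h_obs                           # assumed 2% errors

w0_grid = np.linspace(-1.5, -0.5, 101)
w1_grid = np.linspace(-1.0, 1.0, 101)
chi2 = np.array([[np.sum(((hubble(z_obs, w0, w1) - h_obs) / sigma) ** 2)
                  for w1 in w1_grid] for w0 in w0_grid])

i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
best_w0, best_w1 = w0_grid[i], w1_grid[j]      # minimum lands at (-1, 0)
```

In practice one would marginalize over the other cosmological parameters and combine BAO with CMB and supernova likelihoods, but the grid search captures the basic idea.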

General relativity and dark energy

In general relativity, the expansion of the universe is parametrized by a scale factor a(t), which is related to the redshift:[4]
a(t) \equiv (1 + z(t))^{-1}
The Hubble parameter, H, in terms of the scale factor is:
H(t) \equiv \frac{\dot{a}}{a}
where \dot{a} is the time derivative of the scale factor. The Friedmann equations express the expansion of the universe in terms of Newton's gravitational constant, G, the universe's mean pressure, p, and density, \rho, the curvature, k, and the cosmological constant, \Lambda:[4]
H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}
\dot{H} + H^2 = \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}
Observational evidence of the acceleration of the universe implies that (at the present time) \ddot{a} > 0. Therefore, the following are possible explanations:[23]
  • The universe is dominated by some field or particle with negative pressure, such that the equation of state satisfies
w = \frac{p}{\rho} < -1/3
  • There is a non-zero cosmological constant, \Lambda.
  • The Friedmann equations are incorrect, since they contain oversimplifications made to render the general relativistic field equations easier to compute.
In order to differentiate between these scenarios, precise measurements of the Hubble parameter as a function of redshift are needed.

Measured observables of dark energy

The density parameter, \Omega, of a given component, x, of the universe is the ratio of its density, \rho_x, to the critical density, \rho_c:[23]
\rho_c = \frac{3H^2}{8\pi G}
\Omega_x \equiv \frac{\rho_x}{\rho_c} = \frac{8\pi G \rho_x}{3H^2}
The Friedmann equation can be rewritten in terms of the density parameters. For the current prevailing model of the universe, ΛCDM, this equation is as follows:[23]
H^2(a) = \left(\frac{\dot{a}}{a}\right)^2 = H_0^2 \left[\Omega_m a^{-3} + \Omega_r a^{-4} + \Omega_k a^{-2} + \Omega_\Lambda a^{-3(1+w)}\right]
where the subscripts m, r, k, and Λ denote matter, radiation, curvature, and dark energy respectively, and w is the dark-energy equation of state. Measurements of the CMB from WMAP put tight constraints on many of these parameters; however, it is important to confirm and further constrain them using an independent method with different systematics.
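For concreteness, the ΛCDM expansion rate can be transcribed directly; the density parameters below are illustrative values for a flat universe with a cosmological constant:

```python
# Direct transcription of the LambdaCDM Friedmann equation, with
# illustrative density parameters (flat universe, w = -1).
import numpy as np

H0 = 70.0                            # assumed Hubble constant, km/s/Mpc
om, orad, ok = 0.3, 9e-5, 0.0        # matter, radiation, curvature
ol = 1.0 - om - orad - ok            # dark energy (flatness condition)
w = -1.0                             # cosmological-constant equation of state

def hubble(a):
    """H(a) from the Friedmann equation, in km/s/Mpc."""
    return H0 * np.sqrt(om * a**-3 + orad * a**-4
                        + ok * a**-2 + ol * a**(-3.0 * (1.0 + w)))

# Today (a = 1) the bracket sums to 1, so hubble(1.0) equals H0;
# at earlier times (a < 1) the expansion rate was higher.
```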

The BAO signal is a standard ruler such that the length of the sound horizon can be measured as a function of cosmic time.[3] This measures two cosmological distances: the Hubble parameter, H(z), and the angular diameter distance, d_A(z), as functions of redshift z.[24] By measuring the angle, \Delta\theta, subtended by the ruler of length \Delta\chi, these parameters are determined as follows:[24]
\Delta\theta = \frac{\Delta\chi}{d_A(z)}
d_A(z) \propto \int_0^z \frac{dz'}{H(z')}
The redshift interval, \Delta z, spanned by the ruler can also be measured from the data, determining the Hubble parameter as a function of redshift:
c\,\Delta z = H(z)\,\Delta\chi
Therefore, the BAO technique helps constrain cosmological parameters and provide further insight into the nature of dark energy.
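A worked numerical sketch of these two relations, with assumed (made-up) measured values for the angular size and redshift extent of the ~150 Mpc ruler:

```python
# Standard-ruler arithmetic: from the observed angular size and
# redshift extent of the sound horizon, recover d_A(z) and H(z).
# The "measured" delta_theta and delta_z below are illustrative inputs.
c = 299_792.458              # speed of light, km/s
r_s = 150.0                  # comoving sound horizon, Mpc (from the CMB)

delta_theta = 0.105          # assumed angular size of the ruler, radians
delta_z = 0.041              # assumed redshift extent of the ruler

d_A = r_s / delta_theta      # angular diameter distance, ~1429 Mpc
H_z = c * delta_z / r_s      # Hubble parameter at that redshift, ~82 km/s/Mpc
```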

Void (astronomy)

From Wikipedia, the free encyclopedia
 
Structure of the Universe
Matter distribution in a cubic section of the universe. The blue fiber structures represent the matter (primarily dark matter) and the empty regions in between represent the cosmic voids.

Cosmic voids are vast spaces between filaments (the largest-scale structures in the universe), which contain very few or no galaxies. Voids typically have a diameter of 10 to 100 megaparsecs; particularly large voids, defined by the absence of rich superclusters, are sometimes called supervoids. Voids have less than one-tenth of the mean matter density considered typical for the observable universe. They were first discovered in 1978 in a pioneering study by Stephen Gregory and Laird A. Thompson at the Kitt Peak National Observatory.[1]

Voids are believed to have been formed by baryon acoustic oscillations in the Big Bang, collapses of mass followed by implosions of the compressed baryonic matter. Starting from initially small anisotropies from quantum fluctuations in the early universe, the anisotropies grew larger in scale over time. Regions of higher density collapsed more rapidly under gravity, eventually resulting in the large-scale, foam-like structure or "cosmic web" of voids and galaxy filaments seen today. Voids located in high-density environments are smaller than voids situated in low-density spaces of the universe.[2]

Voids appear to correlate with the observed temperature of the cosmic microwave background (CMB) because of the Sachs–Wolfe effect. Colder regions correlate with voids and hotter regions correlate with filaments because of gravitational redshifting. As the Sachs–Wolfe effect is only significant if the universe is dominated by radiation or dark energy, the existence of voids is significant in providing physical evidence for dark energy.[3][4]

Large-scale structure

The structure of our Universe can be broken down into components that can help describe the characteristics of individual regions of the cosmos. These are the main structural components of the cosmic web:
  • Voids – vast, largely spherical[5] regions with very low cosmic mean densities, up to 100 megaparsecs (Mpc) in diameter.[6]
  • Walls – the regions that contain the typical cosmic mean density of matter abundance. Walls can be further broken down into two smaller structural features:
    • Clusters – highly concentrated zones where walls meet and intersect, adding to the effective size of the local wall.
    • Filaments – the branching arms of walls that can stretch for tens of megaparsecs.[7]
Voids have a mean density less than a tenth of the average density of the universe. This serves as a working definition even though there is no single agreed-upon definition of what constitutes a void. The matter density value used for describing the cosmic mean density is usually based on a ratio of the number of galaxies per unit volume rather than the total mass of the matter contained in a unit volume.[8]

History and discovery

Cosmic voids as a topic of study in astrophysics began in the mid-1970s when redshift surveys became more popular and led two separate teams of astrophysicists in 1978 to identify superclusters and voids in the distribution of galaxies and Abell clusters in a large region of space.[9][10] The new redshift surveys revolutionized the field of astronomy by adding depth to the two-dimensional maps of cosmological structure, which were often densely packed and overlapping,[6] allowing for the first three-dimensional mapping of the universe. In the redshift surveys, the depth was calculated from the individual redshifts of the galaxies due to the expansion of the universe according to Hubble's law.[11]

Timeline

A summarized timeline of important events in the field of cosmic voids from its beginning to recent times is listed below:
  • 1961 – Large-scale structural features such as "second-order clusters", a specific type of supercluster, were brought to the astronomical community's attention.[12]
  • 1978 – The first two papers on the topic of voids in the large-scale structure were published referencing voids found in the foreground of the Coma/A1367 clusters.[9][13]
  • 1981 – Discovery of a large void in the Boötes region of the sky that was nearly 50 h−1 Mpc in diameter (which was later recalculated to be about 34 h−1 Mpc).[14][15]
  • 1983 – Computer simulations sophisticated enough to provide relatively reliable results of growth and evolution of the large-scale structure emerged and yielded insight on key features of the large-scale galaxy distribution.[16][17]
  • 1985 – Details of the supercluster and void structure of the Perseus-Pisces region were surveyed.[18]
  • 1989 – The Center for Astrophysics Redshift Survey revealed that large voids, sharp filaments, and the walls that surround them dominate the large-scale structure of the universe.[19]
  • 1991 – The Las Campanas Redshift Survey confirmed the abundance of voids in the large-scale structure of the universe (Kirshner et al. 1991).[20]
  • 1995 – Comparisons of optically selected galaxy surveys indicate that the same voids are found regardless of the sample selection.[21]
  • 2001 – The completed two-degree Field Galaxy Redshift Survey adds a significantly large amount of voids to the database of all known cosmic voids.[22]
  • 2009 – The Sloan Digital Sky Survey (SDSS) data combined with previous large-scale surveys now provide the most complete view of the detailed structure of cosmic voids.[23][24][25]

Methods for finding

There exist a number of ways for finding voids with the results of large-scale surveys of the universe. Of the many different algorithms, virtually all fall into one of three general categories.[26] The first class consists of void finders that try to find empty regions of space based on local galaxy density.[27] The second class are those which try to find voids via the geometrical structures in the dark matter distribution as suggested by the galaxies.[28] The third class is made up of those finders which identify structures dynamically by using gravitationally unstable points in the distribution of dark matter.[29] The three most popular methods through the study of cosmic voids are listed below:

VoidFinder algorithm

This first-class method uses each galaxy in a catalog as its target and then uses the Nearest Neighbor Approximation to calculate the cosmic density in the region contained in a spherical radius determined by the distance to the third-closest galaxy.[30] El Ad & Piran introduced this method in 1997 to allow a quick and effective method for standardizing the cataloging of voids. Once the spherical cells are mined from all of the structure data, each cell is expanded until the underdensity returns to average expected wall density values.[31] One of the helpful features of void regions is that their boundaries are very distinct and defined, with a cosmic mean density that starts at 10% in the body and quickly rises to 20% at the edge and then to 100% in the walls directly outside the edges. The remaining walls and overlapping void regions are then gridded into, respectively, distinct and intertwining zones of filaments, clusters, and near-empty voids. Any overlap of more than 10% with already known voids is considered to be a subregion within those known voids. All voids admitted to the catalog had a minimum radius of 10 Mpc in order to ensure all identified voids were not accidentally cataloged due to sampling errors.[30]
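The first step of this approach can be sketched as follows; this is a hypothetical toy version on mock data, not El Ad & Piran's actual code. Each galaxy's local density is estimated from the distance to its third-nearest neighbour, and low-density galaxies are flagged before the void spheres are grown:

```python
# Toy version of the VoidFinder density step: estimate a local density
# at each mock galaxy from the distance to its third-nearest neighbour.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
galaxies = rng.uniform(0, 100, size=(500, 3))     # mock positions, Mpc

tree = cKDTree(galaxies)
# k=4 because the nearest "neighbour" (distance 0) is the galaxy itself
dist, _ = tree.query(galaxies, k=4)
d3 = dist[:, 3]                                   # third-neighbour distance
local_density = 3.0 / ((4.0 / 3.0) * np.pi * d3**3)   # 3 galaxies per sphere

mean_density = 500 / 100.0**3                     # mean of the mock catalogue
void_candidates = local_density < 0.2 * mean_density  # low-density galaxies
```

The 0.2 threshold is an arbitrary illustration; the published algorithm grows spheres around such galaxies until the density recovers toward wall values.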

Zone bordering on voidness (ZOBOV) algorithm

This particular second-class algorithm uses a Voronoi tessellation technique and mock border particles in order to categorize regions based on a high-density contrasting border with a very low amount of bias.[32] Neyrinck introduced this algorithm in 2008 with the purpose of introducing a method that did not contain free parameters or presumed shape tessellations. Therefore, this technique can create more accurately shaped and sized void regions. Although this algorithm has some advantages in shape and size, it has been criticized often for sometimes providing loosely defined results. Since it has no free parameters, it mostly finds small and trivial voids, although the algorithm places a statistical significance on each void it finds. A physical significance parameter can be applied in order to reduce the number of trivial voids by including a minimum density to average density ratio of at least 1:5. Subvoids are also identified using this process which raises more philosophical questions on what qualifies as a void.[33] Void finders such as VIDE[34] are based on ZOBOV.
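The density step of this approach can be sketched in two dimensions; this is illustrative only, not Neyrinck's implementation. Each particle's density is the inverse of its Voronoi cell volume, so no smoothing scale or cell shape has to be assumed:

```python
# 2-D sketch of the ZOBOV density estimate: the density at each
# particle is the inverse area of its Voronoi cell (unbounded border
# cells are skipped for simplicity).
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(200, 2))           # mock particle positions
vor = Voronoi(pts)

densities = np.full(len(pts), np.nan)
for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:
        continue                                   # unbounded border cell
    cell = vor.vertices[region]                    # cell polygon vertices
    densities[i] = 1.0 / ConvexHull(cell).volume   # in 2-D, .volume is area

# Local density minima seed void "zones"; ZOBOV then merges zones
# across their lowest-density ridges (a watershed-like step).
interior = densities[np.isfinite(densities)]
```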

Dynamical void analysis (DIVA) algorithm

This third-class method is drastically different from the previous two algorithms listed. The most striking aspect is that it requires a different definition of what it means to be a void. Instead of the general notion that a void is a region of space with a low cosmic mean density (a hole in the distribution of galaxies), it defines voids to be regions from which matter is escaping, which corresponds to the dark energy equation of state, w. Void centers are then considered to be the maximal source of the displacement field denoted as Sψ. The purpose for this change in definitions was presented by Lavaux and Wandelt in 2009 as a way to yield cosmic voids such that exact analytical calculations can be made on their dynamical and geometrical properties. This allows DIVA to heavily explore the ellipticity of voids and how they evolve in the large-scale structure, subsequently leading to the classification of three distinct types of voids. These three morphological classes are True voids, Pancake voids, and Filament voids. Another notable quality is that even though DIVA also contains selection function bias just as first-class methods do, DIVA is devised such that this bias can be precisely calibrated, leading to much more reliable results. Multiple shortfalls of this Lagrangian-Eulerian hybrid approach exist. One example is that the resulting voids from this method are intrinsically different than those found by other methods, which makes an all-data points inclusive comparison between results of differing algorithms very difficult.[26]

Robustness testing

Once an algorithm is presented to find what it deems to be cosmic voids, it is crucial that its findings approximately match what is expected by the current simulations and models of large-scale structure. To perform this check, the number, size, and proportion as well as other features of voids found by the algorithm are compared against mock data run through a Smoothed Particle Hydrodynamic Halo simulation, ΛCDM model, or other reliable simulator. An algorithm is much more robust if its data is in concordance with the results of these simulations for a range of input criteria (Pan et al. 2011).[35]

Significance

Voids have contributed significantly to the modern understanding of the cosmos, with applications ranging from shedding light on the current understanding of dark energy, to refining and constraining cosmological evolution models.[4] Some popular applications are mentioned in detail below.

Dark energy

The simultaneous existence of the largest-known voids and galaxy clusters requires about 70% dark energy in the universe today, consistent with the latest data from the cosmic microwave background.[4] Voids act as bubbles in the universe that are sensitive to background cosmological changes. This means that the evolution of a void's shape is in part the result of the expansion of the universe. Since this acceleration is believed to be caused by dark energy, studying the changes of a void's shape over a period of time can further refine the Quintessence + Cold Dark Matter (QCDM) model and provide a more accurate dark energy equation of state.[36] Additionally the abundance of voids is a promising way to constrain the dark energy equation of state.[37]

Galactic formation and evolution models

Large-scale structure formation
A 43×43×43-megaparsec cube shows the evolution of the large-scale structure over a logarithmic period starting from a redshift of 30 and ending at redshift 0. The model clearly shows how the matter-dense regions contract under the collective gravitational force while simultaneously aiding in the expansion of cosmic voids as the matter flees to the walls and filaments.

Cosmic voids contain a mix of galaxies and matter that is slightly different than other regions in the universe. This unique mix supports the biased galaxy formation picture predicted in Gaussian adiabatic cold dark matter models. This phenomenon provides an opportunity to modify the morphology-density correlation that holds discrepancies with these voids. Observations such as the morphology-density correlation can help uncover new facets about how galaxies form and evolve on the large scale.[38] On a more local scale, galaxies that reside in voids have differing morphological and spectral properties than those that are located in the walls. One feature that has been found is that voids contain a significantly higher fraction of starburst galaxies of young, hot stars when compared to samples of galaxies in walls.[39]

Anomalies in anisotropies

Cold spots in the cosmic microwave background, such as the CMB cold spot found by the Wilkinson Microwave Anisotropy Probe (WMAP), could possibly be explained by an extremely large cosmic void with a radius of ~120 Mpc, as long as the late integrated Sachs–Wolfe effect is accounted for in the possible solution. Anomalies in the CMB are now potentially being explained through the existence of large voids located down the line-of-sight in which the cold spots lie.[40]

CMB screening of the universe.

Accelerating expansion of the universe

Although dark energy is currently the most popular explanation for the acceleration in the expansion of the universe, another theory elaborates on the possibility of our galaxy being part of a very large, not-so-underdense cosmic void. According to this theory, such an environment could naively lead to the demand for dark energy to solve the problem with the observed acceleration. As more data has been released on this topic, the chances of this being a realistic solution in place of the current ΛCDM interpretation have been largely diminished but not altogether abandoned.[41]

Gravitational theories

The abundance of voids, particularly when combined with the abundance of clusters of galaxies, is a promising method for precision tests of deviations from general relativity on large scales and in low-density regions.[42]

The insides of voids often seem to adhere to cosmological parameters which differ from those of the known universe. It is because of this unique feature that cosmic voids make for great laboratories to study the effects that gravitational clustering and growth rates have on local galaxies and structure when the cosmological parameters have different values from the outside universe. Because larger voids predominantly remain in a linear regime, with most structures within them exhibiting spherical symmetry in the underdense environment (the underdensity leads to near-negligible particle-particle gravitational interactions that would otherwise occur in a region of normal galactic density), models for voids can be tested with very high accuracy. The cosmological parameters that differ in these voids are Ωm, ΩΛ, and H0.[43]

Vacuum state

From Wikipedia, the free encyclopedia

In quantum field theory, the quantum vacuum state (also called the quantum vacuum or vacuum state) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. Zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.

According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space".[1][2] According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.[3][4][5]

The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s it was reformulated by Feynman, Tomonaga and Schwinger, who jointly received the Nobel Prize for this work in 1965.[6] Today the electromagnetic interactions and the weak interactions are unified in the theory of the electroweak interaction.

The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions.[7]

Non-zero expectation value


The video of an experiment showing vacuum fluctuations (in the red ring) amplified by spontaneous parametric down-conversion.

If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator, or more accurately, the ground state of a measurement problem. In this case the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity) field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass.

Energy

In many situations, the vacuum state can be defined to have zero energy, although the actual situation is considerably more subtle. The vacuum state is associated with a zero-point energy, and this zero-point energy has measurable effects. In the laboratory, it may be detected as the Casimir effect. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. In fact, the energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg (or 0.6 eV).[8] An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant.

Symmetry

For a relativistic field theory, the vacuum is Poincaré invariant, which follows from Wightman axioms but can be also proved directly without these axioms.[9] Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEV's. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred.

Electrical permittivity

In principle, quantum corrections to Maxwell's equations can cause the experimental electrical permittivity ε of the vacuum state to deviate from the defined scalar value ε0 of the electric constant.[10] These theoretical developments are described, for example, in Dittrich and Gies.[5] In particular, the theory of quantum electrodynamics predicts that the QED vacuum should exhibit nonlinear effects that will make it behave like a birefringent material with ε slightly greater than ε0 for extremely strong electric fields.[11][12] Explanations for dichroism from particle physics, outside quantum electrodynamics, also have been proposed.[13] Active attempts to measure such effects have yielded negative results so far.[14]

Notations

The vacuum state is written as |0⟩ or |⟩. The vacuum expectation value (see also Expectation value) of any field φ is written as ⟨0|φ|0⟩.
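As a concrete use of this notation, consider the textbook harmonic oscillator with position operator x̂ expressed in ladder operators. The ground-state expectation value of x̂ vanishes, while its variance does not, which is the prototype of the "vanishing mean, non-vanishing fluctuations" behaviour of quantum fields:

```latex
\hat{x} = \sqrt{\tfrac{\hbar}{2m\omega}}\,(a + a^\dagger), \qquad
\langle 0|\hat{x}|0\rangle = 0, \qquad
\langle 0|\hat{x}^2|0\rangle = \frac{\hbar}{2m\omega} \neq 0 .
```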

Virtual particles

The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not.[15] The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state,[16] and is described picturesquely as evidence of "virtual particles".[17] It is sometimes attempted to provide an intuitive picture of virtual particles, or variances, based upon the Heisenberg energy-time uncertainty principle:
ΔE Δt ≥ ħ,
(where ΔE is the uncertainty in an energy measurement, Δt is the duration of the measurement, and ħ is the reduced Planck constant), arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.[18] Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal.[19][20] One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation such as [q, p] = iħ.[21] Various schemes have been advanced to construct an observable that has some kind of time interpretation and yet satisfies a canonical commutation relation with energy.[22][23] The many approaches to the energy-time uncertainty principle remain a continuing subject of study.[23]
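On the heuristic (and, as noted above, contested) "energy borrowing" reading, a virtual electron-positron pair with ΔE = 2m_ec² could persist for at most Δt ~ ħ/ΔE. A rough estimate, where only the interpretation, not the arithmetic, is disputed:

```python
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J*s

delta_E = 2 * m_e * c**2   # rest energy of a virtual e+e- pair, J
delta_t = hbar / delta_E   # heuristic lifetime from dE * dt ~ hbar
print(delta_t)  # about 6e-22 seconds
```

The extreme brevity of this timescale is one reason such "particles" are never directly observed.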

Physical nature of the quantum vacuum

According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment [mental experiment] the quantum vacuum state."[1] According to Fowler & Guggenheim (1939/1965), the third law of thermodynamics may be precisely enunciated as follows:
It is impossible by any procedure, no matter how idealized, to reduce any assembly to the absolute zero in a finite number of operations.[24] (See also refs.[25][26][27].)
Photon-photon interaction can occur only through interaction with the vacuum state of some other field, for example through the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization.[28] According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations."[29] This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on. According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero."[30][31] In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes:
The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ...
Milonni provides a detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects."[32]
This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, α, goes to zero."[33]

Politics of Europe

From Wikipedia, the free encyclopedia ...