Saturday, October 23, 2021

Expectation value (quantum mechanics)

From Wikipedia, the free encyclopedia

In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. It can be thought of as an average of all the possible outcomes of a measurement as weighted by their likelihood, and as such it is not the most probable value of a measurement; indeed the expectation value may have zero probability of occurring (e.g. measurements which can only yield integer values may have a non-integer mean). It is a fundamental concept in all areas of quantum physics.

Operational definition

Consider an operator $A$. The expectation value is then $\langle A \rangle = \langle \psi | A | \psi \rangle$ in Dirac notation, with $|\psi\rangle$ a normalized state vector.

Formalism in quantum mechanics

In quantum theory, an experimental setup is described by the observable $A$ to be measured, and the state $\rho$ of the system. The expectation value of $A$ in the state $\rho$ is denoted as $\langle A \rangle_\rho$.

Mathematically, $A$ is a self-adjoint operator on a Hilbert space. In the most commonly used case in quantum mechanics, $\rho$ is a pure state, described by a normalized vector $|\psi\rangle$ in the Hilbert space. The expectation value of $A$ in the state $|\psi\rangle$ is defined as

$$\langle A \rangle_\psi = \langle \psi | A | \psi \rangle. \tag{1}$$

If dynamics is considered, either the vector or the operator is taken to be time-dependent, depending on whether the Schrödinger picture or Heisenberg picture is used. The evolution of the expectation value does not depend on this choice, however.

If $A$ has a complete set of eigenvectors $|\phi_j\rangle$, with eigenvalues $a_j$, then (1) can be expressed as

$$\langle A \rangle_\psi = \sum_j a_j \, |\langle \psi | \phi_j \rangle|^2. \tag{2}$$

This expression is similar to the arithmetic mean, and illustrates the physical meaning of the mathematical formalism: the eigenvalues $a_j$ are the possible outcomes of the experiment, and their corresponding coefficient $|\langle \psi | \phi_j \rangle|^2$ is the probability that this outcome will occur; it is often called the transition probability.
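As a concrete numerical illustration of (1) and (2), the following sketch builds a small Hermitian matrix as the observable and checks that the direct inner product and the spectral sum agree. The 2×2 operator, the state vector, and all names are illustrative assumptions, not taken from the text.

```python
import numpy as np

# A self-adjoint (Hermitian) observable and a normalized state vector,
# both chosen arbitrarily for illustration.
A = np.array([[1.0, 1.0j],
              [-1.0j, 2.0]])            # A equals its conjugate transpose
psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)

# Formula (1): <A>_psi = <psi|A|psi>.
expval_direct = np.vdot(psi, A @ psi).real

# Formula (2): sum over eigenvalues a_j weighted by the transition
# probabilities |<phi_j|psi>|^2.
a_j, phi = np.linalg.eigh(A)            # columns of phi are eigenvectors
probs = np.abs(phi.conj().T @ psi) ** 2
expval_spectral = np.sum(a_j * probs)

print(expval_direct, expval_spectral)   # both print 0.5
```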

A particularly simple case arises when $A$ is a projector, and thus has only the eigenvalues 0 and 1. This physically corresponds to a "yes-no" type of experiment. In this case, the expectation value is the probability that the experiment results in "1", and it can be computed as

$$\langle A \rangle_\psi = \| A \psi \|^2. \tag{3}$$
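A minimal numerical check of (3), under the same illustrative conventions as above: for a projector, the expectation value, the squared norm of the projected state, and the probability of the outcome "1" all coincide.

```python
import numpy as np

phi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit vector defining the projector
P = np.outer(phi, phi.conj())               # eigenvalues of P are 0 and 1 only

psi = np.array([1.0, 0.0])                  # normalized state being measured
prob_yes = np.vdot(psi, P @ psi).real       # <psi|P|psi>
norm_sq = np.linalg.norm(P @ psi) ** 2      # ||P psi||^2, as in formula (3)

print(prob_yes, norm_sq)                    # both print 0.5
```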

In quantum theory, it is also possible for an operator to have a non-discrete spectrum, such as the position operator $Q$ in quantum mechanics. This operator has a completely continuous spectrum, with eigenvalues and eigenvectors depending on a continuous parameter, $x$. Specifically, the operator acts on a spatial vector $|x\rangle$ as $Q |x\rangle = x |x\rangle$. In this case, the vector $|\psi\rangle$ can be written as a complex-valued function $\psi(x)$ on the spectrum of $Q$ (usually the real line). This is formally achieved by projecting the state vector onto the eigenvalues of the operator, as in the discrete case: $\psi(x) \equiv \langle x | \psi \rangle$. It happens that the eigenvectors of the position operator form a complete basis for the vector space of states, and therefore obey a closure relation:

$$1 = \int_{-\infty}^{\infty} |x\rangle \langle x| \, dx.$$

The above may be used to derive the common, integral expression for the expected value (4), by inserting identities into the vector expression of the expected value, then expanding in the position basis:

$$\begin{aligned}
\langle Q \rangle_\psi &= \langle \psi | Q | \psi \rangle
= \int_{-\infty}^{\infty} dx \, \langle \psi | x \rangle \langle x | \, Q \int_{-\infty}^{\infty} dx' \, | x' \rangle \langle x' | \psi \rangle \\
&= \iint dx \, dx' \, \langle \psi | x \rangle \, x' \, \langle x | x' \rangle \, \langle x' | \psi \rangle
= \iint dx \, dx' \, \psi^*(x) \, x' \, \delta(x - x') \, \psi(x') \\
&= \int_{-\infty}^{\infty} dx \, \psi^*(x) \, x \, \psi(x)
= \int_{-\infty}^{\infty} dx \, x \, |\psi(x)|^2,
\end{aligned}$$

where the orthonormality relation of the position basis vectors, $\langle x | x' \rangle = \delta(x - x')$, reduces the double integral to a single integral. The last line uses the modulus of a complex-valued function to replace $\psi^*(x)\,\psi(x)$ with $|\psi(x)|^2$, which is a common substitution in quantum-mechanical integrals.

The expectation value may then be stated, where $x$ is unbounded, as the formula

$$\langle Q \rangle_\psi = \int_{-\infty}^{\infty} x \, |\psi(x)|^2 \, dx. \tag{4}$$

A similar formula holds for the momentum operator $P$, in systems where it has a continuous spectrum.

All the above formulas are valid for pure states only. Prominently in thermodynamics and quantum optics, mixed states are also of importance; these are described by a positive trace-class operator $\rho$ of trace 1, the statistical operator or density matrix. The expectation value of $A$ then can be obtained as

$$\langle A \rangle_\rho = \operatorname{Tr}(\rho \, A). \tag{5}$$
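A short sketch of formula (5), with an arbitrary two-level observable and a 50/50 statistical mixture chosen purely for illustration; the trace formula reproduces the probability-weighted average of the pure-state expectation values.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -1.0]])                  # an arbitrary observable

up = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Density matrix of the mixture: 50% |up>, 50% |plus>.
rho = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(plus, plus.conj())

expval_trace = np.trace(rho @ A).real        # formula (5): Tr(rho A)
expval_average = 0.5 * np.vdot(up, A @ up).real \
               + 0.5 * np.vdot(plus, A @ plus).real

print(expval_trace, expval_average)          # both print 0.5
```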

General formulation

In general, quantum states $\rho$ are described by positive normalized linear functionals on the set of observables, mathematically often taken to be a C*-algebra. The expectation value of an observable $A$ is then given by

$$\langle A \rangle_\rho = \rho(A). \tag{6}$$

If the algebra of observables acts irreducibly on a Hilbert space, and if $\rho$ is a normal functional, that is, continuous in the ultraweak topology, then it can be written as

$$\rho(A) = \operatorname{Tr}(X \, A)$$

with a positive trace-class operator $X$ of trace 1. This gives formula (5) above. In the case of a pure state, $X = |\psi\rangle\langle\psi|$ is a projection onto a unit vector $|\psi\rangle$. Then $\rho(A) = \langle \psi | A | \psi \rangle$, which gives formula (1) above.

$A$ is assumed to be a self-adjoint operator. In the general case, its spectrum will neither be entirely discrete nor entirely continuous. Still, one can write $A$ in a spectral decomposition,

$$A = \int a \, dP(a),$$

with a projector-valued measure $P$. For the expectation value of $A$ in a pure state $|\psi\rangle$, this means

$$\langle A \rangle_\psi = \int a \, d\langle \psi | P(a) | \psi \rangle,$$

which may be seen as a common generalization of formulas (2) and (4) above.

In non-relativistic theories of finitely many particles (quantum mechanics, in the strict sense), the states considered are generally normal. However, in other areas of quantum theory, non-normal states are also in use: they appear, for example, in the form of KMS states in quantum statistical mechanics of infinitely extended media, and as charged states in quantum field theory. In these cases, the expectation value is determined only by the more general formula (6).

Example in configuration space

As an example, consider a quantum mechanical particle in one spatial dimension, in the configuration-space representation. Here the Hilbert space is $L^2(\mathbb{R})$, the space of square-integrable functions on the real line. Vectors are represented by functions $\psi(x)$, called wave functions. The scalar product is given by $\langle \psi_1 | \psi_2 \rangle = \int \psi_1^*(x) \, \psi_2(x) \, dx$. The wave functions have a direct interpretation as a probability distribution:

$$p(x) \, dx = \psi^*(x) \, \psi(x) \, dx$$

gives the probability of finding the particle in an infinitesimal interval of length $dx$ about some point $x$.

As an observable, consider the position operator $Q$, which acts on wavefunctions $\psi$ by

$$(Q \psi)(x) = x \, \psi(x).$$

The expectation value, or mean value of measurements, of $Q$ performed on a very large number of identical independent systems will be given by

$$\langle Q \rangle_\psi = \langle \psi | Q | \psi \rangle = \int_{-\infty}^{\infty} x \, |\psi(x)|^2 \, dx.$$

The expectation value only exists if the integral converges, which is not the case for all vectors $\psi$. This is because the position operator is unbounded, and $\psi$ has to be chosen from its domain of definition.

In general, the expectation of any observable can be calculated by replacing $Q$ with the appropriate operator. For example, to calculate the average momentum, one uses the momentum operator in configuration space, $P = -i\hbar \, \frac{d}{dx}$. Explicitly, its expectation value is

$$\langle P \rangle_\psi = -i\hbar \int_{-\infty}^{\infty} \psi^*(x) \, \frac{d\psi(x)}{dx} \, dx.$$
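The two configuration-space integrals above can be evaluated numerically. The sketch below uses an assumed Gaussian wave packet with illustrative parameters x0, s, and k0 (and sets ħ = 1), and recovers ⟨Q⟩ ≈ x0 and ⟨P⟩ ≈ k0 on a grid.

```python
import numpy as np

x0, s, k0 = 1.5, 0.7, 2.0            # center, width, mean wavenumber (assumed)
x = np.linspace(-20.0, 20.0, 20001)
dx = x[1] - x[0]

# Gaussian wave packet with a plane-wave phase, normalized on the grid.
psi = np.exp(-(x - x0) ** 2 / (4 * s ** 2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# <Q>: integral of x |psi(x)|^2 dx.
mean_q = np.sum(x * np.abs(psi) ** 2) * dx

# <P>: -i hbar * integral of psi*(x) dpsi/dx dx, with hbar = 1.
dpsi_dx = np.gradient(psi, dx)
mean_p = (-1j * np.sum(psi.conj() * dpsi_dx) * dx).real

print(mean_q, mean_p)                # approximately 1.5 and 2.0
```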

Not all operators provide a measurable value: an operator that has a purely real expectation value is called an observable, and its value can be directly measured in experiment.

Dark energy

From Wikipedia, the free encyclopedia

In physical cosmology and astronomy, dark energy is an unknown form of energy that affects the universe on the largest scales. The first observational evidence for its existence came from measurements of supernovae, which showed that the universe does not expand at a constant rate; rather, the expansion of the universe is accelerating. Understanding the evolution of the universe requires knowledge of its starting conditions and its composition. Prior to these observations, it was thought that all forms of matter and energy in the universe would only cause the expansion to slow down over time. Measurements of the cosmic microwave background suggest the universe began in a hot Big Bang, from which general relativity explains its evolution and the subsequent large-scale motion. Without introducing a new form of energy, there was no way to explain how an accelerating universe could be measured. Since the 1990s, dark energy has been the most accepted premise to account for the accelerated expansion. As of 2021, there are active areas of cosmology research aimed at understanding the fundamental nature of dark energy.

Assuming that the lambda-CDM model of cosmology is correct, the best current measurements indicate that dark energy contributes 68% of the total energy in the present-day observable universe. The mass–energy of dark matter and ordinary (baryonic) matter contribute 26% and 5%, respectively, and other components such as neutrinos and photons contribute a very small amount. The density of dark energy is very low (~7 × 10⁻³⁰ g/cm³), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the mass–energy of the universe because it is uniform across space.

Two proposed forms of dark energy are the cosmological constant, representing a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities having energy densities that can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to the zero-point radiation of space, i.e., the vacuum energy. Scalar fields that change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.

Due to the toy model nature of concordance cosmology, some experts believe that a more accurate general relativistic treatment of the structures that exist on all scales in the real universe may do away with the need to invoke dark energy. Inhomogeneous cosmologies, which attempt to account for the back-reaction of structure formation on the metric, generally do not acknowledge any dark energy contribution to the energy density of the Universe.

History of discovery and previous speculation

Einstein's cosmological constant

The "cosmological constant" is a constant term that can be added to Einstein's field equation of general relativity. If considered as a "source term" in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or "vacuum energy".

The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution of the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Einstein gave the cosmological constant the symbol Λ (capital lambda). Einstein stated that the cosmological constant required that 'empty space takes the role of gravitating negative masses which are distributed all over the interstellar space'.

The mechanism was an example of fine-tuning, and it was later realized that Einstein's static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and not static at all. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder.

Inflationary dark energy

Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe.

Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter (CDM) and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al. and in Perlmutter et al., and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background (CMB) experiments observed the first acoustic peak in the CMB, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters.

The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael Turner in 1998.

Change in expansion over time

Diagram representing the accelerated expansion of the universe due to dark energy.

High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations.

As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.

Nature

The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain in the realm of speculation. Dark energy is thought to be very homogeneous and not very dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is quite rarefied and un-massive (roughly 10⁻²⁷ kg/m³), it is unlikely to be detectable in laboratory experiments. The reason dark energy can have such a profound effect on the universe, making up 68% of universal density in spite of being so dilute, is that it uniformly fills otherwise empty space.

Independently of its actual nature, dark energy would need to have a strong negative pressure to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure (i.e., tension) in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion".
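The statement can be made quantitative with the second Friedmann (acceleration) equation, which follows from the FLRW metric; a strong enough negative pressure makes the right-hand side positive:

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),$$

so any component with $p < -\rho c^2/3$, such as dark energy with its strongly negative pressure, drives $\ddot{a} > 0$, i.e., accelerated expansion.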

Technical definition

In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. Matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³, while radiation is anything whose energy density scales with the inverse fourth power of the scale factor (ρ ∝ a⁻⁴). This can be understood intuitively: for an ordinary particle in a cube-shaped box, doubling the length of an edge of the box decreases the density (and hence energy density) by a factor of eight (2³). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift.

The final component is dark energy: anything that is, in effect, an intrinsic property of space, having a constant energy density regardless of the dimensions of the volume under consideration (ρ ∝ a⁰). Thus, unlike ordinary matter, it is not diluted by the expansion of space.
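A toy numerical sketch of these scaling laws, using rough present-day density fractions (about 0.31 matter, ~10⁻⁴ radiation, 0.69 dark energy; illustrative round numbers rather than the precise figures quoted elsewhere in this article):

```python
# Approximate fractions of the critical density today (scale factor a = 1).
omega_m, omega_r, omega_de = 0.31, 9e-5, 0.69

for a in (0.5, 1.0, 2.0, 4.0):
    rho_m = omega_m * a ** -3    # matter: diluted by volume
    rho_r = omega_r * a ** -4    # radiation: volume dilution plus redshift
    rho_de = omega_de            # dark energy: constant per unit volume
    print(f"a={a}: matter={rho_m:.3g}  radiation={rho_r:.3g}  dark energy={rho_de:.3g}")
```

Doubling the scale factor cuts the matter density by 2³ = 8 and the radiation density by 2⁴ = 16, while the dark-energy density is unchanged, which is why dark energy eventually dominates.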

Evidence of existence

The evidence for dark energy is indirect but comes from three independent sources:

  • Distance measurements and their relation to redshift, which suggest the universe has expanded more in the latter half of its life.
  • The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature).
  • Measures of large-scale wave patterns of mass density in the universe.

Supernovae

A Type Ia supernova (bright spot on the bottom-left) near a galaxy

In 1998, the High-Z Supernova Search Team published observations of Type Ia ("one-A") supernovae. In 1999, the Supernova Cosmology Project followed by suggesting that the expansion of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery.

Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model. Some people argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the local measured Hubble constant.

Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme and consistent luminosity.

Recent observations of supernovae are consistent with a universe made up of 71.3% dark energy and 27.4% of a combination of dark matter and baryonic matter.

Cosmic microwave background

Estimated division of total energy in the universe into matter, dark matter and dark energy based on five years of WMAP data.

The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass–energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the CMB spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter. Work done in 2013 based on the Planck spacecraft observations of the CMB gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter.

Large-scale structure

The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density.

A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence towards the existence of dark energy, although the exact physics behind it remains unknown. The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left voids of roughly 150 Mpc diameter at regular intervals, surrounded by galaxies, the voids were used as standard rulers to estimate distances to galaxies as far as 2,000 Mpc (redshift 0.6), allowing for an accurate estimate of the speeds of galaxies from their redshift and distance. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10. This provides a confirmation of cosmic acceleration independent of supernovae.

Late-time integrated Sachs–Wolfe effect

Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe. It was reported at high significance in 2008 by Ho et al. and Giannantonio et al.

Observational Hubble constant data

A new approach to test evidence of dark energy through observational Hubble constant data (OHD) has gained significant attention in recent years.

The Hubble parameter, $H(z)$, is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers", and in this way provides standard clocks in the universe. The core of this idea is the measurement of the differential age evolution as a function of redshift of these cosmic chronometers. Thus, it provides a direct estimate of the Hubble parameter

$$H(z) = -\frac{1}{1+z}\frac{dz}{dt} \approx -\frac{1}{1+z}\frac{\Delta z}{\Delta t}.$$

The reliance on a differential quantity, Δz/Δt, brings more information and is appealing for computation: It can minimize many common issues and systematic effects. Analyses of supernovae and baryon acoustic oscillations (BAO) are based on integrals of the Hubble parameter, whereas Δz/Δt measures it directly. For these reasons, this method has been widely used to examine the accelerated cosmic expansion and study properties of dark energy.
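A schematic sketch of the cosmic-chronometer estimate above. The two redshift/age pairs below are invented purely for illustration; real analyses use many galaxies and careful stellar-population age modelling.

```python
GYR_TO_KM_S_MPC = 977.8   # converts H from 1/Gyr to km/s/Mpc

def hubble_from_chronometers(z1, t1, z2, t2):
    """Finite-difference estimate of H(z) = -(1/(1+z)) dz/dt at the mean z.

    z1, z2: redshifts of two passively evolving galaxies
    t1, t2: their ages in Gyr (the older galaxy sits at lower redshift)
    """
    z_mid = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2 - t1)        # age decreases as redshift increases
    return -dz_dt / (1.0 + z_mid)        # in 1/Gyr

# Hypothetical chronometer pair: z=0.40 (age 9.5 Gyr) and z=0.45 (age 9.0 Gyr).
h = hubble_from_chronometers(0.40, 9.5, 0.45, 9.0)
print(h * GYR_TO_KM_S_MPC, "km/s/Mpc")   # roughly 69
```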

Direct observation

An attempt to directly observe dark energy in a laboratory failed to detect a new force. Recently, it has been speculated that the currently unexplained excess observed in the XENON1T detector in Italy may have been caused by a chameleon model of dark energy.

Theories of dark energy

Dark energy's status as a hypothetical force with unknown properties makes it a very active target of research. The problem is attacked from a great variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data.

The equation of state of dark energy for four common models, as a function of redshift. A: CPL model; B: Jassal model; C: Barboza & Alcaniz model; D: Wetterich model.

Cosmological constant

Estimated distribution of matter and energy in the universe

The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence the Lambda-CDM model). Since energy and mass are related according to the equation E = mc², Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty space – the vacuum.

A major outstanding problem is that quantum field theories predict a huge cosmological constant, about 120 orders of magnitude too large. This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign.

Some supersymmetric theories require a cosmological constant that is exactly zero. Also, it is unknown if there is a metastable vacuum state in string theory with a positive cosmological constant, and it has been conjectured by Ulf Danielsson et al. that no such state exists. This conjecture would not rule out other models of dark energy, such as quintessence, that could be compatible with string theory.

Quintessence

In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength.

No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.

The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called "tracker" behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter–radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.

In 2004, when scientists fit the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that this scenario requires models with at least two types of quintessence. This scenario is the so-called quintom scenario.

Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy such as a negative kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.

Interacting dark energy

This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance, or postulate that cold dark matter decays into dark energy. Another class of theories that unifies dark matter and dark energy is suggested to be covariant theories of modified gravities. These theories alter the dynamics of spacetime such that the modified dynamics accounts for what has been attributed to the presence of dark energy and dark matter. Dark energy could in principle interact not only with the rest of the dark sector, but also with ordinary matter. However, cosmology alone is not sufficient to effectively constrain the strength of the coupling between dark energy and baryons, so other indirect techniques or laboratory searches have to be adopted. A recent proposal speculates that the currently unexplained excess observed in the XENON1T detector in Italy may have been caused by a chameleon model of dark energy.

Variable dark energy models

The density of the dark energy might have varied in time during the history of the universe. Modern observational data allow us to estimate the present density of the dark energy. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy in the history of the universe, and constrain the parameters of the equation of state of dark energy. To that end, several models have been proposed. One of the most popular is the Chevallier–Polarski–Linder (CPL) model. Some other common models are those of Barboza & Alcaniz (2008), Jassal et al. (2005), Wetterich (2004), and Oztas et al. (2018).
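For reference, the CPL model parametrizes the dark-energy equation of state as a linear function of the scale factor $a$ (this standard form is stated here from general knowledge rather than from the text above):

$$w(a) = w_0 + w_a (1 - a),$$

where $w_0$ is the present-day value and $w_a$ controls its evolution; $w_0 = -1$, $w_a = 0$ recovers the cosmological constant.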

Observational skepticism

Some alternatives to dark energy, such as inhomogeneous cosmology, aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy does not actually exist, and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the statistical methods employed were flawed. It has also been suggested that the anisotropy of the local Universe has been misrepresented as dark energy. This claim was quickly countered by others, including a paper by physicists D. Rubin and J. Heitlauf. A laboratory direct-detection attempt failed to detect any force associated with dark energy.

A study published in 2020 questioned the validity of the essential assumption that the luminosity of Type Ia supernovae does not vary with stellar population age, and suggests that dark energy may not actually exist. Lead researcher of the new study, Young-Wook Lee of Yonsei University, said "Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption." Multiple issues with this paper were raised by other cosmologists, including Adam Riess, who won the 2011 Nobel Prize for the discovery of dark energy.

Other mechanisms driving acceleration

Modified gravity

The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity also eliminates the need for dark energy. There are very many such theories, and research is ongoing. The measurement of the speed of gravity in the first gravitational wave measured by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations to dark energy.

Astrophysicist Ethan Siegel states that, while such alternatives gain a lot of mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists, and that none of the competing theories successfully explain observations to the same level of precision as standard dark energy.

Implications for the fate of the universe

Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than that of dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant).

Projections into the future can differ radically for different models of dark energy. For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light. This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.

However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away.
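The quoted ~16 billion light-year figure can be reproduced with a short numerical integration of the comoving event horizon, $d_{EH} = c \int_1^\infty da / (a^2 H(a))$, in an assumed flat Lambda-CDM model; H0 = 70 km/s/Mpc, Ωm = 0.3, and ΩΛ = 0.7 are illustrative round numbers.

```python
import numpy as np

C_KM_S = 299792.458            # speed of light, km/s
H0, OM, OL = 70.0, 0.3, 0.7    # assumed cosmological parameters

def hubble(a):
    """Hubble parameter for a flat Lambda-CDM universe, in km/s/Mpc."""
    return H0 * np.sqrt(OM * a ** -3 + OL)

# Riemann sum for the event-horizon integral; a = 1000 stands in for infinity
# (the integrand falls off as 1/a^2, so the truncated tail is negligible).
a = np.linspace(1.0, 1000.0, 2_000_001)
da = a[1] - a[0]
d_mpc = C_KM_S * np.sum(1.0 / (a ** 2 * hubble(a))) * da   # in Mpc

print(d_mpc * 3.2616e-3, "billion light-years")            # roughly 16
```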

As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely. Planet Earth, the Milky Way, and the Local Group of which the Milky Way is a part, would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration.

There are other, more speculative ideas about the future of the universe. The phantom energy model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility that gravity might eventually prevail, leading to a universe that contracts in on itself in a "Big Crunch", or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (10¹²) years. While none of these are supported by observations, they are not ruled out.

In philosophy of science

In philosophy of science, dark energy is an example of an "auxiliary hypothesis", an ad hoc postulate that is added to a theory in response to observations that falsify it. It has been argued that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper.

Eternal inflation

From Wikipedia, the free encyclopedia

Eternal inflation is a hypothetical inflationary universe model, which is itself an outgrowth or extension of the Big Bang theory.

According to eternal inflation, the inflationary phase of the universe's expansion lasts forever throughout most of the universe. Because the regions expand exponentially rapidly, most of the volume of the universe at any given time is inflating. Eternal inflation, therefore, produces a hypothetically infinite multiverse, in which only an insignificant fractal volume ends inflation.

Paul Steinhardt, one of the original researchers of the inflationary model, introduced the first example of eternal inflation in 1983, and Alexander Vilenkin showed that it is generic.

Alan Guth's 2007 paper, "Eternal inflation and its implications", states that under reasonable assumptions "Although inflation is generically eternal into the future, it is not eternal into the past." Guth detailed what was known about the subject at the time, and demonstrated that eternal inflation was still considered the likely outcome of inflation, more than 20 years after eternal inflation was first introduced by Steinhardt.

Overview

Development of the theory

Inflation, or the inflationary universe theory, was originally developed as a way to overcome the few remaining problems with what was otherwise considered a successful theory of cosmology, the Big Bang model.

In 1979, Alan Guth introduced the inflationary model of the universe to explain why the universe is flat and homogeneous (which refers to the smooth distribution of matter and radiation on a large scale). The basic idea was that the universe underwent a period of rapidly accelerating expansion a few instants after the Big Bang. He offered a mechanism for causing the inflation to begin: false vacuum energy. Guth coined the term "inflation," and was the first to discuss the theory with other scientists worldwide.

Guth's original formulation was problematic, as there was no consistent way to bring an end to the inflationary epoch and end up with the hot, isotropic, homogeneous universe observed today. Although the false vacuum could decay into empty "bubbles" of "true vacuum" that expanded at the speed of light, the empty bubbles could not coalesce to reheat the universe, because they could not keep up with the remaining inflating universe.

In 1982, this "graceful exit problem" was solved independently by Andrei Linde and by Andreas Albrecht and Paul J. Steinhardt who showed how to end inflation without making empty bubbles and, instead, end up with a hot expanding universe. The basic idea was to have a continuous "slow-roll" or slow evolution from false vacuum to true without making any bubbles. The improved model was called "new inflation."

In 1983, Paul Steinhardt was the first to show that this "new inflation" does not have to end everywhere. Instead, it might only end in a finite patch or a hot bubble full of matter and radiation, and that inflation continues in most of the universe while producing hot bubble after hot bubble along the way. Alexander Vilenkin showed that when quantum effects are properly included, this is actually generic to all new inflation models.

Using ideas introduced by Steinhardt and Vilenkin, Andrei Linde published an alternative model of inflation in 1986 which used these ideas to provide a detailed description of what has become known as the Chaotic Inflation theory or eternal inflation.

Quantum fluctuations

New inflation does not produce a perfectly symmetric universe due to quantum fluctuations during inflation. The fluctuations cause the energy and matter density to differ at different points in space.

Quantum fluctuations in the hypothetical inflaton field produce changes in the rate of expansion that are responsible for eternal inflation. Those regions with a higher rate of inflation expand faster and dominate the universe, despite the natural tendency of inflation to end in other regions. This allows inflation to continue forever, to produce future-eternal inflation. As a simplified example, suppose that during inflation, the natural decay rate of the inflaton field is slow compared to the effect of quantum fluctuation. When a mini-universe inflates and "self-reproduces" into, say, twenty causally-disconnected mini-universes of equal size to the original mini-universe, perhaps nine of the new mini-universes will have a larger, rather than smaller, average inflaton field value than the original mini-universe, because they inflated from regions of the original mini-universe where quantum fluctuation pushed the inflaton value up more than the slow inflaton decay rate brought the inflaton value down. Originally there was one mini-universe with a given inflaton value; now there are nine mini-universes that have a slightly larger inflaton value. (Of course, there are also eleven mini-universes where the inflaton value is slightly lower than it originally was.) Each mini-universe with the larger inflaton field value restarts a similar round of approximate self-reproduction within itself. (The mini-universes with lower inflaton values may also reproduce, unless their inflaton values are small enough that the regions drop out of inflation and cease self-reproduction.) This process continues indefinitely; nine high-inflaton mini-universes might become 81, then 729... Thus, there is eternal inflation.
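The counting in this example can be made explicit with a few lines of code; the 20-daughter, 9-high split is just the illustrative ratio used above, not a physical prediction.

```python
# Toy bookkeeping for eternal inflation: each generation, every high-inflaton
# mini-universe spawns 20 causally disconnected daughters, 9 of which end up
# with a higher inflaton value and keep inflating vigorously.
high = 1                                 # start with one mini-universe
for generation in range(1, 5):
    high *= 9                            # 9 of each 20 daughters go higher
    print(f"generation {generation}: {high} high-inflaton mini-universes")
# prints 9, 81, 729, 6561: the inflating volume grows without bound,
# which is the sense in which inflation becomes future-eternal.
```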

In 1980, Viatcheslav Mukhanov and Gennady Chibisov in the Soviet Union, working in the context of Alexei Starobinsky's model of modified gravity, suggested that quantum fluctuations could be possible seeds for forming galaxies.

In the context of inflation, quantum fluctuations were first analyzed at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The average strength of the fluctuations was first calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Guth and So-Young Pi; and James M. Bardeen, Paul Steinhardt and Michael Turner.

The early calculations derived at the Nuffield Workshop only focused on the average fluctuations, whose magnitude is too small to affect inflation. However, beginning with the examples presented by Steinhardt and Vilenkin, the same quantum physics was later shown to produce occasional large fluctuations that increase the rate of inflation and keep inflation going eternally.

Further developments

In analyzing the Planck Satellite data from 2013, Anna Ijjas and Paul Steinhardt showed that the simplest textbook inflationary models were eliminated and that the remaining models require exponentially more tuned starting conditions, more parameters to be adjusted, and less inflation. Later Planck observations reported in 2015 confirmed these conclusions.

A 2014 paper by Kohli and Haslam called into question the viability of the eternal inflation theory, by analyzing Linde's chaotic inflation theory in which the quantum fluctuations are modeled as Gaussian white noise. They showed that in this popular scenario, eternal inflation in fact cannot be eternal, and the random noise leads to spacetime being filled with singularities. This was demonstrated by showing that solutions to the Einstein field equations diverge in a finite time. Their paper therefore concluded that the theory of eternal inflation based on random quantum fluctuations would not be a viable theory, and the resulting existence of a multiverse is "still very much an open question that will require much deeper investigation".

Inflation, eternal inflation, and the multiverse

In 1983, it was shown that inflation could be eternal, leading to a multiverse in which space is broken up into bubbles or patches whose properties differ from patch to patch spanning all physical possibilities.

Paul Steinhardt, who produced the first example of eternal inflation, eventually became a strong and vocal opponent of the theory. He argued that the multiverse represented a breakdown of the inflationary theory, because, in a multiverse, any outcome is equally possible, so inflation makes no predictions and, hence, is untestable. Consequently, he argued, inflation fails a key condition for a scientific theory.

Both Linde and Guth, however, continued to support the inflationary theory and the multiverse. Guth declared:

It's hard to build models of inflation that don't lead to a multiverse. It's not impossible, so I think there's still certainly research that needs to be done. But most models of inflation do lead to a multiverse, and evidence for inflation will be pushing us in the direction of taking the idea of a multiverse seriously.

According to Linde, "It's possible to invent models of inflation that do not allow a multiverse, but it's difficult. Every experiment that brings better credence to inflationary theory brings us much closer to hints that the multiverse is real."

In 2018, the late Stephen Hawking and Thomas Hertog published a paper in which the need for an infinite multiverse vanishes; as Hawking described it, their theory gives universes which are "reasonably smooth and globally finite". The theory uses the holographic principle to define an 'exit plane' from the timeless state of eternal inflation. The universes generated on this plane are described using a redefinition of the no-boundary wavefunction; in fact, the theory requires a boundary at the beginning of time. Stated simply, Hawking said that their findings "imply a significant reduction of the multiverse", which, as the University of Cambridge points out, makes the theory "predictive and testable" using gravitational wave astronomy.

 

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...