
Sunday, September 21, 2014

Virtual particle

Virtual particle

From Wikipedia, the free encyclopedia
In physics, a virtual particle is a transient fluctuation that exhibits many of the characteristics of an ordinary particle, but that exists for a limited time. The concept of virtual particles arises in perturbation theory of quantum field theory where interactions between ordinary particles are described in terms of exchanges of virtual particles. Any process involving virtual particles admits a schematic representation known as a Feynman diagram, in which virtual particles are represented by internal lines.[1][2]

Virtual particles do not necessarily carry the same mass as the corresponding real particle, and they do not always have to conserve energy and momentum, since, being short-lived and transient, their existence is subject to the uncertainty principle. The longer the virtual particle exists, the closer its characteristics come to those of ordinary particles. They are important in the physics of many processes, including particle scattering and Casimir forces. In quantum field theory, even classical forces — such as the electromagnetic repulsion or attraction between two charges — can be thought of as due to the exchange of many virtual photons between the charges.
The term is somewhat loose and vaguely defined, in that it refers to the view that the world is made up of "real particles": it is not; rather, "real particles" are better understood to be excitations of the underlying quantum fields. Virtual particles are also excitations of the underlying fields, but are "temporary" in the sense that they appear in calculations of interactions, but never as asymptotic states or indices to the scattering matrix.

Antiparticles should not be confused with virtual particles or virtual antiparticles.
It is common to find physicists who believe that, because of its intrinsically perturbative character, the concept of virtual particles is confusing and misleading, and is thus best avoided.[3][4]

Properties

The concept of virtual particles arises in the perturbation theory of quantum field theory, an approximation scheme in which interactions (in essence, forces) between actual particles are calculated in terms of exchanges of virtual particles.[citation needed] Such calculations are often performed using schematic representations known as Feynman diagrams, in which virtual particles appear as internal lines.

A virtual particle does not precisely obey the energy-momentum relation m²c⁴ = E² − p²c².[5] In other words, its kinetic energy may not have the usual relationship to velocity; indeed, it can be negative. The probability amplitude for it to exist tends to be canceled out by destructive interference over longer distances and times. A virtual particle can be considered a manifestation of quantum tunnelling. The range of forces carried by virtual particles is limited by the uncertainty principle, which regards energy and time as conjugate variables; thus, virtual particles of larger mass have more limited range.[citation needed]
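
To make the last point concrete, the characteristic range of a force mediated by a virtual particle of mass m is of order ħ/(mc), the reduced Compton wavelength; factors of order one depend on convention. A minimal Python sketch of this estimate, using the pion and the W boson as examples:

# Rough estimate of the range of a force carried by a virtual particle:
# the uncertainty principle allows the particle to exist for about Δt ≈ ħ/(m c²),
# during which it can travel at most roughly c·Δt = ħ/(m c).
HBAR_C_MEV_FM = 197.327   # ħc in MeV·fm, a convenient combination of constants

def force_range_fm(mass_mev):
    """Characteristic range (fm) of a force mediated by a virtual particle
    of the given rest mass (MeV/c²); factors of order one are ignored."""
    return HBAR_C_MEV_FM / mass_mev

for name, mass_mev in [("pion (m ≈ 140 MeV)", 139.6), ("W boson (m ≈ 80 GeV)", 80379.0)]:
    print(f"{name}: range ≈ {force_range_fm(mass_mev):.4f} fm")
# pion: ≈ 1.4 fm (the scale of the nuclear force); W boson: ≈ 0.0025 fm,
# which is part of why the weak interaction is so short-ranged.
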
Written in the usual mathematical notations, in the equations of physics, there is no mark of the distinction between virtual and actual particles. The amplitude that a virtual particle exists interferes with the amplitude for its non-existence, whereas for an actual particle the cases of existence and non-existence cease to be coherent with each other and do not interfere any more. In the quantum field theory view, actual particles are viewed as being detectable excitations of underlying quantum fields. Virtual particles are also viewed as excitations of the underlying fields, but are detectable only as forces but not particles. They are "temporary" in the sense that they appear in calculations, but are not detected as single particles. Thus, in mathematical terms, they never appear as indices to the scattering matrix, which is to say, they never appear as the observable inputs and outputs of the physical process being modelled.

There are two principal ways in which the notion of virtual particles appears in modern physics. They appear as intermediate terms in Feynman diagrams; that is, as terms in a perturbative calculation. They also appear as an infinite set of states to be summed or integrated over in the calculation of a semi-non-perturbative effect. In the latter case, it is sometimes said that virtual particles contribute to a mechanism that mediates the effect, or that the effect occurs through the virtual particles.

Manifestations

There are many observable physical phenomena that arise in interactions involving virtual particles.
For bosonic particles that exhibit rest mass when they are free and actual, virtual interactions are characterized by the relatively short range of the force interaction produced by particle exchange.[citation needed] Examples of such short-range interactions are the strong and weak forces, and their associated field bosons. For the gravitational and electromagnetic forces, the zero rest-mass of the associated boson particle permits long-range forces to be mediated by virtual particles. However, in the case of photons, power and information transfer by virtual particles is a relatively short-range phenomenon (existing only within a few wavelengths of the field-disturbance, which carries information or transferred power), as for example seen in the characteristically short range of inductive and capacitive effects in the near field zone of coils and antennas.[citation needed]

Some field interactions which may be seen in terms of virtual particles are:
  • The Coulomb force (static electric force) between electric charges. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse square law for electric force. Since the photon has no mass, the Coulomb potential has an infinite range (the short sketch after this list contrasts this with the short-ranged potential of a massive exchange particle).
  • The magnetic field between magnetic dipoles. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse cube law for magnetic force. Since the photon has no mass, the magnetic potential has an infinite range.
  • Electromagnetic induction. This phenomenon transfers energy to and from a magnetic coil via a changing (electro)magnetic field.
  • The strong nuclear force between quarks is the result of interaction of virtual gluons. The residual of this force outside of quark triplets (neutron and proton) holds neutrons and protons together in nuclei, and is due to virtual mesons such as the pi meson and rho meson.
  • The weak nuclear force, which results from the exchange of virtual W and Z bosons.
  • The spontaneous emission of a photon during the decay of an excited atom or excited nucleus; such a decay is prohibited by ordinary quantum mechanics and requires the quantization of the electromagnetic field for its explanation.
  • The Casimir effect, where the ground state of the quantized electromagnetic field causes attraction between a pair of electrically neutral metal plates.
  • The van der Waals force, which is partly due to the Casimir effect between two atoms.
  • Vacuum polarization, which involves pair production or the decay of the vacuum, which is the spontaneous production of particle-antiparticle pairs (such as electron-positron).
  • Lamb shift of positions of atomic levels.
  • Hawking radiation, where the gravitational field is so strong that it causes the spontaneous production of photon pairs (with black body energy distribution) and even of particle pairs.
  • Much of the so-called near-field of radio antennas, where the magnetic and electric effects of the changing current in the antenna wire and the charge effects of the wire's capacitive charge may be (and usually are) important contributors to the total EM field close to the source, both of which are dipole effects that decay with increasing distance from the antenna much more quickly than does the influence of "conventional" electromagnetic waves that are "far" from the source ("far" in terms of the ratio of antenna length or diameter to wavelength). These far-field waves, for which E is (in the limit of long distance) equal to cB, are composed of actual photons. Actual and virtual photons are mixed near an antenna, with the virtual photons responsible only for the "extra" magnetic-inductive and transient electric-dipole effects, which cause any imbalance between E and cB. As distance from the antenna grows, the near-field effects (as dipole fields) die out more quickly, and only the "radiative" effects that are due to actual photons remain as important effects. Although virtual effects extend to infinity, they drop off in field strength as 1/r² rather than as the 1/r falloff of EM waves composed of actual photons (the powers, respectively, decrease as 1/r⁴ and 1/r²). See near and far field for a more detailed discussion. See near field communication for practical communications applications of near fields.
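
The contrast between massless and massive exchange particles in the list above (the Coulomb item and the meson-mediated strong-force item) can be illustrated by comparing a Coulomb-like 1/r potential with a Yukawa potential, exp(−r/a)/r, whose screening length a = ħ/(mc) is set by the mediator's mass. The sketch below is purely illustrative: the overall prefactors are set to 1, and the pion range is taken as roughly 1.4 fm.

import math

# Compare a Coulomb-like potential (massless mediator, infinite range) with a
# Yukawa potential (massive mediator, range a = ħ/(m c)).  Prefactors are set
# to 1: only the shape of the falloff matters for this comparison.
PION_RANGE_FM = 1.4   # roughly ħ/(m_π c), in femtometres

def coulomb(r_fm):
    return 1.0 / r_fm

def yukawa(r_fm, a_fm=PION_RANGE_FM):
    return math.exp(-r_fm / a_fm) / r_fm

for r in [0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"r = {r:4.1f} fm   Coulomb ∝ {coulomb(r):7.4f}   Yukawa ∝ {yukawa(r):9.6f}")
# At r = 10 fm the Yukawa potential is already over a thousand times smaller than
# 1/r, which is why meson-mediated nuclear forces act only over nuclear distances.
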
Most of these have analogous effects in solid-state physics; indeed, one can often gain a better intuitive understanding by examining these cases. In semiconductors, the roles of electrons, positrons and photons in field theory are replaced by electrons in the conduction band, holes in the valence band, and phonons or vibrations of the crystal lattice. A virtual particle is in a virtual state where the probability amplitude is not conserved. Examples of macroscopic virtual phonons, photons, and electrons in the case of the tunneling process were presented by Günter Nimtz[6] and Alfons A. Stahlhofen.[7]

History

Paul Dirac was the first to propose that empty space (a vacuum) can be visualized as consisting of a sea of electrons with negative energy, known as the Dirac sea. The Dirac sea has a direct analog to the electronic band structure in crystalline solids as described in solid state physics. Here, particles correspond to conduction electrons, and antiparticles to holes. A variety of interesting phenomena can be attributed to this structure. The development of quantum field theory (QFT) in the 1930s made it possible to reformulate the Dirac equation in a way that treats the positron as a "real" particle rather than the absence of a particle, and makes the vacuum the state in which no particles exist instead of an infinite sea of particles.

Feynman diagrams

One particle exchange scattering diagram

The calculation of scattering amplitudes in theoretical particle physics requires the use of some rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented as Feynman diagrams. The appeal of Feynman diagrams is strong, as they allow for a simple visual presentation of what would otherwise be a rather arcane and abstract formula. In particular, part of the appeal is that the outgoing legs of a Feynman diagram can be associated with actual, on-shell particles. Thus, it is natural to associate the other lines in the diagram with particles as well, called the "virtual particles". In mathematical terms, they correspond to the propagators appearing in the diagram.

In the image to the right, the solid lines correspond to actual particles (of momentum p1 and so on), while the dotted line corresponds to a virtual particle carrying momentum k. For example, if the solid lines were to correspond to electrons interacting by means of the electromagnetic interaction, the dotted line would correspond to the exchange of a virtual photon. In the case of interacting nucleons, the dotted line would be a virtual pion. In the case of quarks interacting by means of the strong force, the dotted line would be a virtual gluon, and so on.
One-loop diagram with fermion propagator

Virtual particles may be mesons or vector bosons, as in the example above; they may also be fermions. However, in order to preserve quantum numbers, most simple diagrams involving fermion exchange are prohibited. The image to the right shows an allowed diagram, a one-loop diagram. The solid lines correspond to a fermion propagator, the wavy lines to bosons.

Vacuums

In formal terms, a particle is considered to be an eigenstate of the particle number operator a†a, where a is the particle annihilation operator and a† the particle creation operator (sometimes collectively called ladder operators). In many cases, the particle number operator does not commute with the Hamiltonian for the system. This implies the number of particles in an area of space is not a well-defined quantity but, like other quantum observables, is represented by a probability distribution. Since these particles do not have a permanent existence,[clarification needed] they are called virtual particles or vacuum fluctuations of vacuum energy. In a certain sense, they can be understood to be a manifestation of the time-energy uncertainty principle in a vacuum.[8][9]
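
The ladder-operator algebra referred to here is easy to check numerically. The minimal sketch below builds a truncated annihilation matrix a in the number basis, forms the number operator a†a, and verifies the commutator [a, a†] = 1; the truncation dimension is an arbitrary choice, and the defect in the last entry is a truncation artifact.

import numpy as np

N = 6  # truncation dimension (an arbitrary choice, for illustration only)

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>,
# i.e. matrix elements a[n-1, n] = sqrt(n).
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
a_dag = a.conj().T                 # creation operator a†

number_op = a_dag @ a              # particle number operator a†a
print(np.diag(number_op))          # eigenvalues 0, 1, 2, ..., N-1 (particle counts)

# [a, a†] = 1 holds exactly except for the last diagonal entry,
# which is an artifact of truncating the infinite-dimensional space.
print(np.round(a @ a_dag - a_dag @ a, 10))
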
An important example of the "presence" of virtual particles in a vacuum is the Casimir effect.[10] Here, the explanation of the effect requires that the total energy of all of the virtual particles in a vacuum can be added together. Thus, although the virtual particles themselves are not directly observable in the laboratory, they do leave an observable effect: Their zero-point energy results in forces acting on suitably arranged metal plates or dielectrics. On the other hand, the Casimir effect can also be interpreted as a relativistic van der Waals force.
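
For a sense of scale, the standard Casimir result for ideal, perfectly conducting parallel plates is an attractive pressure P = π²ħc/(240 d⁴), where d is the plate separation. The sketch below simply evaluates this formula; the separations used are example values.

import math

hbar = 1.054_571_817e-34  # J·s
c = 2.997_924_58e8        # m/s

def casimir_pressure(d_m):
    """Attractive pressure (Pa) between ideal, perfectly conducting parallel
    plates a distance d_m apart: P = π² ħ c / (240 d⁴)."""
    return math.pi**2 * hbar * c / (240.0 * d_m**4)

for d in [1e-6, 100e-9, 10e-9]:
    print(f"gap = {d * 1e9:6.0f} nm   pressure ≈ {casimir_pressure(d):.1e} Pa")
# ≈ 1.3e-3 Pa at 1 µm, ≈ 13 Pa at 100 nm, and ≈ 1.3e5 Pa (roughly an atmosphere)
# at 10 nm, because of the steep 1/d⁴ dependence.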

Pair production

In order to conserve the total fermion number of the universe, a fermion cannot be created without also creating its antiparticle; thus, many physical processes lead to pair creation. The need for the normal ordering of particle fields in the vacuum can be interpreted by the idea that a pair of virtual particles may briefly "pop into existence", and then annihilate each other a short while later.
Thus, virtual particles are often popularly described as coming in pairs, a particle and antiparticle, which can be of any kind. These pairs exist for an extremely short time, and mutually annihilate in short order. In some cases, however, it is possible to boost the pair apart using external energy so that they avoid annihilation and become actual particles.

This may occur in one of two ways. In an accelerating frame of reference, the virtual particles may appear to be actual to the accelerating observer; this is known as the Unruh effect. In short, the vacuum of a stationary frame appears, to the accelerated observer, to be a warm gas of actual particles in thermodynamic equilibrium.

Another example is pair production in very strong electric fields, sometimes called vacuum decay. If, for example, a pair of atomic nuclei are merged to very briefly form a nucleus with a charge greater than about 140 (that is, larger than about the inverse of the fine structure constant, which is a dimensionless quantity), the strength of the electric field will be such that it will be energetically favorable to create positron-electron pairs out of the vacuum or Dirac sea, with the electron attracted to the nucleus to annihilate the positive charge. This pair-creation amplitude was first calculated by Julian Schwinger in 1951.

The restriction to particle–antiparticle pairs is actually only necessary if the particles in question carry a conserved quantity, such as electric charge, which is not present in the initial or final state. Otherwise, other situations can arise. For instance, the beta decay of a neutron can happen through the emission of a single virtual, negatively charged W particle that almost immediately decays into an actual electron and antineutrino; the neutron turns into a proton when it emits the W particle. The evaporation of a black hole is a process dominated by photons, which are their own antiparticles and are uncharged.

Actual and virtual particles compared

As a consequence of quantum mechanical uncertainty, any object or process that exists for a limited time or in a limited volume cannot have a precisely defined energy or momentum. This is the reason that virtual particles — which exist only temporarily as they are exchanged between ordinary particles — do not necessarily obey the mass-shell relation. However, the longer a virtual particle exists, the more closely it adheres to the mass-shell relation. A "virtual" particle that exists for an arbitrarily long time is simply an ordinary particle.

However, all particles have a finite lifetime, as they are created and eventually destroyed by some processes. As such, there is no absolute distinction between "real" and "virtual" particles. In practice, the lifetime of "ordinary" particles is far longer than the lifetime of the virtual particles that contribute to processes in particle physics, and as such the distinction is useful to make.

Vacuum energy

Vacuum energy

From Wikipedia, the free encyclopedia
Vacuum energy is an underlying background energy that exists in space throughout the entire Universe. One contribution to the vacuum energy may be from virtual particles which are thought to be particle pairs that blink into existence and then annihilate in a timespan too short to observe. They are expected to do this everywhere, throughout the Universe. Their behavior is codified in Heisenberg's energy–time uncertainty principle. Still, the exact effect of such fleeting bits of energy is difficult to quantify.
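
As a rough illustration of that uncertainty relation, ΔE·Δt ≳ ħ/2 bounds how long a fluctuation can "borrow" a given amount of energy. The sketch below applies it to the rest energy of an electron-positron pair; the factor of 2 in the bound is a convention, so the result is an order-of-magnitude estimate only.

hbar = 1.054_571_817e-34          # J·s
MEV_TO_JOULE = 1.602_176_634e-13

def max_lifetime_s(borrowed_energy_mev):
    """Rough upper bound on how long a fluctuation can 'borrow' the given energy,
    from ΔE·Δt ≳ ħ/2 (conventions differ by factors of order one)."""
    return hbar / (2.0 * borrowed_energy_mev * MEV_TO_JOULE)

# An electron-positron pair needs at least 2 × 0.511 MeV of rest energy.
print(f"virtual e+e- pair: Δt ≲ {max_lifetime_s(2 * 0.511):.1e} s")   # ≈ 3e-22 s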

The effects of vacuum energy can be experimentally observed in various phenomena such as spontaneous emission, the Casimir effect and the Lamb shift, and are thought to influence the behavior of the Universe on cosmological scales. Using the upper limit of the cosmological constant, the vacuum energy in a cubic meter of free space has been estimated to be 10⁻⁹ joules (10⁻² ergs).[1] However, in both Quantum Electrodynamics (QED) and Stochastic Electrodynamics (SED), consistency with the principle of Lorentz covariance and with the magnitude of the Planck constant requires it to have a much larger value of 10¹¹³ joules per cubic meter.[2][3] This huge discrepancy is known as the vacuum catastrophe.

Origin

Quantum field theory states that all fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. A field in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field were like the displacement of a ball from its rest position. The theory requires "vibrations" in, or more accurately changes in the strength of, such a field to propagate as per the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. Canonically, if the field at each point in space is a simple harmonic oscillator, its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. Thus, according to the theory, even the vacuum has a vastly complex structure and all calculations of quantum field theory must be made in relation to this model of the vacuum.

The theory considers the vacuum to implicitly have the same properties as a particle, such as spin or polarization in the case of light, energy, and so on. According to the theory, most of these properties cancel out on average, leaving the vacuum empty in the literal sense of the word. One important exception, however, is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator requires the lowest possible energy, or zero-point energy, of such an oscillator to be:

E = ½hν.

Summing over all possible oscillators at all points in space gives an infinite quantity. To remove this infinity, one may argue that only differences in energy are physically measurable, much as the concept of potential energy has been treated in classical mechanics for centuries. This argument is the underpinning of the theory of renormalization. In all practical calculations, this is how the infinity is handled.
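
The size of the problem can be reproduced to order of magnitude by integrating the ½ħω zero-point energies of a single massless field up to a Planck-scale cutoff, as in the sketch below. The choice of cutoff wavenumber (1/ℓ_P here, versus 2π/ℓ_P in some treatments) moves the answer by a few powers of ten, which is why quoted values such as the 10¹¹³ J/m³ above vary slightly between sources.

import math

hbar = 1.054_571_817e-34   # J·s
c = 2.997_924_58e8         # m/s
G = 6.674_30e-11           # m³ kg⁻¹ s⁻²

planck_length = math.sqrt(hbar * G / c**3)   # ≈ 1.6e-35 m
k_max = 1.0 / planck_length                  # cutoff wavenumber (one possible convention)

# Zero-point energy density of one massless field with all modes up to k_max:
#   ρ = ∫ d³k/(2π)³ · ½ ħ c k = ħ c k_max⁴ / (16 π²)
rho_vacuum = hbar * c * k_max**4 / (16.0 * math.pi**2)
print(f"Planck-cutoff zero-point energy density ≈ {rho_vacuum:.1e} J/m³")   # ≈ 3e111 J/m³

rho_observed = 1e-9   # J/m³, the cosmological-constant bound quoted above
print(f"discrepancy ≈ 10^{math.log10(rho_vacuum / rho_observed):.0f}")      # ~120 orders of magnitude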

Vacuum energy can also be thought of in terms of virtual particles (also known as vacuum fluctuations) which are created and destroyed out of the vacuum. These particles are always created out of the vacuum in particle-antiparticle pairs, which in most cases shortly annihilate each other and disappear. However, these particles and antiparticles may interact with others before disappearing, a process which can be mapped using Feynman diagrams. Note that this method of computing vacuum energy is mathematically equivalent to having a quantum harmonic oscillator at each point and, therefore, suffers the same renormalization problems.

Additional contributions to the vacuum energy come from spontaneous symmetry breaking in quantum field theory.

Implications

Vacuum energy has a number of consequences. In 1948, Dutch physicists Hendrik B. G. Casimir and Dirk Polder predicted the existence of a tiny attractive force between closely placed metal plates due to resonances in the vacuum energy in the space between them. This is now known as the Casimir effect and has since been extensively experimentally verified. It is therefore believed that the vacuum energy is "real" in the same sense that more familiar conceptual objects such as electrons, magnetic fields, etc., are real. However, alternate explanations for the Casimir effect have since been proposed.[4]

Other predictions are harder to verify. Vacuum fluctuations are always created as particle–antiparticle pairs. The creation of these virtual particles near the event horizon of a black hole has been hypothesized by physicist Stephen Hawking to be a mechanism for the eventual "evaporation" of black holes.[citation needed] The net energy of the Universe remains zero so long as the particle pairs annihilate each other within Planck time. If one of the pair is pulled into the black hole before this, then the other particle becomes "real" and energy/mass is essentially radiated into space from the black hole. This loss is cumulative and could result in the black hole's disappearance over time. The time required is dependent on the mass of the black hole but could be on the order of 10¹⁰⁰ years for large solar-mass black holes.[citation needed]

The vacuum energy also has important consequences for physical cosmology. Special relativity predicts that energy is equivalent to mass, and therefore, if the vacuum energy is "really there", it should exert a gravitational force. Essentially, a non-zero vacuum energy is expected to contribute to the cosmological constant, which affects the expansion of the universe.[citation needed] In the special case of vacuum energy, general relativity stipulates that the gravitational field is proportional to ρ+3p (where ρ is the mass-energy density, and p is the pressure). Quantum theory of the vacuum further stipulates that the pressure of the zero-state vacuum energy is always negative and equal in magnitude to ρ. Thus, the total is ρ+3p = ρ-3ρ = -2ρ, a negative value. This calculation implies a repulsive gravitational field, giving rise to acceleration of the expansion of the universe,[citation needed] if indeed the vacuum ground state has non-zero energy. However, the vacuum energy is mathematically infinite without renormalization, which is based on the assumption that we can only measure energy in a relative sense, which is not true if we can observe it indirectly via the cosmological constant.[citation needed]

The existence of vacuum energy is also sometimes used as theoretical justification for the possibility of free-energy machines. It has been argued that due to the broken symmetry (in QED), free energy does not violate conservation of energy, since the laws of thermodynamics only apply to equilibrium systems. However, consensus amongst physicists is that this is incorrect and that vacuum energy cannot be harnessed to generate free energy.[5][not in citation given] In particular, the second law of thermodynamics is unaffected by the existence of vacuum energy.[citation needed] However, in Stochastic Electrodynamics, the energy density is taken to be a classical random noise wave field which consists of real electromagnetic noise waves propagating isotropically in all directions. The energy in such a wave field would seem to be accessible, e.g., with nothing more complicated than a directional coupler.[citation needed] The most obvious difficulty appears to be the spectral distribution of the energy, which compatibility with Lorentz invariance requires to take the form Kf³, where K is a constant and f denotes frequency.[6][7] It follows that the energy and momentum flux in this wave field only becomes significant at extremely short wavelengths where directional coupler technology is currently lacking.[citation needed]

History

In 1934, Georges Lemaître used an unusual perfect-fluid equation of state to interpret the cosmological constant as due to vacuum energy. In 1948, the Casimir effect provided the first experimental verification of the existence of vacuum energy. In 1957, Lee and Yang received the Nobel Prize for their work on broken symmetry and parity violation in the weak interaction. In 1973, Edward Tryon proposed the zero-energy universe hypothesis: that the Universe may be a large-scale quantum-mechanical vacuum fluctuation where positive mass-energy is balanced by negative gravitational potential energy. During the 1980s, there were many attempts to relate the fields that generate the vacuum energy to specific fields that were predicted by attempts at a Grand unification theory and to use observations of the Universe to confirm one or another version. However, the exact nature of the particles (or fields) that generate vacuum energy, with a density such as that required by inflation theory, remains a mystery.

Quantum foam

Quantum foam

From Wikipedia, the free encyclopedia

Quantum foam (also referred to as space-time foam) is a concept in quantum mechanics devised by John Wheeler in 1955. The foam is conceived of as the foundation of the fabric of the universe.[1]

Additionally, quantum foam can be used as a qualitative description of subatomic space-time turbulence at extremely small distances (on the order of the Planck length). At such small scales of time and space, the Heisenberg uncertainty principle allows energy to briefly decay into particles and antiparticles and then annihilate without violating physical conservation laws. As the scale of time and space being discussed shrinks, the energy of the virtual particles increases. According to Einstein's theory of general relativity, energy curves space-time. This suggests that—at sufficiently small scales—the energy of these fluctuations would be large enough to cause significant departures from the smooth space-time seen at larger scales, giving space-time a "foamy" character.
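
The Planck scales mentioned here follow directly from ħ, G and c. The short sketch below computes the Planck length, time and energy, to show how far below current experimental reach these fluctuations sit:

import math

hbar = 1.054_571_817e-34  # J·s
G = 6.674_30e-11          # m³ kg⁻¹ s⁻²
c = 2.997_924_58e8        # m/s

l_planck = math.sqrt(hbar * G / c**3)   # length scale of the expected "foaminess"
t_planck = math.sqrt(hbar * G / c**5)   # corresponding time scale
E_planck = math.sqrt(hbar * c**5 / G)   # energy scale at which the fluctuations matter

print(f"Planck length ≈ {l_planck:.2e} m")                                   # ≈ 1.6e-35 m
print(f"Planck time   ≈ {t_planck:.2e} s")                                   # ≈ 5.4e-44 s
print(f"Planck energy ≈ {E_planck:.2e} J ≈ {E_planck / 1.602e-10:.1e} GeV")  # ≈ 1.2e19 GeV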

With an incomplete theory of quantum gravity, it is impossible to be certain what space-time would look like at these small scales, because existing theories of gravity do not give accurate predictions in that realm. Therefore, any of the developing theories of quantum gravity may improve our understanding of quantum foam as they are tested. However, observations of radiation from nearby quasars by Floyd Stecker of NASA's Goddard Space Flight Center have placed strong experimental limits on the possible violations of Einstein's special theory of relativity implied by the existence of quantum foam.[2] Thus experimental evidence so far has given a range of values in which scientists can test for quantum foam.

Experimental evidence (and counter-evidence)

The MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes have detected that among gamma-ray photons arriving from the blazar Markarian 501, some photons at different energy levels arrived at different times, suggesting that some of the photons had moved more slowly and thus contradicting relativity's requirement that the speed of light be constant, a discrepancy which could be explained by the irregularity of quantum foam.[3] More recent experiments were, however, unable to confirm the supposed variation in the speed of light due to the graininess of space.[4] Other experiments involving the polarization of light from distant gamma ray bursts have also produced contradictory results.[5] More Earth-based experiments are ongoing[6] or proposed.[7]

Relation to other theories

Quantum foam is theorized to be the 'fabric' of the Universe, but cannot be observed yet because it is too small. Also, quantum foam is theorized to be created by virtual particles of very high energy.
Virtual particles appear in quantum field theory, arising briefly and then annihilating during particle interactions, in such a way that they affect the measured outputs of the interaction even though the virtual particles are never themselves directly observed. These "vacuum fluctuations" affect the properties of the vacuum, giving it a nonzero energy known as vacuum energy, itself a type of zero-point energy. However, physicists are uncertain about the magnitude of this form of energy.[8]
The Casimir effect can also be understood in terms of the behavior of virtual particles in the empty space between two parallel plates. Ordinarily, quantum field theory does not deal with virtual particles of sufficient energy to curve spacetime significantly, so quantum foam is a speculative extension of these concepts which imagines the consequences of such high-energy virtual particles at very short distances and times. Spin foam theory is a modern attempt to make Wheeler's idea quantitative.

Metric expansion of space

Metric expansion of space

From Wikipedia, the free encyclopedia

The metric expansion of space is the increase of the distance between two distant parts of the universe with time. It is an intrinsic expansion whereby the scale of space itself changes. This is different from other examples of expansions and explosions in that, as far as observations can ascertain, it is a property of the entirety of the universe rather than a phenomenon that can be contained and observed from the outside.

Metric expansion is a key feature of Big Bang cosmology, is modeled mathematically with the FLRW metric, and is a generic property of the universe we inhabit. However, the model is valid only on large scales (roughly the scale of galaxy clusters and above). At smaller scales matter has become bound together under the influence of gravitational attraction and such things do not expand at the metric expansion rate as the universe ages. As such, the only galaxies receding from one another as a result of metric expansion are those separated by cosmologically relevant scales larger than the length scales associated with the gravitational collapse that are possible in the age of the Universe given the matter density and average expansion rate.

At the end of the early universe's inflationary period, all the matter and energy in the universe was set on an inertial trajectory consistent with the equivalence principle and Einstein's general theory of relativity and this is when the precise and regular form of the universe's expansion had its origin (that is, matter in the universe is separating because it was separating in the past due to the inflaton field).
According to measurements, the universe's expansion rate was decelerating until about 5 billion years ago due to the gravitational attraction of the matter content of the universe, after which time the expansion began accelerating. In order to explain the acceleration physicists have postulated the existence of dark energy which appears in the simplest theoretical models as a cosmological constant. According to the simplest extrapolation of the currently-favored cosmological model (known as "ΛCDM"), this acceleration becomes more dominant into the future.

While special relativity constrains objects in the universe from moving faster than light with respect to each other when they are in a local, dynamical relationship, it places no theoretical constraint on the relative motion between two objects that are globally separated and out of causal contact. It is thus possible for two objects to become separated in space by more than the distance light could have travelled, which means that, if the expansion remains constant, the two objects will never come into causal contact. For example, galaxies that are more than approximately 4.5 gigaparsecs away from us are expanding away from us faster than light. We can still see such objects because the universe in the past was expanding more slowly than it is today, so the ancient light being received from these objects is still able to reach us, though if the expansion continues unabated there will never come a time that we will see the light from such objects being produced today (on a so-called "space-like slice of spacetime") and vice-versa, because space itself is expanding between Earth and the source faster than any light can be exchanged.

Because of the high rate of expansion, it is also possible for a distance between two objects to be greater than the value calculated by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateurs and even professional physicists.[1]

Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space and the misconceptions to which such descriptions can lead are an ongoing subject of discussion in the realm of pedagogy and communication of scientific concepts.[2][3][4][5]

Basic concepts and overview

Overview of metrics

To understand the metric expansion of the universe, it is helpful to discuss briefly what a metric is, and how metric expansion works.

Definition of a metric

A metric defines how a distance can be measured between two nearby points in space, in terms of the coordinate system. Coordinate systems locate points in a space (of whatever number of dimensions) by assigning unique positions on a grid, known as coordinates, to each point. The metric is then a formula which describes how displacement through the space of interest can be translated into distances.

Metric for Earth's surface

For example, consider the measurement of distance between two places on the surface of the Earth. This is a simple, familiar example of spherical geometry. Because the surface of the Earth is two-dimensional, points on the surface of the earth can be specified by two coordinates, for example, the latitude and longitude. Specification of a metric requires that one first specify the coordinates used. In our simple example of the surface of the Earth, we could choose any kind of coordinate system we wish, for example latitude and longitude, or X-Y-Z Cartesian coordinates. Once we have chosen a specific coordinate system, the numerical values of the coordinates of any two points are uniquely determined, and based upon the properties of the space being discussed, the appropriate metric is mathematically established too. On the curved surface of the Earth, we can see this effect in long-haul airline flights where the distance between two points is measured based upon a Great circle, rather than the straight line one might plot on a two-dimensional map of the Earth's surface. In general, such shortest-distance paths are called "geodesics". In Euclidean geometry, the geodesic is a straight line, while in non-Euclidean geometry such as on the Earth's surface, this is not the case. Indeed, even the shortest-distance great-circle path is always longer than the Euclidean straight-line path which passes through the interior of the Earth. The difference between the straight-line path and the shortest-distance great-circle path is due to the curvature of the Earth's surface. While there is always an effect due to this curvature, at short distances the effect is small enough to be unnoticeable.
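
This claim is easy to check numerically. The sketch below compares the great-circle (geodesic) distance with the straight-line chord through the Earth's interior for one example pair of cities; the coordinates are approximate and a perfectly spherical Earth is assumed.

import math

R_EARTH_KM = 6371.0   # mean Earth radius; a perfect sphere is assumed

def great_circle_km(lat1, lon1, lat2, lon2):
    """Geodesic (great-circle) distance on a spherical Earth, via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R_EARTH_KM * math.asin(math.sqrt(a))

def chord_km(lat1, lon1, lat2, lon2):
    """Straight-line distance through the Earth's interior between the same two points."""
    def xyz(lat, lon):
        lat, lon = math.radians(lat), math.radians(lon)
        return (R_EARTH_KM * math.cos(lat) * math.cos(lon),
                R_EARTH_KM * math.cos(lat) * math.sin(lon),
                R_EARTH_KM * math.sin(lat))
    return math.dist(xyz(lat1, lon1), xyz(lat2, lon2))

# Approximate coordinates: London (51.5 N, 0.1 W) to Tokyo (35.7 N, 139.7 E)
route = (51.5, -0.1, 35.7, 139.7)
print(f"great-circle distance: {great_circle_km(*route):.0f} km")   # ≈ 9,560 km
print(f"straight-line chord:   {chord_km(*route):.0f} km")          # ≈ 8,690 km, always shorter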

On plane maps, Great circles of the Earth are mostly not shown as straight lines. Indeed, there is a seldom-used map projection, namely the gnomonic projection, where all Great circles are shown as straight lines, but in this projection, the distance scale varies very much in different areas. There is no map projection in which the distance between any two points on Earth, measured along the Great Circle geodesics, is directly proportional to their distance on the map.

Metric tensor

In differential geometry, the backbone mathematics for general relativity, a metric tensor can be defined which precisely characterizes the space being described by explaining the way distances should be measured in every possible direction. General relativity necessarily invokes a metric in four dimensions (one of time, three of space) because, in general, different reference frames will experience different intervals of time and space depending on the inertial frame. This means that the metric tensor in general relativity relates precisely how two events in spacetime are separated. A metric expansion occurs when the metric tensor changes with time (and, specifically, whenever the spatial part of the metric gets larger as time goes forward). This kind of expansion is different from all kinds of expansions and explosions commonly seen in nature in no small part because times and distances are not the same in all reference frames, but are instead subject to change. A useful visualization is to approach the subject not as objects in a fixed "space" moving apart into "emptiness", but as space itself growing between objects without any acceleration of the objects themselves. The space between objects grows or shrinks as the various geodesics converge or diverge.
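
A concrete example is the FLRW metric mentioned earlier in this article; in one common convention its line element reads

ds² = −c²dt² + a(t)²[dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)],

where a(t) is the scale factor and k encodes the spatial curvature. Metric expansion is then simply the statement that a(t) increases with time, so proper distances between comoving points grow even though their coordinates never change.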

Because this expansion is caused by relative changes in the distance-defining metric, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity. Two reference frames that are globally separated can be moving apart faster than light without violating special relativity, although whenever two reference frames diverge from each other faster than the speed of light, there will be observable effects associated with such situations including the existence of various cosmological horizons.

Theory and observations suggest that very early in the history of the universe, there was an inflationary phase where the metric changed very rapidly, and that the remaining time-dependence of this metric is what we observe as the so-called Hubble expansion, the moving apart of all gravitationally unbound objects in the universe. The expanding universe is therefore a fundamental feature of the universe we inhabit - a universe fundamentally different from the static universe Albert Einstein first considered when he developed his gravitational theory.

Measuring distances in expanding spaces

In expanding space, proper distances are dynamical quantities which change with time. An easy way to correct for this is to use comoving coordinates which remove this feature and allow for a characterization of different locations in the universe without having to characterize the physics associated with metric expansion. In comoving coordinates, the distances between all objects are fixed and the instantaneous dynamics of matter and light are determined by the normal physics of gravity and electromagnetic radiation. Any time-evolution however must be accounted for by taking into account the Hubble law expansion in the appropriate equations in addition to any other effects that may be operating (gravity, dark energy, or curvature, for example). Cosmological simulations that run through significant fractions of the universe's history therefore must include such effects in order to make applicable predictions for observational cosmology.

Understanding the expansion of the universe

Measurement of expansion and change of rate of expansion

In principle, the expansion of the universe could be measured by taking a standard ruler and measuring the distance between two cosmologically distant points, waiting a certain time, and then measuring the distance again, but in practice, standard rulers are not easy to find on cosmological scales and the time scales over which a measurable expansion would be visible are too great to be observable even by multiple generations of humans. The expansion of space is measured indirectly.
The theory of relativity predicts phenomena associated with the expansion, notably the redshift-versus-distance relationship known as Hubble's Law; functional forms for cosmological distance measurements that differ from what would be expected if space were not expanding; and an observable change in the matter and energy density of the universe seen at different lookback times.

The first measurement of the expansion of space occurred with the creation of the Hubble diagram. Using standard candles with known intrinsic brightness, the expansion of the universe has been measured using redshift to derive Hubble's Constant: H0 = 67.15 ± 1.2 (km/s)/Mpc. For every megaparsec of distance from the observer, the recession speed increases by about 67 kilometers per second.[6][7][8]
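
Taking that value of H0 at face value, Hubble's law v = H0·d gives the recession speed at any proper distance, and the distance at which v formally reaches c (the Hubble distance) is roughly the 4.5 gigaparsecs mentioned earlier. A minimal sketch:

H0 = 67.15            # km/s per Mpc, the value quoted above
C_KM_S = 299_792.458  # speed of light in km/s

def recession_speed_km_s(distance_mpc):
    """Hubble's law: recession speed grows linearly with proper distance."""
    return H0 * distance_mpc

for d_mpc in [1, 100, 1000]:
    print(f"{d_mpc:5d} Mpc  ->  v ≈ {recession_speed_km_s(d_mpc):9.0f} km/s")

# Distance at which the recession speed formally equals c (the Hubble distance).
hubble_distance_mpc = C_KM_S / H0
print(f"v = c at ≈ {hubble_distance_mpc:.0f} Mpc ≈ {hubble_distance_mpc / 1000:.1f} Gpc")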

Hubble's Constant is not thought to be constant through time. There are dynamical forces acting on the particles in the universe which affect the expansion rate. It was earlier expected that the Hubble Constant would be decreasing as time went on due to the influence of gravitational interactions in the universe, and thus there is an additional observable quantity in the universe called the deceleration parameter which cosmologists expected to be directly related to the matter density of the universe. Surprisingly, the deceleration parameter was measured by two different groups to be less than zero (actually, consistent with −1), which implied that the expansion of the universe is currently accelerating. Some cosmologists have whimsically called the effect associated with the "accelerating universe" the "cosmic jerk".[9] The 2011 Nobel Prize in Physics was given for the discovery of this phenomenon.[10]

Measuring distances in expanding space

Two views of an isometric embedding of part of the visible universe over most of its history, showing how a light ray (red line) can travel an effective distance of 28 billion light years (orange line) in just 13 billion years of cosmological time.

At cosmological scales the present universe is geometrically flat, which is to say that the rules of Euclidean geometry associated with Euclid's fifth postulate hold, though in the past spacetime could have been highly curved. In part to accommodate such different geometries, the expansion of the universe is inherently general relativistic; it cannot be modeled with special relativity alone: though such models can be written down, they are at fundamental odds with the observed interaction between matter and spacetime seen in our universe.

The images to the right show two views of spacetime diagrams that show the large-scale geometry of the universe according to the ΛCDM cosmological model. Two of the dimensions of space are omitted, leaving one dimension of space (the dimension that grows as the cone gets larger) and one of time (the dimension that proceeds "up" the cone's surface). The narrow circular end of the diagram corresponds to a cosmological time of 700 million years after the big bang while the wide end is a cosmological time of 18 billion years, where one can see the beginning of the accelerating expansion as a splaying outward of the spacetime, a feature which eventually dominates in this model. The purple grid lines mark off cosmological time at intervals of one billion years from the big bang. The cyan grid lines mark off comoving distance at intervals of one billion light years in the present era (less in the past and more in the future). Note that the circular curling of the surface is an artifact of the embedding with no physical significance and is done purely to make the illustration viewable; space does not actually curl around on itself. (A similar effect can be seen in the tubular shape of the pseudosphere.)

The brown line on the diagram is the worldline of the Earth (or, at earlier times, of the matter which condensed to form the Earth). The yellow line is the worldline of the most distant known quasar. The red line is the path of a light beam emitted by the quasar about 13 billion years ago and reaching the Earth in the present day. The orange line shows the present-day distance between the quasar and the Earth, about 28 billion light years, which is, notably, a larger distance than the age of the universe multiplied by the speed of light: ct.

According to the equivalence principle of general relativity, the rules of special relativity are locally valid in small regions of spacetime that are approximately flat. In particular, light always travels locally at the speed c; in our diagram, this means, according to the convention of constructing spacetime diagrams, that light beams always make an angle of 45° with the local grid lines. It does not follow, however, that light travels a distance ct in a time t, as the red worldline illustrates. While it always moves locally at c, its time in transit (about 13 billion years) is not related to the distance traveled in any simple way, since the universe expands as the light beam traverses space and time. In fact the distance traveled is inherently ambiguous because of the changing scale of the universe. Nevertheless, we can single out two distances which appear to be physically meaningful: the distance between the Earth and the quasar when the light was emitted, and the distance between them in the present era (taking a slice of the cone along the dimension that we've declared to be the spatial dimension). The former distance is about 4 billion light years, much smaller than ct: because the universe expanded as the light traveled, the light had to "run against the treadmill" and therefore covered more distance than the initial separation between the Earth and the quasar. The latter distance (shown by the orange line) is about 28 billion light years, much larger than ct. If expansion could be instantaneously stopped today, it would take 28 billion years for light to travel between the Earth and the quasar, while if the expansion had stopped at the earlier time, it would have taken only 4 billion years.
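
Both figures can be recovered approximately from the ΛCDM model with a short numerical integration of the comoving distance, D_C = (c/H0) ∫ dz/E(z). The sketch below assumes a quasar redshift of about z = 7 and rounded density parameters Ωm ≈ 0.315 and ΩΛ ≈ 0.685; these inputs are assumptions made for illustration rather than values read off the diagram.

import math

H0 = 67.15                        # km/s/Mpc, as quoted earlier
C_KM_S = 299_792.458
OMEGA_M, OMEGA_L = 0.315, 0.685   # assumed flat-ΛCDM density parameters
Z_QUASAR = 7.0                    # assumed redshift of a very distant quasar

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for a flat ΛCDM universe (radiation neglected)."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

# Comoving distance D_C = (c/H0) * ∫ dz / E(z), by a simple midpoint rule.
n = 100_000
dz = Z_QUASAR / n
integral = sum(dz / E((i + 0.5) * dz) for i in range(n))
d_comoving_mpc = (C_KM_S / H0) * integral

MPC_TO_GLY = 3.2616e-3                 # 1 Mpc ≈ 3.26 million light years
d_now = d_comoving_mpc * MPC_TO_GLY    # proper distance today
d_then = d_now / (1.0 + Z_QUASAR)      # proper distance when the light was emitted

print(f"distance now:         ≈ {d_now:.0f} billion light years")   # ≈ 29, cf. "about 28" above
print(f"distance at emission: ≈ {d_then:.1f} billion light years")  # ≈ 3.6, cf. "about 4" above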

The light took much longer than 4 billion years to reach us though it was emitted from only 4 billion light years away, and, in fact, the light emitted towards the Earth was actually moving away from the Earth when it was first emitted, in the sense that the metric distance to the Earth increased with cosmological time for the first few billion years of its travel time, and also indicating that the expansion of space between the Earth and the quasar at the early time was faster than the speed of light. None of this surprising behavior originates from a special property of metric expansion, but simply from local principles of special relativity integrated over a curved surface.

Topology of expanding space

A graphical representation of the expansion of the universe with the inflationary epoch represented as the dramatic expansion of the metric seen on the left. This diagram can be confusing because the expansion of space looks like it is happening into an empty "nothingness". However, this is a choice made for convenience of visualization: it is not a part of the physical models which describe the expansion.

Over time, the space that makes up the universe is expanding. The words 'space' and 'universe', sometimes used interchangeably, have distinct meanings in this context. Here 'space' is a mathematical concept that stands for the three-dimensional manifold into which our respective positions are embedded while 'universe' refers to everything that exists including the matter and energy in space, the extra-dimensions that may be wrapped up in various strings, and the time through which various events take place. The expansion of space is in reference to this 3-D manifold only; that is, the description involves no structures such as extra dimensions or an exterior universe.[11]

The ultimate topology of space is a posteriori—something which in principle must be observed—as there are no constraints that can simply be reasoned out (in other words there can not be any a priori constraints) on how the space in which we live is connected or whether it wraps around on itself as a compact space. Though certain cosmological models such as Gödel's universe even permit bizarre worldlines which intersect with themselves, ultimately the question as to whether we are in something like a "pac-man universe" where if traveling far enough in one direction would allow one to simply end up back in the same place like going all the way around the surface of a balloon (or a planet like the Earth) is an observational question which is constrained as measurable or non-measurable by the universe's global geometry. At present, observations are consistent with the universe being infinite in extent and simply connected, though we are limited in distinguishing between simple and more complicated proposals by cosmological horizons. The universe could be infinite in extent or it could be finite; but the evidence that leads to the inflationary model of the early universe also implies that the "total universe" is much larger than the observable universe, and so any edges or exotic geometries or topologies would not be directly observable as light has not reached scales on which such aspects of the universe, if they exist, are still allowed. For all intents and purposes, it is safe to assume that the universe is infinite in spatial extent, without edge or strange connectedness.[12]

Regardless of the overall shape of the universe, the question of what the universe is expanding into is one which does not require an answer according to the theories which describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. All that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. This only implies the simple observational consequences associated with the metric expansion explored below. No "outside" or embedding in hyperspace is required for an expansion to occur. The visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. There is no reason to believe there is anything "outside" of the expanding universe into which the universe expands.

Even if the overall spatial extent is infinite and thus the universe can't get any "larger", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. As an infinite space grows, it remains infinite.

Effects of expansion on small scales

The expansion of space is sometimes described as a force which acts to push objects apart. Though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general. For much of the universe's history the expansion has been due mainly to inertia. The matter in the very early universe was flying apart for unknown reasons (most likely as a result of cosmic inflation) and has simply continued to do so, though at an ever-decreasing rate due to the attractive effect of gravity.

In addition to slowing the overall expansion, gravity causes local clumping of matter into stars and galaxies. Once objects are formed and bound by gravity, they "drop out" of the expansion and do not subsequently expand under the influence of the cosmological metric, there being no force compelling them to do so.

There is no difference between the inertial expansion of the universe and the inertial separation of nearby objects in a vacuum; the former is simply a large-scale extrapolation of the latter.

Once objects are bound by gravity, they no longer recede from each other. Thus, the Andromeda galaxy, which is bound to the Milky Way galaxy, is actually falling towards us and is not expanding away. Within our Local Group of galaxies, the gravitational interactions have changed the inertial patterns of objects such that there is no cosmological expansion taking place. Once one goes beyond the Local Group, the inertial expansion is measurable, though systematic gravitational effects imply that larger and larger parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters of galaxies. We can predict such future events by knowing the precise way the Hubble Flow is changing as well as the masses of the objects to which we are being gravitationally pulled. Currently, our Local Group is being gravitationally pulled towards either the Shapley Supercluster or the "Great Attractor", with which, if dark energy were not acting, we would eventually merge rather than continue to see it recede from us.

A consequence of metric expansion being due to inertial motion is that a uniform local "explosion" of matter into a vacuum can be locally described by the FLRW geometry, the same geometry which describes the expansion of the universe as a whole and was also the basis for the simpler Milne universe which ignores the effects of gravity. In particular, general relativity predicts that light will move at the speed c with respect to the local motion of the exploding matter, a phenomenon analogous to frame dragging.

The situation changes somewhat with the introduction of dark energy or a cosmological constant. A cosmological constant due to a vacuum energy density has the effect of adding a repulsive force between objects which is proportional (not inversely proportional) to distance. Unlike inertia it actively "pulls" on objects which have clumped together under the influence of gravity, and even on individual atoms. However, this does not cause the objects to grow steadily or to disintegrate; unless they are very weakly bound, they will simply settle into an equilibrium state which is slightly (undetectably) larger than it would otherwise have been. As the universe expands and the matter in it thins, the gravitational attraction decreases (since it is proportional to the density), while the cosmological repulsion increases; thus the ultimate fate of the ΛCDM universe is a near vacuum expanding at an ever increasing rate under the influence of the cosmological constant. However, the only locally visible effect of the accelerating expansion is the disappearance (by runaway redshift) of distant galaxies; gravitationally bound objects like the Milky Way do not expand and the Andromeda galaxy is moving fast enough towards us that it will still merge with the Milky Way in 3 billion years time, and it is also likely that the merged supergalaxy that forms will eventually fall in and merge with the nearby Virgo Cluster. However, galaxies lying farther away from this will recede away at ever-increasing rates of speed and be redshifted out of our range of visibility.

Scale factor

At a fundamental level, the expansion of the universe is a property of spatial measurement on the largest measurable scales of our universe. The distances between cosmologically relevant points increase as time passes, leading to the observable effects outlined below. This feature of the universe can be characterized by a single parameter called the scale factor, which is a function of time and takes a single value for all of space at any instant (if the scale factor were a function of space, this would violate the cosmological principle). By convention, the scale factor is set to be unity at the present time and, because the universe is expanding, is smaller in the past and larger in the future.
Extrapolating back in time with certain cosmological models will yield a moment when the scale factor was zero; our current understanding of cosmology sets this time at 13.798 ± 0.037 billion years ago. If the universe continues to expand forever, the scale factor will approach infinity in the future. In principle, there is no reason that the expansion of the universe must be monotonic, and there are models where at some time in the future the scale factor decreases, with an attendant contraction of space rather than an expansion.
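
That age can be recovered from the Friedmann equation (discussed later in this article) by integrating dt = da / (a·H(a)) from a = 0 to the present value a = 1. The sketch below does this for a flat ΛCDM universe with rounded density parameters, so the result is approximate; radiation is neglected.

import math

H0 = 67.15                        # km/s/Mpc
OMEGA_M, OMEGA_L = 0.315, 0.685   # assumed flat-ΛCDM density parameters (rounded)

KM_PER_MPC = 3.0857e19
SEC_PER_GYR = 3.1557e16
H0_PER_S = H0 / KM_PER_MPC        # Hubble constant in 1/s

def hubble_rate(a):
    """Hubble rate H(a) in 1/s as a function of the scale factor a (a = 1 today)."""
    return H0_PER_S * math.sqrt(OMEGA_M / a**3 + OMEGA_L)

# Age of the universe: t0 = ∫ from a = 0 to 1 of da / (a * H(a)), by a midpoint rule.
n = 200_000
da = 1.0 / n
age_s = sum(da / (((i + 0.5) * da) * hubble_rate((i + 0.5) * da)) for i in range(n))

print(f"age ≈ {age_s / SEC_PER_GYR:.2f} billion years")
# ≈ 13.8-13.9 billion years, close to the 13.798 ± 0.037 figure quoted above;
# the small difference comes from the rounded density parameters and neglected radiation.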

Other conceptual models of expansion

The expansion of space is often illustrated with conceptual models which show only the size of space at a particular time, leaving the dimension of time implicit.

In the "ant on a rubber rope model" one imagines an ant (idealized as pointlike) crawling at a constant speed on a perfectly elastic rope which is constantly stretching. If we stretch the rope in accordance with the ΛCDM scale factor and think of the ant's speed as the speed of light, then this analogy is numerically accurate—the ant's position over time will match the path of the red line on the embedding diagram above.

In the "rubber sheet model" one replaces the rope with a flat two-dimensional rubber sheet which expands uniformly in all directions. The addition of a second spatial dimension raises the possibility of showing local perturbations of the spatial geometry by local curvature in the sheet.

In the "balloon model" the flat sheet is replaced by a spherical balloon which is inflated from an initial size of zero (representing the big bang). A balloon has positive Gaussian curvature while observations suggest that the real universe is spatially flat, but this inconsistency can be eliminated by making the balloon very large so that it is locally flat to within the limits of observation. This analogy is potentially confusing since it wrongly suggests that the big bang took place at the center of the balloon. In fact points off the surface of the balloon have no meaning, even if they were occupied by the balloon at an earlier time.
Animation of an expanding raisin bread model: as the bread doubles in width, depth, and length, the distances between raisins also double.

In the "raisin bread model" one imagines a loaf of raisin bread expanding in the oven. The loaf (space) expands as a whole, but the raisins (gravitationally bound objects) do not expand; they merely grow farther away from each other.

All of these models have the conceptual problem of requiring an outside force acting on the "space" at all times to make it expand. Unlike real cosmological matter, sheets of rubber and loaves of bread are bound together electromagnetically and will not continue to expand on their own after an initial tug.

Theoretical basis and first evidence

Hubble's law

Technically, the metric expansion of space is a feature of many solutions to the Einstein field equations of general relativity, and distance is measured using the Lorentz interval. This explains observations which indicate that galaxies that are more distant from us are receding faster than galaxies that are closer to us (Hubble's law).
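In its simplest form, Hubble's law states that recession velocity is proportional to distance, v = H0 D. A minimal sketch (the value of H0 below is an assumed, commonly quoted figure, not taken from this article):

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * D.

H0 = 70.0  # km/s per megaparsec (assumed illustrative value)

def recession_velocity_km_s(distance_mpc):
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at roughly 7,000 km/s in this approximation.
print(recession_velocity_km_s(100.0))  # -> 7000.0
```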

Cosmological constant and the Friedmann equations

The first general relativistic models predicted that a universe which was dynamical and contained ordinary gravitational matter would contract rather than expand. Einstein's first proposal for a solution to this problem involved adding a cosmological constant to his theories to balance out the contraction and obtain a static universe solution. But in 1922 Alexander Friedmann derived a set of equations known as the Friedmann equations, showing that the universe might expand and giving the expansion rate in that case.[13] The observations of Edwin Hubble in 1929 suggested that distant galaxies were all apparently moving away from us, and many scientists came to accept that the universe was expanding.
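For reference, the first of the Friedmann equations for the scale factor a(t) can be written in one common form (conventions for the density and curvature terms vary between sources) as:

```latex
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad H \equiv \frac{\dot{a}}{a},
```

where ρ is the mass density, k the spatial curvature, Λ the cosmological constant, and H the Hubble parameter.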

Hubble's concerns over the rate of expansion

While the metric expansion of space is implied by Hubble's 1929 observations, Hubble was concerned with the observational implications of the precise value he measured:
"… if redshift are not primarily due to velocity shift … the velocity-distance relation is linear, the distribution of the nebula is uniform, there is no evidence of expansion, no trace of curvature, no restriction of the time scale … and we find ourselves in the presence of one of the principles of nature that is still unknown to us today … whereas, if redshifts are velocity shifts which measure the rate of expansion, the expanding models are definitely inconsistent with the observations that have been made … expanding models are a forced interpretation of the observational results"
— E. Hubble, Ap. J., 84, 517, 1936 [14]
"[If the redshifts are a Doppler shift] … the observations as they stand lead to the anomaly of a closed universe, curiously small and dense, and, it may be added, suspiciously young. On the other hand, if redshifts are not Doppler effects, these anomalies disappear and the region observed appears as a small, homogeneous, but insignificant portion of a universe extended indefinitely both in space and time."
In fact, Hubble's skepticism about the universe being too small, dense, and young was justified, though the problem turned out to be an observational error rather than an error of interpretation. Later investigations showed that Hubble had mistaken distant HII regions for Cepheid variables, and that the Cepheid variables themselves had been inappropriately lumped together with low-luminosity RR Lyrae stars, causing calibration errors that led to a value of the Hubble Constant of approximately 500 km/s/Mpc instead of the true value of approximately 70 km/s/Mpc. The higher value meant that an expanding universe would have an age of only 2 billion years (younger than the age of the Earth), and extrapolating the observed number density of galaxies to such a rapidly expanding universe implied a mass density that was too high by a similar factor, enough to force the universe into a peculiar closed geometry which also implied an impending Big Crunch on a similar time-scale.
After these errors were corrected in the 1950s, the new, lower values for the Hubble Constant accorded with the expectations of an older universe, and the density parameter was found to be fairly close to that of a geometrically flat universe.[16]
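The link between the measured value of the Hubble Constant and the "suspiciously young" universe can be seen from the Hubble time, 1/H0, which sets the rough age scale of an expanding universe. A back-of-the-envelope sketch (the conversion constants and resulting ages are approximate):

```python
# Back-of-the-envelope Hubble time: the age scale of an expanding universe ~ 1/H0.

KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(H0_km_s_mpc):
    H0_per_s = H0_km_s_mpc / KM_PER_MPC       # convert H0 to 1/s
    return 1.0 / H0_per_s / SECONDS_PER_YEAR / 1e9

print(hubble_time_gyr(500.0))  # Hubble's early value -> roughly 2 billion years
print(hubble_time_gyr(70.0))   # modern value         -> roughly 14 billion years
```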

Inflation as an explanation for the expansion

Until the theoretical developments in the 1980s no one had an explanation for why this seemed to be the case, but with the development of models of cosmic inflation, the expansion of the universe became a general feature resulting from vacuum decay. Accordingly, the question "why is the universe expanding?" is now answered by understanding the details of the inflation decay process which occurred in the first 10⁻³² seconds of the existence of our universe.[17] During inflation, the metric changed exponentially, causing any volume of space that was smaller than an atom to grow to around 100 million light years across on a time scale similar to the time at which inflation occurred (10⁻³² seconds).
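To give a sense of the growth factor implied by those figures, the arithmetic below is an illustration based on the numbers quoted above, with an assumed atomic size; it is not a statement from the original article:

```python
# Rough linear growth factor implied by "smaller than an atom" growing to
# "~100 million light years across" during inflation.
import math

atom_size_m = 1e-10                  # assumed rough size of an atom
light_year_m = 9.46e15
final_size_m = 1e8 * light_year_m    # ~100 million light years

growth = final_size_m / atom_size_m
print(f"linear growth factor ~ {growth:.1e}")                    # ~1e34
print(f"equivalent number of e-folds ~ {math.log(growth):.0f}")  # ~78
```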
The expansion of the universe proceeds in all directions as determined by the Hubble constant. However, the Hubble constant can change in the past and in the future, depending on the observed value of the density parameters (Ω). Before the discovery of dark energy, it was believed that the universe was matter-dominated, and so Ω in this context corresponds to the ratio of the matter density to the critical density (Ωm).
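The critical density referred to here is the standard quantity ρ_crit = 3H0²/(8πG); a small sketch, using an assumed present-day H0 of 70 km/s/Mpc:

```python
# Critical density rho_crit = 3 H0^2 / (8 pi G), the density dividing
# Omega > 1 (closed) from Omega < 1 (open) in a matter-dominated universe.
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22      # 70 km/s/Mpc converted to 1/s (assumed value)

rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"{rho_crit:.2e} kg/m^3")  # ~9e-27 kg/m^3, a few hydrogen atoms per cubic metre
```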

Measuring distance in a metric space

In expanding space, distance is a dynamic quantity which changes with time. There are several different ways of defining distance in cosmology, known as distance measures, but a common method used amongst modern astronomers is comoving distance.
The metric only defines the distance between nearby (so-called "local") points. In order to define the distance between arbitrarily distant points, one must specify both the points and a specific curve (known as a "spacetime interval") connecting them. The distance between the points can then be found by finding the length of this connecting curve through the three dimensions of space. Comoving distance defines this connecting curve to be a curve of constant cosmological time. Operationally, comoving distances cannot be directly measured by a single Earth-bound observer. To determine the distance of distant objects, astronomers generally measure luminosity of standard candles, or the redshift factor 'z' of distant galaxies, and then convert these measurements into distances based on some particular model of space-time, such as the Lambda-CDM model. It is, indeed, by making such observations that it was determined that there is no evidence for any 'slowing down' of the expansion in the current epoch.
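As a concrete illustration of converting a measured redshift into a model-dependent distance, the sketch below numerically integrates the comoving distance D_C = c ∫ dz′/H(z′) for a flat Lambda-CDM model; the parameter values are assumed for illustration and are not taken from the original article:

```python
# Comoving distance in a flat Lambda-CDM model: D_C = c * integral_0^z dz'/H(z'),
# with H(z) = H0 * sqrt(Omega_m (1+z)^3 + Omega_Lambda). Parameters are assumed.
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # km/s/Mpc (assumed)
OMEGA_M = 0.3         # matter density parameter (assumed)
OMEGA_L = 0.7         # dark-energy density parameter (flat universe)

def hubble(z):
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def comoving_distance_mpc(z, steps=10000):
    # Simple midpoint-rule integration of c / H(z') from 0 to z.
    dz = z / steps
    return sum(C_KM_S / hubble((i + 0.5) * dz) * dz for i in range(steps))

print(comoving_distance_mpc(1.0))   # roughly 3300 Mpc for these parameters
```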

Observational evidence

A diagram depicting the expansion of the universe and the appearance of galaxies moving away from a single galaxy; the phenomenon is relative to the observer. The expansion shown at time t1 is smaller than at t2. Each section represents the movement of the red galaxies relative to the white galaxies, for comparison, while the blue and green galaxies mark the same (fixed, central) galaxy in each subsequent box. t = time.

Theoretical cosmologists developing models of the universe have drawn upon a small number of reasonable assumptions in their work. These workings have led to models in which the metric expansion of space is a likely feature of the universe. Chief among the underlying principles that result in models including metric expansion as a feature are:
  • the Cosmological Principle, which demands that the universe looks the same in all directions (isotropic) and has roughly the same smooth mixture of material (homogeneous);
  • the Copernican Principle, which demands that no place in the universe is preferred (that is, the universe has no "starting point").
Scientists have tested carefully whether these assumptions are valid and borne out by observation. Observational cosmologists have discovered evidence, very strong in some cases, that supports these assumptions, and as a result the metric expansion of space is considered by cosmologists to be an observed feature: although we cannot see it directly, scientists have tested the properties of the universe and observation provides compelling confirmation.[18] Sources of this confidence and confirmation include:
  • Hubble demonstrated that all galaxies and distant astronomical objects were moving away from us, as predicted by a universal expansion.[19] Using the redshift of their electromagnetic spectra to determine the distance and speed of remote objects in space, he showed that all objects are moving away from us, and that their speed is proportional to their distance, a feature of metric expansion. Further studies have since shown the expansion to be highly isotropic and homogeneous, that is, it does not seem to have a special point as a "center", but appears universal and independent of any fixed central point.
  • In studies of large-scale structure of the cosmos taken from redshift surveys a so-called "End of Greatness" was discovered at the largest scales of the universe. Until these scales were surveyed, the universe appeared "lumpy" with clumps of galaxy clusters and superclusters and filaments which were anything but isotropic and homogeneous. This lumpiness disappears into a smooth distribution of galaxies at the largest scales.
  • The isotropic distribution across the sky of distant gamma-ray bursts and supernovae is another confirmation of the Cosmological Principle.
  • The Copernican Principle was not truly tested on a cosmological scale until measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical systems were made. A group of astronomers at the European Southern Observatory noticed, by measuring the temperature of a distant intergalactic cloud in thermal equilibrium with the cosmic microwave background, that the radiation from the Big Bang was demonstrably warmer at earlier times.[20] Uniform cooling of the cosmic microwave background over billions of years is strong and direct observational evidence for metric expansion.
Taken together, these phenomena overwhelmingly support models that rely on space expanding through a change in metric. Interestingly, it was not until the discovery in the year 2000 of direct observational evidence for the changing temperature of the cosmic microwave background that more bizarre constructions could be ruled out. Until that time, ruling them out rested purely on the assumption that the universe did not behave as one with the Milky Way sitting at the middle of a fixed metric, with a universal explosion of galaxies in all directions (as seen, for example, in an early model proposed by Milne). Yet before this evidence, many rejected the Milne viewpoint based on the mediocrity principle.
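The "demonstrably warmer at earlier times" measurement mentioned above reflects the standard scaling of the cosmic microwave background temperature with redshift, T(z) = T0(1 + z). A minimal sketch (the present-day temperature and the example redshift are assumed illustrative inputs):

```python
# CMB temperature at redshift z in standard cosmology: T(z) = T0 * (1 + z).

T0_K = 2.725   # present-day CMB temperature in kelvin (assumed input)

def cmb_temperature_K(z):
    return T0_K * (1 + z)

# A cloud at z = 2.34 should sit in a radiation bath of roughly 9 K,
# noticeably warmer than today's 2.725 K.
print(cmb_temperature_K(2.34))
```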

The spatial and temporal universality of physical laws was until very recently taken as a fundamental philosophical assumption that is now tested to the observational limits of time and space.

Lifelong learning

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lifelong_learning ...