
Thursday, May 31, 2018

Void (astronomy)

From Wikipedia, the free encyclopedia
 
Structure of the Universe
Matter distribution in a cubic section of the universe. The blue fiber structures represent the matter (primarily dark matter) and the empty regions in between represent the cosmic voids.

Cosmic voids are vast spaces between filaments (the largest-scale structures in the universe), which contain very few or no galaxies. Voids typically have a diameter of 10 to 100 megaparsecs; particularly large voids, defined by the absence of rich superclusters, are sometimes called supervoids. Voids have less than one-tenth of the average matter density considered typical for the observable universe. They were first discovered in 1978 in a pioneering study by Stephen Gregory and Laird A. Thompson at the Kitt Peak National Observatory.[1]

Voids are believed to have been formed by baryon acoustic oscillations in the Big Bang, collapses of mass followed by implosions of the compressed baryonic matter. Starting from initially small anisotropies from quantum fluctuations in the early universe, the anisotropies grew larger in scale over time. Regions of higher density collapsed more rapidly under gravity, eventually resulting in the large-scale, foam-like structure or "cosmic web" of voids and galaxy filaments seen today. Voids located in high-density environments are smaller than voids situated in low-density spaces of the universe.[2]

Voids appear to correlate with the observed temperature of the cosmic microwave background (CMB) because of the Sachs–Wolfe effect. Colder regions correlate with voids and hotter regions correlate with filaments because of gravitational redshifting. As the Sachs–Wolfe effect is only significant if the universe is dominated by radiation or dark energy, the existence of voids is significant in providing physical evidence for dark energy.[3][4]

Large-scale structure

The structure of our Universe can be broken down into components that can help describe the characteristics of individual regions of the cosmos. These are the main structural components of the cosmic web:
  • Voids – vast, largely spherical[5] regions with very low cosmic mean densities, up to 100 megaparsecs (Mpc) in diameter.[6]
  • Walls – the regions that contain the typical cosmic mean density of matter abundance. Walls can be further broken down into two smaller structural features:
    • Clusters – highly concentrated zones where walls meet and intersect, adding to the effective size of the local wall.
    • Filaments – the branching arms of walls that can stretch for tens of megaparsecs.[7]
Voids have a mean density less than a tenth of the average density of the universe. This serves as a working definition even though there is no single agreed-upon definition of what constitutes a void. The matter density value used for describing the cosmic mean density is usually based on a ratio of the number of galaxies per unit volume rather than the total mass of the matter contained in a unit volume.[8]
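As a concrete illustration of this working definition, the following sketch (in Python, with made-up galaxy positions and an arbitrary 20 Mpc cell size, not survey data) counts galaxies per cell and flags cells whose number density falls below one-tenth of the mean:

import numpy as np

# Minimal sketch of the working definition above: a region counts as a void
# candidate if its galaxy number density is below 10% of the cosmic mean.
# The galaxy positions and the 20 Mpc cell size are illustrative only.
rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, 200.0, size=(5000, 3))     # positions in Mpc

cell = 20.0                                            # cell edge length in Mpc
bins = np.arange(0.0, 200.0 + cell, cell)
counts, _ = np.histogramdd(galaxies, bins=(bins, bins, bins))

mean_density = len(galaxies) / 200.0**3                # galaxies per Mpc^3
cell_density = counts / cell**3
void_candidates = cell_density < 0.1 * mean_density    # the "< 1/10" criterion

print(f"{void_candidates.sum()} of {void_candidates.size} cells are void candidates")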

History and discovery

Cosmic voids as a topic of study in astrophysics began in the mid-1970s when redshift surveys became more popular and led two separate teams of astrophysicists in 1978 to identify superclusters and voids in the distribution of galaxies and Abell clusters in a large region of space.[9][10] The new redshift surveys revolutionized the field of astronomy by adding depth to the two-dimensional maps of cosmological structure, which were often densely packed and overlapping,[6] allowing for the first three-dimensional mapping of the universe. In the redshift surveys, the depth was calculated from the individual redshifts of the galaxies due to the expansion of the universe according to Hubble's law.[11]

Timeline

A summarized timeline of important events in the field of cosmic voids from its beginning to recent times is listed below:
  • 1961 – Large-scale structural features such as "second-order clusters", a specific type of supercluster, were brought to the astronomical community's attention.[12]
  • 1978 – The first two papers on the topic of voids in the large-scale structure were published referencing voids found in the foreground of the Coma/A1367 clusters.[9][13]
  • 1981 – Discovery of a large void in the Boötes region of the sky that was nearly 50 h−1 Mpc in diameter (which was later recalculated to be about 34 h−1 Mpc).[14][15]
  • 1983 – Computer simulations sophisticated enough to provide relatively reliable results of growth and evolution of the large-scale structure emerged and yielded insight on key features of the large-scale galaxy distribution.[16][17]
  • 1985 – Details of the supercluster and void structure of the Perseus-Pisces region were surveyed.[18]
  • 1989 – The Center for Astrophysics Redshift Survey revealed that large voids, sharp filaments, and the walls that surround them dominate the large-scale structure of the universe.[19]
  • 1991 – The Las Campanas Redshift Survey confirmed the abundance of voids in the large-scale structure of the universe (Kirshner et al. 1991).[20]
  • 1995 – Comparisons of optically selected galaxy surveys indicate that the same voids are found regardless of the sample selection.[21]
  • 2001 – The completed Two-degree-Field Galaxy Redshift Survey added a significant number of voids to the database of all known cosmic voids.[22]
  • 2009 – The Sloan Digital Sky Survey (SDSS) data combined with previous large-scale surveys now provide the most complete view of the detailed structure of cosmic voids.[23][24][25]

Methods for finding

There are a number of ways of finding voids using the results of large-scale surveys of the universe. Of the many different algorithms, virtually all fall into one of three general categories.[26] The first class consists of void finders that try to find empty regions of space based on local galaxy density.[27] The second class consists of those which try to find voids via the geometrical structures in the dark matter distribution as suggested by the galaxies.[28] The third class is made up of those finders which identify structures dynamically by using gravitationally unstable points in the distribution of dark matter.[29] The three most popular methods used in the study of cosmic voids are described below:

VoidFinder algorithm

This first-class method uses each galaxy in a catalog as its target and then uses the nearest-neighbor approximation to calculate the cosmic density in the region contained in a spherical radius determined by the distance to the third-closest galaxy.[30] El Ad & Piran introduced this method in 1997 to allow a quick and effective method for standardizing the cataloging of voids. Once the spherical cells are mined from all of the structure data, each cell is expanded until the underdensity returns to average expected wall density values.[31] One of the helpful features of void regions is that their boundaries are very distinct and defined, with a cosmic mean density that starts at 10% in the body and quickly rises to 20% at the edge and then to 100% in the walls directly outside the edges. The remaining walls and overlapping void regions are then gridded into, respectively, distinct and intertwining zones of filaments, clusters, and near-empty voids. Any overlap of more than 10% with already known voids is considered to be a subregion within those known voids. All voids admitted to the catalog had a minimum radius of 10 Mpc in order to ensure that all identified voids were not accidentally cataloged due to sampling errors.[30]
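A rough sketch of the third-nearest-neighbour density step described above can be written with a k-d tree; this is only an illustration of the idea, not El Ad & Piran's published code, and the 20% wall threshold and all galaxy positions are invented for the example:

import numpy as np
from scipy.spatial import cKDTree

# Hedged sketch of the third-nearest-neighbour density estimate used by
# first-class finders: each galaxy gets a local density from the distance to
# its third-closest neighbour, and galaxies in underdense neighbourhoods are
# treated as candidates for void interiors. All numbers are illustrative.
rng = np.random.default_rng(1)
galaxies = rng.uniform(0.0, 200.0, size=(4000, 3))     # positions in Mpc

tree = cKDTree(galaxies)
dists, _ = tree.query(galaxies, k=4)                   # k=4: the galaxy itself + 3 neighbours
r3 = dists[:, 3]                                       # distance to the 3rd neighbour

local_density = 4.0 / (4.0 / 3.0 * np.pi * r3**3)      # 4 galaxies inside the sphere
mean_density = len(galaxies) / 200.0**3

wall_galaxies = local_density >= 0.2 * mean_density    # threshold chosen for illustration
field_galaxies = ~wall_galaxies                        # potential void-interior galaxies
print(f"{field_galaxies.sum()} galaxies lie in underdense neighbourhoods")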

Zone bordering on voidness (ZOBOV) algorithm

This particular second-class algorithm uses a Voronoi tessellation technique and mock border particles in order to categorize regions based on a high-density contrasting border with a very low amount of bias.[32] Neyrinck introduced this algorithm in 2008 with the purpose of introducing a method that did not contain free parameters or presumed shape tessellations. Therefore, this technique can create more accurately shaped and sized void regions. Although this algorithm has some advantages in shape and size, it has often been criticized for sometimes providing loosely defined results. Since it has no free parameters, it mostly finds small and trivial voids, although the algorithm places a statistical significance on each void it finds. A physical significance parameter can be applied in order to reduce the number of trivial voids by including a minimum density-to-average-density ratio of at least 1:5. Subvoids are also identified using this process, which raises more philosophical questions on what qualifies as a void.[33] Void finders such as VIDE[34] are based on ZOBOV.
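The core density estimate can be sketched with an off-the-shelf Voronoi tessellation; this is a loose illustration of the idea rather than Neyrinck's ZOBOV code, and it simply skips unbounded edge cells instead of using the mock border particles mentioned above:

import numpy as np
from scipy.spatial import Voronoi, ConvexHull

# Assign each galaxy a density of 1 / (volume of its Voronoi cell); density
# minima are then candidates for void cores. Positions and the density cut
# are illustrative only.
rng = np.random.default_rng(2)
points = rng.uniform(0.0, 100.0, size=(800, 3))        # positions in Mpc

vor = Voronoi(points)
density = np.full(len(points), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if len(region) == 0 or -1 in region:               # unbounded cell on the survey edge
        continue
    volume = ConvexHull(vor.vertices[region]).volume   # Voronoi cell volume
    density[i] = 1.0 / volume

threshold = 0.2 * np.nanmedian(density)                # illustrative cut only
void_candidates = density < threshold                  # NaN entries compare False and drop out
print(f"{np.count_nonzero(void_candidates)} galaxies sit in strongly underdense cells")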

Dynamical void analysis (DIVA) algorithm

This third-class method is drastically different from the previous two algorithms listed. The most striking aspect is that it requires a different definition of what it means to be a void. Instead of the general notion that a void is a region of space with a low cosmic mean density (a hole in the distribution of galaxies), it defines voids to be regions in which matter is escaping, which corresponds to the dark energy equation of state, w. Void centers are then considered to be the maximal source of the displacement field, denoted as Sψ. The purpose of this change in definition was presented by Lavaux and Wandelt in 2009 as a way to yield cosmic voids such that exact analytical calculations can be made on their dynamical and geometrical properties. This allows DIVA to heavily explore the ellipticity of voids and how they evolve in the large-scale structure, subsequently leading to the classification of three distinct types of voids. These three morphological classes are True voids, Pancake voids, and Filament voids. Another notable quality is that even though DIVA also contains selection function bias, just as first-class methods do, DIVA is devised such that this bias can be precisely calibrated, leading to much more reliable results. Multiple shortfalls of this Lagrangian-Eulerian hybrid approach exist. One example is that the resulting voids from this method are intrinsically different from those found by other methods, which makes an all-inclusive comparison between the results of differing algorithms very difficult.[26]
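For a loose illustration of this displacement-field viewpoint (not the Lavaux & Wandelt implementation), one can build a synthetic displacement field on a grid and take the point of maximal divergence, i.e. the strongest "source" of outflow, as a void centre; every quantity below is invented for the example:

import numpy as np

# Synthetic displacement field with an outflow centred at the origin,
# mimicking matter escaping a void; the void centre is recovered as the
# maximum of the field's divergence.
n = 64
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

r2 = x**2 + y**2 + z**2
psi_x, psi_y, psi_z = x * np.exp(-4 * r2), y * np.exp(-4 * r2), z * np.exp(-4 * r2)

spacing = axis[1] - axis[0]
divergence = (np.gradient(psi_x, spacing, axis=0)
              + np.gradient(psi_y, spacing, axis=1)
              + np.gradient(psi_z, spacing, axis=2))

centre = np.unravel_index(np.argmax(divergence), divergence.shape)
print("recovered void centre (grid indices):", centre)   # near the middle of the grid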

Robustness testing

Once an algorithm is presented to find what it deems to be cosmic voids, it is crucial that its findings approximately match what is expected by the current simulations and models of large-scale structure. In order to perform this, the number, size, and proportion, as well as other features, of voids found by the algorithm are checked by placing mock data through a smoothed-particle hydrodynamics halo simulation, ΛCDM model, or other reliable simulator. An algorithm is much more robust if its results are in concordance with the results of these simulations for a range of input criteria (Pan et al. 2011).[35]
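One simple version of such a check can be sketched as a two-sample comparison of void-size distributions; the Kolmogorov-Smirnov test below is just one reasonable choice, and both samples are synthetic placeholders rather than real finder or simulation output:

import numpy as np
from scipy.stats import ks_2samp

# Compare the void-radius distribution returned by a finder with radii measured
# in mock/simulation data; a high p-value indicates statistical consistency.
rng = np.random.default_rng(3)
radii_from_finder = rng.lognormal(mean=2.5, sigma=0.30, size=500)    # Mpc, placeholder
radii_from_mock = rng.lognormal(mean=2.55, sigma=0.32, size=500)     # Mpc, placeholder

stat, p_value = ks_2samp(radii_from_finder, radii_from_mock)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")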

Significance

Voids have contributed significantly to the modern understanding of the cosmos, with applications ranging from shedding light on the current understanding of dark energy, to refining and constraining cosmological evolution models.[4] Some popular applications are mentioned in detail below.

Dark energy

The simultaneous existence of the largest-known voids and galaxy clusters requires about 70% dark energy in the universe today, consistent with the latest data from the cosmic microwave background.[4] Voids act as bubbles in the universe that are sensitive to background cosmological changes. This means that the evolution of a void's shape is in part the result of the expansion of the universe. Since this acceleration is believed to be caused by dark energy, studying the changes of a void's shape over a period of time can further refine the Quintessence + Cold Dark Matter (QCDM) model and provide a more accurate dark energy equation of state.[36] Additionally the abundance of voids is a promising way to constrain the dark energy equation of state.[37]

Galactic formation and evolution models

Large-scale structure formation
A 43×43×43-megaparsec cube shows the evolution of the large-scale structure over a logarithmic period starting from a redshift of 30 and ending at redshift 0. The model shows clearly how the matter-dense regions contract under the collective gravitational force while simultaneously aiding in the expansion of cosmic voids as the matter flees to the walls and filaments.

Cosmic voids contain a mix of galaxies and matter that is slightly different than other regions in the universe. This unique mix supports the biased galaxy formation picture predicted in Gaussian adiabatic cold dark matter models. This phenomenon provides an opportunity to modify the morphology-density correlation, which shows discrepancies with these voids. Such observations as the morphology-density correlation can help uncover new facets about how galaxies form and evolve on the large scale.[38] On a more local scale, galaxies that reside in voids have differing morphological and spectral properties from those that are located in the walls. For example, voids have been shown to contain a significantly higher fraction of starburst galaxies of young, hot stars when compared to samples of galaxies in walls.[39]

Anomalies in anisotropies

Cold spots in the cosmic microwave background, such as the WMAP cold spot found by Wilkinson Microwave Anisotropy Probe, could possibly be explained by an extremely large cosmic void that has a radius of ~120 Mpc, as long as the late integrated Sachs–Wolfe effect was accounted for in the possible solution. Anomalies in CMB screenings are now being potentially explained through the existence of large voids located down the line-of-sight in which the cold spots lie.[40]

Cosmic microwave background (CMB) screening of the universe.

Accelerating expansion of the universe

Although dark energy is currently the most popular explanation for the acceleration in the expansion of the universe, another theory elaborates on the possibility of our galaxy being part of a very large, not-so-underdense, cosmic void. According to this theory, such an environment could naively lead to the demand for dark energy to solve the problem with the observed acceleration. As more data has been released on this topic, the chances of it being a realistic solution in place of the current ΛCDM interpretation have been largely diminished but not altogether abandoned.[41]

Gravitational theories

The abundance of voids, particularly when combined with the abundance of clusters of galaxies, is a promising method for precision tests of deviations from general relativity on large scales and in low-density regions.[42]

The insides of voids often seem to adhere to cosmological parameters which differ from those of the known universe. It is because of this unique feature that cosmic voids make for great laboratories to study the effects that gravitational clustering and growth rates have on local galaxies and structure when the cosmological parameters have different values from the outside universe. Because larger voids predominantly remain in a linear regime, with most structures within exhibiting spherical symmetry in the underdense environment (that is, the underdensity leads to near-negligible particle-particle gravitational interactions that would otherwise occur in a region of normal galactic density), testing models for voids can be performed with very high accuracy. The cosmological parameters that differ in these voids are Ωm, ΩΛ, and H0.[43]

Vacuum state

From Wikipedia, the free encyclopedia

In quantum field theory, the quantum vacuum state (also called the quantum vacuum or vacuum state) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. Zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.

According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space".[1][2] According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.[3][4][5]

The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s it was reformulated by Feynman, Tomonaga and Schwinger, who jointly received the Nobel prize for this work in 1965.[6] Today the electromagnetic interactions and the weak interactions are unified in the theory of the electroweak interaction.

The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions.[7]

Non-zero expectation value


The video of an experiment showing vacuum fluctuations (in the red ring) amplified by spontaneous parametric down-conversion.

If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator, or more accurately, the ground state of a measurement problem. In this case the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, Quantum chromodynamics or the BCS theory of superconductivity) field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass.
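The harmonic-oscillator analogy above can be made concrete numerically; the sketch below (units with ħ = m = ω = 1, a truncated 40-level basis) shows that the ground-state expectation value of the position operator vanishes while its variance, the zero-point spread, does not:

import numpy as np

# Ladder-operator construction of the oscillator ground state |0>.
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator (truncated matrix)
x = (a + a.conj().T) / np.sqrt(2)               # position operator in these units

ground = np.zeros(N)
ground[0] = 1.0                                 # the ground ("vacuum") state

mean_x = ground @ x @ ground                    # <0|x|0>   -> 0.0
mean_x2 = ground @ x @ x @ ground               # <0|x^2|0> -> 0.5, the zero-point spread
print(mean_x, mean_x2)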

Energy

In many situations, the vacuum state can be defined to have zero energy, although the actual situation is considerably more subtle. The vacuum state is associated with a zero-point energy, and this zero-point energy has measurable effects. In the laboratory, it may be detected as the Casimir effect. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. In fact, the energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg (or 0.6 eV).[8] An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant.
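For a sense of scale of the measurable zero-point effect mentioned here, the ideal-parallel-plate Casimir pressure P = π²ħc/(240 d⁴) can be evaluated directly; the 1 µm separation is simply an illustrative choice:

import math

# Standard constants; the plate separation is an arbitrary example value.
hbar = 1.054571817e-34      # reduced Planck constant, J s
c = 2.99792458e8            # speed of light, m/s
d = 1.0e-6                  # plate separation, m

pressure = math.pi**2 * hbar * c / (240 * d**4)
print(f"Casimir pressure at d = 1 micrometre: {pressure:.2e} Pa")   # roughly 1.3e-3 Pa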

Symmetry

For a relativistic field theory, the vacuum is Poincaré invariant, which follows from Wightman axioms but can be also proved directly without these axioms.[9] Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEV's. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred.

Electrical permittivity

In principle, quantum corrections to Maxwell's equations can cause the experimental electrical permittivity ε of the vacuum state to deviate from the defined scalar value ε0 of the electric constant.[10] These theoretical developments are described, for example, in Dittrich and Gies.[5] In particular, the theory of quantum electrodynamics predicts that the QED vacuum should exhibit nonlinear effects that will make it behave like a birefringent material with ε slightly greater than ε0 for extremely strong electric fields.[11][12] Explanations for dichroism from particle physics, outside quantum electrodynamics, also have been proposed.[13] Active attempts to measure such effects have yielded negative results so far.[14]

Notations

The vacuum state is written as |0\rangle or |\rangle. The vacuum expectation value (see also Expectation value) of any field φ should be written as \langle0|\phi|0\rangle.

Virtual particles

The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not.[15] The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state,[16] and is described picturesquely as evidence of "virtual particles".[17] It is sometimes attempted to provide an intuitive picture of virtual particles, or variances, based upon the Heisenberg energy-time uncertainty principle:
\Delta E \Delta t \ge \hbar \ ,
(with ΔE and Δt being the energy and time variations respectively; ΔE is the accuracy in the measurement of energy and Δt is the time taken in the measurement, and ħ is the reduced Planck constant) arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.[18] Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal.[19][20] One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as [q, p] = iħ).[21] Various schemes have been advanced to construct an observable that has some kind of time interpretation and yet satisfies a canonical commutation relation with energy.[22][23] The many approaches to the energy-time uncertainty principle are a long and continuing subject of study.[23]
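As a purely heuristic illustration of the "borrowing" picture (with all the caveats just discussed), taking Δt ≈ ħ/ΔE for the rest energy of an electron-positron pair gives a characteristic timescale; the numbers below are standard constants, and the result is an order-of-magnitude statement, not a rigorous bound:

# Heuristic timescale for "borrowing" the rest energy of an e+ e- pair.
hbar_eVs = 6.582119569e-16        # reduced Planck constant in eV s
delta_E = 2 * 0.51099895e6        # two electron rest energies, in eV

delta_t = hbar_eVs / delta_E
print(f"heuristic lifetime ~ {delta_t:.1e} s")   # about 6e-22 seconds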

Physical nature of the quantum vacuum

According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment [mental experiment] the quantum vacuum state."[1] According to Fowler & Guggenheim (1939/1965), the third law of thermodynamics may be precisely enunciated as follows:
It is impossible by any procedure, no matter how idealized, to reduce any assembly to the absolute zero in a finite number of operations.[24] (See also.[25][26][27])
Photon-photon interaction can occur only through interaction with the vacuum state of some other field, for example through the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization.[28] According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations."[29] This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on. According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero."[30][31] In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes:
The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ... Milonni provides a detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects."[32]
This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, α, goes to zero."[33]

Probability amplitude

From Wikipedia, the free encyclopedia
 
A wave function for a single electron on 5d atomic orbital of a hydrogen atom. The solid body shows the places where the electron's probability density is above a certain value (here 0.02 nm−3): this is calculated from the probability amplitude. The hue on the colored surface shows the complex phase of the wave function.

In quantum mechanics, a probability amplitude is a complex number used in describing the behaviour of systems. The modulus squared of this quantity represents a probability or probability density.

Probability amplitudes provide a relationship between the wave function (or, more generally, a quantum state vector) of a system and the results of observations of that system, a link first proposed by Max Born. Interpretation of values of a wave function as the probability amplitude is a pillar of the Copenhagen interpretation of quantum mechanics. In fact, the properties of the space of wave functions were being used to make physical predictions (such as emissions from atoms being at certain discrete energies) before any physical interpretation of a particular function was offered. Born was awarded half of the 1954 Nobel Prize in Physics for this understanding (see References), and the probability thus calculated is sometimes called the "Born probability". These probabilistic concepts, namely the probability density and quantum measurements, were vigorously contested at the time by the original physicists working on the theory, such as Schrödinger and Einstein. It is the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics, topics that continue to be debated even today.

Overview

Physical

Neglecting some technical complexities, the problem of quantum measurement is the behaviour of a quantum state, for which the value of the observable Q to be measured is uncertain. Such a state is thought to be a coherent superposition of the observable's eigenstates, states on which the value of the observable is uniquely defined, for different possible values of the observable.

When a measurement of Q is made, the system (under the Copenhagen interpretation) jumps to one of the eigenstates, returning the eigenvalue to which the state belongs. The superposition of states can give them unequal "weights". Intuitively it is clear that eigenstates with heavier "weights" are more "likely" to be produced. Indeed, which of the above eigenstates the system jumps to is given by a probabilistic law: the probability of the system jumping to the state is proportional to the absolute value of the corresponding numerical factor squared. These numerical factors are called probability amplitudes, and this relationship used to calculate probabilities from given pure quantum states (such as wave functions) is called the Born rule.
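A minimal numerical sketch of the Born rule just described: given a normalized state written in the eigenbasis of the measured observable, the squared moduli of the amplitudes are the outcome probabilities, and repeated simulated measurements reproduce them (the three complex amplitudes are arbitrary examples):

import numpy as np

amplitudes = np.array([0.6, 0.8j, 0.0])                # arbitrary example state
amplitudes = amplitudes / np.linalg.norm(amplitudes)   # enforce norm 1

probabilities = np.abs(amplitudes) ** 2                # Born rule
print(probabilities, probabilities.sum())              # [0.36, 0.64, 0.0], sums to 1

# Each simulated measurement "jumps" to an eigenstate with these probabilities.
rng = np.random.default_rng(0)
outcomes = rng.choice(len(amplitudes), size=10_000, p=probabilities)
print(np.bincount(outcomes, minlength=3) / 10_000)     # close to [0.36, 0.64, 0.0]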

Different observables may define incompatible decompositions of states. Observables that do not commute define probability amplitudes on different sets.

Mathematical

In a formal setup, any system in quantum mechanics is described by a state, which is a vector |Ψ⟩, residing in an abstract complex vector space, called a Hilbert space. It may be either infinite- or finite-dimensional. A usual presentation of that Hilbert space is a special function space, called L2(X), on a certain set X, that is either some configuration space or a discrete set.

For a measurable function \psi, the condition \psi \in L^2(X) reads:
\int_X |\psi(x)|^2 \, \mathrm{d}\mu(x) < \infty;
this integral defines the square of the norm of ψ. If that norm is equal to 1, then
\int_X |\psi(x)|^2 \, \mathrm{d}\mu(x) = 1.
This actually means that any element of L2(X) of norm 1 defines a probability measure on X, and that the non-negative real expression |ψ(x)|2 defines its Radon–Nikodym derivative with respect to the standard measure μ.

If the standard measure μ on X is non-atomic, such as the Lebesgue measure on the real line, or on three-dimensional space, or similar measures on manifolds, then a real-valued function |ψ(x)|2 is called a probability density; see details below. If the standard measure on X consists of atoms only (we shall call such sets X discrete), and specifies the measure of any x ∈ X equal to 1,[1] then an integral over X is simply a sum[2] and |ψ(x)|2 defines the value of the probability measure on the set {x}, in other words, the probability that the quantum system is in the state x. How amplitudes and the vector are related can be understood with the standard basis of L2(X), elements of which will be denoted by |x⟩ or ⟨x| (see bra–ket notation for the angle bracket notation). In this basis
\psi (x)=\langle x|\Psi \rangle
specifies the coordinate presentation of an abstract vector |Ψ⟩.

Mathematically, many L2 presentations of the system's Hilbert space can exist. We shall consider not an arbitrary one, but a convenient one for the observable Q in question. A convenient configuration space X is such that each point x produces some unique value of Q. For discrete X it means that all elements of the standard basis are eigenvectors of Q. In other words, Q shall be diagonal in that basis. Then \psi(x) is the "probability amplitude" for the eigenstate ⟨x|. If it corresponds to a non-degenerate eigenvalue of Q, then |\psi(x)|^2 gives the probability of the corresponding value of Q for the initial state |Ψ⟩.

For non-discrete X there may not be such states as ⟨x| in L2(X), but the decomposition is in some sense possible.

Wave functions and probabilities

If the configuration space X is continuous (something like the real line or Euclidean space, see above), then there are no valid quantum states corresponding to a particular x ∈ X, and the probability that the system is "in the state x" will always be zero. An archetypical example of this is the L2(R) space constructed with 1-dimensional Lebesgue measure; it is used to study a motion in one dimension. This presentation of the infinite-dimensional Hilbert space corresponds to the spectral decomposition of the coordinate operator: ⟨x|Q|Ψ⟩ = x⟨x|Ψ⟩, x ∈ R in this example. Although there are no such vectors as ⟨x|, strictly speaking, the expression ⟨x|Ψ⟩ can be made meaningful, for instance, with spectral theory.

Generally, it is the case when the motion of a particle is described in the position space, where the corresponding probability amplitude function ψ is the wave function.

If the function ψ ∈ L2(X), ‖ψ‖ = 1 represents the quantum state vector |Ψ⟩, then the real expression |ψ(x)|2, which depends on x, forms a probability density function of the given state. The difference of a density function from simply a numerical probability means that one should integrate this modulus-squared function over some (small) domains in X to obtain probability values; as was stated above, the system cannot be in some state x with a positive probability. This gives both the amplitude and the density function a physical dimension, unlike a dimensionless probability. For example, for a 3-dimensional wave function, the amplitude has the dimension [L−3/2], where L is length.

Note that for both continuous and infinite discrete cases not every measurable, or even smooth, function (i.e. a possible wave function) defines an element of L2(X); see the Normalization section below.

Discrete amplitudes

When the set X is discrete (see above), vectors |Ψ⟩ represented with the Hilbert space L2(X) are just column vectors composed of "amplitudes" and indexed by X. These are sometimes referred to as wave functions of a discrete variable x ∈ X. Discrete dynamical variables are used in such problems as a particle in an idealized reflective box and the quantum harmonic oscillator. Components of the vector will be denoted by ψ(x) for uniformity with the previous case; there may be either a finite or an infinite number of components depending on the Hilbert space. In this case, if the vector |Ψ⟩ has the norm 1, then |ψ(x)|2 is just the probability that the quantum system resides in the state x. It defines a discrete probability distribution on X.

|ψ(x)| = 1 if and only if |x⟩ is the same quantum state as |Ψ⟩. ψ(x) = 0 if and only if |x⟩ and |Ψ⟩ are orthogonal (see inner product space). Otherwise the modulus of ψ(x) is between 0 and 1.

A discrete probability amplitude may be considered as a fundamental frequency in the Probability Frequency domain (spherical harmonics) for the purposes of simplifying M-theory transformation calculations.

A basic example

Take the simplest meaningful example of the discrete case: a quantum system that can be in two possible states: for example, the polarization of a photon. When the polarization is measured, it could be the horizontal state |H⟩ or the vertical state |V⟩. Until its polarization is measured the photon can be in a superposition of both these states, so its state |ψ⟩ could be written as:
|\psi\rangle = \alpha|H\rangle + \beta|V\rangle,
The probability amplitudes of |ψ⟩ for the states |H⟩ and |V⟩ are α and β respectively. When the photon's polarization is measured, the resulting state is either horizontal or vertical. But in a random experiment, the probability of being horizontally polarized is |α|2, and the probability of being vertically polarized is |β|2.

Therefore, a photon in a state |\psi\rangle = \sqrt{1/3}\,|H\rangle - i\sqrt{2/3}\,|V\rangle would have a probability of 1/3 to come out horizontally polarized, and a probability of 2/3 to come out vertically polarized when an ensemble of measurements is made. The order of such results is, however, completely random.
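The stated probabilities can be checked directly from the amplitudes of this example state:

import numpy as np

# |psi> = sqrt(1/3)|H> - i sqrt(2/3)|V>, as in the example above.
alpha = np.sqrt(1 / 3)             # amplitude for |H>
beta = -1j * np.sqrt(2 / 3)        # amplitude for |V>

p_horizontal = abs(alpha) ** 2     # 1/3
p_vertical = abs(beta) ** 2        # 2/3
print(p_horizontal, p_vertical, p_horizontal + p_vertical)   # 0.333..., 0.666..., 1.0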

Normalization

In the example above, the measurement must give either | H ⟩ or | V ⟩, so the total probability of measuring | H ⟩ or | V ⟩ must be 1. This leads to the constraint that |α|2 + |β|2 = 1; more generally, the sum of the squared moduli of the probability amplitudes of all the possible states is equal to one. If "all the possible states" are understood as an orthonormal basis, which makes sense in the discrete case, then this condition is the same as the norm-1 condition explained above.

One can always divide any non-zero element of a Hilbert space by its norm and obtain a normalized state vector. Not every wave function belongs to the Hilbert space L2(X), though. Wave functions that fulfill this constraint are called normalizable.

The Schrödinger wave equation, describing states of quantum particles, has solutions that describe a system and determine precisely how the state changes with time. Suppose a wavefunction ψ0(x, t) is a solution of the wave equation, giving a description of the particle (position x, for time t). If the wavefunction is square integrable, i.e.
\int _{\mathbf {R} ^{n}}|\psi _{0}(\mathbf {x} ,t_{0})|^{2}\,\mathrm {d\mathbf {x} } =a^{2}<\infty
for some t0, then ψ = ψ0/a is called the normalized wavefunction. Under the standard Copenhagen interpretation, the normalized wavefunction gives probability amplitudes for the position of the particle. Hence, at a given time t0, ρ(x) = |ψ(x, t0)|2 is the probability density function of the particle's position. Thus the probability that the particle is in the volume V at t0 is
\mathbf {P} (V)=\int _{V}\rho (\mathbf {x} )\,\mathrm {d\mathbf {x} } =\int _{V}|\psi (\mathbf {x} ,t_{0})|^{2}\,\mathrm {d\mathbf {x} } .
Note that if any solution ψ0 to the wave equation is normalisable at some time t0, then the ψ defined above is always normalised, so that
\rho _{t}(\mathbf {x} )=\left|\psi (\mathbf {x} ,t)\right|^{2}=\left|{\frac {\psi _{0}(\mathbf {x} ,t)}{a}}\right|^{2}
is always a probability density function for all t. This is key to understanding the importance of this interpretation, because, for a given particle mass, initial ψ(x, 0), and potential, the Schrödinger equation fully determines the subsequent wavefunction, and the above then gives the probabilities of locations of the particle at all subsequent times.
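The normalization step can be sketched numerically: integrate |ψ0|² on a grid, divide by a, and check that the result has unit norm. The Gaussian-times-plane-wave ψ0 below is just a convenient square-integrable example, and a simple Riemann sum stands in for the integral:

import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
psi0 = np.exp(-x**2 / 4.0) * np.exp(1j * 1.5 * x)     # unnormalized example wavefunction

a_squared = np.sum(np.abs(psi0) ** 2) * dx            # the integral a^2, finite
psi = psi0 / np.sqrt(a_squared)                       # normalized wavefunction

print(np.sum(np.abs(psi) ** 2) * dx)                  # ~ 1.0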

The laws of calculating probabilities of events

A. Provided a system evolves naturally (which under the Copenhagen interpretation means that the system is not subjected to measurement), the following laws apply:
  1. The probability (or the density of probability in position/momentum space) of an event to occur is the square of the absolute value of the probability amplitude for the event: P=|\phi |^{2}.
  2. If there are several mutually exclusive, indistinguishable alternatives in which an event might occur (or, in realistic interpretations of wavefunction, several wavefunctions exist for a space-time event), the probability amplitudes of all these possibilities add to give the probability amplitude for that event: \phi = \sum_i \phi_i; \quad P = |\phi|^2 = \left|\sum_i \phi_i\right|^2.
  3. If, for any alternative, there is a succession of sub-events, then the probability amplitude for that alternative is the product of the probability amplitude for each sub-event: \phi _{APB}=\phi _{AP}\phi _{PB}.
  4. Non-entangled states of a composite quantum system have amplitudes equal to the product of the amplitudes of the states of constituent systems: \phi_{\rm system}(\alpha, \beta, \gamma, \delta, \ldots) = \phi_1(\alpha)\phi_2(\beta)\phi_3(\gamma)\phi_4(\delta)\ldots. See the Composite systems section for more information.
Law 2 is analogous to the addition law of probability, with the probability replaced by the probability amplitude. Similarly, Law 4 is analogous to the multiplication law of probability for independent events; note that it fails for entangled states.

B. When an experiment is performed to decide between the several alternatives, the same laws hold true for the corresponding probabilities: P=\sum _{i}|\phi _{i}|^{2}.

Provided one knows the probability amplitudes for events associated with an experiment, the above laws provide a complete description of quantum systems in terms of probabilities.

The above laws lead to the path integral formulation of quantum mechanics, in the formalism developed by Richard Feynman. This approach to quantum mechanics forms the stepping-stone to the path integral approach to quantum field theory.

In the context of the double-slit experiment

Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. For example, in the classic double-slit experiment, electrons are fired randomly at two slits, and the probability distribution of detecting electrons at all parts of a large screen placed behind the slits is examined. An intuitive answer is that P(through either slit) = P(through first slit) + P(through second slit), where P(event) is the probability of that event. This is obvious if one assumes that an electron passes through either slit. When nature does not have a way to distinguish which slit the electron has gone through (a much more stringent condition than simply "it is not observed"), the observed probability distribution on the screen reflects the interference pattern that is common with light waves. If one assumes the above law to be true, then this pattern cannot be explained. The particles cannot be said to go through either slit and the simple explanation does not work. The correct explanation, however, comes from associating a probability amplitude with each event. This is an example of case A as described in the previous section. The complex amplitudes which represent the electron passing each slit (ψfirst and ψsecond) follow the law of precisely the form expected: ψtotal = ψfirst + ψsecond. This is the principle of quantum superposition. The probability, which is the modulus squared of the probability amplitude, then follows the interference pattern under the requirement that amplitudes are complex:
P = |\psi_{\rm first} + \psi_{\rm second}|^2 = |\psi_{\rm first}|^2 + |\psi_{\rm second}|^2 + 2|\psi_{\rm first}||\psi_{\rm second}|\cos(\varphi_1 - \varphi_2).


Here, \varphi_1 and \varphi_2 are the arguments of ψfirst and ψsecond respectively. A purely real formulation has too few dimensions to describe the system's state when superposition is taken into account. That is, without the arguments of the amplitudes, we cannot describe the phase-dependent interference. The crucial term 2|\psi_{\rm first}||\psi_{\rm second}|\cos(\varphi_1 - \varphi_2) is called the "interference term", and this would be missing if we had added the probabilities.
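A quick numerical check of this formula, with arbitrary magnitudes and phases for the two amplitudes, shows that squaring the summed amplitudes reproduces the classical sum of probabilities plus exactly the interference term:

import numpy as np

r1, phi1 = 0.6, 0.0                                          # arbitrary example amplitudes
r2, phi2 = 0.5, np.pi / 3
psi_first = r1 * np.exp(1j * phi1)
psi_second = r2 * np.exp(1j * phi2)

p_quantum = abs(psi_first + psi_second) ** 2                 # amplitude addition, then square
p_classical = abs(psi_first) ** 2 + abs(psi_second) ** 2     # probabilities added directly
interference = 2 * r1 * r2 * np.cos(phi1 - phi2)

print(p_quantum, p_classical + interference)                 # equal, as in the formula above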

However, one may choose to devise an experiment in which one observes which slit each electron goes through. Then case B of the above section applies, and the interference pattern is not observed on the screen.

One may go further in devising an experiment in which this "which-path information" is erased by a "quantum eraser". Then, according to the Copenhagen interpretation, case A applies again and the interference pattern is restored.[3]

Conservation of probabilities and the continuity equation

Intuitively, since a normalised wave function stays normalised while evolving according to the wave equation, there will be a relationship between the change in the probability density of the particle's position and the change in the amplitude at these positions.
Define the probability current (or flux) j as
\mathbf {j} ={\hbar  \over m}{1 \over {2i}}\left(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*}\right)={\hbar  \over m}\operatorname {Im} \left(\psi ^{*}\nabla \psi \right),
measured in units of (probability)/(area × time).

Then the current satisfies the equation
\nabla \cdot \mathbf {j} +{\partial  \over \partial t}|\psi |^{2}=0.
Since the probability density is \rho = |\psi|^2, this equation is exactly the continuity equation, which appears in many situations in physics where we need to describe the local conservation of quantities. The best example is in classical electrodynamics, where j corresponds to the electric current density and the density is the charge density; the corresponding continuity equation describes the local conservation of charge.
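The formula for j can be checked numerically in one dimension (units with ħ = m = 1): for ψ = g(x)e^{ikx} with real g, the current reduces to k|ψ|², i.e. density times velocity, which the finite-difference evaluation below reproduces up to discretization error; the Gaussian envelope and k are arbitrary example choices:

import numpy as np

x = np.linspace(-15.0, 15.0, 3001)
k = 2.0
g = np.exp(-x**2 / 4.0)                              # real envelope
psi = g * np.exp(1j * k * x)

dpsi_dx = np.gradient(psi, x)                        # finite-difference derivative
j = np.imag(np.conj(psi) * dpsi_dx)                  # probability current (hbar = m = 1)

print(np.max(np.abs(j - k * np.abs(psi) ** 2)))      # ~ 0, up to discretization error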

Composite systems

For two quantum systems with spaces L2(X1) and L2(X2) and given states |Ψ1⟩ and |Ψ2⟩ respectively, their combined state |Ψ1⟩ ⊗ |Ψ2⟩ can be expressed as ψ1(x1)ψ2(x2), a function on X1 × X2, which gives the product of the respective probability measures. In other words, amplitudes of a non-entangled composite state are products of the original amplitudes, and the respective observables on systems 1 and 2 behave on these states as independent random variables. This strengthens the probabilistic interpretation explicated above.
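This product structure is easy to verify numerically; with two arbitrary normalized single-system states, the amplitudes of the combined (non-entangled) state are the pairwise products, so the outcome probabilities factorize:

import numpy as np

psi1 = np.array([0.6, 0.8])                      # state of system 1, norm 1
psi2 = np.array([1.0, 1.0j]) / np.sqrt(2)        # state of system 2, norm 1

psi12 = np.kron(psi1, psi2)                      # non-entangled combined state
p12 = np.abs(psi12) ** 2
p_factorized = np.kron(np.abs(psi1) ** 2, np.abs(psi2) ** 2)

print(np.allclose(p12, p_factorized))            # True: probabilities factorize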

Amplitudes in operators

The concept of amplitudes described above is relevant to quantum state vectors. It is also used in the context of unitary operators that are important in scattering theory, notably in the form of S-matrices. Whereas moduli of vector components squared, for a given vector, give a fixed probability distribution, moduli of matrix elements squared are interpreted as transition probabilities just as in a random process. Just as a finite-dimensional unit vector specifies a finite probability distribution, a finite-dimensional unitary matrix specifies transition probabilities between a finite number of states. Note that the columns of a unitary matrix, as vectors, have norm 1.
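A small sketch of this "transition probability" reading, using a random unitary built from a QR factorization: the squared moduli of the matrix elements behave like transition probabilities, and each column of squared moduli sums to 1 because the columns have unit norm:

import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                          # the Q factor of a complex matrix is unitary

transition_probs = np.abs(U) ** 2               # |U_ij|^2 read as j -> i transition probabilities
print(np.allclose(transition_probs.sum(axis=0), 1.0))   # each column sums to 1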

The "transitional" interpretation may be applied to L2s on non-discrete spaces as well.

Vertebral column

From Wikipedia, the free encyclopedia ...