
Uncertainty principle

From Wikipedia, the free encyclopedia

The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.

More formally, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position, x, and momentum, p. Such paired-variables are known as complementary variables or canonically conjugate variables.

First introduced in 1927 by German physicist Werner Heisenberg, the formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928:

σx σp ≥ ħ/2, where ħ = h/2π is the reduced Planck constant.

The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements.

Position–momentum

The superposition of several plane waves to form a wave packet. This wave packet becomes increasingly localized with the addition of many waves. The Fourier transform is a mathematical operation that separates a wave packet into its individual plane waves. The waves shown here are real for illustrative purposes only; in quantum mechanics the wave function is generally complex.

It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.

Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber.
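
The tradeoff described above can be checked numerically. The following sketch (my own illustration, not part of the article) builds a Gaussian wave packet, obtains the momentum-space amplitude with a fast Fourier transform, and prints the product of the two spreads; hbar is set to 1 for simplicity, and the grid sizes are arbitrary choices.

    # Illustrative sketch: Fourier-transform tradeoff between position and momentum spreads.
    import numpy as np

    hbar = 1.0
    x = np.linspace(-50, 50, 4096)
    dx = x[1] - x[0]

    def uncertainty_product(sigma):
        psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian amplitude, sigma_x = sigma
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize
        sx = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

        # momentum-space amplitude via FFT; p = hbar * k, phases do not affect |phi|^2
        phi = np.fft.fftshift(np.fft.fft(psi))
        k = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx)) * 2 * np.pi
        p = hbar * k
        dp = p[1] - p[0]
        phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)
        sp = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)
        return sx, sp

    for sigma in (0.5, 1.0, 2.0):
        sx, sp = uncertainty_product(sigma)
        print(f"sigma_x = {sx:.3f}, sigma_p = {sp:.3f}, product = {sx*sp:.3f} (bound hbar/2 = 0.5)")

Narrowing the packet (smaller sigma) raises the momentum spread by the same factor, so for these Gaussian examples the product sits at the Kennard bound.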

In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: in that case, a measurement of B does not have a unique associated value, as the system is not in an eigenstate of that observable.

Visualization

The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension.

The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform.

Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity of the particles corresponds to the probability density of finding the particle with position x or momentum component p.
Top: If the wavelength λ is unknown, so are the momentum p, wave-vector k and energy E (de Broglie relations). The particle is more localized in position space, so Δx is smaller, at the cost of a larger spread Δpx.
Bottom: If λ is known, so are p, k, and E. The particle is more localized in momentum space, so Δp is smaller, at the cost of a larger spread Δx.

Wave mechanics interpretation

Propagation of de Broglie waves in 1d—real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform, there is no definite position of the particle. As the amplitude increases above zero the curvature reverses sign, so the amplitude begins to decrease again, and vice versa—the result is an alternating amplitude: a wave.

According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond, is subject to the uncertainty principle.

The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is ψ(x) ∝ e^(ik0·x) = e^(ip0·x/ħ).

The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is P[a ≤ x ≤ b] = ∫_a^b |ψ(x)|² dx.

In the case of the single-mode plane wave, |ψ(x)|² is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet.

On the other hand, consider a wave function that is a sum of many waves, which we may write as ψ(x) ∝ Σn An·e^(ipn·x/ħ), where An represents the relative contribution of the mode pn to the overall total. The figures to the right show how with the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes, ψ(x) = (1/√(2πħ)) ∫ φ(p)·e^(ip·x/ħ) dp, with φ(p) representing the amplitude of these modes; φ(p) is called the wave function in momentum space. In mathematical terms, we say that φ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta.
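
As a rough numerical sketch of this superposition (an illustration with assumed numbers, using hbar = 1, not part of the article), the wave function below is assembled as a weighted sum of plane waves e^(ipx); the wider the range of momenta that contribute, the narrower the resulting packet.

    # Illustrative sketch: building a wave packet from plane waves with Gaussian weights.
    import numpy as np

    x = np.linspace(-40, 40, 4001)
    dx = x[1] - x[0]

    def packet_width(sigma_p, n_modes=801, p_max=6.0):
        """Sum plane waves exp(i p x) with Gaussian amplitudes and return the position spread."""
        p_modes = np.linspace(-p_max, p_max, n_modes)
        dp = p_modes[1] - p_modes[0]
        psi = np.zeros_like(x, dtype=complex)
        for p in p_modes:
            amplitude = np.exp(-p**2 / (4 * sigma_p**2))   # relative contribution of mode p
            psi += amplitude * np.exp(1j * p * x) * dp
        prob = np.abs(psi)**2
        prob /= prob.sum() * dx
        return np.sqrt((x**2 * prob).sum() * dx)

    for sigma_p in (0.1, 0.5, 2.0):
        sigma_x = packet_width(sigma_p)
        print(f"sigma_p = {sigma_p:.1f} -> sigma_x = {sigma_x:.2f}, product = {sigma_p * sigma_x:.2f}")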

One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ(x)|² is a probability density function for position, we calculate its standard deviation σx.

The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound.

Proof of the Kennard inequality using wave mechanics

We are interested in the variances of position and momentum, defined as σx² = ∫ x²·|ψ(x)|² dx − (∫ x·|ψ(x)|² dx)² and σp² = ∫ p²·|φ(p)|² dp − (∫ p·|φ(p)|² dp)².

Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form σx² = ∫ x²·|ψ(x)|² dx and σp² = ∫ p²·|φ(p)|² dp.

The function f(x) = x·ψ(x) can be interpreted as a vector in a function space. We can define an inner product for a pair of functions u(x) and v(x) in this vector space: ⟨u | v⟩ = ∫ u*(x)·v(x) dx, where the asterisk denotes the complex conjugate.

With this inner product defined, we note that the variance for position can be written as σx² = ∫ |x·ψ(x)|² dx = ⟨f | f⟩.

We can repeat this for momentum by interpreting the function p·φ(p) as a vector, but we can also take advantage of the fact that ψ(x) and φ(p) are Fourier transforms of each other. We evaluate the inverse Fourier transform through integration by parts: g(x) = (1/√(2πħ)) ∫ p·φ(p)·e^(ip·x/ħ) dp = −iħ·dψ(x)/dx, where the cancelled boundary term vanishes because the wave function vanishes at both infinities, and the remaining double integral is evaluated using the Dirac delta function (valid because ψ does not depend on p).

The term −iħ·∂/∂x is called the momentum operator in position space. Applying Plancherel's theorem, we see that the variance for momentum can be written as σp² = ∫ |p·φ(p)|² dp = ∫ |−iħ·dψ(x)/dx|² dx = ⟨g | g⟩.

The Cauchy–Schwarz inequality asserts that σx²·σp² = ⟨f | f⟩·⟨g | g⟩ ≥ |⟨f | g⟩|².

The modulus squared of any complex number z can be expressed as |z|² = (Re z)² + (Im z)² ≥ (Im z)² = ((z − z*)/2i)². We let z = ⟨f | g⟩ and z* = ⟨g | f⟩ and substitute these into the expression above to get |⟨f | g⟩|² ≥ ((⟨f | g⟩ − ⟨g | f⟩)/2i)².

All that remains is to evaluate these inner products. Integrating by parts and using the normalization of the wave function, ⟨f | g⟩ − ⟨g | f⟩ = −iħ ∫ x·d(|ψ(x)|²)/dx dx = iħ ∫ |ψ(x)|² dx = iħ.

Plugging this into the above inequalities, we get σx²·σp² ≥ ħ²/4, and taking the square root σx·σp ≥ ħ/2,

with equality if and only if the vectors x·ψ(x) and −iħ·dψ(x)/dx are linearly dependent, as happens for Gaussian wave packets. Note that the only physics involved in this proof was that ψ(x) and φ(p) are wave functions for position and momentum, which are Fourier transforms of each other. A similar result would hold for any pair of conjugate variables.
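
As a sanity check of the equality case (my own illustration, assuming a real Gaussian wave function), the symbolic computation below evaluates the two variances used in the proof and confirms that the product of the standard deviations equals ħ/2.

    # Symbolic check that a Gaussian wave function saturates the Kennard bound.
    import sympy as sp

    x, sigma, hbar = sp.symbols('x sigma hbar', positive=True)
    psi = (sp.pi * sigma**2) ** sp.Rational(-1, 4) * sp.exp(-x**2 / (2 * sigma**2))

    var_x = sp.integrate(x**2 * psi**2, (x, -sp.oo, sp.oo))                   # <x^2>; the mean is zero
    var_p = sp.integrate((hbar * sp.diff(psi, x))**2, (x, -sp.oo, sp.oo))     # integral of |-i hbar psi'|^2

    print(sp.simplify(sp.sqrt(var_x * var_p)))    # prints hbar/2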

Matrix mechanics interpretation

In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators Â and B̂, one defines their commutator as [Â, B̂] = ÂB̂ − B̂Â. In the case of position and momentum, the commutator is the canonical commutation relation [x̂, p̂] = iħ.

The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |ψ⟩ be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that x̂|ψ⟩ = x0|ψ⟩. Applying the commutator to |ψ⟩ yields [x̂, p̂]|ψ⟩ = (x̂p̂ − p̂x̂)|ψ⟩ = (x̂ − x0·Î)·p̂|ψ⟩, where Î is the identity operator.

Suppose, for the sake of proof by contradiction, that |ψ⟩ is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then one could write (x̂ − x0·Î)·p̂|ψ⟩ = (x̂ − x0·Î)·p0|ψ⟩ = 0. On the other hand, the above canonical commutation relation requires that [x̂, p̂]|ψ⟩ = iħ|ψ⟩ ≠ 0. This implies that no quantum state can simultaneously be both a position and a momentum eigenstate.

When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations σx = √(⟨x̂²⟩ − ⟨x̂⟩²) and σp = √(⟨p̂²⟩ − ⟨p̂⟩²).

As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.

Quantum harmonic oscillator stationary states

Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators: x̂ = √(ħ/(2mω))·(â + â†) and p̂ = i·√(mħω/2)·(â† − â).

Using the standard rules for creation and annihilation operators acting on the energy eigenstates |n⟩, the variances may be computed directly: σx² = (ħ/(2mω))·(2n + 1) and σp² = (mħω/2)·(2n + 1). The product of these standard deviations is then σx·σp = ħ·(n + 1/2).

In particular, the above Kennard bound is saturated for the ground state n=0, for which the probability density is just the normal distribution.
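
The following numerical sketch (an illustration of the statement above, with ħ = m = ω = 1 assumed) builds the position and momentum operators from truncated creation and annihilation matrices and checks that the product of the standard deviations in the energy eigenstates is ħ(n + 1/2).

    # Ladder-operator check of sigma_x * sigma_p = hbar * (n + 1/2) for oscillator eigenstates.
    import numpy as np

    N = 40                                          # Fock-space truncation
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator
    ad = a.conj().T                                 # creation operator
    x = (a + ad) / np.sqrt(2)                       # x in units of sqrt(hbar/(m*omega))
    p = 1j * (ad - a) / np.sqrt(2)                  # p in units of sqrt(m*hbar*omega)

    for n in (0, 1, 2, 5):
        ket = np.zeros(N); ket[n] = 1.0
        var_x = np.real(ket @ (x @ x) @ ket) - np.real(ket @ x @ ket) ** 2
        var_p = np.real(ket @ (p @ p) @ ket) - np.real(ket @ p @ ket) ** 2
        print(f"n = {n}: sigma_x * sigma_p = {np.sqrt(var_x * var_p):.3f} (expected {n + 0.5})")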

Quantum harmonic oscillators with Gaussian initial condition

Position (blue) and momentum (red) probability densities for an initial Gaussian distribution. From top to bottom, the animations show the cases Ω = ω, Ω = 2ω, and Ω = ω/2. Note the tradeoff between the widths of the distributions.

In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0, where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the full time-dependent solution. After many cancellations, the probability densities reduce to normal distributions in position and momentum whose means oscillate at frequency ω, where we have used the notation N(μ, σ²) to denote a normal distribution of mean μ and variance σ². Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as a periodic function of time.

From these relations we can conclude that σx(t)·σp(t) ≥ ħ/2, with equality at all times only when Ω = ω.

Coherent states

A coherent state is a right eigenstate of the annihilation operator, â|α⟩ = α|α⟩, which may be represented in terms of Fock states as |α⟩ = e^(−|α|²/2)·Σn (αⁿ/√(n!))·|n⟩.

In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, σx² = ħ/(2mω) and σp² = mħω/2. Therefore, every coherent state saturates the Kennard bound, σx·σp = ħ/2, with position and momentum each contributing an equal share in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound, although the individual contributions of position and momentum need not be balanced in general.

Particle in a box

Consider a particle in a one-dimensional box of length L. The eigenfunctions in position space are ψn(x) = √(2/L)·sin(kn·x) for 0 ≤ x ≤ L (and zero elsewhere), where kn = nπ/L with n a positive integer, and the momentum-space eigenfunctions are their Fourier transforms; here we have used the de Broglie relation p = ħk. The variances of x and p can be calculated explicitly: σx² = (L²/12)·(1 − 6/(n²π²)) and σp² = (ħnπ/L)².

The product of the standard deviations is therefore σx·σp = (ħ/2)·√(n²π²/3 − 2). For all n, the quantity n²π²/3 − 2 is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when n = 1, in which case σx·σp = (ħ/2)·√(π²/3 − 2) ≈ 0.568·ħ.
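
The variances quoted above can be reproduced symbolically. The sketch below (an illustration, with ħ kept symbolic) computes σx and σp for the box eigenfunctions and prints the uncertainty product.

    # Symbolic check of the particle-in-a-box uncertainty product.
    import sympy as sp

    x, L, hbar = sp.symbols('x L hbar', positive=True)
    n = sp.symbols('n', integer=True, positive=True)
    psi = sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)          # eigenfunction on [0, L]

    mean_x = sp.integrate(x * psi**2, (x, 0, L))
    var_x = sp.simplify(sp.integrate(x**2 * psi**2, (x, 0, L)) - mean_x**2)
    var_p = sp.simplify(sp.integrate((hbar * sp.diff(psi, x))**2, (x, 0, L)))   # <p> = 0 here

    product = sp.simplify(sp.sqrt(var_x * var_p))
    print(product)                       # algebraically equal to (hbar/2)*sqrt(n**2*pi**2/3 - 2)
    print(product.subs(n, 1).evalf())    # approximately 0.568*hbar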

Constant momentum

Position space probability density of an initially Gaussian state moving at minimally uncertain, constant momentum in free space

Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0, with a width set by a reference scale x0 (cf. nondimensionalization). If the state is allowed to evolve in free space, the time-dependent momentum and position space wave functions can be computed; the momentum distribution keeps its initial width while the position distribution spreads.

Since the mean momentum stays at p0 and σp does not change in time, this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position grows with time, so that the uncertainty product can only increase with time.

Mathematical formalism

Starting with Kennard's derivation of position–momentum uncertainty, Howard Percy Robertson developed a formulation for arbitrary Hermitian operators Ô, expressed in terms of their standard deviation σO = √(⟨Ô²⟩ − ⟨Ô⟩²), where the brackets ⟨·⟩ indicate an expectation value of the observable represented by operator Ô. For a pair of operators Â and B̂, define their commutator as [Â, B̂] = ÂB̂ − B̂Â, and the Robertson uncertainty relation is given by σA·σB ≥ (1/2)·|⟨[Â, B̂]⟩|.

Erwin Schrödinger showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation,

    σA²·σB² ≥ |(1/2)·⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩|² + |(1/(2i))·⟨[Â, B̂]⟩|²,

where the anticommutator, {Â, B̂} = ÂB̂ + B̂Â, is used.

Proof of the Schrödinger uncertainty relation

The derivation shown here incorporates and builds off of those shown in Robertson, Schrödinger and standard textbooks such as Griffiths. For any Hermitian operator Â, based upon the definition of variance, we have σA² = ⟨(Â − ⟨Â⟩)Ψ | (Â − ⟨Â⟩)Ψ⟩. We let |f⟩ = |(Â − ⟨Â⟩)Ψ⟩ and thus σA² = ⟨f | f⟩.

Similarly, for any other Hermitian operator B̂ in the same state, σB² = ⟨(B̂ − ⟨B̂⟩)Ψ | (B̂ − ⟨B̂⟩)Ψ⟩ = ⟨g | g⟩ for |g⟩ = |(B̂ − ⟨B̂⟩)Ψ⟩.

The product of the two deviations can thus be expressed as

    σA²·σB² = ⟨f | f⟩·⟨g | g⟩.  (1)

In order to relate the two vectors |f⟩ and |g⟩, we use the Cauchy–Schwarz inequality, which is defined as ⟨f | f⟩·⟨g | g⟩ ≥ |⟨f | g⟩|², and thus Equation (1) can be written as

    σA²·σB² ≥ |⟨f | g⟩|².  (2)

Since ⟨f | g⟩ is in general a complex number, we use the fact that the modulus squared of any complex number z is defined as |z|² = z·z*, where z* is the complex conjugate of z. The modulus squared can also be expressed as

    |z|² = (Re z)² + (Im z)² = ((z + z*)/2)² + ((z − z*)/2i)².  (3)

We let z = ⟨f | g⟩ and z* = ⟨g | f⟩ and substitute these into the equation above to get

    |⟨f | g⟩|² = ((⟨f | g⟩ + ⟨g | f⟩)/2)² + ((⟨f | g⟩ − ⟨g | f⟩)/2i)².  (4)

The inner product ⟨f | g⟩ is written out explicitly as ⟨f | g⟩ = ⟨(Â − ⟨Â⟩)Ψ | (B̂ − ⟨B̂⟩)Ψ⟩, and using the fact that Â and B̂ are Hermitian operators, we find ⟨f | g⟩ = ⟨Ψ | (Â − ⟨Â⟩)(B̂ − ⟨B̂⟩)Ψ⟩ = ⟨ÂB̂⟩ − ⟨Â⟩⟨B̂⟩.

Similarly it can be shown that ⟨g | f⟩ = ⟨B̂Â⟩ − ⟨Â⟩⟨B̂⟩.

Thus, we have ⟨f | g⟩ − ⟨g | f⟩ = ⟨ÂB̂⟩ − ⟨B̂Â⟩ = ⟨[Â, B̂]⟩ and ⟨f | g⟩ + ⟨g | f⟩ = ⟨ÂB̂⟩ + ⟨B̂Â⟩ − 2⟨Â⟩⟨B̂⟩ = ⟨{Â, B̂}⟩ − 2⟨Â⟩⟨B̂⟩.

We now substitute the above two equations back into Eq. (4) and get

    |⟨f | g⟩|² = ((1/2)·⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩)² + ((1/(2i))·⟨[Â, B̂]⟩)².

Substituting the above into Equation (2) we get the Schrödinger uncertainty relation

    σA·σB ≥ √( ((1/2)·⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩)² + ((1/(2i))·⟨[Â, B̂]⟩)² ).

This proof has an issue related to the domains of the operators involved. For the proof to make sense, the vector B̂|Ψ⟩ has to be in the domain of the unbounded operator Â, which is not always the case. In fact, the Robertson uncertainty relation is false if Â is an angle variable and B̂ is the derivative with respect to this variable. In this example, the commutator is a nonzero constant—just as in the Heisenberg uncertainty relation—and yet there are states where the product of the uncertainties is zero. (See the counterexample section below.) This issue can be overcome by using a variational method for the proof, or by working with an exponentiated version of the canonical commutation relations.

Note that in the general form of the Robertson–Schrödinger uncertainty relation, there is no need to assume that the operators Â and B̂ are self-adjoint operators. It suffices to assume that they are merely symmetric operators. (The distinction between these two notions is generally glossed over in the physics literature, where the term Hermitian is used for either or both classes of operators. See Chapter 9 of Hall's book for a detailed discussion of this important but technical distinction.)

Phase space

In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function W(x, p) with star product ★ and a function f, the following is generally true: ⟨f*★f⟩ = ∫ (f*★f)·W(x, p) dx dp ≥ 0.

Choosing f = a + bx + cp, we arrive at a quadratic form in the coefficients a, b and c, whose matrix is built from the expectation values of 1, x, p and their star products.

Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative.

The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant, or, explicitly, after algebraic manipulation, σx²·σp² ≥ ((1/2)·⟨x̂p̂ + p̂x̂⟩ − ⟨x̂⟩⟨p̂⟩)² + ħ²/4.

Examples

Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below.

  • Position–linear momentum uncertainty relation: for the position and linear momentum operators, the canonical commutation relation [x̂, p̂] = iħ implies the Kennard inequality from above: σx·σp ≥ ħ/2.
  • Angular momentum uncertainty relation: For two orthogonal components of the total angular momentum operator of an object: σJi·σJj ≥ (ħ/2)·|⟨Jk⟩|, where i, j, k are distinct and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for [Jx, Jy] = iħJz, a choice Â = Jx, B̂ = Jy in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, Jx² + Jy² + Jz²) from below and thus yields useful constraints such as j(j + 1) ≥ m(m + 1), and hence j ≥ m, among others. (A spin-1/2 numerical illustration is sketched after this list.)
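
As a minimal illustration of the angular momentum relation above (my own spin-1/2 example with ħ = 1, not taken from the article), the check below uses the Pauli matrices to compare σJx·σJy with (ħ/2)|⟨Jz⟩| for two simple states.

    # Spin-1/2 check of sigma_Jx * sigma_Jy >= (hbar/2) |<Jz>|.
    import numpy as np

    hbar = 1.0
    Jx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
    Jy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
    Jz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

    def stdev(op, state):
        mean = np.real(state.conj() @ op @ state)
        mean_sq = np.real(state.conj() @ op @ op @ state)
        return np.sqrt(max(mean_sq - mean**2, 0.0))

    states = [("spin up along z", np.array([1, 0], dtype=complex)),
              ("spin up along x", np.array([1, 1], dtype=complex) / np.sqrt(2))]
    for name, state in states:
        lhs = stdev(Jx, state) * stdev(Jy, state)
        rhs = hbar / 2 * abs(np.real(state.conj() @ Jz @ state))
        print(f"{name}: sigma_Jx*sigma_Jy = {lhs:.3f} >= (hbar/2)|<Jz>| = {rhs:.3f}")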

Limitations

The derivation of the Robertson inequality for operators Â and B̂ requires ÂB̂ψ and B̂Âψ to be defined. There are quantum systems where these conditions are not valid. One example is a quantum particle on a ring, where the wave function depends on an angular variable θ in the interval [0, 2π]. Define "position" and "momentum" operators Â and B̂ by Âψ(θ) = θ·ψ(θ) and B̂ψ(θ) = −iħ·dψ/dθ, with periodic boundary conditions on B̂. The definition of Â depends on choosing the range of θ to run from 0 to 2π. These operators satisfy the usual commutation relations for position and momentum operators, [Â, B̂] = iħ. More precisely, ÂB̂ψ − B̂Âψ = iħψ whenever both ÂB̂ψ and B̂Âψ are defined, and the space of such ψ is a dense subspace of the quantum Hilbert space.

Now let ψ be any of the eigenstates of B̂, which are given by ψ(θ) = e^(inθ) for integer n. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator Â is bounded, since θ ranges over a bounded interval. Thus, in the state ψ, the uncertainty of B̂ is zero and the uncertainty of Â is finite, so that σA·σB = 0. The Robertson uncertainty principle does not apply in this case: ψ is not in the domain of the operator B̂Â, since multiplication by θ disrupts the periodic boundary conditions imposed on B̂.

For the usual position and momentum operators X̂ and P̂ on the real line, no such counterexamples can occur. As long as σx and σp are defined in the state ψ, the Heisenberg uncertainty principle holds, even if ψ fails to be in the domain of X̂P̂ or of P̂X̂.

Mixed states

The Robertson–Schrödinger uncertainty relation can be improved by noting that it must hold for all components ρk in any decomposition of the density matrix given as ρ = Σk pk·ρk, where the probabilities satisfy pk ≥ 0 and Σk pk = 1. Applying the relation to each component very often gives a bound larger than that of the original Robertson–Schrödinger uncertainty relation: we calculate the bound for the mixed components of the quantum state rather than for the quantum state itself, and compute an average of their square roots. The resulting expression is stronger than the Robertson–Schrödinger uncertainty relation; its right-hand side is a concave roof over the decompositions of the density matrix. The improved relation is saturated by all single-qubit quantum states.

With similar arguments, one can derive a relation with a convex roof on the right-hand side, involving the quantum Fisher information FQ, where the density matrix is decomposed into pure states as ρ = Σk pk·|Ψk⟩⟨Ψk|. The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four.

A simpler inequality follows without a convex roof, and it is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have FQ[ρ, B̂] ≤ 4·σB², while for pure states the equality holds.

The Maccone–Pati uncertainty relations

The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Lorenzo Maccone and Arun K. Pati give non-trivial bounds on the sum of the variances for two incompatible observables. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., work due to Yichen Huang.) For two non-commuting observables Â and B̂ the first stronger uncertainty relation is given by σA² + σB² ≥ ±i⟨Ψ | [Â, B̂] | Ψ⟩ + |⟨Ψ | (Â ± iB̂) | Ψ̄⟩|², where |Ψ⟩ is the state of the system, |Ψ̄⟩ is a normalized vector that is orthogonal to the state of the system, and one should choose the sign of ±i⟨Ψ | [Â, B̂] | Ψ⟩ to make this real quantity a positive number.

The second stronger uncertainty relation is given by σA² + σB² ≥ (1/2)·|⟨Ψ̄(A+B) | (Â + B̂) | Ψ⟩|², where |Ψ̄(A+B)⟩ is a state orthogonal to |Ψ⟩. The form of |Ψ̄(A+B)⟩ implies that the right-hand side of the new uncertainty relation is nonzero unless |Ψ⟩ is an eigenstate of (Â + B̂). One may note that |Ψ⟩ can be an eigenstate of (Â + B̂) without being an eigenstate of either Â or B̂. However, when |Ψ⟩ is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless |Ψ⟩ is an eigenstate of both.

Energy–time

An energy–time uncertainty relation like ΔE·Δt ≳ ħ has a long, controversial history; the meaning of ΔE and Δt varies and different formulations have different arenas of validity. However, one well-known application is both well established and experimentally verified: the connection between the lifetime of a resonance state, τ, and its energy width, ΔE ≈ ħ/τ. In particle physics, widths from experimental fits to the Breit–Wigner energy distribution are used to characterize the lifetime of quasi-stable or decaying states.

An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width).
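
As a back-of-the-envelope illustration of the lifetime–linewidth connection just described (my own example; the 10 ns lifetime is an assumed value, not a number from the article), one can convert a lifetime into a natural linewidth using ΔE ≈ ħ/τ.

    # Lifetime to natural linewidth, using Delta_E ~ hbar / tau.
    import math

    hbar_eVs = 6.582e-16                 # reduced Planck constant in eV*s
    tau = 10e-9                          # assumed excited-state lifetime: 10 ns

    delta_E = hbar_eVs / tau             # energy width of the emitted line, in eV
    delta_nu = 1 / (2 * math.pi * tau)   # corresponding frequency width, in Hz

    print(f"energy width ~ {delta_E:.2e} eV")
    print(f"frequency width ~ {delta_nu / 1e6:.1f} MHz")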

Time in quantum mechanics

The concept of "time" in quantum mechanics offers many challenges. There is no quantum theory of time measurement; relativity is both fundamental to time and difficult to include in quantum mechanics. While position and momentum are associated with a single particle, time is a system property: it has no operator needed for the Robertson–Schrödinger relation. The mathematical treatments of stable and unstable quantum systems differ. These factors combine to make energy–time uncertainty principles controversial.

Three notions of "time" can be distinguished: external, intrinsic, and observable. External or laboratory time is seen by the experimenter; intrinsic time is inferred by changes in dynamic variables, like the hands of a clock or the motion of a free particle; observable time concerns time as an observable, the measurement of time-separated events.

An external-time energy–time uncertainty principle might say that measuring the energy of a quantum system to an accuracy requires a time interval . However, Yakir Aharonov and David Bohm have shown that, in some quantum systems, energy can be measured accurately within an arbitrarily short time: external-time uncertainty principles are not universal.

Intrinsic time is the basis for several formulations of energy–time uncertainty relations, including the Mandelstam–Tamm relation discussed in the next section. A physical system with an intrinsic time closely matching the external laboratory time is called a "clock".

Observable time, measuring time between two events, remains a challenge for quantum theories; some progress has been made using positive operator-valued measure concepts.

Mandelstam–Tamm

In 1945, Leonid Mandelstam and Igor Tamm derived a non-relativistic time–energy uncertainty relation as follows. From Heisenberg mechanics, the generalized Ehrenfest theorem for an observable B without explicit time dependence, represented by a self-adjoint operator B̂, relates the time dependence of the average value of B̂ to the average of its commutator with the Hamiltonian: d⟨B̂⟩/dt = (i/ħ)·⟨[Ĥ, B̂]⟩.

The value of ⟨[Ĥ, B̂]⟩ is then substituted in the Robertson uncertainty relation for the energy operator Ĥ and B̂, σE·σB ≥ (1/2)·|⟨[Ĥ, B̂]⟩|, giving σE·(σB / |d⟨B̂⟩/dt|) ≥ ħ/2 (whenever the denominator is nonzero). While this is a universal result, it depends upon the observable chosen and on the fact that the deviations σE and σB are computed for a particular state. Identifying ΔE ≡ σE and the characteristic time τB ≡ σB / |d⟨B̂⟩/dt| gives an energy–time relationship ΔE·τB ≥ ħ/2. Although τB has the dimension of time, it is different from the time parameter t that enters the Schrödinger equation. This τB can be interpreted as the time for which the expectation value of the observable, ⟨B̂⟩, changes by an amount equal to one standard deviation. Examples:

  • The time a free quantum particle passes a point in space is more uncertain as the energy of the state is more precisely controlled: Since the time spread is related to the particle position spread and the energy spread is related to the momentum spread, this relation is directly related to position–momentum uncertainty.
  • A Delta particle, a quasistable composite of quarks related to protons and neutrons, has a lifetime of about 10^−23 s, so its measured mass, equivalent to the energy 1232 MeV/c², varies by ±120 MeV/c²; this variation is intrinsic and not caused by measurement errors.
  • Two energy states ψ1, ψ2 with energies E1, E2 can be superimposed to create a composite state
ψ(t) = a·ψ1·e^(−iE1·t/ħ) + b·ψ2·e^(−iE2·t/ħ).
The probability amplitude of this state has a time-dependent interference term proportional to cos((E2 − E1)·t/ħ).
The oscillation period varies inversely with the energy difference: T = 2πħ/(E2 − E1).

Each example has a different meaning for the time uncertainty, according to the observable and state used.

Quantum field theory

Some formulations of quantum field theory use temporary electron–positron pairs in their calculations, called virtual particles. The mass-energy and lifetime of these particles are related by the energy–time uncertainty relation. The energy of a quantum system is not known with enough precision to limit its behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution.

The energy–time uncertainty principle does not imply a temporary violation of conservation of energy; it does not imply that energy can be "borrowed" from the universe as long as it is "returned" within a short amount of time. The energy of the universe is not an exactly known parameter at all times. When events transpire at very short time intervals, there is uncertainty in the energy of these events.

Harmonic analysis

In the context of harmonic analysis the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds: (∫ x²·|f(x)|² dx)·(∫ ξ²·|f̂(ξ)|² dξ) ≥ ‖f‖₂⁴ / (16π²).

Further mathematical uncertainty inequalities, including the entropic uncertainty discussed below, hold between a function f and its Fourier transform f̂.

Signal processing

In the context of time–frequency analysis uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain)—see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies σt·σf ≥ 1/(4π), where σt and σf are the standard deviations of the time and frequency energy concentrations respectively. The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet). [For the un-squared Gaussian (i.e. the signal amplitude) and its un-squared Fourier transform magnitude the product is 1/(2π); squaring reduces each standard deviation by a factor of √2.] Another common measure is the product of the time and frequency full widths at half maximum (of the power/energy), which for the Gaussian equals 2·ln 2/π ≈ 0.44 (see bandwidth-limited pulse).
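
The Gabor limit quoted above can be verified numerically. The sketch below (an illustration with an arbitrary sample rate and pulse width) computes the spreads of the energy densities |g(t)|² and |G(f)|² for a Gaussian pulse and prints the time-bandwidth product, which sits at the minimum 1/(4π) ≈ 0.0796.

    # Time-bandwidth product of a Gaussian pulse.
    import numpy as np

    fs = 1000.0                                  # assumed sample rate, Hz
    t = np.arange(-5, 5, 1 / fs)
    g = np.exp(-t**2 / (2 * 0.05**2))            # Gaussian pulse, amplitude width 50 ms

    def spread(axis, density):
        density = density / density.sum()
        mean = (axis * density).sum()
        return np.sqrt(((axis - mean)**2 * density).sum())

    G = np.fft.fft(g)
    f = np.fft.fftfreq(len(g), d=1 / fs)         # two-sided frequency axis

    sigma_t = spread(t, np.abs(g)**2)
    sigma_f = spread(f, np.abs(G)**2)
    print(f"sigma_t * sigma_f = {sigma_t * sigma_f:.4f} (Gabor limit 1/(4*pi) = {1/(4*np.pi):.4f})")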

Stated differently, one cannot simultaneously sharply localize a signal f in both the time domain and frequency domain.

When applied to filters, the result implies that one cannot simultaneously achieve high temporal resolution and high frequency resolution; a concrete example is the resolution issue of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off.

Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other.

As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier.

Discrete Fourier transform

Let {xn} := x0, x1, ..., xN−1 be a sequence of N complex numbers and {Xk} := X0, X1, ..., XN−1 be its discrete Fourier transform.

Denote by ‖x‖0 the number of non-zero elements in the time sequence x0, x1, ..., xN−1 and by ‖X‖0 the number of non-zero elements in the frequency sequence X0, X1, ..., XN−1. Then ‖x‖0 · ‖X‖0 ≥ N.

This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa).
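
A quick constructed example (not from the article) of the sharp case: a Dirac comb on the subgroup {0, 3, 6, 9} of the integers modulo 12 has 4 non-zero time samples, its DFT has 3 non-zero frequency samples, and the product equals N.

    # Support counts of a Dirac comb and its DFT, checking ||x||_0 * ||X||_0 >= N.
    import numpy as np

    N = 12
    x = np.zeros(N)
    x[::3] = 1.0                      # comb supported on multiples of 3 (4 points)
    X = np.fft.fft(x)

    n_time = np.count_nonzero(np.abs(x) > 1e-9)
    n_freq = np.count_nonzero(np.abs(X) > 1e-9)
    print(n_time, n_freq, n_time * n_freq, ">=", N)   # 4, 3, 12 >= 12 (equality)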

More generally, if T and W are subsets of the integers modulo N, let LT and LW denote the time-limiting and band-limiting operators, respectively. Then the norm of the composition LT LW is bounded in terms of |T|, |W| and N, where the norm is the operator norm of operators on the Hilbert space of functions on the integers modulo N. This inequality has implications for signal reconstruction.

When N is a prime number, a stronger inequality holds: ‖x‖0 + ‖X‖0 ≥ N + 1. Discovered by Terence Tao, this inequality is also sharp.

Benedicks's theorem

Amrein–Berthier and Benedicks's theorem intuitively says that the set of points where f is non-zero and the set of points where ƒ̂ is non-zero cannot both be small.

Specifically, it is impossible for a function f in L2(R) and its Fourier transform f̂ to both be supported on sets of finite Lebesgue measure. A more quantitative version is ∫ |f|² ≤ C·e^(C·|S|·|Σ|) · ( ∫_(Sᶜ) |f|² + ∫_(Σᶜ) |f̂|² ), where S and Σ are sets of finite measure.

One expects that the factor C·e^(C·|S|·|Σ|) may be replaced by C·e^(C·(|S|·|Σ|)^(1/d)), which is only known if either S or Σ is convex.

Hardy's uncertainty principle

The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for f and f̂ to both be "very rapidly decreasing". Specifically, if f in L²(R) is such that |f(x)| ≤ C·(1 + |x|)^N·e^(−aπx²) and |f̂(ξ)| ≤ C·(1 + |ξ|)^N·e^(−bπξ²) (C > 0, N an integer), then, if ab > 1, f = 0, while if ab = 1, then there is a polynomial P of degree at most N such that f(x) = P(x)·e^(−aπx²).

This was later improved as follows: if f ∈ L²(R^d) is such that ∫∫ |f(x)|·|f̂(ξ)|·e^(π|⟨x, ξ⟩|) / (1 + |x| + |ξ|)^N dx dξ < ∞, then f(x) = P(x)·e^(−π⟨Ax, x⟩), where P is a polynomial of degree (N − d)/2 and A is a real d × d positive definite matrix.

This result was stated in Beurling's complete works without proof and proved in Hörmander (the one-dimensional case) and Bonami, Demange, and Jaming for the general case. Note that Hörmander–Beurling's version implies the case ab > 1 in Hardy's Theorem, while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.

A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in ref.

Theorem. If a tempered distribution f ∈ S′(R^d) is such that e^(π|x|²)·f ∈ S′(R^d) and e^(π|ξ|²)·f̂ ∈ S′(R^d), then f(x) = P(x)·e^(−π⟨Ax, x⟩), for some convenient polynomial P and real positive definite matrix A of type d × d.

Additional uncertainty relations

Heisenberg limit

In quantum metrology, and especially interferometry, the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter) and the energy is given by the number of photons used in an interferometer. Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource. Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten.

Systematic and statistical errors

The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation σ. Heisenberg's original version, however, dealt with a systematic error: a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect.

If we let ε_A represent the error (i.e., inaccuracy) of a measurement of an observable A and η_B the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Masanao Ozawa, encompassing both systematic and statistical errors, holds:

    ε_A·η_B + ε_A·σ_B + σ_A·η_B ≥ (1/2)·|⟨[Â, B̂]⟩|.

Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as ε_A·η_B ≥ (1/2)·|⟨[Â, B̂]⟩|.

The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years. Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors σ_A and σ_B. There is increasing experimental evidence that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality.

Using the same formalism, it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time):

The two simultaneous measurements on A and B are necessarily unsharp or weak.

It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding the Robertson relation σ_A·σ_B ≥ (1/2)·|⟨[Â, B̂]⟩| and the Ozawa relation we obtain ε_A·η_B + ε_A·σ_B + σ_A·η_B + σ_A·σ_B ≥ |⟨[Â, B̂]⟩|. The four terms can be factored as (ε_A + σ_A)·(η_B + σ_B) ≥ |⟨[Â, B̂]⟩|. Defining ε̄_A = ε_A + σ_A as the inaccuracy in the measured values of the variable A and η̄_B = η_B + σ_B as the resulting fluctuation in the conjugate variable B, Kazuo Fujikawa established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors: ε̄_A·η̄_B ≥ |⟨[Â, B̂]⟩|.

Quantum entropic uncertainty principle

For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period. Other examples include highly bimodal distributions, or unimodal distributions with divergent variance.

A solution that overcomes these issues is an uncertainty based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic uncertainty. This conjecture, also studied by I. I. Hirschman and proven in 1975 by W. Beckner and by Iwo Bialynicki-Birula and Jerzy Mycielski, is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b) where

    f(a) = ∫ g(b)·e^(2πiab) db    and    g(b) = ∫ f(a)·e^(−2πiab) da,

the Shannon information entropies H_a = −∫ |f(a)|²·log|f(a)|² da and H_b = −∫ |g(b)|²·log|g(b)|² db are subject to the following constraint,

    H_a + H_b ≥ log(e/2),

where the logarithms may be in any base.

The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(p) have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by H_x = −∫ |ψ(x)|²·ln(x0·|ψ(x)|²) dx and H_p = −∫ |φ(p)|²·ln(p0·|φ(p)|²) dp, where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wave function φ(p), the above constraint can be written for the corresponding entropies as

    H_x + H_p ≥ log(e·h / (2·x0·p0)),

where h is the Planck constant.

Depending on one's choice of the x0·p0 product, the expression may be written in many ways. If x0·p0 is chosen to be h, then H_x + H_p ≥ log(e/2).

If, instead, x0·p0 is chosen to be ħ, then H_x + H_p ≥ log(eπ).

If x0 and p0 are chosen to be unity in whatever system of units are being used, then H_x + H_p ≥ log(e·h/2), where h is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wavefunctions in more than one spatial dimension.

The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities (equivalently, from the fact that normal distributions maximize the entropy of all distributions with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because

    σx·σp ≥ (x0·p0/(2eπ))·exp(H_x + H_p) ≥ (x0·p0/(2eπ))·(e·h/(2·x0·p0)) = ħ/2.

In other words, the Heisenberg uncertainty principle σx·σp ≥ ħ/2 is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall that the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof).
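
The saturation by the normal distribution can be made concrete with a small calculation (my own illustration, assuming ħ = 1 and natural logarithms): for a minimum-uncertainty Gaussian, σp = ħ/(2σx), and the closed-form entropy of a normal distribution, (1/2)·ln(2πeσ²), makes H_x + H_p land exactly on the bound ln(eπħ).

    # Entropic sum for minimum-uncertainty Gaussians versus the bound ln(e*pi*hbar).
    import numpy as np

    hbar = 1.0
    for sigma_x in (0.3, 1.0, 3.0):
        sigma_p = hbar / (2 * sigma_x)                   # minimum-uncertainty Gaussian
        H_x = 0.5 * np.log(2 * np.pi * np.e * sigma_x**2)
        H_p = 0.5 * np.log(2 * np.pi * np.e * sigma_p**2)
        print(f"H_x + H_p = {H_x + H_p:.4f}, bound = {np.log(np.e * np.pi * hbar):.4f}")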

A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is P[xj] = ∫ |ψ(x)|² dx, with the integral taken over the jth bin.

To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as H_x = −Σj P[xj]·ln P[xj].

Under the above definition, the entropic uncertainty relation is

    H_x + H_p ≥ ln(e/2) − ln(δx·δp / h).

Here we note that δx δp/h is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research.

Uncertainty relation with three angular momentum components

For a particle of total angular momentum j the following uncertainty relation holds: σJx² + σJy² + σJz² ≥ j·ħ², where Jx, Jy, Jz are angular momentum components. The relation can be derived from ⟨Jx² + Jy² + Jz²⟩ = j(j + 1)·ħ² and from the fact that the length of the mean spin vector is at most j·ħ. The relation can be strengthened in terms of the quantum Fisher information.

History

In 1925 Heisenberg published the Umdeutung (reinterpretation) paper, in which he showed that a central aspect of quantum theory was non-commutativity: the theory implied that the relative order of position and momentum measurements was significant. Working with Max Born and Pascual Jordan, he continued to develop matrix mechanics, which would become the first modern formulation of quantum mechanics.

Werner Heisenberg and Niels Bohr

In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. Writing to Wolfgang Pauli in February 1927, he worked out the basic concepts.

In his celebrated 1927 paper "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. His paper gave an analysis in terms of a microscope that Bohr showed was incorrect; Heisenberg included an addendum to the publication.

In his 1930 Chicago lectures he refined his principle.

Later work broadened the concept. Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote:

It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.

Kennard in 1927 first proved the modern inequality:

σx·σp ≥ ħ/2, where ħ = h/2π and σx, σp are the standard deviations of position and momentum. (Heisenberg only proved this relation for the special case of Gaussian states.) In 1929 Robertson generalized the inequality to all observables, and in 1930 Schrödinger extended the form to allow non-zero covariance of the operators; this result is referred to as the Robertson–Schrödinger inequality.

Terminology and translation

Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit" ("imprecision") to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit" ("uncertainty"). Later on, he always used "Unbestimmtheit" ("indefiniteness"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, only the English word "uncertainty" was used, and it became the standard term in the English language.

Heisenberg's microscope

Heisenberg's gamma-ray microscope for locating an electron (shown in blue). The incoming gamma ray (shown in green) is scattered by the electron up into the microscope's aperture angle θ. The scattered gamma-ray is shown in red. Classical optics shows that the electron position can be resolved only up to an uncertainty Δx that depends on θ and the wavelength λ of the incoming light.

The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements intended to violate it were always bound to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using the observer effect of an imaginary microscope as a measuring device.

He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.

  • Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely.
  • Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence, the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around.

The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to the Planck constant. Heisenberg did not care to formulate the uncertainty principle as an exact limit, and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable.

Intrinsic quantum uncertainty

Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system. Heisenberg used such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.

Critical reactions

The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, initially seen as twin targets by detractors. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.

Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years.

Ideal detached observer

Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German):

"Like the moon has a definite position," Einstein said to me last winter, "whether or not we look at the moon, the same must also hold for the atomic objects, as there is no sharp distinction possible between these and macroscopic objects. Observation cannot create an element of reality like a position, there must be something contained in the complete description of physical reality which corresponds to the possibility of observing a position, already before the observation has been actually made." I hope, that I quoted Einstein correctly; it is always difficult to quote somebody out of memory with whom one does not agree. It is precisely this kind of postulate which I call the ideal of the detached observer.

— Letter from Pauli to Niels Bohr, February 15, 1955

Einstein's slit

The first of Einstein's thought experiments challenging the uncertainty principle went as follows:

Consider a particle passing through a slit of width d. The slit introduces an uncertainty in momentum of approximately h/d because the particle passes through the wall. But let us determine the momentum of the particle by measuring the recoil of the wall. In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum.

Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy Δp, the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to h/Δp, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement.

A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.

Einstein's box

Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to the Planck constant." Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."

Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock", because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape."

EPR paradox for entangled particles

In 1935, Einstein, Boris Podolsky and Nathan Rosen published an analysis of spatially separated entangled particles (EPR paradox). According to EPR, one could measure the position of one of the entangled particles and the momentum of the second particle, and from those measurements deduce the position and momentum of both particles to any precision, violating the uncertainty principle. In order to avoid such a possibility, the measurement of one particle must modify the probability distribution of the other particle instantaneously, possibly violating the principle of locality.

In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out EPR's basic assumption of local hidden variables.

Popper's criticism

Science philosopher Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist. He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations". In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory.

In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen ("Critique of the Uncertainty Relations") in Naturwissenschaften, and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing:

[Heisenberg's] formulae are, beyond all doubt, derivable statistical formulae of the quantum theory. But they have been habitually misinterpreted by those quantum theorists who said that these formulae can be interpreted as determining some upper limit to the precision of our measurements. [original emphasis]

Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Carl Friedrich von Weizsäcker, Heisenberg, and Einstein; Popper sent his paper to Einstein and it may have influenced the formulation of the EPR paradox.

Free will

Some scientists, including Arthur Compton and Martin Heisenberg, have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature. Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.

Thermodynamics

There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics. See Gibbs paradox.

Rejection of the principle

Uncertainty principles relate quantum particles – electrons for example – to classical concepts – position and momentum. This presumes quantum particles have position and momentum. Edwin C. Kemble pointed out in 1937 that such properties cannot be experimentally verified and assuming they exist gives rise to many contradictions; similarly Rudolf Haag notes that position in quantum mechanics is an attribute of an interaction, say between an electron and a detector, not an intrinsic property. From this point of view the uncertainty principle is not a fundamental quantum property but a concept "carried over from the language of our ancestors", as Kemble says.

Applications

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. All forms of spectroscopy, including particle physics, use the relationship to relate measured energy line-widths to the lifetimes of quantum states. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications that depend on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational-wave interferometers.
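As a rough illustration of how the energy–time relation is used in spectroscopy, the following sketch is not from the article; the 1 micro-eV line width is a hypothetical value. It estimates a state lifetime from a measured energy width via tau ≈ ħ / ΔE.

# Hypothetical example: estimate a quantum-state lifetime from a measured
# spectral line width using the energy-time relation tau ~ hbar / delta_E.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV = 1.602176634e-19     # one electronvolt in joules

def lifetime_from_linewidth(delta_e_ev):
    # Approximate lifetime in seconds for an energy width given in eV.
    return HBAR / (delta_e_ev * EV)

print(lifetime_from_linewidth(1e-6))  # a 1 micro-eV width corresponds to roughly 0.66 ns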

Wave–particle duality

From Wikipedia, the free encyclopedia

Wave–particle duality is the concept in quantum mechanics that fundamental entities of the universe, like photons and electrons, exhibit particle or wave properties according to the experimental circumstances. It expresses the inability of the classical concepts such as particle or wave to fully describe the behavior of quantum objects. During the 19th and early 20th centuries, light was found to behave as a wave, then later was discovered to have a particle-like behavior, whereas electrons behaved like particles in early experiments, then later were discovered to have wave-like behavior. The concept of duality arose to name these seeming contradictions.

History

Wave–particle duality of light

In the late 17th century, Sir Isaac Newton had advocated that light was corpuscular (particulate), but Christiaan Huygens took an opposing wave description. While Newton had favored a particle approach, he was the first to attempt to reconcile both wave and particle theories of light, and the only one in his time to consider both, thereby anticipating modern wave–particle duality. Thomas Young's interference experiments in 1801, and François Arago's detection of the Poisson spot in 1819, validated Huygens' wave models. However, the wave model was challenged in 1901 by Planck's law for black-body radiation. Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. In 1905, Albert Einstein also interpreted the photoelectric effect in terms of discrete energies for photons. Both results indicate particle behavior. Despite confirmation by various experimental observations, the photon theory (as it came to be called) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light. The experimental evidence of particle-like momentum and energy seemingly contradicted the earlier work demonstrating wave-like interference of light.

Wave–particle duality of matter

The contradictory evidence from electrons arrived in the opposite order. Many experiments by J. J. Thomson, Robert Millikan, and Charles Wilson among others had shown that free electrons had particle properties, for instance, the measurement of their charge-mass ratio by Thomson in 1897. In 1924, Louis de Broglie introduced his theory of electron waves in his PhD thesis Recherches sur la théorie des quanta. He suggested that an electron around a nucleus could be thought of as being a standing wave and that electrons and all matter could be considered as waves. He merged the idea of thinking about them as particles, and of thinking of them as waves. He proposed that particles are bundles of waves (wave packets) that move with a group velocity and have an effective mass. Both of these depend upon the energy, which in turn connects to the wavevector and the relativistic formulation of Albert Einstein a few years before.

Following de Broglie's proposal of wave–particle duality of electrons, in 1925 to 1926, Erwin Schrödinger developed the wave equation of motion for electrons. This rapidly became part of what was called by Schrödinger undulatory mechanics, now called the Schrödinger equation and also "wave mechanics".

In 1926, Max Born gave a talk at a meeting in Oxford about using electron diffraction experiments to confirm the wave–particle duality of electrons. In his talk, Born cited experimental data obtained by Clinton Davisson in 1923. Davisson happened to attend that talk, and he returned to his lab in the US to switch his experimental focus to testing the wave property of electrons.

In 1927, the wave nature of electrons was empirically confirmed by two experiments. The Davisson–Germer experiment at Bell Labs measured electrons scattered from nickel metal surfaces. George Paget Thomson and Alexander Reid at Cambridge University scattered electrons through thin nickel films and observed concentric diffraction rings. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Davisson and Germer noticed that their results could not be interpreted using a Bragg's law approach, as the positions were systematically different; the approach of Bethe, which includes the refraction due to the average potential, yielded more accurate results. Davisson and Thomson were awarded the Nobel Prize in 1937 for the experimental verification of the wave properties of electrons by diffraction experiments. Similar crystal diffraction experiments were carried out by Otto Stern in the 1930s using beams of helium atoms and hydrogen molecules. These experiments further verified that wave behavior is not limited to electrons and is a general property of matter on a microscopic scale.

Classical waves and particles

Before proceeding further, it is critical to introduce some definitions of waves and particles both in a classical sense and in quantum mechanics. Waves and particles are two very different models for physical systems, each with an exceptionally large range of application. Classical waves obey the wave equation; they have continuous values at many points in space that vary with time; their spatial extent can vary with time due to diffraction, and they display wave interference. Physical systems exhibiting wave behavior and described by the mathematics of wave equations include water waves, seismic waves, sound waves, radio waves, and more.

Classical particles obey classical mechanics; they have some center of mass and extent; they follow trajectories characterized by positions and velocities that vary over time; in the absence of forces their trajectories are straight lines. Stars, planets, spacecraft, tennis balls, bullets, sand grains: particle models work across a huge scale. Unlike waves, particles do not exhibit interference.

Classical waves interfere. Particles follow trajectories.

Wave interference in water due to two sources marked as red points on the left.

Classical trajectories for a mass thrown at an angle of 70°, at different speeds.

Line trace for a two-slit electron interference pattern. Compare to a slice through the image of the water wave pattern above.

Curved arc shows a cloud chamber trajectory of a positron acting like a particle.

Both interference and trajectories are observed in quantum systems.

Some experiments on quantum systems show wave-like interference and diffraction; some experiments show particle-like collisions.

Quantum systems obey wave equations that predict particle probability distributions. These particles are associated with discrete values called quanta for properties such as spin, electric charge and magnetic moment. These particles arrive one at a time, randomly, but build up a pattern. The probability that experiments will detect a particle at a point in space is the squared magnitude of a complex-valued wave, the probability amplitude. Experiments can be designed to exhibit diffraction and interference of the probability amplitude. Thus, statistically, large numbers of these random particle appearances can display wave-like properties. Similar equations govern collective excitations called quasiparticles.
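The following sketch is not part of the article; the slit geometry, wavenumber, and sample size are arbitrary assumptions. It illustrates the point numerically: complex amplitudes from two slits are added and squared to give a probability distribution, and random particle-like detections drawn from that distribution build up the wave-like interference pattern.

import numpy as np

# Assumed geometry (arbitrary units): slit separation d, screen distance L, wavenumber k.
d, L, k = 3.0, 50.0, 5.0
x = np.linspace(-20, 20, 2000)                    # positions on the detection screen

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)             # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)             # path length from slit 2
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # probability amplitudes add

prob = np.abs(psi) ** 2                           # squared magnitude gives the probability
prob /= prob.sum()

# Each electron is detected at one random position; many detections
# statistically reproduce the interference pattern.
rng = np.random.default_rng(0)
hits = rng.choice(x, size=5000, p=prob)
counts, _ = np.histogram(hits, bins=100)
print(counts)                                     # alternating high and low bands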

Electrons behaving as waves and particles

The electron double slit experiment is a textbook demonstration of wave–particle duality. A modern version of the experiment is shown schematically in the figure below.

Left half: schematic setup for electron double-slit experiment with masking; inset micrographs of slits and mask; Right half: results for slit 1, slit 2 and both slits open.

Electrons from the source hit a wall with two thin slits. A mask behind the slits can be positioned to expose either a single slit or both slits. The results for high electron intensity are shown on the right, first for each slit individually, then with both slits open. With either slit open there is a smooth intensity variation due to diffraction. When both slits are open the intensity oscillates, which is characteristic of wave interference.

Having observed wave behavior, now change the experiment, lowering the intensity of the electron source until only one or two electrons are detected per second, each appearing as an individual particle, a dot in the video. As shown in the movie clip below, the dots on the detector seem at first to be random. After some time a pattern emerges, eventually forming an alternating sequence of light and dark bands.

Experimental electron double slit diffraction pattern. Across the middle of the image at the top the intensity alternates from high to low, showing interference in the signal from the two slits. Bottom: movie of the pattern building up dot by dot.

The experiment shows that wave interference is revealed one particle at a time: quantum mechanical electrons display both wave and particle behavior. Similar results have been shown for atoms and even large molecules.

Observing photons as particles

Photoelectric effect in a solid

While electrons were thought to be particles until their wave properties were discovered, for photons it was the opposite. In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits cathode rays, which are now called electrons. In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is unrelated to the intensity of the light. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation. In 1905, Albert Einstein suggested that the energy of the light must consist of a finite number of energy quanta. He postulated that electrons can receive energy from an electromagnetic field only in discrete units (quanta or photons): an amount of energy E that was related to the frequency f of the light by

E = hf

where h is the Planck constant (6.626×10⁻³⁴ J⋅s). Only photons of a high enough frequency (above a certain threshold value which, when multiplied by the Planck constant, gives the work function) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal he used, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light below the threshold frequency could release an electron. Despite confirmation by various experimental observations, the photon theory (as it came to be called later) remained controversial until Arthur Compton performed a series of experiments from 1922 to 1924 demonstrating the momentum of light.

A photon of wavelength λ comes in from the left, collides with a target at rest, and a new photon of wavelength λ′ emerges at an angle θ. The target recoils, and the photons have provided momentum to the target.
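A minimal numerical sketch of the photoelectric relation E = hf described above follows; it is not from the article, and the 2.3 eV work function and the two wavelengths are hypothetical values chosen to mimic the blue/red example.

H = 6.626e-34    # Planck constant, J*s
C = 3.0e8        # speed of light, m/s
EV = 1.602e-19   # one electronvolt in joules

def max_kinetic_energy_ev(wavelength_nm, work_function_ev):
    # Maximum kinetic energy (eV) of an ejected electron; a negative value means no emission.
    photon_energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    return photon_energy_ev - work_function_ev

print(max_kinetic_energy_ev(450, 2.3))  # blue light: positive, an electron is ejected
print(max_kinetic_energy_ev(700, 2.3))  # red light: negative, no electron regardless of intensity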

Both discrete (quantized) energies and momentum are, classically, particle attributes. There are many other examples where photons display particle-type properties, for instance in solar sails, where sunlight could propel a space vehicle, and in laser cooling, where photon momentum is used to slow down (cool) atoms. These are different aspects of wave–particle duality.

Which slit experiments

In a "which way" experiment, particle detectors are placed at the slits to determine which slit the electron traveled through. When these detectors are inserted, quantum mechanics predicts that the interference pattern disappears because the detected part of the electron wave has changed (loss of coherence). Many similar proposals have been made and many have been converted into experiments and tried out. Every single one shows the same result: as soon as electron trajectories are detected, interference disappears.

A simple example of these "which way" experiments uses a Mach–Zehnder interferometer, a device based on lasers and mirrors sketched below.

Interferometer schematic diagram

A laser beam entering the input port splits at a half-silvered mirror. Part of the beam continues straight, passes through a glass phase shifter, and then reflects downward. The other part of the beam reflects off the first mirror and then turns at another mirror. The two beams meet at a second half-silvered beam splitter.

Each output port has a camera to record the results. The two beams show interference characteristic of wave propagation. If the laser intensity is turned sufficiently low, individual dots appear on the cameras, building up the pattern as in the electron example.

The first beam-splitter mirror acts like the double slits, but in the interferometer case we can remove the second beam splitter. Then the beam heading down ends up in output port 1: any photon on this path is counted in that port. The beam going across the top ends up in output port 2. In either case the counts track the photon trajectories, but as soon as the second beam splitter is removed the interference pattern disappears.
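The behaviour described above can be captured in a small two-mode model. This is a sketch, not from the article; the symmetric beam-splitter matrix, the port labels, and the phase values are standard textbook assumptions. With both beam splitters in place the output probabilities depend on the phase difference between the arms, while removing the second splitter gives fixed 50/50 counts that simply track the two paths.

import numpy as np

# Symmetric 50/50 beam splitter acting on the two path modes.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def output_probs(phase, second_splitter):
    # Detection probabilities at the two output ports for a single photon.
    state = np.array([1.0, 0.0], dtype=complex)         # photon enters one input port
    state = BS @ state                                   # first beam splitter
    state = np.diag([np.exp(1j * phase), 1.0]) @ state   # phase shifter in one arm
    if second_splitter:
        state = BS @ state                               # second splitter recombines the paths
    return np.abs(state) ** 2

print(output_probs(0.0, True))    # interference: all counts appear in one port
print(output_probs(np.pi, True))  # changing the phase moves the counts to the other port
print(output_probs(0.0, False))   # second splitter removed: 50/50, counts track the paths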

Tuesday, February 17, 2026

Military–industrial complex

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Military%E2%80%93industrial_complex

The expression military–industrial complex (MIC) describes the relationship between a country's military and the defense industry that supplies it, seen together as a vested interest which influences public policy. A driving factor behind the relationship between the military and the defense corporations is that both sides benefit—one side from obtaining weapons, and the other from being paid to supply them. The term is most often used in reference to the system behind the armed forces of the United States, where the relationship is most prevalent due to close links among defense contractors, the Department of Defense, and politicians. The expression gained popularity after a warning of the relationship's harmful effects, in the farewell address of U.S. President Dwight D. Eisenhower in 1961. The term has also been used in relation to Russia, especially since its 2022 invasion of Ukraine.

Origin of the term

In his farewell address, U.S. President Dwight D. Eisenhower famously warned U.S. citizens about the "military–industrial complex".

U.S. President Dwight D. Eisenhower used the term in his Farewell Address to the Nation on January 17, 1961:

A vital element in keeping the peace is our military establishment. Our arms must be mighty, ready for instant action, so that no potential aggressor may be tempted to risk his own destruction... This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence—economic, political, even spiritual—is felt in every city, every statehouse, every office of the federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society. In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military–industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist. We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals so that security and liberty may prosper together. [emphasis added]

The speech was authored by Ralph E. Williams and Malcolm Moos and was foreshadowed by a passage in the 1954 book Power Through Purpose coauthored by Moos. The degree to which Eisenhower and his brother Milton shaped the speech is unclear from surviving documents. Planning commenced in early 1959; however, the earliest archival evidence of a military–industrial complex theme is a late-1960 memo by Williams that includes the phrase war based industrial complex. A wide range of interpretations have been made of the speech's meaning.

While the term military–industrial complex is often ascribed to Eisenhower, he was neither the first to use the phrase, nor the first to warn of such a potential danger. The first known use of military-industrial complex was by Winfield W. Riefler in 1947. Riefler attributed the outcome of the war to the balance of aggregate economic potentials of the belligerents which he termed "military-industrial complexes".  C. Wright Mills's 1956 book The Power Elite is thematically similar to Eisenhower's Farewell Address and was used as a conceptual framework for the military-industrial complex debate in the 1960s and 1970s. Mills said that American society had cleaved into a powerful elite of military and corporate chieftains set against a powerless mass society.

United States

Some sources divide the history of the United States military–industrial complex into three eras.

First era

From 1797 to 1941, the U.S. government relied on civilian industries only while the country was actually at war. The government owned its own shipyards and weapons manufacturing facilities, which it relied on through World War I. With World War II came a massive shift in the way that the U.S. government armed the military.

In World War II, U.S. President Franklin D. Roosevelt established the War Production Board to coordinate civilian industries and shift them into wartime production. Arms production in the U.S. went from around one percent of annual gross domestic product (GDP) to 40 percent of GDP. U.S. companies, such as Boeing and General Motors, maintained and expanded their defense divisions. These companies have gone on to develop various technologies that have also improved civilian life, such as night-vision goggles and GPS.

Second era (Cold War)

The second era is identified as beginning with the coining of the term by U.S. President Dwight D. Eisenhower. This era continued through the Cold War period, up to the end of the Warsaw Pact and the collapse of the Soviet Union.

The phrase rose to prominence in the years following Eisenhower's farewell address, as part of opposition to the Vietnam War. John Kenneth Galbraith said that he and others quoted Eisenhower's farewell address for the "flank protection it provided" when criticizing military power given Eisenhower's "impeccably conservative" reputation.

Following Eisenhower's address, the term became a staple of American political and sociological discourse. Many Vietnam War–era activists and polemicists, such as Seymour Melman and Noam Chomsky employed the concept in their criticism of U.S. foreign policy, while other academics and policymakers found it to be a useful analytical framework. Although the MIC was bound up in its origins with the bipolar international environment of the Cold War, some contended that the MIC might endure under different geopolitical conditions (for example, George F. Kennan wrote in 1987 that "were the Soviet Union to sink tomorrow under the waters of the ocean, the American military–industrial complex would have to remain, substantially unchanged, until some other adversary could be invented."). The collapse of the Soviet Union and the resultant decrease in global military spending (the so-called 'peace dividend') did in fact lead to decreases in defense industrial output and consolidation among major arms producers, although global expenditures rose again following the September 11 attacks and the ensuing "War on terror", as well as the more recent increase in geopolitical tensions associated with strategic competition between the United States, Russia, and China.

A 1965 article written by Marc Pilisuk and Thomas Hayden says benefits of the military–industrial complex of the U.S. include the advancement of the civilian technology market as civilian companies benefit from innovations from the MIC and vice versa. In 1993, the Pentagon urged defense contractors to consolidate due to the fall of communism and a shrinking defense budget.

Third era

Anti-war protestor with sign criticizing the military–industrial complex

In the third era, U.S. defense contractors either consolidated or shifted their focus to civilian innovation. From 1992 to 1997 there was a total of US$55 billion worth of mergers in the defense industry, with major defense companies purchasing smaller competitors. The U.S. domestic economy is now tied to the success of the MIC which has led to concerns of repression as Cold War-era attitudes are still prevalent among the American public. Shifts in values and the collapse of communism have ushered in a new era for the U.S. military–industrial complex. The Department of Defense works in coordination with traditional military–industrial complex aligned companies such as Lockheed Martin and Northrop Grumman. Many former defense contractors have shifted operations to the civilian market and sold off their defense departments. In recent years, traditional defense contracting firms have faced competition from Silicon Valley and other tech companies, like Anduril Industries and Palantir, over Pentagon contracts. This represents a shift in defense strategy away from the procurement of more armaments and toward an increasing role of technologies like cloud computing and cybersecurity in military affairs. From 2019 to 2022, venture capital funding for defense technologies doubled.

Proxmire

Proxmire's The Economics of Military Procurement was highly influential among critics of the military-industrial complex.

William Proxmire was the chief advocate for the idea of the military-industrial complex as an unaccountable bureaucracy that wastes resources in order to turn a profit. He achieved prominence in this role in 1968 when he was featured on the front page of the New York Times after giving a press conference where he named 23 defense contractors who he said were engaged in "shocking abuse". Proxmire was quoted as saying: "I think this is an excellent example of the military industrial complex at work, with the victim being ... the taxpayer". James Ledbetter said that Proxmire's attacks on the military-industrial complex were interpreted as a proxy for opposition to the Vietnam War. Proxmire said that the C-5A Galaxy jet was "one of the greatest fiscal disasters in the history of military contracting". He secured the testimony of U.S. Air Force whistleblower A. Ernest Fitzgerald before Congress. Fitzgerald testified that cost overruns on the C-5A were due to underestimation of costs, ineffective cost controls, and perverse incentives inherent in the repricing formula of the contract. The Air Force responded by saying that the actual overrun was half what Fitzgerald claimed. Proxmire said the Air Force was concealing the full extent of the overrun and pressed the Government Accountability Office to investigate the entire project.

Military subsidy theory

A debate exists between two schools of thought concerning the effect of U.S. military spending on U.S. civilian industry. Eugene Gholz of UT Austin said that Cold War military spending on aircraft, electronics, communications, and computers has been credited with indirect technological and financial benefits for the associated civilian industries. This contrasts with the idea that military research threatens to crowd out commercial innovation. Gholz said that the U.S. government intentionally overpaid for military aircraft to hide a subsidy to the commercial aircraft industry. He presents development of the military Boeing KC-135 Stratotanker alongside the Boeing 707 civilian jetliner as the canonical example of this idea. However, he said that the actual benefits that accrued to the Boeing 707 from the KC-135 program were minimal and that Boeing's image as an arms maker hampered commercial sales. He said that Convair's involvement in military aircraft led it to make disastrous decisions on the commercial side of its business. Gholz concluded that military spending fails to explain the competitiveness of the American commercial aircraft industry.

Connotations in U.S. politics

James Ledbetter and certain other scholars describe the phrase military–industrial complex as pejorative. Some scholars suggest that it implies the existence of a conspiracy. David S. Rohde compares its use in U.S. politics by liberals to that of the phrase deep state by conservatives. Ledbetter further describes the phrase:

In the half century since Eisenhower uttered his prophetic words, the concept of the military–industrial complex has become a rhetorical Rorschach blot—the meaning is in the eye of the beholder. The very utility of the phrase, the source of its mass appeal, comes at the cost of a precise, universally accepted definition.

Russia

Russia's military–industrial complex is overseen by the Military-Industrial Commission of Russia. As of 2024, Russia's military–industrial complex is made up of about 6,000 companies and employs about 3.5 million people, or 2.5% of the population. In 2025, nearly 40% of Russian government spending will be on national defense and security. This record-high allocation of 13.5 trillion rubles ($133.63 billion) is more than the spending allocated to education, healthcare, social programs and economic development.

Russia ramped up weapons production following the 2022 Russian invasion of Ukraine, and factories making ammunition and military equipment have been running around the clock. Andrei Chekmenyov, the head of the Russian Union of Industrial Workers, said that "practically all military–industrial enterprises" were requiring workers to work additional hours "without their consent", to sustain Russia's war machine. In January 2023, Russia's president Vladimir Putin said that Russia's large military–industrial complex would ensure its victory over Ukraine.

According to Philip Luck of the Center for Strategic and International Studies, Russia's war against Ukraine has "created a new class of economic beneficiaries—industries and individuals profiting from the war—who now have a vested interest in sustaining Putin's war economy". Russian political scientist Ekaterina Schulmann refers to this as a new "military–industrial class" whose welfare depends on the continuation of the war. Likewise, Luke Cooper of the Peace and Conflict Resolution Evidence Platform writes that "Russia has created a rent-based military industrial complex whose elites have an interest in large scale military spending". He says that while this military–industrial complex would have an incentive to oppose peace negotiations, "it seems plausible that the militarisation of the economy would remain a priority in a post-war situation regardless", justified by the "threat" from the West.

However, Russia's military–industrial complex has been severely hindered by international sanctions and by the demands of the war in Ukraine. This has highlighted Russia's dependence on Western components. Although Russia has bypassed some sanctions, and its military industry is resilient, this is not sustainable for long.

Soviet Union

The Red Army sought control over Soviet industry in the 1920s during Lenin's reign, but Stalin actively prevented the formation of a military-industrial complex that could have challenged his power. He used a divide and rule strategy to prevent collusion between military and industrial factions. Although Stalin needed a strong military to defend himself against external threats and used the Soviet military command to execute industrialization and the transition to a command economy, he also came to fear military and industrial leaders. Stalin structured incentives so that military and industrial actors gained more from rivalry and cheating one another than from cooperation.

While the Soviet Union lacked a military-industrial complex, in the sense of a powerful vested interest, its heavily militarized economy illustrates the dangers inherent in militarism. A climate of secrecy and control, rigid centralized allocation of resources, economic isolation from the rest of the world, and unquestioning acceptance of government actions were all predicated on national security. The economic and societal costs were in many cases not tracked, or were withheld from civilians. Because these costs were hidden in the Soviet system, but exposed by the transition to a market economy, many Russians blame the new market economy of the Russian Federation for creating these costs in the first place.

Connotations in Russian

The connotations of military–industrial complex are different in English and in Russian. The English term implies a coalition of industrial and military interests. The Russian term refers to the military industries taken together as a group, or what is known as a defense industrial base in English.

While there are many references to a Russian or Soviet military–industrial complex, this is partly the result of word-for-word translation that fails to account for the nuances of Russian and English grammar. Voenno-promyshlennyi kompleks [ru] is the Russian term commonly translated into English as military–industrial complex. However, the adjectival voenno- (military) modifies promyshlennyi (industrial) rather than the complex. In other words, it refers to a complex of the interests of military industries; not to the collective interests of military and industry.

Similar terms

A related term is "defense industrial base" – the network of organizations, facilities, and resources that supplies governments with defense-related goods and services. Another related term is the "iron triangle" in the U.S. – the three-sided relationship between Congress, the executive branch bureaucracy, and interest groups.

A thesis similar to the military–industrial complex was originally expressed by Daniel Guérin, in his 1936 book Fascism and Big Business, about the fascist governments' ties to heavy industry. It would be defined as "an informal and changing coalition of groups with vested psychological, moral, and material interests in the continuous development and maintenance of high levels of weaponry, in preservation of colonial markets and in military-strategic conceptions of internal affairs." The trend was documented in Franz Leopold Neumann's 1942 book Behemoth: The Structure and Practice of National Socialism, a study of how Nazism came into a position of power in a democratic state.

In The Global Industrial Complex, edited by American philosopher and activist Steven Best, the "power complex" first analyzed in sociologist Charles Wright Mills's 1956 work The Power Elite is shown to have evolved into a global array of "corporate-state" structures, an interdependent and overlapping system of domination.

In 2016, Matthew Brummer, associate professor at Tokyo's National Graduate Institute for Policy Studies, pointed to Japan's "Manga Military" to describe the effort undertaken by the country's Ministry of Defense, using film, anime, theater, literature, fashion, and other media, along with moe, to reshape domestic and international perceptions of the Japanese military–industrial complex.

James Der Derian's book Military–Industrial–Media–Entertainment Network relates the convergence of cyborg technologies, video games, media spectacles, war movies, and "do-good ideologies" into what he claims generates a mirage of high-tech, low-risk "virtuous wars." American political activist and former Central Intelligence Agency officer Ray McGovern claims that American citizens are vulnerable to anti-Russian propaganda since few of them know the Soviet Union's major role in the World War II victory, and he blames the "corporate-controlled mainstream media" for this. He goes on to label the culprits as the Military–Industrial–Congressional–Intelligence–Media–Academia–Think-Tank complex.

In the decades since the term's inception, other industrial complexes have appeared in the literature:

Tech–industrial complex

In his 2025 farewell address, outgoing U.S. President Joe Biden warned of a "tech–industrial complex," stating that "Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power."

The statement was made following Elon Musk's appointment in the second Donald Trump administration and the public overtures towards Trump by technology industry leaders, including Meta's Mark Zuckerberg and Amazon's Jeff Bezos, as well as the dismantling of Facebook's fact-checking program.

Military–entertainment complex

The scope of the military–industrial complex has broadened to include cultural and media sectors, giving rise to what modern scholarship has dubbed the military–entertainment complex. This term refers to forms of cooperation between military institutions and entertainment industries, in which the military may provide equipment, personnel, technical expertise, or other forms of support to filmmakers, video game developers, and related media producers. In the United States in particular, such collaborations have contributed to films, games, and other media that depict military themes and operations. In some cases, media production has been developed with direct military involvement, such as America's Army, a video game created by the U.S. Army for recruitment and public outreach purposes. Through these interactions, entertainment media can play a role in shaping public understanding of military activities and warfare, extending the influence of military institutions beyond traditional domains such as production and procurement, into areas of cultural and media production.

Academic debate

The value of the military-industrial complex as a concept for academic analysis was questioned by numerous scholars within a few years of the idea's introduction. However, Steve J. Rosen said in 1973 that C. Wright Mills's theory of the military-industrial complex is "a most useful analytical construct".

The notion of military‐industrial complexes, or MICs, has, however, become so politically and conceptually loaded as to make it almost meaningless as an analytical concept, especially when studying the years prior to 1939.

— Miller, Turnbull, and Jari Ojala

Neurohacking

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Neurohacking   ...