
Saturday, July 12, 2025

Born rule

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Born_rule

The Born rule is a postulate of quantum mechanics that gives the probability that a measurement of a quantum system will yield a given result. In one commonly used application, it states that the probability density for finding a particle at a given position is proportional to the square of the amplitude of the system's wavefunction at that position. It was formulated and published by German physicist Max Born in July 1926.

Details

The Born rule states that if an observable corresponding to a self-adjoint operator A with discrete spectrum is measured in a system with normalized wave function |ψ⟩ (see Bra–ket notation), then:

  • the measured result will be one of the eigenvalues λ of A, and
  • the probability of measuring a given eigenvalue λ_i will equal ⟨ψ|P_i|ψ⟩, where P_i is the projection onto the eigenspace of A corresponding to λ_i.

(In the case where the eigenspace of A corresponding to λ_i is one-dimensional and spanned by the normalized eigenvector |λ_i⟩, P_i is equal to |λ_i⟩⟨λ_i|, so the probability ⟨ψ|P_i|ψ⟩ is equal to ⟨ψ|λ_i⟩⟨λ_i|ψ⟩. Since the complex number ⟨λ_i|ψ⟩ is known as the probability amplitude that the state vector |ψ⟩ assigns to the eigenvector |λ_i⟩, it is common to describe the Born rule as saying that probability is equal to the amplitude squared (really the amplitude times its own complex conjugate). Equivalently, the probability can be written as |⟨λ_i|ψ⟩|².)
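As a concrete illustration, here is a minimal numerical sketch (Python with NumPy; the observable and state are arbitrary choices, not taken from the article) of the discrete-spectrum Born rule:

```python
import numpy as np

# An observable: the Pauli-x operator, with eigenvalues +1 and -1.
A = np.array([[0, 1],
              [1, 0]], dtype=complex)

# An arbitrary normalized state |psi>.
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)

eigvals, eigvecs = np.linalg.eigh(A)
for lam, v in zip(eigvals, eigvecs.T):
    P = np.outer(v, v.conj())           # projection onto the eigenspace of lam
    prob = np.vdot(psi, P @ psi).real   # Born rule: <psi| P |psi>
    amp = np.vdot(v, psi)               # probability amplitude <lam|psi>
    print(f"eigenvalue {lam:+.0f}: prob = {prob:.3f}, |amp|^2 = {abs(amp)**2:.3f}")
```

Because the projectors sum to the identity, the printed probabilities sum to 1, and ⟨ψ|P_i|ψ⟩ agrees with |⟨λ_i|ψ⟩|² as described above.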

In the case where the spectrum of A is not wholly discrete, the spectral theorem proves the existence of a certain projection-valued measure (PVM) Q, the spectral measure of A. In this case:

  • the probability that the result of the measurement lies in a measurable set M is given by ⟨ψ|Q(M)|ψ⟩.

For example, a single structureless particle can be described by a wave function ψ(x, y, z, t) that depends upon position coordinates (x, y, z) and a time coordinate t. The Born rule implies that the probability density function p for the result of a measurement of the particle's position at time t₀ is

  p(x, y, z, t₀) = |ψ(x, y, z, t₀)|².

The Born rule can also be employed to calculate probabilities (for measurements with discrete sets of outcomes) or probability densities (for continuous-valued measurements) for other observables, like momentum, energy, and angular momentum.

In some applications, this treatment of the Born rule is generalized using positive-operator-valued measures (POVM). A POVM is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalization of von Neumann measurements and, correspondingly, quantum measurements described by POVMs are a generalization of quantum measurements described by self-adjoint observables. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see purification of quantum state); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics and can also be used in quantum field theory. They are extensively used in the field of quantum information.

In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set {F_i} of positive semi-definite matrices on a Hilbert space that sum to the identity matrix:

  ∑_i F_i = I.

The POVM element F_i is associated with the measurement outcome i, such that the probability of obtaining it when making a measurement on the quantum state ρ is given by

  p(i) = tr(ρ F_i),

where tr is the trace operator. This is the POVM version of the Born rule. When the quantum state being measured is a pure state |ψ⟩, this formula reduces to

  p(i) = tr(|ψ⟩⟨ψ| F_i) = ⟨ψ|F_i|ψ⟩.
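The POVM rule is just as easy to check numerically. The sketch below (Python/NumPy; the three-element "trine" POVM is a standard textbook construction used purely as an example) verifies that the elements sum to the identity and that the outcome probabilities tr(ρ F_i) sum to 1:

```python
import numpy as np

# A three-outcome qubit POVM built from three symmetric ("trine") states.
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
states = [np.array([np.cos(t / 2), np.sin(t / 2)], dtype=complex) for t in angles]
F = [(2 / 3) * np.outer(s, s.conj()) for s in states]

assert np.allclose(sum(F), np.eye(2))  # POVM elements sum to the identity

rho = np.array([[0.75, 0.25],          # an arbitrary mixed state (trace 1, PSD)
                [0.25, 0.25]], dtype=complex)

probs = [np.trace(rho @ Fi).real for Fi in F]  # Born rule: p(i) = tr(rho F_i)
print(probs, sum(probs))                       # three probabilities summing to 1
```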

The Born rule, together with the unitarity of the time evolution operator (or, equivalently, the Hamiltonian being Hermitian), implies the unitarity of the theory: a wave function that is time-evolved by a unitary operator will remain properly normalized. (In the more general case where one considers the time evolution of a density matrix, proper normalization is ensured by requiring that the time evolution is a trace-preserving, completely positive map.)
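The norm-preservation claim can be verified directly. A minimal sketch (Python/SciPy; the Hamiltonian is an arbitrary Hermitian matrix chosen for illustration):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],                  # an arbitrary Hermitian Hamiltonian
              [0.5, -1.0]], dtype=complex)
psi = np.array([0.6, 0.8], dtype=complex)  # a normalized state

for t in (0.0, 0.5, 1.0, 2.0):
    U = expm(-1j * H * t)                  # unitary, because H is Hermitian
    print(t, np.linalg.norm(U @ psi))      # norm stays 1 to machine precision
```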

History

The Born rule was formulated by Born in a 1926 paper. In this paper, Born solves the Schrödinger equation for a scattering problem and, inspired by Albert Einstein and Einstein's probabilistic rule for the photoelectric effect, concludes, in a footnote, that the Born rule gives the only possible interpretation of the solution. (The main body of the article says that the amplitude "gives the probability" [bestimmt die Wahrscheinlichkeit], while the footnote added in proof says that the probability is proportional to the square of its magnitude.) In 1954, together with Walther Bothe, Born was awarded the Nobel Prize in Physics for this and other work. John von Neumann discussed the application of spectral theory to Born's rule in his 1932 book.

Derivation from more basic principles

Gleason's theorem shows that the Born rule can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, prompted by a question posed by George W. Mackey. This theorem was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics.

Several other researchers have also tried to derive the Born rule from more basic principles. A number of derivations have been proposed in the context of the many-worlds interpretation. These include the decision-theory approach pioneered by David Deutsch and later developed by Hilary Greaves and David Wallace; and an "envariance" approach by Wojciech H. Zurek. These proofs have, however, been criticized as circular. In 2018, an approach based on self-locating uncertainty was suggested by Charles Sebens and Sean M. Carroll; this has also been criticized. Simon Saunders, in 2021, produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule.

In 2019, Lluís Masanes, Thomas Galley, and Markus Müller proposed a derivation based on postulates including the possibility of state estimation.

It has also been claimed that pilot-wave theory can be used to statistically derive the Born rule, though this remains controversial.

Within the QBist interpretation of quantum theory, the Born rule is seen as an extension of the normative principle of coherence, which ensures self-consistency of probability assessments across a whole set of such assessments. It can be shown that an agent who thinks they are gambling on the outcomes of measurements on a sufficiently quantum-like system but refuses to use the Born rule when placing their bets is vulnerable to a Dutch book.

Precision tests of QED


From Wikipedia, the free encyclopedia

Quantum electrodynamics (QED), a relativistic quantum field theory of electrodynamics, is among the most stringently tested theories in physics. The most precise and specific tests of QED consist of measurements of the electromagnetic fine-structure constant, α, in various physical systems. Checking the consistency of such measurements tests the theory.

Tests of a theory are normally carried out by comparing experimental results to theoretical predictions. In QED, there is some subtlety in this comparison, because theoretical predictions require as input an extremely precise value of α, which can only be obtained from another precision QED experiment. Because of this, the comparisons between theory and experiment are usually quoted as independent determinations of α. QED is then confirmed to the extent that these measurements of α from different physical sources agree with each other.

The agreement found this way is to within less than one part in a billion (10−9). An extremely high precision measurement of the quantized energies of the cyclotron orbits of the electron gives a precision of better than one part in a trillion (10−12). This makes QED one of the most accurate physical theories constructed thus far.

Besides these independent measurements of the fine-structure constant, many other predictions of QED have been tested as well.

Measurements of the fine-structure constant using different systems

Precision tests of QED have been performed in low-energy atomic physics experiments, high-energy collider experiments, and condensed matter systems. The value of α is obtained in each of these experiments by fitting an experimental measurement to a theoretical expression (including higher-order radiative corrections) that includes α as a parameter. The uncertainty in the extracted value of α includes both experimental and theoretical uncertainties. This program thus requires both high-precision measurements and high-precision theoretical calculations. Unless noted otherwise, all results below are taken from the same review.

Low-energy measurements

Anomalous magnetic dipole moments

The most precise measurement of α comes from the anomalous magnetic dipole moment, or g−2 (pronounced "g minus 2"), of the electron. To make this measurement, two ingredients are needed:

  1. A precise measurement of the anomalous magnetic dipole moment, and
  2. A precise theoretical calculation of the anomalous magnetic dipole moment in terms of α.

As of February 2023, the best measurement of the anomalous magnetic dipole moment of the electron was made by the group of Gerald Gabrielse at Harvard University, using a single electron caught in a Penning trap. The difference between the electron's cyclotron frequency and its spin precession frequency in a magnetic field is proportional to g−2. An extremely high precision measurement of the quantized energies of the cyclotron orbits, or Landau levels, of the electron, compared to the quantized energies of the electron's two possible spin orientations, gives a value for the electron's spin g-factor:

g/2 = 1.00115965218059(13),

a precision of better than one part in a trillion. (The digits in parentheses indicate the standard uncertainty in the last listed digits of the measurement.)
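For readers who want to manipulate such values programmatically, here is a small sketch (Python; parse_parenthetical is a hypothetical helper written only for this illustration) that unpacks the notation into a value and a standard uncertainty:

```python
from decimal import Decimal

def parse_parenthetical(s: str):
    """Split e.g. '1.00115965218059(13)' into (value, standard uncertainty)."""
    num, unc = s.rstrip(")").split("(")
    decimals = len(num.split(".")[1])  # digits after the decimal point
    return Decimal(num), Decimal(unc) * Decimal(10) ** -decimals

value, sigma = parse_parenthetical("1.00115965218059(13)")
print(value, "+/-", sigma)  # 1.00115965218059 +/- 1.3E-13
```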

The current state-of-the-art theoretical calculation of the anomalous magnetic dipole moment of the electron includes QED diagrams with up to four loops. Combining this with the experimental measurement of g yields the most precise value of α:

α−1 = 137.035999166(15),

a precision of better than a part in a billion. This uncertainty is ten times smaller than the nearest rival method involving atom-recoil measurements.

A value of α can also be extracted from the anomalous magnetic dipole moment of the muon. The g-factor of the muon is extracted using the same physical principle as for the electron above – namely, that the difference between the cyclotron frequency and the spin precession frequency in a magnetic field is proportional to g−2. The most precise measurement comes from Brookhaven National Laboratory's muon g−2 experiment, in which polarized muons are stored in a storage ring and their spin orientation is measured by the direction of their decay electrons. As of February 2007, the world-average measurement of the muon g-factor is

g/2 = 1.0011659208(6),

a precision of better than one part in a billion. The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED. See muon g–2 for current efforts to refine the measurement.

Atom-recoil measurements

This is an indirect method of measuring α, based on measurements of the masses of the electron, certain atoms, and the Rydberg constant. The Rydberg constant is known to seven parts in a trillion. The mass of the electron relative to that of caesium and rubidium atoms is also known with extremely high precision. If the mass of the electron can be measured with sufficiently high precision, then α can be found from the Rydberg constant according to

  α² = 2R∞h / (m_e c),

where R∞ is the Rydberg constant, h the Planck constant, m_e the electron mass, and c the speed of light.

To get the mass of the electron, this method actually measures the mass of an 87Rb atom by measuring the recoil speed of the atom after it emits a photon of known wavelength in an atomic transition. Combining this with the ratio of the electron mass to that of the 87Rb atom, the result for α is

α−1 = 137.03599878(91).

Because this measurement is the next-most-precise after the measurement of α from the electron's anomalous magnetic dipole moment described above, their comparison provides the most stringent test of QED: the value of α obtained here is within one standard deviation of that found from the electron's anomalous magnetic dipole moment, an agreement to within ten parts in a billion.
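As a rough numerical sketch of how the extraction works (Python; the hand-typed CODATA-style constants are illustrative only, and the real experiment measures h/m(87Rb) together with mass ratios rather than the electron mass in kilograms):

```python
import math

# Approximate CODATA-style constants (illustrative precision only):
R_inf = 10973731.568160   # Rydberg constant, 1/m
h     = 6.62607015e-34    # Planck constant, J*s (exact by definition)
c     = 299792458.0       # speed of light, m/s (exact by definition)
m_e   = 9.1093837015e-31  # electron mass, kg

# Invert alpha^2 = 2 * R_inf * h / (m_e * c):
alpha = math.sqrt(2 * R_inf * h / (m_e * c))
print(1 / alpha)          # ~137.036, consistent with the values quoted above
```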

Neutron Compton wavelength

This method of measuring α is very similar in principle to the atom-recoil method. In this case, the accurately known mass ratio of the electron to the neutron is used. The neutron mass is measured with high precision through a very precise measurement of its Compton wavelength. This is then combined with the value of the Rydberg constant to extract α. The result is,

α−1 = 137.0360101(54).

Hyperfine splitting

Hyperfine splitting is a splitting in the energy levels of an atom caused by the interaction between the magnetic moment of the nucleus and the combined spin and orbital magnetic moment of the electron. The hyperfine splitting in hydrogen, measured using Ramsey's hydrogen maser, is known with great precision. Unfortunately, the influence of the proton's internal structure limits how precisely the splitting can be predicted theoretically. This leads to the extracted value of α being dominated by theoretical uncertainty:

α−1 = 137.0360(3).

The hyperfine splitting in muonium, an "atom" consisting of an electron and an antimuon, provides a more precise measurement of α because the muon has no internal structure:

α−1 = 137.035994(18).

Lamb shift

The Lamb shift is a small difference in the energies of the 2 S1/2 and 2 P1/2 energy levels of hydrogen, which arises from a one-loop effect in quantum electrodynamics. The Lamb shift is proportional to α5 and its measurement yields the extracted value:

α−1 = 137.0368(7).

Positronium

Positronium is an "atom" consisting of an electron and a positron. Whereas the calculation of the energy levels of ordinary hydrogen is contaminated by theoretical uncertainties from the proton's internal structure, the particles that make up positronium have no internal structure so precise theoretical calculations can be performed. The measurement of the splitting between the 2 3S1 and the 1 3S1 energy levels of positronium yields

α−1 = 137.034(16).

Measurements of α can also be extracted from the positronium decay rate. Positronium decays through the annihilation of the electron and the positron into two or more gamma-ray photons. The decay rate of the singlet ("para-positronium") 1S0 state yields

α−1 = 137.00(6),

and the decay rate of the triplet ("ortho-positronium") 3S1 state yields

α−1 = 136.971(6).

This last result is the only serious discrepancy among the numbers given here, but there is some evidence that uncalculated higher-order quantum corrections give a large correction to the value quoted here.

High-energy QED processes

The cross sections of higher-order QED reactions at high-energy electron-positron colliders provide a determination of α. In order to compare the extracted value of α with the low-energy results, higher-order QED effects including the running of α due to vacuum polarization must be taken into account. These experiments typically achieve only percent-level accuracy, but their results are consistent with the precise measurements available at lower energies.

The cross section for e+e− → e+e− e+e− yields

α−1 = 136.5(2.7),

and the cross section for e+e− → e+e− μ+μ− yields

α−1 = 139.9(1.2).

Condensed matter systems

The quantum Hall effect and the AC Josephson effect are exotic quantum interference phenomena in condensed matter systems. These two effects provide a standard electrical resistance and a standard frequency, respectively, which measure the charge of the electron with corrections that are strictly zero for macroscopic systems.

The quantum Hall effect yields

α−1 = 137.0359979(32),

and the AC Josephson effect yields

α−1 = 137.0359770(77).

Other tests

  • QED predicts that the photon is a massless particle. A variety of highly sensitive tests have proven that the photon mass is either zero, or else extraordinarily small. One type of these tests, for example, works by checking Coulomb's law at high accuracy, as the photon's mass would be nonzero if Coulomb's law were modified. See Photon § Experimental checks on photon mass.
  • QED predicts that when electrons get very close to each other, they behave as if they had a higher electric charge, due to vacuum polarization. This prediction was experimentally verified in 1997 using the TRISTAN particle accelerator in Japan.
  • QED effects like vacuum polarization and self-energy influence the electrons bound to a nucleus in a heavy atom due to extreme electromagnetic fields. A recent experiment on the ground state hyperfine splitting in 209Bi80+ and 209Bi82+ ions revealed a deviation from the theory by more than 7 standard uncertainties. There are indications that this deviation may originate from an incorrect value of the nuclear magnetic moment of 209Bi.
Photon

    From Wikipedia, the free encyclopedia

    Composition: Elementary particle
    Statistics: Bosonic
    Family: Gauge boson
    Interactions: Electromagnetic, gravity
    Symbol: γ
    Theorized: Albert Einstein (1905); the name "photon" is generally attributed to Gilbert N. Lewis (1926)
    Mass: 0 (theoretical value); < 1×10−18 eV/c2 (experimental limit)
    Mean lifetime: Stable
    Electric charge: 0 (< 1×10−35 e, experimental limit)
    Color charge: No
    Spin: ħ
    Spin states: +1 ħ, −1 ħ
    Parity: −1
    C parity: −1
    Condensed: I(JPC) = 0, 1(1−−)

    A photon (from Ancient Greek φῶς, φωτός (phôs, phōtós) 'light') is an elementary particle that is a quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless particles that always move at the speed of light when measured in vacuum. The photon belongs to the class of boson particles.

    As with other elementary particles, photons are best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles. The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While Planck was trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, he proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units. Subsequently, many other experiments validated Einstein's approach.

    In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Moreover, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography.

    Physical properties

    The photon has no electric charge, is generally considered to have zero rest mass and is a stable particle. The experimental upper limit on the photon mass is very small, on the order of 10−50 kg; its lifetime would be more than 1018 years. For comparison, the age of the universe is about 1.38×1010 years.

    In a vacuum, a photon has two possible polarization states. The photon is the gauge boson for electromagnetism,[16]: 29–30  and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Also, the photon obeys Bose–Einstein statistics, and not Fermi–Dirac statistics. That is, photons do not obey the Pauli exclusion principle, and more than one can occupy the same bound quantum state.

    Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation).

    Energy and momentum

    The cone shows possible values of wave 4-vector of a photon. The "time" axis gives the angular frequency (rad⋅s−1) and the "space" axis represents the angular wavenumber (rad⋅m−1). Green and indigo represent left and right polarization.

    In a quantum mechanical model, electromagnetic waves transfer energy in photons with energy proportional to the frequency (ν):

      E = hν,

    where h is the Planck constant, a fundamental physical constant. The energy can equivalently be written in terms of the angular frequency (ω = 2πν) or the wavelength (λ):

      E = ħω = hc/λ,

    where ħ = h/2π is called the reduced Planck constant and c is the speed of light.

    The momentum of a photon is

      p = ħk,

    where k is the wave vector, whose magnitude

      k ≡ |k| = 2π/λ

    is the wave number. Since k points in the direction of the photon's propagation, the magnitude of the photon's momentum is

      p ≡ |p| = ħk = hν/c = h/λ.

    The photon energy can be written as E = pc, where p is the magnitude of the momentum vector p. This is consistent with the energy–momentum relation of special relativity,

      E² = (pc)² + (mc²)²,

    when m = 0: E = pc.
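    As a minimal numerical illustration of these relations (Python; the 532 nm wavelength is an arbitrary example):

    ```python
    h = 6.62607015e-34   # Planck constant, J*s
    c = 299792458.0      # speed of light, m/s

    lam = 532e-9         # wavelength of a green laser photon, m
    E = h * c / lam      # energy: E = h*nu = h*c/lambda
    p = h / lam          # momentum magnitude: p = h/lambda

    print(E, "J =", E / 1.602176634e-19, "eV")  # ~2.33 eV
    print("E/p =", E / p, "m/s")                # equals c, i.e. E = pc
    ```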

    Polarization and spin angular momentum

    The photon also carries spin angular momentum, which is related to photon polarization. (Beams of light also exhibit properties described as orbital angular momentum of light).

    The angular momentum of the photon has two possible values, either +ħ or −ħ. These two possible values correspond to the two possible pure states of circular polarization. Collections of photons in a light beam may have mixtures of these two values; a linearly polarized light beam will act as if it were composed of equal numbers of the two possible angular momenta.
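    The equal-mixture statement is just the Born rule applied in the circular-polarization basis. A minimal sketch (Python/NumPy; the basis conventions are an illustrative choice):

    ```python
    import numpy as np

    # Circular-polarization states written in the linear (x, y) basis:
    L = np.array([1,  1j], dtype=complex) / np.sqrt(2)  # spin +hbar
    R = np.array([1, -1j], dtype=complex) / np.sqrt(2)  # spin -hbar

    x = np.array([1, 0], dtype=complex)  # light linearly polarized along x

    # Born-rule probabilities of measuring spin +hbar vs -hbar:
    print(abs(np.vdot(L, x))**2, abs(np.vdot(R, x))**2)  # 0.5 and 0.5
    ```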

    The spin angular momentum of light does not depend on its frequency, and was experimentally verified by C. V. Raman and Suri Bhagavantam in 1931.

    Antiparticle annihilation

    The collision of a particle with its antiparticle can create photons. In free space at least two photons must be created since, in the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (determined by the photon's frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum.
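    As a simple worked case (standard textbook numbers, not specific to this article): for an electron and positron annihilating at rest, the lab frame is the center-of-momentum frame, so the two photons are emitted back to back with equal energies

      E_γ = m_e c² ≈ 0.511 MeV,

    which is why electron–positron annihilation produces the characteristic 511 keV gamma-ray line.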

    Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon with opposite momentum, equal polarization, and 180° out of phase). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus.

    The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time.

    Experimental checks on photon mass

    Current commonly accepted physical theories imply or assume the photon to be strictly massless. If photons were not purely massless, their speeds would vary with frequency, with lower-energy (redder) photons moving slightly slower than higher-energy photons. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime. Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons.

    If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law were not exactly valid, an electric field could exist within a hollow conductor when it is subjected to an external electric field. This provides a means for precision tests of Coulomb's law. A null result of such an experiment has set a limit of m ≲ 10−14 eV/c2.

    Sharper upper limits on the mass of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is large because the galactic magnetic field exists on great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term ½m²AμAμ would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of m < 3×10−27 eV/c2. The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of 1.07×10−27 eV/c2 (10−36 Da) given by the Particle Data Group.

    These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent. If the photon mass is generated via the Higgs mechanism then the upper limit of m ≲ 10−14 eV/c2 from the test of Coulomb's law is valid.

    Historical development

    Thomas Young's double-slit experiment in 1801 showed that light can act as a wave, helping to invalidate early particle theories of light.

    In most theories up to the eighteenth century, light was pictured as being made of particles. Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early 19th century, Thomas Young and Augustin-Jean Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. James Clerk Maxwell's 1865 prediction that light was an electromagnetic wave – which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves – seemed to be the final blow to particle models of light.

    In 1900, Maxwell's theoretical model of light as oscillating electric and magnetic fields seemed complete. However, several observations could not be explained by any wave model of electromagnetic radiation, leading to the idea that light-energy was packaged into quanta described by E = hν. Later experiments showed that these light-quanta also carry momentum and, thus, can be considered particles: The photon concept was born, leading to a deeper understanding of the electric and magnetic fields themselves.

    The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity.

    At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν. As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics.

    Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum p =  h / λ  , making them full-fledged particles.

    Up to 1923, most physicists were reluctant to accept that light itself was quantized. Instead, they tried to explain photon behaviour by quantizing only matter, as in the Bohr model of the hydrogen atom. Even though these semiclassical models were only a first approximation, they were accurate for simple systems and they led to quantum mechanics.

    As recounted in Robert Millikan's 1923 Nobel lecture, Einstein's 1905 predicted energy relationship was verified experimentally by 1916, but the local concept of the quanta remained unsettled. Most physicists were reluctant to believe that electromagnetic radiation itself might be particulate and thus an example of wave–particle duality. Then, in 1922, Arthur Compton showed that photons carry momentum proportional to their wave number, in an experiment now called Compton scattering, which appeared to clearly support a localized quantum model. At least for Millikan, this settled the matter. Compton received the Nobel Prize in 1927 for his scattering work.

    Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory. An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics.

    By the late 1920s, the pivotal question was how to unify Maxwell's wave theory of light with its experimentally observed particle nature. The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model. (See § Quantum field theory and § As a gauge boson, below.)

    A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered absolutely definitive, since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence.

    In the 1970s and 1980s photon-correlation experiments definitively demonstrated quantum photon effects. These experiments produce results that cannot be explained by any classical theory of light, since they involve anticorrelations that result from the quantum measurement process. In 1974, the first such experiment was carried out by Clauser, who reported a violation of a classical Cauchy–Schwarz inequality. In 1977, Kimble et al. demonstrated an analogous anti-bunching effect of photons interacting with a beam splitter; this approach was simplified and sources of error eliminated in the photon-anticorrelation experiment of Grangier, Roger, & Aspect (1986). This work is reviewed and simplified further in Thorn, Neel, et al. (2004).

    Nomenclature

    Photoelectric effect: the emission of electrons from a metal plate caused by light quanta – photons

    The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy was "made up of a completely determinate number of finite equal parts", which he called "energy elements". In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete energy quanta. He called these a light quantum (German: ein Lichtquant).

    The name photon derives from the Greek word for light, φῶς (transliterated phôs). The name was used in 1916 by the American physicist and psychologist Leonard T. Troland for a unit of illumination of the retina, and in several other contexts before being adopted for physics. The use of the term photon for the light quantum was popularized by Gilbert N. Lewis, who used the term in a letter to Nature on 18 December 1926. Arthur Compton, who had performed a key experiment demonstrating light quanta, cited Lewis in the 1927 Solvay conference proceedings for suggesting the name photon. Einstein himself never used the term.

    In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard, named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade. In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is the Planck constant and the Greek letter ν (nu) is the photon's frequency.

    Wave–particle duality and uncertainty principles

    Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double slit has its energy received at a point on the screen with a probability distribution given by its interference pattern determined by Maxwell's wave equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; a photon's Maxwell waves will diffract, but photon energy does not spread out as it propagates, nor does this energy divide when it encounters a beam splitter. Rather, the received photon acts like a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10−15 m across) or even the point-like electron.

    While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics. In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes.

    Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl. The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa. A coherent state minimizes the overall uncertainty as far as quantum mechanics allows. Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase. This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, ΔN, and the uncertainty in the phase of the wave, Δφ. However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase φ cannot be represented by a Hermitian operator.

    Bose–Einstein model of a photon gas

    In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001.

    The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics).

    Stimulated and spontaneous emission

    Stimulated emission (in which photons "clone" themselves) was predicted by Einstein in his kinetic analysis, and led to the development of the laser. Einstein's derivation inspired further developments in the quantum treatment of light, which led to the statistical interpretation of quantum mechanics.

    In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that the emission and absorption of radiation by the atoms are independent of each other, and that thermal equilibrium is made by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself, filled with electromagnetic radiation, and suppose that the atoms can emit and absorb that radiation. Thermal equilibrium requires that the energy density ρ(ν) of photons with frequency ν (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed.

    Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate R_ji for a system to absorb a photon of frequency ν and transition from a lower energy E_j to a higher energy E_i is proportional to the number N_j of atoms with energy E_j and to the energy density ρ(ν) of ambient photons of that frequency,

      R_ji = N_j B_ji ρ(ν),

    where B_ji is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate R_ij for the emission of photons of frequency ν and transition from a higher energy E_i to a lower energy E_j is

      R_ij = N_i A_ij + N_i B_ij ρ(ν),

    where A_ij is the rate constant for emitting a photon spontaneously, and B_ij is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state i and the number in state j must, on average, be constant; hence, the rates R_ji and R_ij must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of N_i and N_j is

      N_i/N_j = (g_i/g_j) exp(−(E_i − E_j)/kT),

    where g_i and g_j are the degeneracies of state i and state j, respectively, E_i and E_j their energies, k the Boltzmann constant and T the system's temperature. From this, it is readily derived that

      g_i B_ij = g_j B_ji

    and

      A_ij = (8πhν³/c³) B_ij.

    The A_ij and B_ij are collectively known as the Einstein coefficients.
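    As a consistency check, the detailed-balance solution can be compared against Planck's law numerically. The sketch below (Python; the frequency, temperature and equal-degeneracy assumption are illustrative choices) solves the equilibrium condition R_ji = R_ij for ρ(ν) and confirms it reproduces Planck's formula:

    ```python
    import math

    h  = 6.62607015e-34   # Planck constant, J*s
    c  = 299792458.0      # speed of light, m/s
    kB = 1.380649e-23     # Boltzmann constant, J/K

    nu, T = 5e14, 5800.0  # a visible-light frequency (Hz), a solar-ish temperature (K)

    # Detailed balance N_j*B*rho = N_i*(A + B*rho), with equal degeneracies,
    # N_i/N_j = exp(-h*nu/(kB*T)) and A/B = 8*pi*h*nu**3/c**3, solved for rho:
    A_over_B = 8 * math.pi * h * nu**3 / c**3
    boltz = math.exp(-h * nu / (kB * T))
    rho_balance = A_over_B * boltz / (1 - boltz)

    # Planck's law for the spectral energy density:
    rho_planck = A_over_B / (math.exp(h * nu / (kB * T)) - 1)

    print(rho_balance, rho_planck)   # identical, up to rounding
    ```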

    Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients A_ij, B_ij and B_ji once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". Not long thereafter, in 1926, Paul Dirac derived the B_ij rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field.

    Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory.

    Quantum field theory

    Quantization of the electromagnetic field

    Different electromagnetic modes (such as those depicted here) can be treated as independent simple harmonic oscillators. A photon corresponds to a unit of energy E = hν in its electromagnetic mode.

    In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption. He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of hν, where ν is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909.

    In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be E = nhν, where ν is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy E = nhν as a state with n photons, each of energy hν. This approach gives the correct energy fluctuation formula.

    Feynman diagram of two electrons interacting by exchange of a virtual photon.

    Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's A_ij and B_ij coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics.

    Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy E = pc, and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events.

    Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization.

    Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is intended to be one of the modes of operation of the planned particle accelerator, the International Linear Collider.

    In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode

      |n_k0⟩ ⊗ |n_k1⟩ ⊗ … ⊗ |n_km⟩ ⊗ …

    where |n_ki⟩ represents the state in which n_ki photons are in the mode k_i. In this notation, the creation of a new photon in mode k_i (e.g., emitted from an atomic transition) is written as |n_ki⟩ → |n_ki + 1⟩. This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics.
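    The occupation-number bookkeeping can be made concrete in a truncated Fock space. The sketch below (Python/NumPy; the five-photon cutoff is an arbitrary illustrative choice) builds the creation operator for one mode and applies it to the vacuum:

    ```python
    import numpy as np

    N = 6  # keep Fock states |0> ... |5> of a single mode (arbitrary cutoff)

    # Creation operator with matrix elements <n+1| a_dag |n> = sqrt(n+1)
    a_dag = np.diag(np.sqrt(np.arange(1, N)), k=-1)

    vac = np.zeros(N); vac[0] = 1.0   # the vacuum state |0>
    one = a_dag @ vac                 # |0> -> |1>
    two = a_dag @ one                 # |1> -> sqrt(2) |2>

    print(one)                        # the basis vector |1>
    print(two / np.linalg.norm(two))  # normalized |2>
    ```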

    As a gauge boson

    The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime. For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real valued functions made from it, such as the energy or the Lagrangian.

    The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be ±ħ. These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states.

    In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in physics. Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally.

    Hadronic properties

    Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected by the interaction of merely photons with the hadron's electric charge. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electrical charge structures of protons and neutrons are substantially different. A theory called vector meson dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon, which interacts only with electric charges, and vector mesons, which mediate the residual nuclear force. However, if experimentally probed at very short distances, the intrinsic structure of the photon appears to have as components a charge-neutral flux of quarks and gluons, quasi-free according to asymptotic freedom in QCD. That flux is described by the photon structure function. A review by Nisius (2000) presented a comprehensive comparison of data with theoretical predictions.

    Contributions to the mass of a system

    The energy of a system that emits a photon is decreased by the energy E of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount E/c². Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form E/c² for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei).

    This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium.

    Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves.

    In matter

    Light that travels through transparent matter does so at a lower speed than c, the speed of light in vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons. Polaritons have a nonzero effective mass, which means that they cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering.

    Photons can be scattered by matter. For example, photons scatter so many times in the solar radiative zone after leaving the core of the Sun that radiant energy takes about a million years to reach the convection zone. However, photons emitted from the sun's photosphere take only 8.3 minutes to reach Earth.

    Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry.

    Technological applications

    Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an important application and is discussed above under stimulated emission.

    Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas.

    Planck's energy formula is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations.

    Under some conditions, an energy transition can be excited by two photons that individually would be insufficient. This allows for higher resolution microscopy, because the sample absorbs energy only in the spectrum where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy.

    In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins.

    Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is 0 or 1.
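    A toy model of such a generator (Python; the 50/50 branching is emulated here with a classical pseudo-random choice, which a real device replaces with genuine single-photon detections):

    ```python
    import random

    def photon_rng_bits(n):
        """Toy beam-splitter RNG: each photon either transmits (1) or
        reflects (0) with equal probability; each detection is one bit."""
        return [random.randint(0, 1) for _ in range(n)]

    print("".join(map(str, photon_rng_bits(32))))
    ```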

    Quantum optics and computation

    Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography.

    Two-photon physics studies interactions between photons, which are rare. In 2018, Massachusetts Institute of Technology researchers announced the discovery of bound photon triplets, which may involve polaritons.
