Saturday, July 12, 2025

Geodesic

From Wikipedia, the free encyclopedia
Klein quartic with 28 geodesics (marked by 7 colors and 4 patterns)

In geometry, a geodesic (/ˌdʒiː.əˈdɛsɪk, -oʊ-, -ˈdiːsɪk, -zɪk/) is a curve representing in some sense the locally shortest path (arc) between two points in a surface, or more generally in a Riemannian manifold. The term also has meaning in any differentiable manifold with a connection. It is a generalization of the notion of a "straight line".

The noun geodesic and the adjective geodetic come from geodesy, the science of measuring the size and shape of Earth, though many of the underlying principles can be applied to any ellipsoidal geometry. In the original sense, a geodesic was the shortest route between two points on the Earth's surface. For a spherical Earth, it is a segment of a great circle (see also great-circle distance). The term has since been generalized to more abstract mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph.

In a Riemannian manifold or submanifold, geodesics are characterised by the property of having vanishing geodesic curvature. More generally, in the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. Applying this to the Levi-Civita connection of a Riemannian metric recovers the previous notion.

Geodesics are of particular importance in general relativity. Timelike geodesics in general relativity describe the motion of free falling test particles.

Introduction

A locally shortest path between two given points in a curved space, assumed to be a Riemannian manifold, can be defined by using the equation for the length of a curve (a function f from an open interval of R to the space), and then minimizing this length between the points using the calculus of variations. This has some minor technical problems because there is an infinite-dimensional space of different ways to parameterize the shortest path. It is simpler to restrict the set of curves to those that are parameterized "with constant speed" 1, meaning that the distance from f(s) to f(t) along the curve equals |s − t|. Equivalently, a different quantity may be used, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic (here "constant velocity" is a consequence of minimization). Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its width, and in so doing will minimize its energy. The resulting shape of the band is a geodesic.

It is possible that several different curves between two points minimize the distance, as is the case for two diametrically opposite points on a sphere. In such a case, any of these curves is a geodesic.

A contiguous segment of a geodesic is again a geodesic.

In general, geodesics are not the same as "shortest curves" between two points, though the two concepts are closely related. The difference is that geodesics are only locally the shortest distance between points, and are parameterized with "constant speed". Going the "long way round" on a great circle between two points on a sphere is a geodesic but not the shortest path between the points. The map t ↦ t² from the unit interval on the real number line to itself gives the shortest path between 0 and 1, but is not a geodesic because the velocity of the corresponding motion of a point is not constant.

Geodesics are commonly seen in the study of Riemannian geometry and more generally metric geometry. In general relativity, geodesics in spacetime describe the motion of point particles under the influence of gravity alone. In particular, the path taken by a falling rock, an orbiting satellite, or the shape of a planetary orbit are all geodesics in curved spacetime. More generally, the topic of sub-Riemannian geometry deals with the paths that objects may take when they are not free, and their movement is constrained in various ways.

This article presents the mathematical formalism involved in defining, finding, and proving the existence of geodesics, in the case of Riemannian manifolds. The article Levi-Civita connection discusses the more general case of a pseudo-Riemannian manifold and geodesic (general relativity) discusses the special case of general relativity in greater detail.

Examples

A geodesic on a triaxial ellipsoid.
If an insect is placed on a surface and continually walks "forward", by definition it will trace out a geodesic.

The most familiar examples are the straight lines in Euclidean geometry. On a sphere, the images of geodesics are the great circles. The shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points, then there are infinitely many shortest paths between them. Geodesics on an ellipsoid behave in a more complicated way than on a sphere; in particular, they are not closed in general (see figure).
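
As a minimal numerical sketch of the sphere case: the geodesic between two points on the unit sphere can be sampled by spherical linear interpolation along the shorter great-circle arc, and its length equals the central angle. The latitude/longitude values and the 6371 km Earth radius below are rough assumptions chosen purely for illustration.

    import numpy as np

    def great_circle_arc(a, b, n=50):
        """Sample the shorter great-circle arc (the geodesic on the unit sphere)
        between unit vectors a and b by spherical linear interpolation."""
        a = np.asarray(a, float) / np.linalg.norm(a)
        b = np.asarray(b, float) / np.linalg.norm(b)
        omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # central angle = arc length
        if np.isclose(omega, 0.0):
            return np.repeat(a[None, :], n, axis=0), 0.0
        t = np.linspace(0.0, 1.0, n)[:, None]
        arc = (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
        return arc, omega

    def to_unit(lat_deg, lon_deg):
        """Unit vector for a latitude/longitude pair given in degrees."""
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

    # Approximate coordinates for New York and London (illustrative values).
    arc, angle = great_circle_arc(to_unit(40.7, -74.0), to_unit(51.5, -0.1))
    print(f"central angle {angle:.4f} rad ~ {6371 * angle:.0f} km on an Earth of radius 6371 km")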

Triangles

A geodesic triangle on the sphere.

A geodesic triangle is formed by the geodesics joining each pair out of three points on a given surface. On the sphere, the geodesics are great circle arcs, forming a spherical triangle.

Geodesic triangles in spaces of positive (top), negative (middle) and zero (bottom) curvature.

Metric geometry

In metric geometry, a geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve γ : I → M from an interval I of the reals to the metric space M is a geodesic if there is a constant v ≥ 0 such that for any t ∈ I there is a neighborhood J of t in I such that for any t1, t2 ∈ J we have

d(γ(t1), γ(t2)) = v |t1 − t2|.

This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with natural parameterization, i.e. in the above identity v = 1 and

d(γ(t1), γ(t2)) = |t1 − t2|.

If the last equality is satisfied for all t1, t2 ∈ I, the geodesic is called a minimizing geodesic or shortest path.

In general, a metric space may have no geodesics, except constant curves. At the other extreme, any two points in a length metric space are joined by a minimizing sequence of rectifiable paths, although this minimizing sequence need not converge to a geodesic. The metric Hopf-Rinow theorem provides situations where a length space is automatically a geodesic space.

Common examples of geodesic metric spaces that are often not manifolds include metric graphs, (locally compact) metric polyhedral complexes, infinite-dimensional pre-Hilbert spaces, and real trees.

Riemannian geometry

In a Riemannian manifold M with metric tensor g, the length L of a continuously differentiable curve γ : [a, b] → M is defined by

L(γ) = ∫_a^b √( g_γ(t)(γ′(t), γ′(t)) ) dt.

The distance d(p, q) between two points p and q of M is defined as the infimum of the length taken over all continuous, piecewise continuously differentiable curves γ : [a, b] → M such that γ(a) = p and γ(b) = q. In Riemannian geometry, all geodesics are locally distance-minimizing paths, but the converse is not true. In fact, only paths that are both locally distance minimizing and parameterized proportionately to arc-length are geodesics.

Another equivalent way of defining geodesics on a Riemannian manifold is to define them as the minima of the following action or energy functional

E(γ) = (1/2) ∫_a^b g_γ(t)(γ′(t), γ′(t)) dt.

All minima of E are also minima of L, but L is a bigger set since paths that are minima of L can be arbitrarily re-parameterized (without changing their length), while minima of E cannot. For a piecewise C^1 curve (more generally, a W^{1,2} curve), the Cauchy–Schwarz inequality gives

L(γ)^2 ≤ 2(b − a) E(γ),

with equality if and only if g(γ′, γ′) is equal to a constant a.e.; the path should be travelled at constant speed. It happens that minimizers of E(γ) also minimize L(γ), because they turn out to be affinely parameterized, and the inequality is an equality. The usefulness of this approach is that the problem of seeking minimizers of E is a more robust variational problem. Indeed, E is a "convex function" of γ, so that within each isotopy class of "reasonable functions", one ought to expect existence, uniqueness, and regularity of minimizers. In contrast, "minimizers" of the functional L(γ) are generally not very regular, because arbitrary reparameterizations are allowed.
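
A rough numerical illustration of this energy-minimization viewpoint, under the assumption of the unit sphere in R3 and a crude discretization: replace the curve by a polyline with fixed endpoints, take the sum of squared segment lengths as a discrete stand-in for the energy, and minimize. The optimized path flattens onto the great circle through the endpoints (here the equator, so the z-components shrink toward zero).

    import numpy as np
    from scipy.optimize import minimize

    # Fixed endpoints on the unit sphere; the interior points are the unknowns.
    p0 = np.array([1.0, 0.0, 0.0])
    p1 = np.array([0.0, 1.0, 0.0])
    n_interior = 18

    def discrete_energy(flat):
        """Discrete stand-in for E(gamma): sum of squared chord lengths of the
        polyline p0 -> (interior points projected onto the sphere) -> p1."""
        pts = flat.reshape(n_interior, 3)
        pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)   # keep points on the sphere
        path = np.vstack([p0, pts, p1])
        return np.sum(np.diff(path, axis=0) ** 2)

    # Deliberately crooked initial path between the endpoints.
    s = np.linspace(0.0, np.pi / 2, n_interior + 2)[1:-1]
    init = np.column_stack([np.cos(s), np.sin(s), 0.3 * np.sin(4 * s)])

    res = minimize(discrete_energy, init.ravel(), method="L-BFGS-B")
    opt = res.x.reshape(n_interior, 3)
    opt /= np.linalg.norm(opt, axis=1, keepdims=True)
    print("max |z| after optimization:", np.abs(opt[:, 2]).max())   # ~0: the equatorial great circle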

The Euler–Lagrange equations of motion for the functional E are then given in local coordinates by

d²x^λ/dt² + Γ^λ_{μν} (dx^μ/dt)(dx^ν/dt) = 0,

where Γ^λ_{μν} are the Christoffel symbols of the metric. This is the geodesic equation, discussed below.
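
As a sketch of the geodesic equation in practice, take the unit sphere with coordinates (θ, φ) and metric ds² = dθ² + sin²θ dφ²; its nonzero Christoffel symbols are Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ. Integrating the resulting second-order ODE numerically (the code below is an illustrative sketch) traces out a great circle, and the speed g(γ′, γ′) stays constant along the solution, as expected for an affinely parameterized geodesic.

    import numpy as np
    from scipy.integrate import solve_ivp

    def geodesic_rhs(t, y):
        """Geodesic equation on the unit sphere in coordinates (theta, phi),
        metric ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2."""
        theta, phi, dtheta, dphi = y
        ddtheta = np.sin(theta) * np.cos(theta) * dphi**2            # -Gamma^theta_{phi phi} dphi^2
        ddphi = -2.0 * (np.cos(theta) / np.sin(theta)) * dtheta * dphi  # -2 Gamma^phi_{theta phi} dtheta dphi
        return [dtheta, dphi, ddtheta, ddphi]

    # Start on the equator with a tilted initial velocity; the solution is a great circle.
    y0 = [np.pi / 2, 0.0, 0.5, 0.5]
    sol = solve_ivp(geodesic_rhs, (0.0, 2 * np.pi), y0, rtol=1e-9)

    # Constant-speed check: g(v, v) = dtheta^2 + sin^2(theta) dphi^2 should stay constant.
    theta, phi, dtheta, dphi = sol.y
    speed2 = dtheta**2 + np.sin(theta)**2 * dphi**2
    print("speed^2 min/max:", speed2.min(), speed2.max())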

Calculus of variations

Techniques of the classical calculus of variations can be applied to examine the energy functional E. The first variation of energy is defined in local coordinates by

δE(γ)(φ) = ∂/∂t |_{t=0} E(γ + tφ).

The critical points of the first variation are precisely the geodesics. The second variation is defined by

δ²E(γ)(φ, ψ) = ∂²/∂s ∂t |_{s=t=0} E(γ + tφ + sψ).

In an appropriate sense, zeros of the second variation along a geodesic γ arise along Jacobi fields. Jacobi fields are thus regarded as variations through geodesics.

By applying variational techniques from classical mechanics, one can also regard geodesics as Hamiltonian flows. They are solutions of the associated Hamilton equations, with (pseudo-)Riemannian metric taken as Hamiltonian.

Affine geodesics

A geodesic on a smooth manifold M with an affine connection ∇ is defined as a curve γ(t) such that parallel transport along the curve preserves the tangent vector to the curve, so

∇_γ̇ γ̇ = 0     (1)

at each point along the curve, where γ̇ is the derivative with respect to t. More precisely, in order to define the covariant derivative of γ̇ it is necessary first to extend γ̇ to a continuously differentiable vector field in an open set. However, the resulting value of (1) is independent of the choice of extension.

Using local coordinates on M, we can write the geodesic equation (using the summation convention) as

d²γ^λ/dt² + Γ^λ_{μν} (dγ^μ/dt)(dγ^ν/dt) = 0,

where γ^μ = x^μ ∘ γ(t) are the coordinates of the curve γ(t) and Γ^λ_{μν} are the Christoffel symbols of the connection ∇. This is an ordinary differential equation for the coordinates. It has a unique solution, given an initial position and an initial velocity. Therefore, from the point of view of classical mechanics, geodesics can be thought of as trajectories of free particles in a manifold. Indeed, the equation ∇_γ̇ γ̇ = 0 means that the acceleration vector of the curve has no components in the direction of the surface (and therefore it is perpendicular to the tangent plane of the surface at each point of the curve). So, the motion is completely determined by the bending of the surface. This is also the idea of general relativity where particles move on geodesics and the bending is caused by gravity.

Existence and uniqueness

The local existence and uniqueness theorem for geodesics states that geodesics on a smooth manifold with an affine connection exist, and are unique. More precisely:

For any point p in M and for any vector V in TpM (the tangent space to M at p) there exists a unique geodesic γ : I → M such that
γ(0) = p and γ̇(0) = V,
where I is a maximal open interval in R containing 0.

The proof of this theorem follows from the theory of ordinary differential equations, by noticing that the geodesic equation is a second-order ODE. Existence and uniqueness then follow from the Picard–Lindelöf theorem for the solutions of ODEs with prescribed initial conditions. γ depends smoothly on both p and V.

In general, I may not be all of R as for example for an open disc in R2. Any geodesic γ extends to all of R if and only if M is geodesically complete.

Geodesic flow

Geodesic flow is a local R-action on the tangent bundle TM of a manifold M defined in the following way

G^t(V) = γ̇_V(t),

where t ∈ R, V ∈ TM and γ_V denotes the geodesic with initial data γ̇_V(0) = V. Thus, the footpoint of G^t(V) is exp(tV), the image of the vector tV under the exponential map. A closed orbit of the geodesic flow corresponds to a closed geodesic on M.
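
On the unit sphere the exponential map and the geodesic flow have closed forms, which makes the flow property G^(s+t) = G^s ∘ G^t easy to check numerically; the following is a small illustrative sketch under that assumption.

    import numpy as np

    def sphere_exp(p, v, t=1.0):
        """Exponential map on the unit sphere: follow the geodesic leaving
        p (|p| = 1) with initial velocity v (tangent: p . v = 0) for time t."""
        nv = np.linalg.norm(v)
        if nv == 0:
            return p.copy()
        return np.cos(t * nv) * p + np.sin(t * nv) * (v / nv)

    def sphere_geodesic_flow(p, v, t):
        """One step of the geodesic flow on the tangent bundle of the sphere:
        transport both the point and its velocity along the geodesic."""
        nv = np.linalg.norm(v)
        pt = sphere_exp(p, v, t)
        vt = -np.sin(t * nv) * nv * p + np.cos(t * nv) * v
        return pt, vt

    p = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 0.3, 0.4])                 # tangent to the sphere at p
    p1, v1 = sphere_geodesic_flow(p, v, 0.7)
    p2, v2 = sphere_geodesic_flow(p1, v1, 0.5)
    p12, v12 = sphere_geodesic_flow(p, v, 1.2)
    print(np.allclose(p2, p12), np.allclose(v2, v12))   # flow property G^{0.5} o G^{0.7} = G^{1.2}
    print(np.dot(v1, v1), np.dot(v, v))                 # the speed g(V, V) is preserved by the flow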

On a (pseudo-)Riemannian manifold, the geodesic flow is identified with a Hamiltonian flow on the cotangent bundle. The Hamiltonian is then given by the inverse of the (pseudo-)Riemannian metric, evaluated against the canonical one-form. In particular the flow preserves the (pseudo-)Riemannian metric g, i.e.

g(G^t(V), G^t(V)) = g(V, V).

In particular, when V is a unit vector, γ_V remains unit speed throughout, so the geodesic flow is tangent to the unit tangent bundle. Liouville's theorem implies invariance of a kinematic measure on the unit tangent bundle.

Geodesic spray

The geodesic flow defines a family of curves in the tangent bundle. The derivatives of these curves define a vector field on the total space of the tangent bundle, known as the geodesic spray.

More precisely, an affine connection gives rise to a splitting of the double tangent bundle TTM into horizontal and vertical bundles:

TTM = H ⊕ V.

The geodesic spray is the unique horizontal vector field W satisfying

π_* W_v = v

at each point v ∈ TM; here π_* : TTM → TM denotes the pushforward (differential) along the projection π : TM → M associated to the tangent bundle.

More generally, the same construction allows one to construct a vector field for any Ehresmann connection on the tangent bundle. For the resulting vector field to be a spray (on the deleted tangent bundle TM \ {0}) it is enough that the connection be equivariant under positive rescalings: it need not be linear. That is, (cf. Ehresmann connection#Vector bundles and covariant derivatives) it is enough that the horizontal distribution satisfy

H_{λX} = d(S_λ)_X (H_X)

for every X ∈ TM \ {0} and λ > 0. Here d(S_λ) is the pushforward along the scalar homothety S_λ : X ↦ λX. A particular case of a non-linear connection arising in this manner is that associated to a Finsler manifold.

Affine and projective geodesics

Equation (1) is invariant under affine reparameterizations; that is, parameterizations of the form

t ↦ at + b,

where a and b are constant real numbers. Thus apart from specifying a certain class of embedded curves, the geodesic equation also determines a preferred class of parameterizations on each of the curves. Accordingly, solutions of (1) are called geodesics with affine parameter.

An affine connection is determined by its family of affinely parameterized geodesics, up to torsion (Spivak 1999, Chapter 6, Addendum I). The torsion itself does not, in fact, affect the family of geodesics, since the geodesic equation depends only on the symmetric part of the connection. More precisely, if ∇ and ∇̄ are two connections such that the difference tensor

D(X, Y) = ∇_X Y − ∇̄_X Y

is skew-symmetric, then ∇ and ∇̄ have the same geodesics, with the same affine parameterizations. Furthermore, there is a unique connection having the same geodesics as ∇, but with vanishing torsion.

Geodesics without a particular parameterization are described by a projective connection.

Computational methods

Efficient solvers for the minimal geodesic problem on surfaces have been proposed by Mitchell, Kimmel, Crane, and others.

Ribbon test

A ribbon "test" is a way of finding a geodesic on a physical surface. The idea is to fit a strip of paper containing a straight line (a ribbon) onto a curved surface as closely as possible without stretching or squishing the ribbon (without changing its internal geometry).

For example, when a ribbon is wound as a ring around a cone, the ribbon would not lie on the cone's surface but stick out, so that circle is not a geodesic on the cone. If the ribbon is adjusted so that all its parts touch the cone's surface, it would give an approximation to a geodesic.
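
This can be made quantitative by "unrolling" (developing) the cone into a flat sector, where geodesics become straight line segments; a circle around the cone unrolls into a circular arc, which is why the ribbon refuses to follow it. A small illustrative sketch, assuming points whose developed angular separation stays below π so the straight segment remains inside one unrolled sector:

    import numpy as np

    def cone_geodesic_distance(r1, phi1, r2, phi2, half_angle):
        """Geodesic distance on a cone of the given half-angle, found by
        unrolling the cone into a flat sector, where geodesics are straight.
        Points are given by slant distance r from the apex and azimuth phi."""
        k = np.sin(half_angle)          # development scales azimuth by sin(half_angle)
        dpsi = (phi2 - phi1) * k
        # planar law of cosines between the developed points
        return np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(dpsi))

    # Two points on the same parallel circle of the cone: the geodesic between
    # them is shorter than the arc of that circle, so the circle is not a geodesic.
    r, half_angle = 1.0, np.pi / 6
    along_circle = r * np.sin(half_angle) * (np.pi / 2)      # quarter turn along the circle
    print(cone_geodesic_distance(r, 0.0, r, np.pi / 2, half_angle), along_circle)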

Mathematically the ribbon test can be formulated as finding a mapping f : N → S of a neighborhood N of a line ℓ in a plane into a surface S so that the mapping f "doesn't change the distances around ℓ by much"; that is, at the distance ε from ℓ we have g_N − f*(g_S) = O(ε²), where g_N and g_S are the metrics on N and S.

Examples of applications

While geometric in nature, the idea of a shortest path is so general that it easily finds extensive use in nearly all sciences, and in some other disciplines as well.

Topology and geometric group theory

Probability, statistics and machine learning

Physics

Biology

  • The study of how the nervous system optimizes muscular movement may be approached by endowing a configuration space of the body with a Riemannian metric that measures the effort, so that the problem can be stated in terms of finding geodesics.
  • Geodesic distance is often used to measure the length of paths for signal propagation in neurons.
  • The structure of geodesics in large molecules plays a role in the study of protein folding.
  • The structure of compound eyes, many parts of which are held together and supported by a geodesic dome grid on the outer surface of the eye.

Engineering

Geodesics serve as the basis for a variety of engineering calculations.

Born rule

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Born_rule

The Born rule is a postulate of quantum mechanics that gives the probability that a measurement of a quantum system will yield a given result. In one commonly used application, it states that the probability density for finding a particle at a given position is proportional to the square of the amplitude of the system's wavefunction at that position. It was formulated and published by German physicist Max Born in July 1926.

Details

The Born rule states that if an observable corresponding to a self-adjoint operator A with discrete spectrum is measured in a system with normalized wave function |ψ⟩ (see Bra–ket notation), then:

  • the measured result will be one of the eigenvalues λ of A, and
  • the probability of measuring a given eigenvalue λi will equal ⟨ψ|Pi|ψ⟩, where Pi is the projection onto the eigenspace of A corresponding to λi.

(In the case where the eigenspace of A corresponding to λi is one-dimensional and spanned by the normalized eigenvector |λi⟩, Pi is equal to |λi⟩⟨λi|, so the probability ⟨ψ|Pi|ψ⟩ is equal to |⟨λi|ψ⟩|². Since the complex number ⟨λi|ψ⟩ is known as the probability amplitude that the state vector |ψ⟩ assigns to the eigenvector |λi⟩, it is common to describe the Born rule as saying that probability is equal to the amplitude-squared (really the amplitude times its own complex conjugate). Equivalently, the probability can be written as ⟨ψ|λi⟩⟨λi|ψ⟩.)
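
The discrete case is easy to state numerically: diagonalize the operator, project the state onto each eigenvector, and square the amplitudes. The operator and state below are arbitrary illustrative choices, not taken from the text.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, -1.0]])           # a toy self-adjoint observable
    psi = np.array([0.6, 0.8j])           # normalized state vector, <psi|psi> = 1

    eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues and orthonormal eigenvectors

    # Born rule: P(lambda_i) = |<lambda_i|psi>|^2 = <psi|P_i|psi>
    amplitudes = eigvecs.conj().T @ psi
    probs = np.abs(amplitudes) ** 2
    for lam, p in zip(eigvals, probs):
        print(f"eigenvalue {lam:+.4f}: probability {p:.4f}")
    print("probabilities sum to", probs.sum())   # 1, since psi is normalized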

In the case where the spectrum of A is not wholly discrete, the spectral theorem proves the existence of a certain projection-valued measure (PVM) Q, the spectral measure of A. In this case:

  • the probability that the result of the measurement lies in a measurable set M is given by ⟨ψ|Q(M)|ψ⟩.

For example, a single structureless particle can be described by a wave function ψ that depends upon position coordinates (x, y, z) and a time coordinate t. The Born rule implies that the probability density function p for the result of a measurement of the particle's position at time t0 is

p(x, y, z, t0) = |ψ(x, y, z, t0)|².

The Born rule can also be employed to calculate probabilities (for measurements with discrete sets of outcomes) or probability densities (for continuous-valued measurements) for other observables, like momentum, energy, and angular momentum.

In some applications, this treatment of the Born rule is generalized using positive-operator-valued measures (POVM). A POVM is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalization of von Neumann measurements and, correspondingly, quantum measurements described by POVMs are a generalization of quantum measurements described by self-adjoint observables. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see purification of quantum state); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics and can also be used in quantum field theory. They are extensively used in the field of quantum information.

In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices {Fi} on a Hilbert space that sum to the identity matrix:

Σi Fi = I.

The POVM element Fi is associated with the measurement outcome i, such that the probability of obtaining it when making a measurement on the quantum state ρ is given by:

p(i) = tr(ρ Fi),

where tr is the trace operator. This is the POVM version of the Born rule. When the quantum state being measured is a pure state |ψ⟩ this formula reduces to:

p(i) = tr(|ψ⟩⟨ψ| Fi) = ⟨ψ|Fi|ψ⟩.
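
A small numerical sketch of the POVM form of the rule, with an arbitrary two-outcome POVM and an arbitrary qubit density matrix chosen purely for illustration:

    import numpy as np

    # A toy two-outcome POVM on a qubit: positive semi-definite elements summing to I.
    F0 = np.array([[0.8, 0.0],
                   [0.0, 0.3]])
    F1 = np.eye(2) - F0

    # A mixed state as a density matrix rho (unit trace, positive semi-definite).
    rho = np.array([[0.7, 0.2],
                    [0.2, 0.3]])

    # POVM form of the Born rule: p(i) = tr(rho F_i)
    p0 = np.trace(rho @ F0).real
    p1 = np.trace(rho @ F1).real
    print(p0, p1, p0 + p1)                # probabilities sum to 1 because the F_i sum to I

    # Pure-state special case: p(i) = <psi|F_i|psi>
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    print(psi.conj() @ F0 @ psi, np.trace(np.outer(psi, psi.conj()) @ F0))  # the two forms agree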

The Born rule, together with the unitarity of the time evolution operator (or, equivalently, the Hamiltonian being Hermitian), implies the unitarity of the theory: a wave function that is time-evolved by a unitary operator will remain properly normalized. (In the more general case where one considers the time evolution of a density matrix, proper normalization is ensured by requiring that the time evolution is a trace-preserving, completely positive map.)

History

The Born rule was formulated by Born in a 1926 paper. In this paper, Born solves the Schrödinger equation for a scattering problem and, inspired by Albert Einstein and Einstein's probabilistic rule for the photoelectric effect, concludes, in a footnote, that the Born rule gives the only possible interpretation of the solution. (The main body of the article says that the amplitude "gives the probability" [bestimmt die Wahrscheinlichkeit], while the footnote added in proof says that the probability is proportional to the square of its magnitude.) In 1954, together with Walther Bothe, Born was awarded the Nobel Prize in Physics for this and other work. John von Neumann discussed the application of spectral theory to Born's rule in his 1932 book.

Derivation from more basic principles

Gleason's theorem shows that the Born rule can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, prompted by a question posed by George W. Mackey. This theorem was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics.

Several other researchers have also tried to derive the Born rule from more basic principles. A number of derivations have been proposed in the context of the many-worlds interpretation. These include the decision-theory approach pioneered by David Deutsch and later developed by Hilary Greaves and David Wallace; and an "envariance" approach by Wojciech H. Zurek. These proofs have, however, been criticized as circular. In 2018, an approach based on self-locating uncertainty was suggested by Charles Sebens and Sean M. Carroll; this has also been criticized. Simon Saunders, in 2021, produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule.

In 2019, Lluís Masanes, Thomas Galley, and Markus Müller proposed a derivation based on postulates including the possibility of state estimation.

It has also been claimed that pilot-wave theory can be used to statistically derive the Born rule, though this remains controversial.

Within the QBist interpretation of quantum theory, the Born rule is seen as an extension of the normative principle of coherence, which ensures self-consistency of probability assessments across a whole set of such assessments. It can be shown that an agent who thinks they are gambling on the outcomes of measurements on a sufficiently quantum-like system but refuses to use the Born rule when placing their bets is vulnerable to a Dutch book.

Precision tests of QED


From Wikipedia, the free encyclopedia

Quantum electrodynamics (QED), a relativistic quantum field theory of electrodynamics, is among the most stringently tested theories in physics. The most precise and specific tests of QED consist of measurements of the electromagnetic fine-structure constant, α, in various physical systems. Checking the consistency of such measurements tests the theory.

Tests of a theory are normally carried out by comparing experimental results to theoretical predictions. In QED, there is some subtlety in this comparison, because theoretical predictions require as input an extremely precise value of α, which can only be obtained from another precision QED experiment. Because of this, the comparisons between theory and experiment are usually quoted as independent determinations of α. QED is then confirmed to the extent that these measurements of α from different physical sources agree with each other.

The agreement found this way is to within less than one part in a billion (10−9). An extremely high precision measurement of the quantized energies of the cyclotron orbits of the electron gives a precision of better than one part in a trillion (10−12). This makes QED one of the most accurate physical theories constructed thus far.

Besides these independent measurements of the fine-structure constant, many other predictions of QED have been tested as well.

Measurements of the fine-structure constant using different systems

Precision tests of QED have been performed in low-energy atomic physics experiments, high-energy collider experiments, and condensed matter systems. The value of α is obtained in each of these experiments by fitting an experimental measurement to a theoretical expression (including higher-order radiative corrections) that includes α as a parameter. The uncertainty in the extracted value of α includes both experimental and theoretical uncertainties. This program thus requires both high-precision measurements and high-precision theoretical calculations. Unless noted otherwise, all results below are taken from.

Low-energy measurements

Anomalous magnetic dipole moments

The most precise measurement of α comes from the anomalous magnetic dipole moment, or g−2 (pronounced "g minus 2"), of the electron. To make this measurement, two ingredients are needed:

  1. A precise measurement of the anomalous magnetic dipole moment, and
  2. A precise theoretical calculation of the anomalous magnetic dipole moment in terms of α.

As of February 2023, the best measurement of the anomalous magnetic dipole moment of the electron was made by the group of Gerald Gabrielse at Harvard University, using a single electron caught in a Penning trap. The difference between the electron's cyclotron frequency and its spin precession frequency in a magnetic field is proportional to g−2. An extremely high precision measurement of the quantized energies of the cyclotron orbits, or Landau levels, of the electron, compared to the quantized energies of the electron's two possible spin orientations, gives a value for the electron's spin g-factor:

g/2 = 1.00115965218059(13),

a precision of better than one part in a trillion. (The digits in parentheses indicate the standard uncertainty in the last listed digits of the measurement.)

The current state-of-the-art theoretical calculation of the anomalous magnetic dipole moment of the electron includes QED diagrams with up to four loops. Combining this with the experimental measurement of g yields the most precise value of α:

α−1 = 137.035999166(15),

a precision of better than a part in a billion. This uncertainty is ten times smaller than the nearest rival method involving atom-recoil measurements.
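
To see roughly how α is extracted from g, one can invert the perturbative series for the anomaly ae = (g − 2)/2. Keeping only the leading one-loop (Schwinger) term, ae ≈ α/2π, already lands in the right neighborhood, as the rough sketch below shows; the remaining difference from 137.036 is what the multi-loop calculation supplies.

    import math

    # Measured electron g-factor (from the text) and the anomaly a_e = (g - 2)/2
    g_over_2 = 1.00115965218059
    a_e = g_over_2 - 1.0

    # Leading-order QED (Schwinger term): a_e ~ alpha / (2*pi)
    alpha_lo = 2.0 * math.pi * a_e
    print("alpha^-1 from the one-loop term only:", 1.0 / alpha_lo)
    # ~137.2 -- the higher-loop terms shift this to ~137.036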

A value of α can also be extracted from the anomalous magnetic dipole moment of the muon. The g-factor of the muon is extracted using the same physical principle as for the electron above – namely, that the difference between the cyclotron frequency and the spin precession frequency in a magnetic field is proportional to g−2. The most precise measurement comes from Brookhaven National Laboratory's muon g−2 experiment, in which polarized muons are stored in a cyclotron and their spin orientation is measured by the direction of their decay electrons. As of February 2007, the current world average muon g-factor measurement is,

g/2 = 1.0011659208(6),

a precision of better than one part in a billion. The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED. See muon g–2 for current efforts to refine the measurement.

Atom-recoil measurements

This is an indirect method of measuring α, based on measurements of the masses of the electron, certain atoms, and the Rydberg constant. The Rydberg constant is known to seven parts in a trillion. The mass of the electron relative to that of caesium and rubidium atoms is also known with extremely high precision. If the mass of the electron can be measured with sufficiently high precision, then α can be found from the Rydberg constant according to

α² = (2R∞/c) (mRb/me) (h/mRb),

where R∞ is the Rydberg constant, me the electron mass, mRb the mass of the 87Rb atom, h the Planck constant, and c the speed of light. To get the mass of the electron, this method actually measures the mass of an 87Rb atom by measuring the recoil speed of the atom after it emits a photon of known wavelength in an atomic transition. Combining this with the ratio of the electron mass to the 87Rb atom mass, the result for α is,

α−1 = 137.03599878(91).

Because this measurement is the next-most-precise after the measurement of α from the electron's anomalous magnetic dipole moment described above, their comparison provides the most stringent test of QED: the value of α obtained here is within one standard deviation of that found from the electron's anomalous magnetic dipole moment, an agreement to within ten parts in a billion.
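
The underlying relation is R∞ = α² me c / (2h), so α follows once h/me is known; in the experiment h/me is obtained indirectly as (h/mRb)·(mRb/me). A rough check with approximate CODATA values (the numerical constants below are assumptions for illustration, not taken from the article):

    import math

    # Approximate CODATA values (illustrative assumptions)
    R_inf = 10973731.568160   # Rydberg constant, 1/m
    h = 6.62607015e-34        # Planck constant, J*s
    m_e = 9.1093837015e-31    # electron mass, kg
    c = 299792458.0           # speed of light, m/s

    # R_inf = alpha^2 * m_e * c / (2h)  =>  alpha = sqrt(2 * R_inf * h / (m_e * c))
    alpha = math.sqrt(2.0 * R_inf * h / (m_e * c))
    print("alpha^-1 ~", 1.0 / alpha)    # ~137.036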

Neutron Compton wavelength

This method of measuring α is very similar in principle to the atom-recoil method. In this case, the accurately known mass ratio of the electron to the neutron is used. The neutron mass is measured with high precision through a very precise measurement of its Compton wavelength. This is then combined with the value of the Rydberg constant to extract α. The result is,

α−1 = 137.0360101(54).

Hyperfine splitting

Hyperfine splitting is a splitting in the energy levels of an atom caused by the interaction between the magnetic moment of the nucleus and the combined spin and orbital magnetic moment of the electron. The hyperfine splitting in hydrogen, measured using Ramsey's hydrogen maser, is known with great precision. Unfortunately, the influence of the proton's internal structure limits how precisely the splitting can be predicted theoretically. This leads to the extracted value of α being dominated by theoretical uncertainty:

α−1 = 137.0360(3).

The hyperfine splitting in muonium, an "atom" consisting of an electron and an antimuon, provides a more precise measurement of α because the muon has no internal structure:

α−1 = 137.035994(18).

Lamb shift

The Lamb shift is a small difference in the energies of the 2 S1/2 and 2 P1/2 energy levels of hydrogen, which arises from a one-loop effect in quantum electrodynamics. The Lamb shift is proportional to α5 and its measurement yields the extracted value:

α−1 = 137.0368(7).

Positronium

Positronium is an "atom" consisting of an electron and a positron. Whereas the calculation of the energy levels of ordinary hydrogen is contaminated by theoretical uncertainties from the proton's internal structure, the particles that make up positronium have no internal structure so precise theoretical calculations can be performed. The measurement of the splitting between the 2 3S1 and the 1 3S1 energy levels of positronium yields

α−1 = 137.034(16).

Measurements of α can also be extracted from the positronium decay rate. Positronium decays through the annihilation of the electron and the positron into two or more gamma-ray photons. The decay rate of the singlet ("para-positronium") 1S0 state yields

α−1 = 137.00(6),

and the decay rate of the triplet ("ortho-positronium") 3S1 state yields

α−1 = 136.971(6).

This last result is the only serious discrepancy among the numbers given here, but there is some evidence that uncalculated higher-order quantum corrections give a large correction to the value quoted here.

High-energy QED processes

The cross sections of higher-order QED reactions at high-energy electron-positron colliders provide a determination of α. In order to compare the extracted value of α with the low-energy results, higher-order QED effects including the running of α due to vacuum polarization must be taken into account. These experiments typically achieve only percent-level accuracy, but their results are consistent with the precise measurements available at lower energies.

The cross section for e+e− → e+e− e+e− yields

α−1 = 136.5(2.7),

and the cross section for e+e− → e+e− μ+μ− yields

α−1 = 139.9(1.2).

Condensed matter systems

The quantum Hall effect and the AC Josephson effect are exotic quantum interference phenomena in condensed matter systems. These two effects provide a standard electrical resistance and a standard frequency, respectively, which measure the charge of the electron with corrections that are strictly zero for macroscopic systems.

The quantum Hall effect yields

α−1 = 137.0359979(32),

and the AC Josephson effect yields

α−1 = 137.0359770(77).

Other tests

  • QED predicts that the photon is a massless particle. A variety of highly sensitive tests have proven that the photon mass is either zero, or else extraordinarily small. One type of these tests, for example, works by checking Coulomb's law at high accuracy, as the photon's mass would be nonzero if Coulomb's law were modified. See Photon § Experimental checks on photon mass.
  • QED predicts that when electrons get very close to each other, they behave as if they had a higher electric charge, due to vacuum polarization. This prediction was experimentally verified in 1997 using the TRISTAN particle accelerator in Japan.
  • QED effects like vacuum polarization and self-energy influence the electrons bound to a nucleus in a heavy atom due to extreme electromagnetic fields. A recent experiment on the ground state hyperfine splitting in 209Bi80+ and 209Bi82+ ions revealed a deviation from the theory by more than 7 standard uncertainties. Indications show that this deviation may originate from a wrong value of the nuclear magnetic moment of 209Bi.
Crystallography

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Crystallo...