
Sunday, April 12, 2026

Dirac equation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Dirac_equation

In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-1/2 massive particles, called "Dirac particles", such as electrons and quarks for which parity is a symmetry. It is consistent with both the principles of quantum mechanics and the theory of special relativity, and was the first theory to fully account for special relativity in the context of quantum mechanics. The equation is validated by its rigorous accounting of the observed fine structure of the hydrogen spectrum and has become vital in the building of the Standard Model.

The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved. The existence of antimatter was experimentally confirmed several years later. It also provided a theoretical justification for the introduction of several component wave functions in Pauli's phenomenological theory of spin. The wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wavefunction in the non-relativistic limit, in contrast to the Schrödinger equation, which described wave functions of only one complex value. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles.

Dirac did not fully appreciate the importance of his results; however, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity—and the eventual discovery of the positron—represents one of the great triumphs of theoretical physics. This accomplishment has been described as fully on par with the works of Isaac Newton, James Clerk Maxwell, and Albert Einstein before him. The equation has been deemed by some physicists to be "the real seed of modern physics". The Dirac equation has been described as the "centerpiece of relativistic quantum mechanics", with it also stated that "the equation is perhaps the most important one in all of quantum mechanics".

History

Early attempts at a relativistic formulation

The first phase in the development of quantum mechanics, lasting between 1900 and 1925, focused on explaining individual phenomena that could not be explained through classical mechanics. The second phase, starting in the mid-1920s, saw the development of two systematic frameworks governing quantum mechanics. The first, known as matrix mechanics, uses matrices to describe physical observables; it was developed in 1925 by Werner Heisenberg, Max Born, and Pascual Jordan. The second, known as wave mechanics, uses a wave equation known as the Schrödinger equation to describe the state of a system; it was developed the next year by Erwin Schrödinger. While these two frameworks were initially seen as competing approaches, they would later be shown to be equivalent.

Both these frameworks only formulated quantum mechanics in a non-relativistic setting. This was seen as a deficiency right from the start, with Schrödinger originally attempting to formulate a relativistic version of the Schrödinger equation, in the process discovering the Klein–Gordon equation. However, after showing that this equation did not correctly reproduce the relativistic corrections to the hydrogen atom spectrum, for which an exact form was known due to Arnold Sommerfeld, he abandoned his relativistic formulation. The Klein–Gordon equation was also found by at least six other authors in the same year.

During 1926 and 1927, there was a widespread effort to incorporate relativity into quantum mechanics, largely through two approaches. The first was to consider the Klein–Gordon equation as the correct relativistic generalization of the Schrödinger equation. Such an approach was viewed unfavourably by many leading theorists since it failed to correctly predict numerous experimental results, and more importantly it appeared difficult to reconcile with the principles of quantum mechanics as understood at the time. These conceptual issues primarily arose due to the presence of a second temporal derivative.

The second approach introduced relativistic effects as corrections to the known non-relativistic formulas. This provided many provisional answers that were expected to eventually be supplanted by some yet-unknown relativistic formulation of quantum mechanics. One notable result by Heisenberg and Jordan was the introduction of two terms for spin and relativity into the hydrogen Hamiltonian, allowing them to derive the first-order approximation of the Sommerfeld fine structure formula.

A parallel development during this time was the concept of spin, first introduced in 1925 by Samuel Goudsmit and George Uhlenbeck. Shortly after, it was conjectured by Schrödinger to be the missing link in acquiring the correct Sommerfeld formula. In 1927, Wolfgang Pauli used the ideas of spin to find an effective theory for a nonrelativistic spin-1/2 particle, the Pauli equation. He did this by taking the Schrödinger equation and, rather than just assuming that the wave function depends on the physical coordinates, he also assumed that it depends on a spin coordinate that can take only two values. While this was still a non-relativistic formulation, he believed that a fully relativistic formulation possibly required a more complicated model for the electron, one that moved beyond a point particle.

Dirac's relativistic quantum mechanics

By 1927, many physicists no longer considered the fine structure of hydrogen as a crucial puzzle that called for a completely new relativistic formulation since it could effectively be solved using the Pauli equation or by introducing a spin-1/2 angular momentum quantum number in the Klein–Gordon equation. At the fifth Solvay Conference held that year, Paul Dirac was primarily concerned with the logical development of quantum mechanics. However, he realized that many other physicists complacently accepted the Klein–Gordon equation as a satisfactory relativistic formulation, which demanded abandoning basic principles of quantum mechanics as understood at the time, to which Dirac strongly objected. After his return from Brussels, Dirac focused on finding a relativistic theory for electrons. Within two months he solved the problem and published his results on January 2, 1928.

In his paper, Dirac was guided by two principles from transformation theory, the first being that the equation should be invariant under transformations of special relativity, and the second that it should transform under the transformation theory of quantum mechanics. The latter demanded that the equation would have to be linear in temporal derivatives, so that it would admit a probabilistic interpretation. His argument begins with the Klein–Gordon equation

$$-\hbar^2\frac{\partial^2\psi}{\partial t^2} = \left(c^2\hat{p}^2 + m^2c^4\right)\psi,$$

describing a particle using the wave function $\psi$. Here $\hat{p}^2$ is the square of the momentum operator, $m$ is the rest mass of the particle, $c$ is the speed of light, and $\hbar$ is the reduced Planck constant. The naive way to get an equation linear in the time derivative is to essentially consider the square root of both sides, replacing the equation with $i\hbar\,\partial\psi/\partial t = \sqrt{c^2\hat{p}^2 + m^2c^4}\,\psi$. However, such a square root of an operator is mathematically problematic for the resulting theory, making it unfeasible.

Dirac's first insight was the concept of linearization. He looked for some sort of variables $\alpha_i$ and $\beta$, independent of momentum and the spacetime coordinates, for which the square root could be rewritten in a linear form

$$\sqrt{c^2\hat{p}^2 + m^2c^4} = c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta mc^2.$$

By squaring this operator and demanding that it reduces to the Klein–Gordon equation, Dirac found that the variables must satisfy $\alpha_i^2 = \beta^2 = 1$, together with $\alpha_i\alpha_j + \alpha_j\alpha_i = 0$ if $i \neq j$ and $\alpha_i\beta + \beta\alpha_i = 0$. Dirac initially considered the Pauli matrices as a candidate, but then showed these would not work since it is impossible to find a set of four $2\times 2$ matrices that all anticommute with each other. His second insight was to instead consider four-dimensional matrices. In that case the equation would be acting on a four-component wavefunction $\psi$. Such a proposal was much bolder than Pauli's original generalization to a two-component wavefunction in the Pauli equation. This is because in Pauli's case, this was motivated by the demand to encode the two spin states of the particle. In contrast, Dirac had no physical argument for a four-component wavefunction, but instead introduced it as a matter of mathematical necessity. He thus arrived at the Dirac equation

$$i\hbar\frac{\partial\psi}{\partial t} = \left(c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta mc^2\right)\psi.$$

Dirac constructed the correct matrices without realizing that they form a mathematical structure that had been known since the early 1880s: the Clifford algebra. By recasting the equation in a Lorentz invariant form, he also showed that it correctly combines special relativity with his principle of quantum mechanical transformation theory, making it a viable candidate for a relativistic theory of the electron.

To investigate the equation further, he examined how it behaves in the presence of an electromagnetic field. To his surprise, this showed that it described a particle with a magnetic moment arising due to the particle having spin 1/2. Spin directly emerged from the equation, without Dirac having added it in by hand. Additionally, he focused on showing that the equation successfully reproduces the fine structure of the hydrogen atom, at least to first order. The equation therefore succeeded where all previous attempts had failed, in correctly describing relativistic phenomena of electrons from first principles rather than through the ad hoc modification of existing formulas.

Consequences

Except for his follow-up paper deriving the Zeeman effect and Paschen–Back effect from the equation in the presence of a magnetic field, Dirac left the work of examining the consequences of his equation to others, and only came back to the subject in 1930. Once the equation was published, it was recognized as the correct solution to the problem of spin, relativity, and quantum mechanics. At first the Dirac equation was considered the only valid relativistic equation for a particle with mass. Then in 1934 Pauli and Victor Weisskopf reinterpreted the Klein–Gordon equation as the equation for a relativistic spinless particle.

One of the first calculations was to reproduce the Sommerfeld fine structure formula exactly, which was performed independently by Charles Galton Darwin and Walter Gordon in 1928. This was the first time that the full formula had been derived from first principles. Further work on the mathematics of the equation was undertaken by Hermann Weyl in 1929. In this work he showed that the massless Dirac equation can be decomposed into a pair of Weyl equations.

The Dirac equation was also used to study various scattering processes. In particular, the Klein–Nishina formula, describing photon–electron scattering, was derived in 1928. Mott scattering, the scattering of electrons off a heavy target such as atomic nuclei, followed the next year. Over the following years it was further used to derive other standard scattering processes such as Møller scattering in 1932 and Bhabha scattering in 1936.

A problem that gained more focus with time was the presence of negative energy states in the Dirac equation, which led to many efforts to try to eliminate such states. Dirac initially simply rejected the negative energy states as unphysical, but the problem became clearer when in 1929 Oskar Klein showed that in static fields there exists inevitable mixing between the negative and positive energy states. Dirac's initial response was to believe that his equation must have some sort of defect, and that it was only the first approximation of a future theory that would not have this problem. However, he then suggested a solution to the problem in the form of the Dirac sea. This is the idea that the universe is filled with an infinite sea of negative-energy electron states. Positive-energy electron states then live above this sea and are prevented from decaying to the negative energy states by the Pauli exclusion principle.

Additionally, Dirac postulated the existence of positively charged holes in the Dirac sea, which he initially suggested could be the proton. However, Oppenheimer showed that in this case stable atoms could not exist, and Weyl further showed that the holes would have to have the same mass as the electrons. Persuaded by Oppenheimer's and Weyl's arguments, Dirac published a paper in 1931 that predicted the existence of an as-yet-unobserved particle that he called an "anti-electron", which would have the same mass and the opposite charge as an electron and would mutually annihilate upon contact with an electron. He suggested that every particle may have an oppositely charged partner, a concept now called antimatter.

In 1932 Carl Anderson discovered the "positive electron", now called the positron, which had all the properties of Dirac's anti-electron. While the Dirac sea was later superseded by quantum field theory, its conceptual legacy survived in the idea of a dynamical vacuum filled with virtual particles. In 1949 Ernst Stueckelberg suggested, and Richard Feynman showed in detail, that the negative energy solutions can be interpreted as particles traveling backwards in proper time. The concept of the Dirac sea is also realized more explicitly in some condensed matter systems in the form of the Fermi sea, which consists of a sea of filled valence electrons below some chemical potential.

Significant work was done over the following decades to try to find spectroscopic discrepancies with the predictions made by the Dirac equation; however, it was not until 1947 that the Lamb shift was discovered, which the equation does not predict. This led to the development of quantum electrodynamics in the 1950s, with the Dirac equation then being incorporated within the context of quantum field theory. Since it describes the dynamics of Dirac spinors, it went on to play a fundamental role in the Standard Model as well as many other areas of physics. For example, within condensed matter physics, systems whose fermions have a near linear dispersion relation are described by the Dirac equation. Such systems are known as Dirac matter and they include graphene and topological insulators, which have become a major area of research since the start of the 21st century.

The Dirac equation is inscribed upon a plaque on the floor of Westminster Abbey. Unveiled on 13 November 1995, the plaque commemorates Dirac's life. The equation, in its natural units formulation, is also prominently displayed in the auditorium of the Paul A.M. Dirac Lecture Hall at the Patrick M.S. Blackett Institute (formerly The San Domenico Monastery) of the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Sicily.

Formulation

Covariant formulation

In its modern field theoretic formulation, the Dirac equation in 3+1 dimensional Minkowski spacetime is written in terms of a Dirac field $\psi(x)$. This is a field that assigns a complex vector from $\mathbb{C}^4$ to each point in spacetime, where the key property of the field is that it transforms as a Dirac spinor under Lorentz transformations. In natural units where $\hbar = c = 1$, the Lorentz covariant formulation of the Dirac equation is given by

Dirac equation

$$(i\gamma^\mu\partial_\mu - m)\,\psi = 0$$

where $\gamma^\mu\partial_\mu$ is a contraction between the four-gradient $\partial_\mu$ and the gamma matrices $\gamma^\mu$. These are a set of four $4\times 4$ matrices generating the Dirac algebra, which requires them to satisfy

$$\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu} I_4,$$

where $\{\cdot,\cdot\}$ is the anticommutator, $\eta^{\mu\nu}$ is the Minkowski metric in a mainly negative signature, and $I_4$ is the $4\times 4$ identity matrix. The Dirac algebra is a special case of the more general mathematical structure known as a Clifford algebra. The Dirac algebra can also be seen as the real part of the spacetime algebra. There is no unique choice of matrices for the gamma matrices, with different choices known as different representations of the algebra. One common choice, originally discovered by Dirac, is known as the Dirac representation. Here the matrices are given by

$$\gamma^0 = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \qquad \gamma^i = \begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix},$$

where $\sigma^i$ are the three Pauli matrices for $i = 1, 2, 3$. There are two other common representations for the gamma matrices. The first is the chiral representation, which is useful when decomposing the Dirac equation into a pair of Weyl equations. The second is the Majorana representation, for which all gamma matrices are imaginary, so the Dirac operator is purely real. This representation is useful for studying Majorana spinors, which are purely real four-component spinor solutions of the Dirac equation.
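Because the defining relations are purely algebraic, any candidate set of gamma matrices can be checked numerically. Below is a minimal sketch in Python (assuming NumPy is available; variable names are illustrative only) that builds the Dirac-representation matrices shown above and verifies the anticommutation relations.

```python
import numpy as np

# Pauli matrices sigma^1, sigma^2, sigma^3
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma^i], [-sigma^i, 0]]
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus Minkowski metric

# Verify {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4 for all index pairs
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))
print("Dirac algebra verified")
```

The same check works for the chiral or Majorana representations by swapping in the corresponding matrices.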

By taking the Hermitian conjugate of the Dirac equation and multiplying it by $\gamma^0$ from the right, the adjoint Dirac equation can be found, with this being the equation of motion for the Dirac adjoint $\bar\psi = \psi^\dagger\gamma^0$. It is given by

$$\bar\psi\left(i\gamma^\mu\overleftarrow{\partial}_\mu + m\right) = 0.$$

The adjoint spinor is useful in forming Lorentz invariant quantities. For example, the bilinear $\psi^\dagger\psi$ is not Lorentz invariant, but $\bar\psi\psi$ is. Here $\overleftarrow{\partial}_\mu$ is shorthand notation for a partial derivative acting on the left. In regular notation, the adjoint Dirac equation is equivalent to

$$i\,\partial_\mu\bar\psi\,\gamma^\mu + m\bar\psi = 0.$$

The Dirac equation can be rewritten in a non-covariant form similar to that of the Schrödinger equation

$$i\frac{\partial\psi}{\partial t} = \left(-i\boldsymbol{\alpha}\cdot\boldsymbol{\nabla} + m\beta\right)\psi = H\psi.$$

The right-hand side of the equation is the Hamiltonian $H$ acting on the Dirac spinor $\psi$. Here $\alpha^i$ and $\beta$ are a set of four Hermitian matrices that all anticommute with each other and square to the identity. They are related to the gamma matrices through $\beta = \gamma^0$ and $\alpha^i = \gamma^0\gamma^i$. This form is useful in quantum mechanics, where the Hamiltonian can be easily modified to solve a wide range of problems, such as by introducing a potential or through a minimal coupling to the electromagnetic field.

Dirac action

The Dirac equation can also be acquired from a Lagrangian formulation of the field theory, where the Dirac action is given by

$$S = \int d^4x\; \bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi.$$

The equation then arises as the Euler–Lagrange equation of this action, found by varying the adjoint spinor $\bar\psi$. Meanwhile, the adjoint Dirac equation is acquired by varying the spinor $\psi$. The action formulation of the Dirac equation has the advantage of making the symmetries of the Dirac equation more explicit, since they leave its action invariant. Noether's theorem then allows for the direct calculation of currents corresponding to these symmetries. Additionally, the action is usually used to define the associated quantum field theory, such as through the path integral formulation.
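As a short worked step, keeping to the natural-units conventions above: since $\bar\psi$ appears in the action with no derivatives, its variation recovers the Dirac equation directly,

$$\delta S = \int d^4x\; \delta\bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi = 0 \quad\Longrightarrow\quad \left(i\gamma^\mu\partial_\mu - m\right)\psi = 0,$$

while varying $\psi$ and integrating by parts yields the adjoint equation $i\,\partial_\mu\bar\psi\,\gamma^\mu + m\bar\psi = 0$.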

Meaning of the Dirac fields

In quantum mechanics, the Dirac spinor corresponds to a four-component spinor wave function describing the state of a Dirac fermion. Its position probability density, the probability of finding the fermion in a region of space, is described by the zeroth component of its vector current, $j^0 = \bar\psi\gamma^0\psi = \psi^\dagger\psi$. In the case of a large number of particles, it can also be interpreted as the charge density. An appropriate normalization is required to ensure that the total probability across all of space is equal to one, with probability conservation following directly from the conservation of the vector current. The Dirac equation is the relativistic analogue of the Schrödinger equation for the Dirac fermion wavefunction.

In the second quantization form of quantum field theory the Dirac spinor is quantized to be an operator-valued spinor field $\hat\psi(x)$. In contrast to quantum mechanics, it no longer represents the state in the Hilbert space, but is rather the operator that acts on states to create or destroy particles. Observables are formed using expectation values of these operators. The Dirac equation then becomes an operator equation describing the state-independent evolution of the operator-valued spinor field

$$\left(i\gamma^\mu\partial_\mu - m\right)\hat\psi(x) = 0.$$

In the path integral formulation of quantum field theory, the spinor field is an anti-commuting Grassmann-valued field that only acts as an integration variable. The Dirac equation then emerges as the classical saddle point behaviour of the path integral. It also arises as an equation for the expectation value of the classical field variables

$$\left(i\gamma^\mu\partial_\mu - m\right)\langle\psi(x)\rangle = 0,$$
in the sense of the Schwinger–Dyson equations. This version of the equation can also be acquired by taking the expectation value of the operator equation.

The Dirac equation also arises in describing the time evolution of a spinor field in classical field theory. Such a field theory would have the special linear group $\mathrm{SL}(2,\mathbb{C})$ as its spacetime symmetry group rather than the Lorentz group, since the latter does not admit spinor representations. This is in contrast to the quantum theory, which does admit spinor representations even when the spacetime symmetries are described by the Lorentz group. This is because the states in a Hilbert space are defined only up to a complex phase, so particles belong to projective representations rather than regular representations, with the projective representations of the Lorentz group being equivalent to regular representations of $\mathrm{SL}(2,\mathbb{C})$. Classical spinor fields do not arise in our universe because the Pauli exclusion principle prevents populating the field with a sufficient number of particles to reach the classical limit.

Properties

Lorentz transformations

The Lorentz group $\mathrm{SO}(1,3)$, describing the transformations between inertial reference frames, admits many different representations. A representation is a particular choice of matrices that faithfully represent the action of the group on some vector space, where the dimensionality of the matrices can differ between representations. For example, the Lorentz group can be represented by $4\times 4$ real matrices acting on the vector space $\mathbb{R}^4$, corresponding to how Lorentz transformations act on vectors or on spacetime. Another representation is a set of complex $4\times 4$ matrices acting on Dirac spinors in the complex vector space $\mathbb{C}^4$. A smaller representation is a set of $2\times 2$ complex matrices acting on Weyl spinors in the vector space $\mathbb{C}^2$.

Lie group elements can be generated using the corresponding Lie algebra, which, together with a Lie bracket, describes the tangent space of the group manifold around its identity element. The basis elements of this vector space are known as generators of the group. A particular group element is then acquired by exponentiating a corresponding tangent space vector. The generators of the Lorentz Lie algebra must satisfy certain commutation relations, known as the Lie bracket. The six generators can be packaged into an antisymmetric object $M^{\mu\nu}$ indexed by $\mu\nu$, with the bracket for the Lorentz algebra given by

$$[M^{\mu\nu}, M^{\rho\sigma}] = \eta^{\nu\rho}M^{\mu\sigma} - \eta^{\mu\rho}M^{\nu\sigma} + \eta^{\mu\sigma}M^{\nu\rho} - \eta^{\nu\sigma}M^{\mu\rho}.$$

This algebra admits numerous representations, where each generator is represented by a matrix, with each algebra representation generating a corresponding representation of the group. For example, the representation acting on real vectors in $\mathbb{R}^4$ is given by the six $4\times 4$ matrices $\mathcal{M}^{\rho\sigma}$, where

$$(\mathcal{M}^{\rho\sigma})^\mu{}_\nu = \eta^{\rho\mu}\,\delta^\sigma{}_\nu - \eta^{\sigma\mu}\,\delta^\rho{}_\nu.$$

The Lorentz transformation matrix can then be acquired from these generators through an exponentiation

$$\Lambda = \exp\!\left(\tfrac{1}{2}\Omega_{\rho\sigma}\mathcal{M}^{\rho\sigma}\right),$$

where $\Omega_{\rho\sigma}$ is an antisymmetric matrix encoding the six degrees of freedom of the Lorentz group used to specify the particular group element. These correspond to the three boosts and three rotations.

Another representation for the Lorentz algebra is the spinor representation, where the generators are given by

$$S^{\rho\sigma} = \tfrac{1}{4}\left[\gamma^\rho, \gamma^\sigma\right].$$

In this case the Lorentz group element, specified by $\Omega_{\rho\sigma}$, is given by

$$S[\Lambda] = \exp\!\left(\tfrac{1}{2}\Omega_{\rho\sigma}S^{\rho\sigma}\right).$$

The mapping $\Lambda \mapsto S[\Lambda]$ is not one-to-one, since there are two consistent choices of $S[\Lambda]$, differing by a sign, that give the same $\Lambda$. This is a consequence of the spinor representations being projective representations of the Lorentz group $\mathrm{SO}(1,3)$. Equivalently, they are regular representations of $\mathrm{Spin}(1,3)$, which is a double cover of the Lorentz group.
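The same two-to-one structure is easiest to see in the rotation subgroup, where $\mathrm{SU}(2)$ double covers $\mathrm{SO}(3)$. The sketch below (Python with NumPy/SciPy; an illustrative analogue, not taken from the article) exponentiates the vector and spinor generators of a rotation about the z-axis and shows that a $2\pi$ rotation returns vectors to themselves but multiplies spinors by $-1$.

```python
import numpy as np
from scipy.linalg import expm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def vector_rotation_z(theta):
    # SO(3) (vector) representation of a rotation about z
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def spinor_rotation_z(theta):
    # SU(2) (spinor) representation: exp(-i theta sigma_z / 2)
    return expm(-1j * theta * sigma_z / 2)

theta = 2 * np.pi
print(np.allclose(vector_rotation_z(theta), np.eye(3)))   # True: vectors return
print(np.allclose(spinor_rotation_z(theta), -np.eye(2)))  # True: spinors acquire -1
```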

Under a Lorentz transformation, spacetime coordinates transform under the vector representation, $x^\mu \to x'^\mu = \Lambda^\mu{}_\nu x^\nu$, while spinors transform under the spinor representation, $\psi(x) \to \psi'(x') = S[\Lambda]\psi(x)$.

The Dirac equation is a Lorentz covariant equation, meaning that it takes the same form in all inertial reference frames. That is, it takes the same form for a spinor $\psi(x)$ with coordinates $x^\mu$ as for the Lorentz transformed spinor $\psi'(x')$ in the Lorentz transformed coordinates $x'^\mu$,

$$\left(i\gamma^\mu\partial'_\mu - m\right)\psi'(x') = 0,$$

where $\partial'_\mu$ is the four-gradient for the new coordinates $x'^\mu$. Meanwhile, the Dirac action is Lorentz invariant, meaning that it is the same in all reference frames.

Symmetries

Dirac's theory is invariant under a global $\mathrm{U}(1)$ symmetry acting on the phase of the spinor

$$\psi(x) \to e^{i\alpha}\,\psi(x).$$

This has a corresponding conserved current that can be derived from the action using Noether's theorem, given by

$$j^\mu = \bar\psi\gamma^\mu\psi.$$

This symmetry is known as the vector symmetry because its current transforms as a vector under Lorentz transformations. Promoting this symmetry to a gauge symmetry gives rise to quantum electrodynamics.

In the massless limit, the Dirac equation has a second inequivalent $\mathrm{U}(1)$ symmetry, known as the axial symmetry, which acts on the spinors as

$$\psi(x) \to e^{i\alpha\gamma_5}\,\psi(x),$$

where $\gamma_5 = i\gamma^0\gamma^1\gamma^2\gamma^3$ is the chiral matrix. This arises because in the massless limit the Dirac equation reduces to a pair of Weyl equations. Each of these is invariant under a phase symmetry. These two symmetries can then be grouped into the vector symmetry, where both Weyl spinors transform by the same phase, and the axial symmetry, where they transform under opposite signs of the phase. The current corresponding to the axial symmetry is given by

$$j_5^\mu = \bar\psi\gamma^\mu\gamma_5\psi.$$

This transforms as a pseudovector, meaning that its spatial part is odd under parity transformations. Classically, the axial symmetry admits a well-formulated gauge theory, but at the quantum level it has a chiral anomaly that provides an obstruction towards gauging.

The spacetime symmetries of the Dirac action correspond to the Poincaré group, a combination of spacetime translations and the Lorentz group. Invariance under the four spacetime translations yields the Dirac stress-energy tensor as its four-currents

$$T^{\mu\nu} = i\bar\psi\gamma^\mu\partial^\nu\psi - \eta^{\mu\nu}\mathcal{L},$$

where the last term contains the Dirac Lagrangian $\mathcal{L} = \bar\psi(i\gamma^\mu\partial_\mu - m)\psi$, which vanishes on shell. Invariance under Lorentz transformations meanwhile yields a set of currents indexed by $\rho$ and $\sigma$, given by

$$(\mathcal{J}^\mu)^{\rho\sigma} = x^\rho T^{\mu\sigma} - x^\sigma T^{\mu\rho} - i\bar\psi\gamma^\mu S^{\rho\sigma}\psi,$$

where $S^{\rho\sigma}$ are the spinor representation generators of the Lorentz Lie algebra, used to define how spinors transform under Lorentz transformations.

Plane wave solutions

Acting on the Dirac equation with the operator $(i\gamma^\mu\partial_\mu + m)$ gives rise to the Klein–Gordon equation for each component of the spinor

$$\left(\partial_\mu\partial^\mu + m^2\right)\psi = 0.$$

As a result, any solution to the Dirac equation is also automatically a solution to the Klein–Gordon equation. Its solutions can therefore be written as a linear combination of plane waves.

The Dirac equation admits positive frequency plane wave solutions

$$\psi(x) = u(\mathbf{p})\,e^{-ip\cdot x},$$

with a positive energy given by $p^0 = E_{\mathbf{p}} = +\sqrt{\mathbf{p}^2 + m^2}$. It also admits negative frequency solutions taking the same form except with $p^0 = -E_{\mathbf{p}}$. It is more convenient to rewrite these negative frequency solutions by flipping the sign of the momentum to ensure that they have a positive energy, so that they take the form

$$\psi(x) = v(\mathbf{p})\,e^{+ip\cdot x}.$$

At the classical level these are positive and negative frequency solutions to a classical wave equation, but in the quantum theory they correspond to operators creating particles with spinor polarization $u(\mathbf{p})$ or annihilating antiparticles with spinor polarization $v(\mathbf{p})$. Both these spinor polarizations satisfy the momentum space Dirac equations

$$\left(\gamma^\mu p_\mu - m\right)u(\mathbf{p}) = 0, \qquad \left(\gamma^\mu p_\mu + m\right)v(\mathbf{p}) = 0.$$

Since these are simple matrix equations, they can be solved directly once an explicit representation for the gamma matrices is chosen. In the chiral representation the general solution is given by

$$u(\mathbf{p}) = \begin{pmatrix}\sqrt{p\cdot\sigma}\,\xi \\ \sqrt{p\cdot\bar\sigma}\,\xi\end{pmatrix}, \qquad v(\mathbf{p}) = \begin{pmatrix}\sqrt{p\cdot\sigma}\,\eta \\ -\sqrt{p\cdot\bar\sigma}\,\eta\end{pmatrix},$$

where $\sigma^\mu = (1, \sigma^i)$ and $\bar\sigma^\mu = (1, -\sigma^i)$, and where $\xi$ and $\eta$ are arbitrary complex 2-vectors, describing the two spin degrees of freedom for the particle and two for the antiparticle. In the massless limit, these spin states correspond to the possible helicity states that the massless fermions can have, either being left-handed or right-handed.
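These solutions can be checked numerically. The sketch below (Python with NumPy/SciPy; the mass, momentum, and 2-spinor $\xi$ are arbitrary illustrative values) builds $u(\mathbf{p})$ in the chiral representation and confirms that it satisfies $(\gamma^\mu p_\mu - m)u = 0$.

```python
import numpy as np
from scipy.linalg import sqrtm

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Chiral (Weyl) representation of the gamma matrices
gamma = [np.block([[Z2, I2], [I2, Z2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

m = 1.0
p_vec = np.array([0.3, -0.2, 0.5])           # arbitrary spatial momentum
E = np.sqrt(m**2 + p_vec @ p_vec)            # positive energy

# p.sigma = E - p.sigma_vec and p.sigma-bar = E + p.sigma_vec
p_dot_sigma = E * I2 - sum(pi * s for pi, s in zip(p_vec, sigma))
p_dot_sigmabar = E * I2 + sum(pi * s for pi, s in zip(p_vec, sigma))

xi = np.array([1.0, 0.0], dtype=complex)     # arbitrary 2-spinor
u = np.concatenate([sqrtm(p_dot_sigma) @ xi, sqrtm(p_dot_sigmabar) @ xi])

# slash(p) = gamma^mu p_mu = E gamma^0 - p^i gamma^i (mostly-minus metric)
pslash = E * gamma[0] - sum(pi * g for pi, g in zip(p_vec, gamma[1:]))
print(np.allclose(pslash @ u, m * u))        # True: (slash(p) - m) u = 0
```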

While the standard Dirac equation was originally derived in a $(3+1)$-dimensional spacetime, it can be directly generalized to arbitrary dimensions and metric signatures, where it takes the same covariant form. The crucial difference is that the gamma matrices must be changed to gamma matrices of the Clifford algebra appropriate to those dimensions and that metric signature, with the size of the Dirac spinor corresponding to the dimensionality of the gamma matrices. While the Dirac equation always exists, since every dimension admits Dirac spinors, the properties of these spinors and their relation to other spinor representations differ significantly across dimensions. Other differences include the absence of a chirality matrix in odd dimensions.

The equation can also be generalized from flat Minkowski spacetime to curved spacetime through the introduction of a spinor covariant derivative

$$D_\mu\psi = \left(\partial_\mu + \tfrac{1}{4}\omega_\mu{}^{ab}\gamma_a\gamma_b\right)\psi,$$

where $\omega_\mu{}^{ab}$ is the spin connection, which can be defined using the tetrad formalism. The Dirac equation in curved spacetime then takes the form

$$\left(i\gamma^a e_a{}^\mu D_\mu - m\right)\psi = 0,$$

where $e_a{}^\mu$ is the tetrad.

Adding self-interaction terms to the Dirac action gives rise to the nonlinear Dirac equation, which allows for the fermions to interact with themselves, such as in the Thirring model. Interactions between fermions can also be introduced through electromagnetic effects. In particular, the Breit equation describes multi-electron systems interacting electromagnetically to first order in perturbation theory. The two-body Dirac equation is a similar multi-body equation.

A geometric reformulation of the Dirac equation is known as the Dirac–Hestenes equation. In this formulation all the components of the Dirac equation have an explicit geometric interpretation. Another related geometric equation is the Dirac–Kähler equation, which is a geometric analogue of the Dirac equation that can be defined on any general pseudo-Riemannian manifold and which acts on differential forms. In the case of a flat manifold, it reduces to four copies of the Dirac equation. However, on curved manifolds this decomposition breaks down and the equation fundamentally differs. This equation is used in lattice field theory to describe the continuum limit of staggered fermions.

Weyl and Majorana equations

The Dirac spinor can be decomposed into a pair of Weyl spinors of opposite chirality, $\psi = (\psi_L, \psi_R)^T$. Under Lorentz transformations, $\psi_L$ transforms as a left-handed Weyl spinor and $\psi_R$ as a right-handed Weyl spinor. In the chiral representation of the gamma matrices, the Dirac equation reduces to the pair of equations for the Weyl spinors

$$i\bar\sigma^\mu\partial_\mu\psi_L = m\,\psi_R, \qquad i\sigma^\mu\partial_\mu\psi_R = m\,\psi_L.$$

In particular, in the massless limit the Weyl spinors decouple and the Dirac equation is equivalent to a pair of Weyl equations.

This decomposition has been proposed as an intuitive explanation of Zitterbewegung, as these massless components would propagate at the speed of light and move in opposite directions, since the helicity is the projection of the spin onto the direction of motion. Here the role of the mass is not to make the velocity less than the speed of light, but instead controls the average rate at which these reversals occur; specifically, the reversals can be modelled as a Poisson process.

A closely related equation is the Majorana equation, with this formally taking the same form as the Dirac equation except that it acts on Majorana spinors. These are spinors that satisfy the reality condition $\psi = \psi_c$, where $\psi_c = C\bar\psi^T$ is the charge conjugate spinor and $C$ is the charge-conjugation matrix. In higher dimensions, the Dirac equation has similar relations to the equations describing other spinor representations that arise in those dimensions.

Pauli equation

In the non-relativistic limit, the Dirac equation reduces to the Pauli equation, which, when coupled to electromagnetism, has the form

$$i\hbar\frac{\partial\psi}{\partial t} = \left[\frac{1}{2m}\bigl(\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}} - q\mathbf{A})\bigr)^2 + q\phi\right]\psi.$$

Here $\boldsymbol{\sigma}$ is the vector of Pauli matrices and $\hat{\mathbf{p}} = -i\hbar\boldsymbol{\nabla}$ is the momentum operator. The equation describes a fermion of charge $q$ coupled to the electromagnetic field through a magnetic vector potential $\mathbf{A}$ and an electric scalar potential $\phi$. The fermion is described through the two-component wave function $\psi = (\psi_\uparrow, \psi_\downarrow)^T$, where each component describes one of the two spin states. The Pauli equation is often used in quantum mechanics to describe phenomena where relativistic effects are negligible but the spin of the fermion is important. It can also be recast in a form which directly shows that the gyromagnetic ratio of the fermion described by the Dirac equation is exactly $2$. In quantum electrodynamics there are additional quantum corrections that modify this value, giving rise to a non-zero anomalous magnetic moment.
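A short worked step behind the $g = 2$ statement: using the Pauli-matrix identity $(\boldsymbol{\sigma}\cdot\mathbf{a})(\boldsymbol{\sigma}\cdot\mathbf{b}) = \mathbf{a}\cdot\mathbf{b} + i\boldsymbol{\sigma}\cdot(\mathbf{a}\times\mathbf{b})$ together with $(\hat{\mathbf{p}} - q\mathbf{A})\times(\hat{\mathbf{p}} - q\mathbf{A}) = iq\hbar\,\mathbf{B}$, the kinetic term expands as

$$\frac{1}{2m}\bigl(\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}} - q\mathbf{A})\bigr)^2 = \frac{(\hat{\mathbf{p}} - q\mathbf{A})^2}{2m} - \frac{q\hbar}{2m}\,\boldsymbol{\sigma}\cdot\mathbf{B},$$

so the spin couples to the magnetic field with magnetic moment $(q/m)\mathbf{S}$, i.e. a gyromagnetic ratio of exactly $2$.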

Gauge symmetry

Vector symmetry

The vector and axial symmetries of the Dirac action are both global symmetries, in that they act the same everywhere in spacetime. In classical field theory, the Lagrangian can always be modified in a way that elevates a global symmetry to a local symmetry, which can act differently at different spacetime locations. In the case of the vector symmetry, which corresponds to a global change of the spinor field by a phase, $\psi(x) \to e^{i\alpha}\psi(x)$, gauging would result in an action invariant under a local symmetry where the phase can take different values at different points, $\psi(x) \to e^{iq\alpha(x)}\psi(x)$. While symmetries can always be gauged in classical field theory, this may not always be possible in the full quantum theory due to various obstructions such as anomalies, which signal that the full quantum theory is not invariant under the local symmetry despite its classical Lagrangian being invariant. For example, the axial symmetry in the massless Dirac theory with one fermion is anomalous due to the chiral anomaly and cannot be gauged.

Elevating the vector symmetry to a local symmetry means that the original action is no longer invariant under the symmetry, due to the appearance of a term proportional to $\partial_\mu\alpha(x)$ arising from the kinetic term. Instead, a new field $A_\mu(x)$, known as a gauge field, must be introduced. It must also transform under the local symmetry as

$$A_\mu(x) \to A_\mu(x) - \partial_\mu\alpha(x),$$

where $q$ plays the role of the charge coupling the Dirac spinor to the gauge field. The Dirac action can then be made invariant under the local symmetry by replacing the derivative with a new gauge covariant derivative

$$D_\mu = \partial_\mu + iqA_\mu.$$

The Dirac action then takes the form

$$S = \int d^4x\; \bar\psi\left(i\gamma^\mu D_\mu - m\right)\psi.$$

This result can also be directly acquired through the Noether procedure, which is the general principle that a global symmetry can be gauged through the introduction of a term coupling the gauge field to the appropriate global symmetry current. Additionally introducing the kinetic term for the gauge field results in the action for quantum electrodynamics.
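As a consistency check of the construction above: under $\psi \to e^{iq\alpha(x)}\psi$ together with $A_\mu \to A_\mu - \partial_\mu\alpha$, the covariant derivative transforms with the same phase as the spinor,

$$D_\mu\psi \to \left(\partial_\mu + iq(A_\mu - \partial_\mu\alpha)\right)e^{iq\alpha}\psi = e^{iq\alpha}\,D_\mu\psi,$$

so the combination $\bar\psi\,i\gamma^\mu D_\mu\psi$ is invariant under the local symmetry.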

General symmetries

The symmetries that can be gauged can be greatly expanded by considering a theory with $N$ identical Dirac spinors labelled by a new index $n = 1, \dots, N$. Together these spinors can be considered as being part of a single object $\psi_{\alpha n}$ with $4N$ components, where $\alpha$ labels the four spin components and $n$ the different spinors. The largest global symmetry of this action is then given by the unitary group $\mathrm{U}(N)$.

Any continuous subgroup $G$ of $\mathrm{U}(N)$ can then be gauged. In particular, if one wishes to gauge a symmetry acting on all components, then the symmetry being gauged must admit an $N$-dimensional unitary representation acting on the spinors. That is, for every group element $U \in G$, there exists an $N$-dimensional matrix representation $R(U)$ such that

$$\psi \to R(U)\,\psi$$

forms a faithful representation of the group. Gauging the largest continuous subgroup, $\mathrm{U}(N)$, requires the spinors to transform in the fundamental or antifundamental representation. Gauging elevates the representation from being spacetime independent to being spacetime dependent, $R(U(x))$. It requires introducing a gauge field $A_\mu = A_\mu^a T^a$, formally the connection on the principal bundle, which necessarily transforms in the adjoint representation of the gauge group. The covariant derivative then takes the form

$$D_\mu = \partial_\mu + igA_\mu^a T^a,$$

where $T^a$ are the generators of the gauge group and $g$ is the gauge coupling.

One symmetry that frequently gets gauged is the special unitary group $\mathrm{SU}(N)$ symmetry. Spinors transforming in the fundamental representation transform as

$$\psi(x) \to U(x)\,\psi(x),$$

where $U(x)$ is an $N\times N$ unitary matrix with unit determinant corresponding to a particular group element. The gauge field is a matrix-valued gauge field $A_\mu = A_\mu^a T^a$ which transforms in the adjoint representation as

$$A_\mu \to U A_\mu U^\dagger + \frac{i}{g}\,(\partial_\mu U)\,U^\dagger.$$

The covariant derivative then takes the form

$$D_\mu = \partial_\mu + igA_\mu.$$

By also introducing the kinetic term for the gauge field, one constructs the action for quantum chromodynamics

$$S = \int d^4x \left[-\tfrac{1}{2}\operatorname{tr}\left(F_{\mu\nu}F^{\mu\nu}\right) + \bar\psi\left(i\gamma^\mu D_\mu - m\right)\psi\right],$$

where, in the gauge field kinetic term, the Yang–Mills field strength tensor is defined as

$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + ig\,[A_\mu, A_\nu].$$

The case of $\mathrm{SU}(3)$ describes the strong interactions of the quark sector of the Standard Model, with the gauge field corresponding to gluons and the Dirac spinors to the quarks. The case of $\mathrm{SU}(2)$ also plays a role in the Standard Model, describing the electroweak sector. The gauge fields in this case are the W bosons, while the Dirac spinors are leptons.

Mathematical optimization

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Mathematical_optimization
Graph of a surface given by z = f(x, y) = −(x² + y²) + 4. The global maximum at (x, y, z) = (0, 0, 4) is indicated by a blue dot.
Nelder–Mead minimum search of Simionescu's function. Simplex vertices are ordered by their values, with 1 having the lowest (best) value.

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics.

Optimization problems

Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:

  • An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set.
  • A problem with continuous variables is known as a continuous optimization, in which an optimal value from a continuous function must be found.

An optimization problem can be represented in the following way:

Given: a function f : A → ℝ from some set A to the real numbers
Sought: an element x0 ∈ A such that f(x0) ≤ f(x) for all x ∈ A ("minimization") or such that f(x0) ≥ f(x) for all x ∈ A ("maximization").

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework.

Since the following is valid:

f(x0) ≥ f(x) ⇔ (−f)(x0) ≤ (−f)(x),

it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too.
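This duality is routinely exploited in practice, since most numerical libraries expose only minimizers. A minimal sketch in Python (assuming SciPy is installed; the concave function is an arbitrary example):

```python
from scipy.optimize import minimize_scalar

def f(x):
    return -(x**2 + 4*x)    # concave function to maximize; peak at x = -2

# Maximize f by minimizing -f
res = minimize_scalar(lambda x: -f(x))
print(res.x, -f(res.x))     # approximately -2.0 and 4.0
```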

Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is always necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error.

Typically, A is some subset of the Euclidean space ℝⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions.

The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution.

In mathematics, conventional optimization problems are usually stated in terms of minimization.

A local minimum x* is defined as an element for which there exists some δ > 0 such that

for all x ∈ A where ‖x − x*‖ ≤ δ, the expression f(x*) ≤ f(x) holds;

that is to say, on some region around x* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly.

While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima.

A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem.

Notation

Optimization problems are often expressed with special notation. Here are some examples:

Minimum and maximum value of a function

Consider the following notation:

min_{x ∈ ℝ} (x² + 1)

This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers ℝ. The minimum value in this case is 1, occurring at x = 0.

Similarly, the notation

max_{x ∈ ℝ} 2x

asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined".

Optimal input arguments

Consider the following notation:

arg min_{x ≤ −1} (x² + 1),

or equivalently

arg min_{x ∈ (−∞,−1]} (x² + 1).

This represents the value (or values) of the argument x in the interval (−∞,−1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set.

Similarly,

arg max_{x ∈ [−5,5], y ∈ ℝ} x cos y,

or equivalently

arg max_{x, y} x cos y, subject to x ∈ [−5,5],

represents the {x, y} pair (or pairs) that maximizes (or maximize) the value of the objective function x cos y, with the added constraint that x lie in the interval [−5,5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form {5, 2kπ} and {−5, (2k + 1)π}, where k ranges over all integers.

Operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum.
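Both notations can be illustrated numerically. A minimal sketch in Python (assuming SciPy; the starting points are arbitrary, and a local solver is used, so only a nearby optimum is guaranteed):

```python
import numpy as np
from scipy.optimize import minimize

# arg min of x^2 + 1 subject to x <= -1: expect x = -1, minimum value 2
res = minimize(lambda x: x[0]**2 + 1, x0=[-2.0], bounds=[(None, -1.0)])
print(res.x, res.fun)        # approximately [-1.0] and 2.0

# arg max of x*cos(y) with x in [-5, 5]: minimize the negative instead
res2 = minimize(lambda v: -(v[0] * np.cos(v[1])), x0=[4.0, 0.5],
                bounds=[(-5.0, 5.0), (None, None)])
print(res2.x, -res2.fun)     # approximately [5.0, 0.0] and 5.0
```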

History

Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum.

The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and also John von Neumann and other researchers worked on the theoretical aspects of linear programming (like the theory of duality) around the same time.

Other notable researchers in mathematical optimization include the following:

Major subfields

  • Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as generalization of linear or convex quadratic programming.
    • Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded.
    • Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs.
    • Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.
    • Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.
    • Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program.
  • Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming.
  • Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming.
  • Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem.
  • Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it.
  • Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.
  • Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set.
  • Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
  • Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process.
  • Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
  • Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems.
  • Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).
    • Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints.
  • Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.
  • Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model.

In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):

Multi-objective optimization

Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created plotting weight against stiffness of the best designs is known as the Pareto frontier.
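Given a finite sample of candidate designs, the Pareto set can be extracted with a direct dominance test. A minimal sketch in Python (NumPy assumed; the design scores are made-up illustrative data, with both objectives to be minimized):

```python
import numpy as np

# Each row is a design scored on two objectives to minimize: (weight, compliance)
designs = np.array([[1.0, 9.0], [2.0, 6.0], [3.0, 4.0],
                    [4.0, 4.5], [5.0, 2.0], [6.0, 2.5]])

def pareto_mask(points):
    # A point is Pareto optimal if no other point is at least as good
    # in every objective and strictly better in at least one
    n = len(points)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                keep[i] = False
                break
    return keep

print(designs[pareto_mask(designs)])
# [[1. 9.] [2. 6.] [3. 4.] [5. 2.]] -- the sampled Pareto frontier
```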

A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal.

The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker.

Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering.

Multi-modal or global optimization

Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer.

Classical optimization techniques, due to their iterative approach, do not perform satisfactorily when used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm.

Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization, and simulated annealing.

Classification of critical points and extrema

Feasibility problem

The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal.

Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative.
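The slack-variable trick described above can be sketched with a general-purpose solver. In this hedged Python example (SciPy assumed; the two inequality constraints are arbitrary illustrative choices), the slack s is bounded below by zero and minimized; s reaching zero certifies that the returned x is feasible:

```python
import numpy as np
from scipy.optimize import minimize

# Seek a point satisfying g1(x) = x0 + x1 - 1 <= 0 and g2(x) = x0 - x1 <= 0
# by relaxing each constraint to g_i(x) <= s and minimizing the slack s.
def slack(z):                 # z = (x0, x1, s)
    return z[2]

constraints = [
    {"type": "ineq", "fun": lambda z: z[2] - (z[0] + z[1] - 1)},  # s - g1 >= 0
    {"type": "ineq", "fun": lambda z: z[2] - (z[0] - z[1])},      # s - g2 >= 0
]

z0 = [5.0, -5.0, 100.0]       # with enough slack, any starting point is feasible
res = minimize(slack, z0, bounds=[(None, None), (None, None), (0.0, None)],
               constraints=constraints)
x, s = res.x[:2], res.x[2]
print(x, s)                   # s is driven to 0, so x satisfies both constraints
```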

Existence

The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum.

Necessary conditions for optimality

One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions.

Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'.

Sufficient conditions for optimality

While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.

Sensitivity and continuity of optima

The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics.

The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.

Calculus of optimization

For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, such as those arising in the loss-function minimization of neural networks. Positive-negative momentum estimation has been proposed as a way to escape local minima and converge toward the global minimum of the objective function.

Further, critical points can be classified using the definiteness of the Hessian matrix: If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point.
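This classification is mechanical once the Hessian at a critical point is known. A minimal sketch in Python (NumPy assumed), using the saddle of f(x, y) = x² − y² at the origin as an illustrative example:

```python
import numpy as np

# Hessian of f(x, y) = x**2 - y**2, evaluated at the critical point (0, 0)
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigenvalues = np.linalg.eigvalsh(H)
if np.all(eigenvalues > 0):
    print("local minimum")
elif np.all(eigenvalues < 0):
    print("local maximum")
else:
    print("saddle point")   # printed here: eigenvalues are 2 and -2
```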

Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems.

When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods.

Global convergence

More generally, if the objective function is not quadratic, then many optimization methods use additional techniques to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points.

Computational optimization techniques

To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge).

Optimization algorithms

Iterative methods

The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high.

One major criterion for optimizers is just the number of required function evaluations, as this is often already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is on the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration the number of function calls is on the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself; a sketch of the evaluation count appears below.
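The N+1 count quoted above is easy to see for a one-sided finite-difference gradient. A minimal sketch in Python (NumPy assumed; the quadratic test function is an arbitrary example):

```python
import numpy as np

calls = 0
def f(x):                         # simple N-variable test function
    global calls
    calls += 1
    return float(np.sum(x**2))

def fd_gradient(func, x, h=1e-6):
    # One-sided finite differences: 1 base evaluation + N perturbed ones
    base = func(x)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (func(x + step) - base) / h
    return grad

x = np.ones(10)                   # N = 10
g = fd_gradient(f, x)
print(calls)                      # 11, i.e. N + 1 evaluations
print(np.allclose(g, 2 * x, atol=1e-4))  # matches the analytic gradient 2x
```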

  • Methods that evaluate Hessians (or approximate Hessians, using finite differences):
    • Newton's method
    • Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. Some versions can handle large-dimensional problems.
    • Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians.
  • Methods that evaluate gradients, or approximate gradients in some way (or even subgradients):
    • Coordinate descent methods: Algorithms which update a single coordinate in each iteration
    • Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite-precision computers.)
    • Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems; a minimal sketch of this method appears after this list.
    • Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient-projection methods are similar to conjugate-gradient methods.
    • Bundle method of descent: An iterative method for small to medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods).
    • Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods.
    • Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems).
    • Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000).
    • Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation.
  • Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used.
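
As promised in the gradient descent entry above, here is a minimal sketch of the method, assuming NumPy and a fixed step size; the quadratic objective and its hand-coded gradient are hypothetical choices for illustration.

import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    # Plain steepest descent with a fixed step size.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient vanishes
            break
        x = x - lr * g                # step along the steepest-descent direction
    return x

# Minimize f(x, y) = (x - 1)**2 + 2*(y + 2)**2; its gradient is hand-coded.
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
print(gradient_descent(grad, [0.0, 0.0]))  # approximately [1, -2]

In practice, the fixed step size lr would be replaced by a line search or a trust region, as described under "Global convergence" above.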

Heuristics

Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. Well-known heuristics include evolutionary and genetic algorithms, simulated annealing, tabu search, particle swarm optimization, and the Nelder–Mead simplex heuristic.
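
As an illustration, here is a minimal simulated-annealing sketch; the cooling schedule, neighbor distribution, and objective are all hypothetical choices, and practical implementations tune these carefully.

import math
import random

def simulated_annealing(f, x0, steps=10000, temp0=1.0):
    # Minimal simulated annealing for a one-dimensional objective.
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, steps + 1):
        temp = temp0 / k                   # simple cooling schedule
        x_new = x + random.gauss(0, 1)     # random neighbor of x
        fx_new = f(x_new)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature falls.
        if fx_new < fx or random.random() < math.exp((fx - fx_new) / temp):
            x, fx = x_new, fx_new
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

random.seed(0)
print(simulated_annealing(lambda x: math.sin(3 * x) + 0.1 * x * x, 5.0))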

Applications

Mechanics

Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, contact forces can be computed by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem.

Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.

This approach may be applied in cosmology and astrophysics.

Economics and finance

Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63.

In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics.

Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.

Electrical engineering

Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, and electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis.

Civil engineering

Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management, and schedule optimization.

Operations research

Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods.

Control engineering

Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled.

Geophysics

Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used.

Molecular modeling

Nonlinear optimization methods are widely used in conformational analysis.

Computational systems biology

Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways.
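
For instance, maximal-yield calculations of this kind can be posed as linear programs over steady-state flux constraints. A minimal sketch with scipy.optimize.linprog, using an invented one-metabolite toy network (the stoichiometry is purely illustrative, not real biochemistry):

from scipy.optimize import linprog

# Steady state requires S @ v = 0: flux v1 produces the metabolite,
# fluxes v2 and v3 consume it.
S = [[1, -1, -1]]
c = [0, 0, -1]                  # linprog minimizes, so maximize v3 via -v3
bounds = [(0, 10)] * 3          # lower and upper bounds on each flux
res = linprog(c, A_eq=S, b_eq=[0], bounds=bounds)
print(res.x)                    # optimal flux distribution, here [10, 0, 10]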

Machine learning

Solvers

Central limit theorem

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Central_limit_theorem   ...