
Sunday, May 3, 2015

Schrödinger equation


From Wikipedia, the free encyclopedia


In quantum mechanics, the Schrödinger equation is a partial differential equation that describes how the quantum state of a physical system changes with time. It was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger.[1]

In classical mechanics, the equation of motion is Newton's second law (F = ma), used to predict mathematically what the system will do at any time, given the initial conditions of the system. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").[2]:1–2

The concept of a wavefunction is a fundamental postulate of quantum mechanics. Schrödinger's equation is also often presented as a separate postulate, but some authors[3]:Chapter 3 assert it can be derived from symmetry principles. Generally, "derivations" of the Schrödinger equation demonstrate its mathematical plausibility for describing wave–particle duality.

In the standard interpretation of quantum mechanics, the wave function is the most complete description that can be given of a physical system. Solutions to Schrödinger's equation describe not only molecular, atomic, and subatomic systems, but also macroscopic systems, possibly even the whole universe.[4]:292ff The Schrödinger equation, in its most general form, is consistent with both classical mechanics and special relativity, but the original formulation by Schrödinger himself was non-relativistic.

The Schrödinger equation is not the only way to make predictions in quantum mechanics; other formulations can be used, such as Werner Heisenberg's matrix mechanics and Richard Feynman's path integral formulation.

Equation

Time-dependent equation

The form of the Schrödinger equation depends on the physical situation (see below for special cases). The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:[5]:143
Time-dependent Schrödinger equation (general)
i \hbar \frac{\partial}{\partial t}\Psi = \hat H \Psi
where i is the imaginary unit, ħ is the Planck constant divided by 2π, the symbol ∂/∂t indicates a partial derivative with respect to time t, Ψ (the Greek letter Psi) is the wave function of the quantum system, and Ĥ is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation).

A wave function that satisfies the non-relativistic Schrödinger equation with V = 0. In other words, this corresponds to a particle traveling freely through empty space. The real part of the wave function is plotted here.

The most famous example is the non-relativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field; see the Pauli equation):
Time-dependent Schrödinger equation
(single non-relativistic particle)
i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},t) = \left [ \frac{-\hbar^2}{2\mu}\nabla^2 + V(\mathbf{r},t)\right ] \Psi(\mathbf{r},t)
where μ is the particle's "reduced mass", V is its potential energy, ∇² is the Laplacian (a differential operator), and Ψ is the wave function (more precisely, in this context, it is called the "position-space wave function"). In plain language, it means "total energy equals kinetic energy plus potential energy", but the terms take unfamiliar forms for reasons explained below.

Given the particular differential operators involved, this is a linear partial differential equation. It is also a diffusion equation, but unlike the heat equation, this one is also a wave equation given the imaginary unit present in the transient term.

The term "Schrödinger equation" can refer to both the general equation (first box above), or the specific nonrelativistic version (second box above and variations thereof). The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in various complicated expressions for the Hamiltonian. The specific nonrelativistic version is a simplified approximation to reality, which is quite accurate in many situations, but very inaccurate in others (see relativistic quantum mechanics and relativistic quantum field theory).

To apply the Schrödinger equation, the Hamiltonian operator is set up for the system, accounting for the kinetic and potential energy of the particles constituting the system, then inserted into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system.

Time-independent equation


Each of these three rows is a wave function which satisfies the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wave function. Right: The probability distribution of finding the particle with this wave function at a given position. The top two rows are examples of stationary states, which correspond to standing waves. The bottom row is an example of a state which is not a stationary state. The right column illustrates why stationary states are called "stationary".

The time-independent Schrödinger equation predicts that wave functions can form standing waves, called stationary states (also called "orbitals", as in atomic orbitals or molecular orbitals). These states are important in their own right, and if the stationary states are classified and understood, then it becomes easier to solve the time-dependent Schrödinger equation for any state. The time-independent Schrödinger equation is the equation describing stationary states. (It is only used when the Hamiltonian itself is not dependent on time. In general, the wave function still has a time dependency.)
Time-independent Schrödinger equation (general)
E\Psi=\hat H \Psi
In words, the equation states:
When the Hamiltonian operator acts on a certain wave function Ψ, and the result is proportional to the same wave function Ψ, then Ψ is a stationary state, and the proportionality constant, E, is the energy of the state Ψ.
The time-independent Schrödinger equation is discussed further below. In linear algebra terminology, this equation is an eigenvalue equation.

As before, the most famous manifestation is the non-relativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field):
Time-independent Schrödinger equation (single non-relativistic particle)
E \Psi(\mathbf{r}) = \left[ \frac{-\hbar^2}{2\mu}\nabla^2 + V(\mathbf{r}) \right] \Psi(\mathbf{r})
with definitions as above.

Implications

The Schrödinger equation, and its solutions, introduced a breakthrough in thinking about physics. Schrödinger's equation was the first of its type, and solutions led to consequences that were very unusual and unexpected for the time.

Total, kinetic, and potential energy

The overall form of the equation is not unusual or unexpected as it uses the principle of the conservation of energy.
The terms of the nonrelativistic Schrödinger equation can be interpreted as total energy of the system, equal to the system kinetic energy plus the system potential energy. In this respect, it is just the same as in classical physics.

Quantization

The Schrödinger equation predicts that if certain properties of a system are measured, the result may be quantized, meaning that only specific discrete values can occur. One example is energy quantization: the energy of an electron in an atom is always one of the quantized energy levels, a fact discovered via atomic spectroscopy. (Energy quantization is discussed below.) Another example is quantization of angular momentum. This was an assumption in the earlier Bohr model of the atom, but it is a prediction of the Schrödinger equation.

Another result of the Schrödinger equation is that not every measurement gives a quantized result in quantum mechanics. For example, position, momentum, time, and (in some situations) energy can have any value across a continuous range.[6]:165–167

Measurement and uncertainty

In classical mechanics, a particle has, at every moment, an exact position and an exact momentum. These values change deterministically as the particle moves according to Newton's laws. In quantum mechanics, particles do not have exactly determined properties, and when they are measured, the result is randomly drawn from a probability distribution. The Schrödinger equation predicts what the probability distributions are, but fundamentally cannot predict the exact result of each measurement.
The Heisenberg uncertainty principle is the statement of the inherent measurement uncertainty in quantum mechanics. It states that the more precisely a particle's position is known, the less precisely its momentum is known, and vice versa.

The Schrödinger equation describes the (deterministic) evolution of the wave function of a particle. However, even if the wave function is known exactly, the result of a specific measurement on the wave function is uncertain.

Quantum tunneling

Quantum tunneling through a barrier. A particle coming from the left does not have enough energy to climb the barrier. However, it can sometimes "tunnel" to the other side.

In classical physics, when a ball is rolled slowly up a large hill, it will come to a stop and roll back, because it doesn't have enough energy to get over the top of the hill to the other side. However, the Schrödinger equation predicts that there is a small probability that the ball will get to the other side of the hill, even if it has too little energy to reach the top. This is called quantum tunneling. It is related to the distribution of energy: Although the ball's assumed position seems to be on one side of the hill, there is a chance of finding it on the other side.

Particles as waves

A double slit experiment showing the accumulation of electrons on a screen as time passes.

The nonrelativistic Schrödinger equation is a type of partial differential equation called a wave equation. Therefore it is often said particles can exhibit behavior usually attributed to waves. In most modern interpretations this description is reversed – the quantum state, i.e. wave, is the only genuine physical reality, and under the appropriate conditions it can show features of particle-like behavior.

Two-slit diffraction is a famous example of the strange behaviors that waves regularly display, that are not intuitively associated with particles. The overlapping waves from the two slits cancel each other out in some locations, and reinforce each other in other locations, causing a complex pattern to emerge. Intuitively, one would not expect this pattern from firing a single particle at the slits, because the particle should pass through one slit or the other, not a complex overlap of both.

However, since the Schrödinger equation is a wave equation, a single particle fired through a double-slit does show this same pattern (figure on right). Note: The experiment must be repeated many times for the complex pattern to emerge. The appearance of the pattern proves that each electron passes through both slits simultaneously.[7][8][9] Although this is counterintuitive, the prediction is correct; in particular, electron diffraction and neutron diffraction are well understood and widely used in science and engineering.

Related to diffraction, particles also display superposition and interference.

The superposition property allows the particle to be in a quantum superposition of two or more states with different classical properties at the same time. For example, a particle can have several different energies at the same time, and can be in several different locations at the same time. In the above example, a particle can pass through two slits at the same time. This superposition is still a single quantum state, as shown by the interference effects, even though that conflicts with classical intuition.

Interpretation of the wave function

The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. Interpretations of quantum mechanics address questions such as what the relation is between the wave function, the underlying reality, and the results of experimental measurements.
An important aspect is the relationship between the Schrödinger equation and wavefunction collapse. In the oldest Copenhagen interpretation, particles follow the Schrödinger equation except during wavefunction collapse, during which they behave entirely differently. The advent of quantum decoherence theory allowed alternative approaches (such as the Everett many-worlds interpretation and consistent histories), wherein the Schrödinger equation is always satisfied, and wavefunction collapse should be explained as a consequence of the Schrödinger equation.

Historical background and development

Following Max Planck's quantization of light (see black body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in special relativity, it followed that the momentum p of a photon is inversely proportional to its wavelength λ, or proportional to its wavenumber k.
p = \frac{h}{\lambda} = \hbar k
where h is Planck's constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.[10] These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum L according to:
 L = n{h \over 2\pi} = n\hbar.
According to de Broglie the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit:
n \lambda = 2 \pi r.\,
This approach essentially confined the electron wave in one dimension, along a circular orbit of radius r.

In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation.[11]
Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation, and solve for its energy eigenvalues for the hydrogen atom. Unfortunately the paper was rejected by the Physical Review, as recounted by Kamen.[12]

Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William R. Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system — the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.[13] A modern version of his reasoning is reproduced below. The equation he found is:[14]
i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},\,t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},\,t) + V(\mathbf{r})\Psi(\mathbf{r},\,t).
However, by that time, Arnold Sommerfeld had refined the Bohr model with relativistic corrections.[15][16]
Schrödinger used the relativistic energy momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):
\left(E + {e^2\over r} \right)^2 \psi(x) = - \nabla^2\psi(x) + m^2 \psi(x).
He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin in December 1925.[17]

While at the cabin, Schrödinger decided that his earlier non-relativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl[18]:3) Schrödinger showed that his non-relativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926.[18]:1[19] In the equation, Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ(x, t), moving in a potential well V, created by the proton. This computation accurately reproduced the energy levels of the Bohr model. In a paper, Schrödinger himself explained this equation as follows:


This 1926 paper was enthusiastically endorsed by Einstein, who saw the matter-waves as an intuitive depiction of nature, as opposed to Heisenberg's matrix mechanics, which he considered overly formal.[21]

The Schrödinger equation details the behavior of Ψ but says nothing of its nature. Schrödinger tried to interpret it as a charge density in his fourth paper, but he was unsuccessful.[22]:219 In 1926, just a few days after Schrödinger's fourth and final paper was published, Max Born successfully interpreted Ψ as the probability amplitude, whose absolute square is equal to probability density.[22]:220 Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities—much like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory— and never reconciled with the Copenhagen interpretation.[23]

Louis de Broglie in his later years proposed a real-valued wave function connected to the complex wave function by a proportionality constant and developed the de Broglie–Bohm theory.

The wave equation for particles

The Schrödinger equation is a wave equation, since the solutions are functions which describe wave-like motions. Wave equations in physics can normally be derived from other physical laws – the wave equation for mechanical vibrations on strings and in matter can be derived from Newton's laws – where the wave function represents the displacement of matter, and electromagnetic waves from Maxwell's equations, where the wave functions are electric and magnetic fields. The basis for Schrödinger's equation, on the other hand, is the energy of the system and a separate postulate of quantum mechanics: the wave function is a description of the system.[24] The Schrödinger equation is therefore a new concept in itself; as Feynman put it:
 

Where did we get that (equation) from? Nowhere. It is not possible to derive it from anything you know. It came out of the mind of Schrödinger.[25]

The foundation of the equation is structured to be a linear differential equation based on classical energy conservation, and consistent with the De Broglie relations. The solution is the wave function ψ, which contains all the information that can be known about the system. In the Copenhagen interpretation, the modulus of ψ is related to the probability the particles are in some spatial configuration at some instant of time. Solving the equation for ψ can be used to predict how the particles will behave under the influence of the specified potential and with each other.

The Schrödinger equation was developed principally from the de Broglie hypothesis as a wave equation that would describe particles,[26] and it can be constructed as shown informally in the following sections.[27] For a more rigorous description of Schrödinger's equation, see also.[28]

Consistency with energy conservation

The total energy E of a particle is the sum of kinetic energy T and potential energy V; this sum is also the usual expression for the Hamiltonian H in classical mechanics:
E = T + V =H \,\!
Explicitly, for a particle in one dimension with position x, mass m and momentum p, and potential energy V which generally varies with position and time t:
 E = \frac{p^2}{2m}+V(x,t)=H.
For three dimensions, the position vector r and momentum vector p must be used:
E = \frac{\mathbf{p}\cdot\mathbf{p}}{2m}+V(\mathbf{r},t)=H
This formalism can be extended to any fixed number of particles: the total energy of the system is then the total kinetic energies of the particles, plus the total potential energy, again the Hamiltonian. However, there can be interactions between the particles (an N-body problem), so the potential energy V can change as the spatial configuration of particles changes, and possibly with time. The potential energy, in general, is not the sum of the separate potential energies for each particle, it is a function of all the spatial positions of the particles. Explicitly:
E=\sum_{n=1}^N \frac{\mathbf{p}_n\cdot\mathbf{p}_n}{2m_n} + V(\mathbf{r}_1,\mathbf{r}_2\cdots\mathbf{r}_N,t) = H \,\!

Linearity

The simplest wavefunction is a plane wave of the form:
 \Psi(\mathbf{r},t) = A e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)} \,\!
where A is the amplitude, k the wavevector, and ω the angular frequency of the plane wave. In general, physical situations are not purely described by plane waves, so for generality the superposition principle is required; any wave can be made by a superposition of sinusoidal plane waves. So if the equation is linear, a linear combination of plane waves is also an allowed solution. Hence a necessary and separate requirement is that the Schrödinger equation is a linear differential equation.

For discrete k the sum is a superposition of plane waves:
 \Psi(\mathbf{r},t) = \sum_{n=1}^\infty A_n e^{i(\mathbf{k}_n\cdot\mathbf{r}-\omega_n t)} \,\!
for some real amplitude coefficients An, and for continuous k the sum becomes an integral, the Fourier transform of a momentum space wavefunction:[29]
 \Psi(\mathbf{r},t) = \frac{1}{(\sqrt{2\pi})^3}\int\Phi(\mathbf{k})e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}d^3\mathbf{k} \,\!
where d3k = dkxdkydkz is the differential volume element in k-space, and the integrals are taken over all k-space. The momentum wavefunction Φ(k) arises in the integrand since the position and momentum space wavefunctions are Fourier transforms of each other.
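The correspondence between the position-space and momentum-space descriptions can be illustrated numerically. The following Python sketch is a minimal illustration, assuming ħ = 1, an arbitrary Gaussian momentum amplitude Φ(k) and a one-dimensional grid chosen for the example; it builds Ψ(x, 0) as a superposition of plane waves and checks that the norm carries over (Parseval's theorem).

# Minimal illustrative sketch (assumptions: hbar = 1, Gaussian Phi(k), 1-D grids chosen
# for the example): build Psi(x,0) = (2*pi)^(-1/2) * Integral Phi(k) exp(i k x) dk numerically.
import numpy as np

k = np.linspace(-20.0, 20.0, 2001)              # wavenumber grid
dk = k[1] - k[0]
k0, sigma_k = 5.0, 1.0                          # centre and width of the packet (arbitrary)
Phi = np.exp(-(k - k0)**2 / (2 * sigma_k**2))   # momentum-space amplitude
Phi /= np.sqrt(np.sum(np.abs(Phi)**2) * dk)     # normalize: Integral |Phi|^2 dk = 1

x = np.linspace(-10.0, 10.0, 1001)
dx = x[1] - x[0]
# Riemann-sum approximation of the Fourier integral, evaluated for every x at once
Psi = (2 * np.pi)**-0.5 * (Phi[None, :] * np.exp(1j * k[None, :] * x[:, None])).sum(axis=1) * dk

print("norm of Psi(x):", np.sum(np.abs(Psi)**2) * dx)   # close to 1, by Parseval's theorem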

Consistency with the De Broglie relations


Diagrammatic summary of the quantities related to the wavefunction, as used in de Broglie's hypothesis and the development of the Schrödinger equation.[26]

Einstein's light quanta hypothesis (1905) states that the energy E of a photon is proportional to the frequency ν (or angular frequency, ω = 2πν) of the corresponding quantum wavepacket of light:
E = h\nu = \hbar \omega \,\!
Likewise De Broglie's hypothesis (1924) states that any particle can be associated with a wave, and that the momentum p of the particle is inversely proportional to the wavelength λ of such a wave (or proportional to the wavenumber, k = 2π/λ), in one dimension, by:
p = \frac{h}{\lambda} =  \hbar k\;,
while in three dimensions, wavelength λ is related to the magnitude of the wavevector k:
\mathbf{p} = \hbar \mathbf{k}\,,\quad |\mathbf{k}| = \frac{2\pi}{\lambda} \,.
The Planck–Einstein and de Broglie relations illuminate the deep connections between energy and time, and between space and momentum, and express wave–particle duality. In practice, natural units with ħ = 1 are often used, since the de Broglie relations then reduce to identities, allowing momentum and wavenumber, energy and frequency, to be used interchangeably and reducing the number of distinct dimensions among related quantities. For familiarity, SI units are still used in this article.

Schrödinger's insight,[citation needed] late in 1925, was to express the phase of a plane wave as a complex phase factor using these relations:
\Psi = Ae^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)} = Ae^{i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar}
and to realize that the first-order partial derivatives of the wave function are, with respect to space,
 \nabla\Psi = \dfrac{i}{\hbar}\mathbf{p}Ae^{i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar} = \dfrac{i}{\hbar}\mathbf{p}\Psi
and, with respect to time,
 \dfrac{\partial \Psi}{\partial t} = -\dfrac{i E}{\hbar} Ae^{i(\mathbf{p}\cdot\mathbf{r}-Et)/\hbar} = -\dfrac{i E}{\hbar} \Psi
Another postulate of quantum mechanics is that all observables are represented by linear Hermitian operators which act on the wavefunction, and the eigenvalues of the operator are the values the observable takes. The previous derivatives are consistent with the energy operator, corresponding to the time derivative,
\hat{E} \Psi = i\hbar\dfrac{\partial}{\partial t}\Psi = E\Psi
where E are the energy eigenvalues, and the momentum operator, corresponding to the spatial derivatives (the gradient ),
\hat{\mathbf{p}} \Psi = -i\hbar\nabla \Psi = \mathbf{p} \Psi
where p is a vector of the momentum eigenvalues. In the above, the "hats" ( ^ ) indicate these observables are operators, not simply ordinary numbers or vectors. The energy and momentum operators are differential operators, while the potential energy function V is just a multiplicative factor.
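These operator relations can be verified symbolically. The sketch below is a small illustration, assuming a one-dimensional plane wave and the symbol names shown; it applies −iħ ∂/∂x and iħ ∂/∂t to Ψ = A e^{i(px − Et)/ħ} and recovers the eigenvalues p and E.

# Small symbolic check (assumptions: 1-D plane wave, symbolic constants as named below).
import sympy as sp

x, t, p, E, hbar, A = sp.symbols('x t p E hbar A', positive=True)
Psi = A * sp.exp(sp.I * (p * x - E * t) / hbar)

p_action = -sp.I * hbar * sp.diff(Psi, x)   # momentum operator -i*hbar*d/dx acting on Psi
E_action = sp.I * hbar * sp.diff(Psi, t)    # energy operator    i*hbar*d/dt acting on Psi

print(sp.simplify(p_action / Psi))          # prints p
print(sp.simplify(E_action / Psi))          # prints E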

Substituting the energy and momentum operators into the classical energy conservation equation obtains the operator:
E= \dfrac{\mathbf{p}\cdot\mathbf{p}}{2m}+V \quad \rightarrow \quad \hat{E} = \dfrac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m}  + V
so in terms of derivatives with respect to time and space, acting this operator on the wavefunction Ψ immediately led Schrödinger to his equation:[citation needed]
i\hbar\dfrac{\partial \Psi}{\partial t}= -\dfrac{\hbar^2}{2m}\nabla^2\Psi +V\Psi
Wave–particle duality can be assessed from these equations as follows. The kinetic energy T is related to the square of momentum p. As the particle's momentum increases, the kinetic energy increases more rapidly, but since the wavenumber |k| increases the wavelength λ decreases. In terms of ordinary scalar and vector quantities (not operators):
 \mathbf{p}\cdot\mathbf{p} \propto \mathbf{k}\cdot\mathbf{k} \propto T \propto \dfrac{1}{\lambda^2}
The kinetic energy is also proportional to the second spatial derivatives, so it is also proportional to the magnitude of the curvature of the wave, in terms of operators:
 \hat{T} \Psi = \frac{-\hbar^2}{2m}\nabla\cdot\nabla  \Psi \, \propto \, \nabla^2 \Psi \,.
As the curvature increases, the amplitude of the wave alternates between positive and negative more rapidly, and also shortens the wavelength. So the inverse relation between momentum and wavelength is consistent with the energy the particle has, and so the energy of the particle has a connection to a wave, all in the same mathematical formulation.[26]

Wave and particle motion

 
Increasing levels of wavepacket localization, meaning the particle has a more localized position. In the limit ħ → 0, the particle's position and momentum become known exactly; this is equivalent to the classical particle.
Schrödinger required that a wave packet solution near position r with wavevector near k will move along the trajectory determined by classical mechanics for times short enough for the spread in k (and hence in velocity) not to substantially increase the spread in r. Since, for a given spread in k, the spread in velocity is proportional to Planck's constant ħ, it is sometimes said that in the limit as ħ approaches zero, the equations of classical mechanics are restored from quantum mechanics.[30] Great care is required in how that limit is taken, and in what cases.

The short-wavelength limit is equivalent to ħ tending to zero, because this is the limiting case of increasing the wave packet localization to the definite position of the particle (see the images above). Using the Heisenberg uncertainty principle for position and momentum, the product of the uncertainties in position and momentum becomes zero as ħ → 0:
 \sigma(x) \sigma(p_x) \geqslant \frac{\hbar}{2} \quad \rightarrow \quad \sigma(x) \sigma(p_x) \geqslant 0 \,\!
where σ denotes the (root mean square) measurement uncertainty in x and px (and similarly for the y and z directions), which implies that position and momentum can both be known to arbitrary precision in this limit.
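The bound itself can be checked numerically for a concrete state. The sketch below is illustrative only, assuming ħ = 1 and a Gaussian wavepacket of width a = 1 on a finite grid; for a Gaussian the product σ(x)σ(px) saturates the bound at ħ/2.

# Illustrative numerical check (assumptions: hbar = 1, Gaussian packet of width a = 1).
import numpy as np

hbar, a = 1.0, 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = (np.pi * a**2)**-0.25 * np.exp(-x**2 / (2 * a**2))   # normalized Gaussian

prob = np.abs(psi)**2
mean_x = np.sum(x * prob) * dx
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)

dpsi = np.gradient(psi, dx)
d2psi = np.gradient(dpsi, dx)
mean_p = np.real(np.sum(np.conj(psi) * (-1j * hbar) * dpsi) * dx)
mean_p2 = np.real(np.sum(np.conj(psi) * (-hbar**2) * d2psi) * dx)
sigma_p = np.sqrt(mean_p2 - mean_p**2)

print(sigma_x * sigma_p, "vs hbar/2 =", hbar / 2)   # about 0.5, up to discretization error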
The Schrödinger equation in its general form
 i\hbar \frac{\partial}{\partial t} \Psi\left(\mathbf{r},t\right) = \hat{H} \Psi\left(\mathbf{r},t\right) \,\!
is closely related to the Hamilton–Jacobi equation (HJE)
 \frac{\partial}{\partial t} S(q_i,t) = H\left(q_i,\frac{\partial S}{\partial q_i},t \right) \,\!
where S is action and H is the Hamiltonian function (not operator). Here the generalized coordinates qi for i = 1, 2, 3 (used in the context of the HJE) can be set to the position in Cartesian coordinates as r = (q1, q2, q3) = (x, y, z).[30]
Substituting
 \Psi = \sqrt{\rho(\mathbf{r},t)} e^{iS(\mathbf{r},t)/\hbar}\,\!
where ρ is the probability density, into the Schrödinger equation and then taking the limit ħ → 0 in the resulting equation, yields the Hamilton–Jacobi equation.
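A brief worked version of this limit, added here as an illustration and following the standard argument rather than a derivation given in this article: substituting Ψ = √ρ e^{iS/ħ} into the single-particle equation and collecting the real part gives
 -\frac{\partial S}{\partial t} = \frac{|\nabla S|^2}{2m} + V - \frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}
The only ħ-dependent term is the last one (sometimes called the quantum potential); letting ħ → 0 removes it and leaves the classical Hamilton–Jacobi equation for S.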

The implications are:
  • The motion of a particle, described by a (short-wavelength) wave packet solution to the Schrödinger equation, is also described by the Hamilton–Jacobi equation of motion.
  • The Schrödinger equation includes the wavefunction, so its wave packet solution implies that the position of a (quantum) particle is fuzzily spread out in wave fronts. By contrast, the Hamilton–Jacobi equation applies to a (classical) particle of definite position and momentum; the position and momentum at all times (the trajectory) are deterministic and can be simultaneously known.

Non-relativistic quantum mechanics

The quantum mechanics of particles without accounting for the effects of special relativity, for example particles propagating at speeds much less than light, is known as non-relativistic quantum mechanics. Following are several forms of Schrödinger's equation in this context for different situations: time independence and dependence, one and three spatial dimensions, and one and N particles.

In actuality, the particles constituting the system do not have the numerical labels used in theory. The language of mathematics forces us to label the positions of particles one way or another, otherwise there would be confusion between symbols representing which variables are for which particle.[28]

Time independent

If the Hamiltonian is not an explicit function of time, the equation is separable into a product of spatial and temporal parts. In general, the wavefunction takes the form:
\Psi(\text{space coords},t)=\psi(\text{space coords})\tau(t)\,.
where ψ(space coords) is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and τ(t) is a function of time only.

Substituting this form for Ψ into the Schrödinger equation for the relevant number of particles in the relevant number of dimensions, and solving by separation of variables, implies that the general solution of the time-dependent equation has the form:[14]
 \Psi(\text{space coords},t) = \psi(\text{space coords}) e^{-i{E t/\hbar}} \,.
Since the time-dependent phase factor is always the same, only the spatial part needs to be solved for in time-independent problems. Additionally, the energy operator Ê = iħ∂/∂t can always be replaced by the energy eigenvalue E; thus the time-independent Schrödinger equation is an eigenvalue equation for the Hamiltonian operator:[5]:143ff
\hat{H} \psi = E \psi
This is true for any number of particles in any number of dimensions (in a time independent potential). This case describes the standing wave solutions of the time-dependent equation, which are the states with definite energy (instead of a probability distribution of different energies). In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels.
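The last point can be made concrete with a small numerical illustration. The sketch below assumes ħ = m = 1 and uses the particle-in-a-box eigenfunctions ψn(x) = √(2/L) sin(nπx/L) with energies En = n²π²ħ²/(2mL²); each stationary state alone has a time-independent density, whereas an equal superposition of the two lowest states has a density that sloshes back and forth at the beat frequency (E2 − E1)/ħ.

# Illustrative sketch (assumptions: hbar = m = L = 1, particle-in-a-box eigenstates).
import numpy as np

hbar = m = L = 1.0
x = np.linspace(0, L, 501)
dx = x[1] - x[0]

def psi_n(n, x):                       # box eigenfunctions sqrt(2/L) sin(n pi x / L)
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def E_n(n):                            # box energies n^2 pi^2 hbar^2 / (2 m L^2)
    return (n * np.pi * hbar)**2 / (2 * m * L**2)

def Psi(x, t):                         # equal-weight superposition of n = 1 and n = 2
    return (psi_n(1, x) * np.exp(-1j * E_n(1) * t / hbar)
            + psi_n(2, x) * np.exp(-1j * E_n(2) * t / hbar)) / np.sqrt(2)

for t in (0.0, 0.2, 0.4):
    density = np.abs(Psi(x, t))**2
    print(f"t = {t:.1f}:  <x> = {np.sum(x * density) * dx:.3f}")   # mean position oscillates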

For bound states, the energy eigenvalues from this equation form a discrete spectrum of values, so the energy is quantized. More specifically, the energy eigenstates form a basis: any wavefunction may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.

One-dimensional examples

For a particle in one dimension, the Hamiltonian is:
 \hat{H} = \frac{\hat{p}^2}{2m} + V(x) \,, \quad \hat{p} = -i\hbar \frac{d}{d x}
and substituting this into the general Schrödinger equation gives:
 -\frac{\hbar^2}{2m}\frac{d^2}{d x^2}\psi(x) + V(x)\psi(x) = E\psi(x)
This is the only case in which the Schrödinger equation is an ordinary differential equation rather than a partial differential equation. The general solutions are always of the form:
 \Psi(x,t)=\psi(x) e^{-iEt/\hbar} \, .
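When this ordinary differential equation cannot be solved in closed form, it can be treated numerically. The following is a minimal finite-difference sketch, an illustration only, assuming ħ = m = 1 and a harmonic potential V(x) = x²/2 chosen so that the answer is known: the Hamiltonian is represented as a matrix with the second derivative replaced by a central difference, and diagonalizing the matrix returns the lowest eigenenergies.

# Minimal finite-difference sketch (assumptions: hbar = m = 1, harmonic potential V = x^2/2).
import numpy as np

N, x_max = 1000, 10.0
x = np.linspace(-x_max, x_max, N)
dx = x[1] - x[0]

V = 0.5 * x**2                                       # example potential (harmonic oscillator)
# central finite-difference approximation of d^2/dx^2 as a tridiagonal matrix
laplacian = (np.diag(np.full(N - 1, 1.0), -1)
             - 2.0 * np.eye(N)
             + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -0.5 * laplacian + np.diag(V)                    # H = -(hbar^2/2m) d^2/dx^2 + V(x)

energies, states = np.linalg.eigh(H)
print(energies[:4])    # approximately 0.5, 1.5, 2.5, 3.5, i.e. hbar*omega*(n + 1/2)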
For N particles in one dimension, the Hamiltonian is:
 \hat{H} = \sum_{n=1}^{N}\frac{\hat{p}_n^2}{2m_n} + V(x_1,x_2,\cdots x_N) \,,\quad \hat{p}_n = -i\hbar \frac{\partial}{\partial x_n}
where the position of particle n is xn. The corresponding Schrödinger equation is:
 -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\frac{\partial^2}{\partial x_n^2}\psi(x_1,x_2,\cdots x_N) + V(x_1,x_2,\cdots x_N)\psi(x_1,x_2,\cdots x_N) = E\psi(x_1,x_2,\cdots x_N) \, .
so the general solutions have the form:
 \Psi(x_1,x_2,\cdots x_N,t) = e^{-iEt/\hbar}\psi(x_1,x_2\cdots x_N)
For non-interacting distinguishable particles,[31] the potential of the system only influences each particle separately, so the total potential energy is the sum of potential energies for each particle:
 V(x_1,x_2,\cdots x_N) = \sum_{n=1}^N V(x_n) \, .
and the wavefunction can be written as a product of the wavefunctions for each particle:
 \Psi(x_1,x_2,\cdots x_N,t) = e^{-i{E t/\hbar}}\prod_{n=1}^N\psi(x_n) \, ,
For non-interacting identical particles, the potential is still a sum, but the wavefunction is a bit more complicated: it is a sum over the permutations of products of the separate wavefunctions, to account for particle exchange. In general, for interacting particles, the above decompositions are not possible.

Free particle

For no potential, V = 0, so the particle is free and the equation reads:[5]:151ff
 - E \psi = \frac{\hbar^2}{2m}{d^2 \psi \over d x^2}\,
which has oscillatory solutions for E > 0 (the Cn are arbitrary constants):
\psi_E(x) = C_1 e^{i\sqrt{2mE/\hbar^2}\,x} + C_2 e^{-i\sqrt{2mE/\hbar^2}\,x}\,
and exponential solutions for E < 0
\psi_{-|E|}(x) = C_1 e^{\sqrt{2m|E|/\hbar^2}\,x} + C_2 e^{-\sqrt{2m|E|/\hbar^2}\,x}.\,
The exponentially growing solutions have an infinite norm, and are not physical. They are not allowed in a finite volume with periodic or fixed boundary conditions.

Constant potential

Animation of a de Broglie wave incident on a barrier.
For a constant potential, V = V0, the solution is oscillatory for E > V0 and exponential for E < V0, corresponding to energies that are allowed or disallowed in classical mechanics. Oscillatory solutions have a classically allowed energy and correspond to actual classical motions, while the exponential solutions have a disallowed energy and describe a small amount of quantum bleeding into the classically disallowed region, due to quantum tunneling. If the potential V0 grows to infinity, the motion is classically confined to a finite region. Viewed far enough away, every solution is reduced to an exponential; the condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed energies.[29]

Harmonic oscillator


A harmonic oscillator in classical mechanics (A–B) and quantum mechanics (C–H). In (A–B), a ball, attached to a spring, oscillates back and forth. (C–H) are six solutions to the Schrödinger Equation for this situation. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. Stationary states, or energy eigenstates, which are solutions to the time-independent Schrödinger Equation, are shown in C,D,E,F, but not G or H.

The Schrödinger equation for this situation is
 E\psi = -\frac{\hbar^2}{2m}\frac{d^2}{d x^2}\psi + \frac{1}{2}m\omega^2x^2\psi
It is a notable quantum system to solve, since the solutions are exact (though complicated, in terms of Hermite polynomials), and it can describe or at least approximate a wide variety of other systems, including vibrating atoms, molecules,[32] and atoms or ions in lattices,[33] as well as other potentials near their equilibrium points. It is also the basis of perturbation methods in quantum mechanics.

There is a family of solutions – in the position basis they are
  \psi_n(x) = \sqrt{\frac{1}{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{- \frac{m\omega x^2}{2 \hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}} x \right)
where n = 0,1,2,..., and the functions Hn are the Hermite polynomials.
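These eigenfunctions are straightforward to evaluate with standard numerical libraries. The sketch below is an illustration, assuming ħ = m = ω = 1 and using the physicists' Hermite polynomials provided by NumPy; it evaluates the first few ψn from the formula above and confirms that they are orthonormal on a grid.

# Illustrative check (assumptions: hbar = m = omega = 1, finite grid).
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

hbar = m = omega = 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def psi(n, x):
    xi = np.sqrt(m * omega / hbar) * x
    coeffs = np.zeros(n + 1); coeffs[n] = 1.0           # selects H_n in hermval
    norm = (m * omega / (pi * hbar))**0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * np.exp(-xi**2 / 2) * hermval(xi, coeffs)

# overlap matrix <psi_a | psi_b>; should be close to the 4x4 identity
overlaps = np.array([[np.sum(psi(a, x) * psi(b, x)) * dx for b in range(4)] for a in range(4)])
print(np.round(overlaps, 6))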

Three-dimensional examples

The extension from one dimension to three dimensions is straightforward: all position and momentum operators are replaced by their three-dimensional expressions, and the partial derivative with respect to space is replaced by the gradient operator.

The Hamiltonian for one particle in three dimensions is:
 \hat{H} = \frac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m} + V(\mathbf{r}) \,, \quad \hat{\mathbf{p}} = -i\hbar \nabla
generating the equation:
 -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\psi(\mathbf{r}) = E\psi(\mathbf{r})
with stationary state solutions of the form:
 \Psi(\mathbf{r},t) = \psi(\mathbf{r}) e^{-iEt/\hbar}
where the position of the particle is r. Two useful coordinate systems for solving the Schrödinger equation are Cartesian coordinates so that r = (x, y, z) and spherical polar coordinates so that r = (r, θ, φ), although other orthogonal coordinates are useful for solving the equation for systems with certain geometric symmetries.

For N particles in three dimensions, the Hamiltonian is:
 \hat{H} = \sum_{n=1}^{N}\frac{\hat{\mathbf{p}}_n\cdot\hat{\mathbf{p}}_n}{2m_n} + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N) \,,\quad \hat{\mathbf{p}}_n = -i\hbar \nabla_n
where the position of particle n is rn and the gradient operators are partial derivatives with respect to the particle's position coordinates. In Cartesian coordinates, for particle n, the position vector is rn = (xn, yn, zn) while the gradient and Laplacian operator are respectively:
\nabla_n = \mathbf{e}_x \frac{\partial}{\partial x_n} + \mathbf{e}_y\frac{\partial}{\partial y_n} + \mathbf{e}_z\frac{\partial}{\partial z_n}\,,\quad \nabla_n^2 = \nabla_n\cdot\nabla_n = \frac{\partial^2}{{\partial x_n}^2} + \frac{\partial^2}{{\partial y_n}^2} + \frac{\partial^2}{{\partial z_n}^2}
The Schrödinger equation is:
 -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\nabla_n^2\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N) + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N)\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N) = E\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N)
with stationary state solutions:
 \Psi(\mathbf{r}_1,\mathbf{r}_2\cdots \mathbf{r}_N,t) = e^{-iEt/\hbar}\psi(\mathbf{r}_1,\mathbf{r}_2\cdots \mathbf{r}_N)
Again, for non-interacting distinguishable particles the potential is the sum of particle potentials
 V(\mathbf{r}_1,\mathbf{r}_2,\cdots \mathbf{r}_N) = \sum_{n=1}^N V(\mathbf{r}_n)
and the wavefunction is a product of the particle wavefunctions
 \Psi(\mathbf{r}_1,\mathbf{r}_2\cdots \mathbf{r}_N,t) = e^{-i{E t/\hbar}}\prod_{n=1}^N\psi(\mathbf{r}_n) \, .
For non-interacting identical particles, the potential is a sum but the wavefunction is a sum over permutations of products. The previous two equations do not apply to interacting particles.

Following are examples where exact solutions are known.

Hydrogen atom

This form of the Schrödinger equation can be applied to the hydrogen atom:[24][26]
 E \psi = -\frac{\hbar^2}{2\mu}\nabla^2\psi - \frac{e^2}{4\pi\epsilon_0 r}\psi
where e is the electron charge, r is the position of the electron (r = |r| is the magnitude of the position), the potential term is due to the Coulomb interaction, wherein ε0 is the electric constant (permittivity of free space) and
 \mu = \frac{m_em_p}{m_e+m_p}
is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass mp and the electron of mass me. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common centre of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.

The wavefunction for hydrogen is a function of the electron's coordinates, and in fact can be separated into functions of each coordinate.[34] Usually this is done in spherical polar coordinates:
 \psi(r,\theta,\phi) = R(r)Y_\ell^m(\theta, \phi) = R(r)\Theta(\theta)\Phi(\phi)
where R are radial functions and Y_\ell^m(\theta, \phi) are spherical harmonics of degree ℓ and order m. This is the only atom for which the Schrödinger equation has been solved exactly; multi-electron atoms require approximate methods. The family of solutions is:[35]
 \psi_{n\ell m}(r,\theta,\phi) = \sqrt {{\left (  \frac{2}{n a_0} \right )}^3\frac{(n-\ell-1)!}{2n[(n+\ell)!]} } e^{- r/na_0} \left(\frac{2r}{na_0}\right)^{\ell} L_{n-\ell-1}^{2\ell+1}\left(\frac{2r}{na_0}\right) \cdot Y_{\ell}^{m}(\theta, \phi )
where:
 
\begin{align} 
n & = 1,2,3, \dots \\
\ell & = 0,1,2, \dots, n-1 \\
m & = -\ell,\dots,\ell \\
\end{align}
NB: generalized Laguerre polynomials are defined differently by different authors—see main article on them and the hydrogen atom.
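As a check on the formula and on the Laguerre convention being used, the following sketch, an illustration assuming the Bohr radius a0 = 1 and a finite radial grid, evaluates the radial part R_nl with SciPy's generalized Laguerre polynomials (which follow the same convention as the normalization above) and confirms that the integral of |R_nl|² r² dr equals 1 for a few states.

# Illustrative check (assumptions: a0 = 1, finite radial grid; SciPy's genlaguerre uses the
# same generalized-Laguerre convention as the normalization in the formula above).
import numpy as np
from scipy.special import genlaguerre
from math import factorial

a0 = 1.0
r = np.linspace(1e-6, 60.0, 20000)
dr = r[1] - r[0]

def R(n, l, r):
    rho = 2.0 * r / (n * a0)
    norm = np.sqrt((2.0 / (n * a0))**3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

for (n, l) in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    norm = np.sum(R(n, l, r)**2 * r**2) * dr
    print(f"n={n}, l={l}:  integral of |R|^2 r^2 dr = {norm:.4f}")   # each close to 1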

Two-electron atoms or ions

The equation for any two-electron system, such as the neutral helium atom (He, Z = 2), the negative hydrogen ion (H−, Z = 1), or the positive lithium ion (Li+, Z = 3), is:[27]
 E\psi = -\hbar^2\left[\frac{1}{2\mu}\left(\nabla_1^2 +\nabla_2^2 \right) + \frac{1}{M}\nabla_1\cdot\nabla_2\right] \psi + \frac{e^2}{4\pi\epsilon_0}\left[ \frac{1}{r_{12}} -Z\left( \frac{1}{r_1}+\frac{1}{r_2} \right) \right] \psi
where r1 is the position of one electron (r1 = |r1| is its magnitude), r2 is the position of the other electron (r2 = |r2| is the magnitude), r12 = |r12| is the magnitude of the separation between them given by
 |\mathbf{r}_{12}| = |\mathbf{r}_2 - \mathbf{r}_1 | \,\!
μ is again the two-body reduced mass of an electron with respect to the nucleus of mass M, so this time
 \mu = \frac{m_e M}{m_e+M} \,\!
and Z is the atomic number for the element (not a quantum number).

The cross-term involving the gradient operators for both electrons,
\frac{1}{M}\nabla_1\cdot\nabla_2\,\!
is known as the mass polarization term, which arises due to the motion of atomic nuclei. The wavefunction is a function of the two electrons' positions:
 \psi = \psi(\mathbf{r}_1,\mathbf{r}_2).
There is no closed form solution for this equation.

Time dependent

This is the equation of motion for the quantum state. In the most general form, it is written:[5]:143ff
i \hbar \frac{\partial}{\partial t}\Psi = \hat H \Psi.
and the solution, the wavefunction, is a function of all the particle coordinates of the system and time. Following are specific cases.

For one particle in one dimension, the Hamiltonian
 \hat{H} = \frac{\hat{p}^2}{2m} + V(x,t) \,,\quad \hat{p} = -i\hbar \frac{\partial}{\partial x}
generates the equation:
 i\hbar\frac{\partial}{\partial t}\Psi(x,t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\Psi(x,t) + V(x,t)\Psi(x,t)
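Such time-dependent equations are commonly integrated numerically. The following sketch is illustrative only, assuming ħ = m = 1, a free particle (V = 0) and a Gaussian initial wavepacket, and it uses the Crank–Nicolson scheme, a standard method not discussed in this article; the scheme is unitary, so the norm of Ψ is preserved as the packet propagates and spreads.

# Illustrative Crank-Nicolson sketch (assumptions: hbar = m = 1, V = 0, Gaussian packet).
import numpy as np

hbar = m = 1.0
N, x_max, dt = 800, 40.0, 0.01
x = np.linspace(-x_max, x_max, N)
dx = x[1] - x[0]

V = np.zeros(N)                                          # free particle assumed
lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -(hbar**2) / (2 * m) * lap + np.diag(V)

# Crank-Nicolson: (1 + i dt H / 2 hbar) Psi_new = (1 - i dt H / 2 hbar) Psi_old
A = np.eye(N) + 1j * dt * H / (2 * hbar)
B = np.eye(N) - 1j * dt * H / (2 * hbar)
U = np.linalg.solve(A, B)                                # one-step propagator

psi = np.exp(-(x + 10)**2 / 4.0 + 1j * 2.0 * x)          # packet centred at x = -10, k = 2
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for step in range(500):                                  # propagate to t = 5
    psi = U @ psi

print("norm:", np.sum(np.abs(psi)**2) * dx)                        # stays 1 (unitarity)
print("mean position:", np.real(np.sum(x * np.abs(psi)**2) * dx))  # drifted from -10 toward 0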
For N particles in one dimension, the Hamiltonian is:
 \hat{H} = \sum_{n=1}^{N}\frac{\hat{p}_n^2}{2m_n} + V(x_1,x_2,\cdots x_N,t) \,,\quad \hat{p}_n = -i\hbar \frac{\partial}{\partial x_n}
where the position of particle n is xn, generating the equation:
 i\hbar\frac{\partial}{\partial t}\Psi(x_1,x_2\cdots x_N,t) = -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\frac{\partial^2}{\partial x_n^2}\Psi(x_1,x_2\cdots x_N,t) + V(x_1,x_2\cdots x_N,t)\Psi(x_1,x_2\cdots x_N,t) \, .
For one particle in three dimensions, the Hamiltonian is:
 \hat{H} = \frac{\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}}{2m} + V(\mathbf{r},t) \,,\quad \hat{\mathbf{p}} = -i\hbar \nabla
generating the equation:
 i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t) + V(\mathbf{r},t)\Psi(\mathbf{r},t)
For N particles in three dimensions, the Hamiltonian is:
  \hat{H} = \sum_{n=1}^{N}\frac{\hat{\mathbf{p}}_n\cdot\hat{\mathbf{p}}_n}{2m_n} + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t)\,,\quad \hat{\mathbf{p}}_n = -i\hbar \nabla_n
where the position of particle n is rn, generating the equation:[5]:141
 i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t) = -\frac{\hbar^2}{2}\sum_{n=1}^{N}\frac{1}{m_n}\nabla_n^2\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t) + V(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t)\Psi(\mathbf{r}_1,\mathbf{r}_2,\cdots\mathbf{r}_N,t)
This last equation is posed in a space of very high dimension, so the solutions are not easy to visualize.

Solution methods

Properties

The Schrödinger equation has the following properties: some are useful, but there are shortcomings. Ultimately, these properties arise from the Hamiltonian used, and solutions to the equation.

Linearity

In the development above, the Schrödinger equation was made to be linear for generality, though this has other implications. If two wave functions ψ1 and ψ2 are solutions, then so is any linear combination of the two:
\displaystyle \psi = a\psi_1 + b \psi_2
where a and b are any complex numbers (the sum can be extended for any number of wavefunctions). This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over all single-state solutions achievable. For example, consider a wave function Ψ(x, t) that is a product of two functions, one time-independent and one time-dependent. If the states of definite energy found using the time-independent Schrödinger equation are given by ψEn(x) with amplitudes An, and the time-dependent phase factor is given by
e^{{-iE_n t}/\hbar},
then a valid general solution is
\displaystyle \Psi(x,t) = \sum\limits_{n} A_n \psi_{E_n}(x) e^{{-iE_n t}/\hbar}.
Additionally, the ability to scale solutions allows one to solve for a wave function without normalizing it first. If one has a set of normalized solutions ψn, then
\displaystyle \Psi = \sum\limits_{n} A_n \psi_n
can be normalized by ensuring that
\displaystyle \sum\limits_{n}|A_n|^2 = 1.
This is much more convenient than having to verify that
\displaystyle \int\limits_{-\infty}^{\infty}|\Psi(x)|^2\,dx = \int\limits_{-\infty}^{\infty}\Psi(x)\Psi^{*}(x)\,dx = 1.
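The equivalence of the two normalization conditions is easy to demonstrate numerically. The small sketch below is illustrative, assuming particle-in-a-box eigenstates with L = 1 and an arbitrary set of coefficients An; once the coefficients satisfy Σ|An|² = 1, the integral of |Ψ|² automatically equals 1.

# Small illustrative check (assumptions: particle-in-a-box states, L = 1, arbitrary A_n).
import numpy as np

L = 1.0
x = np.linspace(0, L, 2001)
dx = x[1] - x[0]
psi = lambda n: np.sqrt(2 / L) * np.sin(n * np.pi * x / L)   # orthonormal box states

A = np.array([3.0, 4.0j, 1.0])                               # unnormalized coefficients
A = A / np.sqrt(np.sum(np.abs(A)**2))                        # enforce sum |A_n|^2 = 1

Psi = sum(A[k] * psi(k + 1) for k in range(len(A)))
print(np.sum(np.abs(Psi)**2) * dx)                           # the integral of |Psi|^2 is already ~1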

Real energy eigenstates

For the time-independent equation, an additional feature of linearity follows: if two wave functions ψ1 and ψ2 are solutions to the time-independent equation with the same energy E, then so is any linear combination:
 \hat H (a\psi_1 + b \psi_2 ) = a \hat H \psi_1 + b \hat H \psi_2 = E (a \psi_1 + b\psi_2).
Two different solutions with the same energy are called degenerate.[29]

In an arbitrary potential, if a wave function ψ solves the time-independent equation, so does its complex conjugate, denoted ψ*. By taking linear combinations, the real and imaginary parts of ψ are each solutions. If there is no degeneracy they can only differ by a factor.

In the time-dependent equation, complex conjugate waves move in opposite directions. If Ψ(x, t) is one solution, then so is Ψ*(x, –t). The symmetry of complex conjugation is called time-reversal symmetry.

Space and time derivatives


Continuity of the wavefunction and its first spatial derivative (in the x direction, y and z coordinates not shown), at some time t.

The Schrödinger equation is first order in time and second in space, which describes the time evolution of a quantum state (meaning it determines the future amplitude from the present).

Explicitly for one particle in 3-dimensional Cartesian coordinates – the equation is
i\hbar{\partial \Psi \over \partial t} = - {\hbar^2\over 2m} \left ( {\partial^2 \Psi \over \partial x^2} + {\partial^2 \Psi \over \partial y^2} + {\partial^2 \Psi \over \partial z^2} \right ) + V(x,y,z,t)\Psi.\,\!
The first-order time partial derivative implies that the initial value (at t = 0) of the wavefunction
 \Psi(x,y,z,0) \,\!
can be freely specified as the initial condition. Likewise, the second-order derivatives with respect to space imply that the wavefunction and its first-order spatial derivatives
 \begin{align} 
& \Psi(x_b,y_b,z_b,t) \\
& \frac{\partial}{\partial x}\Psi(x_b,y_b,z_b,t) \quad \frac{\partial}{\partial y}\Psi(x_b,y_b,z_b,t) \quad \frac{\partial}{\partial z}\Psi(x_b,y_b,z_b,t) 
\end{align} \,\!
are the boundary data at a given set of points, where xb, yb, zb describe the boundary b (derivatives are evaluated at the boundaries). Typically there are one or two boundaries, such as for the step potential and the particle in a box respectively.

As the first order derivatives are arbitrary, the wavefunction can be a continuously differentiable function of space, since at any boundary the gradient of the wavefunction can be matched.

By contrast, wave equations in physics are usually second order in time; notable examples are the family of classical wave equations and the quantum Klein–Gordon equation.

Local conservation of probability

The Schrödinger equation is consistent with probability conservation. Multiplying the Schrödinger equation by the complex-conjugate wavefunction Ψ*, multiplying the complex conjugate of the Schrödinger equation by Ψ, and subtracting the two, gives the continuity equation for probability:[36]
{ \partial \over \partial t} \rho\left(\mathbf{r},t\right) + \nabla \cdot \mathbf{j} = 0,
where
\rho=|\Psi|^2=\Psi^*(\mathbf{r},t)\Psi(\mathbf{r},t)\,\!
is the probability density (probability per unit volume, * denotes complex conjugate), and
 \mathbf{j} = {1 \over 2m} \left( \Psi^*\hat{\mathbf{p}}\Psi  - \Psi\hat{\mathbf{p}}\Psi^* \right)\,\!
is the probability current (flow per unit area).

Hence predictions from the Schrödinger equation do not violate probability conservation.
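For a concrete case, the current can be evaluated directly from its definition. The sketch below is illustrative, assuming ħ = m = 1 and a plane wave Ψ = A e^{ikx} with arbitrary A and k; the computed j reduces to |A|²ħk/m, that is, the probability density multiplied by the classical velocity.

# Illustrative example (assumptions: hbar = m = 1, plane wave with arbitrary A and k).
import numpy as np

hbar = m = 1.0
A, k = 0.3, 2.5
x = np.linspace(0, 10, 2001)
dx = x[1] - x[0]
Psi = A * np.exp(1j * k * x)

dPsi = np.gradient(Psi, dx)
j = (hbar / (2j * m)) * (np.conj(Psi) * dPsi - Psi * np.conj(dPsi))   # probability current

print(np.real(j[1000]), "vs |A|^2 * hbar * k / m =", abs(A)**2 * hbar * k / m)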

Positive energy

If the potential is bounded from below, meaning there is a minimum value of potential energy, the eigenfunctions of the Schrödinger equation have energy which is also bounded from below. This can be seen most easily by using the variational principle, as follows. (See also below).

For any linear operator Â bounded from below, the eigenvector with the smallest eigenvalue is the vector ψ that minimizes the quantity
 \langle \psi |\hat{A}|\psi \rangle
over all ψ which are normalized.[36] In this way, the smallest eigenvalue is expressed through the variational principle. For the Schrödinger Hamiltonian Ĥ bounded from below, the smallest eigenvalue is called the ground state energy. That energy is the minimum value of
\langle \psi|\hat{H}|\psi\rangle = \int \psi^*(\mathbf{r}) \left[ - \frac{\hbar^2}{2m} \nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\psi(\mathbf{r})\right] d^3\mathbf{r} = \int \left[ \frac{\hbar^2}{2m}|\nabla\psi|^2 + V(\mathbf{r}) |\psi|^2 \right] d^3\mathbf{r} = \langle \hat{H}\rangle
(using integration by parts). Due to the complex modulus of ψ squared (which is positive definite), the right-hand side is always greater than the lowest value of V(x). In particular, the ground state energy is positive when V(x) is everywhere positive.
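This variational characterization also suggests a practical way of estimating the ground-state energy: evaluate ⟨H⟩ for a family of trial wavefunctions and minimize. The sketch below is an illustration, assuming ħ = m = ω = 1, the harmonic potential V = x²/2 and normalized Gaussian trial functions of varying width; the minimum reproduces the exact ground-state energy ħω/2 = 0.5.

# Illustrative variational sketch (assumptions: hbar = m = omega = 1, V = x^2/2,
# Gaussian trial wavefunctions). Uses the integration-by-parts form of <H> shown above.
import numpy as np

x = np.linspace(-15, 15, 6001)
dx = x[1] - x[0]
V = 0.5 * x**2

def energy(a):                                         # a is the trial width parameter
    psi = (np.pi * a**2)**-0.25 * np.exp(-x**2 / (2 * a**2))   # normalized Gaussian
    grad = np.gradient(psi, dx)
    return np.sum(0.5 * grad**2 + V * psi**2) * dx     # (hbar^2/2m)|grad psi|^2 + V |psi|^2

widths = np.linspace(0.5, 2.0, 151)
E = [energy(a) for a in widths]
best = int(np.argmin(E))
print("best width:", widths[best], " minimal <H>:", E[best])   # about 1.0 and 0.5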

For potentials which are bounded below and are not infinite over a region, there is a ground state which minimizes the integral above. This lowest-energy wavefunction is real and positive definite, meaning the wavefunction can increase and decrease, but is positive for all positions. It cannot change sign: if it did, one could replace it by its absolute value, leaving the potential-energy contribution unchanged, and then smooth out the kink at the former sign change; this rapidly reduces the gradient contribution to the integral, and hence the kinetic energy, while the potential energy changes only slowly, so the total energy would fall below the supposed minimum, which is a contradiction. Hence the ground-state wavefunction can be taken to be positive definite.

The lack of sign changes also shows that the ground state is nondegenerate: if there were two ground states with common energy E, not proportional to each other, some linear combination of the two would also be a ground state yet would vanish (change sign) somewhere, contradicting the absence of sign changes.

Analytic continuation to diffusion

The above properties (positive definiteness of energy) allow the analytic continuation of the Schrödinger equation to be identified as a stochastic process. This can be interpreted as the Huygens–Fresnel principle applied to De Broglie waves; the spreading wavefronts are diffusive probability amplitudes.[36]
For a free particle (not subject to a potential) in a random walk, substituting τ = it into the time-dependent Schrödinger equation gives:[37]
 {\partial \over \partial \tau} X(\mathbf{r},\tau) = \frac{\hbar}{2m} \nabla ^2 X(\mathbf{r},\tau) \, , \quad X(\mathbf{r},\tau) = \Psi(\mathbf{r},\tau/i)
which has the same form as the diffusion equation, with diffusion coefficient ħ/2m.

Relativistic quantum mechanics

Relativistic quantum mechanics is obtained where quantum mechanics and special relativity simultaneously apply. In general, one wishes to build relativistic wave equations from the relativistic energy–momentum relation
E^2 = (pc)^2 + (m_0c^2)^2 \, ,
instead of classical energy equations. The Klein–Gordon equation and the Dirac equation are two such equations.
The Klein–Gordon equation was the first such equation to be obtained, even before the non-relativistic one, and applies to massive spinless particles. The Dirac equation arose from taking the "square root" of the Klein–Gordon equation by factorizing the entire relativistic wave operator into a product of two operators – one of these is the operator for the entire Dirac equation.

The general form of the Schrödinger equation remains true in relativity, but the Hamiltonian is less obvious. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is:
\hat{H}_{\text{Dirac}}= \gamma^0 \left[c  \boldsymbol{\gamma}\cdot\left(\hat{\mathbf{p}} - q \mathbf{A}\right) + mc^2 + \gamma^0q \phi \right]\,,
in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1/2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle.

For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group, in which certain representations can be used to fix the equation for a free particle of given spin (and mass).

In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields.

Quantum field theory

The general equation is also valid and used in quantum field theory, both in relativistic and non-relativistic situations. However, the solution ψ is no longer interpreted as a "wave", but rather as a "field".

Quantum electrodynamics


From Wikipedia, the free encyclopedia

In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics.

In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction.

In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[1]:Ch1

History


The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who (during the 1920s) was first able to compute the coefficient of spontaneous emission of an atom.[2]

Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and an elegant formulation of quantum electrodynamics due to Enrico Fermi,[3] physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck,[4] and Victor Weisskopf,[5] in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer.[6] At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics.

Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the energy levels of the hydrogen atom, now known as the Lamb shift,[7] and of the magnetic moment of the electron.[8] These experiments unequivocally exposed discrepancies which the theory was unable to explain.

A first indication of a possible way out was given by Hans Bethe. In 1947, while he was traveling by train to reach Schenectady from New York,[9] after giving a talk at the conference at Shelter Island on the subject, Bethe completed the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford.[10] Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization.

Feynman (center) and Oppenheimer (right) at Los Alamos.

Based on Bethe's intuition and fundamental papers on the subject by Sin-Itiro Tomonaga,[11] Julian Schwinger,[12][13] Richard Feynman[14][15][16] and Freeman Dyson,[17][18] it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the Nobel Prize in Physics in 1965 for their work in this area.[19] Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent.[17] Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus".[1]:128

QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1975 work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, and Tom Kibble,[20][21] Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.

Feynman's view of quantum electrodynamics

Introduction

Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The strange theory of light and matter,[1] a classic non-mathematical exposition of QED from the point of view articulated below.

The key components of Feynman's presentation of QED are three basic actions.[1]:85
  • A photon goes from one place and time to another place and time.
  • An electron goes from one place and time to another place and time.
  • An electron emits or absorbs a photon at a certain place and time.
These actions are represented in a form of visual shorthand by the three basic elements of Feynman diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron.

It is important not to over-interpret these diagrams. Nothing is implied about how a particle gets from one point to another. The diagrams do not imply that the particles are moving in straight or curved lines. They do not imply that the particles are moving with fixed speeds. The fact that the photon is often represented, by convention, by a wavy line and not a straight one does not imply that it is thought that it is more wavelike than is an electron. The images are just symbols to represent the actions above: photons and electrons do, somehow, move from point to point and electrons, somehow, emit and absorb photons. We do not know how these things happen, but the theory tells us about the probabilities of these things happening.

As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the magnitude of the total probability amplitude. If a photon moves from one place and time—in shorthand, A—to another place and time—in shorthand, B—the associated quantity is written in Feynman's shorthand as P(A to B). The similar quantity for an electron moving from C to D is written E(C to D). The quantity which tells us about the probability amplitude for the emission or absorption of a photon he calls 'j'. This is related to, but not the same as, the measured electron charge 'e'.[1]:91

QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks, and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while making the assumption that the square of the total of the probability amplitudes mentioned above (P(A to B), E(A to B) and 'j') acts just like our everyday probability. (A simplification made in Feynman's book.) Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman.

The basic rules of probability amplitudes that will be used are that a) if an event can happen in a variety of different ways then its probability amplitude is the sum of the probability amplitudes of the possible ways and b) if a process involves a number of independent sub-processes then its probability amplitude is the product of the component probability amplitudes.[1]:93
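Since these two rules do most of the work in what follows, here is a toy sketch of them in code (the numerical amplitude values are invented for illustration only and have no physical meaning):

def amplitude_of_alternatives(amplitudes):
    # Rule a): an event that can happen in several distinct ways gets the sum
    # of the amplitudes of the individual ways.
    return sum(amplitudes)

def amplitude_of_sequence(amplitudes):
    # Rule b): independent sub-processes combine by multiplying their amplitudes.
    result = 1 + 0j
    for a in amplitudes:
        result *= a
    return result

def probability(amplitude):
    # The probability is the squared magnitude of the total amplitude.
    return abs(amplitude) ** 2

# Toy example: a process that can happen via two alternative paths,
# each path built from two independent sub-processes.
path1 = amplitude_of_sequence([0.6 + 0.2j, 0.5 - 0.1j])
path2 = amplitude_of_sequence([0.3 - 0.4j, 0.2 + 0.5j])
print(probability(amplitude_of_alternatives([path1, path2])))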

Basic constructions

Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: 'What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?'. The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – then we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability.

But there are other ways in which the end result could come about. The electron might move to a place and time E where it absorbs the photon; then move on before emitting another photon at F; then move on to C where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice, and involves integration.) But there is another possibility, which is that the electron first moves to G where it emits a photon which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally the name given to this process of a photon interacting with an electron in this way is Compton scattering.

There are an infinite number of other intermediate processes in which more and more photons are absorbed and/or emitted. For each of these possibilities there is a Feynman diagram describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude.
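The bookkeeping just described can be mimicked in a purely illustrative toy calculation: a zeroth-order term plus a sum over discretized intermediate points for one of the second-order diagrams. The functions E_amp and P_amp and the value of j below are invented placeholders, not the real Feynman propagators or coupling, and spacetime points are reduced to single numbers for brevity:

import cmath

def E_amp(a, b):
    # Toy stand-in for the electron amplitude E(a to b).
    return cmath.exp(1j * (b - a)) / (1 + abs(b - a))

def P_amp(a, b):
    # Toy stand-in for the photon amplitude P(a to b).
    return cmath.exp(2j * (b - a)) / (1 + abs(b - a))

j = 0.1                                # toy vertex amplitude 'j'
A, B, C, D = 0.0, 1.0, 4.0, 5.0        # toy labels for the external points

# Simplest process: electron A -> C, photon B -> D.
total = E_amp(A, C) * P_amp(B, D)

# One second-order alternative: absorb at E, emit at F (rule b for the product,
# rule a for the sum over all positions of E and F, here a crude grid).
grid = [0.5 * k for k in range(-10, 11)]
for Ept in grid:
    for Fpt in grid:
        total += (P_amp(B, Ept) * E_amp(A, Ept) * j *
                  E_amp(Ept, Fpt) * j * E_amp(Fpt, C) * P_amp(Fpt, D))

# The emit-first/absorb-later alternative (points G and H) would be added the same way.
print(abs(total) ** 2)                 # toy estimate of the probability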

That basic scaffolding remains when one moves to a quantum description but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a possibility of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.)[1]:89, 98–99

Probability amplitudes


Feynman replaces complex numbers with spinning arrows, which start at emission and end at detection of a particle. The sum of all resulting arrows represents the total probability of the event. In this diagram, light emitted by the source S bounces off a few segments of the mirror (in blue) before reaching the detector at P. The sum of all paths must be taken into account. The graph below depicts the total time spent to traverse each of the paths above.

Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but they are computed as the square of the modulus of probability amplitudes, which are complex numbers.

Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams which are actually simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. No satisfactory reason has been given for why they are needed. But pragmatically we have to accept that they are an essential part of our description of all quantum phenomena. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by
P=|\mathbf{v}+\mathbf{w}|^2
or
P=|\mathbf{v} \,\mathbf{w}|^2.
The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers.
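The difference between these two combinations is easy to see numerically. A tiny sketch (with arrows chosen arbitrarily) shows that squaring the sum of two amplitudes is not the same as summing their individual squared lengths, which is the origin of quantum interference:

import cmath, math

v = cmath.rect(0.6, math.radians(30))    # arrow of length 0.6 turned by 30 degrees
w = cmath.rect(0.5, math.radians(150))   # arrow of length 0.5 turned by 150 degrees

print(abs(v + w) ** 2)                   # quantum rule: square of the combined arrow
print(abs(v) ** 2 + abs(w) ** 2)         # naive everyday rule: sum of the two probabilities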

Addition of probability amplitudes as complex numbers

Addition and multiplication are familiar operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the start of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction.
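These geometric operations on arrows are exactly the arithmetic of complex numbers in polar form, as a quick check (with arbitrary toy arrows) confirms: the product's length is the product of the lengths and its angle is the sum of the angles:

import cmath, math

a = cmath.rect(2.0, math.radians(40))    # length 2.0, turned 40 degrees
b = cmath.rect(0.5, math.radians(25))    # length 0.5, turned 25 degrees

length, angle = cmath.polar(a * b)
print(length, math.degrees(angle))       # approximately 1.0 and 65.0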

That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore P(A to B) actually consists of 16 complex numbers, or probability amplitude arrows.[1]:120–121 There are also some minor changes to do with the quantity "j", which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping.

Associated with the fact that the electron can be polarized is another small necessary detail which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we just exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.[1]:112–113
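A toy version of this exchange rule (with made-up two-point amplitudes, purely for illustration) makes the sign flip explicit:

# Invented placeholder values for E(start to end); not real propagators.
E = {("A", "C"): 0.4 + 0.1j, ("A", "D"): 0.2 - 0.3j,
     ("B", "C"): 0.5 + 0.2j, ("B", "D"): 0.1 + 0.4j}

def two_electron_amplitude(end1, end2):
    # Antisymmetrized amplitude for electrons starting at A and B and
    # ending at end1 and end2 respectively.
    return E[("A", end1)] * E[("B", end2)] - E[("A", end2)] * E[("B", end1)]

print(two_electron_amplitude("D", "C"))   # E(A to D)E(B to C) - E(A to C)E(B to D)
print(two_electron_amplitude("C", "D"))   # exchanging the two endpoints flips the sign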

Propagators

Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and of Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows:
P(\mbox{A to B}) \rightarrow D_F(x_B-x_A),\quad  E(\mbox{C to D}) \rightarrow S_F(x_D-x_C)
where a shorthand symbol such as x_A stands for the four real numbers which give the time and position in three dimensions of the point labeled A.
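In the standard literature these propagators are most often quoted in momentum space. In one common convention (natural units ħ = c = 1, Feynman gauge for the photon) they read
S_F(p) = \frac{i\left(\gamma^\mu p_\mu + m\right)}{p^2 - m^2 + i\epsilon}\,, \qquad D_F^{\mu\nu}(q) = \frac{-i g^{\mu\nu}}{q^2 + i\epsilon}\,,
with the position-space functions above recovered by Fourier transformation; the infinitesimal iε specifies how the poles are to be handled.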

Mass renormalization



A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B we must take into account all the possible ways: all possible Feynman diagrams with those end points. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short we have a fractal-like situation in which if we look closely at a line it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a very difficult situation to handle. If adding that detail only altered things slightly then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process".[1]:128

Conclusions

Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails totally to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem."[1]:152

Mathematics

Mathematically, QED is an abelian gauge theory with the symmetry group U(1). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field is given by the real part of[22]:78
\mathcal{L}=\bar\psi(i\gamma^\mu D_\mu-m)\psi -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
where
 \gamma^\mu are Dirac matrices;
\psi a bispinor field of spin-1/2 particles (e.g. the electron–positron field);
\bar\psi\equiv\psi^\dagger\gamma^0, called "psi-bar", is sometimes referred to as the Dirac adjoint;
D_\mu \equiv \partial_\mu+ieA_\mu+ieB_\mu \,\! is the gauge covariant derivative;
e is the coupling constant, equal to the electric charge of the bispinor field;
Aμ is the covariant four-potential of the electromagnetic field generated by the electron itself;
Bμ is the external field imposed by external source;
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \,\! is the electromagnetic field tensor.
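The U(1) gauge symmetry referred to above acts (in one common convention, and leaving the fixed external field Bμ aside) as
\psi \rightarrow e^{-ie\chi(x)}\,\psi\,, \qquad A_\mu \rightarrow A_\mu + \partial_\mu \chi(x)\,,
under which D_\mu\psi \rightarrow e^{-ie\chi}D_\mu\psi and F_{\mu\nu} is unchanged, so the Lagrangian is invariant.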

Equations of motion

To begin, substituting the definition of D into the Lagrangian gives us
\mathcal{L} = i \bar\psi \gamma^\mu \partial_\mu \psi - e\bar{\psi}\gamma_\mu (A^\mu+B^\mu) \psi -m \bar{\psi} \psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}. \,
Next, we can substitute this Lagrangian into the Euler–Lagrange equation of motion for a field:
\partial_\mu \left( \frac{\partial \mathcal{L}}{\partial ( \partial_\mu \psi )} \right) - \frac{\partial \mathcal{L}}{\partial \psi} = 0 \qquad (2)
to find the field equations for QED.

The two terms from this Lagrangian are then
\partial_\mu \left( \frac{\partial \mathcal{L}}{\partial ( \partial_\mu \psi )} \right) = \partial_\mu \left( i \bar{\psi} \gamma^\mu \right), \,
\frac{\partial \mathcal{L}}{\partial \psi} = -e\bar{\psi}\gamma_\mu (A^\mu+B^\mu) - m \bar{\psi}. \,
Substituting these two back into the Euler–Lagrange equation (2) results in
i \partial_\mu \bar{\psi} \gamma^\mu + e\bar{\psi}\gamma_\mu (A^\mu+B^\mu) + m \bar{\psi} = 0 \,
Varying with respect to \bar\psi instead (equivalently, taking the conjugate of this equation) gives the equation for ψ itself:
i \gamma^\mu \partial_\mu \psi - e \gamma_\mu (A^\mu+B^\mu) \psi - m \psi = 0. \,
Bringing the middle term to the right-hand side transforms this second equation into
i \gamma^\mu \partial_\mu \psi - m \psi = e \gamma_\mu (A^\mu+B^\mu) \psi \,
The left-hand side is like the original Dirac equation and the right-hand side is the interaction with the electromagnetic field.
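In terms of the gauge covariant derivative defined earlier, this is just the compact statement
\left(i\gamma^\mu D_\mu - m\right)\psi = 0\,, \qquad D_\mu = \partial_\mu + ieA_\mu + ieB_\mu\,,
that is, the free Dirac operator with ordinary derivatives replaced by covariant ones.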

One further important equation can be found by substituting the above Lagrangian into another Euler–Lagrange equation, this time for the field, Aμ:
\partial_\nu \left( \frac{\partial \mathcal{L}}{\partial ( \partial_\nu A_\mu )} \right) - \frac{\partial \mathcal{L}}{\partial A_\mu} = 0 \qquad (3)
The two terms this time are
\partial_\nu \left( \frac{\partial \mathcal{L}}{\partial ( \partial_\nu A_\mu )} \right) = \partial_\nu \left( \partial^\mu A^\nu - \partial^\nu A^\mu \right), \,
\frac{\partial \mathcal{L}}{\partial A_\mu} = -e\bar{\psi} \gamma^\mu \psi \,
and these two terms, when substituted back into (3) give us
\partial_\nu F^{\nu \mu} = e \bar{\psi} \gamma^\mu \psi \,
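Writing the field tensor out in terms of the potential makes the role of the gauge condition in the next step explicit:
\partial_\nu F^{\nu \mu} = \partial_\nu\left(\partial^\nu A^\mu - \partial^\mu A^\nu\right) = \Box A^{\mu} - \partial^{\mu}\left(\partial_\nu A^\nu\right)\,.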
Now, if we impose the Lorenz gauge condition, requiring that the divergence of the four-potential vanish,
\partial_{\mu} A^\mu = 0
then we get
\Box A^{\mu}=e\bar{\psi} \gamma^{\mu} \psi\,,
which is a wave equation for the four potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (In the above equation, the square represents the D'Alembert operator.)

Interaction picture

This theory can be straightforwardly quantized by treating the bosonic (photon) and fermionic (electron) sectors as free.
This permits us to build a set of asymptotic states which can be used to start a computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator that, for a given initial state |i\rangle, will give a final state \langle f| such that[22]:5
M_{fi}=\langle f|U|i\rangle.
These amplitudes are the elements of the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above:[22]:123
V=e\int d^3x\bar\psi\gamma^\mu\psi A_\mu
and so, one has[22]:86
U=T\exp\left[-\frac{i}{\hbar}\int_{t_0}^tdt'V(t')\right]
where T is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the expansion parameter. This series is called the Dyson series.
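Expanding the time-ordered exponential gives the first terms of this series explicitly:
U = 1 - \frac{i}{\hbar}\int_{t_0}^{t} dt_1\, V(t_1) + \frac{1}{2!}\left(\frac{-i}{\hbar}\right)^2 \int_{t_0}^{t} dt_1 \int_{t_0}^{t} dt_2\, T\left[V(t_1)V(t_2)\right] + \cdots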

Feynman diagrams

Despite the conceptual clarity of this Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations it is much easier to work with the Fourier transforms of the propagators. Quantum physics considers particles' momenta rather than their positions, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or emission of a photon, each having specified energies and momenta.

Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams.
To these rules we must add a further one for closed loops that implies an integration over momenta \int d^4p/(2\pi)^4, since these internal ("virtual") particles are not constrained to any specific energy–momentum, not even that usually required by special relativity (that is, they may be off the mass shell). From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering.

Renormalizability

Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing simpler one-loop sub-diagrams (the electron self-energy, the photon self-energy or vacuum polarization, and the vertex correction)[22]:ch 10 that, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory to be meaningful after renormalization is that the number of diverging diagrams is finite; in this case the theory is said to be renormalizable. The reason for this is that to renormalize the observables one needs only a finite number of constants to keep the predictive value of the theory intact. This is exactly the case for quantum electrodynamics, which displays just three diverging diagrams. This procedure gives observables in very close agreement with experiment, as seen e.g. for the electron gyromagnetic ratio.

Renormalizability has become an essential criterion for a quantum field theory to be considered viable. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is presently under very active research, are renormalizable theories.

Nonconvergence of series

An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero.[23] The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is 'sick' for any negative value of the coupling constant, the series does not converge; it is at best an asymptotic series.

From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy.[24] The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED is not asymptotically free. This is one of the motivations for embedding QED within a Grand Unified Theory.
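A rough numerical sketch of this behaviour, assuming only the leading-order (one-loop, single-fermion) running of the coupling, shows the denominator of α(Q) passing through zero at a finite, though enormously large, energy scale; higher loops and the other charged particles shift the numbers but not the qualitative conclusion:

import math

alpha0 = 1 / 137.035999   # fine-structure constant near the electron mass scale
m_e = 0.000511            # electron mass in GeV

def alpha(Q):
    # One-loop running coupling with a single charged fermion (the electron).
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))

for Q in (m_e, 91.2, 1.0e4):          # electron mass, Z mass, 10 TeV (all in GeV)
    print(Q, alpha(Q))

# Scale at which the one-loop denominator vanishes (the Landau pole):
print("Landau pole near", m_e * math.exp(3 * math.pi / (2 * alpha0)), "GeV")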

Butane

From Wikipedia, the free encyclopedia ...