
Thursday, July 10, 2025

Quantum electrodynamics

From Wikipedia, the free encyclopedia

In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction.

In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. It is the most precise and stringently tested theory in physics.

History

Paul Dirac

The first formulation of a quantum theory describing radiation and matter interaction is attributed to Paul Dirac, who during the 1920s computed the coefficient of spontaneous emission of an atom. He is credited with coining the term "quantum electrodynamics".

Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and Enrico Fermi, physicists came to believe that, in principle, it was possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting doubt on the theory's internal consistency. This suggested that special relativity and quantum mechanics were fundamentally incompatible.

Hans Bethe

Difficulties increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the energy levels of the hydrogen atom, later known as the Lamb shift, and of the magnetic moment of the electron. These experiments exposed discrepancies that the theory was unable to explain.

A first indication of a possible solution was given by Hans Bethe in 1947. He made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Willis Lamb and Robert Retherford. Despite limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result with good experimental agreement. This procedure was named renormalization.

Feynman (center) and Oppenheimer (right) at Los Alamos.

Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to produce fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Tomonaga, Schwinger, and Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area. Their contributions, and Dyson's, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed unlike the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, became one of the fundamental aspects of quantum field theory and is seen as a criterion for a theory's general acceptability. Even though renormalization works well in practice, Feynman was never entirely comfortable with its mathematical validity, referring to renormalization as a "shell game" and "hocus pocus".

Neither Feynman nor Dirac was happy with this way of approaching the observations made in theoretical physics, above all in quantum mechanics.

QED is the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s, developed by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger; Gerald Guralnik, Dick Hagen, and Tom Kibble; Peter Higgs; Jeffrey Goldstone; and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.

Feynman's view of quantum electrodynamics

Introduction

Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated below.

The key components of Feynman's presentation of QED are three basic actions.

A photon goes from one place and time to another place and time.
An electron goes from one place and time to another place and time.
An electron emits or absorbs a photon at a certain place and time.
Feynman diagram elements

These actions are represented in the form of visual shorthand by the three basic elements of diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram.

As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of the total probability amplitude. If a photon moves from one place and time A to another place and time B, the associated quantity is written in Feynman's shorthand as P(A to B), and it depends only on the momentum and polarization of the photon. The similar quantity for an electron moving from C to D is written E(C to D). It depends on the momentum and polarization of the electron, in addition to a constant Feynman calls n, sometimes called the "bare" mass of the electron: it is related to, but not the same as, the measured electron mass. Finally, the quantity that tells us about the probability amplitude for an electron to emit or absorb a photon Feynman calls j; it is sometimes called the "bare" charge of the electron: it is a constant, and is related to, but not the same as, the measured electron charge e.

QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman.

The basic rules of probability amplitudes that will be used are:

  (a) If an event can occur via a number of indistinguishable alternative processes (a.k.a. "virtual" processes), then its probability amplitude is the sum of the probability amplitudes of the alternatives.
  (b) If a virtual process involves a number of independent or concomitant sub-processes, then the probability amplitude of the total (compound) process is the product of the probability amplitudes of the sub-processes.

The indistinguishability criterion in (a) is very important: it means that there is no observable feature present in the given system that in any way "reveals" which alternative is taken. In such a case, one cannot observe which alternative actually takes place without changing the experimental setup in some way (e.g. by introducing a new apparatus into the system). Whenever one is able to observe which alternative takes place, one always finds that the probability of the event is the sum of the probabilities of the alternatives. Indeed, if this were not the case, the very term "alternatives" to describe these processes would be inappropriate. What (a) says is that once the physical means for observing which alternative occurred is removed, one cannot still say that the event is occurring through "exactly one of the alternatives" in the sense of adding probabilities; one must add the amplitudes instead.

Similarly, the independence criterion in (b) is very important: it only applies to processes which are not "entangled".
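The arithmetic of rules (a) and (b) can be made concrete in a few lines of code. The complex amplitudes below are arbitrary made-up values, not the output of any QED formula; the point is only how amplitudes combine and how the result differs from adding everyday probabilities:

```python
# Two indistinguishable alternatives, each built from two independent
# sub-processes, with made-up illustrative complex amplitudes.
amp_path1 = (0.6 + 0.2j) * (0.5 - 0.3j)   # rule (b): multiply sub-process amplitudes
amp_path2 = (0.1 + 0.4j) * (0.7 + 0.1j)   # rule (b) for the second alternative

# Rule (a): for indistinguishable alternatives, add amplitudes, then square.
total_amp = amp_path1 + amp_path2
p_quantum = abs(total_amp) ** 2

# If the alternatives were distinguishable, we would add probabilities instead.
p_classical = abs(amp_path1) ** 2 + abs(amp_path2) ** 2

print(p_quantum, p_classical)  # the two generally differ: interference
```

The difference between the two printed numbers is exactly the interference term that vanishes once the alternatives become observably distinguishable.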

Basic constructions

Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability.

Compton scattering

But there are other ways in which the result could come about. The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering.

An infinite number of other intermediate "virtual" processes exist in which photons are absorbed or emitted. For each of these processes, a Feynman diagram could be drawn describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude.

That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a nonzero probability amplitude of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.)

Probability amplitudes

Feynman replaces complex numbers with spinning arrows, which start at emission and end at detection of a particle. The sum of all resulting arrows gives a final arrow whose length squared equals the probability of the event. In this diagram, light emitted by the source S can reach the detector at P by bouncing off the mirror (in blue) at various points. Each one of the paths has an arrow associated with it (whose direction changes uniformly with the time taken for the light to traverse the path). To correctly calculate the total probability for light to reach P starting at S, one needs to sum the arrows for all such paths. The graph below depicts the total time spent to traverse each of the paths above.

Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers.

Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by

P = |v + w|²

or

P = |v × w|².

The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers.

Addition of probability amplitudes as complex numbers
Multiplication of probability amplitudes as complex numbers

Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction.
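In code, Feynman's arrows are just complex numbers, and the two geometric operations above are ordinary complex addition and multiplication. The lengths and angles below are arbitrary illustrative values:

```python
import cmath

v = cmath.rect(0.8, cmath.pi / 6)   # arrow of length 0.8, turned 30 degrees
w = cmath.rect(0.5, cmath.pi / 3)   # arrow of length 0.5, turned 60 degrees

s = v + w    # sum: tip-to-tail arrow from the start of v to the tip of w
p = v * w    # product: lengths multiply, turning angles add

print(abs(p))           # 0.4 = 0.8 * 0.5
print(cmath.phase(p))   # pi/2 = pi/6 + pi/3
```

This is why the arrow picture is "simple but accurate": it is complex arithmetic in disguise.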

That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows. There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping.

Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.
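A sketch of that antisymmetrized combination, with made-up complex numbers standing in for the four E(X to Y) amplitudes (they are not computed from any propagator):

```python
# Hypothetical single-particle amplitudes for illustration only.
E_AtoC, E_AtoD = 0.4 + 0.1j, 0.2 - 0.3j
E_BtoC, E_BtoD = 0.1 + 0.2j, 0.5 + 0.0j

# Fermi statistics: exchanging the two outgoing electrons flips the sign,
# so the direct and exchanged diagrams combine with a relative minus sign.
amplitude = E_AtoD * E_BtoC - E_AtoC * E_BtoD
probability = abs(amplitude) ** 2
print(probability)
```

For bosons (e.g. two photons) the same construction would use a plus sign instead of the minus sign.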

Propagators

Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and of Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows:

P(A to B) → D_F(x_B − x_A),    E(C to D) → S_F(x_D − x_C),

where a shorthand symbol such as x_A stands for the four real numbers that give the time and position in three dimensions of the point labeled A.

Mass renormalization

Electron self-energy loop

A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process", and Dirac also criticized this procedure, saying "in mathematics one does not get rid of infinities when it does not please you".
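The flavor of this "lines within lines" problem can be caricatured by a toy model in which each extra self-energy insertion multiplies the amplitude by a fixed factor; the diagrams then form a geometric series that can be resummed in closed form. The scalar numbers and the form G = 1/(G0⁻¹ − Σ) below are pure illustration, not a real QED computation (where the insertion itself is a divergent integral):

```python
# Toy bare propagator value and self-energy insertion at a fixed momentum.
G0 = 0.5      # bare line
sigma = 0.3   # one self-energy blob

# Sum diagrams with n = 0, 1, 2, ... insertions: G0 + G0*sigma*G0 + ...
series = sum(G0 * (sigma * G0) ** n for n in range(50))

# Closed-form resummation of the same geometric series.
resummed = 1.0 / (1.0 / G0 - sigma)

print(series, resummed)  # the truncated series converges to the resummed value
```

Renormalization amounts to absorbing such insertions into redefined ("dressed") mass and charge; the disaster in real QED is that the analogue of sigma is infinite before that absorption.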

Conclusions

Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem."

Mathematical formulation

QED action

Mathematically, QED is an abelian gauge theory with the symmetry group U(1), defined on Minkowski space (flat spacetime). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field in natural units gives rise to the action

S_QED = ∫ d⁴x [ −(1/4) F^{μν} F_{μν} + ψ̄ (iγ^μ D_μ − m) ψ ]

where

  • γ^μ are the Dirac matrices,
  • ψ is a bispinor field of spin-1/2 particles (e.g. the electron–positron field),
  • ψ̄ ≡ ψ†γ⁰ is its Dirac adjoint,
  • D_μ ≡ ∂_μ + ieA_μ + ieB_μ is the gauge covariant derivative,
  • e is the coupling constant, equal to the electric charge of the bispinor field,
  • m is the mass of the electron or positron,
  • A_μ is the covariant four-potential of the electromagnetic field generated by the electron itself,
  • B_μ is the external field imposed by an external source, and
  • F^{μν} = ∂^μ A^ν − ∂^ν A^μ is the electromagnetic field tensor.

Expanding the covariant derivative reveals a second useful form of the Lagrangian (external field set to zero for simplicity)

L = iψ̄ γ^μ ∂_μ ψ − (1/4) F_{μν} F^{μν} − e j^μ A_μ − m ψ̄ ψ,

where j^μ is the conserved current arising from Noether's theorem. It is written

j^μ = ψ̄ γ^μ ψ.

Equations of motion

Expanding the covariant derivative in the Lagrangian gives

L = i ψ̄ γ^μ ∂_μ ψ − e ψ̄ γ^μ (A_μ + B_μ) ψ − m ψ̄ ψ − (1/4) F_{μν} F^{μν}.

For simplicity, B_μ has been set to zero, with no loss of generality: alternatively, we can absorb B_μ into a new gauge field A′_μ = A_μ + B_μ and relabel the new field as A_μ.

From this Lagrangian, the equations of motion for the ψ and A_μ fields can be obtained.

Equation of motion for ψ

These arise most straightforwardly by considering the Euler–Lagrange equation for ψ̄. Since the Lagrangian contains no ∂_μψ̄ terms, we immediately get

∂L/∂ψ̄ = i γ^μ ∂_μ ψ − e γ^μ A_μ ψ − m ψ = 0,

so the equation of motion can be written

(i γ^μ ∂_μ − m) ψ = e γ^μ A_μ ψ.

Equation of motion for Aμ

Using the Euler–Lagrange equation for the A_μ field,

∂_ν ( ∂L/∂(∂_ν A_μ) ) − ∂L/∂A_μ = 0,    (3)

the derivatives this time are

∂L/∂(∂_ν A_μ) = ∂^μ A^ν − ∂^ν A^μ,
∂L/∂A_μ = −e ψ̄ γ^μ ψ.

Substituting back into (3) leads to

□ A^μ − ∂^μ ∂_ν A^ν = e ψ̄ γ^μ ψ,

which can be written in terms of the current j^μ as

□ A^μ − ∂^μ ∂_ν A^ν = e j^μ.

Now, if we impose the Lorenz gauge condition ∂_μ A^μ = 0, the equations reduce to

□ A^μ = e j^μ,

which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (The square represents the wave operator, □ = ∂_μ ∂^μ = ∂²/∂t² − ∇².)

Interaction picture

This theory can be straightforwardly quantized by treating bosonic and fermionic sectors as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator U, which for a given initial state |i⟩ will give a final state ⟨f| in such a way as to have

M_fi = ⟨f| U |i⟩.

This operator between asymptotic states is also known as the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above:

V = e ∫ d³x ψ̄ γ^μ ψ A_μ,

which can also be written in terms of an integral over the interaction Hamiltonian density H_I(x). Thus, one has

U = T exp[ −i ∫_{t₀}^{t} dt′ V(t′) ],

where T is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series expansion of the probability amplitude is called the Dyson series, and is given by:

S = Σ_{n=0}^{∞} [(−i)^n / n!] ∫ d⁴x₁ ⋯ ∫ d⁴x_n T[ H_I(x₁) ⋯ H_I(x_n) ].
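For a time-independent interaction, the time-ordered exponential collapses to an ordinary exponential, so the structure of the series can be checked numerically on a toy two-level Hamiltonian (an assumed example, nothing QED-specific):

```python
import math
import numpy as np

# Toy Hermitian 2x2 "interaction Hamiltonian" and an evolution time.
V = np.array([[0.0, 0.3], [0.3, 0.1]])
t = 1.0

# Dyson (here: plain Taylor) series for constant V: U = sum_n (-i V t)^n / n!
U_series = sum(
    (-1j * t) ** n * np.linalg.matrix_power(V, n) / math.factorial(n)
    for n in range(20)
)

# Exact evolution operator via eigendecomposition of the Hermitian V.
w, P = np.linalg.eigh(V)
U_exact = P @ np.diag(np.exp(-1j * w * t)) @ P.conj().T

print(np.max(np.abs(U_series - U_exact)))  # ~0: the truncated series reproduces U
```

In QED the expansion parameter multiplying each order is the small fine-structure constant, which is why low-order truncations are so accurate.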

Feynman diagrams

Despite the conceptual clarity of the Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta.

Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. The drawing rules associate a propagator with each internal line (the photon propagator for wavy lines, the electron propagator for straight lines), a factor −ieγ^μ with each vertex, and spinors or polarization vectors with the external electron and photon lines.

To these rules we must add a further one for closed loops that implies an integration on momenta, ∫ d⁴p/(2π)⁴, since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). The signature of the metric is (+, −, −, −).

From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering. The Feynman diagrams in this case are the two tree-level diagrams, one in which the electron absorbs the incoming photon before emitting the outgoing one, and one in which the emission precedes the absorption. From these we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix, from which we can compute the cross section for this scattering.

Nonperturbative phenomena

The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field. This process, called the Schwinger effect, cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics.

Renormalizability

Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones: the electron self-energy, the photon self-energy (vacuum polarization), and the vertex correction. These, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason for this is that to get observables renormalized, one needs a finite number of constants to maintain the predictive value of the theory untouched. This is exactly the case of quantum electrodynamics, which displays just three diverging diagrams. This procedure gives observables in very close agreement with experiment, as seen e.g. for the electron gyromagnetic ratio.

Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories.

Nonconvergence of series

An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge but is at best an asymptotic series.
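The behavior of such an asymptotic series can be illustrated with the classic Euler example, the formal sum of (−1)^n n! x^n: its terms first shrink and then grow without bound, so truncating near the smallest term is the best one can do. This is illustration only; the actual QED series has a different, more slowly growing structure:

```python
import math

x = 0.1
terms = [(-1) ** n * math.factorial(n) * x ** n for n in range(30)]
partials = [sum(terms[: k + 1]) for k in range(30)]

# Term magnitudes shrink until n ~ 1/x = 10, then grow without bound:
# the series is asymptotic, not convergent.
sizes = [abs(t) for t in terms]
best_n = sizes.index(min(sizes))
print(best_n, sizes[best_n], sizes[-1])
```

Past the smallest term, adding more orders makes the partial sums worse, which is exactly the practical meaning of "at best an asymptotic series".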

From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy. The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory.
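The Landau pole can be made concrete with the standard one-loop running of the fine-structure constant, keeping only the electron loop (including all charged fermions makes the physical coupling run faster, to roughly 1/128 at the Z mass):

```python
import math

alpha_0 = 1 / 137.035999   # measured fine-structure constant at the electron mass scale
m_e = 0.511e6              # electron mass in eV

def alpha(mu):
    """One-loop running coupling with a single electron loop."""
    return alpha_0 / (1 - (2 * alpha_0 / (3 * math.pi)) * math.log(mu / m_e))

# The denominator vanishes at the Landau pole: ln(mu_pole / m_e) = 3*pi/(2*alpha_0).
log_pole = 3 * math.pi / (2 * alpha_0)
print(log_pole)            # ~646, i.e. an absurdly large energy scale

print(alpha(91.2e9))       # coupling at ~91.2 GeV: larger than alpha_0
```

The pole sits at m_e·exp(~646), far beyond any physical scale, which is why QED works perfectly well as an effective theory despite being ill-defined at arbitrarily high energy.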

Electrodynamics in curved spacetime

This theory can be extended, at least as a classical field theory, to curved spacetime. This arises similarly to the flat spacetime case, from coupling a free electromagnetic theory to a free fermion theory and including an interaction which promotes the partial derivative in the fermion theory to a gauge-covariant derivative.

Electromagnetic field

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Electromagnetic_field

An electromagnetic field (also EM field) is a physical field, varying in space and time, that represents the electric and magnetic influences generated by and acting upon electric charges. The field at any point in space and time can be regarded as a combination of an electric field and a magnetic field. Because of the interrelationship between the fields, a disturbance in the electric field can create a disturbance in the magnetic field which in turn affects the electric field, leading to an oscillation that propagates through space, known as an electromagnetic wave.

The way in which charges and currents (i.e. streams of charges) interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. Maxwell's equations detail how the electric field converges towards or diverges away from electric charges, how the magnetic field curls around electrical currents, and how changes in the electric and magnetic fields influence each other. The Lorentz force law states that a charge subject to an electric field feels a force along the direction of the field, and a charge moving through a magnetic field feels a force that is perpendicular both to the magnetic field and to its direction of motion.

The electromagnetic field is described by classical electrodynamics, an example of a classical field theory. This theory describes many macroscopic physical phenomena accurately. However, it was unable to explain the photoelectric effect and atomic absorption spectroscopy, experiments at the atomic scale. That required the use of quantum mechanics, specifically the quantization of the electromagnetic field and the development of quantum electrodynamics.

History

Results of Michael Faraday's iron filings experiment.

The empirical investigation of electromagnetism is at least as old as the ancient Greek philosopher, mathematician and scientist Thales of Miletus, who around 600 BCE described his experiments rubbing animal fur on various materials such as amber, creating static electricity. By the 18th century, it was understood that objects can carry positive or negative electric charge, that two objects carrying charge of the same sign repel each other, that two objects carrying charges of opposite sign attract one another, and that the strength of this force falls off as the square of the distance between them. Michael Faraday visualized this in terms of the charges interacting via the electric field. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field are produced when the charge moves, creating an electric current with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole—the electromagnetic field. In 1820, Hans Christian Ørsted showed that an electric current can deflect a nearby compass needle, establishing that electricity and magnetism are closely related phenomena. Faraday then made the seminal observation that time-varying magnetic fields could induce electric currents in 1831.

In 1861, James Clerk Maxwell synthesized all the work to date on electrical and magnetic phenomena into a single mathematical theory, from which he then deduced that light is an electromagnetic wave. Maxwell's continuous field theory was very successful until evidence supporting the atomic model of matter emerged. Beginning in 1877, Hendrik Lorentz developed an atomic model of electromagnetism and in 1897 J. J. Thomson completed experiments that defined the electron. The Lorentz theory works for free charges in electromagnetic fields, but fails to predict the energy spectrum for bound charges in atoms and molecules. For that problem, quantum mechanics is needed, ultimately leading to the theory of quantum electrodynamics.

Practical applications of the new understanding of electromagnetic fields emerged in the late 1800s. The electrical generator and motor were invented using only empirical findings such as Faraday's and Ampère's laws combined with practical experience.

Mathematical description

There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).

If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
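As a concrete illustration of an electrostatic field, the field of a point charge can be evaluated directly from Coulomb's law, E = q r̂ / (4π ε₀ r²). The sketch below (function name invented for illustration) evaluates E at a displacement r from a charge q:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def e_field_point_charge(q, rx, ry, rz):
    """Electrostatic field E (V/m) of a point charge q (C) at displacement r (m)."""
    r2 = rx * rx + ry * ry + rz * rz
    r = math.sqrt(r2)
    k = q / (4 * math.pi * EPS0 * r2 * r)  # q / (4*pi*eps0*r^3), so k*r gives q/(4*pi*eps0*r^2) along r-hat
    return (k * rx, k * ry, k * rz)

# Field magnitude 1 m from a 1 C charge is the Coulomb constant, ~8.99e9 V/m
Ex, Ey, Ez = e_field_point_charge(1.0, 1.0, 0.0, 0.0)
```

Because the field is a function of position, the same function can be sampled at every point of space, matching the E(x, y, z, t) picture above (with no time dependence in the static case).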

With the advent of special relativity, physical laws became amenable to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.

The behavior of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:

Gauss's law: ∇ ⋅ E = ρ / ε₀
Gauss's law for magnetism: ∇ ⋅ B = 0
Faraday's law: ∇ × E = −∂B/∂t
Ampère–Maxwell law: ∇ × B = μ₀ (J + ε₀ ∂E/∂t)

where ρ is the charge density (a function of time and position), ε₀ is the vacuum permittivity, μ₀ is the vacuum permeability, and J is the current density vector (also a function of time and position). Inside a linear material, Maxwell's equations change by replacing the permittivity and permeability of free space with those of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.

The Lorentz force law governs the interaction of the electromagnetic field with charged matter: a particle of charge q moving with velocity v experiences the force F = q(E + v × B).

When a field travels into a different medium, its behavior changes according to the properties of that medium.

Properties of the field

Electrostatics and magnetostatics

Electric field of a positive point electric charge suspended over an infinite sheet of conducting material. The field is depicted by electric field lines, lines which follow the direction of the electric field in space.

The Maxwell equations simplify when the charge density at each point in space does not change over time and all electric currents likewise remain constant. All of the time derivatives vanish from the equations, leaving two expressions that involve the electric field, ∇ ⋅ E = ρ / ε₀ and ∇ × E = 0, along with two formulae that involve the magnetic field: ∇ ⋅ B = 0 and ∇ × B = μ₀ J. These expressions are the basic equations of electrostatics, which focuses on situations where electrical charges do not move, and magnetostatics, the corresponding area of magnetic phenomena.
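A standard magnetostatic example is the field of a long straight wire carrying a steady current I, which follows from Ampère's law: at distance r from the wire, B = μ₀ I / (2π r). A minimal sketch (helper name hypothetical):

```python
import math

MU0 = 1.25663706212e-6  # vacuum permeability, H/m

def b_field_infinite_wire(current, distance):
    """Magnitude of B (tesla) at `distance` m from a long straight wire carrying `current` A."""
    return MU0 * current / (2 * math.pi * distance)

# 10 A at 2 cm: B = mu0 * 10 / (2*pi*0.02) ~ 1e-4 T
B = b_field_infinite_wire(10.0, 0.02)
```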

Transformations of electromagnetic fields

Whether a physical effect is attributable to an electric field or to a magnetic field is dependent upon the observer, in a way that special relativity makes mathematically precise. For example, suppose that a laboratory contains a long straight wire that carries an electrical current. In the frame of reference where the laboratory is at rest, the wire is motionless and electrically neutral: the current, composed of negatively charged electrons, moves against a background of positively charged ions, and the densities of positive and negative charges cancel each other out. A test charge near the wire would feel no electrical force from the wire. However, if the test charge is in motion parallel to the current, the situation changes. In the rest frame of the test charge, the positive and negative charges in the wire are moving at different speeds, and so the positive and negative charge distributions are Lorentz-contracted by different amounts. Consequently, the wire has a nonzero net charge density, and the test charge must experience a nonzero electric field and thus a nonzero force. In the rest frame of the laboratory, there is no electric field to explain the test charge being pulled towards or pushed away from the wire. So, an observer in the laboratory rest frame concludes that a magnetic field must be present.

In general, a situation that one observer describes using only an electric field will be described by an observer in a different inertial frame using a combination of electric and magnetic fields. Analogously, a phenomenon that one observer describes using only a magnetic field will be, in a relatively moving reference frame, described by a combination of fields. The rules for relating the fields required in different reference frames are the Lorentz transformations of the fields.

Thus, electrostatics and magnetostatics are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely a consequence of different frames of measurement. The fact that the two field variations can be reproduced just by changing the motion of the observer is further evidence that there is only a single actual field involved which is simply being observed differently.
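These transformation rules can be sketched numerically. For a boost with speed v along the x-axis, the standard Lorentz transformation leaves the components of E and B parallel to the boost unchanged and mixes the transverse components. A useful consistency check is that the Lorentz invariant E² − c²B² comes out the same in both frames. Function names below are illustrative:

```python
import math

C = 299792458.0  # speed of light, m/s

def boost_fields(E, B, v):
    """Transform (E, B) to a frame moving with speed v along +x.
    E in V/m, B in tesla; returns (E', B') as tuples."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)  # Lorentz factor
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    Bp = (Bx, g * (By + v * Ez / C**2), g * (Bz - v * Ey / C**2))
    return Ep, Bp

def invariant(E, B):
    """Lorentz invariant E^2 - c^2 B^2, identical in every inertial frame."""
    return sum(e * e for e in E) - C**2 * sum(b * b for b in B)

# A pure magnetic field in the lab frame acquires an electric component
# in a frame moving at half the speed of light:
E2, B2 = boost_fields((0.0, 0.0, 0.0), (0.0, 0.0, 1.0e-3), 0.5 * C)
```

This mirrors the wire example above: the moving observer sees an electric field where the lab observer sees none, yet the invariant combination of the two fields is unchanged.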

Reciprocal behavior of electric and magnetic fields

The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as "a changing magnetic field inside a loop creates an electric voltage around the loop". This is the principle behind the electric generator.

Ampère's Law roughly states that "an electrical current around a loop creates a magnetic field through the loop". Thus, this law can be applied to generate a magnetic field and run an electric motor.
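As a worked example of the generator principle, consider a flat coil of N turns and area A rotating at angular frequency ω in a uniform field B. The flux through the coil is Φ = N B A cos(ωt), so Faraday's law gives an EMF of N B A ω sin(ωt), with peak value N B A ω. A sketch (names illustrative):

```python
import math

def peak_emf(turns, b_field, area, freq_hz):
    """Peak EMF (V) of a flat coil rotating at freq_hz in a uniform field.
    From Faraday's law: emf = -N * dPhi/dt with Phi = B*A*cos(omega*t)."""
    omega = 2 * math.pi * freq_hz  # angular frequency, rad/s
    return turns * b_field * area * omega

# 100-turn coil, 0.01 m^2, in 0.5 T, rotating at 50 Hz:
v_peak = peak_emf(100, 0.5, 0.01, 50.0)  # 100 * 0.5 * 0.01 * 2*pi*50 = 50*pi ~ 157 V
```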

Behavior of the fields in the absence of charges or currents

A linearly polarized electromagnetic plane wave propagating parallel to the z-axis is a possible solution for the electromagnetic wave equations in free space. The electric field, E, and the magnetic field, B, are perpendicular to each other and the direction of propagation.

Maxwell's equations can be combined to derive wave equations. The solutions of these equations take the form of an electromagnetic wave. In a volume of space not containing charges or currents (free space) – that is, where ρ and J are zero – the electric and magnetic fields satisfy these electromagnetic wave equations:

∇²E − (1/c²) ∂²E/∂t² = 0
∇²B − (1/c²) ∂²B/∂t² = 0

where c = 1/√(μ₀ε₀) is the speed of light.

James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampère's circuital law. This unified the physical understanding of electricity, magnetism, and light: visible light is but one portion of the full range of electromagnetic waves, the electromagnetic spectrum.
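This relationship can be checked numerically: the wave speed predicted by Maxwell's equations, 1/√(μ₀ε₀), computed from the measured values of the vacuum permeability and permittivity, reproduces the measured speed of light.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
MU0 = 1.25663706212e-6   # vacuum permeability, H/m

# Wave speed predicted by Maxwell's equations:
c = 1.0 / math.sqrt(MU0 * EPS0)  # ~2.998e8 m/s, the speed of light
```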

Time-varying EM fields in Maxwell's equations

An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR) since it radiates from the charges and currents in the source. Such radiation can occur across a wide range of frequencies called the electromagnetic spectrum, including radio waves, microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.

A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.

A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field.

Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.

Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as RFID tags, metal detectors, and MRI scanner coils at higher frequencies.

Health and safety

The potential effects of electromagnetic fields on human health vary widely depending on the frequency and intensity of the fields and the length of exposure. Low-frequency, low-intensity, short-duration exposure to electromagnetic radiation is generally considered safe. On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, is known to cause significant harm in some circumstances.
