Wednesday, May 2, 2018

Electron

From Wikipedia, the free encyclopedia
Electron
Hydrogen atom orbitals at different energy levels. The brighter areas are where one is most likely to find an electron at any given time.

Composition: Elementary particle[1]
Statistics: Fermionic
Generation: First
Interactions: Gravity, electromagnetic, weak
Symbol: e−, β−
Antiparticle: Positron (also called antielectron)
Theorized: Richard Laming (1838–1851),[2] G. Johnstone Stoney (1874) and others[3][4]
Discovered: J. J. Thomson (1897)[5]
Mass: 9.10938356(11)×10−31 kg[6]; 5.48579909070(16)×10−4 u[6]; [1822.8884845(14)]−1 u[note 1]; 0.5109989461(31) MeV/c2[6]
Mean lifetime: stable (> 6.6×1028 yr[7])
Electric charge: −1 e[note 2]; −1.6021766208(98)×10−19 C[6]; −4.80320451(10)×10−10 esu
Magnetic moment: −1.00115965218091(26) μB[6]
Spin: 1/2
Weak isospin: LH: −1/2, RH: 0
Weak hypercharge: LH: −1, RH: −2

The electron is a subatomic particle, symbol e− or β−, whose electric charge is negative one elementary charge.[8] Electrons belong to the first generation of the lepton particle family,[9] and are generally thought to be elementary particles because they have no known components or substructure.[1] The electron has a mass that is approximately 1/1836 that of the proton.[10] Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[9] Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
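As a rough illustration of that last point, the sketch below compares the de Broglie wavelength λ = h/p of an electron and a proton at the same kinetic energy, using the non-relativistic momentum p = √(2mK). The constants are rounded, and the helper name and 100 eV figure are illustrative choices, not from the article.

```python
# A rough comparison, not from the article: de Broglie wavelengths of an
# electron and a proton at the same kinetic energy, via p = sqrt(2 m K).
import math

H = 6.62607015e-34            # Planck constant, J*s
M_ELECTRON = 9.10938356e-31   # electron mass, kg
M_PROTON = 1.672621898e-27    # proton mass, kg
EV = 1.602176634e-19          # joules per electronvolt

def de_broglie_wavelength(mass_kg, kinetic_energy_ev):
    """Non-relativistic de Broglie wavelength in meters (hypothetical helper)."""
    momentum = math.sqrt(2.0 * mass_kg * kinetic_energy_ev * EV)
    return H / momentum

K = 100.0  # eV; non-relativistic for both particles
print(de_broglie_wavelength(M_ELECTRON, K))  # ~1.2e-10 m
print(de_broglie_wavelength(M_PROTON, K))    # ~2.9e-12 m, about 43x shorter
```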

Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions.[11] Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer it will generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators.

Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force between the positive protons within atomic nuclei and the negative electrons outside them allows the two to combine into atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of electrons between two or more atoms is the main cause of chemical bonding.[12] In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.[3] Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897.[5][13][14] Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles can be totally annihilated, producing gamma ray photons.

History

Discovery of effect of electric force

The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity.[15] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus, to refer to this property of attracting small objects after being rubbed.[16] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον (ēlektron).

Discovery of two kinds of charges

In the early 1700s, Francis Hauksbee and French chemist Charles François du Fay independently discovered what they believed were two kinds of frictional electricity—one generated from rubbing glass, the other from rubbing resin. From this, du Fay theorized that electricity consists of two electrical fluids, vitreous and resinous, that are separated by friction, and that neutralize each other when combined.[17] American scientist Ebenezer Kinnersley later independently reached the same conclusion.[18]:118 A decade later, Benjamin Franklin proposed that electricity was not from different types of electrical fluid but a single electrical fluid showing an excess (+) or deficit (−); he gave these states the modern charge nomenclature of positive and negative, respectively.[19] Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.[20]

Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges.[2] Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis.[21] However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".[3]

Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron.[22][23] The word electron is a combination of the words electric and ion.[24] The suffix -on, which is now used to designate other subatomic particles, such as the proton or neutron, is in turn derived from electron.[25][26]

Discovery of free electrons outside matter

A beam of electrons deflected in a circle by a magnetic field[27]

Electron detected in an isopropanol cloud chamber

The German physicist Johann Wilhelm Hittorf studied electrical conductivity in rarefied gases: in 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays.[28] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[29] He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[30][31] In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.[32]

The German-born British physicist Arthur Schuster expanded upon Crookes' experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given level of current, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time.[30][33]

In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.[34]

In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson,[13] performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[5] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[5][14] He showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[5][35] The name electron was again proposed for these particles by the Irish physicist George Francis FitzGerald, and the name has since gained universal acceptance.

Robert Millikan

While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter.[36] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[37] This evidence strengthened the view that electrons existed as components of atoms.[38][39]

The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team,[5] using clouds of charged water droplets generated by electrolysis,[13] and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[40] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[41]

Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.[42]

Atomic theory

The Bohr model of the atom, showing states of an electron with energy quantized by the number n. An electron dropping to a lower orbit emits a photon equal to the energy difference between the orbits.

By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[43] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[44] However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.[43]
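A minimal sketch of the quantitative success being described: the Bohr energy levels En = −13.6 eV/n² and the photon emitted in the n = 3 → 2 transition, which reproduces the red Balmer Hα line. The constants are rounded and the helper name is an illustrative choice, not from the article.

```python
# A rough check, not from the article: Bohr-model hydrogen levels
# E_n = -13.6 eV / n^2 and the n = 3 -> 2 (Balmer H-alpha) photon.
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV (rounded)

def bohr_energy(n):
    """Energy of the n-th Bohr orbit in eV (negative means bound)."""
    return -RYDBERG_EV / n**2

photon_ev = bohr_energy(3) - bohr_energy(2)  # ~1.89 eV
wavelength_nm = 1239.84 / photon_ev          # lambda[nm] = 1239.84 / E[eV]
print(photon_ev, wavelength_nm)              # ~1.89 eV, ~656 nm (red line)
```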

Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[45] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[46] In 1919, the American chemist Irving Langmuir elaborated on Lewis' static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[47] In turn, he divided the shells into a number of cells, each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[46] which were known to largely repeat themselves according to the periodic law.[48]

In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.[49] The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment.[43][50] This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.[51]

Quantum mechanics

In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light.[52] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[53] The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits, thereby creating interference patterns. In 1927, George Paget Thomson discovered that the interference effect was produced when a beam of electrons was passed through thin metal foils, and American physicists Clinton Davisson and Lester Germer demonstrated it by the reflection of electrons from a crystal of nickel.[54]
 
In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit. In the figure, the shading indicates the relative probability to "find" the electron, having the energy corresponding to the given quantum numbers, at that point.

De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated.[55] Rather than yielding a solution that determined the location of an electron over time, this wave equation could instead be used to predict the probability of finding an electron near a position, especially for a position where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum.[56] Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.[57]

In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory – by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[58] In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[59] This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons and using electron as a generic term to describe both the positively and negatively charged variants.

In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered that the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. The difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.[60]

Particle accelerators

With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles.[61] The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.[62]

With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[63] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[64] The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[65][66]

Confinement of individual electrons

Individual electrons can now be easily confined in ultra-small (L = 20 nm, W = 20 nm) CMOS transistors operated at cryogenic temperatures over a range of −269 °C (4 K) to about −258 °C (15 K).[67] The electron wavefunction spreads in a semiconductor lattice and interacts negligibly with the valence band electrons, so it can be treated in the single-particle formalism, by replacing its mass with the effective mass tensor.

Characteristics

Classification

Standard Model of elementary particles. The electron (symbol e) is on the left.

In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles.[68] The second and third generations contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2.[69]

Fundamental properties

The invariant mass of an electron is approximately 9.109×10−31 kilograms,[70] or 5.489×10−4 atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[10][71] Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.[72]
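As a quick consistency check on these figures, here is a short sketch (rounded constants, illustrative only) recovering the 0.511 MeV rest energy and the ~1836 mass ratio from the masses via E = mc²:

```python
# A quick consistency check, not from the article: E = m c^2 and the
# proton-to-electron mass ratio, with rounded constant values.
M_ELECTRON = 9.10938356e-31  # kg
M_PROTON = 1.672621898e-27   # kg
C = 299792458.0              # speed of light, m/s
EV = 1.602176634e-19         # joules per electronvolt

rest_energy_mev = M_ELECTRON * C**2 / EV / 1e6
print(rest_energy_mev)        # ~0.511 MeV
print(M_PROTON / M_ELECTRON)  # ~1836.15
```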

Electrons have an electric charge of −1.602×10−19 coulomb,[70] which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. This elementary charge has a relative standard uncertainty of 2.2×10−8.[70] Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[73] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e−, where the minus sign indicates the negative charge. The positron is symbolized by e+ because it has the same properties as the electron but with a positive rather than negative charge.[69][70]

The electron has an intrinsic angular momentum or spin of 1/2.[70] This property is usually stated by referring to the electron as a spin-1/2 particle.[69] For such particles the spin magnitude is (√3/2) ħ,[note 3] while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[70] It is approximately equal to one Bohr magneton,[74][note 4] which is a physical constant equal to 9.27400915(23)×10−24 joules per tesla.[70] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[75]
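For reference, the quoted magnitude follows from the general spin formula of quantum mechanics (a standard derivation, not specific to this article):

$$|\mathbf{S}| = \sqrt{s(s+1)}\,\hbar \;=\; \sqrt{\tfrac{1}{2}\left(\tfrac{1}{2}+1\right)}\,\hbar \;=\; \frac{\sqrt{3}}{2}\,\hbar \approx 0.866\,\hbar \quad \text{for } s = \tfrac{1}{2}.$$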

The electron has no known substructure[1][76] and it is assumed to be a point particle with a point charge and no spatial extent.[9] In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties contrasts with experimental observations in Penning traps, which point to a finite, non-zero radius of the electron.[citation needed] A possible explanation of this paradoxical situation is given below in the "Virtual particles" subsection, by taking into consideration the Foldy–Wouthuysen transformation.

The issue of the radius of the electron is a challenging problem of modern theoretical physics. The hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.[77]

Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10−22 meters.[78] The upper bound of the electron radius of 10−18 meters[79] can be derived using the uncertainty relation in energy.

There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10−15 m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[80][note 5]
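The "simplistic calculation" referred to here equates the electrostatic self-energy of the charge to the electron's rest energy; a short sketch (rounded constants, illustrative only) reproduces the quoted value:

```python
# An illustrative sketch, not from the article: the classical electron
# radius r_e = e^2 / (4 pi epsilon_0 m_e c^2), with rounded constants.
import math

E_CHARGE = 1.602176634e-19    # C
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_ELECTRON = 9.10938356e-31   # kg
C = 299792458.0               # m/s

r_e = E_CHARGE**2 / (4.0 * math.pi * EPSILON_0 * M_ELECTRON * C**2)
print(r_e)  # ~2.818e-15 m
```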

There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10−6 seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[81] The experimental lower bound for the electron's mean lifetime is 6.6×1028 years, at a 90% confidence level.[7][82][83]

Quantum properties

As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment.

The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.[84]:162–218

Example of an antisymmetric wave function for a quantum state of two identical fermions in a 1-dimensional box. If the particles swap position, the wave function inverts its sign.

Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.[84]:162–218

In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.[84]:162–218

Virtual particles

In a simplified picture, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter.[85] The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10−16 eV·s. Thus, for a virtual electron, Δt is at most 1.3×10−21 s.[86]
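A quick sketch (rounded constants, illustrative only) reproducing the quoted lifetime bound from Δt ≈ ħ/ΔE, with ΔE taken as the electron's rest energy:

```python
# A quick check, not from the article: maximum lifetime of a virtual
# electron from Delta_t ~ hbar / Delta_E with Delta_E = m_e c^2.
HBAR_EV_S = 6.582119569e-16      # reduced Planck constant, eV*s
REST_ENERGY_EV = 0.5109989461e6  # electron rest energy, eV

print(HBAR_EV_S / REST_ENERGY_EV)  # ~1.3e-21 s
```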

A schematic depiction of virtual electron–positron pairs appearing at random near an electron (at lower left)

While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity greater than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[87][88] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[89] Virtual particles cause a comparable shielding effect for the mass of the electron.[90]

The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[74][91] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[92]

The apparent paradox (mentioned above in the properties subsection) of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[93] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[9][94] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[87]

Interaction

An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law.[95]:58–61 When an electron is in motion, it generates a magnetic field.[84]:140 The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor.[96] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).[95]:429–434

A particle with charge q (at left) is moving with velocity v through a magnetic field B that is oriented toward the viewer. For an electron, q is negative so it follows a curved trajectory toward the top.

When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.[84]:160[97][note 6] The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.[98]
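To make the gyroradius concrete, here is a small sketch assuming a non-relativistic electron and the formula r = mv/(|q|B); the speed and field values are hypothetical examples, not from the article:

```python
# An illustrative sketch, not from the article: gyroradius of a
# non-relativistic electron, r = m v / (|q| B); the speed and field
# strength below are hypothetical example values.
M_ELECTRON = 9.10938356e-31  # kg
E_CHARGE = 1.602176634e-19   # C

def gyroradius(speed_m_s, b_tesla):
    """Non-relativistic electron gyroradius in meters (hypothetical helper)."""
    return M_ELECTRON * speed_m_s / (E_CHARGE * b_tesla)

print(gyroradius(1.0e6, 1.0e-3))  # ~5.7e-3 m for 1000 km/s in a 1 mT field
```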

Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force.[99] Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of Bremsstrahlung radiation.[100]

Here, Bremsstrahlung is produced by an electron e deflected by the electric field of an atomic nucleus. The energy change E2 − E1 determines the frequency f of the emitted photon.

An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[note 7] The scale of this wavelength shift is set by h/mec, which is known as the Compton wavelength.[101] For an electron, it has a value of 2.43×10−12 m.[70] When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between light and free electrons is called Thomson scattering or linear Thomson scattering.[102]
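A short sketch (rounded constants, illustrative only) of the Compton wavelength and the angle-dependent shift Δλ = (h/mec)(1 − cos θ):

```python
# An illustrative sketch, not from the article: the electron Compton
# wavelength h / (m_e c) and the shift at a 90-degree scattering angle.
import math

H = 6.62607015e-34           # Planck constant, J*s
M_ELECTRON = 9.10938356e-31  # kg
C = 299792458.0              # m/s

compton = H / (M_ELECTRON * C)
shift_90deg = compton * (1.0 - math.cos(math.pi / 2.0))
print(compton)      # ~2.43e-12 m
print(shift_90deg)  # at 90 degrees the shift equals the Compton wavelength
```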

The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297353×10−3, which is approximately equal to 1/137.[70]
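A quick sketch (rounded constants, illustrative only) reproducing the quoted value from the defining combination α = e²/(4πε₀ħc):

```python
# A quick check, not from the article: the fine-structure constant
# alpha = e^2 / (4 pi epsilon_0 hbar c), with rounded constants.
import math

E_CHARGE = 1.602176634e-19    # C
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
HBAR = 1.054571817e-34        # reduced Planck constant, J*s
C = 299792458.0               # m/s

alpha = E_CHARGE**2 / (4.0 * math.pi * EPSILON_0 * HBAR * C)
print(alpha, 1.0 / alpha)  # ~7.297e-3, ~137.04
```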

When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[103][104] On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[105][106]

In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z0 boson exchange, and this is responsible for neutrino-electron elastic scattering.[107]

Atoms and molecules

Probability densities for the first few hydrogen atom orbitals, seen in cross-section. The energy level of a bound electron determines the orbital it occupies, and the color reflects the probability of finding the electron at a given position.

An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus' electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.

Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[108] Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[109] To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[110]

The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so called, paired electrons) cancel each other out.[111]

The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[112] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[12] Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms.[113] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.[114]

Conductivity

A lightning discharge consists primarily of a flow of electrons.[115] The electric potential needed for lightning can be generated by a triboelectric effect.[116][117]

If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.[118]

Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass.[119] When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.[120]

At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation.[121] On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called a Fermi gas)[122] through the material much like free electrons.

Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed.[123] This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.[124]
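To see why drift is so slow, here is a sketch using v = I/(neA), assuming a copper wire; the 15 A current, 1.5 mm² cross-section and the free-electron density are hypothetical illustrative values, not from the article:

```python
# An illustrative sketch, not from the article: drift velocity
# v = I / (n e A) in copper; the current and wire cross-section are
# hypothetical household-wiring values.
E_CHARGE = 1.602176634e-19  # C
N_COPPER = 8.5e28           # approximate free-electron density of copper, m^-3

def drift_velocity(current_a, area_m2):
    """Mean electron drift speed in m/s (hypothetical helper)."""
    return current_a / (N_COPPER * E_CHARGE * area_m2)

print(drift_velocity(15.0, 1.5e-6))  # ~7e-4 m/s, under a millimeter per second
```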

Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law,[122] which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current.[125]

When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, this behavior is modeled by pairs of electrons entering a quantum state known as a Bose–Einstein condensate. These Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[126] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[127] However, the mechanism by which higher temperature superconductors operate remains uncertain.

Electrons inside conducting solids, which are themselves quasi-particles, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons.[128][129] The first carries the spin and magnetic moment, the second carries the orbital location, and the third carries the electrical charge.

Motion and energy

According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.[130]

Lorentz factor as a function of velocity. It starts at value 1 and goes to infinity as v approaches c.

The effects of special relativity are based on a quantity known as the Lorentz factor, defined as $\gamma = 1/\sqrt{1 - v^2/c^2}$, where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is:

$$K_{\mathrm{e}} = (\gamma - 1)\, m_{\mathrm{e}} c^{2},$$

where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[131] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p, where h is the Planck constant and p is the momentum.[52] For the 51 GeV electron above, the wavelength is about 2.4×10−17 m, small enough to explore structures well below the size of an atomic nucleus.[132]
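A short sketch (rounded constants, illustrative only) reproducing both quoted figures, using the ultra-relativistic approximation pc ≈ E, which is excellent at 51 GeV:

```python
# A quick check reproducing the article's figures: Lorentz factor and
# de Broglie wavelength of a 51 GeV electron, using the ultra-relativistic
# approximation p c ~ E (excellent since E >> m_e c^2).
HC_EV_M = 1.23984193e-6          # h*c in eV*m (rounded)
REST_ENERGY_EV = 0.5109989461e6  # electron rest energy, eV

energy_ev = 51.0e9
print(energy_ev / REST_ENERGY_EV)  # gamma ~1.0e5
print(HC_EV_M / energy_ev)         # ~2.4e-17 m, well below nuclear size
```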

Formation

Pair production of an electron and positron, caused by the close approach of a photon with an atomic nucleus. The lightning symbol represents an exchange of a virtual photon, thus an electric force acts. The angle between the particles is very small.[133]

The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[134] For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons:

γ + γ ↔ e+ + e−

An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.[135]

For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[136][137] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[138] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,

n → p + e− + ν̄e

For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei.[139] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[140]

Roughly one million years after the Big Bang, the first generation of stars began to form.[140] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus.[141] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni).[142]

An extended air shower generated by an energetic cosmic ray striking the Earth's atmosphere

At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole.[143] According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.

When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[144] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[145]

Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×1020 eV have been recorded.[146] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[147] More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion:

π− → μ− + ν̄μ

A muon, in turn, can decay to form an electron or positron:[148]

μ− → e− + ν̄e + νμ

Observation

Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[149]

Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which consists of waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes.[150]

The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[151][152]

In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[110] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[153] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[154]

The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time.[155][156]

The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.[157]

Plasma applications

Particle beams

During a NASA wind tunnel test, a model of the Space Shuttle is targeted by a beam of electrons, simulating the effect of ionizing gases during re-entry.[158]

Electron beams are used in welding.[159] They allow energy densities up to 107 W·cm−2 across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[160][161]

Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer.[162] This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[163]

Electron beam processing is used to irradiate materials in order to change their physical properties or to sterilize medical and food products.[164] Under intensive irradiation, electron beams can fluidise or quasi-melt glasses without a significant increase in temperature: e.g. intensive electron radiation causes a decrease in viscosity of many orders of magnitude and a stepwise decrease in its activation energy.[165]

Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[166][167]

Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect.[note 8] Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics.[168]

Imaging

Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV.[169] The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.[170][171]

The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material.[172] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[173] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[174] The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[175] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
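A sketch (rounded constants, illustrative only) reproducing the quoted wavelength from the relativistic relation (pc)² = K² + 2K·mec², with kinetic energy K = eV for an accelerating potential V:

```python
# An illustrative sketch, not from the article: relativistic de Broglie
# wavelength of an electron accelerated through V volts, using
# (p c)^2 = K^2 + 2 K m_e c^2 with kinetic energy K = e V.
import math

HC_EV_M = 1.23984193e-6          # h*c in eV*m (rounded)
REST_ENERGY_EV = 0.5109989461e6  # electron rest energy, eV

def electron_wavelength(accel_volts):
    """De Broglie wavelength in meters (hypothetical helper)."""
    k = float(accel_volts)  # kinetic energy in eV for a charge of e
    pc = math.sqrt(k**2 + 2.0 * k * REST_ENERGY_EV)
    return HC_EV_M / pc

print(electron_wavelength(100000))  # ~3.7e-12 m, i.e. 0.0037 nm
```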

Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors: a beam of electrons passes through a slice of material and is then projected by lenses onto a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.[176][177][178]

Other applications

In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FELs can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.[179]
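The resonance frequency mentioned above is set by the undulator geometry and the beam energy. The sketch below is illustrative (the formula is the standard planar-undulator resonance condition λ = (λu / 2γ²)(1 + K²/2), which is not given in the text, and the 3 cm period, K = 1, and 1 GeV beam energy are assumed example parameters); it shows that such a machine radiates in the soft-X-ray range.

M_E_C2_MEV = 0.51099895  # electron rest energy in MeV

def fel_resonant_wavelength_m(lambda_u_m: float, K: float, beam_energy_mev: float) -> float:
    """On-axis resonant wavelength (m) of a planar undulator."""
    gamma = beam_energy_mev / M_E_C2_MEV          # Lorentz factor of the electron beam
    return lambda_u_m / (2.0 * gamma ** 2) * (1.0 + K ** 2 / 2.0)

# Assumed parameters: 3 cm undulator period, K = 1, 1 GeV beam -> ~5.9 nm (soft X-rays)
print(f"{fel_resonant_wavelength_m(0.03, 1.0, 1000.0) * 1e9:.1f} nm")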

Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets.[180] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[181] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.[182]

Green Europe is Killing 40,000 Poor People a Year

Europe’s suicidal green energy policies are killing at least 40,000 people a year.

That’s just the number estimated to have died in the winter of 2014 because they were unable to afford fuel bills driven artificially high by renewable energy tariffs.

But the real death toll will certainly be much higher when you take into account the air pollution caused when Germany decided to abandon nuclear power after Fukushima and ramp up its coal-burning instead; and also when you consider the massive increase in diesel pollution –  the result of EU-driven anti-CO2 policies – which may be responsible for as many as 500,000 deaths a year.

But even that 40,000 figure is disgraceful enough, given that greenies are always trying to take the moral high ground and tell us that people who oppose their policies are uncaring and selfish.

It comes from an article in the German online magazine FOCUS about Energiewende (Energy Transition) – the disastrous policy I mentioned earlier this week whereby Germany is committed to abandoning cheap, effective fossil fuel power and converting its economy to expensive, inefficient renewables (aka unreliables) instead.

According to FOCUS around ten percent of the European population are now living in ‘energy poverty’ because electricity prices have risen, on average, by 42 percent in the last eight years. In Germany alone this amounts to seven million households.

The article is titled: The grand electricity lie: why electricity is becoming a luxury.

The reason, of course, is that green energy policies have made it that way. Many of these have emanated from the European Union, which in turn has taken its cue from the most Green-infested nation in Europe – Germany.

Germany has long been obsessed with all things environmental. Besides having invented the dodgy ‘science’ of ecology in the 1880s it was also, of course, between 1933 and 1945 the home of Europe’s official “Greenest government ever” – the first to ban smoking on public transport, an enthusiastic supporter of organic food, national parks and population control.

The Greens have also since the early Eighties been arguably the most influential party in Germany. Though their percentage of the vote has rarely risen above the 10 percent mark, they have punched above their weight either as a coalition partner in government or as a pressure group outside it.

For example, the reason that after Fukushima, Chancellor Angela Merkel completely changed Germany’s policy on nuclear power was her terror of the Greens who were suddenly polling 25 percent of the national vote.

It was the Greens too who were responsible for Energiewende – the policy which is turning Germany into the opposite of what most of us imagine it to be: not the economic powerhouse we’ve been taught to admire all these years, but a gibbering basket case.

This becomes clear in an investigation by the German newspaper Handelsblatt, which reports the horrendous industrial decline brought about by green energy policies.
Hit hardest, of course, are the traditional utilities. After all, the energy transition was designed to seal their coffin. Once the proverbial investment for widows and orphans because their revenue streams were considered rock-solid — these companies have been nothing short of decimated. With 77 nuclear and fossil-fuel power plants taken off the grid in recent years, Germany’s four big utilities — E.ON, RWE, Vattenfall and EnBW — have had to write off a total of €46.2 billion since 2011.
RWE and E.ON alone have debt piles of €28.2 billion and €25.8 billion, respectively, according to the latest company data. Losses at Düsseldorf-based E.ON rose to €6.1 billion for the first three quarters of 2015. Both companies have slashed the dividends on their shares, which have lost up to 76 percent of their value. Regional municipalities, which hold 24 percent of RWE’s shares, are scrambling to plug the holes left in their budgets by the missing dividends.

Thousands of workers have already been let go, disproportionately hitting communities in Germany‘s rust belt that are already struggling with blight. RWE has cut 7,000 jobs since 2011. At E.ON, the work force has shrunk by a third, a loss of over 25,000 jobs. Just as banks spun off their toxic assets and unprofitable operations into “bad banks” during the financial crisis, Germany’s utilities are reorganizing to cut their losses.
Why are the Germans enacting such lunacy? Aren’t they supposed to be the sensible ones?

Well yes, up to a point.

As a seasoned German-watcher explains to me, it’s with good reason that one of Germany’s greatest contributions to the world’s vocabulary is the word Angst.

The Germans are absolutely riddled with it – always have been – and it explains the two otherwise inexplicable policies with which Germany is currently destroying itself.

One, of course, is Energiewende caused by a misplaced, but deeply-held neurosis about stuff like diminishing scarce resources and “global warming” and the evils of Atomkraft (Nuclear power).

The other is its similarly insane immigration policies – the result of the neurosis that if it doesn't replace its declining population with a supposedly healthy influx of immigrant workers, then it will wither and cease to be the great force it was under people like Frederick the Great, Bismarck and that chap in the 1930s, and that no one will know or care where Germany is any more.

Ironically, though, if national decline is what the Germans most fear, then the two policies they are pursuing to avoid it happen to be the ones most likely to hasten it.

This is sad. Sad for Germany which, for all its faults, has produced some pretty impressive things over the years: Beethoven; Kraftwerk; Goethe; Porsche; autobahns; those two girls on Deutschland 83.

And even sadder for those of us who, through absolutely no fault of our own, happen to be shackled politically and economically to a socialistic superstate called the European Union, most of whose rules are decided by Germans over whom we have no democratic control.

Oh and by the way, Greenies: as I never tire of reminding you, you insufferable tossers, not a single one of the “future generations” you constantly cite in your mantras as justification for your disgusting, immoral and anti-free-market environmental policies actually exists.

But the people you’re killing now as a result of those environmental policies DO exist.

Or rather they did, till you choked or froze them to death, you vile, evil, eco-Nazi scumbags.

Boltzmann brain

From Wikipedia, the free encyclopedia
 
Ludwig Boltzmann, after whom Boltzmann brains are named

In physics thought experiments, a Boltzmann brain is a self-aware entity that arises due to extremely rare random fluctuations out of a state of thermodynamic equilibrium. For example, in a homogeneous Newtonian soup, theoretically by sheer chance all the atoms could bounce off and stick to one another in such a way as to assemble a functioning human brain (though this would, on average, take vastly longer than the current lifetime of the Universe).

The idea is indirectly named after the Austrian physicist Ludwig Boltzmann (1844–1906), who in 1896 published a theory that the Universe is observed to be in a highly improbable non-equilibrium state because only when such states randomly occur can brains exist to be aware of the Universe. The fatal flaw with Boltzmann's "Boltzmann universe" hypothesis is that the most common thermal fluctuations are as close to equilibrium overall as possible; thus, by any reasonable criteria, human brains in a Boltzmann universe with myriad neighboring stars would be vastly outnumbered by "Boltzmann brains" existing alone in an empty universe.

Boltzmann brains gained new relevance around 2002, when some cosmologists started to become concerned that, in many existing theories about the Universe, human brains in the current Universe appear to be vastly outnumbered by Boltzmann brains in the future Universe who, by chance, have the exact same perceptions that we do; this leads to the absurd conclusion that statistically we ourselves are likely to be Boltzmann brains. Such a reductio ad absurdum argument is sometimes used to argue against certain theories of the Universe. When applied to more recent theories about the multiverse, Boltzmann brain arguments are part of the unsolved measure problem of cosmology.

Boltzmann universe

In 1896, mathematician Ernst Zermelo advanced an incorrect theory that the Second Law of Thermodynamics was absolute rather than statistical. Zermelo bolstered his theory by pointing out that the Poincaré recurrence theorem shows statistical entropy in a closed system must eventually be a periodic function; therefore, the Second Law, which is always observed to increase entropy, is unlikely to be statistical. To counter Zermelo's argument, Austrian physicist Ludwig Boltzmann advanced two theories. The first theory, now known to be the correct one, is that the Universe started for some unknown reason in a low-entropy state. The second, alternative, theory, published in 1896 but attributed in 1895 to Boltzmann's assistant Ignaz Schütz, is the "Boltzmann universe" scenario. In this scenario, the Universe spends the vast majority of eternity in a featureless state of heat death; however, over enough eons, eventually a very rare thermal fluctuation will occur where atoms bounce off each other in exactly such a way that it creates substructures such as our entire observable universe. Boltzmann argues that, while most of the Universe is featureless, we do not see those regions because they are devoid of intelligent life; to Boltzmann, it is unremarkable that we view solely the interior of our Boltzmann universe, as that is the only place where intelligent life lives. (This may be the first use in modern science of the anthropic principle).[1][2]

In 1931, astronomer Arthur Eddington pointed out that, because a large fluctuation is vastly and exponentially less probable than a small fluctuation, observers in Boltzmann universes will be vastly outnumbered by observers in smaller fluctuations. Physicist Richard Feynman published a similar counterargument within his widely-read 1964 Feynman Lectures on Physics. By 2004 physicists had pushed Eddington's observation to its logical conclusion: the most numerous observers in an eternity of thermal fluctuations would be minimal "Boltzmann brains" popping up in an otherwise featureless universe.[1][3]

Creation

Given enough time, every possible structure is formed via random fluctuation. Boltzmann-style thought experiments focus on structures like human brains that are presumably self-aware observers. Given any arbitrary criteria for what constitutes a Boltzmann brain (or planet, or universe), smaller structures that minimally and barely meet the criteria are vastly and exponentially more common than larger structures; a rough analogy is how the odds of a real English word showing up when you shake a box of Scrabble letters are greater than the odds that a whole English sentence or paragraph will form.[4] The average timescale required for formation of a Boltzmann brain is vastly greater than the current age of the Universe. In modern physics, Boltzmann brains can be formed either by quantum fluctuation, or by a thermal fluctuation generally involving nucleation.[1]
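The Scrabble analogy can be made quantitative with a toy Monte Carlo (an illustration, not from the original article): the chance that a uniform random draw of letters spells a given target is 26^-n, so every extra letter of "structure" costs another factor of 26.

import random
import string

def hit_rate(target: str, trials: int = 1_000_000) -> float:
    """Fraction of uniform random strings of len(target) letters that spell `target`."""
    n = len(target)
    return sum(
        "".join(random.choices(string.ascii_lowercase, k=n)) == target
        for _ in range(trials)
    ) / trials

for word in ("at", "cat"):
    print(word, hit_rate(word))   # expected: 26^-2 ~ 1.5e-3, 26^-3 ~ 5.7e-5

A whole sentence is rarer by dozens of orders of magnitude, just as a fluctuated galaxy is unimaginably rarer than a lone fluctuated brain.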

Via quantum fluctuation

By one calculation, a Boltzmann brain appears as a quantum fluctuation in the vacuum after a time interval of 10^(10^50) years. This fluctuation can occur even in a true Minkowski vacuum (a flat spacetime vacuum lacking vacuum energy). Quantum mechanics heavily favors smaller fluctuations that "borrow" the least amount of energy from the vacuum. Typically, a quantum Boltzmann brain will suddenly appear from the vacuum (alongside an equivalent amount of virtual antimatter), remain only long enough to have a single coherent thought or observation, and then disappear into the vacuum as suddenly as it appeared. Such a brain is completely self-contained, and can never radiate energy out to infinity.[5]

Via nucleation

Current evidence suggests that the observable Universe is not a Minkowski space, but rather a de Sitter universe with a positive cosmological constant. In a de Sitter vacuum (but not in a Minkowski vacuum), a Boltzmann brain can form via nucleation of non-virtual particles gradually assembled by chance from the Hawking radiation emitted from the de Sitter space's bounded cosmological horizon. One estimate for the average time required until nucleation is around 10^(10^69) years.[5] A typical nucleated Boltzmann brain will, after it finishes its activity, cool off to absolute zero and eventually completely decay, as any isolated object would in the vacuum of space. Unlike the quantum fluctuation case, the Boltzmann brain will radiate energy out to infinity. In nucleation, the most common fluctuations are as close to thermal equilibrium overall as possible given whatever arbitrary criteria are provided for labeling a fluctuation a "Boltzmann brain".[1]

Theoretically a Boltzmann brain can also form, albeit again with a tiny probability, at any time during the matter-dominated early universe.[6]

Modern Boltzmann brain problems

Many cosmologists believe that if a theory predicts that Boltzmann brains with human-like experiences vastly outnumber normal human brains, then that theory should be rejected or disfavored. Others argue that brains produced via quantum fluctuation, and maybe even brains produced via nucleation in the de Sitter vacuum, don't count as observers. Quantum fluctuations are easier to exclude than nucleated brains, as quantum fluctuations can more easily be targeted by straightforward criteria (such as their lack of interaction with the environment at infinity).[1][5]

Some cosmologists believe that a better understanding of the degrees of freedom in the quantum vacuum of holographic string theory can solve the Boltzmann brain problem.[7]

In single-Universe scenarios

In a single de Sitter Universe with a cosmological constant, and starting from any finite spatial slice, the number of "normal" observers is finite and bounded by the heat death of the Universe. If the Universe lasts forever, the number of nucleated Boltzmann brains is, in most models, infinite; cosmologists such as Alan Guth worry that this would make it seem "infinitely unlikely for us to be normal brains".[4] One caveat is that if the Universe is a false vacuum that locally decays into a Minkowski or a big crunch-bound anti-de Sitter space in less than 20 billion years, then infinite Boltzmann nucleation is avoided. (If the average time for local false-vacuum decay exceeds 20 billion years, Boltzmann brain nucleation is still infinite, as the Universe increases in size faster than local vacuum collapses destroy the portions of the Universe within the collapses' future light cones.) Proposed hypothetical mechanisms to destroy the universe within that timeframe range from superheavy gravitinos to a heavier-than-observed top quark triggering "death by Higgs".[8][9][10]

If no cosmological constant exists, and if the presently observed vacuum energy is from quintessence that will eventually completely dissipate, then infinite Boltzmann nucleation is also avoided.[11]

In eternal inflation

One class of solutions to the Boltzmann brain problem makes use of differing approaches to the measure problem in cosmology: in infinite multiverse theories, the ratio of normal observers to Boltzmann brains depends on how infinite limits are taken. Measures might be chosen to avoid appreciable fractions of Boltzmann brains.[12][13][14] Unlike the single-universe case, one challenge in finding a global solution in eternal inflation is that all possible string landscapes must be summed over; in some measures, having even a small fraction of universes infested with Boltzmann brains causes the measure of the multiverse as a whole to be dominated by Boltzmann brains.[10][15] The measure problem in cosmology also grapples with the ratio of normal observers to abnormally early observers. In measures such as the proper time measure that suffer from an extreme "youngness" problem, the typical observer is a "Boltzmann baby" formed by rare fluctuation in an extremely hot, early universe.[6]

Sweating may be why we became the dominant species on Earth

The French Open. Credit: Getty Images.

Persistence truly does pay off, even if you have to endure the perspiration that comes with it. This is true right down to the biological and evolutionary level, and is in fact how we got here, as the apex predator of the planet. Millions of years ago, digestion consumed most of the calories we ate. These days, our brain takes 20 times more energy than any other organ in the body. So for our brain to develop, we needed a higher density food. Meat—obtained from hunting and killing other animals—fit the bill.

One theory of human evolution states that our ancestors began eating meat about 2 million years ago, which rapidly expanded the development of their brains. Since meat packed a lot of calories and fat, a meat-based diet allowed the brain to grow larger. But how did early humans get that meat?

One way was eating carcasses, just like pack animals of today still do. The human tapeworm evolved from the kind that infects dogs and hyenas, which means that at some point, we must’ve fed on the same carcasses as them, and came into contact with their saliva. But this wasn’t the only way we obtained meat.


Ancient hominids must’ve fed on carcasses much like wild dogs and hyenas, before moving on to hunting. Wild African Dogs consuming a blue wildebeest. Credit: by Masteraah, Madikwe Game Reserve, South Africa.

Early humans must’ve taken part in hunting too. Yet, hominins didn’t begin using stones and sticks for hunting until about 200,000 years ago. So between 2.3 million and 200,000 years ago, how did early humans hunt? According to journalist and writer Christopher McDougall, author of the book Born to Run, we ran game animals to death in order to feast upon them.

The ability to run long distances and to sweat (so as not to overheat) allowed our ancestors to wear out other animals. Sweating was the key factor. Consider a gazelle running over long distances while being chased by our progenitors. The fact that they can sweat and the gazelle cannot means they can last far longer in the heat of the African savannah.

Over time, game animals like the gazelle become overheated and have to stop to catch their breath, allowing early hunters to make short work of them, a strategy known today as persistence hunting. After about five miles or so, a gazelle needs to stop, rest, and breathe, or risk damaging itself, even dying. Such an animal can only fully extend its diaphragm when not running, whereas walking upright freed our ancestors from that constraint.

Human sweat is actually a very efficient cooling system, arguably the most effective in the animal kingdom.


Sweating may also act as a defense mechanism. Credit: Getty Images.

Research shows that several traits evolved around the same time, about 1.89 million years ago: walking upright, hairless skin, sweating, and the ability to run great distances. One reason for all of these rapid changes might have been climate change. The Earth warmed over this period, shifting the habitat from forest to open grassland and allowing our ancestors to walk upright and even run in open space. It may have also forced them to hunt animals for food.

Sweating, in addition to being a highly advanced cooling system, may have also acted as a defense mechanism. Anyone who’s ever played shirtless tackle football in the summertime knows how hard it is to catch someone who’s slick and sweaty. So the next time you’re sweating bullets in some social situation, take a moment to calmly reflect on the fact that despite the awkwardness of your perspiration, this biological function is the main reason why you’re able to suffer such indignities in the first place.

Chandrasekhar limit

From Wikipedia, the free encyclopedia

The Chandrasekhar limit is the maximum mass of a stable white dwarf star. The currently accepted value of the Chandrasekhar limit is about 1.4 M☉ (2.765×10^30 kg).[1][2][3]

White dwarfs resist gravitational collapse primarily through electron degeneracy pressure (compare main-sequence stars, which resist collapse through thermal pressure). The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Consequently, a white dwarf with a mass greater than the limit is subject to further gravitational collapse, evolving into a different type of stellar remnant, such as a neutron star or black hole. Those with masses under the limit remain stable as white dwarfs.[4] Collapse is not inevitable, however: a white dwarf pushed toward the limit typically explodes rather than undergoing collapse.

The limit is named after Subrahmanyan Chandrasekhar, the Indian astrophysicist who improved the accuracy of the calculation in 1930, at the age of 20, during his voyage from India to England, by computing the limit for a polytrope model of a star in hydrostatic equilibrium and comparing it with the earlier limit found by E. C. Stoner for a uniform-density star. Importantly, the existence of a limit, based on the conceptual breakthrough of combining relativity with Fermi degeneracy, was first established in separate papers published by Wilhelm Anderson and E. C. Stoner in 1929. The limit was initially ignored by the community of scientists because such a limit would logically require the existence of black holes, which were considered a scientific impossibility at the time. It has been noted that Stoner and Anderson are a commonly forgotten part of this history in the astronomy community.[5][6]

Physics

Radius–mass relations for a model white dwarf. The green curve uses the general pressure law for an ideal Fermi gas, while the blue curve is for a non-relativistic ideal Fermi gas. The black line marks the ultrarelativistic limit.

Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons increases on compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure.
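As a rough quantitative sketch (an illustration, not from the original article; the density is an assumed example value), the pressure of a cold, nonrelativistic degenerate electron gas is P = ((3π²)^(2/3)/5)(ħ²/me) ne^(5/3). Evaluating it at a typical white-dwarf density shows how enormous the degeneracy pressure becomes:

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
M_H = 1.6735575e-27      # hydrogen atom mass, kg

def degeneracy_pressure_pa(rho: float, mu_e: float = 2.0) -> float:
    """Nonrelativistic electron degeneracy pressure (Pa) at mass density rho (kg/m^3)."""
    n_e = rho / (mu_e * M_H)   # electron number density, m^-3
    return (3.0 * math.pi ** 2) ** (2.0 / 3.0) / 5.0 * HBAR ** 2 / M_E * n_e ** (5.0 / 3.0)

# An assumed interior density of 1e9 kg/m^3 gives a pressure of order 3e21 Pa.
print(f"{degeneracy_pressure_pa(1e9):.2e} Pa")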

In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form P = K1ρ^(5/3), where P is the pressure, ρ is the mass density, and K1 is a constant. Solving the hydrostatic equation leads to a model white dwarf that is a polytrope of index 3/2, and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass.[7]

As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form P = K2ρ^(4/3). This yields a polytrope of index 3, which has a total mass, Mlimit say, depending only on K2.[8]

For a fully relativistic treatment, the equation of state used interpolates between the equations P = K1ρ^(5/3) for small ρ and P = K2ρ^(4/3) for large ρ. When this is done, the model radius still decreases with mass, but becomes zero at Mlimit. This is the Chandrasekhar limit.[9] The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. μe has been set equal to 2. Radius is measured in standard solar radii[10] or kilometers, and mass in standard solar masses.

Calculated values for the limit vary depending on the nuclear composition of the mass.[11] Chandrasekhar[12] (eq. 36),[9] (eq. 58),[13] (eq. 43) gives the following expression, based on the equation of state for an ideal Fermi gas:

M_limit = (ω₃⁰ √(3π) / 2) (ħc/G)^(3/2) / (μe mH)²

where ħ is the reduced Planck constant, c is the speed of light, G is the gravitational constant, mH is the mass of the hydrogen atom, μe is the average molecular weight per electron (which depends upon the chemical composition of the star), and ω₃⁰ ≈ 2.018 is a constant connected with the solution to the Lane–Emden equation. As √(ħc/G) is the Planck mass MPl, the limit is of the order of MPl³ / mH².
A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature.[11] Lieb and Yau[14] have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation.
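As a numerical sketch of the ideal-Fermi-gas formula above (an illustration, not part of the original article), one can integrate the n = 3 Lane–Emden equation, θ'' + (2/ξ)θ' + θ³ = 0 with θ(0) = 1 and θ'(0) = 0, to obtain ω₃⁰ = −ξ₁²θ'(ξ₁) at the first zero ξ₁, and then evaluate the limit for μe = 2:

import math

HBAR, C, G = 1.054571817e-34, 2.99792458e8, 6.6743e-11  # SI constants
M_H, M_SUN = 1.6735575e-27, 1.989e30                    # hydrogen atom mass, solar mass (kg)

def lane_emden_omega(h: float = 1e-4) -> float:
    """Integrate the n=3 Lane-Emden equation with RK4; return -xi1^2 * theta'(xi1)."""
    def f(x, t, dt):                 # the ODE rewritten as a first-order system
        return dt, -t ** 3 - 2.0 * dt / x
    xi, theta, dtheta = h, 1.0 - h * h / 6.0, -h / 3.0   # series expansion near xi = 0
    while theta > 0.0:               # step until theta crosses its first zero
        k1 = f(xi, theta, dtheta)
        k2 = f(xi + h / 2, theta + h / 2 * k1[0], dtheta + h / 2 * k1[1])
        k3 = f(xi + h / 2, theta + h / 2 * k2[0], dtheta + h / 2 * k2[1])
        k4 = f(xi + h, theta + h * k3[0], dtheta + h * k3[1])
        theta += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dtheta += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xi += h
    return -xi * xi * dtheta         # ~2.018 at the first zero xi1 ~ 6.897

omega = lane_emden_omega()
mu_e = 2.0   # average molecular weight per electron (carbon/oxygen composition)
m_limit = omega * math.sqrt(3.0 * math.pi) / 2.0 * (HBAR * C / G) ** 1.5 / (mu_e * M_H) ** 2
print(f"omega = {omega:.3f}, M_limit = {m_limit / M_SUN:.2f} solar masses")   # ~1.43

The result, about 1.43–1.44 M☉, matches the accepted value quoted at the top of this article.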

History

In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei that obey Fermi–Dirac statistics.[15] This Fermi gas model was then used by the British physicist Edmund Clifton Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming they were homogeneous spheres.[16] Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately 1.37×10^30 kg.[17] In 1930, Stoner derived the internal energy-density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately 2.19×10^30 kg (for μe = 2.5).[18] Stoner went on to derive the pressure-density equation of state, which he published in 1932.[19] These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter.[20] Frenkel's work, however, was ignored by the astronomical and astrophysical community.[21]

A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, where the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas.[22] In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state,[7] and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above.[8][9][12][23] Chandrasekhar reviews this work in his Nobel Prize lecture.[13] This value was also computed in 1932 by the Soviet physicist Lev Davidovich Landau,[24] who, however, did not apply it to white dwarfs.

Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied:
The star has to go on radiating and radiating and contracting and contracting until, I suppose, it gets down to a few km radius, when gravity becomes strong enough to hold in the radiation, and the star can at last find peace. … I think there should be a law of Nature to prevent a star from behaving in this absurd way![25]
Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law P = K1ρ5/3 universally applicable, even for large ρ.[26] Although Niels Bohr, Fowler, Wolfgang Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar.[27], pp. 110–111 Through the rest of his life, Eddington held to his position in his writings,[28][29][30][31][32] including his work on his fundamental theory.[33] The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar.[27] In Miller's view:
Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing. As a result, Chandra's work was almost forgotten.[27]:150

Applications

The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process are exhausted, and the core collapses, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse.[34]

If a main-sequence star is not too massive (less than approximately 8 solar masses), it eventually sheds enough mass to form a white dwarf having mass below the Chandrasekhar limit, which consists of the former core of the star. For more-massive stars, electron degeneracy pressure does not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities destroy the star completely.)[35][36][37][38] During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos.[34], pp. 1046–1047. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy, on the order of 10^46 joules (100 foe). Most of this energy is carried away by the emitted neutrinos.[39] This process is believed to be responsible for supernovae of types Ib, Ic, and II.[34]

Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon-oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova.[40], §5.1.2

A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, M_V is approximately −19.3, with a standard deviation of no more than 0.3.[40], eq. (1) A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy.
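That factor can be checked directly (an illustrative calculation, not the source's): a 1-sigma interval spans 2 × 0.3 = 0.6 magnitudes, and a magnitude difference Δm corresponds to a flux ratio of 10^(0.4·Δm).

ratio = 10 ** (0.4 * 0.6)   # luminosity ratio across a +/-0.3 mag (1-sigma) interval
print(f"{ratio:.2f}")       # ~1.74, indeed a factor of less than 2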

Super-Chandrasekhar mass supernovae

In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf that grew to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova" by University of Oklahoma astronomer David R. Branch, may have been spinning so fast that a centrifugal tendency allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles.[41][42][43]
Since the observation of the Champagne Supernova in 2003, more very bright type Ia supernovae have been observed that are thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if and SN 2009dc.[44] The super-Chandrasekhar mass white dwarfs that gave rise to these supernovae are believed to have had masses up to 2.4–2.8 solar masses.[44] One potential explanation for the Champagne Supernova was that it resulted from an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large-asphericity theory unlikely.[44]

Tolman–Oppenheimer–Volkoff limit

After a supernova explosion, a neutron star may be left behind. These objects are even more compact than white dwarfs and are also supported, in part, by degeneracy pressure. A neutron star, however, is so massive and compressed that electrons and protons have combined to form neutrons, and the star is thus supported by neutron degeneracy pressure (as well as short-range repulsive neutron-neutron interactions mediated by the strong force) instead of electron degeneracy pressure. The limiting value for neutron star mass, analogous to the Chandrasekhar limit, is known as the Tolman–Oppenheimer–Volkoff limit.

Lie point symmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_point_symmetry     ...