
Thursday, September 12, 2024

History of molecular theory

Space-filling model of the H2O molecule.

In chemistry, the history of molecular theory traces the origins of the concept or idea of the existence of strong chemical bonds between two or more atoms.

A modern conceptualization of molecules began to develop in the 19th century along with experimental evidence for pure chemical elements and how individual atoms of different chemical elements such as hydrogen and oxygen can combine to form chemically stable molecules such as water molecules.

Ancient world

The modern concept of molecules can be traced back towards pre-scientific and Greek philosophers such as Leucippus and Democritus who argued that all the universe is composed of atoms and voids.

Circa 450 BC, Empedocles imagined fundamental elements (fire, earth, air, and water) and "forces" of attraction and repulsion allowing the elements to interact. Prior to this, Heraclitus had claimed that fire or change was fundamental to our existence, created through the combination of opposite properties.

In the Timaeus, Plato, following Pythagoras, considered mathematical entities such as number, point, line and triangle as the fundamental building blocks or elements of this ephemeral world, and considered the four elements of fire, air, water and earth as states of substances through which the true mathematical principles or elements would pass. A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies.

The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe.

Greek atomism

The earliest views on the shapes and connectivity of atoms were those proposed by Leucippus, Democritus, and Epicurus, who reasoned that the solidness of a material corresponded to the shape of the atoms involved. Thus, iron atoms are solid and strong, with hooks that lock them into a solid; water atoms are smooth and slippery; salt atoms, because of their taste, are sharp and pointed; and air atoms are light and whirling, pervading all other materials.

Democritus was the main proponent of this view. Using analogies based on the experiences of the senses, he gave a picture or an image of an atom in which atoms were distinguished from each other by their shape, their size, and the arrangement of their parts. Moreover, connections were explained by material links in which single atoms were supplied with attachments: some with hooks and eyes, others with balls and sockets (see diagram).

A water molecule as the hook-and-eye model might have represented it. Leucippus, Democritus, Epicurus, Lucretius, and Gassendi adhered to such a conception. Note that the composition of water was not known before Avogadro (c. 1811).

17th century

With the rise of scholasticism and the decline of the Roman Empire, the atomic theory was abandoned for many ages in favor of the various four-element theories and, later, alchemical theories. The 17th century, however, saw a resurgence of the atomic theory, primarily through the works of Gassendi and Newton.

Among the scientists of that time, Gassendi studied ancient history deeply, wrote major works on Epicurus's natural philosophy, and was a persuasive propagandist of it. He reasoned that the size and shape of atoms moving in a void could account for the properties of matter. Heat was due to small, round atoms; cold, to pyramidal atoms with sharp points, which accounted for the pricking sensation of severe cold; and solids were held together by interlacing hooks.

Newton, though he acknowledged the various atom attachment theories in vogue at the time, i.e. "hooked atoms", "glued atoms" (bodies at rest), and the "stick together by conspiring motions" theory, rather believed, as famously stated in "Query 31" of his 1704 Opticks, that particles attract one another by some force, which "in immediate contact is extremely strong, at small distances performs the chemical operations, and reaches not far from particles with any sensible effect."

In a more concrete manner, however, the concept of aggregates or units of bonded atoms, i.e. "molecules", traces its origins to Robert Boyle's 1661 hypothesis, in his famous treatise The Sceptical Chymist, that matter is composed of clusters of particles and that chemical change results from the rearrangement of the clusters. Boyle argued that matter's basic elements consisted of various sorts and sizes of particles, called "corpuscles", which were capable of arranging themselves into groups.

In 1680, using the corpuscular theory as a basis, French chemist Nicolas Lemery stipulated that the acidity of any substance consisted in its pointed particles, while alkalis were endowed with pores of various sizes. A molecule, according to this view, consisted of corpuscles united through a geometric locking of points and pores.

18th century

Étienne François Geoffroy’s 1718 Affinity Table: at the head of the column is a substance with which all the substances below can combine.

An early precursor to the idea of bonded "combinations of atoms", was the theory of "combination via chemical affinity". For example, in 1718, building on Boyle's conception of combinations of clusters, the French chemist Étienne François Geoffroy developed theories of chemical affinity to explain combinations of particles, reasoning that a certain alchemical "force" draws certain alchemical components together. Geoffroy's name is best known in connection with his tables of "affinities" (tables des rapports), which he presented to the French Academy in 1718 and 1720.

These were lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents. These tables retained their vogue for the rest of the century, until displaced by the profounder conceptions introduced by C. L. Berthollet.

In 1738, the Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic.
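
As a rough numerical illustration of Bernoulli's picture (a modern aside, not part of his 1738 treatment), the sketch below estimates the pressure of an ideal gas directly from the average kinetic energy of its molecules via the standard kinetic-theory relation PV = (2/3)·N·⟨KE⟩; the particle number, volume, and temperature are arbitrary illustrative values.

```python
# Minimal sketch: gas pressure as the result of molecular motion.
# Assumes an ideal monatomic gas; all numbers are illustrative.
k_B = 1.380649e-23   # Boltzmann constant, J/K
N = 2.5e25           # number of molecules in the container (illustrative)
V = 1.0              # container volume, m^3
T = 300.0            # temperature, K

mean_ke = 1.5 * k_B * T                   # average kinetic energy per molecule
pressure = (2.0 / 3.0) * N * mean_ke / V  # kinetic-theory relation PV = (2/3) N <KE>

print(f"mean kinetic energy per molecule: {mean_ke:.3e} J")
print(f"estimated pressure: {pressure:.3e} Pa")  # roughly 1e5 Pa, about 1 atm
```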

In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds. If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and similarly for the other combinations of ultimate particles:

William Higgins' combinations of ultimate particles (1789)

19th century

John Dalton's union of atoms combined in ratios (1808)

Similar to these views, in 1803 John Dalton took the atomic weight of hydrogen, the lightest element, as unity and determined, for example, that the ratio for nitrous anhydride was 2 to 3, which gives the formula N2O3. Dalton incorrectly imagined that atoms "hooked" together to form molecules. Later, in 1808, Dalton published his famous diagram of combined "atoms":

Amedeo Avogadro created the word "molecule". In his 1811 paper "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", he essentially states, according to Partington's A Short History of Chemistry, that:

The smallest particles of gases are not necessarily simple atoms, but are made up of a certain number of these atoms united by attraction to form a single molecule.

Note that this quote is not a literal translation. Avogadro uses the name "molecule" for both atoms and molecules. Specifically, he uses the name "elementary molecule" when referring to atoms and, to complicate the matter, also speaks of "compound molecules" and "composite molecules".

During his stay in Vercelli, Avogadro wrote a concise note (memoria) in which he declared the hypothesis of what we now call Avogadro's law: equal volumes of gases, at the same temperature and pressure, contain the same number of molecules. This law implies that the relationship between the weights of equal volumes of different gases, at the same temperature and pressure, corresponds to the relationship between their respective molecular weights. Hence, relative molecular masses could now be calculated from the masses of gas samples.
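
A short sketch of the arithmetic this makes possible (using modern approximate gas densities purely for illustration): at the same temperature and pressure, equal volumes contain equal numbers of molecules, so the ratio of the masses of equal volumes equals the ratio of the molecular masses.

```python
# Relative molecular masses from gas densities at the same T and P,
# as implied by Avogadro's law. Densities are approximate modern values
# near 0 degrees C and 1 atm, used only for illustration.
densities_g_per_L = {
    "hydrogen": 0.0899,
    "nitrogen": 1.2506,
    "oxygen": 1.429,
    "carbon dioxide": 1.977,
}

rho_h2 = densities_g_per_L["hydrogen"]
for gas, rho in densities_g_per_L.items():
    # Molecular mass relative to hydrogen (H2 = 2 on the H = 1 scale).
    relative_mass = 2.0 * rho / rho_h2
    print(f"{gas:14s}: relative molecular mass ~ {relative_mass:5.1f}")
```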

Avogadro developed this hypothesis to reconcile Joseph Louis Gay-Lussac's 1808 law on volumes and combining gases with Dalton's 1803 atomic theory. The greatest difficulty Avogadro had to resolve was the huge confusion at that time regarding atoms and molecules—one of the most important contributions of Avogadro's work was clearly distinguishing one from the other, admitting that simple particles too could be composed of molecules and that these are composed of atoms. Dalton, by contrast, did not consider this possibility. Curiously, Avogadro considers only molecules containing even numbers of atoms; he does not say why odd numbers are left out.

In 1826, building on the work of Avogadro, the French chemist Jean-Baptiste Dumas states:

Gases in similar circumstances are composed of molecules or atoms placed at the same distance, which is the same as saying that they contain the same number in the same volume.

In coordination with these concepts, in 1833 the French chemist Marc Antoine Auguste Gaudin presented a clear account of Avogadro's hypothesis, regarding atomic weights, by making use of "volume diagrams", which clearly show both semi-correct molecular geometries, such as a linear water molecule, and correct molecular formulas, such as H2O:

Marc Antoine Auguste Gaudin's volume diagrams of molecules in the gas phase (1833)

In two papers outlining his "theory of atomicity of the elements" (1857–58), Friedrich August Kekulé was the first to offer a theory of how every atom in an organic molecule was bonded to every other atom. He proposed that carbon atoms were tetravalent, and could bond to themselves to form the carbon skeletons of organic molecules.

In 1856, the Scottish chemist Archibald Couper began research on the bromination of benzene at the laboratory of Charles Wurtz in Paris. One month after Kekulé's second paper appeared, Couper's independent and largely identical theory of molecular structure was published. He offered a very concrete idea of molecular structure, proposing that atoms joined to each other like modern-day Tinkertoys in specific three-dimensional structures. Couper was the first to use lines between atoms, in conjunction with the older method of using brackets, to represent bonds, and he also postulated straight chains of atoms as the structures of some molecules and ring-shaped molecules for others, such as tartaric acid and cyanuric acid. In later publications, Couper's bonds were represented using straight dotted lines (although it is not known whether this was the typesetter's preference), as with alcohol and oxalic acid below:

Archibald Couper's molecular structures, for alcohol and oxalic acid, using elemental symbols for atoms and lines for bonds (1858)

In 1861, an unknown Vienna high-school teacher named Joseph Loschmidt published, at his own expense, a booklet entitled Chemische Studien I, containing pioneering molecular images which showed both "ringed" structures as well as double-bonded structures, such as:

Joseph Loschmidt's molecule drawings of ethylene H2C=CH2 and acetylene HC≡CH (1861)

Loschmidt also suggested a possible formula for benzene, but left the issue open. The first proposal of the modern structure for benzene was due to Kekulé, in 1865. The cyclic nature of benzene was finally confirmed by the crystallographer Kathleen Lonsdale. Benzene presents a special problem in that, to account for all the bonds, there must be alternating double carbon bonds:

Benzene molecule with alternating double bonds

In 1865, the German chemist August Wilhelm von Hofmann was the first to make stick-and-ball molecular models, which he used in lectures at the Royal Institution of Great Britain, such as the model of methane shown below:

Hofmann's 1865 stick-and-ball model of methane CH4.

The basis of this model followed the earlier 1855 suggestion by his colleague William Odling that carbon is tetravalent. Hofmann's color scheme, it should be noted, is still used to this day: carbon = black, nitrogen = blue, oxygen = red, chlorine = green, sulfur = yellow, hydrogen = white. The deficiencies in Hofmann's model were essentially geometric: carbon bonding was shown as planar rather than tetrahedral, and the atoms were out of proportion, e.g. the carbon was smaller than the hydrogen.

In 1864, Scottish organic chemist Alexander Crum Brown began to draw pictures of molecules, in which he enclosed the symbols for atoms in circles, and used broken lines to connect the atoms together in a way that satisfied each atom's valence.

The year 1873, by many accounts, was a seminal point in the history of the development of the concept of the "molecule". In this year, the renowned Scottish physicist James Clerk Maxwell published his famous thirteen-page article 'Molecules' in the September issue of Nature. In the opening section of this article, Maxwell clearly states:

An atom is a body which cannot be cut in two; a molecule is the smallest possible portion of a particular substance.

After speaking about the atomic theory of Democritus, Maxwell goes on to tell us that the word 'molecule' is a modern word. He states, "it does not occur in Johnson's Dictionary. The ideas it embodies are those belonging to modern chemistry." We are told that an 'atom' is a material point, invested and surrounded by 'potential forces', and that when 'flying molecules' strike against a solid body in constant succession, this causes what is called the pressure of air and other gases. At this point, however, Maxwell notes that no one has ever seen or handled a molecule.

In 1874, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel independently proposed that the phenomenon of optical activity could be explained by assuming that the chemical bonds between carbon atoms and their neighbors were directed towards the corners of a regular tetrahedron. This led to a better understanding of the three-dimensional nature of molecules.

Emil Fischer developed the Fischer projection technique for viewing 3-D molecules on a 2-D sheet of paper.

In 1898, Ludwig Boltzmann, in his Lectures on Gas Theory, used the theory of valence to explain the phenomenon of gas phase molecular dissociation, and in doing so drew one of the first rudimentary yet detailed atomic orbital overlap drawings. Noting first the known fact that molecular iodine vapor dissociates into atoms at higher temperatures, Boltzmann states that we must explain the existence of molecules composed of two atoms, the "double atom" as Boltzmann calls it, by an attractive force acting between the two atoms. Boltzmann states that this chemical attraction, owing to certain facts of chemical valence, must be associated with a relatively small region on the surface of the atom called the sensitive region.

Boltzmann states that this "sensitive region" will lie on the surface of the atom, or may partially lie inside the atom, and will firmly be connected to it. Specifically, he states "only when two atoms are situated so that their sensitive regions are in contact, or partly overlap, will there be a chemical attraction between them. We then say that they are chemically bound to each other." This picture is detailed below, showing the α-sensitive region of atom-A overlapping with the β-sensitive region of atom-B:

Boltzmann’s 1898 I2 molecule diagram showing atomic "sensitive region" (α, β) overlap.

20th century

In the early 20th century, the American chemist Gilbert N. Lewis began to use dots in lecture, while teaching undergraduates at Harvard, to represent the electrons around atoms. His students favored these drawings, which stimulated him in this direction. From these lectures, Lewis noted that elements with a certain number of electrons seemed to have a special stability. This phenomenon had been pointed out by the German chemist Richard Abegg in 1904, and Lewis referred to it as "Abegg's law of valence" (now generally known as Abegg's rule). To Lewis it appeared that once a core of eight electrons has formed around a nucleus, the layer is filled, and a new layer is started. Lewis also noted that various ions with eight electrons seemed to have a special stability. On these views, he proposed the rule of eight, or octet rule: ions or atoms with a filled layer of eight electrons have a special stability.

Moreover, noting that a cube has eight corners, Lewis envisioned an atom as having eight sides available for electrons, like the corners of a cube. Subsequently, in 1902, he devised a conception in which cubic atoms can bond on their sides to form cubic-structured molecules.

In other words, electron-pair bonds are formed when two atoms share an edge, as in structure C below. This results in the sharing of two electrons. Similarly, charged ionic bonds are formed by the transfer of an electron from one cube to another, without sharing an edge (structure A). An intermediate state (structure B), in which only one corner is shared, was also postulated by Lewis.

Lewis cubic-atoms bonding to form cubic-molecules

Hence, double bonds are formed by sharing a face between two cubic atoms. This results in the sharing of four electrons.

In 1913, while working as the chair of the department of chemistry at the University of California, Berkeley, Lewis read a preliminary outline of a paper by an English graduate student, Alfred Lauck Parson, who was visiting Berkeley for a year. In this paper, Parson suggested that the electron is not merely an electric charge but is also a small magnet (or "magneton", as he called it) and, furthermore, that a chemical bond results from two electrons being shared between two atoms. This, according to Lewis, meant that bonding occurred when two electrons formed a shared edge between two complete cubes.

On these views, in his famous 1916 article The Atom and the Molecule, Lewis introduced the "Lewis structure" to represent atoms and molecules, where dots represent electrons and lines represent covalent bonds. In this article, he developed the concept of the electron-pair bond, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond.

Lewis-type Chemical bond

In Lewis' own words:

An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively.

Moreover, he proposed that an atom tended to form an ion by gaining or losing the number of electrons needed to complete a cube. Thus, Lewis structures show each atom in the structure of the molecule using its chemical symbol. Lines are drawn between atoms that are bonded to one another; occasionally, pairs of dots are used instead of lines. Excess electrons that form lone pairs are represented as pairs of dots and are placed next to the atoms on which they reside:

Lewis dot structures of the nitrite ion
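
As a small illustration of the electron bookkeeping behind such a diagram (a modern aside, not taken from Lewis's paper), the sketch below counts the shell electrons around each atom in the conventional Lewis structure of the nitrite ion and checks the rule of eight; the lone-pair and bond counts are entered by hand.

```python
# Octet check for the nitrite ion (NO2-), using the conventional Lewis
# structure: one N=O double bond, one N-O single bond, one lone pair on N.
atoms = {
    "N":               {"lone_pairs": 1, "bond_pairs": 3},
    "O (double bond)":  {"lone_pairs": 2, "bond_pairs": 2},
    "O (single bond)":  {"lone_pairs": 3, "bond_pairs": 1},
}

for name, counts in atoms.items():
    # Each lone pair and each shared (bonding) pair contributes 2 electrons.
    electrons = 2 * (counts["lone_pairs"] + counts["bond_pairs"])
    print(f"{name:16s}: {electrons} electrons -> octet satisfied: {electrons == 8}")
```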

To summarize his views on his new bonding model, Lewis states:

Two atoms may conform to the rule of eight, or the octet rule, not only by the transfer of electrons from one atom to another, but also by sharing one or more pairs of electrons...Two electrons thus coupled together, when lying between two atomic centers, and held jointly in the shells of the two atoms, I have considered to be the chemical bond. We thus have a concrete picture of that physical entity, that "hook and eye" which is part of the creed of the organic chemist.

The following year, in 1917, an unknown American undergraduate chemical engineer named Linus Pauling was learning the Dalton hook-and-eye bonding method at the Oregon Agricultural College, which was the vogue description of bonds between atoms at the time. Each atom had a certain number of hooks that allowed it to attach to other atoms, and a certain number of eyes that allowed other atoms to attach to it. A chemical bond resulted when a hook and eye connected. Pauling, however, wasn't satisfied with this archaic method and looked to the newly emerging field of quantum physics for a new method.

In 1927, the physicists Fritz London and Walter Heitler applied the new quantum mechanics to deal with the saturable, nondynamic forces of attraction and repulsion, i.e., exchange forces, of the hydrogen molecule. Their valence bond treatment of this problem, in their joint paper, was a landmark in that it brought chemistry under quantum mechanics. Their work was an influence on Pauling, who had just received his doctorate and visited Heitler and London in Zürich on a Guggenheim Fellowship.

Subsequently, in 1931, building on the work of Heitler and London and on theories found in Lewis' famous article, Pauling published his ground-breaking article "The Nature of the Chemical Bond" (see: manuscript), in which he used quantum mechanics to calculate properties and structures of molecules, such as angles between bonds and rotation about bonds. On these concepts, Pauling developed hybridization theory to account for bonds in molecules such as CH4, in which four sp³ hybridised orbitals overlap with hydrogen's 1s orbitals, yielding four sigma (σ) bonds. The four bonds are of the same length and strength, which yields a molecular structure as shown below:

A schematic presentation of hybrid orbitals overlapping hydrogen s orbitals
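
As a quick check of the geometry this picture implies (a modern aside, not from Pauling's paper), the ideal angle between sp³ bonds follows from placing the four bonds at alternating corners of a cube, i.e. at the vertices of a regular tetrahedron; the sketch below computes it.

```python
import numpy as np

# Ideal sp3 bond directions: four alternating corners of a cube centred
# on the carbon atom form a regular tetrahedron.
bond_dirs = np.array([
    [ 1.0,  1.0,  1.0],
    [ 1.0, -1.0, -1.0],
    [-1.0,  1.0, -1.0],
    [-1.0, -1.0,  1.0],
])
bond_dirs /= np.linalg.norm(bond_dirs, axis=1, keepdims=True)

# Angle between any two C-H bonds: cos(theta) = -1/3 for a regular tetrahedron.
cos_theta = bond_dirs[0] @ bond_dirs[1]
theta = np.degrees(np.arccos(cos_theta))
print(f"cos(theta) = {cos_theta:.4f}, H-C-H angle ~ {theta:.2f} degrees")  # ~109.47
```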

Owing to this exceptional work, Pauling won the 1954 Nobel Prize in Chemistry. Notably, he has been the only person ever to win two unshared Nobel Prizes, also receiving the Nobel Peace Prize in 1963.

In 1926, the French physicist Jean Perrin received the Nobel Prize in Physics for proving, conclusively, the existence of molecules. He did this by calculating the Avogadro number using three different methods, all involving liquid-phase systems: first, by using a gamboge soap-like emulsion; second, by doing experimental work on Brownian motion; and third, by confirming Einstein's theory of particle rotation in the liquid phase.
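
A rough sketch of the logic behind the Brownian-motion route to Avogadro's number (the radius and diffusion coefficient below are illustrative values of roughly the magnitude Perrin worked with, not his actual data): the Einstein relation D = RT/(N_A·6πηr) links the measurable diffusion of spheres of known radius to N_A.

```python
import math

# Einstein's relation for diffusing spheres:  D = R*T / (N_A * 6*pi*eta*r)
# Rearranged:                                 N_A = R*T / (6*pi*eta*r*D)
R = 8.314       # gas constant, J/(mol K)
T = 293.0       # temperature, K
eta = 1.0e-3    # viscosity of water, Pa s
r = 0.2e-6      # sphere radius, m (illustrative, gamboge-grain scale)
D = 1.07e-12    # measured diffusion coefficient, m^2/s (illustrative value)

N_A = R * T / (6 * math.pi * eta * r * D)
print(f"estimated Avogadro number ~ {N_A:.2e} per mole")  # ~6e23
```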

In 1937, the chemist K. L. Wolf introduced the concept of supermolecules (Übermoleküle) to describe hydrogen bonding in acetic acid dimers. This would eventually lead to the area of supramolecular chemistry, which is the study of non-covalent bonding.

In 1951, the physicist Erwin Wilhelm Müller invented the field ion microscope and was the first to see atoms, e.g. bonded atomic arrangements at the tip of a metal point.

In 1968–1970, Leroy Cooper of the University of California at Davis completed his PhD thesis showing what molecules looked like. He used X-ray diffraction off crystals and a complex computer program written by Bill Pentz of the UC Davis Computer Center, which took the mapped diffraction patterns and used them to calculate the basic shapes of crystal molecules. His work showed that the actual molecular shapes in quartz and other tested crystals resembled the long-envisioned picture of merged soap bubbles of various sizes, except that instead of merged spheres of different sizes, the actual shapes were rigid mergers of more teardrop-like forms that stayed fixed in orientation. This work indicated for the first time that crystal molecules are linked or stacked, merged teardrop constructions.

In 1999, researchers from the University of Vienna reported results from experiments on wave-particle duality for C60 molecules. The data published by Anton Zeilinger et al. were consistent with Louis de Broglie's matter waves. This experiment was noted for extending the applicability of wave–particle duality by about one order of magnitude in the macroscopic direction.

In 2009, researchers from IBM managed to take the first picture of a real molecule. Using an atomic force microscope, every single atom and bond of a pentacene molecule could be imaged.

History of quantum field theory


In particle physics, the history of quantum field theory starts with its creation by Paul Dirac, when he attempted to quantize the electromagnetic field in the late 1920s. Major advances in the theory were made in the 1940s and 1950s, leading to the introduction of renormalized quantum electrodynamics (QED). The field theory behind QED was so accurate and successful in its predictions that efforts were made to apply the same basic concepts to the other forces of nature. Beginning in 1954, the parallel was found by way of gauge theory, leading, by the late 1970s, to quantum field models of the strong nuclear force and the weak nuclear force, united in the modern Standard Model of particle physics.

Efforts to describe gravity using the same techniques have, to date, failed. The study of quantum field theory is still flourishing, as are applications of its methods to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to several different branches of physics.

Early developments

Quantum field theory originated in the 1920s from the problem of creating a quantum mechanical theory of the electromagnetic field. In particular, de Broglie in 1924 introduced the idea of a wave description of elementary systems in the following way: "we proceed in this work from the assumption of the existence of a certain periodic phenomenon of a yet to be determined character, which is to be attributed to each and every isolated energy parcel".

In 1925, Werner Heisenberg, Max Born, and Pascual Jordan constructed just such a theory by expressing the field's internal degrees of freedom as an infinite set of harmonic oscillators, and by then utilizing the canonical quantization procedure to these oscillators; their paper was published in 1926. This theory assumed that no electric charges or currents were present and today would be called a free field theory.

The first reasonably complete theory of quantum electrodynamics, which included both the electromagnetic field and electrically charged matter as quantum mechanical objects, was created by Paul Dirac in 1927. This quantum field theory could be used to model important processes such as the emission of a photon by an electron dropping into a quantum state of lower energy, a process in which the number of particles changes—one atom in the initial state becomes an atom plus a photon in the final state. It is now understood that the ability to describe such processes is one of the most important features of quantum field theory.

The final crucial step was Enrico Fermi's theory of β-decay (1934). In it, fermion species nonconservation was shown to follow from second quantization: creation and annihilation of fermions came to the fore and quantum field theory was seen to describe particle decays. (Fermi's breakthrough was somewhat foreshadowed in the abstract studies of Soviet physicists, Viktor Ambartsumian and Dmitri Ivanenko, in particular the Ambarzumian–Ivanenko hypothesis of creation of massive particles (1930). The idea was that not only the quanta of the electromagnetic field, photons, but also other particles might emerge and disappear as a result of their interaction with other particles.)

Incorporating special relativity

It was evident from the beginning that a proper quantum treatment of the electromagnetic field had to somehow incorporate Einstein's relativity theory, which had grown out of the study of classical electromagnetism. This need to put together relativity and quantum mechanics was the second major motivation in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 that quantum fields could be made to behave in the way predicted by special relativity during coordinate transformations (specifically, they showed that the field commutators were Lorentz invariant). A further boost for quantum field theory came with the discovery of the Dirac equation, which was originally formulated and interpreted as a single-particle equation analogous to the Schrödinger equation, but which, unlike the Schrödinger equation, satisfies both Lorentz invariance (that is, the requirements of special relativity) and the rules of quantum mechanics. The Dirac equation accommodated the spin-1/2 value of the electron, accounted for its magnetic moment, and gave accurate predictions for the spectra of hydrogen.

The attempted interpretation of the Dirac equation as a single-particle equation could not be maintained long, however, and finally it was shown that several of its undesirable properties (such as negative-energy states) could be made sense of by reformulating and reinterpreting the Dirac equation as a true field equation, in this case for the quantized "Dirac field" or the "electron field", with the "negative-energy solutions" pointing to the existence of anti-particles. This work was performed first by Dirac himself with the invention of hole theory in 1930 and by Wendell Furry, Robert Oppenheimer, Vladimir Fock, and others. Erwin Schrödinger, during the same period that he discovered his equation in 1926, also independently found the relativistic generalization of it known as the Klein–Gordon equation but dismissed it since, without spin, it predicted impossible properties for the hydrogen spectrum. (See Oskar Klein and Walter Gordon.) All relativistic wave equations that describe spin-zero particles are said to be of the Klein–Gordon type.
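
For reference, the two equations discussed above can be written in their standard textbook forms (natural units, ħ = c = 1), the Klein–Gordon equation for a spin-zero field φ and the Dirac equation for the spin-1/2 field ψ:

```latex
\begin{aligned}
  (\partial_\mu \partial^\mu + m^2)\,\phi(x) &= 0 && \text{(Klein--Gordon)} \\
  (i\gamma^\mu \partial_\mu - m)\,\psi(x)    &= 0 && \text{(Dirac)}
\end{aligned}
```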

Uncertainty, again

A subtle and careful analysis in 1933 by Niels Bohr and Léon Rosenfeld showed that there is a fundamental limitation on the ability to simultaneously measure the electric and magnetic field strengths that enter into the description of charges in interaction with radiation, imposed by the uncertainty principle, which must apply to all canonically conjugate quantities. This limitation is crucial for the successful formulation and interpretation of a quantum field theory of photons and electrons (quantum electrodynamics), and indeed, any perturbative quantum field theory. The analysis of Bohr and Rosenfeld explains fluctuations in the values of the electromagnetic field that differ from the classically "allowed" values distant from the sources of the field.

Their analysis was crucial to showing that the limitations and physical implications of the uncertainty principle apply to all dynamical systems, whether fields or material particles. Their analysis also convinced most physicists that any notion of returning to a fundamental description of nature based on classical field theory, such as what Einstein aimed at with his numerous and failed attempts at a classical unified field theory, was simply out of the question. Fields had to be quantized.

Second quantization

The third thread in the development of quantum field theory was the need to handle the statistics of many-particle systems consistently and with ease. In 1927, Pascual Jordan tried to extend the canonical quantization of fields to the many-body wave functions of identical particles using a formalism which is known as statistical transformation theory; this procedure is now sometimes called second quantization. Dirac is also credited with the invention, as he introduced the key ideas in a 1927 paper. In 1928, Jordan and Eugene Wigner found that the quantum field describing electrons, or other fermions, had to be expanded using anti-commuting creation and annihilation operators due to the Pauli exclusion principle (see Jordan–Wigner transformation). This thread of development was incorporated into many-body theory and strongly influenced condensed matter physics and nuclear physics.

The problem of infinities

Despite its early successes, quantum field theory was plagued by several serious theoretical difficulties. Basic physical quantities, such as the self-energy of the electron (the energy shift of electron states due to the presence of the electromagnetic field), gave infinite, divergent contributions (a nonsensical result) when computed using the perturbative techniques available in the 1930s and most of the 1940s. The electron self-energy problem was already a serious issue in the classical electromagnetic field theory, where the attempt to attribute to the electron a finite size or extent (the classical electron radius) led immediately to the question of what non-electromagnetic stresses would need to be invoked, which would presumably hold the electron together against the Coulomb repulsion of its finite-sized "parts". The situation was dire, and had certain features that reminded many of the "Rayleigh–Jeans catastrophe". What made the situation in the 1940s so desperate and gloomy, however, was the fact that the correct ingredients (the second-quantized Maxwell–Dirac field equations) for the theoretical description of interacting photons and electrons were well in place, and no major conceptual change was needed analogous to that which was necessitated by a finite and physically sensible account of the radiative behavior of hot objects, as provided by the Planck radiation law.

Renormalization procedures

Improvements in microwave technology made it possible to take more precise measurements of the shift of the energy levels of a hydrogen atom, now known as the Lamb shift, and of the magnetic moment of the electron. These experiments exposed discrepancies which the theory was unable to explain.

A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization.

This "divergence problem" was solved in the case of quantum electrodynamics through the procedure known as renormalization in 1947–49 by Hans Kramers, Hans Bethe, Julian Schwinger. Richard Feynman, and Shin'ichiro Tomonaga; the procedure was systematized by Freeman Dyson in 1949. Great progress was made after realizing that all infinities in quantum electrodynamics are related to two effects: the self-energy of the electron/positron, and vacuum polarization.

Renormalization requires paying very careful attention to just what is meant by, for example, the very concepts "charge" and "mass" as they occur in the pure, non-interacting field equations. The "vacuum" is itself polarizable and, hence, populated by virtual particle pairs (on shell and off shell), and is therefore a seething and busy dynamical system in its own right. This was a critical step in identifying the source of "infinities" and "divergences". The "bare mass" and the "bare charge" of a particle, the values that appear in the free-field equations (the non-interacting case), are abstractions that are simply not realized in experiment (in interaction). What we measure, and hence what we must take account of with our equations, and what the solutions must account for, are the "renormalized mass" and the "renormalized charge" of a particle. That is to say, the "shifted" or "dressed" values that these quantities must have, when due systematic care is taken to include all deviations from their "bare" values, are dictated by the very nature of quantum fields themselves.

Quantum electrodynamics

The first approach that bore fruit is known as the "interaction representation" (see the article Interaction picture), a Lorentz-covariant and gauge-invariant generalization of time-dependent perturbation theory used in ordinary quantum mechanics, and developed by Tomonaga and Schwinger, generalizing earlier efforts of Dirac, Fock and Boris Podolsky. Tomonaga and Schwinger invented a relativistically covariant scheme for representing field commutators and field operators intermediate between the two main representations of a quantum system, the Schrödinger and the Heisenberg representations. Within this scheme, field commutators at separated points can be evaluated in terms of "bare" field creation and annihilation operators. This allows for keeping track of the time-evolution of both the "bare" and "renormalized", or perturbed, values of the Hamiltonian and expresses everything in terms of the coupled, gauge invariant "bare" field-equations. Schwinger gave the most elegant formulation of this approach. The next development was due to Richard Feynman, with his rules for assigning a graph to the terms in the scattering matrix (see S-matrix and Feynman diagrams). These directly corresponded (through the Schwinger–Dyson equation) to the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states) one needs to be able to calculate. This revolutionized how quantum field theory calculations are carried out in practice.

Two classic text-books from the 1960s, James D. Bjorken, Sidney David Drell, Relativistic Quantum Mechanics (1964) and J. J. Sakurai, Advanced Quantum Mechanics (1967), thoroughly developed the Feynman graph expansion techniques using physically intuitive and practical methods following from the correspondence principle, without worrying about the technicalities involved in deriving the Feynman rules from the superstructure of quantum field theory itself. Although both Feynman's heuristic and pictorial style of dealing with the infinities, as well as the formal methods of Tomonaga and Schwinger, worked extremely well, and gave spectacularly accurate answers, the true analytical nature of the question of "renormalizability", that is, whether ANY theory formulated as a "quantum field theory" would give finite answers, was not worked-out until much later, when the urgency of trying to formulate finite theories for the strong and electro-weak (and gravitational) interactions demanded its solution.

Renormalization in the case of QED was largely fortuitous: the smallness of the coupling constant (the dimensionless fine-structure constant) and the zero mass of the gauge boson involved, the photon, rendered the small-distance/high-energy behavior of QED manageable. Also, electromagnetic processes are very "clean" in the sense that they are not badly suppressed/damped and/or hidden by the other gauge interactions. By 1965, James D. Bjorken and Sidney David Drell observed: "Quantum electrodynamics (QED) has achieved a status of peaceful coexistence with its divergences ...".
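
As a quick numerical aside (not part of the historical account), the dimensionless fine-structure constant mentioned here can be computed directly from the values of the elementary constants:

```python
import math

# Fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c), dimensionless.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ~ {alpha:.6f}  (1/alpha ~ {1/alpha:.2f})")  # about 1/137
```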

The unification of the electromagnetic force with the weak force encountered initial difficulties due to the lack of accelerator energies high enough to reveal processes beyond the Fermi interaction range. Additionally, a satisfactory theoretical understanding of hadron substructure had to be developed, culminating in the quark model.

Thanks to the somewhat brute-force, ad hoc and heuristic early methods of Feynman, and the abstract methods of Tomonaga and Schwinger, elegantly synthesized by Freeman Dyson, from the period of early renormalization, the modern theory of quantum electrodynamics (QED) has established itself. It is still the most accurate physical theory known, the prototype of a successful quantum field theory. Quantum electrodynamics is an example of what is known as an abelian gauge theory. It relies on the symmetry group U(1) and has one massless gauge field, with the U(1) gauge symmetry dictating the form of the interactions involving the electromagnetic field and the photon being the gauge boson.

Yang-Mills theory

In the 1950s, Chen Ning Yang and Robert Mills, following the previous lead of Hermann Weyl, explored the impact of symmetries and invariances on field theory. All field theories, including QED, were generalized to a class of quantum field theories known as gauge theories. That symmetries dictate, limit and necessitate the form of interaction between particles is the essence of the "gauge theory revolution". Yang and Mills formulated the first explicit example of a non-abelian gauge theory, Yang–Mills theory, with an attempted explanation of the strong interactions in mind. The strong interactions were then (incorrectly) understood in the mid-1950s to be mediated by the pi-mesons, the particles predicted by Hideki Yukawa in 1935, based on his profound reflections concerning the reciprocal connection between the mass of any force-mediating particle and the range of the force it mediates, as allowed by the uncertainty principle. In the absence of dynamical information, Murray Gell-Mann pioneered the extraction of physical predictions from sheer non-abelian symmetry considerations, and introduced non-abelian Lie groups to current algebra and so to the gauge theories that came to supersede it.

The 1960s and 1970s saw the formulation of a gauge theory now known as the Standard Model of particle physics, which systematically describes the elementary particles and the interactions between them. The strong interactions are described by quantum chromodynamics (QCD), based on "color" SU(3). The weak interactions require the additional feature of spontaneous symmetry breaking, elucidated by Yoichiro Nambu and the adjunct Higgs mechanism, considered next.

Electroweak unification

The electroweak interaction part of the Standard Model was formulated by Sheldon Glashow, Abdus Salam, and John Clive Ward in 1959, with their discovery of the SU(2)xU(1) group structure of the theory. In 1967, Steven Weinberg invoked the Higgs mechanism for the generation of the W and Z masses (the intermediate vector bosons responsible for the weak interactions and neutral currents) while keeping the mass of the photon zero. The Goldstone and Higgs idea for generating mass in gauge theories was sparked in the late 1950s and early 1960s when a number of theoreticians (including Yoichiro Nambu, Steven Weinberg, Jeffrey Goldstone, François Englert, Robert Brout, G. S. Guralnik, C. R. Hagen, Tom Kibble and Philip Warren Anderson) noticed a possibly useful analogy to the (spontaneous) breaking of the U(1) symmetry of electromagnetism in the formation of the BCS ground state of a superconductor. The gauge boson involved in this situation, the photon, behaves as though it has acquired a finite mass.

There is a further possibility that the physical vacuum (ground-state) does not respect the symmetries implied by the "unbroken" electroweak Lagrangian from which one arrives at the field equations (see the article Electroweak interaction for more details). The electroweak theory of Weinberg and Salam was shown to be renormalizable (finite) and hence consistent by Gerardus 't Hooft and Martinus Veltman. The Glashow–Weinberg–Salam theory (GWS theory), in certain applications, gives an accuracy on a par with quantum electrodynamics.

Quantum chromodynamics

In the case of the strong interactions, progress concerning their short-distance/high-energy behavior was much slower and more frustrating. For strong interactions with the electro-weak fields, there were difficult issues regarding the strength of coupling, the mass generation of the force carriers as well as their non-linear, self interactions. Although there has been theoretical progress toward a grand unified quantum field theory incorporating the electro-magnetic force, the weak force and the strong force, empirical verification is still pending. Superunification, incorporating the gravitational force, is still very speculative, and is under intensive investigation by many of the best minds in contemporary theoretical physics. Gravitation is a tensor field description of a spin-2 gauge-boson, the "graviton", and is further discussed in the articles on general relativity and quantum gravity.

Quantum gravity

From the point of view of the techniques of (four-dimensional) quantum field theory, and as the numerous efforts to formulate a consistent quantum gravity theory attest, gravitational quantization has been the reigning champion for bad behavior.

There are technical problems underlain by the fact that the Newtonian constant of gravitation has dimensions involving inverse powers of mass; as a simple consequence, the theory is plagued by perturbatively badly behaved non-linear self-interactions. Gravity is itself a source of gravity, analogously to gauge theories (whose couplings are, by contrast, dimensionless), leading to uncontrollable divergences at increasing orders of perturbation theory.
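
The point about dimensions can be made explicit with a standard power-counting estimate (stated here as an aside, in natural units with ħ = c = 1): Newton's constant carries dimension of inverse mass squared, so the effective dimensionless coupling grows with energy,

```latex
G_N \sim \frac{1}{M_{\mathrm{Planck}}^{2}},
\qquad
G_N E^{2} \sim \left(\frac{E}{M_{\mathrm{Planck}}}\right)^{2},
```

which becomes large at high energies, in contrast to the dimensionless couplings of renormalizable gauge theories.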

Moreover, gravity couples to all energy equally strongly, as per the equivalence principle, so this makes the notion of ever really "switching-off", "cutting-off" or separating, the gravitational interaction from other interactions ambiguous, since, with gravitation, we are dealing with the very structure of space-time itself.

Moreover, it has not been established that a theory of quantum gravity is necessary (see Quantum field theory in curved spacetime).

Contemporary framework of renormalization

Parallel breakthroughs in the understanding of phase transitions in condensed matter physics led to novel insights based on the renormalization group. They involved the work of Leo Kadanoff (1966) and of Kenneth Geddes Wilson and Michael Fisher (1972), extending the work of Ernst Stueckelberg and André Petermann (1953) and of Murray Gell-Mann and Francis Low (1954), which led to the seminal reformulation of quantum field theory by Wilson in 1975. This reformulation provided insights into the evolution of effective field theories with scale, which classified all field theories, renormalizable or not. The remarkable conclusion is that, in general, most observables are "irrelevant", i.e., the macroscopic physics is dominated by only a few observables in most systems.

During the same period, Leo Kadanoff (1969) introduced an operator algebra formalism for the two-dimensional Ising model, a widely studied mathematical model of ferromagnetism in statistical physics. This development suggested that quantum field theory describes its scaling limit. Later, there developed the idea that a finite number of generating operators could represent all the correlation functions of the Ising model. The existence of a much stronger symmetry for the scaling limit of two-dimensional critical systems was suggested by Alexander Belavin, Alexander Markovich Polyakov and Alexander Zamolodchikov in 1984, which eventually led to the development of conformal field theory, a special case of quantum field theory, which is presently utilized in different areas of particle physics and condensed matter physics.

The renormalization group spans a set of ideas and methods to monitor changes of the behavior of the theory with scale, providing a deep physical understanding which sparked what has been called the "grand synthesis" of theoretical physics, uniting the quantum field theoretical techniques used in particle physics and condensed matter physics into a single powerful theoretical framework.

The gauge field theory of the strong interactions, quantum chromodynamics, relies crucially on this renormalization group for its distinguishing characteristic features, asymptotic freedom and color confinement.

Supersymmetry

https://en.wikipedia.org/wiki/Supersymmetry

Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics.

A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. The names of bosonic partners of fermions are prefixed with s-, because they are scalar particles. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron.

In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner. The spin of a particle's superpartner is different by a half-integer. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass.

Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been applied to high energy physics, where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model. However, no supersymmetric extensions of the Standard Model have been experimentally verified.

History

A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time.

J. L. Gervais and B. Sakita (in 1971), Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972), independently rediscovered supersymmetry in the context of quantum field theory, a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais−Sakita rediscovery was based directly first arose in 1971 in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu.

In 1974, Julius Wess and Bruno Zumino identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry (graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics, critical phenomena, quantum mechanics to statistical physics, and supersymmetry remains a vital part of many proposed theories in many branches of physics.

In particle physics, the first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem.

The term supersymmetry was coined by Abdus Salam and John Strathdee in 1974 as a simplification of the term super-gauge symmetry used by Wess and Zumino, although Zumino also used the same term at around the same time. The term supergauge was in turn coined by Neveu and Schwarz in 1971 when they devised supersymmetry in the context of string theory.


Applications

Extension of possible symmetry groups

One reason that physicists explored supersymmetry is that it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries, and the Coleman–Mandula theorem showed that, under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or, if there is no mass gap, of the conformal group with a compact internal symmetry group. In 1971, Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through the introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975, the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in their general form, including those with an extended number of supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories.

The supersymmetry algebra

Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra.
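
Concretely, the bracket of a Lie superalgebra combines commutators and anticommutators through the Z2-grading (a standard definition, stated here for orientation): for homogeneous elements a and b of degree |a|, |b| ∈ {0, 1},

```latex
[a, b\} \;=\; ab \;-\; (-1)^{|a|\,|b|}\, ba ,
```

so a pair involving at least one even (bosonic) element is combined with a commutator, while two odd (fermionic) elements are combined with an anticommutator.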

The simplest supersymmetric extension of the Poincaré algebra is the super-Poincaré algebra. Expressed in terms of two Weyl spinors, it has the following anti-commutation relation:

{Qα, Q̄β̇} = 2(σμ)αβ̇ Pμ

and all other anti-commutation relations between the Qs and commutation relations between the Qs and Ps vanish. In the above expression Pμ = −i∂μ are the generators of translation and σμ are the Pauli matrices.

There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup.

Supersymmetric quantum mechanics

Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right.

SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.
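
A minimal numerical sketch of this pairing (an illustrative construction on a finite grid, not tied to any particular reference): starting from the superpotential W(x) = x, the partner potentials are V∓ = W² ∓ W′, and diagonalizing the two partner Hamiltonians shows that the excited levels of one coincide with the levels of the other.

```python
import numpy as np

# Partner Hamiltonians in SUSY quantum mechanics (units with hbar = 2m = 1):
#   H_minus = -d^2/dx^2 + W^2 - W'    H_plus = -d^2/dx^2 + W^2 + W'
# With W(x) = x this gives V_minus = x^2 - 1 and V_plus = x^2 + 1.
n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Second-derivative operator by central finite differences (hard-wall box).
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

W = x
H_minus = -lap + np.diag(W**2 - 1.0)
H_plus = -lap + np.diag(W**2 + 1.0)

print("H_minus lowest levels:", np.round(np.linalg.eigvalsh(H_minus)[:5], 3))  # ~ 0, 2, 4, 6, 8
print("H_plus  lowest levels:", np.round(np.linalg.eigvalsh(H_plus)[:4], 3))   # ~ 2, 4, 6, 8
# The excited spectrum of H_minus matches the spectrum of H_plus level by
# level; only the zero-energy ground state of H_minus is unpaired.
```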

In finance

In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in finance, and to financial networks.

Supersymmetry in quantum field theory

In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently.

While supersymmetry has not been discovered at high energy (see the section Supersymmetry in particle physics below), it was found to be effectively realized at the intermediate energies of hadronic physics, where baryons and mesons are superpartners. An exception is the pion, which appears as a zero mode in the mass spectrum and is thus protected by the supersymmetry: it has no baryonic partner. The realization of this effective supersymmetry is readily explained in quark–diquark models: because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g., anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing three valence quarks, of which two tend to cluster together as a diquark, behaves like a meson.

Supersymmetry in condensed matter physics

SUSY concepts have provided useful extensions to the WKB approximation. Additionally, SUSY has been applied to disorder-averaged systems, both quantum and non-quantum (through statistical mechanics), the Fokker–Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling a single particle, and as such the 'statistics' do not matter. The supersymmetry method provides a mathematically rigorous alternative to the replica trick, though only for non-interacting systems; both methods attempt to address the so-called 'problem of the denominator' under disorder averaging. For more on the applications of supersymmetry in condensed matter physics, see Efetov (1997).

In 2021, a group of researchers showed that, in theory, SUSY could be realised at the edge of a Moore–Read quantum Hall state. However, to date, no experiment has realised it at the edge of a Moore–Read state. In 2022, a different group of researchers created a computer simulation of atoms in one dimension that had supersymmetric topological quasiparticles.

Supersymmetry in optics

In 2013, integrated optics was found to provide a fertile ground on which certain ramifications of SUSY can be explored in readily accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion and space-division multiplexing becomes possible. SUSY transformations have also been proposed as a way to address inverse scattering problems in optics and as a form of one-dimensional transformation optics.

Supersymmetry in dynamical systems

All stochastic (partial) differential equations, the models for all types of continuous-time dynamical systems, possess topological supersymmetry. In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative, which commutes with the stochastic evolution operator defined as the stochastically averaged pullback induced on differential forms by SDE-defined diffeomorphisms of the phase space. The topological sector of the emerging supersymmetric theory of stochastic dynamics can be recognized as a Witten-type topological field theory.

The meaning of the topological supersymmetry in dynamical systems is the preservation of the phase space continuity: infinitely close points will remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of infinitely long temporal evolution and the model can be said to exhibit (the stochastic generalization of) the butterfly effect. From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos, turbulence, self-organized criticality, etc. The Goldstone theorem explains the associated emergence of long-range dynamical behavior that manifests itself as 1/f noise, the butterfly effect, and the scale-free statistics of sudden (instantonic) processes such as earthquakes, neuroavalanches, and solar flares, known as Zipf's law and the Richter scale.

Supersymmetry in mathematics

SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories that interchanges particles and monopoles.

The proof of the Atiyah–Singer index theorem is much simplified by the use of supersymmetric quantum mechanics.

Supersymmetry in string theory

Supersymmetry is an integral part of string theory, a possible theory of everything. There are two types of string theory: supersymmetric string theory, or superstring theory, and non-supersymmetric string theory. By definition, superstring theory requires supersymmetry at some level. However, even in non-supersymmetric string theory, a type of supersymmetry called misaligned supersymmetry is still required in order to ensure that no physical tachyons appear. String theories without some kind of supersymmetry, such as bosonic string theory and certain heterotic string theories, have a tachyon, so the spacetime vacuum itself is unstable and would decay into some tachyon-free string theory, usually in a lower spacetime dimension. There is no experimental evidence that either supersymmetry or misaligned supersymmetry holds in our universe, and many physicists have moved on from supersymmetry and string theory entirely due to the non-detection of supersymmetry at the LHC.

Despite the null results for supersymmetry at the LHC so far, some particle physicists have nevertheless moved to string theory in order to resolve the naturalness crisis for certain supersymmetric extensions of the Standard Model. According to these physicists, there exists a concept of "stringy naturalness" in string theory, where the string theory landscape could have a power-law statistical pull on soft SUSY breaking terms to large values (depending on the number of hidden-sector SUSY breaking fields contributing to the soft terms). If this is coupled with an anthropic requirement that contributions to the weak scale not exceed a factor between 2 and 5 from its measured value (as argued by Agrawal et al.), then the Higgs mass is pulled up to the vicinity of 125 GeV while most sparticles are pulled to values beyond the current reach of the LHC. An exception occurs for higgsinos, which gain mass not from SUSY breaking but rather from whatever mechanism solves the SUSY mu problem. Light higgsino pair production in association with hard initial-state jet radiation leads to a soft opposite-sign dilepton plus jet plus missing transverse energy signal.

Supersymmetry in particle physics

Supersymmetric extensions of the Standard Model

Incorporating supersymmetry into the Standard Model requires doubling the number of particles, since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM), which includes the minimal set of additional new particles that can be superpartners of those in the Standard Model.

Cancellation of the Higgs boson quadratic mass renormalization between fermionic top quark loop and scalar stop squark tadpole Feynman diagrams in a supersymmetric extension of the Standard Model

One of the original motivations for the Minimal Supersymmetric Standard Model came from the hierarchy problem. Due to the quadratically divergent contributions to the Higgs mass squared in the Standard Model, the quantum mechanical interactions of the Higgs boson cause a large renormalization of the Higgs mass, and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. Furthermore, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine tuning. This problem is known as the hierarchy problem.

Supersymmetry close to the electroweak scale, such as in the Minimal Supersymmetric Standard Model, would solve the hierarchy problem that afflicts the Standard Model. It would reduce the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions, and Planck-scale quantum corrections cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). The hierarchy between the electroweak scale and the Planck scale would be achieved in a natural manner, without extraordinary fine-tuning. If supersymmetry were restored at the weak scale, then the Higgs mass would be related to supersymmetry breaking which can be induced from small non-perturbative effects explaining the vastly different scales in the weak interactions and gravitational interactions.
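A schematic numerical illustration of this cancellation is sketched below in Python. It keeps only the quadratically divergent one-loop pieces, with an assumed top-like Yukawa coupling and the scalar coupling fixed by the supersymmetric relation λ_s = λ_f²; the numbers are illustrative, not a real calculation:

import numpy as np

# Schematic quadratically divergent one-loop Higgs mass-squared corrections:
#   fermion loop:         delta ~ -(lambda_f^2 / (8 pi^2)) * Lambda^2
#   two partner scalars:  delta ~ +2 * (lambda_s / (16 pi^2)) * Lambda^2
Lambda = 1.0e19          # cutoff near the Planck scale, in GeV (assumption)
lam_f = 1.0              # top-like Yukawa coupling (assumption)
lam_s = lam_f ** 2       # scalar coupling fixed by supersymmetry

delta_fermion = -(lam_f ** 2) / (8 * np.pi ** 2) * Lambda ** 2
delta_scalars = 2 * lam_s / (16 * np.pi ** 2) * Lambda ** 2

print(f"fermion loop : {delta_fermion:+.3e} GeV^2")
print(f"scalar loops : {delta_scalars:+.3e} GeV^2")
print(f"sum          : {delta_fermion + delta_scalars:+.3e} GeV^2")

The Λ² pieces cancel exactly when the couplings obey the supersymmetric relation, leaving only the milder logarithmic corrections that this sketch omits.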

Another motivation for the Minimal Supersymmetric Standard Model comes from grand unification, the idea that the gauge symmetry groups should unify at high energy. In the Standard Model, however, the weak, strong and electromagnetic gauge couplings fail to unify at high energy. In particular, the renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model. After incorporating minimal SUSY at the electroweak scale, the running of the gauge couplings is modified, and joint convergence of the gauge coupling constants is projected to occur at approximately 10¹⁶ GeV. The modified running also provides a natural mechanism for radiative electroweak symmetry breaking.
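A minimal sketch of this running, using the standard one-loop coefficients for the Standard Model and the MSSM and approximate boundary values at the Z mass (superpartner thresholds and higher-loop effects are ignored, so the numbers are illustrative only), is given below:

import numpy as np

# One-loop running of the inverse gauge couplings:
#   d(alpha_i^-1)/d(ln mu) = -b_i / (2 pi)
MZ = 91.19                                    # Z boson mass in GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])    # approx. U(1)_Y (GUT-normalized), SU(2)_L, SU(3)_c

b_SM = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])   # Standard Model coefficients
b_MSSM = np.array([33.0 / 5.0, 1.0, -3.0])          # MSSM coefficients

def alpha_inv(mu, b):
    """Inverse couplings at scale mu (GeV), run up from MZ at one loop."""
    return alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / MZ)

mu_GUT = 2.0e16
print("SM   at 2e16 GeV:", np.round(alpha_inv(mu_GUT, b_SM), 1))
print("MSSM at 2e16 GeV:", np.round(alpha_inv(mu_GUT, b_MSSM), 1))

With the MSSM coefficients the three inverse couplings come out nearly equal (around 24–25) near 10¹⁶ GeV, while with the Standard Model coefficients they remain visibly split.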

In many supersymmetric extensions of the Standard Model, such as the Minimal Supersymmetric Standard Model, there is a heavy stable particle (such as the neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity. Supersymmetry at the electroweak scale (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations.

The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry and supersymmetry is broken spontaneously. The supersymmetry breaking cannot be accomplished permanently by the particles of the MSSM as they currently appear. This means that there is a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry permanently and must give superparticles TeV-scale masses. There are many models that can do this and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory, which temporarily break SUSY explicitly but could never arise from a complete theory of supersymmetry breaking.

Searches and constraints for supersymmetry

SUSY extensions of the Standard Model are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Fermilab; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles), at the Large Electron–Positron Collider, Tevatron and the LHC. In fact, CERN publicly states that if a supersymmetric model of the Standard Model "is correct, supersymmetric particles should appear in collisions at the LHC."

Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were set at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron. LEP later set very strong limits, which in 2006 were extended by the D0 experiment at the Tevatron. From 2003 to 2015, WMAP's and Planck's dark matter density measurements strongly constrained supersymmetric extensions of the Standard Model, which, if they explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density.

Prior to the beginning of the LHC, in 2009, fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV.

The first runs of the LHC surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges. In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV. The LHC found no previously unknown particles other than the Higgs boson, which was already suspected to exist as part of the Standard Model, and therefore no evidence for any supersymmetric extension of the Standard Model.

Indirect methods include the search for a permanent electric dipole moment (EDM) in the known Standard Model particles, which can arise when the Standard Model particle interacts with the supersymmetric particles. The current best constraint on the electron electric dipole moment puts it below 10⁻²⁸ e·cm, equivalent to a sensitivity to new physics at the TeV scale and matching that of the current best particle colliders. A permanent EDM in any fundamental particle points towards time-reversal-violating physics, and therefore also CP-symmetry violation via the CPT theorem. Such EDM experiments are also much more scalable than conventional particle accelerators and offer a practical alternative for detecting physics beyond the Standard Model as accelerator experiments become increasingly costly and complicated to maintain. The current best limit for the electron's EDM has already reached a sensitivity to rule out so-called 'naive' versions of supersymmetric extensions of the Standard Model.

Research in the late 2010s and early 2020s from experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those which can be found in the Standard Model or at the LHC. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeV range.

Current status

The negative findings in the experiments disappointed many physicists, who believed that supersymmetric extensions of the Standard Model (and other theories relying upon it) were by far the most promising theories for "new" physics beyond the Standard Model, and had hoped for signs of unexpected results from the experiments. In particular, the LHC result seems problematic for the Minimal Supersymmetric Standard Model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists consider to be "unnatural" (see naturalness and fine tuning).

In response to the so-called "naturalness crisis" in the Minimal Supersymmetric Standard Model, some researchers have abandoned naturalness and the original motivation to solve the hierarchy problem naturally with supersymmetry, while other researchers have moved on to other supersymmetric models such as split supersymmetry. Still others have moved to string theory as a result of the naturalness crisis. Former enthusiastic supporter Mikhail Shifman went as far as urging the theoretical community to search for new ideas and accept that supersymmetry was a failed theory in particle physics. However, some researchers suggested that this "naturalness" crisis was premature because various calculations were too optimistic about the limits of masses which would allow a supersymmetric extension of the Standard Model as a solution.

General supersymmetry

Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions.

Extended supersymmetry

It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom; thus the minimal number of supersymmetry generators in four dimensions is four, and having eight copies of supersymmetry means that there are 32 supersymmetry generators.

The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact, a consequence of the Weinberg–Witten theorem, so the maximal number of supersymmetry generators considered is 32. This corresponds to an N = 8 supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton.

In four dimensions there are the following theories, with the corresponding multiplets (CPT adds a copy whenever a multiplet is not invariant under that symmetry; superscripts denote the multiplicity of each helicity state):

N = 1
Chiral multiplet: (0, 1/2)
Vector multiplet: (1/2, 1)
Gravitino multiplet: (1, 3/2)
Graviton multiplet: (3/2, 2)

N = 2
Hypermultiplet: (−1/2, 0², 1/2)
Vector multiplet: (0, 1/2², 1)
Supergravity multiplet: (1, 3/2², 2)

N = 4
Vector multiplet: (−1, −1/2⁴, 0⁶, 1/2⁴, 1)
Supergravity multiplet: (0, 1/2⁴, 1⁶, 3/2⁴, 2)

N = 8
Supergravity multiplet: (−2, −3/2⁸, −1²⁸, −1/2⁵⁶, 0⁷⁰, 1/2⁵⁶, 1²⁸, 3/2⁸, 2)

Supersymmetry in alternate numbers of dimensions

It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristics. In d dimensions, the size of spinors grows roughly as 2^(d/2) or 2^((d − 1)/2). Since the maximum number of supersymmetry generators is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven.
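A rough sketch of this counting in Python, using the crude estimate 2^⌊d/2⌋ for the number of components of a minimal spinor (Majorana and Weyl refinements are ignored, so this is only an order-of-magnitude illustration), shows why eleven is the largest dimension compatible with at most 32 supercharges:

# Crude estimate of the component count of a minimal spinor in d spacetime
# dimensions; reality/chirality conditions that can halve the count are ignored.
def min_spinor_components(d: int) -> int:
    return 2 ** (d // 2)

for d in range(4, 13):
    n = min_spinor_components(d)
    status = "allowed" if n <= 32 else "exceeds 32 supercharges"
    print(f"d = {d:2d}: ~{n:3d} spinor components -> {status}")
# d = 11 gives 32 components, saturating the bound; d = 12 gives 64.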

Fractional supersymmetry

Fractional supersymmetry is a generalization of the notion of supersymmetry in which the minimal positive amount of spin does not have to be 1/2 but can be an arbitrary 1/N for an integer value of N. Such a generalization is possible in two or fewer spacetime dimensions.
