Thursday, September 12, 2024

History of quantum field theory

From Wikipedia, the free encyclopedia

In particle physics, the history of quantum field theory starts with its creation by Paul Dirac, when he attempted to quantize the electromagnetic field in the late 1920s. Major advances in the theory were made in the 1940s and 1950s, leading to the introduction of renormalized quantum electrodynamics (QED). QED was so accurate and successful in its predictions that efforts were made to apply the same basic concepts to the other forces of nature. Beginning in 1954, the parallel was found by way of gauge theory, leading by the late 1970s to quantum field models of the strong and weak nuclear forces, united in the modern Standard Model of particle physics.

Efforts to describe gravity using the same techniques have, to date, failed. The study of quantum field theory is still flourishing, as are applications of its methods to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to several different branches of physics.

Early developments

Quantum field theory originated in the 1920s from the problem of creating a quantum mechanical theory of the electromagnetic field. In particular, de Broglie in 1924 introduced the idea of a wave description of elementary systems in the following way: "we proceed in this work from the assumption of the existence of a certain periodic phenomenon of a yet to be determined character, which is to be attributed to each and every isolated energy parcel".

In 1925, Werner Heisenberg, Max Born, and Pascual Jordan constructed just such a theory by expressing the field's internal degrees of freedom as an infinite set of harmonic oscillators, and by then utilizing the canonical quantization procedure to these oscillators; their paper was published in 1926. This theory assumed that no electric charges or currents were present and today would be called a free field theory.
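
In modern notation (a schematic summary, not the notation of the 1926 paper), each mode of the free field is an independent harmonic oscillator, quantized canonically:

$$ H = \sum_k \hbar\omega_k \left( a_k^\dagger a_k + \tfrac{1}{2} \right), \qquad [a_k, a_{k'}^\dagger] = \delta_{kk'}, \qquad [a_k, a_{k'}] = 0, $$

so the quanta created by the operators $a_k^\dagger$ are the photons of the free field.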

The first reasonably complete theory of quantum electrodynamics, which included both the electromagnetic field and electrically charged matter as quantum mechanical objects, was created by Paul Dirac in 1927. This quantum field theory could be used to model important processes such as the emission of a photon by an electron dropping into a quantum state of lower energy, a process in which the number of particles changes—one atom in the initial state becomes an atom plus a photon in the final state. It is now understood that the ability to describe such processes is one of the most important features of quantum field theory.

The final crucial step was Enrico Fermi's theory of β-decay (1934). In it, fermion species nonconservation was shown to follow from second quantization: creation and annihilation of fermions came to the fore and quantum field theory was seen to describe particle decays. (Fermi's breakthrough was somewhat foreshadowed in the abstract studies of Soviet physicists, Viktor Ambartsumian and Dmitri Ivanenko, in particular the Ambarzumian–Ivanenko hypothesis of creation of massive particles (1930). The idea was that not only the quanta of the electromagnetic field, photons, but also other particles might emerge and disappear as a result of their interaction with other particles.)

Incorporating special relativity

It was evident from the beginning that a proper quantum treatment of the electromagnetic field had to somehow incorporate Einstein's relativity theory, which had grown out of the study of classical electromagnetism. This need to put together relativity and quantum mechanics was the second major motivation in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 that quantum fields could be made to behave in the way predicted by special relativity during coordinate transformations (specifically, they showed that the field commutators were Lorentz invariant). A further boost for quantum field theory came with the discovery of the Dirac equation, which was originally formulated and interpreted as a single-particle equation analogous to the Schrödinger equation, but which, unlike the Schrödinger equation, satisfies both Lorentz invariance (the requirements of special relativity) and the rules of quantum mechanics. The Dirac equation accommodated the spin-1/2 value of the electron and accounted for its magnetic moment as well as giving accurate predictions for the spectra of hydrogen.
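
In modern notation (natural units, $\hbar = c = 1$), the Dirac equation reads

$$ (i\gamma^\mu \partial_\mu - m)\psi = 0, $$

where the $\gamma^\mu$ are 4×4 matrices satisfying $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}$; it is this matrix structure that encodes spin 1/2.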

The attempted interpretation of the Dirac equation as a single-particle equation could not be maintained long, however, and finally it was shown that several of its undesirable properties (such as negative-energy states) could be made sense of by reformulating and reinterpreting the Dirac equation as a true field equation, in this case for the quantized "Dirac field" or the "electron field", with the "negative-energy solutions" pointing to the existence of anti-particles. This work was performed first by Dirac himself with the invention of hole theory in 1930 and by Wendell Furry, Robert Oppenheimer, Vladimir Fock, and others. Erwin Schrödinger, during the same period that he discovered his equation in 1926, also independently found the relativistic generalization of it known as the Klein–Gordon equation but dismissed it since, without spin, it predicted impossible properties for the hydrogen spectrum. (See Oskar Klein and Walter Gordon.) All relativistic wave equations that describe spin-zero particles are said to be of the Klein–Gordon type.
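
For comparison, in the same notation the Klein–Gordon equation is

$$ (\partial^\mu \partial_\mu + m^2)\phi = 0, $$

whose plane-wave solutions obey the relativistic dispersion relation $E^2 = p^2 + m^2$; the two roots $E = \pm\sqrt{p^2 + m^2}$ are the origin of the negative-energy solutions mentioned above.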

Uncertainty, again

A subtle and careful analysis in 1933 by Niels Bohr and Léon Rosenfeld showed that there is a fundamental limitation on the ability to simultaneously measure the electric and magnetic field strengths that enter into the description of charges in interaction with radiation, imposed by the uncertainty principle, which must apply to all canonically conjugate quantities. This limitation is crucial for the successful formulation and interpretation of a quantum field theory of photons and electrons (quantum electrodynamics), and indeed, any perturbative quantum field theory. The analysis of Bohr and Rosenfeld explains fluctuations in the values of the electromagnetic field that differ from the classically "allowed" values at points distant from the sources of the field.

Their analysis was crucial to showing that the limitations and physical implications of the uncertainty principle apply to all dynamical systems, whether fields or material particles. Their analysis also convinced most physicists that any notion of returning to a fundamental description of nature based on classical field theory, such as what Einstein aimed at with his numerous and failed attempts at a classical unified field theory, was simply out of the question. Fields had to be quantized.

Second quantization

The third thread in the development of quantum field theory was the need to handle the statistics of many-particle systems consistently and with ease. In 1927, Pascual Jordan tried to extend the canonical quantization of fields to the many-body wave functions of identical particles using a formalism which is known as statistical transformation theory; this procedure is now sometimes called second quantization. Dirac is also credited with the invention, as he introduced the key ideas in a 1927 paper. In 1928, Jordan and Eugene Wigner found that the quantum field describing electrons, or other fermions, had to be expanded using anti-commuting creation and annihilation operators due to the Pauli exclusion principle (see Jordan–Wigner transformation). This thread of development was incorporated into many-body theory and strongly influenced condensed matter physics and nuclear physics.
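
A minimal sketch of the Jordan–Wigner construction in Python (illustrative modern form, not the historical notation): fermionic creation and annihilation operators are built from Pauli matrices, with strings of sigma_z enforcing anticommutation between different sites.

```python
import numpy as np

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # 2x2 lowering matrix: |1> -> |0>
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli sigma_z
I2 = np.eye(2, dtype=complex)

def annihilate(site, n_sites):
    """Jordan-Wigner annihilation operator for `site` on a chain of n_sites."""
    ops = [sz] * site + [sp] + [I2] * (n_sites - site - 1)
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

n = 3
a0, a1 = annihilate(0, n), annihilate(1, n)
anti = lambda A, B: A @ B + B @ A

assert np.allclose(anti(a0, a0), 0)                      # a^2 = 0 (Pauli exclusion)
assert np.allclose(anti(a0, a0.conj().T), np.eye(2**n))  # {a, a^dagger} = 1
assert np.allclose(anti(a0, a1), 0)                      # different sites anticommute
assert np.allclose(anti(a0, a1.conj().T), 0)
print("canonical anticommutation relations verified")
```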

The problem of infinities

Despite its early successes, quantum field theory was plagued by several serious theoretical difficulties. Basic physical quantities, such as the self-energy of the electron and the energy shift of electron states due to the presence of the electromagnetic field, gave infinite, divergent contributions—a nonsensical result—when computed using the perturbative techniques available in the 1930s and most of the 1940s. The electron self-energy problem was already a serious issue in classical electromagnetic field theory, where the attempt to attribute to the electron a finite size or extent (the classical electron radius) led immediately to the question of what non-electromagnetic stresses would need to be invoked, which would presumably hold the electron together against the Coulomb repulsion of its finite-sized "parts". The situation was dire, and had certain features that reminded many of the "Rayleigh–Jeans catastrophe". What made the situation in the 1940s so desperate and gloomy, however, was the fact that the correct ingredients (the second-quantized Maxwell–Dirac field equations) for the theoretical description of interacting photons and electrons were well in place, and no major conceptual change was needed analogous to the one that the Planck radiation law had forced for a finite and physically sensible account of the radiative behavior of hot objects.

Renormalization procedures

Improvements in microwave technology made it possible to take more precise measurements of the shift of the energy levels of a hydrogen atom, now known as the Lamb shift, and of the magnetic moment of the electron. These experiments exposed discrepancies which the theory was unable to explain.

A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization.
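
Schematically (a sketch of the idea, not Bethe's actual computation), the bare parameters of the theory absorb the divergences:

$$ m_{\text{obs}} = m_0 + \delta m, \qquad e_{\text{obs}} = e_0 + \delta e, $$

where $\delta m$ and $\delta e$ are the formally divergent corrections; re-expressing every prediction in terms of the measured $m_{\text{obs}}$ and $e_{\text{obs}}$, rather than the bare $m_0$ and $e_0$, leaves finite answers.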

This "divergence problem" was solved in the case of quantum electrodynamics through the procedure known as renormalization in 1947–49 by Hans Kramers, Hans Bethe, Julian Schwinger. Richard Feynman, and Shin'ichiro Tomonaga; the procedure was systematized by Freeman Dyson in 1949. Great progress was made after realizing that all infinities in quantum electrodynamics are related to two effects: the self-energy of the electron/positron, and vacuum polarization.

Renormalization requires paying very careful attention to just what is meant by, for example, the very concepts "charge" and "mass" as they occur in the pure, non-interacting field equations. The "vacuum" is itself polarizable and, hence, populated by virtual particle pairs (on shell and off shell), and is therefore a seething and busy dynamical system in its own right. This was a critical step in identifying the source of "infinities" and "divergences". The "bare mass" and the "bare charge" of a particle, the values that appear in the free-field equations (the non-interacting case), are abstractions that are simply not realized in experiment (in interaction). What we measure, and hence what we must take account of with our equations, and what the solutions must account for, are the "renormalized mass" and the "renormalized charge" of a particle: the "shifted" or "dressed" values these quantities must have when systematic care is taken to include all deviations from their "bare" values, as dictated by the very nature of quantum fields themselves.

Quantum electrodynamics

The first approach that bore fruit is known as the "interaction representation" (see the article Interaction picture), a Lorentz-covariant and gauge-invariant generalization of time-dependent perturbation theory used in ordinary quantum mechanics, and developed by Tomonaga and Schwinger, generalizing earlier efforts of Dirac, Fock and Boris Podolsky. Tomonaga and Schwinger invented a relativistically covariant scheme for representing field commutators and field operators intermediate between the two main representations of a quantum system, the Schrödinger and the Heisenberg representations. Within this scheme, field commutators at separated points can be evaluated in terms of "bare" field creation and annihilation operators. This allows for keeping track of the time-evolution of both the "bare" and "renormalized", or perturbed, values of the Hamiltonian and expresses everything in terms of the coupled, gauge invariant "bare" field-equations. Schwinger gave the most elegant formulation of this approach. The next development was due to Richard Feynman, with his rules for assigning a graph to the terms in the scattering matrix (see S-matrix and Feynman diagrams). These directly corresponded (through the Schwinger–Dyson equation) to the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states) one needs to be able to calculate. This revolutionized how quantum field theory calculations are carried out in practice.

Two classic textbooks from the 1960s, James D. Bjorken and Sidney David Drell, Relativistic Quantum Mechanics (1964), and J. J. Sakurai, Advanced Quantum Mechanics (1967), thoroughly developed the Feynman graph expansion techniques using physically intuitive and practical methods following from the correspondence principle, without worrying about the technicalities involved in deriving the Feynman rules from the superstructure of quantum field theory itself. Although both Feynman's heuristic and pictorial style of dealing with the infinities and the formal methods of Tomonaga and Schwinger worked extremely well, and gave spectacularly accurate answers, the true analytical nature of the question of "renormalizability", that is, whether any theory formulated as a "quantum field theory" would give finite answers, was not worked out until much later, when the urgency of trying to formulate finite theories for the strong and electroweak (and gravitational) interactions demanded its solution.

Renormalization in the case of QED was largely fortuitous. The smallness of the coupling constant (the so-called fine-structure constant, which has no dimensions involving mass), together with the zero mass of the gauge boson involved, the photon, rendered the small-distance/high-energy behavior of QED manageable. Also, electromagnetic processes are very "clean" in the sense that they are not badly suppressed/damped and/or hidden by the other gauge interactions. By 1965 James D. Bjorken and Sidney David Drell observed: "Quantum electrodynamics (QED) has achieved a status of peaceful coexistence with its divergences ...".
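
Concretely, the coupling in question is

$$ \alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137}, $$

a pure (dimensionless) number, so successive orders of perturbation theory are suppressed by further powers of $\alpha$.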

The unification of the electromagnetic force with the weak force encountered initial difficulties due to the lack of accelerator energies high enough to reveal processes beyond the Fermi interaction range. Additionally, a satisfactory theoretical understanding of hadron substructure had to be developed, culminating in the quark model.

Thanks to the somewhat brute-force, ad hoc and heuristic early methods of Feynman, and the abstract methods of Tomonaga and Schwinger, elegantly synthesized by Freeman Dyson, from the period of early renormalization, the modern theory of quantum electrodynamics (QED) has established itself. It is still the most accurate physical theory known, the prototype of a successful quantum field theory. Quantum electrodynamics is an example of what is known as an abelian gauge theory: it relies on the symmetry group U(1) and has one massless gauge field, with the U(1) gauge symmetry dictating the form of the interactions involving the electromagnetic field, the photon being the gauge boson.
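
In one common sign convention (a standard textbook summary, not tied to any single historical formulation), the U(1) gauge transformation acts as

$$ \psi(x) \to e^{i e \lambda(x)} \psi(x), \qquad A_\mu(x) \to A_\mu(x) - \partial_\mu \lambda(x), $$

and demanding invariance forces derivatives to appear only in the covariant combination $D_\mu = \partial_\mu + i e A_\mu$, which is exactly the photon-electron coupling: the symmetry dictates the interaction.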

Yang–Mills theory

In the 1950s Yang and Mills, following the previous lead of Hermann Weyl, explored the impact of symmetries and invariances on field theory. All field theories, including QED, were generalized to a class of quantum field theories known as gauge theories. That symmetries dictate, limit and necessitate the form of interaction between particles is the essence of the "gauge theory revolution". Yang and Mills formulated the first explicit example of a non-abelian gauge theory, Yang–Mills theory, with an attempted explanation of the strong interactions in mind. The strong interactions were then (incorrectly) understood in the mid-1950s to be mediated by the pi-mesons, the particles predicted by Hideki Yukawa in 1935, based on his profound reflections concerning the reciprocal connection between the mass of any force-mediating particle and the range of the force it mediates, a connection allowed by the uncertainty principle. In the absence of dynamical information, Murray Gell-Mann pioneered the extraction of physical predictions from sheer non-abelian symmetry considerations, and introduced non-abelian Lie groups to current algebra and so to the gauge theories that came to supersede it.
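
Two formulas make these points concrete (standard results, stated here in compact modern form, not taken from the cited papers). Yukawa's mass-range relation follows from the uncertainty principle,

$$ R \sim \frac{\hbar}{m c} \approx \frac{197\ \mathrm{MeV\,fm}}{140\ \mathrm{MeV}} \approx 1.4\ \mathrm{fm} \quad \text{for the pion}, $$

and the non-abelian (Yang–Mills) field strength

$$ F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu $$

contains, unlike its electromagnetic counterpart, a term quadratic in the gauge field, so the gauge bosons interact with one another.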

The 1960s and 1970s saw the formulation of a gauge theory now known as the Standard Model of particle physics, which systematically describes the elementary particles and the interactions between them. The strong interactions are described by quantum chromodynamics (QCD), based on "color" SU(3). The weak interactions require the additional feature of spontaneous symmetry breaking, elucidated by Yoichiro Nambu, and the adjunct Higgs mechanism, considered next.

Electroweak unification

The electroweak interaction part of the Standard Model was formulated by Sheldon Glashow, Abdus Salam, and John Clive Ward in 1959, with their discovery of the SU(2)×U(1) group structure of the theory. In 1967, Steven Weinberg invoked the Higgs mechanism for the generation of the W and Z masses (the intermediate vector bosons responsible for the weak interactions and neutral currents) while keeping the mass of the photon zero. The Goldstone and Higgs idea for generating mass in gauge theories was sparked in the late 1950s and early 1960s when a number of theoreticians (including Yoichiro Nambu, Steven Weinberg, Jeffrey Goldstone, François Englert, Robert Brout, G. S. Guralnik, C. R. Hagen, Tom Kibble and Philip Warren Anderson) noticed a possibly useful analogy to the (spontaneous) breaking of the U(1) symmetry of electromagnetism in the formation of the BCS ground state of a superconductor. The gauge boson involved in this situation, the photon, behaves as though it has acquired a finite mass.

There is a further possibility that the physical vacuum (ground-state) does not respect the symmetries implied by the "unbroken" electroweak Lagrangian from which one arrives at the field equations (see the article Electroweak interaction for more details). The electroweak theory of Weinberg and Salam was shown to be renormalizable (finite) and hence consistent by Gerardus 't Hooft and Martinus Veltman. The Glashow–Weinberg–Salam theory (GWS theory), in certain applications, gives an accuracy on a par with quantum electrodynamics.

Quantum chromodynamics

In the case of the strong interactions, progress concerning their short-distance/high-energy behavior was much slower and more frustrating. For strong interactions with the electroweak fields, there were difficult issues regarding the strength of coupling and the mass generation of the force carriers, as well as their non-linear self-interactions. Although there has been theoretical progress toward a grand unified quantum field theory incorporating the electromagnetic force, the weak force and the strong force, empirical verification is still pending. Superunification, incorporating the gravitational force, is still very speculative, and is under intensive investigation by many of the best minds in contemporary theoretical physics. Gravitation is a tensor field description of a spin-2 gauge boson, the "graviton", and is further discussed in the articles on general relativity and quantum gravity.

Quantum gravity

From the point of view of the techniques of (four-dimensional) quantum field theory, and as the numerous efforts to formulate a consistent quantum gravity theory attest, gravitational quantization has been the reigning champion for bad behavior.

There are technical problems underlain by the fact that the Newtonian constant of gravitation has dimensions involving inverse powers of mass; as a simple consequence, the theory is plagued by perturbatively badly behaved non-linear self-interactions. Gravity is itself a source of gravity, analogously to gauge theories (whose couplings are, by contrast, dimensionless), leading to uncontrollable divergences at increasing orders of perturbation theory.
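
Schematically, with $M_{\mathrm{Pl}} = \sqrt{\hbar c / G_N} \approx 1.2 \times 10^{19}\ \mathrm{GeV}/c^2$, the dimensionless strength of graviton exchange at energy $E$ grows like

$$ \frac{G_N E^2}{\hbar c^5} = \left( \frac{E}{M_{\mathrm{Pl}} c^2} \right)^2, $$

so each additional order of perturbation theory is enhanced, not suppressed, at high energies, requiring ever more types of counterterms; this is the usual power-counting statement of gravity's nonrenormalizability.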

Moreover, gravity couples to all energy equally strongly, as per the equivalence principle, which makes the notion of ever really "switching off", "cutting off" or separating the gravitational interaction from the other interactions ambiguous, since, with gravitation, we are dealing with the very structure of space-time itself.

Moreover, it has not been established that a theory of quantum gravity is necessary (see Quantum field theory in curved spacetime).

Contemporary framework of renormalization

Parallel breakthroughs in the understanding of phase transitions in condensed matter physics led to novel insights based on the renormalization group. They involved the work of Leo Kadanoff (1966) and of Kenneth Geddes Wilson and Michael Fisher (1972)—extending the work of Ernst Stueckelberg and André Petermann (1953) and of Murray Gell-Mann and Francis Low (1954)—which led to the seminal reformulation of quantum field theory by Kenneth Geddes Wilson in 1975. This reformulation provided insights into the evolution of effective field theories with scale, which classified all field theories, renormalizable or not. The remarkable conclusion is that, in general, most observables are "irrelevant", i.e., the macroscopic physics is dominated by only a few observables in most systems.

During the same period, Leo Kadanoff (1969) introduced an operator algebra formalism for the two-dimensional Ising model, a widely studied mathematical model of ferromagnetism in statistical physics. This development suggested that the scaling limit of the model is described by a quantum field theory. Later, there developed the idea that a finite number of generating operators could represent all the correlation functions of the Ising model. The existence of a much stronger symmetry for the scaling limit of two-dimensional critical systems was suggested by Alexander Belavin, Alexander Markovich Polyakov and Alexander Zamolodchikov in 1984, which eventually led to the development of conformal field theory, a special case of quantum field theory, which is presently utilized in different areas of particle physics and condensed matter physics.

The renormalization group spans a set of ideas and methods to monitor changes of the behavior of the theory with scale, providing a deep physical understanding which sparked what has been called the "grand synthesis" of theoretical physics, uniting the quantum field theoretical techniques used in particle physics and condensed matter physics into a single powerful theoretical framework.

The gauge field theory of the strong interactions, quantum chromodynamics, relies crucially on this renormalization group for its distinguishing characteristic features, asymptotic freedom and color confinement.
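
A minimal numerical sketch of asymptotic freedom (one-loop running with standard textbook inputs, not values taken from this article): the strong coupling alpha_s shrinks logarithmically as the energy scale mu grows.

```python
import math

def alpha_s(mu_gev, alpha0=0.118, mu0=91.2, nf=5):
    """One-loop running strong coupling, referenced to alpha_s(M_Z) ~ 0.118."""
    b0 = 11 - 2 * nf / 3   # one-loop beta coefficient; positive => asymptotic freedom
    return alpha0 / (1 + alpha0 * b0 / (2 * math.pi) * math.log(mu_gev / mu0))

for mu in [10, 91.2, 1000, 10000]:
    print(f"alpha_s({mu:8.1f} GeV) = {alpha_s(mu):.4f}")
# The printed coupling falls as mu rises: quarks interact weakly at short
# distances (asymptotic freedom) and strongly at long distances, consistent
# with color confinement.
```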

Supersymmetry

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Supersymmetry

Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics.

A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. The names of bosonic partners of fermions are prefixed with s-, because they are scalar particles. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron.

In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner. The spin of a particle's superpartner is different by a half-integer. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass.

Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been applied to high energy physics, where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model. However, no supersymmetric extensions of the Standard Model have been experimentally verified.

History

A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time.

J. L. Gervais and B. Sakita (in 1971), Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972), independently rediscovered supersymmetry in the context of quantum field theory, a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais−Sakita rediscovery was based directly first arose in 1971 in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu.

In 1974, Julius Wess and Bruno Zumino identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry (graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics, critical phenomena, quantum mechanics to statistical physics, and supersymmetry remains a vital part of many proposed theories in many branches of physics.

In particle physics, the first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem.

The term supersymmetry was coined by Abdus Salam and John Strathdee in 1974 as a simplification of the term super-gauge symmetry used by Wess and Zumino, although Zumino also used the same term at around the same time. The term supergauge was in turn coined by Neveu and Schwarz in 1971 when they devised supersymmetry in the context of string theory.

Applications

Extension of possible symmetry groups

One reason that physicists explored supersymmetry is because it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries, and the Coleman–Mandula theorem showed that, under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or, if there is no mass gap, of the conformal group with a compact internal symmetry group. In 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through the introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975, the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in the general form, including those with an extended number of supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories.

The supersymmetry algebra

Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra.
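
Explicitly, the graded bracket of a Lie superalgebra is

$$ [x, y] = xy - (-1)^{|x||y|} yx, $$

where $|x| \in \{0, 1\}$ is the grading: the bracket is an anticommutator when both elements are odd (fermionic) and a commutator otherwise.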

The simplest supersymmetric extension of the Poincaré algebra is the super-Poincaré algebra. Expressed in terms of two Weyl spinors, it has the following anti-commutation relation:

$$ \{ Q_\alpha, \bar{Q}_{\dot{\beta}} \} = 2 (\sigma^\mu)_{\alpha\dot{\beta}} P_\mu, $$

and all other anti-commutation relations between the Qs, and commutation relations between the Qs and Ps, vanish. In the above expression $P_\mu = -i\partial_\mu$ are the generators of translation and $\sigma^\mu$ are the Pauli matrices.

There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup.

Supersymmetric quantum mechanics

Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right.

SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.
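
A minimal numerical sketch of this theorem (an assumed toy model, not an example from the article): taking superpotential W(x) = x gives partner potentials V∓ = W² ∓ W′ = x² ∓ 1 in units ħ = 2m = 1, and the finite-difference spectra below show every excited level of H− paired with a level of H+ at the same energy.

```python
import numpy as np

N, L = 1000, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

def lowest_levels(V, k=5):
    """Lowest k eigenvalues of H = -d^2/dx^2 + V(x) by finite differences."""
    H = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / dx**2 + np.diag(V)
    return np.linalg.eigvalsh(H)[:k]

W = x                                  # superpotential W(x) = x
E_minus = lowest_levels(W**2 - 1.0)    # V_- = W^2 - W' -> levels 0, 2, 4, ...
E_plus = lowest_levels(W**2 + 1.0)     # V_+ = W^2 + W' -> levels 2, 4, 6, ...
print("H_- levels:", np.round(E_minus, 3))
print("H_+ levels:", np.round(E_plus, 3))
# Every excited level of H_- matches a level of H_+; only the E = 0 ground
# state of H_- is unpaired, as the partner-Hamiltonian theorem states.
```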

In finance

In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in finance, and to financial networks.

Supersymmetry in quantum field theory

In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently.

While supersymmetry has not been discovered at high energy (see the section Supersymmetry in particle physics below), supersymmetry was found to be effectively realized at the intermediate energy of hadronic physics, where baryons and mesons are superpartners. An exception is the pion, which appears as a zero mode in the mass spectrum and is thus protected by the supersymmetry: it has no baryonic partner. The realization of this effective supersymmetry is readily explained in quark–diquark models: because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g., anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing three valence quarks, of which two tend to cluster together as a diquark, behaves like a meson.

Supersymmetry in condensed matter physics

SUSY concepts have provided useful extensions to the WKB approximation. Additionally, SUSY has been applied to disorder-averaged systems, both quantum and non-quantum (through statistical mechanics), the Fokker–Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle, so the 'statistics' do not matter. The supersymmetry method provides a mathematically rigorous alternative to the replica trick, which attempts to address the so-called 'problem of the denominator' under disorder averaging, but only in non-interacting systems. For more on the applications of supersymmetry in condensed matter physics see Efetov (1997).

In 2021, a group of researchers showed that, in theory, SUSY could be realised at the edge of a Moore–Read quantum Hall state. However, to date, no experiments have realised it at the edge of a Moore–Read state. In 2022, a different group of researchers created a computer simulation of atoms in one dimension that had supersymmetric topological quasiparticles.

Supersymmetry in optics

In 2013, integrated optics was found to provide a fertile ground on which certain ramifications of SUSY can be explored in readily accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion and space-division multiplexing becomes possible. SUSY transformations have also been proposed as a way to address inverse scattering problems in optics and as a one-dimensional form of transformation optics.

Supersymmetry in dynamical systems

All stochastic (partial) differential equations, the models for all types of continuous-time dynamical systems, possess topological supersymmetry. In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative, which commutes with the stochastic evolution operator, defined as the stochastically averaged pullback induced on differential forms by the SDE-defined diffeomorphisms of the phase space. The topological sector of the so-emerging supersymmetric theory of stochastic dynamics can be recognized as a Witten-type topological field theory.

The meaning of the topological supersymmetry in dynamical systems is the preservation of the phase space continuity—infinitely close points will remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of infinitely long temporal evolution, and the model can be said to exhibit (the stochastic generalization of) the butterfly effect. From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos, turbulence, self-organized criticality, etc. The Goldstone theorem explains the associated emergence of long-range dynamical behavior, which manifests itself as 1/f noise, the butterfly effect, and the scale-free statistics of sudden (instantonic) processes such as earthquakes, neuroavalanches, and solar flares, reflected in Zipf's law and the Richter scale.

Supersymmetry in mathematics

SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories that interchanges particles and monopoles.

The proof of the Atiyah–Singer index theorem is much simplified by the use of supersymmetric quantum mechanics.

Supersymmetry in string theory

Supersymmetry is an integral part of string theory, a possible theory of everything. There are two types of string theory: supersymmetric string theory, or superstring theory, and non-supersymmetric string theory. By definition, superstring theory requires supersymmetry at some level. However, even in non-supersymmetric string theory, a type of supersymmetry called misaligned supersymmetry is still required in order to ensure that no physical tachyons appear. String theories without some kind of supersymmetry, such as bosonic string theory and certain non-supersymmetric heterotic string theories, will have a tachyon, and therefore the spacetime vacuum itself would be unstable and would decay into some tachyon-free string theory, usually in a lower spacetime dimension. There is no experimental evidence that either supersymmetry or misaligned supersymmetry holds in our universe, and many physicists have moved on from supersymmetry and string theory entirely due to the non-detection of supersymmetry at the LHC.

Despite the null results for supersymmetry at the LHC so far, some particle physicists have nevertheless moved to string theory in order to resolve the naturalness crisis for certain supersymmetric extensions of the Standard Model. According to these physicists, there exists a concept of "stringy naturalness" in string theory, where the string theory landscape could have a power-law statistical pull on soft SUSY breaking terms to large values (depending on the number of hidden-sector SUSY breaking fields contributing to the soft terms). If this is coupled with an anthropic requirement that contributions to the weak scale not exceed a factor between 2 and 5 from its measured value (as argued by Agrawal et al.), then the Higgs mass is pulled up to the vicinity of 125 GeV while most sparticles are pulled to values beyond the current reach of the LHC. An exception occurs for higgsinos, which gain mass not from SUSY breaking but rather from whatever mechanism solves the SUSY mu problem. Light higgsino pair production in association with hard initial-state jet radiation leads to a soft opposite-sign dilepton plus jet plus missing transverse energy signal.

Supersymmetry in particle physics

Supersymmetric extensions of the Standard Model

Incorporating supersymmetry into the Standard Model requires doubling the number of particles since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM) which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model.

[Figure: Cancellation of the Higgs boson quadratic mass renormalization between the fermionic top quark loop and the scalar stop squark tadpole Feynman diagrams in a supersymmetric extension of the Standard Model.]

One of the original motivations for the Minimal Supersymmetric Standard Model came from the hierarchy problem. Due to the quadratically divergent contributions to the Higgs mass squared in the Standard Model, the quantum mechanical interactions of the Higgs boson cause a large renormalization of the Higgs mass, and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. Furthermore, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine-tuning. This problem is known as the hierarchy problem.

Supersymmetry close to the electroweak scale, such as in the Minimal Supersymmetric Standard Model, would solve the hierarchy problem that afflicts the Standard Model. It would reduce the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions, and Planck-scale quantum corrections cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). The hierarchy between the electroweak scale and the Planck scale would be achieved in a natural manner, without extraordinary fine-tuning. If supersymmetry were restored at the weak scale, then the Higgs mass would be related to supersymmetry breaking which can be induced from small non-perturbative effects explaining the vastly different scales in the weak interactions and gravitational interactions.
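
Schematically (the standard textbook power counting, with Λ an ultraviolet cutoff), a fermion of Yukawa coupling $\lambda_f$ and a scalar of quartic coupling $\lambda_s$ contribute to the Higgs mass with opposite signs,

$$ \delta m_H^2 \;\propto\; \left( \lambda_s - |\lambda_f|^2 \right) \Lambda^2 + \text{logarithmic terms}, $$

and supersymmetry fixes $\lambda_s = |\lambda_f|^2$, so the quadratically divergent pieces cancel, as in the top-quark/stop-squark diagrams pictured above.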

Another motivation for the Minimal Supersymmetric Standard Model comes from grand unification, the idea that the gauge symmetry groups should unify at high energy. In the Standard Model, however, the weak, strong and electromagnetic gauge couplings fail to unify at high energy. In particular, the renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model. After incorporating minimal SUSY at the electroweak scale, the running of the gauge couplings is modified, and joint convergence of the gauge coupling constants is projected to occur at approximately 10^16 GeV, as sketched below. The modified running also provides a natural mechanism for radiative electroweak symmetry breaking.
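
A minimal numerical sketch of that projection (one-loop running with the usual textbook beta coefficients and starting values; treating the MSSM particle content as valid all the way down to M_Z is a simplification):

```python
import math

alpha_inv_mz = [59.0, 29.6, 8.5]   # approx. inverse couplings at M_Z (GUT-normalized U(1))

def run(alpha_inv, b, mu, mz=91.2):
    """One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i ln(mu/M_Z) / 2pi."""
    t = math.log(mu / mz)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(alpha_inv, b)]

b_sm = [41 / 10, -19 / 6, -7]      # Standard Model one-loop coefficients
b_mssm = [33 / 5, 1, -3]           # MSSM coefficients (sparticles assumed light)

for mu in [1e3, 1e10, 1e16]:
    sm = ", ".join(f"{a:5.1f}" for a in run(alpha_inv_mz, b_sm, mu))
    mssm = ", ".join(f"{a:5.1f}" for a in run(alpha_inv_mz, b_mssm, mu))
    print(f"mu = 1e{int(math.log10(mu)):02d} GeV   SM: [{sm}]   MSSM: [{mssm}]")
# With the MSSM content the three inverse couplings approach a common value
# (~24-25) near 1e16 GeV; with the SM content alone they never meet.
```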

In many supersymmetric extensions of the Standard Model, such as the Minimal Supersymmetric Standard Model, there is a heavy stable particle (such as the neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity. Supersymmetry at the electroweak scale (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations.

The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry, and supersymmetry is broken spontaneously. The supersymmetry break cannot be accomplished by the particles of the MSSM as they currently appear, which means that there is a new sector of the theory responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry and must give superparticles TeV-scale masses. There are many models that can do this, and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory; these break SUSY explicitly but could never arise from a complete theory of supersymmetry breaking.

Searches and constraints for supersymmetry

SUSY extensions of the standard model are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Fermilab; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles) at the Large Electron–Positron Collider, the Tevatron and the LHC. In fact, CERN publicly states that if a supersymmetric extension of the Standard Model "is correct, supersymmetric particles should appear in collisions at the LHC."

Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were set at CERN by the UA1 and UA2 experiments at the Super Proton Synchrotron. LEP later set very strong limits, which in 2006 were extended by the D0 experiment at the Tevatron. From 2003 to 2015, WMAP's and Planck's dark matter density measurements strongly constrained supersymmetric extensions of the Standard Model, which, if they are to explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density.

Prior to the beginning of the LHC, in 2009, fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV.

The first runs of the LHC surpassed existing experimental limits from the Large Electron–Positron Collider and the Tevatron and partially excluded the aforementioned expected ranges. In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson and, in the absence of fine-tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV. The LHC found no previously unknown particles other than the Higgs boson, which was already suspected to exist as part of the Standard Model, and therefore no evidence for any supersymmetric extension of the Standard Model.

Indirect methods include the search for a permanent electric dipole moment (EDM) in the known Standard Model particles, which can arise when a Standard Model particle interacts with the supersymmetric particles. The current best constraint on the electron electric dipole moment puts it at smaller than 10^−28 e·cm, equivalent to a sensitivity to new physics at the TeV scale and matching that of the current best particle colliders. A permanent EDM in any fundamental particle points towards time-reversal-violating physics, and therefore also CP-symmetry violation via the CPT theorem. Such EDM experiments are also much more scalable than conventional particle accelerators and offer a practical alternative to detecting physics beyond the Standard Model as accelerator experiments become increasingly costly and complicated to maintain. The current best limit for the electron's EDM has already reached a sensitivity to rule out so-called 'naive' versions of supersymmetric extensions of the Standard Model.

Research in the late 2010s and early 2020s, drawing on experimental data on the cosmological constant, LIGO noise, and pulsar timing, suggests it is very unlikely that there are any new particles with masses much higher than those which can be found in the Standard Model or at the LHC. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeV range.

Current status

The negative findings in the experiments disappointed many physicists, who believed that supersymmetric extensions of the Standard Model (and other theories relying upon it) were by far the most promising theories for "new" physics beyond the Standard Model, and had hoped for signs of unexpected results from the experiments. In particular, the LHC result seems problematic for the Minimal Supersymmetric Standard Model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists consider to be "unnatural" (see naturalness and fine tuning).

In response to the so-called "naturalness crisis" in the Minimal Supersymmetric Standard Model, some researchers have abandoned naturalness and the original motivation to solve the hierarchy problem naturally with supersymmetry, while other researchers have moved on to other supersymmetric models such as split supersymmetry. Still others have moved to string theory as a result of the naturalness crisis. Former enthusiastic supporter Mikhail Shifman went as far as urging the theoretical community to search for new ideas and accept that supersymmetry was a failed theory in particle physics. However, some researchers suggested that this "naturalness" crisis was premature because various calculations were too optimistic about the limits of masses which would allow a supersymmetric extension of the Standard Model as a solution.

General supersymmetry

Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions.

Extended supersymmetry

It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom, so the minimal number of supersymmetry generators in four dimensions is four; having eight copies of supersymmetry (N = 8) then means that there are 4 × 8 = 32 supersymmetry generators.

The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact (an obstruction related to the Weinberg–Witten theorem), so the maximal number of supersymmetry generators considered is 32. This corresponds to an N = 8 supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton.

For four dimensions there are the following theories, with the corresponding multiplets (CPT adds a copy whenever a multiplet is not invariant under that symmetry; exponents denote the multiplicity of each helicity state):

N = 1
  Chiral multiplet: (0, 1/2)
  Vector multiplet: (1/2, 1)
  Gravitino multiplet: (1, 3/2)
  Graviton multiplet: (3/2, 2)

N = 2
  Hypermultiplet: (−1/2, 0^2, 1/2)
  Vector multiplet: (0, 1/2^2, 1)
  Supergravity multiplet: (1, 3/2^2, 2)

N = 4
  Vector multiplet: (−1, −1/2^4, 0^6, 1/2^4, 1)
  Supergravity multiplet: (0, 1/2^4, 1^6, 3/2^4, 2)

N = 8
  Supergravity multiplet: (−2, −3/2^8, −1^28, −1/2^56, 0^70, 1/2^56, 1^28, 3/2^8, 2)

Supersymmetry in alternate numbers of dimensions

It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristic structure. In d dimensions, the size of spinors is approximately 2^(d/2) or 2^((d−1)/2). Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven, where a minimal spinor has 2^5 = 32 components.

Fractional supersymmetry

Fractional supersymmetry is a generalization of the notion of supersymmetry in which the minimal positive amount of spin does not have to be 1/2 but can be an arbitrary 1/N for an integer value of N. Such a generalization is possible in two or fewer spacetime dimensions.
