
Tuesday, November 11, 2025

Fine-structure constant

From Wikipedia, the free encyclopedia
 
Value of α: 0.0072973525643(11)
Value of 1/α: 137.035999177(21)

In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by α (the Greek letter alpha), is a fundamental physical constant that quantifies the strength of the electromagnetic interaction between elementary charged particles.

It is a dimensionless quantity (dimensionless physical constant), independent of the system of units used, which is related to the strength of the coupling of an elementary charge with the electromagnetic field by the formula α = e²/(4πε0ħc). Its numerical value is approximately 0.0072973525643 ≈ 1/137.035999177, with a relative uncertainty of 1.6×10−10.

The constant was named by Arnold Sommerfeld, who introduced it in 1916 when extending the Bohr model of the atom. The constant quantified the gap in the fine structure of the spectral lines of the hydrogen atom, which had been measured precisely by Michelson and Morley in 1887.

Why the constant should have this value is not understood, but there are a number of ways to measure its value.

Definition

In terms of other physical constants, α may be defined as:

α = e²/(4πε0ħc),

where

e is the elementary charge (1.602176634×10−19 C);
h is the Planck constant (6.62607015×10−34 J⋅Hz−1);
ħ is the reduced Planck constant, ħ = h/2π (1.054571817...×10−34 J⋅s);
c is the speed of light (299792458 m⋅s−1);
ε0 is the electric permittivity of free space (8.8541878188(14)×10−12 F⋅m−1).

Since the 2019 revision of the SI, the only quantity in this list that does not have an exact value in SI units is the electric constant (vacuum permittivity).
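As a quick numerical check, the defining formula can be evaluated directly from the values listed above. The short Python sketch below is illustrative only (not part of the original article); it reproduces the quoted value of α and its reciprocal using the exact SI constants and the measured value of ε0.

```python
# Minimal numerical check of alpha = e^2 / (4*pi*eps0*hbar*c)
# using the SI values quoted above (eps0 is the only measured quantity).
import math

e    = 1.602176634e-19     # elementary charge, C (exact)
h    = 6.62607015e-34      # Planck constant, J*Hz^-1 (exact)
hbar = h / (2 * math.pi)   # reduced Planck constant, J*s
c    = 299792458           # speed of light, m/s (exact)
eps0 = 8.8541878188e-12    # vacuum permittivity, F/m (measured)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)       # ~0.0072973525643
print(1 / alpha)   # ~137.035999...
```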

Alternative systems of units

The electrostatic CGS system implicitly sets 4πε0 = 1, as is common in older physics literature, in which case the expression for the fine-structure constant becomes

α = e²/(ħc).

A normalised system of "natural units" commonly used in high-energy physics sets ε0 = c = ħ = 1, in which case the expression for the fine-structure constant becomes

α = e²/4π.

As such, the fine-structure constant is chiefly a quantity determining (or determined by) the elementary charge: e = √(4πα) ≈ 0.30282212 in terms of such a natural unit of charge.

In the system of atomic units, which sets e = me = ħ = 4πε0 = 1, the expression for the fine-structure constant becomes

α = 1/c.
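A brief sketch, using the value of α quoted above, illustrating the two unit-system relations just stated: the elementary charge in natural units of charge and the speed of light in atomic units. The code is illustrative only.

```python
# Quick check of the unit-system relations above (alpha as quoted in the article).
import math

alpha = 0.0072973525643

# Natural units (eps0 = hbar = c = 1): alpha = e^2 / (4*pi), so
e_natural = math.sqrt(4 * math.pi * alpha)
print(e_natural)   # ~0.30282212

# Atomic units (e = m_e = hbar = 4*pi*eps0 = 1): alpha = 1/c, so the speed
# of light expressed in atomic units is
print(1 / alpha)   # ~137.036 atomic units of velocity
```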

Measurement

Eighth-order Feynman diagrams on electron self-interaction. The arrowed horizontal line represents the electron, the wavy lines are virtual photons, and the circles are virtual electron–positron pairs.

The CODATA recommended value of α is

α = e²/(4πε0ħc) = 0.0072973525643(11).

This has a relative standard uncertainty of 1.6×10−10.

This value for α gives the following value for the vacuum magnetic permeability (magnetic constant): µ0 = 4π × 0.99999999987(16)×10−7 H⋅m−1, with the mean differing from the old defined value by only 0.13 parts per billion, 0.8 times the standard uncertainty (0.16 parts per billion) of its recommended measured value.
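Since μ0 = 2αh/(e²c) follows from the definition of α together with ε0μ0 = 1/c², the quoted consistency with the old defined value can be checked numerically. The sketch below is illustrative and uses only the exact SI constants and the recommended α.

```python
# Consistency check: the vacuum permeability implied by the quoted alpha,
# mu0 = 2*alpha*h / (e^2 * c), compared with the former defined value 4*pi*1e-7.
import math

alpha = 0.0072973525643
h     = 6.62607015e-34     # J*Hz^-1 (exact)
e     = 1.602176634e-19    # C (exact)
c     = 299792458          # m/s (exact)

mu0 = 2 * alpha * h / (e**2 * c)
print(mu0)                         # ~1.25663706e-6 H/m
print(mu0 / (4 * math.pi * 1e-7))  # ~0.99999999987 (cf. the text above)
```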

Historically, the value of the reciprocal of the fine-structure constant is often given. The CODATA recommended value is

1/α = 137.035999177(21).

While the value of α can be determined from estimates of the constants that appear in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure α directly using the quantum Hall effect or the anomalous magnetic moment of the electron. Other methods include the AC Josephson effect and photon recoil in atom interferometry. There is general agreement for the value of α, as measured by these different methods. The preferred methods in 2019 are measurements of electron anomalous magnetic moments and of photon recoil in atom interferometry.

The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α (the magnetic moment of the electron is also referred to as the electron g-factor ge). One of the most precise values of α obtained experimentally (as of 2023) is based on a measurement of ge using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved 12672 tenth-order Feynman diagrams:

1/α = 137.035999166(15).

This measurement of α has a relative standard uncertainty of 1.1×10−10. This value and uncertainty are about the same as the latest experimental results.
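The leading term of that QED relationship is Schwinger's one-loop result, a_e = (ge − 2)/2 ≈ α/2π. The sketch below evaluates it; this is only the lowest-order term, and the higher-order diagrams mentioned above are what bring the prediction to experimental precision.

```python
# Leading (one-loop, Schwinger) QED term of the electron's anomalous
# magnetic moment, a_e = (g_e - 2)/2 ≈ alpha/(2*pi) at lowest order.
import math

alpha = 0.0072973525643
a_e_leading = alpha / (2 * math.pi)
print(a_e_leading)   # ~0.0011614; higher-order diagrams (up to the
                     # tenth-order set mentioned above) correct this at
                     # roughly the 0.1% level to match experiment.
```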

Further refinement of the experimental value was published by the end of 2020, giving the value

1/α = 137.035999206(11),

with a relative accuracy of 8.1×10−11, which has a significant discrepancy from the previous experimental value.

Physical interpretations

The fine-structure constant, α, has several physical interpretations. α is:

  • The ratio of two energies:
    1. the energy needed to overcome the electrostatic repulsion between two electrons a distance of d apart, and
    2. the energy of a single photon of wavelength λ = 2πd (or of angular wavelength d; see Planck relation): the ratio is (e²/4πε0d) / (ħc/d) = e²/4πε0ħc = α.
  • The ratio of the velocity of the electron in the first circular orbit of the Bohr model of the atom, which is (1/4πε0)(e²/ħ), to the speed of light in vacuum, c. This is Sommerfeld's original physical interpretation.
  • α² is the ratio of the potential energy of the electron in the first circular orbit of the Bohr model of the atom to the energy mec² equivalent to the mass of an electron. Using the virial theorem in the Bohr model of the atom, Uel = 2Ukin, which means that Uel = mev² = me(αc)² = α²mec². Essentially this ratio follows from the electron's velocity being v = αc.
  • The two ratios of three characteristic lengths: the classical electron radius re, the reduced Compton wavelength of the electron ƛe, and the Bohr radius a0: re = αƛe = α2a0.
  • In quantum electrodynamics, α is directly related to the coupling constant determining the strength of the interaction between electrons and photons. The theory does not predict its value. Therefore, α must be determined experimentally. In fact, α is one of the empirical parameters in the Standard Model of particle physics, whose value is not determined within the Standard Model.
  • In the electroweak theory unifying the weak interaction with electromagnetism, α is absorbed into two other coupling constants associated with the electroweak gauge fields. In this theory, the electromagnetic interaction is treated as a mixture of interactions associated with the electroweak fields. The strength of the electromagnetic interaction varies with the strength of the energy field.
  • In the fields of electrical engineering and solid-state physics, the fine-structure constant is one fourth the product of the characteristic impedance of free space, Z0 = μ0c, and the conductance quantum, G0 = 2e²/h: α = Z0G0/4. The optical conductivity of graphene for visible frequencies is theoretically given by (π/4)G0, and as a result its light absorption and transmission properties can be expressed in terms of the fine-structure constant alone. The absorption value for normal-incident light on graphene in vacuum would then be given by πα/(1 + πα/2)², or 2.24%, and the transmission by 1/(1 + πα/2)², or 97.75% (experimentally observed to be between 97.6% and 97.8%). The reflection would then be given by π²α²/[4(1 + πα/2)²]. A numerical check of these values appears after this list.
  • The fine-structure constant gives the maximum positive charge of an atomic nucleus that will allow a stable electron orbit around it within the Bohr model (element feynmanium). For an electron orbiting an atomic nucleus with atomic number Z the relation is mev²/r = (1/4πε0)(Ze²/r²). The Heisenberg momentum–position uncertainty relationship of such an electron is just mevr = ħ. The relativistic limiting value for v is c, and so the limiting value for Z is the reciprocal of the fine-structure constant, 137.
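As referenced in the graphene item above, the optical quantities quoted there follow directly from α. A minimal, illustrative numerical check:

```python
# Numerical check of the graphene optics expressions quoted in the list above.
import math

alpha = 0.0072973525643
pa = math.pi * alpha

transmission = 1 / (1 + pa / 2)**2             # ~0.9775 (97.75%)
absorption   = pa / (1 + pa / 2)**2            # ~0.0224 (2.24%)
reflection   = (pa / 2)**2 / (1 + pa / 2)**2   # ~1.3e-4

print(transmission, absorption, reflection)
```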

When perturbation theory is applied to quantum electrodynamics, the resulting perturbative expansions for physical results are expressed as sets of power series in α. Because α is much less than one, higher powers of α are soon unimportant, making the perturbation theory practical in this case. On the other hand, the large value of the corresponding factors in quantum chromodynamics makes calculations involving the strong nuclear force extremely difficult.

Variation with energy scale

In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant α is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron's mass gives a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, 1/137.03600 is the asymptotic value of the fine-structure constant at zero energy. At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective α ≈ 1/127.
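As a rough illustration of this running, the one-loop QED formula with only the electron loop included, 1/α(Q) = 1/α(0) − (2/3π) ln(Q/me), already shows the logarithmic growth; the full Standard Model running over all charged fermions is what brings the value near 1/127 at the Z scale. The sketch below uses this simplified formula and is illustrative only.

```python
# One-loop QED running of alpha with only the electron loop included:
#   1/alpha(Q) = 1/alpha(0) - (2/(3*pi)) * ln(Q/m_e)
# The full Standard Model running (all charged fermions) is what brings
# 1/alpha down to ~127 near the Z mass.
import math

alpha0 = 1 / 137.036
m_e = 0.000511   # electron mass, GeV
Q   = 91.19      # GeV, roughly the Z-boson mass scale

inv_alpha_Q = 1 / alpha0 - (2 / (3 * math.pi)) * math.log(Q / m_e)
print(inv_alpha_Q)   # ~134.5 with the electron loop alone
```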

As the energy scale increases, the strength of the electromagnetic interaction in the Standard Model approaches that of the other two fundamental interactions, a feature important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole – this fact undermines the consistency of quantum electrodynamics beyond perturbative expansions.

History

Sommerfeld memorial at University of Munich

Based on the precise measurement of the hydrogen atom spectrum by Michelson and Morley in 1887, Arnold Sommerfeld extended the Bohr model to include elliptical orbits and relativistic dependence of mass on velocity. He introduced a term for the fine-structure constant in 1916. The first physical interpretation of the fine-structure constant α was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum. Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine-structure of the hydrogenic spectral lines. This constant was not seen as significant until Paul Dirac's linear relativistic wave equation in 1928, which gave the exact fine structure formula.

With the development of quantum electrodynamics (QED) the significance of α has broadened from a spectroscopic phenomenon to a general coupling constant for the electromagnetic field, determining the strength of the interaction between electrons and photons. The term α/2π is engraved on the tombstone of one of the pioneers of QED, Julian Schwinger, referring to his calculation of the anomalous magnetic dipole moment.

History of measurements

Successive values determined for the fine-structure constant
Date α 1/α Sources
1969 Jul 0.007297351(11) 137.03602(21) CODATA 1969
1973 0.0072973461(81) 137.03612(15) CODATA 1973
1987 Jan 0.00729735308(33) 137.0359895(61) CODATA 1986
1998 0.007297352582(27) 137.03599883(51) Kinoshita
2000 Apr 0.007297352533(27) 137.03599976(50) CODATA 1998
2002 0.007297352568(24) 137.03599911(46) CODATA 2002
2007 Jul 0.0072973525700(52) 137.035999070(98) Gabrielse (2007)
2008 Jun 0.0072973525376(50) 137.035999679(94) CODATA 2006
2008 Jul 0.0072973525692(27) 137.035999084(51) Gabrielse (2008), Hanneke (2008)
2010 Dec 0.0072973525717(48) 137.035999037(91) Bouchendira (2010)
2011 Jun 0.0072973525698(24) 137.035999074(44) CODATA 2010
2015 Jun 0.0072973525664(17) 137.035999139(31) CODATA 2014
2017 Jul 0.0072973525657(18) 137.035999150(33) Aoyama et al. (2017)
2018 Dec 0.0072973525713(14) 137.035999046(27) Parker, Yu, et al. (2018)
2019 May 0.0072973525693(11) 137.035999084(21) CODATA 2018
2020 Dec 0.0072973525628(6) 137.035999206(11) Morel et al. (2020)
2022 Dec 0.0072973525643(11) 137.035999177(21) CODATA 2022
2023 Feb 0.0072973525649(8) 137.035999166(15) Fan et al. (2023)

The CODATA values in the above table are computed by averaging other measurements; they are not independent experiments.

Potential variation over time

Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying α has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just α) actually vary.

In the experiments below, Δα represents the change in α over time, which can be computed as Δα = αprev − αnow. If the fine-structure constant really is a constant, then any experiment should find that Δα is zero, or as close to zero as the experiment can measure. Any value far from zero would indicate that α does change over time. So far, most experimental data are consistent with α being constant, up to 10 digits of accuracy.

Past rate of change

The first experimenters to test whether the fine-structure constant might actually vary examined the spectral lines of distant astronomical objects and the products of radioactive decay in the Oklo natural nuclear fission reactor. Their findings were consistent with no variation in the fine-structure constant between these two vastly separated locations and times.

Improved technology at the dawn of the 21st century made it possible to probe the value of α at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in α. Using the Keck telescopes and a data set of 128 quasars at redshifts 0.5 < z < 3, Webb et al. found that their spectra were consistent with a slight increase in α over the last 10–12 billion years. Specifically, they found that

Δα/α = (−5.7 ± 1.0) × 10−6.

In other words, they measured the value to be somewhere between −0.0000047 and −0.0000067. This is a very small value, but the error bars do not actually include zero. This result either indicates that α is not constant or that there is experimental error unaccounted for.
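The quoted interval is simply the one-sigma band around the reported central value; a trivial illustrative check:

```python
# The interval quoted above follows from the reported central value and
# 1-sigma uncertainty, Delta-alpha/alpha = (-5.7 +/- 1.0) x 10^-6.
central, sigma = -5.7e-6, 1.0e-6
print(central - sigma, central + sigma)   # -6.7e-06, -4.7e-06
```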

In 2004, a smaller study of 23 absorption systems by Chand et al., using the Very Large Telescope, found no measurable variation.

However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results.

King et al. have used Markov chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine Δα/α from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for Δα/α for particular models. This suggests that the statistical uncertainties and best estimate for Δα/α stated by Webb et al. and Murphy et al. are robust.

Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that α has changed in the past 2 billion years by 45 parts per billion. They claimed that this finding was "probably accurate to within 20%". Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions have yet to be verified.

In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early universe leaves a unique absorption line imprint in the cosmic microwave background radiation. They proposed using this effect to measure the value of α during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in 109 (4 orders of magnitude better than the current quasar constraints). However, the constraint which can be placed on α is strongly dependent upon effective integration time, going as 1/√t. The European LOFAR radio telescope would only be able to constrain Δα/α to about 0.3%. The collecting area required to constrain Δα/α to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at present.

Present rate of change

In 2008, Rosenband et al. used the frequency ratio of Al+ and Hg+ in single-ion optical atomic clocks to place a very stringent constraint on the present-time temporal variation of α, namely Δα/α = (−1.6±2.3)×10−17 per year. A present-day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed once the universe enters its current dark-energy-dominated epoch.

Spatial variation – Australian dipole

Researchers from Australia have said they had identified a variation of the fine-structure constant across the observable universe.

These results have not been replicated by other researchers. In September and October 2010, after Webb et al. released their research, physicists C. Orzel and S.M. Carroll separately suggested various ways in which Webb's observations might be wrong. Orzel argues that the study may contain wrong data due to subtle differences in the two telescopes. Carroll takes an altogether different approach: he looks at the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, a conclusion Webb et al. had previously stated in their study.

Other research finds no meaningful variation in the fine-structure constant.

Anthropic explanation

The anthropic principle provides an argument as to why the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were very different. For instance, if modern grand unified theories are correct, then α needs to be between around 1/180 and 1/85 for proton decay to be slow enough for life to be possible.

Numerological explanations

As a dimensionless constant which does not seem to be directly related to any mathematical constant, the fine-structure constant has long fascinated physicists.

Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the universe. This led him in 1929 to conjecture that the reciprocal of the fine-structure constant was not approximately but precisely the integer 137. By the 1940s experimental values for 1/α deviated sufficiently from 137 to refute Eddington's arguments.

Physicist Wolfgang Pauli commented on the appearance of certain numbers in physics, including the fine-structure constant, which he also noted approximates the reciprocal of the prime number 137. This constant so intrigued him that he collaborated with psychoanalyst Carl Jung in a quest to understand its significance. Similarly, Max Born believed that if the value of α differed, the universe would degenerate, and thus that α = 1/137 is a law of nature.

Richard Feynman, one of the originators and early developers of the theory of quantum electrodynamics (QED), referred to the fine-structure constant in these terms:

There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.)

Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by humans. You might say the "hand of God" wrote that number, and "we don't know how He pushed His pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out – without putting it in secretly!

Conversely, statistician I. J. Good argued that a numerological explanation would only be acceptable if it could be based on a good theory that is not yet known but "exists" in the sense of a Platonic Ideal.

Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. However, no numerological explanation has ever been accepted by the physics community.

In the late 20th century, multiple physicists, including Stephen Hawking in his 1988 book A Brief History of Time, began exploring the idea of a multiverse, and the fine-structure constant was one of several universal constants that suggested the idea of a fine-tuned universe.

Quotes

For historical reasons, α is known as the fine structure constant. Unfortunately, this name conveys a false impression. We have seen that the charge of an electron is not strictly constant but varies with distance because of quantum effects; hence α must be regarded as a variable, too. The value 1/137 is the asymptotic value of α shown in Fig. 1.5a.

— F. Halzen & A. Martin (1984)


The mystery about α is actually a double mystery: The first mystery – the origin of its numerical value α ≈ 1/137 – has been recognized and discussed for decades. The second mystery – the range of its domain – is generally unrecognized.

— M.H. MacGregor (2007)

When I die my first question to the Devil will be: What is the meaning of the fine structure constant?

— Wolfgang Pauli

Monday, November 10, 2025

Quantum chemistry

From Wikipedia, the free encyclopedia

Quantum chemistry, also called molecular quantum mechanics, is a branch of physical chemistry focused on the application of quantum mechanics to chemical systems, particularly towards the quantum-mechanical calculation of electronic contributions to physical and chemical properties of molecules, materials, and solutions at the atomic level. These calculations rely on systematically applied approximations intended to keep them computationally feasible while still capturing as much information as possible about important contributions to the computed wave functions and to observable properties such as structures, spectra, and thermodynamic properties. Quantum chemistry is also concerned with the computation of quantum effects on molecular dynamics and chemical kinetics.

Chemists rely heavily on spectroscopy through which information regarding the quantization of energy on a molecular scale can be obtained. Common methods are infra-red (IR) spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and scanning probe microscopy. Quantum chemistry may be applied to the prediction and verification of spectroscopic data as well as other experimental data.

Many quantum chemistry studies are focused on the electronic ground state and excited states of individual atoms and molecules as well as the study of reaction pathways and transition states that occur during chemical reactions. Spectroscopic properties may also be predicted. Typically, such studies assume the electronic wave function is adiabatically parameterized by the nuclear positions (i.e., the Born–Oppenheimer approximation). A wide variety of approaches are used, including semi-empirical methods, density functional theory, Hartree–Fock calculations, quantum Monte Carlo methods, and coupled cluster methods.

Understanding electronic structure and molecular dynamics through the development of computational solutions to the Schrödinger equation is a central goal of quantum chemistry. Progress in the field depends on overcoming several challenges, including the need to increase the accuracy of results for small molecular systems and the need to increase the size of molecules that can realistically be treated, which is limited by scaling considerations: the computation time increases as a power of the number of atoms.

History

Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom. However, a 1927 article of Walter Heitler (1904–1981) and Fritz London is often recognized as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. However, prior to this a critical conceptual framework was provided by Gilbert N. Lewis in his 1916 paper The Atom and the Molecule, wherein Lewis developed the first working model of valence electrons. Important contributions were also made by Yoshikatsu Sugiura and S.C. Wang. A series of articles by Linus Pauling, written throughout the 1930s, integrated the work of Heitler, London, Sugiura, Wang, Lewis, and John C. Slater on the concept of valence and its quantum-mechanical basis into a new theoretical framework. Many chemists were introduced to the field of quantum chemistry by Pauling's 1939 text The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry, wherein he summarized this work (referred to widely now as valence bond theory) and explained quantum mechanics in a way which could be followed by chemists. The text soon became a standard text at many universities. In 1937, Hans Hellmann appears to have been the first to publish a book on quantum chemistry, in the Russian and German languages.

In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. In addition to the investigators mentioned above, important progress and critical contributions were made in the early years of this field by Irving Langmuir, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Hans Hellmann, Maria Goeppert Mayer, Erich Hückel, Douglas Hartree, John Lennard-Jones, and Vladimir Fock.

Electronic structure

The electronic structure of an atom or molecule is the quantum state of its electrons. The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian, usually making use of the Born–Oppenheimer (B–O) approximation. This is called determining the electronic structure of the molecule. An exact solution for the non-relativistic Schrödinger equation can only be obtained for the hydrogen atom (though exact solutions for the bound state energies of the hydrogen molecular ion within the B-O approximation have been identified in terms of the generalized Lambert W function). Since all other atomic and molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved analytically and so approximate and/or computational solutions must be sought. The process of seeking computational solutions to these problems is part of the field known as computational chemistry.

Valence bond theory

As mentioned above, Heitler and London's method was extended by Slater and Pauling to become the valence-bond (VB) method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds. It focuses on how the atomic orbitals of an atom combine to give individual chemical bonds when a molecule is formed, incorporating the two key concepts of orbital hybridization and resonance.

Molecular orbital theory

An anti-bonding molecular orbital of butadiene

An alternative approach to valence bond theory was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund–Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This approach is the conceptual basis of the Hartree–Fock method and further post-Hartree–Fock methods.
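As an illustration of such a calculation (not prescribed by the article), the sketch below runs a restricted Hartree–Fock calculation on H2, assuming the open-source PySCF package is installed; the molecule, basis set, and bond length are arbitrary choices for the example.

```python
# A minimal sketch of a Hartree-Fock calculation, assuming the open-source
# PySCF package is installed (any quantum chemistry code would do).
from pyscf import gto, scf

# H2 molecule near its equilibrium bond length, in a small basis set.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

mf = scf.RHF(mol)      # restricted Hartree-Fock (closed shell)
energy = mf.kernel()   # total electronic energy in Hartree
print(energy)          # roughly -1.117 Hartree in this basis
```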

Density functional theory

The Thomas–Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory (DFT). Modern-day DFT uses the Kohn–Sham method, where the density functional is split into four terms: the Kohn–Sham kinetic energy, an external potential, and the exchange and correlation energies. A large part of the focus in developing DFT is on improving the exchange and correlation terms. Though this method is less developed than post-Hartree–Fock methods, its significantly lower computational requirements (scaling typically no worse than n³ with respect to n basis functions, for the pure functionals) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability, together with accuracy often comparable to MP2 and CCSD(T) (post-Hartree–Fock methods), has made it one of the most popular methods in computational chemistry.
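For comparison with the Hartree–Fock sketch above, the same system can be treated with Kohn–Sham DFT, again assuming PySCF is available; the PBE exchange-correlation functional is chosen here purely for illustration.

```python
# A matching sketch of a Kohn-Sham DFT calculation on the same molecule,
# again assuming PySCF; the functional choice is only for illustration.
from pyscf import gto, dft

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

mf = dft.RKS(mol)      # restricted Kohn-Sham
mf.xc = "pbe"          # a common GGA exchange-correlation functional
energy = mf.kernel()
print(energy)          # total energy in Hartree
```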

Chemical dynamics

A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum dynamics, whereas its solution within the semiclassical approximation is called semiclassical dynamics. Purely classical simulations of molecular motion are referred to as molecular dynamics (MD). Another approach to dynamics is a hybrid framework known as mixed quantum-classical dynamics; yet another hybrid framework uses the Feynman path integral formulation to add quantum corrections to molecular dynamics, which is called path integral molecular dynamics. Statistical approaches, using for example classical and quantum Monte Carlo methods, are also possible and are particularly useful for describing equilibrium distributions of states.

Adiabatic chemical dynamics

In adiabatic dynamics, interatomic interactions are represented by single scalar potentials called potential energy surfaces. This is the Born–Oppenheimer approximation introduced by Born and Oppenheimer in 1927. Pioneering applications of this in chemistry were performed by Rice and Ramsperger in 1927 and Kassel in 1928, and generalized into the RRKM theory in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable simple estimates of unimolecular reaction rates from a few characteristics of the potential surface.

Non-adiabatic chemical dynamics

Non-adiabatic dynamics consists of taking into account the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule). The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau–Zener transition. Their formula allows the transition probability between two adiabatic potential curves in the neighborhood of an avoided crossing to be calculated. Spin-forbidden reactions are one type of non-adiabatic reaction in which at least one change in spin state occurs when progressing from reactant to product.
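The Landau–Zener transition probability mentioned above takes the form P = exp(−2πH12²/(ħ|d(E1 − E2)/dt|)), where H12 is the coupling between the two diabatic states and the denominator is the sweep rate of their energy gap. The sketch below evaluates it for hypothetical, purely illustrative parameter values.

```python
# Landau-Zener formula for the probability of a diabatic (non-adiabatic)
# transition at an avoided crossing:
#   P = exp(-2*pi*H12**2 / (hbar * |d(E1 - E2)/dt|))
# The numbers below are purely illustrative.
import math

hbar = 1.054571817e-34   # J*s
H12  = 1.0e-22           # J, hypothetical coupling strength
sweep_rate = 1.0e-9      # J/s, hypothetical rate of change of the gap

P_diabatic  = math.exp(-2 * math.pi * H12**2 / (hbar * sweep_rate))
P_adiabatic = 1 - P_diabatic
print(P_diabatic, P_adiabatic)
```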
