Evolutionary invasion analysis, also known as adaptive dynamics, is a set of mathematical modeling techniques that use differential equations to study the long-term evolution of traits in asexually and sexually reproducing populations. It rests on the following three assumptions about mutation and population dynamics:
Mutations are infrequent. The population can be assumed to be at equilibrium when a new mutant arises.
The number of individuals with the mutant trait is initially negligible in the large, established resident population.
Mutant phenotypes are only slightly different from the resident phenotype.
Evolutionary invasion analysis makes it possible to identify conditions on model parameters
for which the mutant population dies out, replaces the resident
population, and/or coexists with the resident population. Long-term
coexistence of the two phenotypes is known as evolutionary branching. When branching occurs, the mutant establishes itself as a second resident in the environment.
Central to evolutionary invasion analysis is the mutant's invasion fitness. This is a mathematical expression
for the long-term exponential growth rate of the mutant subpopulation
when it is introduced into the resident population in small numbers. If
the invasion fitness is positive (in continuous time), the mutant
population can grow in the environment set by the resident phenotype. If
the invasion fitness is negative, the mutant population swiftly goes
extinct.
Introduction and background
The basic principle of evolution via natural selection was outlined by Charles Darwin in his 1859 book, On the Origin of Species.
Though controversial at the time, the central ideas remain largely unchanged to this day, even though much more is now known about the biological basis of inheritance.
Darwin expressed his arguments verbally, but many attempts have since
then been made to formalise the theory of evolution. The best known are population genetics, which models inheritance at the expense of ecological detail; quantitative genetics, which incorporates quantitative traits influenced by genes at many loci; and evolutionary game theory, which ignores genetic detail but incorporates a high degree of ecological realism, in particular that the success of any given strategy depends on the frequency at which strategies are played in the population, a concept known as frequency dependence.
Adaptive dynamics is a set of techniques developed during the
1990s for understanding the long-term consequences of small mutations in
the traits expressing the phenotype. They link population dynamics to evolutionary dynamics and incorporate and generalise the fundamental idea of frequency-dependent selection from game theory.
Fundamental ideas
Two fundamental ideas of adaptive dynamics are that the resident population is in a dynamical equilibrium when new mutants
appear, and that the eventual fate of such mutants can be inferred from
their initial growth rate when rare in the environment consisting of
the resident. This rate is known as the invasion exponent when measured
as the initial exponential growth rate of mutants, and as the basic reproductive number
when it measures the expected total number of offspring that a mutant
individual produces in a lifetime. It is sometimes called the invasion
fitness of mutants.
To make use of these ideas, a mathematical model must explicitly
incorporate the traits undergoing evolutionary change. The model should
describe both the environment and the population dynamics given the
environment, even if the variable part of the environment consists only
of the demography
of the current population. The invasion exponent can then be
determined. This can be difficult, but once determined, the adaptive
dynamics techniques can be applied independent of the model structure.
Monomorphic evolution
A
population consisting of individuals with the same trait is called
monomorphic. If not explicitly stated otherwise, the trait is assumed to
be a real number, and r and m are the trait value of the monomorphic
resident population and that of an invading mutant, respectively.
Invasion exponent and selection gradient
The invasion exponent $S_r(m)$ is defined as the expected growth rate of an initially rare mutant with trait value m in the environment set by the resident (r), which means the frequency of each phenotype (trait value) whenever this suffices to infer all other aspects of the equilibrium environment, such as the demographic composition and the availability of resources. For each r, the invasion exponent can be thought of as the fitness landscape experienced by an initially rare mutant. The landscape changes with each successful invasion, as is the case in evolutionary game theory, but in contrast with the classical view of evolution as an optimisation process towards ever higher fitness.
We will always assume that the resident is at its demographic attractor, and as a consequence $S_r(r) = 0$ for all r, as otherwise the population would grow indefinitely.
The selection gradient is defined as the slope of the invasion exponent at $m = r$,
\[
D(r) = \left.\frac{\partial S_r(m)}{\partial m}\right|_{m=r}.
\]
If the sign of the selection gradient is positive (negative), mutants with slightly higher (lower) trait values may successfully invade. This follows from the linear approximation
\[
S_r(m) \approx D(r)\,(m - r),
\]
which holds whenever $m \approx r$.
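To make this concrete, the invasion exponent and selection gradient can be evaluated numerically. The sketch below uses a hypothetical Lotka–Volterra competition model with a Gaussian competition kernel and Gaussian carrying capacity — a standard textbook example, not one specified in this article; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical Lotka-Volterra competition model (illustrative assumptions):
# Gaussian carrying capacity K(x) and Gaussian competition kernel a(x, y).
SIGMA_K, SIGMA_A, K0 = 1.0, 0.6, 100.0

def K(x):
    return K0 * np.exp(-x**2 / (2 * SIGMA_K**2))

def a(x, y):
    return np.exp(-(x - y)**2 / (2 * SIGMA_A**2))

def invasion_exponent(m, r):
    """Growth rate S_r(m) of a rare mutant m against a resident r at its equilibrium density K(r)."""
    return 1.0 - a(m, r) * K(r) / K(m)

def selection_gradient(r, h=1e-6):
    """Slope D(r) of S_r(m) at m = r, approximated by a central finite difference."""
    return (invasion_exponent(r + h, r) - invasion_exponent(r - h, r)) / (2 * h)

print(invasion_exponent(0.55, 0.5))   # fitness of a nearby mutant; negative here
print(selection_gradient(0.5))        # negative: selection favours smaller trait values
```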
Pairwise-invasibility plots
The invasion exponent represents the fitness landscape as experienced by a rare mutant. In a large (infinite) population only mutants with trait values m for which $S_r(m)$ is positive are able to successfully invade. The generic outcome of an invasion is that the mutant replaces the resident, and the fitness landscape as experienced by a rare mutant changes. To determine the outcome of the resulting series of invasions, pairwise-invasibility plots (PIPs) are often used. These show, for each resident trait value r, all mutant trait values m for which $S_r(m)$ is positive. Note that $S_r(m)$ is zero on the diagonal $m = r$.
In PIPs the fitness landscapes as experienced by a rare mutant
correspond to the vertical lines where the resident trait value is constant.
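Using the illustrative invasion_exponent function from the sketch above, a pairwise-invasibility plot can be approximated by evaluating the sign of $S_r(m)$ on a grid of resident and mutant trait values (matplotlib is an assumed dependency for plotting).

```python
import numpy as np
import matplotlib.pyplot as plt

# Reuses invasion_exponent() from the previous sketch.
traits = np.linspace(-2, 2, 400)
R, M = np.meshgrid(traits, traits)          # resident on the x-axis, mutant on the y-axis
S = invasion_exponent(M, R)

plt.contourf(R, M, S > 0, levels=[-0.5, 0.5, 1.5], colors=["white", "grey"])
plt.plot(traits, traits, "k--")             # diagonal m = r, where S_r(r) = 0
plt.xlabel("resident trait r")
plt.ylabel("mutant trait m")
plt.title("Pairwise-invasibility plot (grey: S_r(m) > 0)")
plt.show()
```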
Evolutionarily singular strategies
The selection gradient
determines the direction of evolutionary change. If it is positive
(negative) a mutant with a slightly higher (lower) trait-value will
generically invade and replace the resident. But what will happen if the selection gradient $D(r)$ vanishes? Seemingly evolution should come to a halt at such a point. While this is a possible outcome, the general situation is more complex.
Traits or strategies $r^*$ for which $D(r^*) = 0$ are known as evolutionarily singular strategies. Near such points the fitness landscape as experienced by a rare mutant is locally 'flat'. There are three qualitatively different ways in which this can occur. First, a degenerate case similar to the saddle point of a cubic function, where finite evolutionary steps would lead past the local 'flatness'. Second, a fitness maximum, which is known as an evolutionarily stable strategy (ESS) and which, once established, cannot be invaded by nearby mutants. Third, a fitness minimum, where disruptive selection occurs and the population branches into two morphs. This process is known as evolutionary branching.
In a pairwise invasibility plot the singular strategies are found
where the boundary of the region of positive invasion fitness intersects
the diagonal.
Singular strategies can be located and classified once the selection gradient is known. To locate singular strategies, it is sufficient to find the points for which the selection gradient vanishes, i.e. to find $r^*$ such that $D(r^*) = 0$. These can then be classified using the second derivative test from basic calculus. If the second derivative of $S_{r^*}(m)$ evaluated at $m = r^*$ is negative (positive), the strategy represents a local fitness maximum (minimum). Hence, for an evolutionarily stable strategy we have
\[
\left.\frac{\partial^2 S_{r^*}(m)}{\partial m^2}\right|_{m=r^*} < 0.
\]
If this does not hold, the strategy is evolutionarily unstable and, provided that it is also convergence stable, evolutionary branching will eventually occur. For a singular strategy $r^*$ to be convergence stable, monomorphic populations with slightly lower or slightly higher trait values must be invadable by mutants with trait values closer to $r^*$. For this to happen, the selection gradient in a neighbourhood of $r^*$ must be positive for $r < r^*$ and negative for $r > r^*$. This means that the slope of $D(r)$ as a function of r at $r^*$ is negative, or equivalently
\[
D'(r^*) < 0.
\]
The criterion for convergence stability given above can also be
expressed using second derivatives of the invasion exponent, and the
classification can be refined to span more than the simple cases
considered here.
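For the illustrative model introduced earlier, locating and classifying a singular strategy reduces to a root find on the selection gradient followed by the two second-derivative checks. The sketch below approximates both derivatives by finite differences (scipy is an assumed dependency).

```python
from scipy.optimize import brentq

def second_derivative_in_m(r_star, h=1e-4):
    """d^2 S_{r*}(m)/dm^2 at m = r*: negative means a fitness maximum (ESS)."""
    s = lambda m: invasion_exponent(m, r_star)
    return (s(r_star + h) - 2 * s(r_star) + s(r_star - h)) / h**2

def gradient_slope(r_star, h=1e-4):
    """dD(r)/dr at r*: negative means the singular strategy is convergence stable."""
    return (selection_gradient(r_star + h) - selection_gradient(r_star - h)) / (2 * h)

r_star = brentq(selection_gradient, -1.0, 1.0)      # solve D(r*) = 0
ess = second_derivative_in_m(r_star) < 0
convergence_stable = gradient_slope(r_star) < 0
print(r_star,
      "ESS" if ess else "fitness minimum",
      "convergence stable" if convergence_stable else "not convergence stable")
# With SIGMA_A < SIGMA_K the singular point is convergence stable but a fitness
# minimum, i.e. an evolutionary branching point.
```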
Polymorphic evolution
The
normal outcome of a successful invasion is that the mutant replaces the
resident. However, other outcomes are also possible; in particular both
the resident and the mutant may persist, and the population then
becomes dimorphic. Assuming that a trait persists in the population if and only if its expected growth-rate when rare is positive, the condition for coexistence among two traits $r_1$ and $r_2$ is
\[
S_{r_1}(r_2) > 0 \quad \text{and} \quad S_{r_2}(r_1) > 0,
\]
where $r_1$ and $r_2$ are often referred to as morphs.
Such a pair is a protected dimorphism. The set of all protected
dimorphisms is known as the region of coexistence. Graphically, the
region consists of the overlapping parts when a pair-wise invasibility
plot is mirrored over the diagonal.
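Continuing the illustrative sketch, the region of coexistence can be traced numerically by marking all trait pairs that can mutually invade (the grid R, M and the plotting setup are reused from the pairwise-invasibility sketch above).

```python
# Mutual invasibility: a pair (r1, r2) lies in the region of coexistence when
# both S_{r1}(r2) > 0 and S_{r2}(r1) > 0 (a protected dimorphism).
coexist = (invasion_exponent(M, R) > 0) & (invasion_exponent(R, M) > 0)

plt.contourf(R, M, coexist, levels=[-0.5, 0.5, 1.5], colors=["white", "lightblue"])
plt.xlabel("trait $r_1$")
plt.ylabel("trait $r_2$")
plt.title("Region of coexistence (mutual invasibility)")
plt.show()
```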
Invasion exponent and selection gradients in polymorphic populations
The invasion exponent is generalised to dimorphic populations straightforwardly, as the expected growth rate $S_{r_1,r_2}(m)$ of a rare mutant in the environment set by the two morphs $r_1$ and $r_2$. The slope of the local fitness landscape for a mutant close to $r_1$ or $r_2$ is now given by the selection gradients
\[
D_1(r_1, r_2) = \left.\frac{\partial S_{r_1,r_2}(m)}{\partial m}\right|_{m=r_1}
\quad \text{and} \quad
D_2(r_1, r_2) = \left.\frac{\partial S_{r_1,r_2}(m)}{\partial m}\right|_{m=r_2}.
\]
In practice, it is often difficult to determine the dimorphic
selection gradient and invasion exponent analytically, and one often
has to resort to numerical computations.
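As a rough numerical illustration of such a computation, the sketch below solves for the dimorphic equilibrium of the hypothetical Lotka–Volterra model used earlier (reusing its a and K functions) and approximates the dimorphic selection gradients by finite differences.

```python
import numpy as np

def dimorphic_equilibrium(r1, r2):
    """Equilibrium densities (n1, n2) of two coexisting morphs: solve A n = 1."""
    A = np.array([[a(r1, r1) / K(r1), a(r1, r2) / K(r1)],
                  [a(r2, r1) / K(r2), a(r2, r2) / K(r2)]])
    return np.linalg.solve(A, np.ones(2))

def dimorphic_invasion_exponent(m, r1, r2):
    """Growth rate of a rare mutant m in the environment set by morphs r1 and r2."""
    n1, n2 = dimorphic_equilibrium(r1, r2)
    return 1.0 - (a(m, r1) * n1 + a(m, r2) * n2) / K(m)

def dimorphic_selection_gradient(i, r1, r2, h=1e-6):
    """Finite-difference slope of the dimorphic fitness landscape at morph i (0 or 1)."""
    ri = (r1, r2)[i]
    return (dimorphic_invasion_exponent(ri + h, r1, r2)
            - dimorphic_invasion_exponent(ri - h, r1, r2)) / (2 * h)

print(dimorphic_selection_gradient(0, -0.1, 0.1),
      dimorphic_selection_gradient(1, -0.1, 0.1))
```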
Evolutionary branching
The
emergence of protected dimorphism near singular points during the
course of evolution is not unusual, but its significance depends on
whether selection is stabilising or disruptive. In the latter case, the
traits of the two morphs will diverge in a process often referred to as
evolutionary branching. Geritz 1998 presents a compelling
argument that disruptive selection only occurs near fitness minima. To
understand this heuristically, consider a dimorphic population with traits $r_1$ and $r_2$ near a singular point $r^*$. By continuity
\[
S_{r_1,r_2}(m) \approx S_{r^*}(m),
\]
and, since
\[
S_{r_1,r_2}(r_1) = S_{r_1,r_2}(r_2) = 0,
\]
the fitness landscape for the dimorphic population must be a perturbation of that for a monomorphic resident near the singular strategy.
Trait evolution plots
Evolution
after branching is illustrated using trait evolution plots. These show
the region of coexistence, the direction of evolutionary change and
whether points where the selection gradient vanishes are fitness maxima
or minima. Evolution may well lead the dimorphic population outside the region of coexistence, in which case one morph goes extinct and the population once again becomes monomorphic.
Superconductivity is a set of physical properties observed in superconductors: materials where electrical resistance vanishes and magnetic fields are expelled from the material. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered, even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source.
The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes. Like ferromagnetism and atomic spectral lines, superconductivity is a phenomenon which can only be explained by quantum mechanics. It is characterized by the Meissner effect,
the complete cancellation of the magnetic field in the interior of the
superconductor during its transitions into the superconducting state.
The occurrence of the Meissner effect indicates that superconductivity
cannot be understood simply as the idealization of perfect conductivity in classical physics.
In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K (−183 °C). Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply available coolant liquid nitrogen
boils at 77 K (−196 °C) and thus the existence of superconductivity at
higher temperatures than this facilitates many experiments and
applications that are less practical at lower temperatures.
Superconductivity was discovered on April 8, 1911, by Heike
Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. In the same experiment, he also observed the superfluid
transition of helium at 2.2 K, without recognizing its significance.
The precise date and circumstances of the discovery were only
reconstructed a century later, when Onnes's notebook was found. In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.
Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld
discovered that superconductors expelled applied magnetic fields, a
phenomenon which has come to be known as the Meissner effect. In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.
London constitutive equations
The theoretical model that was first conceived for superconductivity was completely classical: it is summarized by London constitutive equations.
It was put forward by the brothers Fritz and Heinz London in 1935,
shortly after the discovery that magnetic fields are expelled from
superconductors. A major triumph of the equations of this theory is
their ability to explain the Meissner effect,
wherein a material exponentially expels all internal magnetic fields as
it crosses the superconducting threshold. By using the London equation,
one can obtain the dependence of the magnetic field inside the
superconductor on the distance to the surface.
The two constitutive equations for a superconductor by London are:
\[
\frac{\partial \mathbf{j}_s}{\partial t} = \frac{n_s e^2}{m}\,\mathbf{E},
\qquad
\nabla \times \mathbf{j}_s = -\frac{n_s e^2}{m}\,\mathbf{B}.
\]
The first equation follows from Newton's second law for superconducting electrons.
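For a rough sense of scale, the penetration depth implied by the London equations, λ = √(m/(μ0 n_s e²)), can be evaluated directly; the carrier density used below is an assumed, order-of-magnitude value rather than a figure quoted in this article.

```python
import math

# Physical constants (SI units)
m_e  = 9.1093837015e-31     # electron mass, kg
e    = 1.602176634e-19      # elementary charge, C
mu_0 = 4e-7 * math.pi       # vacuum permeability, H/m

# Assumed superconducting carrier density (illustrative order of magnitude)
n_s = 1e28                  # carriers per cubic metre

lambda_L = math.sqrt(m_e / (mu_0 * n_s * e**2))
print(f"London penetration depth ~ {lambda_L * 1e9:.0f} nm")   # tens of nanometres
```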
Conventional theories (1950s)
During the 1950s, theoretical condensed matter
physicists arrived at an understanding of "conventional"
superconductivity, through a pair of remarkable and important theories:
the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957).
In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov
showed that Ginzburg–Landau theory predicts the division of
superconductors into the two categories now referred to as Type I and
Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for
their work (Landau had received the 1962 Nobel Prize for other work, and
died in 1968). The four-dimensional extension of the Ginzburg–Landau
theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.
Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron–phonon interaction as the microscopic mechanism responsible for superconductivity.
The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer.
This BCS theory explained the superconducting current as a superfluid
of Cooper pairs, pairs of electrons interacting through the exchange of
phonons. For this work, the authors were awarded the Nobel Prize in
1972.
The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov
showed that the BCS wavefunction, which had originally been derived
from a variational argument, could be obtained using a canonical
transformation of the electronic Hamiltonian. In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature.
Generalizations of BCS theory for conventional superconductors form the basis for the understanding of the phenomenon of superfluidity, because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.
Further history
The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron.
Two superconductors with greatly different values of the critical
magnetic field are combined to produce a fast, simple switch for
computer elements.
Soon after discovering superconductivity in 1911, Kamerlingh
Onnes attempted to make an electromagnet with superconducting windings
but found that relatively low magnetic fields destroyed
superconductivity in the materials he investigated. Much later, in 1955,
G. B. Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. Then, in 1961, J. E. Kunzler, E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that, at 4.2 kelvin, niobium–tin,
a compound consisting of three parts niobium and one part tin, was
capable of supporting a current density of more than 100,000 amperes per
square centimeter in a magnetic field of 8.8 tesla. Despite being
brittle and difficult to fabricate, niobium–tin has since proved
extremely useful in supermagnets generating magnetic fields as high as
20 tesla. In 1962, T. G. Berlincourt and R. R. Hake
discovered that more ductile alloys of niobium and titanium are
suitable for applications up to 10 tesla. Promptly thereafter,
commercial production of niobium–titanium supermagnet wire commenced at Westinghouse Electric Corporation and at Wah Chang Corporation.
Although niobium–titanium boasts less-impressive superconducting
properties than those of niobium–tin, niobium–titanium has,
nevertheless, become the most widely used "workhorse" supermagnet
material, in large measure a consequence of its very high ductility
and ease of fabrication. However, both niobium–tin and niobium–titanium
find wide application in MRI medical imagers, bending and focusing
magnets for enormous high-energy-particle accelerators, and a host of
other applications. Conectus, a European superconductivity consortium,
estimated that in 2014, global economic activity for which
superconductivity was indispensable amounted to about five billion
euros, with MRI systems accounting for about 80% of that total.
In 1962, Josephson
made the important theoretical prediction that a supercurrent can flow
between two pieces of superconductor separated by a thin layer of
insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
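As a quick numerical check of the quoted relation, the flux quantum can be computed directly from the defining constants (values taken from scipy.constants):

```python
from scipy.constants import h, e   # Planck constant and elementary charge

phi_0 = h / (2 * e)                # magnetic flux quantum
print(f"Magnetic flux quantum: {phi_0:.4e} Wb")   # ~2.0678e-15 Wb
```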
In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. The first development and study of superconducting Bose–Einstein condensate (BEC) in 2020 suggests that there is a "smooth transition between" BEC and Bardeen–Cooper–Schrieffer regimes.
There are many criteria by which superconductors are classified. The most common are:
Response to a magnetic field
A superconductor can be Type I, meaning it has a single critical field,
above which all superconductivity is lost and below which the magnetic
field is completely expelled from the superconductor; or Type II, meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points. These points are called vortices.
Furthermore, in multicomponent superconductors it is possible to have a
combination of the two behaviours. In that case the superconductor is
of Type-1.5.
A superconductor is generally considered high-temperature if it reaches a superconducting state above a temperature of 30 K (−243.15 °C), as in the initial discovery by Georg Bednorz and K. Alex Müller. It may also reference materials that transition to superconductivity when cooled using liquid nitrogen – that is, at only Tc > 77 K,
although this is generally used only to emphasize that liquid nitrogen
coolant is sufficient. Low temperature superconductors refer to
materials with a critical temperature below 30 K, and are cooled mainly
by liquid helium (Tc > 4.2 K). One exception to this rule is the iron pnictide
group of superconductors which display behaviour and properties typical
of high-temperature superconductors, yet some of the group have
critical temperatures below 30 K.
Several physical properties of superconductors vary from material to
material, such as the critical temperature, the value of the superconducting gap,
the critical magnetic field, and the critical current density at which
superconductivity is destroyed. On the other hand, there is a class of
properties that are independent of the underlying material. The Meissner
effect, the quantization of the magnetic flux
or permanent currents, i.e. the state of zero resistance, are the most
important examples. The existence of these "universal" properties is
rooted in the nature of the broken symmetry of the superconductor and the emergence of off-diagonal long range order. Superconductivity is a thermodynamic phase,
and thus possesses certain distinguishing properties which are largely
independent of microscopic details. Off diagonal long range order is
closely connected to the formation of Cooper pairs.
Zero electrical DC resistance
The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero.
Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI
machines. Experiments have demonstrated that currents in
superconducting coils can persist for years without any measurable
degradation. Experimental evidence points to a lifetime of at least
100,000 years. Theoretical estimates for the lifetime of a persistent
current can exceed the estimated lifetime of the universe, depending on
the wire geometry and the temperature.
In practice, currents injected in superconducting coils persisted for
28 years, 7 months, 27 days in a superconducting gravimeter in Belgium,
from August 4, 1995 until March 31, 2024.
In such instruments, the measurement is based on the monitoring of the
levitation of a superconducting niobium sphere with a mass of four
grams.
In a normal conductor, an electric current may be visualized as a fluid of electrons
moving across a heavy ionic lattice. The electrons are constantly
colliding with the ions in the lattice, and during each collision some
of the energy carried by the current is absorbed by the lattice and
converted into heat, which is essentially the vibrational kinetic energy
of the lattice ions. As a result, the energy carried by the current is
constantly being dissipated. This is the phenomenon of electrical
resistance and Joule heating.
The situation is different in a superconductor. In a conventional
superconductor, the electronic fluid cannot be resolved into individual
electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. This pairing is very weak, and small thermal vibrations can fracture the bond. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is the Boltzmann constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
In the class of superconductors known as type II superconductors, including all known high-temperature superconductors,
an extremely low but non-zero resistivity appears at temperatures not
too far below the nominal superconducting transition when an electric
current is applied in conjunction with a strong magnetic field, which
may be caused by the electric current. This is due to the motion of magnetic vortices
in the electronic superfluid, which dissipates some of the energy
carried by the current. If the current is sufficiently small, the
vortices are stationary, and the resistivity vanishes. The resistance
due to this effect is minuscule compared with that of
non-superconducting materials, but must be taken into account in
sensitive experiments. However, as the temperature decreases far enough
below the nominal superconducting transition, these vortices can become
frozen into a disordered but stationary phase known as a "vortex glass".
Below this vortex glass transition temperature, the resistance of the
material becomes truly zero.
Phase transition
In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc.
The value of this critical temperature varies from material to
material. Conventional superconductors usually have critical
temperatures ranging from around 20 K to less than 1 K. Solid mercury,
for example, has a critical temperature of 4.2 K. As of 2015, the
highest critical temperature found for a conventional superconductor is
203 K for H2S, although high pressures of approximately 90 gigapascals were required. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7,
one of the first cuprate superconductors to be discovered, has a
critical temperature above 90 K, and mercury-based cuprates have been
found with critical temperatures in excess of 130 K. The basic physical
mechanism responsible for the high critical temperature is not yet
clear. However, it is clear that a two-electron pairing is involved,
although the nature of the pairing (s wave vs. d wave) remains controversial.
Similarly, at a fixed temperature below the critical temperature,
superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy
of the superconducting phase increases quadratically with the magnetic
field while the free energy of the normal phase is roughly independent
of the magnetic field. If the material superconducts in the absence of a
field, then the superconducting phase free energy is lower than that of
the normal phase and so for some finite value of the magnetic field
(proportional to the square root of the difference of the free energies
at zero magnetic field) the two free energies will be equal and a phase
transition to the normal phase will occur. More generally, a higher
temperature and a stronger magnetic field lead to a smaller fraction of
electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
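The argument can be written out explicitly. In the standard thermodynamic treatment (a sketch of the textbook estimate, not a derivation given in this article), expelling the field costs the superconducting phase a field energy growing quadratically with H, while the normal-phase free energy is essentially field-independent, so the two phases have equal free energy at the critical field:
\[
G_s(T,H) \approx G_s(T,0) + \tfrac{1}{2}\mu_0 H^2 V, \qquad G_n(T,H) \approx G_n(T,0),
\]
\[
\tfrac{1}{2}\mu_0 H_c^2(T)\,V = G_n(T,0) - G_s(T,0)
\quad\Longrightarrow\quad
H_c(T) \propto \sqrt{G_n(T,0) - G_s(T,0)}.
\]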
The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity
is proportional to the temperature in the normal (non-superconducting)
regime. At the superconducting transition, it suffers a discontinuous
jump and thereafter ceases to be linear. At low temperatures, it varies
instead as e−α/T for some constant, α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.
The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat.
However, in the presence of an external magnetic field there is latent
heat, because the superconducting phase has a lower entropy below the
critical temperature than the normal phase. It has been experimentally
demonstrated
that, as a consequence, when the magnetic field is increased beyond the
critical field, the resulting phase transition leads to a decrease in
the temperature of the superconducting material.
Calculations in the 1970s suggested that it may actually be
weakly first-order due to the effect of long-range fluctuations in the
electromagnetic field. In the 1980s it was shown theoretically with the
help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point. The results were strongly supported by Monte Carlo computer simulations.
When a superconductor is placed in a weak external magnetic field H,
and cooled below its transition temperature, the magnetic field is
ejected. The Meissner effect does not cause the field to be completely ejected; instead, the field penetrates the superconductor, but only to a very small distance, characterized by a parameter λ, called the London penetration depth, decaying exponentially to zero within the bulk of the material. The
Meissner effect is a defining characteristic of superconductivity. For
most superconductors, the London penetration depth is on the order of
100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing
magnetic field is applied to a conductor, it will induce an electric
current in the conductor that creates an opposing magnetic field. In a
perfect conductor, an arbitrarily large current can be induced, and the
resulting magnetic field exactly cancels the applied field.
The Meissner effect is distinct from this – it is the spontaneous
expulsion that occurs during transition to superconductivity. Suppose
we have a material in its normal state, containing a constant internal
magnetic field. When the material is cooled below the critical
temperature, we would observe the abrupt expulsion of the internal
magnetic field, which we would not expect based on Lenz's law.
The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided
\[
\nabla^2 \mathbf{H} = \frac{\mathbf{H}}{\lambda^2},
\]
where H is the magnetic field and λ is the London penetration depth.
This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
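A minimal numerical sketch of this exponential screening, taking the representative penetration depth of about 100 nm mentioned later in the text and an arbitrary assumed surface field:

```python
import numpy as np

lambda_L = 100e-9        # London penetration depth, ~100 nm (typical value)
B_surface = 0.01         # field at the surface, tesla (illustrative)

depth = np.array([0, 50e-9, 100e-9, 300e-9, 1e-6])    # distance from the surface, m
B = B_surface * np.exp(-depth / lambda_L)              # London solution B(x) = B(0) e^(-x/lambda)

for x, b in zip(depth, B):
    print(f"{x * 1e9:7.0f} nm : B = {b:.2e} T")
# A micrometre inside the material the field is already suppressed by a factor ~e^-10.
```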
A superconductor with little or no magnetic field within it is
said to be in the Meissner state. The Meissner state breaks down when
the applied magnetic field is too large. Superconductors can be divided
into two classes according to how this breakdown occurs. In Type I
superconductors, superconductivity is abruptly destroyed when the
strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern
of regions of normal material carrying a magnetic field mixed with
regions of superconducting material containing no field. In Type II
superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux
penetrates the material, but there remains no resistance to the flow of
electric current as long as the current is not too large. At a second
critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
Conversely, a spinning superconductor generates a magnetic field,
precisely aligned with the spin axis. The effect, the London moment, was
put to good use in Gravity Probe B.
This experiment measured the magnetic fields of four superconducting
gyroscopes to determine their spin axes. This was critical to the
experiment since it is one of the few ways to accurately determine the
spin axis of an otherwise featureless sphere.
Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in lanthanum barium copper oxide (LBCO), a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature above 90 K.
This temperature jump is of particular engineering significance,
since it allows liquid nitrogen as a refrigerant, replacing liquid
helium.
Liquid nitrogen can be produced relatively cheaply, even on-site. The
higher temperatures additionally help to avoid some of the problems that
arise at liquid helium temperatures, such as the formation of plugs of
frozen air that can block cryogenic lines and cause unanticipated and
potentially hazardous pressure buildup.
Many other cuprate superconductors have since been discovered,
and the theory of superconductivity in these materials is one of the
major outstanding challenges of theoretical condensed matter physics. There are currently two main hypotheses – the resonating-valence-bond theory and spin fluctuation, the latter having the most support in the research community.
The second hypothesis proposed that electron pairing in
high-temperature superconductors is mediated by short-range spin waves
known as paramagnons.
In 2008, holographic superconductivity, which uses holographic duality or AdS/CFT correspondence
theory, was proposed by Gubser, Hartnoll, Herzog, and Horowitz, as a
possible explanation of high-temperature superconductivity in certain
materials.
In February 2008, an iron-based family of high-temperature superconductors was discovered. Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1−xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K.
In 2014 and 2015, hydrogen sulfide (H2S)
at extremely high pressures (around 150 gigapascals) was first
predicted and then confirmed to be a high-temperature superconductor
with a transition temperature of 80 K. Additionally, in 2019 it was discovered that lanthanum hydride (LaH10) becomes a superconductor at 250 K under a pressure of 170 gigapascals.
In 2018, a research team from the Department of Physics, Massachusetts Institute of Technology, discovered superconductivity in bilayer graphene with one layer twisted at an angle
of approximately 1.1 degrees with cooling and applying a small electric
charge. Although the experiments were not carried out at high temperatures, the results correlate less with classical and more with high-temperature superconductors, given that no foreign atoms need to be introduced. The superconductivity effect came about as a result of electrons twisted into a vortex between the graphene layers, called "skyrmions".
These act as a single particle and can pair up across the graphene's
layers, leading to the basic conditions required for superconductivity.
In 2020, a room-temperature superconductor
(critical temperature 288 K) made from hydrogen, carbon and sulfur
under pressures of around 270 gigapascals was described in a paper in Nature. However, in 2022 the article was retracted
by the editors because the validity of background subtraction
procedures had been called into question. All nine authors maintain that
the raw data strongly support the main claims of the paper.
On 31 December 2023, "Global Room-Temperature Superconductivity in Graphite" was published in the journal "Advanced Quantum Technologies", claiming to demonstrate superconductivity at room temperature and ambient pressure in highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects.
Superconductors are promising candidate materials for devising
fundamental circuit elements of electronic, spintronic, and quantum
technologies. One such example is a superconducting diode, in which supercurrent flows along one direction only, which promises dissipationless superconducting and semiconducting–superconducting hybrid technologies.
Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, the beam-steering magnets used in particle accelerators and plasma confining magnets in some tokamaks.
They can also be used for magnetic separation, where weakly magnetic
particles are extracted from a background of less or non-magnetic
particles, as in the pigment
industries. They can also be used in large wind turbines to overcome
the restrictions imposed by high electrical currents, with an industrial
grade 3.6 megawatt superconducting windmill generator having been
tested successfully in Denmark.
Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Superconducting photon detectors can be realised in a variety of device configurations. Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer.
The large resistance change at the transition from the normal to the
superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials. Superconducting nanowire single-photon detectors offer high speed, low noise single-photon detection and have been employed widely in advanced photon-counting applications.
Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines
the lower weight and volume of superconducting generators could lead to
savings in construction and tower costs, offsetting the higher costs
for the generator and lowering the total levelized cost of electricity (LCOE).
Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, compact fusion power devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, enhancing spintronic devices with superconducting materials, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current. Compared to traditional power lines, superconducting transmission lines
are more efficient and require only a fraction of the space, which
would not only lead to a better environmental performance but could also
improve public acceptance for expansion of the electric grid. Another attractive industrial aspect is the ability for high power transmission at lower voltages.
Advancements in the efficiency of cooling systems and use of cheap
coolants such as liquid nitrogen have also significantly decreased
cooling costs needed for superconductivity.