
Monday, September 8, 2014

Calculus

Calculus

From Wikipedia, the free encyclopedia
Calculus is the mathematical study of change,[1] in the same way that geometry is the study of shape and algebra is the study of operations and their application to solving equations. It has two major branches, differential calculus (concerning rates of change and slopes of curves), and integral calculus (concerning accumulation of quantities and the areas under and between curves); these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Generally considered to have been founded in the 17th century by Isaac Newton and Gottfried Leibniz, today calculus has widespread uses in science, engineering and economics and can solve many problems that algebra alone cannot.

Calculus is a part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits, broadly called mathematical analysis. Calculus has historically been called "the calculus of infinitesimals", or "infinitesimal calculus". The word "calculus" comes from Latin (calculus) and refers to a small stone used for counting. More generally, calculus (plural calculi) refers to any method or system of calculation guided by the symbolic manipulation of expressions. Some examples of other well-known calculi are propositional calculus, calculus of variations, lambda calculus, and process calculus.

History

Modern calculus was developed in 17th century Europe by Isaac Newton and Gottfried Wilhelm Leibniz, but elements of it have appeared in ancient Greece, China, medieval Europe, India, and the Middle East.

Ancient

The ancient period introduced some of the ideas that led to integral calculus, but does not seem to have developed these ideas in a rigorous and systematic way. Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (c. 1820 BC), but the formulas are simple instructions, with no indication as to method, and some of them lack major components.[2] From the age of Greek mathematics, Eudoxus (c. 408−355 BC) used the method of exhaustion, which foreshadows the concept of the limit, to calculate areas and volumes, while Archimedes (c. 287−212 BC) developed this idea further, inventing heuristics which resemble the methods of integral calculus.[3] The method of exhaustion was later reinvented in China by Liu Hui in the 3rd century AD in order to find the area of a circle.[4] In the 5th century AD, Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere.[5]

Medieval

Alexander the Great's invasion of northern India brought Greek trigonometry, using the chord, to India where the sine, cosine, and tangent were conceived. Indian mathematicians gave a semi-rigorous method of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration, where the formulas for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.[6] In the 14th century, Indian mathematician Madhava of Sangamagrama and the Kerala school of astronomy and mathematics stated components of calculus such as the Taylor series and infinite series approximations.[7] However, they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today".[6]

Modern

In Europe, the foundational work was a treatise due to Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise was lost until the early part of the twentieth century. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.

The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term.[9] The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving the second fundamental theorem of calculus around 1670.

Isaac Newton developed the use of calculus in his laws of motion and gravitation.

The product rule and chain rule, the notion of higher derivatives, Taylor series, and analytical functions were introduced by Isaac Newton in an idiosyncratic notation which he used to solve problems of mathematical physics.[10] In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.

Gottfried Wilhelm Leibniz was the first to publish his results on the development of calculus.

These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton.[11] He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts.
Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics and Leibniz developed much of the notation used in calculus today. The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, second and higher derivatives, and the notion of an approximating polynomial series. By Newton's time, the fundamental theorem of calculus was known.

When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his Nova Methodus pro Maximis et Minimis first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. Today, both Newton and Leibniz are given credit for developing calculus independently. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions".

Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on finite and infinitesimal analysis was written in 1748 by Maria Gaetana Agnesi.[12]

Foundations

In calculus, foundations refers to the rigorous development of a subject from precise axioms and definitions. In early calculus the use of infinitesimal quantities was thought unrigorous, and was fiercely criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as "the ghosts of departed quantities" in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.

Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities.[13] The foundations of differential and integral calculus had been laid. In Cauchy's writing (see Cours d'Analyse), we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work Weierstrass formalized the concept of limit and eliminated infinitesimals. Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to Euclidean space and the complex plane.

In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.

Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus.

Significance

While many of the ideas of calculus had been developed earlier in Egypt, Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Isaac Newton and Gottfried Wilhelm Leibniz built on the work of earlier mathematicians to introduce its basic principles. The development of calculus was built on earlier concepts of instantaneous motion and area underneath curves.

Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization. Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure. More advanced applications include power series and Fourier series.

Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, which resolve the paradoxes.

Principles

Limits and infinitesimals

Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like numbers but which are, in some sense, "infinitely small". An infinitesimal number dx could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and less than any positive real number. Any integer multiple of an infinitesimal is still infinitely small, i.e., infinitesimals do not satisfy the Archimedean property.
From this point of view, calculus is a collection of techniques for manipulating infinitesimals. This approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. However, the concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.

In the 19th century, infinitesimals were replaced by the epsilon, delta approach to limits. Limits describe the value of a function at a certain input in terms of its values at nearby input. They capture small-scale behavior in the context of the real number system. In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by very small numbers, and the infinitely small behavior of the function is found by taking the limiting behavior for smaller and smaller numbers. Limits are the easiest way to provide rigorous foundations for calculus, and for this reason they are the standard approach.
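
As a brief added illustration of the (ε, δ) idea (a sketch, not part of the original article), consider the claim that 2x tends to 6 as x tends to 3:
\lim_{x \to 3} 2x = 6.
Given any ε > 0, choosing δ = ε/2 guarantees that whenever 0 < |x − 3| < δ, we have |2x − 6| = 2|x − 3| < 2δ = ε, which is exactly what the definition of the limit requires.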

Differential calculus


Tangent line at (x, f(x)). The derivative f′(x) of a curve at a point is the slope (rise over run) of the line tangent to that curve at that point.

Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In mathematical jargon, the derivative is a linear operator which inputs a function and outputs a second function. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. (The function it produces turns out to be the doubling function.)

The most common symbol for a derivative is an apostrophe-like mark called prime. Thus, the derivative of the function f is f′, pronounced "f prime." For instance, if f(x) = x^2 is the squaring function, then f′(x) = 2x is its derivative, the doubling function.

If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes a time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.

If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:
m= \frac{\text{rise}}{\text{run}}= \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.
This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is

m = \frac{f(a+h) - f(a)}{(a+h) - a} = \frac{f(a+h) - f(a)}{h}.
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is impossible. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero:
\lim_{h \to 0}{f(a+h) - f(a)\over{h}}.
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.

Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x^2 be the squaring function.

The derivative f′(x) of a curve at a point is the slope of the line tangent to that curve at that point. This slope is determined by considering the limiting value of the slopes of secant lines. Here the function involved (drawn in red) is f(x) = x^3 − x. The tangent line (in green) which passes through the point (−3/2, −15/8) has a slope of 23/4. Note that the vertical and horizontal scales in this image are different.
\begin{align}f'(3) &=\lim_{h \to 0}{(3+h)^2 - 3^2\over{h}} \\
&=\lim_{h \to 0}{9 + 6h + h^2 - 9\over{h}} \\
&=\lim_{h \to 0}{6h + h^2\over{h}} \\
&=\lim_{h \to 0} (6 + h) \\
&= 6.
\end{align}
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function, or just the derivative of the squaring function for short. A similar computation to the one above shows that the derivative of the squaring function is the doubling function.
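
A short numerical sketch (added for illustration, not part of the original article) shows the difference quotients closing in on 6 as h shrinks:

# Difference quotients (f(3+h) - f(3)) / h for the squaring function approach
# the derivative value 6 as h shrinks toward zero (illustrative sketch).
def f(x):
    return x ** 2          # the squaring function

for h in [0.1, 0.01, 0.001, 0.0001]:
    slope = (f(3 + h) - f(3)) / h
    print(h, slope)        # 6.1, 6.01, 6.001, 6.0001 (approximately)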

Leibniz notation

A common notation, introduced by Leibniz, for the derivative in the example above is

\begin{align}
y&=x^2 \\
\frac{dy}{dx}&=2x.
\end{align}
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:

\frac{d}{dx}(x^2)=2x.
In this usage, the dx in the denominator is read as "with respect to x". Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
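
As a hedged illustration of d/dx acting as an operator (this uses the SymPy library, which the article does not mention), the squaring function can be differentiated symbolically:

# Apply the differentiation operator d/dx to x**2 symbolically
# (illustrative sketch; assumes SymPy is installed).
import sympy

x = sympy.symbols('x')
derivative = sympy.diff(x**2, x)   # d/dx of x^2
print(derivative)                  # prints 2*x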

Integral calculus

Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. In technical language, integral calculus studies two related linear operators.
The indefinite integral is the antiderivative, the inverse operation to the derivative. F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.)

The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral is the limit of a sum of areas of rectangles, called a Riemann sum.

A motivating example is the distance traveled in a given time.
\mathrm{Distance} = \mathrm{Speed} \cdot \mathrm{Time}
If the speed is constant, only multiplication is needed, but if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.


Constant Velocity

Integration can be thought of as measuring the area under a curve, defined by f(x), between two points (here a and b).

When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given time period. If f(x) in the diagram on the right represents speed as it varies over time, the distance traveled (between the times represented by a and b) is the area of the shaded region s.

To approximate that area, an intuitive method would be to divide up the distance between a and b into a number of equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer we need to take a limit as Δx approaches zero.
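
A short code sketch (added here for illustration, not part of the original article) makes the rectangle construction concrete: the interval from a to b is split into n equal pieces and the value of f at the left endpoint of each piece supplies the height h.

# Left-endpoint Riemann sum approximating the area under f between a and b
# (illustrative sketch; function and variable names are arbitrary).
def riemann_sum(f, a, b, n):
    dx = (b - a) / n                   # width of each segment (Δx)
    total = 0.0
    for i in range(n):
        x = a + i * dx                 # left endpoint of the i-th segment
        total += f(x) * dx             # rectangle area: height f(x) times Δx
    return total

# Example: the area under f(x) = 2x between 0 and 3 is exactly 9; the sum
# approaches 9 as n grows.
print(riemann_sum(lambda x: 2 * x, 0, 3, 10))     # 8.1
print(riemann_sum(lambda x: 2 * x, 0, 3, 1000))   # 8.991
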
The symbol of integration is \int \,, an elongated S (the S stands for "sum"). The definite integral is written as:
\int_a^b f(x)\, dx.
and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles, so that their width Δx becomes the infinitesimally small dx. In a formulation of the calculus based on limits, the notation
\int_a^b \cdots\, dx
is to be understood as an operator that takes a function as an input and gives a number, the area, as an output. The terminating differential, dx, is not a number, and is not being multiplied by f(x), although, serving as a reminder of the Δx limit definition, it can be treated as such in symbolic manipulations of the integral. Formally, the differential indicates the variable over which the function is integrated and serves as a closing bracket for the integration operator.

The indefinite integral, or antiderivative, is written:
\int f(x)\, dx.
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is actually a family of functions differing only by a constant. Since the derivative of the function y = x^2 + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by:
\int 2x\, dx = x^2 + C.
The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.

Fundamental theorem

The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then
\int_{a}^{b} f(x)\,dx = F(b) - F(a).
Furthermore, for every x in the interval (a, b),
\frac{d}{dx}\int_a^x f(t)\, dt = f(x).
This realization, made by both Newton and Leibniz, who based their results on earlier work by Isaac Barrow, was key to the massive proliferation of analytic results after their work became known. The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulas for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives, and are ubiquitous in the sciences.
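
As a short worked example (added for illustration), the theorem turns the area computation from the integration section into simple arithmetic: since F(x) = x^2 is an antiderivative of f(x) = 2x,
\int_0^3 2x\, dx = F(3) - F(0) = 9 - 0 = 9,
which agrees with the limit of the Riemann sums for the area under f(x) = 2x between 0 and 3.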

Applications


The logarithmic spiral of the Nautilus shell is a classical image used to depict the growth and change related to calculus

Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.

Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, as well as the total energy of an object within a conservative field can be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion: historically stated, it expressly uses the term "rate of change", which refers to the derivative: the rate of change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as force = mass × acceleration, it involves differential calculus because acceleration is the time derivative of velocity, or the second time derivative of trajectory or spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
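
In the notation of the preceding sections this reads (an added restatement for illustration, assuming the mass m is constant):
F = \frac{dp}{dt} = m\frac{dv}{dt} = m\frac{d^2 x}{dt^2},
so knowing the force, and hence the acceleration, one can integrate twice with respect to time to recover the position x(t); this is the sense in which calculus derives the object's path from how it is accelerating.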

Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes.

Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or it can be used in probability theory to determine the probability of a continuous random variable from an assumed density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points.

Green's Theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.

Discrete Green's Theorem, which gives the relationship between a double integral of a function around a simple closed rectangular curve C and a linear combination of the antiderivative's values at corner points along the edge of the curve, allows fast calculation of sums of values in rectangular domains. For example, it can be used to efficiently calculate sums of rectangular domains in images, in order to rapidly extract features and detect objects; see also the summed-area table algorithm.

In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies.

In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.

Calculus is also used to find approximate solutions to equations; in practice it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero gravity environments.
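
A minimal sketch of Newton's method (added for illustration; the example function is arbitrary) shows the idea of using the derivative to refine a guess for a root:

# Newton's method for root finding: x_{n+1} = x_n - f(x_n)/f'(x_n).
# Illustrative sketch applied to f(x) = x^2 - 2, whose positive root is sqrt(2).
def newton(f, fprime, x0, steps=6):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)   # one Newton update
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)   # approximately 1.4142135623730951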

Varieties

Over the years, many reformulations of calculus have been investigated for different purposes.

Non-standard calculus

Imprecise calculations with infinitesimals were widely replaced with the rigorous (ε, δ)-definition of limit starting in the 1870s. Meanwhile, calculations with infinitesimals persisted and often led to correct results. This led Abraham Robinson to investigate whether it was possible to develop a number system with infinitesimal quantities over which the theorems of calculus were still valid. In 1960, building upon the work of Edwin Hewitt and Jerzy Łoś, he succeeded in developing non-standard analysis. The theory of non-standard analysis is rich enough to be applied in many branches of mathematics. As such, books and articles dedicated solely to the traditional theorems of calculus often go by the title non-standard calculus.

Smooth infinitesimal analysis

This is another reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold.

Constructive analysis

Constructive mathematics is a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. As such, constructive mathematics also rejects the law of excluded middle. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.

Differential equation (requires basic understanding of calculus)

Differential equation

From Wikipedia, the free encyclopedia

Visualization of heat transfer in a pump casing, created by solving the heat equation. Heat is being generated internally in the casing and being cooled at the boundary, providing a steady state temperature distribution.

A differential equation is a mathematical equation that relates some function of one or more variables with its derivatives. Differential equations arise whenever a deterministic relation involving some continuously varying quantities (modeled by functions) and their rates of change in space and/or time (expressed as derivatives) is known or postulated. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

Differential equations are mathematically studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form. If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.

Example

For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one (given the position, velocity, acceleration and various forces acting on the body) to express these variables dynamically as a differential equation for the unknown position of the body as a function of time.

In some cases, this differential equation (called an equation of motion) may be solved explicitly.
An example of modelling a real world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the acceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
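
Written out under the stated assumptions (constant gravity g, air resistance proportional to velocity with a positive constant k; the symbols m, g, and k are introduced here for illustration), the model is
m\frac{dv}{dt} = mg - kv,
and the solution with initial condition v(0) = 0 is
v(t) = \frac{mg}{k}\left(1 - e^{-kt/m}\right),
so the velocity rises toward the terminal value mg/k rather than growing without bound.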

Directions of study

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions.
Differential equations play an important role in modelling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.

Mathematicians also study weak solutions (relying on weak derivatives), which are types of solutions that do not have to be differentiable everywhere. This extension is often necessary for solutions to exist.

The study of the stability of solutions of differential equations is known as stability theory.

Nomenclature

The theory of differential equations is well developed and the methods used to study them vary significantly with the type of the equation.

Ordinary and partial

  • An ordinary differential equation (ODE) is a differential equation in which the unknown function (also known as the dependent variable) is a function of a single independent variable. In the simplest form, the unknown function is a real or complex valued function, but more generally, it may be vector-valued or matrix-valued: this corresponds to considering a system of ordinary differential equations for a single function.

Ordinary differential equations are further classified according to the order of the highest derivative of the dependent variable with respect to the independent variable appearing in the equation. The most important cases for applications are first-order and second-order differential equations. For example, Bessel's differential equation
 
x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)y = 0
 
(in which y is the dependent variable) is a second-order differential equation. In the classical literature a distinction is also made between differential equations explicitly solved with respect to the highest derivative and differential equations in an implicit form. Also important is the degree, or (highest) power, of the highest derivative(s) in the equation (cf. degree of a polynomial). A differential equation is called a nonlinear differential equation if its degree is not one (a sufficient but not necessary condition).
  • A partial differential equation (PDE) is a differential equation in which the unknown function is a function of multiple independent variables and the equation involves its partial derivatives. The order is defined similarly to the case of ordinary differential equations, but further classification into elliptic, hyperbolic, and parabolic equations, especially for second-order linear equations, is of utmost importance. Some partial differential equations do not fall into any of these categories over the whole domain of the independent variables and they are said to be of mixed type.

Linear and non-linear

Both ordinary and partial differential equations are broadly classified as linear and nonlinear.
  • A differential equation is linear if the unknown function and its derivatives appear to the power 1 (products of the unknown function and its derivatives are not allowed) and nonlinear otherwise. The characteristic property of linear equations is that their solutions form an affine subspace of an appropriate function space, which results in a much more developed theory of linear differential equations. Homogeneous linear differential equations are a further subclass for which the space of solutions is a linear subspace, i.e., the sum of any set of solutions or multiples of solutions is also a solution. The coefficients of the unknown function and its derivatives in a linear differential equation are allowed to be (known) functions of the independent variable or variables; if these coefficients are constants, then one speaks of a constant coefficient linear differential equation.
  • There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[1]
Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations (see below).
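
Concretely (an added note), the pendulum equation listed in the examples below, L\,\frac{d^2u}{dx^2} + g\sin u = 0, becomes the linear harmonic oscillator equation L\,\frac{d^2u}{dx^2} + g\,u = 0 once \sin u is replaced by u, an approximation that is accurate only when the amplitude u stays small.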

Examples

In the first group of examples, let u be an unknown function of x, and let c and ω be known constants.
  • Inhomogeneous first-order linear constant coefficient ordinary differential equation:
 \frac{du}{dx} = cu+x^2.
  • Homogeneous second-order linear ordinary differential equation:
 \frac{d^2u}{dx^2} - x\frac{du}{dx} + u = 0.
  • Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:
 \frac{d^2u}{dx^2} + \omega^2u = 0.
  • Inhomogeneous first-order nonlinear ordinary differential equation:
 \frac{du}{dx} = u^2 + 4.
  • Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L:
 L\frac{d^2u}{dx^2} + g\sin u = 0.
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
  • Homogeneous first-order linear partial differential equation:
 \frac{\partial u}{\partial t} + t\frac{\partial u}{\partial x} = 0.
  • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:
 \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.
  • Third-order nonlinear partial differential equation, the Korteweg–de Vries equation:
 \frac{\partial u}{\partial t} = 6u\frac{\partial u}{\partial x} - \frac{\partial^3 u}{\partial x^3}.
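
As an added illustration (assuming the SymPy library is available; it is not mentioned in the original article), the harmonic oscillator equation from the list above can be solved symbolically:

# Solve u'' + omega^2 * u = 0 symbolically with SymPy (illustrative sketch).
import sympy

x = sympy.symbols('x')
omega = sympy.symbols('omega', positive=True)
u = sympy.Function('u')

equation = sympy.Eq(u(x).diff(x, 2) + omega**2 * u(x), 0)
solution = sympy.dsolve(equation, u(x))
print(solution)   # general solution: u(x) = C1*sin(omega*x) + C2*cos(omega*x)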

Related concepts

  • A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.

Connection to difference equations

The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve approximation of the solution of a differential equation by the solution of a corresponding difference equation.
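
A minimal sketch (added for illustration) of this correspondence is Euler's method, which replaces the differential equation du/dt = f(t, u) by the difference equation u_{n+1} = u_n + Δt · f(t_n, u_n):

# Euler's method: approximate du/dt = f(t, u) by repeated small steps
# (illustrative sketch; names are arbitrary).
def euler(f, u0, t0, t_end, steps):
    dt = (t_end - t0) / steps
    t, u = t0, u0
    for _ in range(steps):
        u = u + dt * f(t, u)   # one step of the difference equation
        t = t + dt
    return u

# Example: du/dt = u with u(0) = 1 has exact solution e^t, so u(1) should be
# close to e = 2.718...
print(euler(lambda t, u: u, u0=1.0, t0=0.0, t_end=1.0, steps=1000))   # ~2.7169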

Universality of mathematical description

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, the mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

Superconductivity

Superconductivity

From Wikipedia, the free encyclopedia
 
A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Persistent electric current flows on the surface of the superconductor, acting to exclude the magnetic field of the magnet (Faraday's law of induction). This current effectively forms an electromagnet that repels the magnet.
Video of a Meissner effect in a high temperature superconductor (black pellet) with a NdFeB magnet (metallic)
A high-temperature superconductor levitating above a magnet

Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Dutch physicist Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.

The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In ordinary conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current flowing through a loop of superconducting wire can persist indefinitely with no power source.[1][2][3][4][5]

In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K (−183 °C).[6] Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. Liquid nitrogen boils at 77 K, and superconduction at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.

Classification

There are many criteria by which superconductors are classified. The most common include the material's response to a magnetic field (Type I or Type II), the theory explaining its behavior (conventional or unconventional), and its critical temperature (high or low).

Elementary properties of superconductors

Most of the physical properties of superconductors vary from material to material, such as the heat capacity and the critical temperature, critical field, and critical current density at which superconductivity is destroyed.

On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have exactly zero resistivity to low applied currents when there is no magnetic field present or if the applied field does not exceed a critical value. The existence of these "universal" properties implies that superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details.

Zero electrical DC resistance

Electric cables for accelerators at CERN. Both the massive and slim cables are rated for 12,500 A. Top: conventional cables for LEP; bottom: superconductor-based cables for the LHC

The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero.

Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature.[3]

In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance.

The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is Boltzmann's constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
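
As a rough, added order-of-magnitude sketch (using the standard BCS weak-coupling estimate, which the article does not derive), the gap at zero temperature satisfies
2\Delta(0) \approx 3.52\, k T_c,
so for solid mercury with T_c ≈ 4.2 K this gives 2Δ(0) ≈ 3.52 × (8.6 × 10^−5 eV/K) × 4.2 K ≈ 1.3 × 10^−3 eV, a gap of roughly a thousandth of an electron-volt that nonetheless exceeds the thermal energy kT available well below T_c.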

In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely small amount of resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero.

Superconducting phase transition

Behavior of heat capacity (cv, blue) and resistivity (ρ, green) at the superconducting phase transition

In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2009, the highest critical temperature found for a conventional superconductor is 39 K for magnesium diboride (MgB2),[7][8] although this material displays enough exotic properties that there is some doubt about classifying it as a "conventional" superconductor.[9] Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature of 92 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The explanation for these high critical temperatures remains unknown. Electron pairing due to phonon exchanges explains superconductivity in conventional superconductors, but it does not explain superconductivity in the newer superconductors that have a very high critical temperature.

Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of the electrons in the superconducting band and consequently a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.

The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e^{−α/T} for some constant α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.

The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated[10] that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.

Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point.[11] The results were strongly supported by Monte Carlo computer simulations.[12]

Meissner effect

When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be completely ejected; instead, the field penetrates the superconductor only to a very small distance, characterized by a parameter λ, called the London penetration depth, decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.

The Meissner effect is distinct from this—it is the spontaneous expulsion which occurs during transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.

The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided
 \nabla^2\mathbf{H} = \lambda^{-2} \mathbf{H}\,
where H is the magnetic field and λ is the London penetration depth.

This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
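
For the standard textbook geometry of a superconductor filling the half-space x > 0 with the field applied parallel to its surface (an added illustration), the equation reduces to d^2H/dx^2 = H/λ^2, whose bounded solution is
H(x) = H(0)\, e^{-x/\lambda},
so the field falls to about 1/e of its surface value one penetration depth into the material, consistent with the λ ≈ 100 nm scale quoted above.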

A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state[13] consisting of a baroque pattern[14] of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.

London moment

Conversely, a spinning superconductor generates a magnetic field precisely aligned with its spin axis. This effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment, since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.
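As a rough numerical sketch (not from the article), the London moment field is B = (2 m_e / e) ω for angular velocity ω; the 80 Hz spin rate used below is an assumed, illustrative figure rather than a Gravity Probe B parameter.

import math

# Illustrative sketch of the London moment, B = (2 * m_e / e) * omega.
# The spin rate is an assumed figure, not taken from the experiment.
m_e = 9.109e-31      # electron mass (kg)
e   = 1.602e-19      # elementary charge (C)

spin_rate_hz = 80.0                      # assumed spin rate (revolutions per second)
omega = 2 * math.pi * spin_rate_hz       # angular velocity (rad/s)

B = 2 * m_e * omega / e                  # London field along the spin axis (T)
print(f"London moment field: {B:.2e} T") # ~5.7e-9 T for the assumed spin rate

Fields of this tiny magnitude are nonetheless measurable with SQUID magnetometers, which is what made the technique usable for gyroscope readout.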

History of superconductivity

Heike Kamerlingh Onnes (right), the discoverer of superconductivity

Superconductivity was discovered on April 8, 1911 by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared.[15] In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found.[16] In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.

Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect.[17] In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.[18]

London theory

The first phenomenological theory of superconductivity was London theory. It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. A major triumph of the equations of this theory is their ability to explain the Meissner effect,[19] wherein a material expels all internal magnetic fields, which decay exponentially with depth, as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface.[20]

There are two London equations:
\frac{\partial \mathbf{j}_s}{\partial t} = \frac{n_s e^2}{m}\mathbf{E}, \qquad \mathbf{\nabla}\times\mathbf{j}_s =-\frac{n_s e^2}{m}\mathbf{B}.
The first equation follows from Newton's second law for superconducting electrons.
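For completeness, a standard derivation (sketched here, not spelled out in the article) shows how the second London equation reproduces the field equation quoted in the Meissner-effect section. Taking the curl of Ampère's law \nabla\times\mathbf{B} = \mu_0 \mathbf{j}_s (displacement current neglected) and using \nabla\cdot\mathbf{B} = 0 gives
\nabla\times(\nabla\times\mathbf{B}) = \mu_0\,\nabla\times\mathbf{j}_s = -\frac{\mu_0 n_s e^2}{m}\mathbf{B}, \qquad \nabla\times(\nabla\times\mathbf{B}) = -\nabla^2\mathbf{B},
so that
\nabla^2\mathbf{B} = \frac{\mu_0 n_s e^2}{m}\mathbf{B} = \lambda^{-2}\mathbf{B}, \qquad \lambda = \sqrt{\frac{m}{\mu_0 n_s e^2}}.
This identifies the London penetration depth λ in terms of the density n_s of superconducting electrons.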

Conventional theories (1950s)

During the 1950s, theoretical condensed matter physicists arrived at a solid understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg-Landau theory (1950) and the microscopic BCS theory (1957).[21][22]
In 1950, the phenomenological Ginzburg-Landau theory of superconductivity was devised by Landau and Ginzburg.[23] This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg-Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg-Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.

Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element.[24][25] This important discovery pointed to the electron-phonon interaction as the microscopic mechanism responsible for superconductivity.

The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer.[22] This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.

The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian.[26] In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg-Landau theory close to the critical temperature.[27][28]

Generalizations of BCS theory for conventional superconductors form the basis for understanding of the phenomenon of superfluidity, because they fall into the Lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.

Further history

The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron.[29] Two superconductors with greatly different values of critical magnetic field are combined to produce a fast, simple switch for computer elements.

In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnets. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator.[30] This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
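As a quick numerical check (illustrative, not from the article), the flux quantum quoted above can be evaluated directly from the constants h and e:

# Value of the magnetic flux quantum Phi_0 = h / (2e) mentioned above,
# using the exact SI values of the constants.
h = 6.62607015e-34     # Planck constant (J s)
e = 1.602176634e-19    # elementary charge (C)

phi_0 = h / (2 * e)
print(f"Phi_0 = {phi_0:.4e} Wb")   # approximately 2.0678e-15 Wb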

In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance.[31]

High-temperature superconductivity

Timeline of superconducting materials

Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987).[6] It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature to 92 K.[32]

This temperature jump is particularly significant, since it allows the use of liquid nitrogen as a refrigerant, replacing liquid helium.[32] This can be important commercially because liquid nitrogen can be produced relatively cheaply, even on-site, avoiding some of the problems (such as so-called "solid air" plugs) which arise when liquid helium is used in piping.[33][34]

Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics.[35] There are currently two main hypotheses: the resonating-valence-bond theory, and the spin-fluctuation hypothesis, which has the most support in the research community.[36] The second hypothesis proposes that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons.[37][38]

Since about 1993, the highest-temperature superconductor has been a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ) with Tc = 133–138 K.[39][40] The latter result (138 K), however, still awaits experimental confirmation.

In February 2008, an iron-based family of high-temperature superconductors was discovered.[41][42] Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1-xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K.[43]

Applications


Video of superconducting levitation of YBCO

Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, and the beam-steering magnets used in particle accelerators. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less magnetic or non-magnetic particles, as in the pigment industries.

In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.

Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Depending on the particular mode of operation, a superconductor-insulator-superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal- to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials.
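As a rough sketch of how Josephson devices realize the SI volt (illustrative only), a junction driven at microwave frequency f and biased on its n-th step develops a voltage V = n·f·h/(2e); the 70 GHz drive frequency and the junction count below are assumed values, not parameters of any specific standard.

# Illustrative sketch of the Josephson voltage-frequency relation used in
# voltage standards: V = n * f * h / (2e) per junction. Drive frequency and
# array size below are hypothetical.
h = 6.62607015e-34     # Planck constant (J s)
e = 1.602176634e-19    # elementary charge (C)

f = 70e9               # assumed microwave drive frequency (Hz)
n = 1                  # step index
v_per_junction = n * f * h / (2 * e)
print(f"Voltage per junction: {v_per_junction*1e6:.1f} uV")   # ~144.7 uV

n_junctions = 10000    # hypothetical series array
print(f"Array voltage: {n_junctions * v_per_junction:.3f} V") # ~1.447 V

Because the voltage depends only on a frequency and fundamental constants, large series arrays of such junctions can reproduce the volt with very high accuracy.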

Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved.

Promising future applications include high-performance smart grids, electric power transmission, transformers, power storage devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current.

Nobel Prizes for superconductivity

Declaration of the Rights of Man and of the Citizen

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Declarati...