
Tuesday, January 13, 2015

Superconductivity

From Wikipedia, the free encyclopedia
 
A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Persistent electric current flows on the surface of the superconductor, acting to exclude the magnetic field of the magnet (Faraday's law of induction). This current effectively forms an electromagnet that repels the magnet.
Video of a Meissner effect in a high temperature superconductor (black pellet) with a NdFeB magnet (metallic)
A high-temperature superconductor levitating above a magnet

Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Dutch physicist Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.

The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In ordinary conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current flowing through a loop of superconducting wire can persist indefinitely with no power source.[1][2][3][4][5]

In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K (−183 °C).[6] Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. Liquid nitrogen boils at 77 K, and superconduction at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.

Classification

There are many criteria by which superconductors are classified. The most common are by their response to a magnetic field (Type I versus Type II, described below), by the theory that explains them (conventional versus unconventional), by their critical temperature (high-temperature versus low-temperature), and by material (pure elements, alloys, and ceramic compounds such as the cuprates).

Elementary properties of superconductors

Most of the physical properties of superconductors, such as the heat capacity and the critical temperature, critical field, and critical current density at which superconductivity is destroyed, vary from material to material.

On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have exactly zero resistivity to low applied currents when there is no magnetic field present or if the applied field does not exceed a critical value. The existence of these "universal" properties implies that superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details.

Zero electrical DC resistance

Electric cables for accelerators at CERN. Both the massive and slim cables are rated for 12,500 A. Top: conventional cables for LEP; bottom: superconductor-based cables for the LHC

The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero.
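A minimal sketch of this measurement in Python, with made-up current and voltage values purely for illustration:

# Ohm's law: estimate resistance from an applied current and the measured voltage.
def resistance(voltage_v, current_a):
    return voltage_v / current_a

current = 0.10          # applied current in amperes (illustrative value)
v_normal = 2.5e-3       # voltage measured above the critical temperature (illustrative)
v_super = 0.0           # below the critical temperature the measured voltage vanishes

print(resistance(v_normal, current))   # finite resistance in the normal state
print(resistance(v_super, current))    # exactly zero in the superconducting state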

Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature.[3]

In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance.

The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is Boltzmann's constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
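A rough numerical sketch of this comparison in Python, assuming the weak-coupling BCS estimate ΔE ≈ 1.76 kTc and using mercury's Tc = 4.2 K purely as an example:

# Compare the superconducting energy gap with the thermal energy k*T.
k = 1.380649e-23        # Boltzmann constant, J/K
Tc = 4.2                # critical temperature of mercury, K (example)
T = 1.0                 # operating temperature well below Tc, K (assumed)

gap = 1.76 * k * Tc     # approximate BCS energy gap, J
thermal = k * T         # thermal energy of the lattice, J

print(gap / thermal)    # about 7: thermal excitations cannot break the Cooper pairs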

In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely small amount of resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero.

Superconducting phase transition

Behavior of heat capacity (cv, blue) and resistivity (ρ, green) at the superconducting phase transition

In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2009, the highest critical temperature found for a conventional superconductor is 39 K for magnesium diboride (MgB2),[7][8] although this material displays enough exotic properties that there is some doubt about classifying it as a "conventional" superconductor.[9] Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature of 92 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The explanation for these high critical temperatures remains unknown. Electron pairing due to phonon exchanges explains superconductivity in conventional superconductors, but it does not explain superconductivity in the newer superconductors that have a very high critical temperature.

Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of the electrons in the superconducting band and consequently a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
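In SI units, this argument can be written out as a short sketch, with f_n and f_s denoting the free energy densities of the normal and superconducting phases at zero field:
f_s(H) = f_s(0) + \frac{\mu_0 H^2}{2}, \qquad f_n(H) \approx f_n(0),
so the two phases have equal free energy, and the transition occurs, at the critical field
H_c = \sqrt{\frac{2\left[f_n(0) - f_s(0)\right]}{\mu_0}}.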

The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e−α/T for some constant α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.
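Schematically, with γ the normal-state coefficient and the constant α identified (approximately) with Δ/k, the energy gap expressed in temperature units:
c_v^{\text{normal}} \approx \gamma T, \qquad c_v^{\text{super}} \propto e^{-\Delta/(kT)} = e^{-\alpha/T} \quad (T \ll T_c).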

The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated[10] that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.

Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point.[11] The results were strongly supported by Monte Carlo computer simulations.[12]

Meissner effect

When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not expel the field completely; instead, the field penetrates the superconductor only to a very small distance, characterized by a parameter λ, called the London penetration depth, decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.

The Meissner effect is distinct from this—it is the spontaneous expulsion which occurs during transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.

The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided
 \nabla^2\mathbf{H} = \lambda^{-2} \mathbf{H}\,
where H is the magnetic field and λ is the London penetration depth.

This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
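For example, for a field applied parallel to a flat surface at x = 0, the solution that remains bounded inside the material is
H(x) = H(0)\, e^{-x/\lambda},
so the field falls to 1/e of its surface value within one penetration depth, i.e. within roughly 100 nm for most superconductors, as noted above.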

A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state[13] consisting of a baroque pattern[14] of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.

London moment

A spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.

History of superconductivity

Heike Kamerlingh Onnes (right), the discoverer of superconductivity

Superconductivity was discovered on April 8, 1911 by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared.[15] In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found.[16] In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.

Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect.[17] In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.[18]

London theory

The first phenomenological theory of superconductivity was London theory. It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. A major triumph of the equations of this theory is their ability to explain the Meissner effect,[19] wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface.[20]
There are two London equations:
\frac{\partial \mathbf{j}_s}{\partial t} = \frac{n_s e^2}{m}\mathbf{E}, \qquad \mathbf{\nabla}\times\mathbf{j}_s =-\frac{n_s e^2}{m}\mathbf{B}.
The first equation follows from Newton's second law for superconducting electrons.
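A sketch of that step: for superconducting electrons of charge −e, mass m and number density n_s moving without scattering,
m\frac{d\mathbf{v}_s}{dt} = -e\mathbf{E}, \qquad \mathbf{j}_s = -n_s e\,\mathbf{v}_s \quad\Longrightarrow\quad \frac{\partial \mathbf{j}_s}{\partial t} = \frac{n_s e^2}{m}\mathbf{E}.
Taking the curl of the second London equation and combining it with Ampère's law reproduces \nabla^2\mathbf{H} = \lambda^{-2}\mathbf{H} with \lambda^2 = m/(\mu_0 n_s e^2).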

Conventional theories (1950s)

During the 1950s, theoretical condensed matter physicists arrived at a solid understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg-Landau theory (1950) and the microscopic BCS theory (1957).[21][22]
In 1950, the phenomenological Ginzburg-Landau theory of superconductivity was devised by Landau and Ginzburg.[23] This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg-Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg-Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.

Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element.[24][25] This important discovery pointed to the electron-phonon interaction as the microscopic mechanism responsible for superconductivity.

The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer.[22] This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.

The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian.[26] In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg-Landau theory close to the critical temperature.[27][28]

Generalizations of BCS theory for conventional superconductors form the basis for understanding of the phenomenon of superfluidity, because they fall into the Lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.

Further history

The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron.[29] Two superconductors with greatly different values of critical magnetic field are combined to produce a fast, simple switch for computer elements.

In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnets. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator.[30] This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
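For reference, the flux quantum can be evaluated directly from the defining SI constants; a minimal sketch in Python:

# Magnetic flux quantum Phi_0 = h / (2e), from the exact SI values of the constants.
h = 6.62607015e-34      # Planck constant, J*s (exact)
e = 1.602176634e-19     # elementary charge, C (exact)

phi_0 = h / (2 * e)
print(phi_0)            # about 2.0678e-15 Wb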

In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance.[31]

High-temperature superconductivity

Timeline of superconducting materials

Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987).[6] It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature to 92 K.[32]

This temperature jump is particularly significant, since it allows liquid nitrogen as a refrigerant, replacing liquid helium.[32] This can be important commercially because liquid nitrogen can be produced relatively cheaply, even on-site, avoiding some of the problems (such as so-called "solid air" plugs) which arise when liquid helium is used in piping.[33][34]

Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics.[35] There are currently two main hypotheses: the resonating-valence-bond theory, and spin-fluctuation theory, which has the most support in the research community.[36] The second hypothesis proposes that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons.[37][38]

Since about 1993, the highest-temperature superconductor has been a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ) with Tc = 133–138 K.[39][40] The latter result (138 K), however, still awaits experimental confirmation.

In February 2008, an iron-based family of high-temperature superconductors was discovered.[41][42] Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1-xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K.[43]

Applications


Video of superconducting levitation of YBCO

Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, and the beam-steering magnets used in particle accelerators. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries.
In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.

Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Depending on the particular mode of operation, a superconductor-insulator-superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal- to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials.

Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved.

Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current.

Nobel Prizes for superconductivity

Work on superconductivity has been recognized with several Nobel Prizes in Physics: Heike Kamerlingh Onnes (1913) for the low-temperature research that led to the production of liquid helium; Bardeen, Cooper and Schrieffer (1972) for the BCS theory; Esaki, Giaever and Josephson (1973) for tunnelling phenomena, including the Josephson effect; Bednorz and Müller (1987) for the discovery of high-temperature superconductivity in cuprates; and Abrikosov, Ginzburg and Leggett (2003) for contributions to the theory of superconductors and superfluids.

Introduction to the mathematics of general relativity

From Wikipedia, the free encyclopedia

The mathematics of general relativity is complex. In Newton's theories of motion, an object's length and the rate at which time passes remain constant while the object accelerates, meaning that many problems in Newtonian mechanics may be solved by algebra alone. In relativity, however, an object's length and the rate at which time passes both change appreciably as the object's speed approaches the speed of light, meaning that more variables and more complicated mathematics are required to calculate the object's motion. As a result, relativity requires the use of concepts such as vectors, tensors, pseudotensors and curvilinear coordinates.
For an introduction based on the example of particles following circular orbits about a large mass, nonrelativistic and relativistic treatments are given in, respectively, Newtonian motivations for general relativity and Theoretical motivation for general relativity.

Vectors and tensors

Vectors

Illustration of a typical vector.

In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric[1] or spatial vector,[2] or – as here – simply a vector) is a geometric object that has both a magnitude (or length) and direction. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "one who carries".[3] The magnitude of the vector is the distance between the two points and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity.

Tensors

Stress, a second-order tensor. Stress is here shown as a series of vectors on each side of the box

A tensor extends the concept of a vector to additional dimensions. A scalar, that is, a single number without direction, would be shown on a graph as a point, a zero-dimensional object. A vector, which has a magnitude and direction, would appear on a graph as a line, a one-dimensional object. A two-dimensional tensor, called a second-order tensor, can be viewed as a set of related vectors, pointing in multiple directions on a plane.
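As a concrete sketch, with an arbitrary symmetric stress tensor (in arbitrary units) chosen purely for illustration, a second-order tensor can be represented as a matrix that maps one vector to another, here the unit normal of a face of the box to the traction (force per unit area) acting on that face:

import numpy as np

# A symmetric 3x3 stress tensor (illustrative values, arbitrary units).
stress = np.array([[10.0,  2.0,  0.0],
                   [ 2.0,  5.0,  1.0],
                   [ 0.0,  1.0,  3.0]])

normal = np.array([1.0, 0.0, 0.0])   # unit normal of one face of the box

traction = stress @ normal           # force per unit area on that face
print(traction)                      # [10.  2.  0.]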

Applications

Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has both a magnitude and direction, such as velocity, the magnitude of which is speed. For example, the velocity 5 meters per second upward could be represented by the vector (0, 5) (in 2 dimensions with the positive y axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction. Vectors also describe many other physical quantities, such as displacement, acceleration, momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field.

Tensors also have extensive applications in physics; examples used in this article include the stress tensor, the metric tensor, the Riemann curvature tensor, and the stress–energy tensor.

Dimensions

In general relativity, four-dimensional vectors, or four-vectors, are required. These four dimensions are length, height, width and time. A "point" in this context would be an event, as it has both a location and a time. Similar to vectors, tensors in relativity require four dimensions. One example is the Riemann curvature tensor.

Coordinate transformation

In physics, as well as mathematics, a vector is often identified with a tuple, or list of numbers, which depend on some auxiliary coordinate system or reference frame. When the coordinates are transformed, for example by rotation or stretching, then the components of the vector also transform. The vector itself has not changed, but the reference frame has, so the components of the vector (or measurements taken with respect to the reference frame) must change to compensate.

The vector is called covariant or contravariant depending on how the transformation of the vector's components is related to the transformation of coordinates.
  • Contravariant vectors are "regular vectors" with units of distance (such as a displacement) or distance times some other unit (such as velocity or acceleration). For example, in changing units from meters to millimeters, a displacement of 1 m becomes 1000 mm.
  • Covariant vectors, on the other hand, have units of one-over-distance (such as a gradient). For example, in changing again from meters to millimeters, a gradient of 1 K/m becomes 0.001 K/mm, as illustrated in the sketch after this list.
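A minimal numerical sketch of these two transformation rules, assuming a simple change of units from meters to millimeters:

# Rescaling one coordinate from meters to millimeters: x' = 1000 * x.
scale = 1000.0

displacement_m = 1.0        # a contravariant component, in meters
gradient_K_per_m = 1.0      # a covariant component, in kelvin per meter

# Contravariant components transform like the coordinates themselves...
displacement_mm = displacement_m * scale        # 1000.0 mm
# ...while covariant components transform with the inverse factor.
gradient_K_per_mm = gradient_K_per_m / scale    # 0.001 K/mm

print(displacement_mm, gradient_K_per_mm)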
Coordinate transformation is important because relativity states that there is no one correct reference point in the universe. On Earth, we use dimensions like north, east, and elevation, which are used throughout the entire planet. There is no such system for space. Without a clear reference grid, it becomes more accurate to describe the four dimensions as towards/away, left/right, up/down and past/future. As an example event, take the signing of the Declaration of Independence. To a modern observer on Mount Rainier looking east, the event is ahead, to the right, below, and in the past. However, to an observer in medieval England looking north, the event is behind, to the left, neither up nor down, and in the future. The event itself has not changed; the location of the observer has.

Oblique axes

An oblique coordinate system is one in which the axes are not necessarily orthogonal to each other; that is, they meet at angles other than right angles. When using coordinate transformations as described above, the new coordinate system will often appear to have oblique axes compared to the old system.

Nontensors

A nontensor is a tensor-like quantity that behaves like a tensor in the raising and lowering of indices, but that does not transform like a tensor under a coordinate transformation. For example, Christoffel symbols cannot be tensors themselves if the coordinates don't change in a linear way.
In general relativity, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor.

Curvilinear coordinates and curved spacetime

High-precision test of general relativity by the Cassini space probe (artist's impression): radio signals sent between the Earth and the probe (green wave) are delayed by the warping of space and time (blue lines) due to the Sun's mass. That is, the Sun's mass causes the regular grid coordinate system (in blue) to distort and have curvature. The radio wave then follows this curvature and moves toward the Sun.

Curvilinear coordinates are coordinates in which the angles between axes can change from point to point. This means that rather than having a grid of straight lines, the grid instead has curvature.
A good example of this is the surface of the Earth. While maps frequently portray north, south, east and west as a simple square grid, that is not in fact the case. Instead, the longitude lines running north and south are curved and meet at the north pole. This is because the Earth is not flat, but instead round.

In general relativity, gravity has curvature effects on the four dimensions of the universe. A common analogy is placing a heavy object on a stretched-out rubber sheet, causing the sheet to bend downward. This curves the coordinate system around the object, much like an object in the universe curves the coordinate system it sits in. The mathematics here is conceptually more complex than on Earth, as it results in four dimensions of curved coordinates instead of the three used to describe a curved 2D surface.

Parallel transport

Example: parallel displacement along a circle embedded in a two-dimensional space. The circle of radius r is embedded in a two-dimensional space characterized by the coordinates y^1 and y^2. The circle itself is one-dimensional and can be characterized by its arc length x. The embedding coordinates are related to the arc length through y^1 = r \cos(x/r) and y^2 = r \sin(x/r), which gives \partial y^1 / \partial x = -\sin(x/r) and \partial y^2 / \partial x = \cos(x/r). The induced metric is then a single scalar, g = (\partial y^1 / \partial x)^2 + (\partial y^2 / \partial x)^2 = \sin^2(x/r) + \cos^2(x/r) = 1, and the interval is ds^2 = g\, dx^2 = dx^2, equal to the arc length, as expected.

The interval in a high-dimensional space

In a Euclidean space, the separation between two points is measured by the distance between the two points. The distance is purely spatial, and is always positive. In spacetime, the separation between two events is measured by the invariant interval between the two events, which takes into account not only the spatial separation between the events, but also their temporal separation. The interval, s^2, between two events is defined as:

s^2 = \Delta r^2 - c^2\Delta t^2 \,   (spacetime interval),

where c is the speed of light, and Δr and Δt denote differences of the space and time coordinates, respectively, between the events. The choice of signs for s^2 above follows the space-like convention (−+++). The reason s^2 is called the interval and not s is that s^2 can be positive, zero or negative.
Spacetime intervals may be classified into three distinct types, based on whether the temporal separation (c^2 \Delta t^2) or the spatial separation (\Delta r^2) of the two events is greater: time-like, light-like or space-like.
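A short sketch in Python, using the (−+++) convention above and illustrative coordinate differences:

# Classify the separation of two events using s^2 = dr^2 - c^2 dt^2 (convention -+++).
c = 299_792_458.0       # speed of light, m/s

def classify(dt_s, dx_m, dy_m, dz_m):
    dr2 = dx_m**2 + dy_m**2 + dz_m**2
    s2 = dr2 - (c * dt_s)**2
    if s2 < 0:
        return "time-like"
    if s2 > 0:
        return "space-like"
    return "light-like"

print(classify(1.0, 1.0e8, 0.0, 0.0))   # time-like: light could connect the events
print(classify(1.0, 4.0e8, 0.0, 0.0))   # space-like: too far apart for light to connect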

Certain types of world lines are called geodesics of the spacetime – straight lines in the case of Minkowski space and their closest equivalent in the curved spacetime of general relativity. In the case of purely time-like paths, geodesics are (locally) the paths of greatest separation (spacetime interval) as measured along the path between two events, whereas in Euclidean space and Riemannian manifolds, geodesics are paths of shortest distance between two points.[4][5] The concept of geodesics becomes central in general relativity, since geodesic motion may be thought of as "pure motion" (inertial motion) in spacetime, that is, free from any external influences.

The covariant derivative

The covariant derivative is a generalization of the directional derivative from vector calculus. As with the directional derivative, the covariant derivative is a rule, which takes as its inputs: (1) a vector, u, defined at a point P, and (2) a vector field, v, defined in a neighborhood of P. The output is a vector, also at the point P. The primary difference from the usual directional derivative is that the covariant derivative must, in a certain precise sense, be independent of the coordinate system in which it is expressed.
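In components, the rule can be written using the Christoffel symbols Γ introduced below:
(\nabla_{\mathbf{u}} \mathbf{v})^{\mu} = u^{\alpha}\left( \partial_{\alpha} v^{\mu} + \Gamma^{\mu}{}_{\alpha\beta}\, v^{\beta} \right).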

Parallel transport

Given the covariant derivative, one can define the parallel transport of a vector v at a point P along a curve γ starting at P. For each point x of γ, the parallel transport of v at x will be a function of x, and can be written as v(x), where v(0) = v. The function v is determined by the requirement that the covariant derivative of v(x) along γ is 0. This is similar to the fact that a constant function is one whose derivative is 0 everywhere.
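Writing the curve as x^{\alpha}(\lambda), this requirement becomes the ordinary differential equation
\frac{d v^{\mu}}{d\lambda} + \Gamma^{\mu}{}_{\alpha\beta}\, \frac{dx^{\alpha}}{d\lambda}\, v^{\beta} = 0.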

Christoffel symbols

The equation for the covariant derivative can be written down in terms of Christoffel symbols. The Christoffel symbols find frequent use in Einstein's theory of general relativity, where spacetime is represented by a curved 4-dimensional Lorentz manifold with a Levi-Civita connection. The Einstein field equations—which determine the geometry of spacetime in the presence of matter—contain the Ricci tensor, and so calculating the Christoffel symbols is essential. Once the geometry is determined, the paths of particles and light beams are calculated by solving the geodesic equations in which the Christoffel symbols explicitly appear.
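For the Levi-Civita connection, the Christoffel symbols are built from the metric tensor and its first derivatives:
\Gamma^{\mu}{}_{\alpha\beta} = \tfrac{1}{2}\, g^{\mu\nu}\left( \partial_{\alpha} g_{\nu\beta} + \partial_{\beta} g_{\nu\alpha} - \partial_{\nu} g_{\alpha\beta} \right).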

Geodesics

In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational force, is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting around a star is the projection of a geodesic of the curved 4-dimensional spacetime geometry around the star onto 3-dimensional space.

A curve is a geodesic if the tangent vector of the curve at any point is equal to the parallel transport of the tangent vector of the base point.
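Equivalently, in coordinates, a geodesic x^{\mu}(\lambda) with affine parameter λ satisfies the geodesic equation
\frac{d^2 x^{\mu}}{d\lambda^2} + \Gamma^{\mu}{}_{\alpha\beta}\, \frac{dx^{\alpha}}{d\lambda}\, \frac{dx^{\beta}}{d\lambda} = 0.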

Curvature tensor

The Riemann tensor tells us, mathematically, how much curvature there is in any given region of space. It and its successive contractions give three related mathematical objects:
  1. The Riemann curvature tensor: R^\rho{}_{\sigma\mu\nu}, which gives the most information on the curvature of a space and is derived from derivatives of the metric tensor. In flat space this tensor is zero.
  2. The Ricci tensor: R_{\sigma\nu}, comes from the need in Einstein's theory for a curvature tensor with only 2 indices. It is obtained by averaging certain portions of the Riemann curvature tensor.
  3. The scalar curvature: R, the simplest measure of curvature, assigns a single scalar value to each point in a space. It is obtained by averaging the Ricci tensor.
The Riemann curvature tensor can be expressed in terms of the covariant derivative.
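Explicitly, the two contractions mentioned above are
R_{\sigma\nu} = R^{\rho}{}_{\sigma\rho\nu}, \qquad R = g^{\sigma\nu} R_{\sigma\nu}.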

The Einstein tensor \mathbf{G} is a rank 2 tensor defined over pseudo-Riemannian manifolds. In index-free notation it is defined as
\mathbf{G}=\mathbf{R}-\frac{1}{2}\mathbf{g}R,
where \mathbf{R} is the Ricci tensor, \mathbf{g} is the metric tensor and R is the scalar curvature. It is used in the Einstein field equations.
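In the index notation used for the field equations below, the same definition reads
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R.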

Stress–energy tensor

Contravariant components of the stress–energy tensor.

The stress–energy tensor (sometimes stress–energy–momentum tensor or energy–momentum tensor) is a tensor quantity in physics that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. The stress–energy tensor is the source of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity.

Einstein equation

The Einstein field equations (EFE) or Einstein's equations are a set of 10 equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy.[6] First published by Einstein in 1915[7] as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).[8]
The Einstein Field Equations can be written as
G_{\mu \nu}= {8 \pi G \over c^4} T_{\mu \nu} ,
where G_{\mu \nu} is the Einstein tensor and T_{\mu \nu} is the stress–energy tensor.

This implies that the curvature of space (represented by the Einstein tensor) is directly connected to the presence of matter and energy (represented by the stress–energy tensor).
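In particular, in a vacuum region the stress–energy tensor vanishes, and taking the trace of the equations shows that they reduce to
T_{\mu\nu} = 0 \quad\Longrightarrow\quad R_{\mu\nu} = 0,
which is the form solved by the Schwarzschild metric described in the next section.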

Schwarzschild solution and black holes

In Einstein's theory of general relativity, the Schwarzschild metric (also Schwarzschild vacuum or Schwarzschild solution), is a solution to the Einstein field equations which describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. The solution is named after Karl Schwarzschild, who first published the solution in 1916.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has no charge or angular momentum. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
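For reference, in Schwarzschild coordinates (t, r, θ, φ) and the (−+++) sign convention used earlier, the line element takes the standard form
ds^2 = -\left(1 - \frac{2GM}{rc^2}\right) c^2\, dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2\left( d\theta^2 + \sin^2\theta\, d\varphi^2 \right),
where M is the mass of the central body. The radius r_s = 2GM/c^2 at which the coefficient of dt^2 vanishes is the Schwarzschild radius, the event horizon of the corresponding black hole.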

Accelerating change

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Acc...