Friday, January 26, 2018

Introduction to general relativity

From Wikipedia, the free encyclopedia
High-precision test of general relativity by the Cassini space probe (artist's impression): radio signals sent between the Earth and the probe (green wave) are delayed by the warping of spacetime (blue lines) due to the Sun's mass.

General relativity is a theory of gravitation that was developed by Albert Einstein between 1907 and 1915. According to general relativity, the observed gravitational effect between masses results from their warping of spacetime.

By the beginning of the 20th century, Newton's law of universal gravitation had been accepted for more than two hundred years as a valid description of the gravitational force between masses. In Newton's model, gravity is the result of an attractive force between massive objects. Although even Newton was troubled by the unknown nature of that force, the basic framework was extremely successful at describing motion.

Experiments and observations show that Einstein's description of gravitation accounts for several effects that are unexplained by Newton's law, such as minute anomalies in the orbits of Mercury and other planets. General relativity also predicts novel effects of gravity, such as gravitational waves, gravitational lensing and an effect of gravity on time known as gravitational time dilation. Many of these predictions have been confirmed by experiment or observation, most recently gravitational waves.

General relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, regions of space where the gravitational effect is strong enough that even light cannot escape. Their strong gravity is thought to be responsible for the intense radiation emitted by certain types of astronomical objects (such as active galactic nuclei or microquasars). General relativity is also part of the framework of the standard Big Bang model of cosmology.

Although general relativity is not the only relativistic theory of gravity, it is the simplest such theory that is consistent with the experimental data. Nevertheless, a number of open questions remain, the most fundamental of which is how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity.

From special to general relativity

In September 1905, Albert Einstein published his theory of special relativity, which reconciles Newton's laws of motion with electrodynamics (the interaction between objects with electric charge). Special relativity introduced a new framework for all of physics by proposing new concepts of space and time. Some then-accepted physical theories were inconsistent with that framework; a key example was Newton's theory of gravity, which describes the mutual attraction experienced by bodies due to their mass.

Several physicists, including Einstein, searched for a theory that would reconcile Newton's law of gravity and special relativity. Only Einstein's theory proved to be consistent with experiments and observations. To understand the theory's basic ideas, it is instructive to follow Einstein's thinking between 1907 and 1915, from his simple thought experiment involving an observer in free fall to his fully geometric theory of gravity.[1]

Equivalence principle

A person in a free-falling elevator experiences weightlessness; objects either float motionless or drift at constant speed. Since everything in the elevator is falling together, no gravitational effect can be observed. In this way, the experiences of an observer in free fall are indistinguishable from those of an observer in deep space, far from any significant source of gravity. Such observers are the privileged ("inertial") observers Einstein described in his theory of special relativity: observers for whom light travels along straight lines at constant speed.[2]
Einstein hypothesized that the similar experiences of weightless observers and inertial observers in special relativity represented a fundamental property of gravity, and he made this the cornerstone of his theory of general relativity, formalized in his equivalence principle. Roughly speaking, the principle states that a person in a free-falling elevator cannot tell that they are in free fall. Every experiment in such a free-falling environment has the same results as it would for an observer at rest or moving uniformly in deep space, far from all sources of gravity.[3]

Gravity and acceleration

Ball falling to the floor in an accelerating rocket (left) and on Earth (right). The effect is identical.

Most effects of gravity vanish in free fall, but effects that seem the same as those of gravity can be produced by an accelerated frame of reference. An observer in a closed room cannot tell which of the following is true:
  • Objects are falling to the floor because the room is resting on the surface of the Earth and the objects are being pulled down by gravity.
  • Objects are falling to the floor because the room is aboard a rocket in space, which is accelerating at 9.81 m/s² and is far from any source of gravity. The objects are being pulled towards the floor by the same "inertial force" that presses the driver of an accelerating car into the back of his seat.
Conversely, any effect observed in an accelerated reference frame should also be observed in a gravitational field of corresponding strength. This principle allowed Einstein to predict several novel effects of gravity in 1907, as explained in the next section.

An observer in an accelerated reference frame must introduce what physicists call fictitious forces to account for the acceleration experienced by himself and objects around him. One example, the force pressing the driver of an accelerating car into his or her seat, has already been mentioned; another is the force you can feel pulling your arms up and out if you attempt to spin around like a top. Einstein's master insight was that the constant, familiar pull of the Earth's gravitational field is fundamentally the same as these fictitious forces.[4] The apparent magnitude of the fictitious forces always appears to be proportional to the mass of any object on which they act – for instance, the driver's seat exerts just enough force to accelerate the driver at the same rate as the car. By analogy, Einstein proposed that an object in a gravitational field should feel a gravitational force proportional to its mass, as embodied in Newton's law of gravitation.[5]

Physical consequences

In 1907, Einstein was still eight years away from completing the general theory of relativity. Nonetheless, he was able to make a number of novel, testable predictions that were based on his starting point for developing his new theory: the equivalence principle.[6]
The gravitational redshift of a light wave as it moves upwards against a gravitational field (caused by the yellow star below).

The first new effect is the gravitational frequency shift of light. Consider two observers aboard an accelerating rocket-ship. Aboard such a ship, there is a natural concept of "up" and "down": the direction in which the ship accelerates is "up", and unattached objects accelerate in the opposite direction, falling "downward". Assume that one of the observers is "higher up" than the other. When the lower observer sends a light signal to the higher observer, the acceleration causes the light to be red-shifted, as may be calculated from special relativity; the second observer will measure a lower frequency for the light than the first. Conversely, light sent from the higher observer to the lower is blue-shifted, that is, shifted towards higher frequencies.[7] Einstein argued that such frequency shifts must also be observed in a gravitational field. This is illustrated in the figure at left, which shows a light wave that is gradually red-shifted as it works its way upwards against the gravitational acceleration. This effect has been confirmed experimentally, as described below.

This gravitational frequency shift corresponds to a gravitational time dilation: Since the "higher" observer measures the same light wave to have a lower frequency than the "lower" observer, time must be passing faster for the higher observer. Thus, time runs more slowly for observers who are lower in a gravitational field.
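In the weak-field limit, the fractional frequency shift between two clocks separated by a height h is approximately gh/c². The following is a rough numerical sketch using the parameters of the Pound–Rebka tower experiment mentioned below; the numbers are approximate.

```python
# Weak-field gravitational frequency shift: delta_f / f ≈ g * h / c^2.
# Rough sketch with the ~22.5 m height of the Pound-Rebka experiment.
g = 9.81          # gravitational acceleration at Earth's surface, m/s^2
c = 2.99792458e8  # speed of light, m/s
h = 22.5          # height difference between emitter and receiver, m

fractional_shift = g * h / c**2
print(f"Fractional frequency shift: {fractional_shift:.3e}")  # ~2.46e-15
```

The tiny size of this number shows why laboratory confirmation required the extremely sharp frequency reference of the Mössbauer effect.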

It is important to stress that, for each observer, there are no observable changes of the flow of time for events or processes that are at rest in his or her reference frame. Five-minute-eggs as timed by each observer's clock have the same consistency; as one year passes on each clock, each observer ages by that amount; each clock, in short, is in perfect agreement with all processes happening in its immediate vicinity. It is only when the clocks are compared between separate observers that one can notice that time runs more slowly for the lower observer than for the higher.[8] This effect is minute, but it too has been confirmed experimentally in multiple experiments, as described below.

In a similar way, Einstein predicted the gravitational deflection of light: in a gravitational field, light is deflected downward. Quantitatively, his results were off by a factor of two; the correct derivation requires a more complete formulation of the theory of general relativity, not just the equivalence principle.[9]

Tidal effects

Two bodies falling towards the center of the Earth accelerate towards each other as they fall.

The equivalence between gravitational and inertial effects does not constitute a complete theory of gravity. Gravity near our own location on the Earth's surface can be explained by noting that our reference frame is not in free fall, so that fictitious forces are to be expected. But a freely falling reference frame on one side of the Earth cannot explain why the people on the opposite side of the Earth experience a gravitational pull in the opposite direction.

A more basic manifestation of the same effect involves two bodies that are falling side by side towards the Earth. In a reference frame that is in free fall alongside these bodies, they appear to hover weightlessly – but not exactly so. These bodies are not falling in precisely the same direction, but towards a single point in space: namely, the Earth's center of gravity. Consequently, there is a component of each body's motion towards the other (see the figure). In a small environment such as a freely falling lift, this relative acceleration is minuscule, while for skydivers on opposite sides of the Earth, the effect is large. Such differences in force are also responsible for the tides in the Earth's oceans, so the term "tidal effect" is used for this phenomenon.
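The convergence of the two falling bodies can be estimated in the Newtonian limit: two bodies a horizontal distance d apart, at distance r from the Earth's center, accelerate towards each other at roughly GM·d/r³. A rough sketch with approximate constants:

```python
# Tidal (converging) acceleration of two bodies falling side by side.
# Newtonian weak-field estimate: a_tidal ≈ GM * d / r^3; constants approximate.
GM_earth = 3.986e14  # gravitational parameter GM of the Earth, m^3/s^2
r = 6.371e6          # distance from Earth's center (at the surface), m
d = 1.0              # horizontal separation of the two bodies, m

a_tidal = GM_earth * d / r**3
print(f"Relative acceleration: {a_tidal:.2e} m/s^2")  # ~1.5e-6 m/s^2
```

For a one-meter separation the effect is about a millionth of g, which is why it is negligible inside a falling lift yet large over Earth-sized distances.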

The equivalence between inertia and gravity cannot explain tidal effects – it cannot explain variations in the gravitational field.[10] For that, a theory is needed which describes the way that matter (such as the large mass of the Earth) affects the inertial environment around it.

From acceleration to geometry

In exploring the equivalence of gravity and acceleration as well as the role of tidal forces, Einstein discovered several analogies with the geometry of surfaces. An example is the transition from an inertial reference frame (in which free particles coast along straight paths at constant speeds) to a rotating reference frame (in which extra terms corresponding to fictitious forces have to be introduced in order to explain particle motion): this is analogous to the transition from a Cartesian coordinate system (in which the coordinate lines are straight lines) to a curved coordinate system (where coordinate lines need not be straight).

A deeper analogy relates tidal forces with a property of surfaces called curvature. For gravitational fields, the absence or presence of tidal forces determines whether or not the influence of gravity can be eliminated by choosing a freely falling reference frame. Similarly, the absence or presence of curvature determines whether or not a surface is equivalent to a plane. In the summer of 1912, inspired by these analogies, Einstein searched for a geometric formulation of gravity.[11]

The elementary objects of geometry – points, lines, triangles – are traditionally defined in three-dimensional space or on two-dimensional surfaces. In 1907, Hermann Minkowski, Einstein's former mathematics professor at the Swiss Federal Polytechnic, introduced a geometric formulation of Einstein's special theory of relativity where the geometry included not only space but also time. The basic entity of this new geometry is four-dimensional spacetime. The orbits of moving bodies are curves in spacetime; the orbits of bodies moving at constant speed without changing direction correspond to straight lines.[12]

For surfaces, the generalization from the geometry of a plane – a flat surface – to that of a general curved surface had been described in the early 19th century by Carl Friedrich Gauss. This description had in turn been generalized to higher-dimensional spaces in a mathematical formalism introduced by Bernhard Riemann in the 1850s. With the help of Riemannian geometry, Einstein formulated a geometric description of gravity in which Minkowski's spacetime is replaced by distorted, curved spacetime, just as curved surfaces are a generalization of ordinary plane surfaces. Embedding diagrams are used to illustrate curved spacetime in educational contexts.[13][14]

After he had realized the validity of this geometric analogy, it took Einstein a further three years to find the missing cornerstone of his theory: the equations describing how matter influences spacetime's curvature. Having formulated what are now known as Einstein's equations (or, more precisely, his field equations of gravity), he presented his new theory of gravity at several sessions of the Prussian Academy of Sciences in late 1915, culminating in his final presentation on November 25, 1915.[15]

Geometry and gravitation

Paraphrasing John Wheeler, Einstein's geometric theory of gravity can be summarized thus:

spacetime tells matter how to move; matter tells spacetime how to curve.[16] What this means is addressed in the following three sections, which explore the motion of so-called test particles, examine which properties of matter serve as a source for gravity, and, finally, introduce Einstein's equations, which relate these matter properties to the curvature of spacetime.

Probing the gravitational field

Converging geodesics: two lines of longitude (green) that start out in parallel at the equator (red) but converge to meet at the pole.

In order to map a body's gravitational influence, it is useful to think about what physicists call probe or test particles: particles that are influenced by gravity, but are so small and light that we can neglect their own gravitational effect. In the absence of gravity and other external forces, a test particle moves along a straight line at a constant speed. In the language of spacetime, this is equivalent to saying that such test particles move along straight world lines in spacetime. In the presence of gravity, spacetime is non-Euclidean, or curved, and in curved spacetime straight world lines may not exist. Instead, test particles move along lines called geodesics, which are "as straight as possible", that is, they follow the shortest path between starting and ending points, taking the curvature into consideration.

A simple analogy is the following: In geodesy, the science of measuring Earth's size and shape, a geodesic (from Greek "geo", Earth, and "daiein", to divide) is the shortest route between two points on the Earth's surface. Approximately, such a route is a segment of a great circle, such as a line of longitude or the equator. These paths are certainly not straight, simply because they must follow the curvature of the Earth's surface. But they are as straight as is possible subject to this constraint.
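A geodesic on a sphere can be computed explicitly. The sketch below uses the standard haversine formula, treating the Earth as a perfect sphere of radius 6371 km (an assumption; the real Earth is slightly oblate).

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Shortest (geodesic) distance in km between two points on a sphere,
    given in degrees of latitude and longitude, via the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2)**2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2)**2
    return 2 * radius * math.asin(math.sqrt(a))

# A quarter of the equator: from (0°, 0°) to (0°, 90°) spans 90° of a great circle.
print(great_circle_distance(0, 0, 0, 90))  # ≈ 10007.5 km
```

The returned path length is that of the great-circle arc: not straight in three-dimensional space, but as straight as possible while staying on the surface.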

The properties of geodesics differ from those of straight lines. For example, on a plane, parallel lines never meet, but this is not so for geodesics on the surface of the Earth: for example, lines of longitude are parallel at the equator, but intersect at the poles. Analogously, the world lines of test particles in free fall are spacetime geodesics, the straightest possible lines in spacetime. But still there are crucial differences between them and the truly straight lines that can be traced out in the gravity-free spacetime of special relativity. In special relativity, parallel geodesics remain parallel. In a gravitational field with tidal effects, this will not, in general, be the case. If, for example, two bodies are initially at rest relative to each other, but are then dropped in the Earth's gravitational field, they will move towards each other as they fall towards the Earth's center.[17]

Compared with planets and other astronomical bodies, the objects of everyday life (people, cars, houses, even mountains) have little mass. Where such objects are concerned, the laws governing the behavior of test particles are sufficient to describe what happens. Notably, in order to deflect a test particle from its geodesic path, an external force must be applied. A chair someone is sitting on applies an external upwards force that prevents the person from falling freely towards the center of the Earth along a geodesic, as they would if the matter between them and the Earth's center were absent. In this way, general relativity explains the daily experience of gravity on the surface of the Earth not as the downwards pull of a gravitational force, but as the upwards push of external forces. These forces deflect all bodies resting on the Earth's surface from the geodesics they would otherwise follow.[18] For matter objects whose own gravitational influence cannot be neglected, the laws of motion are somewhat more complicated than for test particles, although it remains true that spacetime tells matter how to move.[19]

Sources of gravity

In Newton's description of gravity, the gravitational force is caused by matter. More precisely, it is caused by a specific property of material objects: their mass. In Einstein's theory and related theories of gravitation, curvature at every point in spacetime is also caused by whatever matter is present. Here, too, mass is a key property in determining the gravitational influence of matter. But in a relativistic theory of gravity, mass cannot be the only source of gravity. Relativity links mass with energy, and energy with momentum.

The equivalence between mass and energy, as expressed by the formula E = mc², is the most famous consequence of special relativity. In relativity, mass and energy are two different ways of describing one physical quantity. If a physical system has energy, it also has the corresponding mass, and vice versa. In particular, all properties of a body that are associated with energy, such as its temperature or the binding energy of systems such as nuclei or molecules, contribute to that body's mass, and hence act as sources of gravity.[20]
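The size of these energy contributions can be illustrated with E = mc² directly. The sketch below estimates the (tiny) mass gained by heating one kilogram of water by 100 K, using an approximate value for water's specific heat.

```python
# Mass-energy equivalence: added energy E contributes mass m = E / c^2.
# Sketch: mass gained by heating 1 kg of water by 100 K (approximate values).
c = 2.99792458e8          # speed of light, m/s
specific_heat = 4186.0    # specific heat of water, J/(kg*K), approximate
delta_E = 1.0 * specific_heat * 100.0   # energy added to 1 kg of water, J

delta_m = delta_E / c**2
print(f"Mass increase: {delta_m:.2e} kg")  # a few picograms
```

The result is of order 10⁻¹² kg, which is why such contributions go unnoticed in everyday life even though they are genuine sources of gravity.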

In special relativity, energy is closely connected to momentum. Just as space and time are, in that theory, different aspects of a more comprehensive entity called spacetime, energy and momentum are merely different aspects of a unified, four-dimensional quantity that physicists call four-momentum. In consequence, if energy is a source of gravity, momentum must be a source as well. The same is true for quantities that are directly related to energy and momentum, namely internal pressure and tension. Taken together, in general relativity it is mass, energy, momentum, pressure and tension that serve as sources of gravity: they are how matter tells spacetime how to curve. In the theory's mathematical formulation, all these quantities are but aspects of a more general physical quantity called the energy–momentum tensor.[21]

Einstein's equations

Einstein's equations are the centerpiece of general relativity. They provide a precise formulation of the relationship between spacetime geometry and the properties of matter, using the language of mathematics. More concretely, they are formulated using the concepts of Riemannian geometry, in which the geometric properties of a space (or a spacetime) are described by a quantity called a metric. The metric encodes the information needed to compute the fundamental geometric notions of distance and angle in a curved space (or spacetime).
Distances, at different latitudes, corresponding to 30 degrees difference in longitude.

A spherical surface like that of the Earth provides a simple example. The location of any point on the surface can be described by two coordinates: the geographic latitude and longitude. Unlike the Cartesian coordinates of the plane, coordinate differences are not the same as distances on the surface, as shown in the diagram on the right: for someone at the equator, moving 30 degrees of longitude westward (magenta line) corresponds to a distance of roughly 3,300 kilometers (2,100 mi). On the other hand, someone at a latitude of 55 degrees, moving 30 degrees of longitude westward (blue line) covers a distance of merely 1,900 kilometers (1,200 mi). Coordinates therefore do not provide enough information to describe the geometry of a spherical surface, or indeed the geometry of any more complicated space or spacetime. That information is precisely what is encoded in the metric, which is a function defined at each point of the surface (or space, or spacetime) and relates coordinate differences to differences in distance. All other quantities that are of interest in geometry, such as the length of any given curve, or the angle at which two curves meet, can be computed from this metric function.[22]
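The relation between coordinate differences and distances described above is exactly what the metric of a sphere encodes: along a circle of latitude, ds = R·cos(latitude)·dλ. A rough sketch, treating the Earth as a sphere of radius 6371 km, reproduces the two distances in the diagram:

```python
import math

# On a sphere, the same longitude difference spans different distances
# at different latitudes: ds = R * cos(latitude) * d(longitude).
R = 6371.0                 # assumed spherical Earth radius, km
dlon = math.radians(30)    # 30 degrees of longitude, in radians

for lat in (0, 55):
    dist = R * math.cos(math.radians(lat)) * dlon
    print(f"At latitude {lat:2d} deg: about {dist:6.0f} km")
```

The cos(latitude) factor is precisely the piece of metric information that bare coordinates do not carry.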

The metric function and its rate of change from point to point can be used to define a geometrical quantity called the Riemann curvature tensor, which describes exactly how the space or spacetime is curved at each point. In general relativity, the metric and the Riemann curvature tensor are quantities defined at each point in spacetime. As has already been mentioned, the matter content of the spacetime defines another quantity, the energy–momentum tensor T, and the principle that "spacetime tells matter how to move, and matter tells spacetime how to curve" means that these quantities must be related to each other. Einstein formulated this relation by using the Riemann curvature tensor and the metric to define another geometrical quantity G, now called the Einstein tensor, which describes some aspects of the way spacetime is curved. Einstein's equation then states that
G = (8πG/c⁴) T,
i.e., up to a constant multiple, the quantity G (which measures curvature) is equated with the quantity T (which measures matter content). Here, G is the gravitational constant of Newtonian gravity, and c is the speed of light from special relativity.

This equation is often referred to in the plural as Einstein's equations, since the quantities G and T are each determined by several functions of the coordinates of spacetime, and the equations equate each of these component functions.[23] A solution of these equations describes a particular geometry of spacetime; for example, the Schwarzschild solution describes the geometry around a spherical, non-rotating mass such as a star or a black hole, whereas the Kerr solution describes a rotating black hole. Still other solutions can describe a gravitational wave or, in the case of the Friedmann–Lemaître–Robertson–Walker solution, an expanding universe. The simplest solution is the uncurved Minkowski spacetime, the spacetime described by special relativity.[24]
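A single number that characterizes the Schwarzschild solution is the Schwarzschild radius r_s = 2GM/c², the size below which a mass of given M forms a black hole. A rough sketch with approximate astronomical constants:

```python
# Schwarzschild radius r_s = 2GM/c^2 for the Sun and the Earth.
# Constants are approximate gravitational parameters (GM), not bare masses.
c = 2.99792458e8        # speed of light, m/s
GM_sun = 1.32712e20     # GM of the Sun, m^3/s^2
GM_earth = 3.986004e14  # GM of the Earth, m^3/s^2

rs_sun = 2 * GM_sun / c**2
rs_earth = 2 * GM_earth / c**2
print(f"Sun:   {rs_sun / 1000:.2f} km")    # ~3 km
print(f"Earth: {rs_earth * 1000:.1f} mm")  # ~9 mm
```

That the Sun's Schwarzschild radius (about 3 km) is vastly smaller than its actual radius is one way of saying that gravity in the Solar System is weak.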

Experiments

No scientific theory is apodictically true; each is a model that must be checked by experiment. Newton's law of gravity was accepted because it accounted for the motion of planets and moons in the Solar System with considerable accuracy. As the precision of experimental measurements gradually improved, some discrepancies with Newton's predictions were observed, and these were accounted for in the general theory of relativity. Similarly, the predictions of general relativity must also be checked with experiment, and Einstein himself devised three tests now known as the classical tests of the theory:
Newtonian (red) vs. Einsteinian orbit (blue) of a single planet orbiting a spherical star.
  • Newtonian gravity predicts that the orbit which a single planet traces around a perfectly spherical star should be an ellipse. Einstein's theory predicts a more complicated curve: the planet behaves as if it were travelling around an ellipse, but at the same time, the ellipse as a whole is rotating slowly around the star. In the diagram on the right, the ellipse predicted by Newtonian gravity is shown in red, and part of the orbit predicted by Einstein in blue. For a planet orbiting the Sun, this deviation from Newton's orbits is known as the anomalous perihelion shift. The first measurement of this effect, for the planet Mercury, dates back to 1859. The most accurate results for Mercury and for other planets to date are based on measurements which were undertaken between 1966 and 1990, using radio telescopes.[25] General relativity predicts the correct anomalous perihelion shift for all planets where this can be measured accurately (Mercury, Venus and the Earth).
  • According to general relativity, light does not travel along straight lines when it propagates in a gravitational field. Instead, it is deflected in the presence of massive bodies. In particular, starlight is deflected as it passes near the Sun, leading to apparent shifts of up to 1.75 arc seconds in the stars' positions in the sky (an arc second is equal to 1/3600 of a degree). In the framework of Newtonian gravity, a heuristic argument can be made that leads to light deflection by half that amount. The different predictions can be tested by observing stars that are close to the Sun during a solar eclipse. In this way, a British expedition to West Africa in 1919, directed by Arthur Eddington, confirmed that Einstein's prediction was correct, and the Newtonian predictions wrong, via observation of the May 1919 eclipse. Eddington's results were not very accurate; subsequent observations of the deflection of the light of distant quasars by the Sun, which utilize highly accurate techniques of radio astronomy, have confirmed Eddington's results with significantly better precision (the first such measurements date from 1967, the most recent comprehensive analysis from 2004).[26]
  • Gravitational redshift was first measured in a laboratory setting in 1959 by Pound and Rebka. It is also seen in astrophysical measurements, notably for light escaping the white dwarf Sirius B. The related gravitational time dilation effect has been measured by transporting atomic clocks to altitudes of between tens and tens of thousands of kilometers (first by Hafele and Keating in 1971; most accurately to date by Gravity Probe A launched in 1976).[27]
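The first two classical-test numbers above follow from compact weak-field formulas: a light ray grazing the Sun is deflected by 4GM/(c²R), and the extra perihelion advance per orbit is 6πGM/(a(1−e²)c²). A rough numerical sketch, using approximate values for the solar gravitational parameter and Mercury's orbit:

```python
import math

# Weak-field estimates for two classical tests; constants are approximate.
GM_sun = 1.32712e20   # gravitational parameter GM of the Sun, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_sun = 6.96e8        # solar radius, m

# Deflection of a light ray grazing the Sun: theta = 4GM / (c^2 R).
theta = 4 * GM_sun / (c**2 * R_sun)
arcsec = math.degrees(theta) * 3600
print(f"Deflection at the solar limb: {arcsec:.2f} arcsec")  # ~1.75

# Anomalous perihelion shift of Mercury: 6*pi*GM / (a * (1 - e^2) * c^2) per orbit.
a = 5.791e10          # semi-major axis of Mercury's orbit, m
e = 0.2056            # orbital eccentricity
period_days = 87.969  # orbital period, days

shift_per_orbit = 6 * math.pi * GM_sun / (a * (1 - e**2) * c**2)
orbits_per_century = 36525 / period_days
shift_arcsec = math.degrees(shift_per_orbit) * 3600 * orbits_per_century
print(f"Perihelion shift: {shift_arcsec:.1f} arcsec per century")  # ~43
```

Both outputs reproduce the measured values: the famous 1.75 arc seconds of the eclipse expeditions and roughly 43 arc seconds per century for Mercury.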
Of these tests, only the perihelion advance of Mercury was known prior to Einstein's final publication of general relativity in 1916. The subsequent experimental confirmation of his other predictions, especially the first measurements of the deflection of light by the sun in 1919, catapulted Einstein to international stardom.[28] These three experiments justified adopting general relativity over Newton's theory and, incidentally, over a number of alternatives to general relativity that had been proposed.
Gravity Probe B with solar panels folded.

Further tests of general relativity include precision measurements of the Shapiro effect or gravitational time delay for light, most recently in 2002 by the Cassini space probe. One set of tests focuses on effects predicted by general relativity for the behavior of gyroscopes travelling through space. One of these effects, geodetic precession, has been tested with the Lunar Laser Ranging Experiment (high-precision measurements of the orbit of the Moon). Another, which is related to rotating masses, is called frame-dragging. The geodetic and frame-dragging effects were both tested by the Gravity Probe B satellite experiment launched in 2004, with results confirming relativity to within 0.5% and 15%, respectively, as of December 2008.[29]

By cosmic standards, gravity throughout the Solar System is weak. Since the differences between the predictions of Einstein's and Newton's theories are most pronounced when gravity is strong, physicists have long been interested in testing various relativistic effects in a setting with comparatively strong gravitational fields. This has become possible thanks to precision observations of binary pulsars. In such a star system, two highly compact neutron stars orbit each other. At least one of them is a pulsar – an astronomical object that emits a tight beam of radio waves. These beams strike the Earth at very regular intervals, much as the rotating beam of a lighthouse is seen as periodic blinking, and so can be observed as a highly regular series of pulses. General relativity predicts specific deviations from the regularity of these radio pulses. For instance, at times when the radio waves pass close to the other neutron star, they should be deflected by the star's gravitational field. The observed pulse patterns are impressively close to those predicted by general relativity.[30]

One particular set of observations is related to eminently useful practical applications, namely to satellite navigation systems such as the Global Positioning System that are used both for precise positioning and timekeeping. Such systems rely on two sets of atomic clocks: clocks aboard satellites orbiting the Earth, and reference clocks stationed on the Earth's surface. General relativity predicts that these two sets of clocks should tick at slightly different rates, due to their different motions (an effect already predicted by special relativity) and their different positions within the Earth's gravitational field. In order to ensure the system's accuracy, the satellite clocks are either slowed down by a relativistic factor, or that same factor is made part of the evaluation algorithm. In turn, tests of the system's accuracy (especially the very thorough measurements that are part of the definition of Coordinated Universal Time) are testament to the validity of the relativistic predictions.[31]
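The size of the GPS clock-rate offset can be estimated in the weak-field limit: a gravitational term GM/c²·(1/R − 1/r) makes the satellite clock run fast, while a special-relativistic term v²/(2c²) makes it run slow. A rough sketch, treating the orbit as circular and ignoring Earth's rotation; constants are approximate.

```python
# Net relativistic clock-rate offset for a GPS satellite (weak-field sketch,
# circular orbit assumed, Earth's rotation neglected; constants approximate).
GM_earth = 3.986004e14   # gravitational parameter GM of the Earth, m^3/s^2
c = 2.99792458e8         # speed of light, m/s
R_earth = 6.371e6        # Earth's radius, m
r_orbit = 2.6561e7       # GPS orbital radius, m

# Gravitational term: satellite clock runs fast relative to the ground clock.
grav = GM_earth / c**2 * (1 / R_earth - 1 / r_orbit)
# Kinematic term: orbital speed v with v^2 = GM/r slows the satellite clock.
kinematic = GM_earth / r_orbit / (2 * c**2)

net_us_per_day = (grav - kinematic) * 86400 * 1e6
print(f"Satellite clocks gain about {net_us_per_day:.0f} microseconds per day")
```

The net result, roughly 38 microseconds per day, would accumulate into kilometer-scale positioning errors within a day if left uncorrected.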

A number of other tests have probed the validity of various versions of the equivalence principle; strictly speaking, all measurements of gravitational time dilation are tests of the weak version of that principle, not of general relativity itself. So far, general relativity has passed all observational tests.[32]

Astrophysical applications

Models based on general relativity play an important role in astrophysics; the success of these models is further testament to the theory's validity.

Gravitational lensing

Einstein cross: four images of the same astronomical object, produced by a gravitational lens.

Since light is deflected in a gravitational field, it is possible for the light of a distant object to reach an observer along two or more paths. For instance, light of a very distant object such as a quasar can pass along one side of a massive galaxy and be deflected slightly so as to reach an observer on Earth, while light passing along the opposite side of that same galaxy is deflected as well, reaching the same observer from a slightly different direction. As a result, that particular observer will see one astronomical object in two different places in the night sky. This kind of focussing is well-known when it comes to optical lenses, and hence the corresponding gravitational effect is called gravitational lensing.[33]

Observational astronomy uses lensing effects as an important tool to infer properties of the lensing object. Even in cases where that object is not directly visible, the shape of a lensed image provides information about the mass distribution responsible for the light deflection. In particular, gravitational lensing provides one way to measure the distribution of dark matter, which does not give off light and can be observed only by its gravitational effects. One particularly interesting application is large-scale observations, where the lensing masses are spread out over a significant fraction of the observable universe; these can be used to obtain information about the large-scale properties and evolution of our cosmos.[34]
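To attach a number to the deflection described above (an added illustration; the formula is not quoted in the article itself), the standard weak-field result α = 4GM/(c²b) for light grazing the Sun reproduces the famous 1.75 arcseconds measured during the 1919 eclipse expedition:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
R_sun = 6.957e8        # solar radius, m (impact parameter of a grazing ray)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)      # deflection angle in radians
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"deflection at the solar limb: {alpha_arcsec:.2f} arcsec")  # ~1.75
```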

Gravitational waves

Gravitational waves, a direct consequence of Einstein's theory, are distortions of geometry that propagate at the speed of light, and can be thought of as ripples in spacetime. They should not be confused with the gravity waves of fluid dynamics, which are a different concept.

In February 2016, the Advanced LIGO team announced that they had directly observed gravitational waves from a black hole merger.[35]

Indirectly, the effect of gravitational waves had already been detected in observations of specific binary stars. Such pairs of stars orbit each other and, as they do so, gradually lose energy by emitting gravitational waves. For ordinary stars like the Sun, this energy loss would be too small to detect, but it was observed in 1974 in the binary pulsar PSR1913+16. In such a system, one of the orbiting stars is a pulsar. This has two consequences. First, a pulsar is an extremely dense object known as a neutron star, for which gravitational wave emission is much stronger than for ordinary stars. Second, a pulsar emits a narrow beam of electromagnetic radiation from its magnetic poles. As the pulsar rotates, its beam sweeps over the Earth, where it is seen as a regular series of radio pulses, just as a ship at sea observes regular flashes from the rotating light in a lighthouse. This regular pattern of radio pulses functions as a highly accurate "clock": it can be used to time the double star's orbital period, and it reacts sensitively to distortions of spacetime in its immediate neighborhood.

The discoverers of PSR1913+16, Russell Hulse and Joseph Taylor, were awarded the Nobel Prize in Physics in 1993. Since then, several other binary pulsars have been found. The most useful are those in which both stars are pulsars, since they provide accurate tests of general relativity.[36]

A number of land-based gravitational wave detectors are currently in operation, and a space-based detector, LISA, is under development, with a precursor mission (LISA Pathfinder) launched in 2015. Gravitational wave observations can be used to obtain information about compact objects such as neutron stars and black holes, and also to probe the state of the early universe fractions of a second after the Big Bang.[37]

Black holes

Black hole-powered jet emanating from the central region of the galaxy M87.

When mass is concentrated into a sufficiently compact region of space, general relativity predicts the formation of a black hole – a region of space with a gravitational effect so strong that not even light can escape. Certain types of black holes are thought to be the final state in the evolution of massive stars. On the other hand, supermassive black holes with the mass of millions or billions of Suns are assumed to reside in the cores of most galaxies, and they play a key role in current models of how galaxies have formed over the past billions of years.[38]
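To put a number on "sufficiently compact" (an illustration added here, not part of the original article): the Schwarzschild radius r_s = 2GM/c² is the size below which a given mass forms a black hole.

```python
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8      # speed of light, m/s
M_sun = 1.989e30      # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius below which a mass of mass_kg forms a black hole, in meters."""
    return 2 * G * mass_kg / c**2

# The Sun would have to be squeezed into a sphere ~3 km in radius
print(f"Sun: {schwarzschild_radius(M_sun) / 1000:.1f} km")
# A 4-million-solar-mass object (comparable to the Milky Way's central
# black hole) has a horizon smaller than a tenth of the Earth-Sun distance
print(f"4e6 Suns: {schwarzschild_radius(4e6 * M_sun) / 1.496e11:.2f} AU")
```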

Matter falling onto a compact object is one of the most efficient mechanisms for releasing energy in the form of radiation, and matter falling onto black holes is thought to be responsible for some of the brightest astronomical phenomena imaginable. Notable examples of great interest to astronomers are quasars and other types of active galactic nuclei. Under the right conditions, falling matter accumulating around a black hole can lead to the formation of jets, in which focused beams of matter are flung away into space at speeds near that of light.[39]

There are several properties that make black holes most promising sources of gravitational waves. One reason is that black holes are the most compact objects that can orbit each other as part of a binary system; as a result, the gravitational waves emitted by such a system are especially strong. Another reason follows from what are called black-hole uniqueness theorems: over time, black holes retain only a minimal set of distinguishing features (these theorems have become known as "no-hair" theorems, since different hairstyles are a crucial part of what gives different people their different appearances). For instance, in the long term, the collapse of a hypothetical matter cube will not result in a cube-shaped black hole. Instead, the resulting black hole will be indistinguishable from a black hole formed by the collapse of a spherical mass, but with one important difference: in its transition to a spherical shape, the black hole formed by the collapse of a cube will emit gravitational waves.[40]

Cosmology

An image, created using data from the WMAP satellite telescope, of the radiation emitted no more than a few hundred thousand years after the Big Bang.

One of the most important aspects of general relativity is that it can be applied to the universe as a whole. A key point is that, on large scales, our universe appears to be constructed along very simple lines: all current observations suggest that, on average, the structure of the cosmos is approximately the same, regardless of an observer's location or direction of observation: the universe is approximately homogeneous and isotropic. Such comparatively simple universes can be described by simple solutions of Einstein's equations. The current cosmological models of the universe are obtained by combining these simple solutions of general relativity with theories describing the properties of the universe's matter content, namely thermodynamics, nuclear physics, and particle physics. According to these models, our present universe emerged from an extremely dense high-temperature state – the Big Bang – roughly 14 billion years ago and has been expanding ever since.[41]
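As a rough order-of-magnitude check (an added sketch, assuming a present-day Hubble constant of about 70 km/s/Mpc), the "Hubble time" 1/H0 reproduces the ~14-billion-year scale quoted above:

```python
H0_km_s_Mpc = 70.0            # assumed Hubble constant, km/s per megaparsec
Mpc_in_km = 3.0857e19         # kilometers in one megaparsec

H0 = H0_km_s_Mpc / Mpc_in_km  # expansion rate in 1/s
seconds_per_year = 365.25 * 24 * 3600
hubble_time_gyr = (1 / H0) / seconds_per_year / 1e9

print(f"Hubble time: {hubble_time_gyr:.1f} billion years")  # ~14
```

The Hubble time is only a crude age estimate (the true age depends on the expansion history), but it shows where the 14-billion-year figure comes from.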

Einstein's equations can be generalized by adding a term called the cosmological constant. When this term is present, empty space itself acts as a source of repulsive (or, less commonly, attractive) gravity. Einstein originally introduced this term in his pioneering 1917 paper on cosmology, with a very specific motivation: contemporary cosmological thought held the universe to be static, and the additional term was required for constructing static model universes within the framework of general relativity. When it became apparent that the universe is not static, but expanding, Einstein was quick to discard this additional term. Since the end of the 1990s, however, astronomical evidence indicating an accelerating expansion consistent with a cosmological constant – or, equivalently, with a particular and ubiquitous kind of dark energy – has steadily been accumulating.[42]

Modern research

General relativity is very successful in providing a framework for accurate models which describe an impressive array of physical phenomena. On the other hand, there are many interesting open questions, and in particular, the theory as a whole is almost certainly incomplete.[43]

In contrast to all other modern theories of fundamental interactions, general relativity is a classical theory: it does not include the effects of quantum physics. The quest for a quantum version of general relativity addresses one of the most fundamental open questions in physics. While there are promising candidates for such a theory of quantum gravity, notably string theory and loop quantum gravity, there is at present no consistent and complete theory. It has long been hoped that a theory of quantum gravity would also eliminate another problematic feature of general relativity: the presence of spacetime singularities. These singularities are boundaries ("sharp edges") of spacetime at which geometry becomes ill-defined, with the consequence that general relativity itself loses its predictive power. Furthermore, there are so-called singularity theorems which predict that such singularities must exist within the universe if the laws of general relativity were to hold without any quantum modifications. The best-known examples are the singularities associated with the model universes that describe black holes and the beginning of the universe.[44]

Other attempts to modify general relativity have been made in the context of cosmology. In the modern cosmological models, most energy in the universe is in forms that have never been detected directly, namely dark energy and dark matter. There have been several controversial proposals to remove the need for these enigmatic forms of matter and energy, by modifying the laws governing gravity and the dynamics of cosmic expansion, for example modified Newtonian dynamics.[45]

Beyond the challenges of quantum effects and cosmology, research on general relativity is rich with possibilities for further exploration: mathematical relativists explore the nature of singularities and the fundamental properties of Einstein's equations,[46] and ever more comprehensive computer simulations of specific spacetimes (such as those describing merging black holes) are run.[47] More than ninety years after the theory was first published, research is more active than ever.[48]

Earth's Energy Imbalance -- Effect of Aerosols

I have reproduced a NASA science brief on Anthropogenic Global Warming (Climate Change) below, as a part of it has either left me puzzled as to the authors' meaning, or is highly suggestive.  First, please read the section (well, you should read all of it) on Aerosols. The caption under Figure 4 is especially intriguing:  "Expected Earth energy imbalance for three choices of aerosol climate forcing. Measured imbalance, close to 0.6 W/m2, implies that aerosol forcing is close to -1.6 W/m2."

As I read this, the total Earth energy imbalance, 2.2 W/m2, is (or was at this time) being offset by -1.6 W/m2, or 73%, of aerosol forcing, leaving only 0.6 W/m2.

Now consider the Arrhenius relation for the radiative forcing of CO2 (https://en.wikipedia.org/wiki/Svante_Arrhenius):

ΔF = α ln(C/C₀)

This gives the change in radiative forcing as the constant α (generally accepted as 5.35) multiplied by the natural logarithm of the current CO2 concentration (390 ppm in 2012) divided by the pre-industrial level (280 ppm). Performing this calculation yields about 1.8 W/m2. I presume the total of 2.2 W/m2 includes forcings from other sources, such as other greenhouse gases.
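The arithmetic can be checked in a few lines (a sketch; 5.35 is the commonly cited value of the Arrhenius coefficient):

```python
import math

alpha = 5.35      # W/m2, commonly cited Arrhenius coefficient
C_now = 390.0     # ppm CO2 (2012)
C_pre = 280.0     # ppm CO2 (pre-industrial)

delta_F = alpha * math.log(C_now / C_pre)
print(f"CO2 radiative forcing: {delta_F:.2f} W/m2")  # ~1.77
```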

We can estimate the temperature rise caused by this forcing using a variation of the Stefan-Boltzmann law (https://en.wikipedia.org/wiki/Stefan-Boltzmann_law), under which the equilibrium temperature scales as the 0.25 power of the total radiative flux. In this case, the base radiative flux is ~390 W/m2 (direct solar forcing plus down-welling radiation from the greenhouse effect). Thus (392.2/390)^0.25 = 1.0014, which multiplied by 288 K (the Earth's mean surface temperature) yields a temperature increase of only 0.4 K (this, incidentally, is less than half the ~1 K temperature increase since CO2 levels were at 280 ppm).

A forcing of only 0.6 W/m2, however, yields a paltry temperature increase of only 0.1 K, a tiny fraction of the estimated warming over the last 150-200 years -- well within natural fluctuations.
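Both figures follow from the same one-line scaling (a sketch of the back-of-envelope estimate above, assuming the ~390 W/m2 base flux and 288 K mean surface temperature):

```python
base_flux = 390.0   # W/m2, surface flux (solar plus greenhouse down-welling)
T_surface = 288.0   # K, mean surface temperature

def warming(delta_F):
    """Equilibrium temperature change for an added flux delta_F (T ~ F^0.25)."""
    return T_surface * ((base_flux + delta_F) / base_flux) ** 0.25 - T_surface

print(f"2.2 W/m2 -> {warming(2.2):.2f} K")  # ~0.41 K
print(f"0.6 W/m2 -> {warming(0.6):.2f} K")  # ~0.11 K
```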
Yet Hansen et al interpret this actual measured forcing of only 0.6 W/m2 as meaning that the -1.6 W/m2 of aerosol forcing is entirely anthropogenic: "Which alternative is closer to the truth defines the terms of a "Faustian bargain" that humanity has set for itself. Global warming so far has been limited, as aerosol cooling has partially offset greenhouse gas warming." Thus, they assume that as we control and reduce our aerosol emissions, warming will increase dramatically. This, remember, was in 2012; as I write these words it is January 2018, and the only way the Hansen et al statement can be defended is by appealing to the 2015-2017 El Nino event -- which is already steadily declining (http://www.drroyspencer.com/latest-global-temperatures/). And there is now considerable evidence that warming itself, regardless of cause, naturally increases aerosols from both the oceans and plant life (http://www.sciencemag.org/news/2016/05/earth-s-climate-may-not-warm-quickly-expected-suggest-new-cloud-studies).



Another part of this NASA report concerns the effect of solar influence on climate (The role of the Sun). There is a well-acknowledged correlation between solar magnetic activity (characterized by sunspot levels) and global temperatures, reaching back about 1000 years. AGW proponents have tried a number of ways to discredit or explain away this correlation, even though it is too strong to be ignored. One way is to note that changes in solar insolation are simply not large enough to account for significant temperature changes on Earth. In fact, no one disputes that. Rather, the theory is that changes in solar magnetic activity, by altering the intensity of cosmic rays reaching Earth's atmosphere and thereby affecting cloud cover, account for the correlation. Note, however, that it is long-term changes in sunspot activity, covering many decades, that give rise to the correlation, not the ~11-year sunspot cycle. Yet Hansen et al focus on this short-term cycle to "prove" that sunspot levels do not affect radiative forcing. Since no one is claiming that, this proof is irrelevant and invalid.

Without further comment, I reproduce below the Science Brief produced by NASA.


Science Briefs

Earth's Energy Imbalance

Original link:  https://www.giss.nasa.gov/research/briefs/hansen_16/

Deployment of an international array of Argo floats, measuring ocean heat content to a depth of 2000 m, was completed during the past decade, allowing the best assessment so far of Earth's energy imbalance. The observed planetary energy gain during the recent strong solar minimum reveals that the solar forcing of climate, although significant, is overwhelmed by a much larger net human-made climate forcing. The measured imbalance confirms that, if other climate forcings are fixed, atmospheric CO2 must be reduced to about 350 ppm or less to stop global warming. In our recently published paper (Hansen et al., 2011), we also show that climate forcing by human-made aerosols (fine particles in the air) is larger than usually assumed, implying an urgent need for accurate global aerosol measurements to help interpret continuing climate change.

Pie chart of contribution to Earth's energy imbalance
Figure 1. Contributions to Earth's (positive) energy imbalance in 2005-2010. Estimates for the deep Southern and Abyssal Oceans are by Purkey and Johnson (2010) based on sparse observations. (Credit: NASA/GISS)

Earth's energy imbalance is the difference between the amount of solar energy absorbed by Earth and the amount of energy the planet radiates to space as heat. If the imbalance is positive, more energy coming in than going out, we can expect Earth to become warmer in the future — but cooler if the imbalance is negative. Earth's energy imbalance is thus the single most crucial measure of the status of Earth's climate and it defines expectations for future climate change.

Energy imbalance arises because of changes of the climate forcings acting on the planet in combination with the planet's thermal inertia. For example, if the Sun becomes brighter, that is a positive forcing that will cause warming. If Earth were like Mercury, a body composed of low conductivity material and without oceans, its surface temperature would rise quickly to a level at which the planet was again radiating as much heat energy to space as the absorbed solar energy.

Earth's temperature does not adjust as fast as Mercury's due to the ocean's thermal inertia, which is substantial because the ocean is mixed to considerable depths by winds and convection. Thus it requires centuries for Earth's surface temperature to respond fully to a climate forcing.

Climate forcings are imposed perturbations to Earth's energy balance. Natural forcings include change of the Sun's brightness and volcanic eruptions that deposit aerosols in the stratosphere, thus cooling Earth by reflecting sunlight back to space. Principal human-made climate forcings are greenhouse gases (mainly CO2), which cause warming by trapping Earth's heat radiation, and human-made aerosols, which, like volcanic aerosols, reflect sunlight and have a cooling effect.

Let's consider the effect of a long-lived climate forcing. Say the Sun becomes brighter, staying brighter for a century or longer, or humans increase long-lived greenhouse gases. Either forcing results in more energy coming in than going out. As the planet warms in response to this imbalance, the heat radiated to space by Earth increases. Eventually Earth will reach a global temperature warm enough to radiate to space as much energy as it receives from the Sun, thus stabilizing climate at the new level. At any time during this process the remaining planetary energy imbalance allows us to estimate how much global warming is still "in the pipeline."

Many nations began, about a decade ago, to deploy floats around the world ocean that could "yo-yo" an instrument measuring ocean temperature to a depth of 2 km. By 2006 there were about 3000 floats covering most of the world ocean. These floats allowed von Schuckmann and Le Traon (2011) to estimate that during the 6-year period 2005-2010 the upper 2 km of the world ocean gained energy at a rate 0.41 W/m2 averaged over the planet.

We used other measurements to estimate the energy going into the deeper ocean, into the continents, and into melting of ice worldwide in the period 2005-2010. We found a total Earth energy imbalance of +0.58±0.15 W/m2 divided as shown in Fig. 1.

The role of the Sun. The measured positive imbalance in 2005-2010 is particularly important because it occurred during the deepest solar minimum in the period of accurate solar monitoring (Fig. 2). If the Sun were the only climate forcing or the dominant climate forcing, then the planet would gain energy during the solar maxima, but lose energy during solar minima.

Plot of solar irradiance from 1975 to 2010
Figure 2. Solar irradiance in the era of accurate satellite data. Left scale is the energy passing through an area perpendicular to Sun-Earth line. Averaged over Earth's surface the absorbed solar energy is ~240 W/m2, so the amplitude of solar variability is a forcing of ~0.25 W/m2. (Credit: NASA/GISS)

The fact that Earth gained energy at a rate 0.58 W/m2 during a deep prolonged solar minimum reveals that there is a strong positive forcing overwhelming the negative forcing by below-average solar irradiance. That result is not a surprise, given knowledge of other forcings, but it provides unequivocal refutation of assertions that the Sun is the dominant climate forcing.

Target CO2. The measured planetary energy imbalance provides an immediate accurate assessment of how much atmospheric CO2 would need to be reduced to restore Earth's energy balance, which is the basic requirement for stabilizing climate. If other climate forcings were unchanged, increasing Earth's radiation to space by 0.5 W/m2 would require reducing CO2 by ~30 ppm to 360 ppm. However, given that the imbalance of 0.58±0.15 W/m2 was measured during a deep solar minimum, it is probably necessary to increase radiation to space by closer to 0.75 W/m2, which would require reducing CO2 to ~345 ppm, other forcings being unchanged. Thus the Earth's energy imbalance confirms an earlier estimate on other grounds that CO2 must be reduced to about 350 ppm or less to stabilize climate (Hansen et al., 2008).

Aerosols. The measured planetary energy imbalance also allows us to estimate the climate forcing caused by human-made atmospheric aerosols. This is important because the aerosol forcing is believed to be large, but it is practically unmeasured.

Schematic of human-made climate forcings
Figure 3. Schematic diagram of human-made climate forcings by greenhouse gases, aerosols, and their net effect. (Credit: NASA/GISS)

The human-made greenhouse gas (GHG) forcing is known to be about +3 W/m2 (Fig. 3). The net human-made aerosol forcing is negative (cooling), but its magnitude is uncertain within a broad range (Fig. 3). The aerosol forcing is complex because there are several aerosol types, with some aerosols, such as black soot, partially absorbing incident sunlight, thus heating the atmosphere. Also aerosols serve as condensation nuclei for water vapor, thus causing additional aerosol climate forcing by altering cloud properties. As a result, sophisticated global measurements are needed to define the aerosol climate forcing, as discussed below.

The importance of knowing the aerosol forcing is shown by considering the following two cases: (1) aerosol forcing about -1 W/m2, such that the net climate forcing is ~ 2 W/m2, (2) aerosol forcing of -2 W/m2, yielding a net forcing ~1 W/m2. Both cases are possible, because of the uncertainty in the aerosol forcing.

Which alternative is closer to the truth defines the terms of a "Faustian bargain" that humanity has set for itself. Global warming so far has been limited, as aerosol cooling has partially offset greenhouse gas warming. But aerosols remain airborne only several days, so they must be pumped into the air faster and faster to keep pace with increasing long-lived greenhouse gases (much of the CO2 from fossil fuel emissions will remain in the air for several millennia). However, concern about health effects of particulate air pollution is likely to lead to eventual reduction of human-made aerosols. Thereupon humanity's Faustian payment will come due.

If the true net forcing is +2 W/m2 (aerosol forcing -1 W/m2), even a major effort to clean up aerosols, say reduction by half, increases the net forcing only 25% (from 2 W/m2 to 2.5 W/m2). But if the net forcing is +1 W/m2 (aerosol forcing -2 W/m2), reducing aerosols by half doubles the net climate forcing (from 1 W/m2 to 2 W/m2). Given that global climate effects are already observed (IPCC, 2007; Hansen et al., 2012), doubling the climate forcing suggests that humanity may face a grievous Faustian payment.

Bar chart of energy imbalance for three aerosol forcing choices
Figure 4. Expected Earth energy imbalance for three choices of aerosol climate forcing. Measured imbalance, close to 0.6 W/m2, implies that aerosol forcing is close to -1.6 W/m2. (Credit: NASA/GISS)

Most climate models contributing to the last assessment by the Intergovernmental Panel on Climate Change (IPCC, 2007) employed aerosol forcings in the range -0.5 to -1.1 W/m2 and achieved good agreement with observed global warming over the past century, suggesting that the aerosol forcing is only moderate. However, there is an ambiguity in the climate models. Most of the models used in IPCC (2007) mix heat efficiently into the intermediate and deep ocean, resulting in the need for a large climate forcing (~2 W/m2) to warm Earth's surface by the observed 0.8°C over the past century. But if the ocean mixes heat into the deeper ocean less efficiently, the net climate forcing needed to match observed global warming is smaller.

Earth's energy imbalance, if measured accurately, provides one way to resolve this ambiguity. The case with rapid ocean mixing and small aerosol forcing requires a large planetary energy imbalance to yield the observed surface warming. The planetary energy imbalance required to yield the observed warming for different choices of aerosol optical depth is shown in Fig. 4, based on a simplified representation of global climate simulations (Hansen et al., 2011).

Measured Earth energy imbalance, +0.58 W/m2 during 2005-2010, implies that the aerosol forcing is about -1.6 W/m2, a greater negative forcing than employed in most IPCC models. We discuss multiple lines of evidence that most climate models employed in these earlier studies had moderately excessive ocean mixing, which could account for the fact that they achieved a good fit to observed global temperature change with a smaller aerosol forcing.

The large (negative) aerosol climate forcing makes it imperative that we achieve a better understanding of the aerosols that cause this forcing. Unfortunately, the first satellite capable of measuring detailed aerosol physical properties, the Glory mission (Mishchenko et al., 2007), suffered a launch failure. It is urgent that a replacement mission be carried out, as the present net effect of changing emissions in developing and developed countries is highly uncertain.

Global measurements to assess the aerosol indirect climate forcing, via aerosol effects on clouds, require simultaneous high precision polarimetric measurements of reflected solar radiation and interferometric measurements of emitted heat radiation with the two instruments looking at the same area at the same time. Such a mission concept has been defined (Hansen et al., 1993) and recent reassessments indicate that it could be achieved at a cost of about $100M if carried out by the private sector without a requirement for undue government review panels.

Non-Euclidean geometry

From Wikipedia, the free encyclopedia

Behavior of lines with a common perpendicular in each of the three types of geometry

In mathematics, non-Euclidean geometry consists of two geometries based on axioms closely related to those specifying Euclidean geometry. As Euclidean geometry lies at the intersection of metric geometry and affine geometry, non-Euclidean geometry arises when either the metric requirement is relaxed, or the parallel postulate is replaced with an alternative one. In the latter case one obtains hyperbolic geometry and elliptic geometry, the traditional non-Euclidean geometries. When the metric requirement is relaxed, then there are affine planes associated with the planar algebras which give rise to kinematic geometries that have also been called non-Euclidean geometry.

The essential difference between the metric geometries is the nature of parallel lines. Euclid's fifth postulate, the parallel postulate, is equivalent to Playfair's postulate, which states that, within a two-dimensional plane, for any given line ℓ and a point A not on ℓ, there is exactly one line through A that does not intersect ℓ. In hyperbolic geometry, by contrast, there are infinitely many lines through A not intersecting ℓ, while in elliptic geometry, any line through A intersects ℓ.

Another way to describe the differences between these geometries is to consider two straight lines indefinitely extended in a two-dimensional plane that are both perpendicular to a third line:
  • In Euclidean geometry, the lines remain at a constant distance from each other (meaning that a line drawn perpendicular to one line at any point will intersect the other line and the length of the line segment joining the points of intersection remains constant) and are known as parallels.
  • In hyperbolic geometry, they "curve away" from each other, increasing in distance as one moves further from the points of intersection with the common perpendicular; these lines are often called ultraparallels.
  • In elliptic geometry, the lines "curve toward" each other and intersect.
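The elliptic case can be made concrete with a quick computation (an added sketch, using the unit sphere as the standard model of elliptic geometry): a triangle with one vertex at the pole and two on the equator, 90 degrees apart, has three right angles, so its angle sum is 270 degrees rather than the Euclidean 180.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def vertex_angle(A, B, C):
    """Spherical-triangle angle at vertex A, between great-circle arcs AB and AC."""
    # Project B and C into the tangent plane at A, then measure the plane angle.
    tB = [b - dot(A, B) * a for a, b in zip(A, B)]
    tC = [c - dot(A, C) * a for a, c in zip(A, C)]
    cosang = dot(tB, tC) / math.sqrt(dot(tB, tB) * dot(tC, tC))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# North pole plus two equator points a quarter-turn apart
A, B, C = (0, 0, 1), (1, 0, 0), (0, 1, 0)
total = vertex_angle(A, B, C) + vertex_angle(B, C, A) + vertex_angle(C, A, B)
print(f"angle sum: {total:.0f} degrees")  # 270
```

The 90-degree excess over 180 equals the triangle's area on the unit sphere (π/2, one eighth of the sphere), which is the spherical analogue of the angle-defect results discussed below for the hyperbolic case.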

History

Background

Euclidean geometry, named after the Greek mathematician Euclid, includes some of the oldest known mathematics, and geometries that deviated from this were not widely accepted as legitimate until the 19th century.

The debate that eventually led to the discovery of the non-Euclidean geometries began almost as soon as Euclid's work Elements was written. In the Elements, Euclid began with a limited number of assumptions (23 definitions, five common notions, and five postulates) and sought to prove all the other results (propositions) in the work. The most notorious of the postulates is often referred to as "Euclid's Fifth Postulate," or simply the "parallel postulate", which in Euclid's original formulation is:
If a straight line falls on two straight lines in such a manner that the interior angles on the same side are together less than two right angles, then the straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.
Other mathematicians have devised simpler forms of this property. Regardless of the form of the postulate, however, it consistently appears to be more complicated than Euclid's other postulates:
1. To draw a straight line from any point to any point.
2. To produce [extend] a finite straight line continuously in a straight line.
3. To describe a circle with any centre and distance [radius].
4. That all right angles are equal to one another.
For at least a thousand years, geometers were troubled by the disparate complexity of the fifth postulate, and believed it could be proved as a theorem from the other four. Many attempted to find a proof by contradiction, including Ibn al-Haytham (Alhazen, 11th century),[1] Omar Khayyám (12th century), Nasīr al-Dīn al-Tūsī (13th century), and Giovanni Girolamo Saccheri (18th century).

The theorems of Ibn al-Haytham, Khayyam and al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were "the first few theorems of the hyperbolic and the elliptic geometries." These theorems, along with their alternative postulates such as Playfair's axiom, played an important role in the later development of non-Euclidean geometry. These early attempts at challenging the fifth postulate had a considerable influence on later European geometers, including Witelo, Levi ben Gerson, Alfonso, John Wallis and Saccheri.[2] All of these early attempts at formulating non-Euclidean geometry, however, provided flawed proofs of the parallel postulate, containing assumptions that were essentially equivalent to it. They did, however, establish some early properties of the hyperbolic and elliptic geometries.

Khayyam, for example, tried to derive it from an equivalent postulate he formulated from "the principles of the Philosopher" (Aristotle): "Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge."[3] Khayyam then considered the three cases – right, obtuse, and acute – that the summit angles of a Saccheri quadrilateral can take, and after proving a number of theorems about them, he correctly refuted the obtuse and acute cases based on his postulate and hence derived the classic postulate of Euclid, which he did not realize was equivalent to his own postulate. Another example is al-Tusi's son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), who wrote a book on the subject in 1298, based on al-Tusi's later thoughts, which presented another hypothesis equivalent to the parallel postulate. "He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from the Elements."[4][5] His work was published in Rome in 1594 and was studied by European geometers, including Saccheri,[4] who criticised this work as well as that of Wallis.[6]

Giordano Vitale, in his book Euclide restituo (1680, 1686), used the Saccheri quadrilateral to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant.

In a work titled Euclides ab Omni Naevo Vindicatus (Euclid Freed from All Flaws), published in 1733, Saccheri quickly discarded elliptic geometry as a possibility (some others of Euclid's axioms must be modified for elliptic geometry to work) and set to work proving a great number of results in hyperbolic geometry.

He finally reached a point where he believed that his results demonstrated the impossibility of hyperbolic geometry. His claim seems to have been based on Euclidean presuppositions, because no logical contradiction was present. In this attempt to prove Euclidean geometry he instead unintentionally discovered a new viable geometry, but did not realize it.

In 1766 Johann Lambert wrote, but did not publish, Theorie der Parallellinien in which he attempted, as Saccheri did, to prove the fifth postulate. He worked with a figure that today we call a Lambert quadrilateral, a quadrilateral with three right angles (can be considered half of a Saccheri quadrilateral). He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyam, and then proceeded to prove many theorems under the assumption of an acute angle. Unlike Saccheri, he never felt that he had reached a contradiction with this assumption. He had proved the non-Euclidean result that the sum of the angles in a triangle increases as the area of the triangle decreases, and this led him to speculate on the possibility of a model of the acute case on a sphere of imaginary radius. He did not carry this idea any further.[7]

At this time it was widely believed that the universe worked according to the principles of Euclidean geometry.[8]

Discovery of non-Euclidean geometry

The beginning of the 19th century would finally witness decisive steps in the creation of non-Euclidean geometry. Circa 1813, Carl Friedrich Gauss and independently around 1818, the German professor of law Ferdinand Karl Schweikart[9] had the germinal ideas of non-Euclidean geometry worked out, but neither published any results. Then, around 1830, the Hungarian mathematician János Bolyai and the Russian mathematician Nikolai Ivanovich Lobachevsky separately published treatises on hyperbolic geometry. Consequently, hyperbolic geometry is called Bolyai-Lobachevskian geometry, since the two mathematicians, independently of each other, are the founding authors of non-Euclidean geometry. Gauss mentioned to Bolyai's father, when shown the younger Bolyai's work, that he had developed such a geometry several years before,[10] though he did not publish. While Lobachevsky created a non-Euclidean geometry by negating the parallel postulate, Bolyai worked out a geometry where both the Euclidean and the hyperbolic geometry are possible depending on a parameter k. Bolyai ends his work by mentioning that it is not possible to decide through mathematical reasoning alone whether the geometry of the physical universe is Euclidean or non-Euclidean; this is a task for the physical sciences.

Bernhard Riemann, in a famous lecture in 1854, founded the field of Riemannian geometry, discussing in particular the ideas now called manifolds, Riemannian metric, and curvature. He constructed an infinite family of geometries which are not Euclidean by giving a formula for a family of Riemannian metrics on the unit ball in Euclidean space. The simplest of these is called elliptic geometry and it is considered to be a non-Euclidean geometry due to its lack of parallel lines.[11]

By formulating the geometry in terms of a curvature tensor, Riemann allowed non-Euclidean geometry to be applied to higher dimensions.

Terminology

It was Gauss who coined the term "non-Euclidean geometry".[12] He was referring to his own work which today we call hyperbolic geometry. Several modern authors still consider "non-Euclidean geometry" and "hyperbolic geometry" to be synonyms.

Arthur Cayley noted that distance between points inside a conic could be defined in terms of logarithm and the projective cross-ratio function. The method became known as the Cayley-Klein metric because Felix Klein exploited it to describe the non-Euclidean geometries in articles[13] of 1871 and 1873, and later in book form. The Cayley-Klein metrics provided working models of hyperbolic and elliptic metric geometries, as well as Euclidean geometry.
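As an illustration, the Cayley-Klein recipe can be carried out along a single diameter of the unit disk. The following is a minimal Python sketch (the function name is ours, not a library call): hyperbolic distance is half the logarithm of the projective cross-ratio of the two points with the two ideal endpoints −1 and 1 of the chord.

```python
import math

def cayley_klein_distance(a, b):
    """Hyperbolic distance between a, b in (-1, 1), viewed as points on a
    diameter of the Klein disk whose ideal endpoints are -1 and 1.
    Distance is half the log of the cross-ratio (a, b; -1, 1)."""
    cross_ratio = ((a + 1) * (b - 1)) / ((a - 1) * (b + 1))
    return 0.5 * abs(math.log(abs(cross_ratio)))

# Distances add along a line, as a geodesic distance should:
d_ab = cayley_klein_distance(0.0, 0.5)
d_bc = cayley_klein_distance(0.5, 0.9)
d_ac = cayley_klein_distance(0.0, 0.9)
assert abs((d_ab + d_bc) - d_ac) < 1e-12

# Distance diverges as the second point approaches the ideal boundary:
assert cayley_klein_distance(0.0, 0.999999) > 7
```

Although both points stay inside the Euclidean unit interval, the logarithm stretches distances without bound near the boundary, which is how a bounded Euclidean model carries an unbounded non-Euclidean geometry.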

Klein is responsible for the terms "hyperbolic" and "elliptic" (in his system he called Euclidean geometry "parabolic", a term which generally fell out of use[14]). His influence has led to the current usage of the term "non-Euclidean geometry" to mean either "hyperbolic" or "elliptic" geometry.

There are some mathematicians who would extend the list of geometries that should be called "non-Euclidean" in various ways.[15]

Axiomatic basis of non-Euclidean geometry

Euclidean geometry can be axiomatically described in several ways. Unfortunately, Euclid's original system of five postulates (axioms) is not one of these, as his proofs relied on several unstated assumptions which should also have been taken as axioms. Hilbert's system, consisting of 20 axioms,[16] most closely follows the approach of Euclid and provides the justification for all of Euclid's proofs. Other systems, using different sets of undefined terms, obtain the same geometry by different paths. In all approaches, however, there is an axiom which is logically equivalent to Euclid's fifth postulate, the parallel postulate. Hilbert uses the Playfair axiom form, while Birkhoff, for instance, uses the axiom which says that "there exists a pair of similar but not congruent triangles." In any of these systems, removal of the one axiom which is equivalent to the parallel postulate, in whatever form it takes, while leaving all the other axioms intact, produces absolute geometry. As the first 28 propositions of Euclid (in The Elements) do not require the use of the parallel postulate or anything equivalent to it, they are all true statements in absolute geometry.[17]

To obtain a non-Euclidean geometry, the parallel postulate (or its equivalent) must be replaced by its negation. Negating the Playfair's axiom form, since it is a compound statement (... there exists one and only one ...), can be done in two ways:
  • Either there will exist more than one line through the point parallel to the given line or there will exist no lines through the point parallel to the given line. In the first case, replacing the parallel postulate (or its equivalent) with the statement "In a plane, given a point P and a line ℓ not passing through P, there exist two lines through P which do not meet ℓ" and keeping all the other axioms, yields hyperbolic geometry.[18]
  • The second case is not dealt with as easily. Simply replacing the parallel postulate with the statement, "In a plane, given a point P and a line ℓ not passing through P, all the lines through P meet ℓ", does not give a consistent set of axioms. This follows since parallel lines exist in absolute geometry,[19] but this statement says that there are no parallel lines. This problem was known (in a different guise) to Khayyam, Saccheri and Lambert and was the basis for their rejecting what was known as the "obtuse angle case". In order to obtain a consistent set of axioms which includes this axiom about having no parallel lines, some of the other axioms must be tweaked. The adjustments to be made depend upon the axiom system being used. Among others these tweaks will have the effect of modifying Euclid's second postulate from the statement that line segments can be extended indefinitely to the statement that lines are unbounded. Riemann's elliptic geometry emerges as the most natural geometry satisfying this axiom.

Models of non-Euclidean geometry

On a sphere, the sum of the angles of a triangle is not equal to 180°. The surface of a sphere is not a Euclidean space, but locally the laws of the Euclidean geometry are good approximations. In a small triangle on the surface of the earth, the sum of the angles is very nearly 180°.
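This can be checked numerically. The sketch below (pure Python; the helper names are ours) measures the angle at each vertex of the spherical triangle whose corners are the north pole and two equatorial points 90° apart, using the fact that the angle between two sides equals the angle between the planes through the origin containing them:

```python
import math

def angle_at(vertex, p, q):
    """Angle (degrees) at `vertex` of the spherical triangle vertex-p-q:
    the angle between the great-circle planes of the two sides."""
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    n1, n2 = cross(vertex, p), cross(vertex, q)
    cos_angle = dot(n1, n2) / math.sqrt(dot(n1, n1) * dot(n2, n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

# A triangle covering one octant of the sphere: the north pole plus two
# equatorial points a quarter turn apart. Every angle is a right angle.
a, b, c = (0, 0, 1), (1, 0, 0), (0, 1, 0)
total = angle_at(a, b, c) + angle_at(b, a, c) + angle_at(c, a, b)
print(total)  # 270.0, not 180
```

Shrinking the triangle toward a point drives the sum back toward 180°, in line with the remark that Euclidean geometry is a good local approximation.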

Two dimensional Euclidean geometry is modelled by our notion of a "flat plane."

Elliptic geometry

The simplest model for elliptic geometry is a sphere, where lines are "great circles" (such as the equator or the meridians on a globe), and points opposite each other (called antipodal points) are identified (considered to be the same). This is also one of the standard models of the real projective plane. The difference is that as a model of elliptic geometry a metric is introduced permitting the measurement of lengths and angles, while as a model of the projective plane there is no such metric. In the elliptic model, for any given line ℓ and a point A, which is not on ℓ, all lines through A will intersect ℓ.
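The claim that any two elliptic lines meet can be verified directly in the sphere model, where a line is the great circle cut out by a plane through the origin. A brief Python sketch (function names are ours): two distinct planes through the origin always meet in a line, which pierces the sphere in an antipodal pair, i.e. a single elliptic point.

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def intersection(n1, n2):
    """Meeting point of two elliptic lines, each given by the normal
    vector of its great-circle plane. The planes meet along the line
    spanned by n1 x n2, which hits the sphere in two antipodal points,
    identified as one point of the elliptic plane."""
    d = cross(n1, n2)
    norm = math.sqrt(sum(c * c for c in d))
    if norm == 0:
        raise ValueError("the two lines coincide")
    return tuple(c / norm for c in d)  # one representative of the antipodal pair

# The equator (plane normal along z) and a meridian (normal along x) meet:
p = intersection((0, 0, 1), (1, 0, 0))
print(p)  # (0.0, 1.0, 0.0)
```

Because the cross product of two non-parallel normals is never zero, the construction never fails: there are no parallel lines in this geometry.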

Hyperbolic geometry

Even after the work of Lobachevsky, Gauss, and Bolyai, the question remained: "Does such a model exist for hyperbolic geometry?". This question was answered by Eugenio Beltrami, in 1868, who first showed that a surface called the pseudosphere has the appropriate curvature to model a portion of hyperbolic space, and, in a second paper in the same year, defined the Klein model, which models the entirety of hyperbolic space. He used this to show that Euclidean geometry and hyperbolic geometry were equiconsistent, so that hyperbolic geometry is logically consistent if and only if Euclidean geometry is. (The reverse implication follows from the horosphere model of Euclidean geometry.)
In the hyperbolic model, within a two-dimensional plane, for any given line ℓ and a point A, which is not on ℓ, there are infinitely many lines through A that do not intersect ℓ.

In these models the concepts of non-Euclidean geometries are being represented by Euclidean objects in a Euclidean setting. This introduces a perceptual distortion wherein the straight lines of the non-Euclidean geometry are being represented by Euclidean curves which visually bend. This "bending" is not a property of the non-Euclidean lines, only an artifice of the way they are being represented.

Three-dimensional non-Euclidean geometry

In three dimensions, there are eight models of geometries.[20] There are Euclidean, elliptic, and hyperbolic geometries, as in the two-dimensional case; mixed geometries that are partially Euclidean and partially hyperbolic or spherical; twisted versions of the mixed geometries; and one unusual geometry that is completely anisotropic (i.e. every direction behaves differently).

Uncommon properties


Lambert quadrilateral in hyperbolic geometry

Saccheri quadrilaterals in the three geometries

Euclidean and non-Euclidean geometries naturally have many similar properties, namely those which do not depend upon the nature of parallelism. This commonality is the subject of absolute geometry (also called neutral geometry). However, the properties which distinguish one geometry from the others are the ones which have historically received the most attention.

Besides the behavior of lines with respect to a common perpendicular, mentioned in the introduction, we also have the following:
  • A Lambert quadrilateral is a quadrilateral which has three right angles. The fourth angle of a Lambert quadrilateral is acute if the geometry is hyperbolic, a right angle if the geometry is Euclidean or obtuse if the geometry is elliptic. Consequently, rectangles exist (a statement equivalent to the parallel postulate) only in Euclidean geometry.
  • A Saccheri quadrilateral is a quadrilateral which has two sides of equal length, both perpendicular to a side called the base. The other two angles of a Saccheri quadrilateral are called the summit angles and they have equal measure. The summit angles of a Saccheri quadrilateral are acute if the geometry is hyperbolic, right angles if the geometry is Euclidean and obtuse angles if the geometry is elliptic.
  • The sum of the measures of the angles of any triangle is less than 180° if the geometry is hyperbolic, equal to 180° if the geometry is Euclidean, and greater than 180° if the geometry is elliptic. The defect of a triangle is the numerical value (180° - sum of the measures of the angles of the triangle). This result may also be stated as: the defect of triangles in hyperbolic geometry is positive, the defect of triangles in Euclidean geometry is zero, and the defect of triangles in elliptic geometry is negative.

Importance

Before the models of a non-Euclidean plane were presented by Beltrami, Klein, and Poincaré, Euclidean geometry stood unchallenged as the mathematical model of space. Furthermore, since the substance of the subject in synthetic geometry was a chief exhibit of rationality, the Euclidean point of view represented absolute authority.

The discovery of the non-Euclidean geometries had a ripple effect which went far beyond the boundaries of mathematics and science. The philosopher Immanuel Kant's treatment of human knowledge had a special role for geometry. It was his prime example of synthetic a priori knowledge: neither derived from the senses nor deduced through logic, our knowledge of space was a truth that we were born with. Unfortunately for Kant, his concept of this unalterably true geometry was Euclidean. Theology, too, was affected by the change from absolute truth to relative truth in the way that mathematics relates to the world around it, a change that resulted from this paradigm shift.[21]

Non-Euclidean geometry is an example of a scientific revolution in the history of science, in which mathematicians and scientists changed the way they viewed their subjects.[22] Some geometers called Lobachevsky the "Copernicus of Geometry" due to the revolutionary character of his work.[23][24]
The existence of non-Euclidean geometries impacted the intellectual life of Victorian England in many ways[25] and in particular was one of the leading factors that caused a re-examination of the teaching of geometry based on Euclid's Elements. This curriculum issue was hotly debated at the time and was even the subject of a book, Euclid and his Modern Rivals, written by Charles Lutwidge Dodgson (1832–1898) better known as Lewis Carroll, the author of Alice in Wonderland.

Planar algebras

In analytic geometry a plane is described with Cartesian coordinates: C = { (x, y) : x, y ∈ ℝ }. The points are sometimes identified with complex numbers z = x + yε where ε² ∈ { −1, 0, 1 }.

The Euclidean plane corresponds to the case ε² = −1, since the modulus of z is given by

    zz* = (x + yε)(x − yε) = x² + y²

and this quantity is the square of the Euclidean distance between z and the origin. For instance, { z | zz* = 1 } is the unit circle.

For planar algebra, non-Euclidean geometry arises in the other cases. When ε² = +1, then z is a split-complex number and conventionally j replaces ε. Then

    zz* = (x + yj)(x − yj) = x² − y²

and { z | zz* = 1 } is the unit hyperbola.

When ε² = 0, then z is a dual number.[26] In that case zz* = (x + yε)(x − yε) = x², and { z | zz* = 1 } is the pair of parallel lines x = ±1.

This approach to non-Euclidean geometry explains the non-Euclidean angles: the parameters of slope in the dual number plane and hyperbolic angle in the split-complex plane correspond to angle in Euclidean geometry. Indeed, they each arise in polar decomposition of a complex number z.[27]
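A small Python sketch can make the three cases concrete. The class below (our own illustration, not a standard library) stores z = x + yε together with the chosen value of ε², and evaluates zz* = x² − ε²y²:

```python
class PlanarNumber:
    """A number z = x + y*eps in a plane where eps**2 is -1, 0, or +1:
    ordinary complex, dual, and split-complex numbers respectively."""

    def __init__(self, x, y, eps2):
        assert eps2 in (-1, 0, 1)
        self.x, self.y, self.eps2 = x, y, eps2

    def modulus_squared(self):
        # z z* = (x + y*eps)(x - y*eps) = x**2 - eps**2 * y**2
        return self.x**2 - self.eps2 * self.y**2

# eps**2 = -1: z z* = x**2 + y**2, so z z* = 1 is the unit circle.
assert abs(PlanarNumber(0.6, 0.8, -1).modulus_squared() - 1.0) < 1e-12
# eps**2 = +1: z z* = x**2 - y**2, so z z* = 1 is the unit hyperbola.
assert abs(PlanarNumber(1.25, 0.75, 1).modulus_squared() - 1.0) < 1e-12
# eps**2 = 0:  z z* = x**2, so z z* = 1 is the line pair x = +/-1.
assert abs(PlanarNumber(1.0, 42.0, 0).modulus_squared() - 1.0) < 1e-12
```

The same formula for zz* yields three different unit "circles", and it is the shape of that unit locus which carries the geometry of each plane.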

Kinematic geometries

Hyperbolic geometry found an application in kinematics with the physical cosmology introduced by Hermann Minkowski in 1908. Minkowski introduced terms like worldline and proper time into mathematical physics. He realized that the submanifold of events one moment of proper time into the future could be considered a hyperbolic space of three dimensions.[28][29] Already in the 1890s Alexander Macfarlane was charting this submanifold through his Algebra of Physics and hyperbolic quaternions, though Macfarlane did not use cosmological language as Minkowski did in 1908. The relevant structure is now called the hyperboloid model of hyperbolic geometry.

The non-Euclidean planar algebras support kinematic geometries in the plane. For instance, the split-complex number z = e^(aj) can represent a spacetime event one moment into the future of a frame of reference of rapidity a. Furthermore, multiplication by z amounts to a Lorentz boost mapping the frame with rapidity zero to that with rapidity a.
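A hedged Python sketch of this boost (units with c = 1; the function name is ours): multiplying t + xj by e^(aj) = cosh(a) + sinh(a)j, using j² = +1, reproduces the standard Lorentz transformation.

```python
import math

def boost(t, x, a):
    """Multiply the split-complex number t + x*j by e^(a*j), i.e. by
    cosh(a) + sinh(a)*j with j*j = +1. The result is the Lorentz boost
    of the event (t, x) to a frame of rapidity a."""
    ch, sh = math.cosh(a), math.sinh(a)
    return ch * t + sh * x, ch * x + sh * t  # (t', x')

a = 0.5
t1, x1 = boost(1.0, 0.0, a)
# The interval t**2 - x**2 (the split-complex modulus) is preserved:
assert abs((t1**2 - x1**2) - 1.0) < 1e-12
# Successive boosts compose by adding rapidities, like rotation angles:
t2, x2 = boost(t1, x1, a)
assert abs(t2 - math.cosh(2 * a)) < 1e-12
assert abs(x2 - math.sinh(2 * a)) < 1e-12
```

The preserved quantity t² − x² is exactly the unit-hyperbola locus of the split-complex plane above, which is why rapidity behaves as a hyperbolic angle.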

Kinematic study makes use of the dual numbers z = x + yε, with ε² = 0, to represent the classical description of motion in absolute time and space: the equations x′ = x + vt, t′ = t are equivalent to a shear mapping in linear algebra,

    ( x′ )   ( 1  v ) ( x )
    ( t′ ) = ( 0  1 ) ( t ).

With dual numbers the mapping is t′ + x′ε = (1 + vε)(t + xε) = t + (x + vt)ε.[30]
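The dual-number form of this mapping can be sketched in a few lines of Python (illustrative only): since ε² = 0, multiplication by 1 + vε leaves t fixed and shears x.

```python
def galilean(t, x, v):
    """Multiply the dual number t + x*eps by 1 + v*eps, using eps*eps = 0:
    (1 + v*eps)(t + x*eps) = t + (x + v*t)*eps.
    This is the shear t' = t, x' = x + v*t of classical kinematics."""
    return t, x + v * t  # (t', x')

# A particle at x = 2 at time t = 3, seen from a frame moving at v = -1:
t_new, x_new = galilean(3.0, 2.0, -1.0)
assert (t_new, x_new) == (3.0, -1.0)  # time unchanged; position sheared
```

Where the split-complex boost preserved t² − x², here the preserved quantity is t itself: absolute time is what distinguishes the Galilean kinematic geometry.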

Another view of special relativity as a non-Euclidean geometry was advanced by E. B. Wilson and Gilbert Lewis in Proceedings of the American Academy of Arts and Sciences in 1912. They revamped the analytic geometry implicit in the split-complex number algebra into synthetic geometry of premises and deductions.[31][32]

Fiction

Non-Euclidean geometry often makes appearances in works of science fiction and fantasy.
  • In 1895 H. G. Wells published the short story "The Remarkable Case of Davidson’s Eyes". To appreciate this story one should know how antipodal points on a sphere are identified in a model of the elliptic plane. In the story, in the midst of a thunderstorm, Sidney Davidson sees "Waves and a remarkably neat schooner" while working in an electrical laboratory at Harlow Technical College. At the story’s close Davidson proves to have witnessed H.M.S. Fulmar off Antipodes Island.
  • Non-Euclidean geometry is sometimes connected with the influence of the 20th century horror fiction writer H. P. Lovecraft. In his works, many unnatural things follow their own unique laws of geometry: In Lovecraft's Cthulhu Mythos, the sunken city of R'lyeh is characterized by its non-Euclidean geometry. It is heavily implied this is achieved as a side effect of not following the natural laws of this universe rather than simply using an alternate geometric model, as the sheer innate wrongness of it is said to be capable of driving those who look upon it insane.[33]
  • The main character in Robert Pirsig's Zen and the Art of Motorcycle Maintenance mentions Riemannian geometry on multiple occasions.
  • In The Brothers Karamazov, Dostoevsky discusses non-Euclidean geometry through his main character Ivan.
  • Christopher Priest's novel Inverted World describes the struggle of living on a planet with the form of a rotating pseudosphere.
  • Robert Heinlein's The Number of the Beast utilizes non-Euclidean geometry to explain instantaneous transport through space and time and between parallel and fictional universes.
  • Alexander Bruce's Antichamber uses non-Euclidean geometry to create a minimal, Escher-like world, where geometry and space follow unfamiliar rules.
  • Zeno Rogue's HyperRogue is a roguelike game set on the hyperbolic plane, allowing the player to experience many properties of this geometry. Many mechanics, quests, and locations are strongly dependent on the features of hyperbolic geometry.[34]
  • In the Renegade Legion science fiction setting for FASA's wargame, role-playing-game and fiction, faster-than-light travel and communications is possible through the use of Hsieh Ho's Polydimensional Non-Euclidean Geometry, published sometime in the middle of the 22nd century.
  • In Ian Stewart's Flatterland the protagonist Victoria Line visits all kinds of non-Euclidean worlds.
  • In Jean-Pierre Petit's Here's looking at Euclid (and not looking at Euclid) Archibald Higgins stumbles upon spherical geometry.[35]
