Thursday, May 10, 2018

Luminiferous aether

From Wikipedia, the free encyclopedia
 
The luminiferous aether: it was hypothesised that the Earth moves through a "medium" of aether that carries light

In the late 19th century, luminiferous aether, aether, or ether, meaning light-bearing aether, was the postulated medium for the propagation of light.[1] It was invoked to explain the ability of the apparently wave-based light to propagate through empty space, something that waves should not be able to do. The assumption of a spatial plenum of luminiferous aether, rather than a spatial vacuum, provided the theoretical medium that was required by wave theories of light.

The concept was the topic of considerable debate throughout its history, as it required the existence of an invisible and infinite material with no interaction with physical objects. As the nature of light was explored, especially in the 19th century, the physical qualities required of the aether became increasingly contradictory. By the late 1800s, the existence of the aether was being questioned, although there was no physical theory to replace it.

The negative outcome of the Michelson–Morley experiment (1887) suggested that the aether did not exist, a finding confirmed in subsequent experiments through the 1920s. This led to considerable theoretical work to explain the propagation of light without an aether. A major breakthrough was the theory of relativity, which could explain why the experiment failed to detect the aether, but was more broadly interpreted to suggest that no aether was needed. The Michelson–Morley experiment, along with studies of black-body radiation and the photoelectric effect, was a key experiment in the development of modern physics, which includes both relativity and quantum theory, the latter of which explains the wave-like nature of light.

The history of light and aether

Particles vs. waves

To Robert Boyle in the 17th century, shortly before Isaac Newton, the aether was a probable hypothesis and consisted of subtle particles, one sort of which explained the absence of vacuum and the mechanical interactions between bodies, and the other sort of which explained phenomena such as magnetism (and possibly gravity) that were inexplicable on the basis of the purely mechanical interactions of macroscopic bodies, "though in the ether of the ancients there was nothing taken notice of but a diffused and very subtle substance; yet we are at present content to allow that there is always in the air a swarm of steams moving in a determinate course between the north pole and the south".[2]

Isaac Newton contended that light was made up of numerous small particles. This could explain such features as light's ability to travel in straight lines and reflect off surfaces. This theory was known to have its problems: although it explained reflection well, its explanation of refraction and diffraction was less satisfactory.[citation needed] To explain refraction, Newton's Opticks (1704) postulated an "Aethereal Medium" transmitting vibrations faster than light, by which light, when overtaken, is put into "Fits of easy Reflexion and easy Transmission", which caused refraction and diffraction. Newton believed that these vibrations were related to heat radiation:
Is not the Heat of the warm Room convey'd through the vacuum by the Vibrations of a much subtiler Medium than Air, which after the Air was drawn out remained in the Vacuum? And is not this Medium the same with that Medium by which Light is refracted and reflected, and by whose Vibrations Light communicates Heat to Bodies, and is put into Fits of easy Reflexion and easy Transmission?[A 1]:349
The modern understanding is that heat radiation is, like light, electromagnetic radiation. However, Newton viewed heat and light as two different phenomena. He believed heat vibrations to be excited "when a Ray of Light falls upon the Surface of any pellucid Body".[A 1]:348 He wrote, "I do not know what this Aether is", but that if it consists of particles then they must be
exceedingly smaller than those of Air, or even than those of Light: The exceeding smallness of its Particles may contribute to the greatness of the force by which those Particles may recede from one another, and thereby make that Medium exceedingly more rare and elastic than Air, and by consequence exceedingly less able to resist the motions of Projectiles, and exceedingly more able to press upon gross Bodies, by endeavoring to expand itself.[A 1]:352
Before Newton, Christiaan Huygens had hypothesized that light was a wave propagating through an aether.[citation needed] Newton rejected this idea, mainly on the ground that both men apparently could only envision light as a longitudinal wave, like sound and other mechanical waves in fluids.[citation needed]

However, longitudinal waves necessarily have only one form for a given propagation direction, rather than the two polarizations of a transverse wave.[citation needed] Thus, longitudinal waves could not explain birefringence, in which two polarizations of light are refracted differently by a crystal.[citation needed] Instead, Newton preferred to imagine non-spherical particles, or "corpuscles", of light with different "sides" that give rise to birefringence.[citation needed] In addition, Newton rejected light as waves in a medium because such a medium would have to extend everywhere in space, and would thereby "disturb and retard the Motions of those great Bodies" (the planets and comets) and thus "as it [light's medium] is of no use, and hinders the Operation of Nature, and makes her languish, so there is no evidence for its Existence, and therefore it ought to be rejected".[citation needed]

Bradley suggests particles

In 1720 James Bradley carried out a series of experiments attempting to measure stellar parallax by taking measurements of stars at different times of the year. As the Earth moves around the sun, the apparent angle to a given distant spot changes, and by measuring those angles the distance to the star can be calculated based on the known orbital circumference of the Earth around the sun. He failed to detect any parallax, thereby placing a lower limit on the distance to stars.
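
To make the geometry concrete (a modern gloss, not Bradley's notation): for the small angles involved, the distance follows directly from the baseline and the parallax angle,

d \approx \frac{1\,\mathrm{AU}}{p}

with the parallax angle p in radians. A star showing p = 1″ (about 4.85 × 10⁻⁶ rad) lies at roughly 2.06 × 10⁵ AU, the distance later named the parsec. Bradley's null result meant that every star's parallax was smaller than his instruments could resolve, and hence that its distance exceeded the corresponding limit.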

During these experiments he also discovered a similar effect; the apparent positions of the stars did change over the year, but not as expected. Instead of the apparent angle being maximized when the Earth was at either end of its orbit with respect to the star, the angle was maximized when the Earth was at its fastest sideways velocity with respect to the star. This interesting effect is now known as stellar aberration.

Bradley explained this effect in the context of Newton's corpuscular theory of light, by showing that the aberration angle was given by simple vector addition of the Earth's orbital velocity and the velocity of the corpuscles of light, just as vertically falling raindrops strike a moving object at an angle. Knowing the Earth's velocity and the aberration angle, this enabled him to estimate the speed of light.
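
The arithmetic of this vector addition is simple enough to verify directly. A minimal sketch in Python, using modern round values rather than Bradley's own data:

    import math

    v_earth = 29.8e3   # Earth's mean orbital speed, m/s
    c = 2.998e8        # speed of light, m/s

    # Raindrop picture: the telescope must be tilted by the angle whose
    # tangent is the ratio of the observer's speed to the light's speed.
    theta = math.atan(v_earth / c)
    print(math.degrees(theta) * 3600)   # ≈ 20.5 arcseconds

Bradley ran the relation in reverse: from the measured aberration of roughly 20 arcseconds and the known orbital speed, he obtained a value for the speed of light accurate to within a few percent.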

To explain stellar aberration in the context of an aether-based theory of light was regarded as more problematic. As the aberration relied on relative velocities, and the measured velocity was dependent on the motion of the Earth, the aether had to remain stationary with respect to the star as the Earth moved through it. This meant that the Earth could travel through the aether, a physical medium, with no apparent effect – precisely the problem that led Newton to reject a wave model in the first place.

Wave theory triumphs

However, a century later, Young and Fresnel revived the wave theory of light when they pointed out that light could be a transverse wave rather than a longitudinal wave – the polarization of a transverse wave (like Newton's "sides" of light) could explain birefringence, and in the wake of a series of experiments on diffraction the particle model of Newton was finally abandoned. Physicists assumed, moreover, that like mechanical waves, light waves required a medium for propagation, and thus required Huygens's idea of an aether "gas" permeating all space.

However, a transverse wave apparently required the propagating medium to behave as a solid, as opposed to a gas or fluid. The idea of a solid that did not interact with other matter seemed a bit odd, and Augustin-Louis Cauchy suggested that perhaps there was some sort of "dragging", or "entrainment", but this made the aberration measurements difficult to understand. He also suggested that the absence of longitudinal waves implied that the aether had negative compressibility. George Green pointed out that such a fluid would be unstable. George Gabriel Stokes became a champion of the entrainment interpretation, developing a model in which the aether might be (by analogy with pine pitch) rigid at very high frequencies and fluid at lower frequencies. Thus the Earth could move through it fairly freely, but it would be rigid enough to support light.

Electromagnetism

In 1856 Wilhelm Eduard Weber and Rudolf Kohlrausch performed an experiment to measure the numerical value of the ratio of the electromagnetic unit of charge to the electrostatic unit of charge. The result came out to be equal to the product of the speed of light and the square root of two. The following year, Gustav Kirchhoff wrote a paper in which he showed that the speed of a signal along an electric wire was equal to the speed of light. These are the first recorded historical links between the speed of light and electromagnetic phenomena.

James Clerk Maxwell began working on Faraday's lines of force. In his 1861 paper On Physical Lines of Force he modelled these magnetic lines of force using a sea of molecular vortices that he considered to be partly made of aether and partly made of ordinary matter. He derived expressions for the dielectric constant and the magnetic permeability in terms of the transverse elasticity and the density of this elastic medium. He then equated the ratio of the dielectric constant to the magnetic permeability with a suitably adapted version of Weber and Kohlrausch's result of 1856, and he substituted this result into Newton's equation for the speed of sound. On obtaining a value that was close to the speed of light as measured by Fizeau, Maxwell concluded that light consists in undulations of the same medium that is the cause of electric and magnetic phenomena.[B 1][B 2][B 3][B 4]
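
In modern notation, the coincidence Maxwell seized upon is that the electric and magnetic constants of free space by themselves fix a speed. A quick check with present-day SI values (not the units Weber and Kohlrausch worked in):

    import math

    mu_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
    eps_0 = 8.8541878128e-12    # vacuum permittivity, F/m

    c = 1 / math.sqrt(mu_0 * eps_0)
    print(c)   # ≈ 2.998e8 m/s, matching the measured speed of light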

Maxwell had however expressed some uncertainties surrounding the precise nature of his molecular vortices and so he began to embark on a purely dynamical approach to the problem. He wrote another famous paper in 1864 under the title of "A Dynamical Theory of the Electromagnetic Field" in which the details of the luminiferous medium were less explicit.[A 2] Although Maxwell did not explicitly mention the sea of molecular vortices, his derivation of Ampère's circuital law was carried over from the 1861 paper and he used a dynamical approach involving rotational motion within the electromagnetic field which he likened to the action of flywheels. Using this approach to justify the electromotive force equation (the precursor of the Lorentz force equation), he derived a wave equation from a set of eight equations which appeared in the paper and which included the electromotive force equation and Ampère's circuital law.[A 2] Maxwell once again used the experimental results of Weber and Kohlrausch to show that this wave equation represented an electromagnetic wave that propagates at the speed of light, hence supporting the view that light is a form of electromagnetic radiation.

The apparent need for a propagation medium for such Hertzian waves can be seen by the fact that they consist of perpendicular electric (E) and magnetic (B or H) waves. The E waves consist of undulating dipolar electric fields, and all such dipoles appeared to require separated and opposite electric charges. Electric charge is an inextricable property of matter, so it appeared that some form of matter was required to provide the alternating current that would seem to have to exist at any point along the propagation path of the wave. Propagation of waves in a true vacuum would imply the existence of electric fields without associated electric charge, or of electric charge without associated matter. Albeit compatible with Maxwell's equations, electromagnetic induction of electric fields could not be demonstrated in vacuum, because all methods of detecting electric fields required electrically charged matter.

In addition, Maxwell's equations required that all electromagnetic waves in vacuum propagate at a fixed speed, c. As this can only occur in one reference frame in Newtonian physics (see Galilean-Newtonian relativity), the aether was hypothesized as the absolute and unique frame of reference in which Maxwell's equations hold. That is, the aether must be "still" universally, otherwise c would vary along with any variations that might occur in its supportive medium. Maxwell himself proposed several mechanical models of aether based on wheels and gears, and George Francis FitzGerald even constructed a working model of one of them. These models had to agree with the fact that the electromagnetic waves are transverse but never longitudinal.

Problems

By this point the mechanical qualities of the aether had become more and more magical: it had to be a fluid in order to fill space, but one that was millions of times more rigid than steel in order to support the high frequencies of light waves. It also had to be massless and without viscosity, otherwise it would visibly affect the orbits of planets. Additionally it appeared it had to be completely transparent, non-dispersive, incompressible, and continuous at a very small scale.[citation needed] Maxwell wrote in Encyclopædia Britannica:[A 3]
Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, until all space had been filled three or four times over with aethers. ... The only aether which has survived is that which was invented by Huygens to explain the propagation of light.
Contemporary scientists were aware of the problems, but aether theory was so entrenched in physical law by this point that it was simply assumed to exist. In 1908 Oliver Lodge gave a speech on behalf of Lord Rayleigh [3] to the Royal Institution on this topic, in which he outlined its physical properties, and then attempted to offer reasons why they were not impossible. Nevertheless, he was also aware of the criticisms, and quoted Lord Salisbury as saying that "aether is little more than a nominative case of the verb to undulate". Others criticized it as an "English invention", although Rayleigh jokingly stated it was actually an invention of the Royal Institution.[4]

By the early 20th century, aether theory was in trouble. A series of increasingly complex experiments had been carried out in the late 19th century to try to detect the motion of the Earth through the aether, and had failed to do so. A range of proposed aether-dragging theories could explain the null result but these were more complex, and tended to use arbitrary-looking coefficients and physical assumptions. Lorentz and FitzGerald offered within the framework of Lorentz ether theory a more elegant solution to how the motion of an absolute aether could be undetectable (length contraction), but if their equations were correct, the new special theory of relativity (1905) could generate the same mathematics without referring to an aether at all. Aether fell to Occam's Razor.[B 1][B 2][B 3][B 4]

Relative motion between the Earth and aether

Aether drag

The two most important models aimed at describing the relative motion of the Earth and aether were Augustin-Jean Fresnel's (1818) model of the (nearly) stationary aether, including a partial aether drag determined by Fresnel's dragging coefficient,[A 4] and George Gabriel Stokes' (1844)[A 5] model of complete aether drag. The latter theory was not considered correct, since it was not compatible with the aberration of light, and the auxiliary hypotheses developed to explain this problem were not convincing. Subsequent experiments such as the Sagnac effect (1913) also showed that this model is untenable. However, the most important experiment supporting Fresnel's theory was Fizeau's 1851 experimental confirmation of Fresnel's 1818 prediction that a medium with refractive index n moving with a velocity v would increase the speed of light travelling through the medium in the same direction as v from c/n to:[E 1][E 2]
\frac{c}{n} + \left( 1 - \frac{1}{n^2} \right) v.
That is, movement adds only a fraction of the medium's velocity to the light (predicted by Fresnel in order to make Snell's law work in all frames of reference, consistent with stellar aberration). This was initially interpreted to mean that the medium drags the aether along, with a portion of the medium's velocity, but that understanding became very problematic after Wilhelm Veltmann demonstrated that the index n in Fresnel's formula depended upon the wavelength of light, so that the aether could not be moving at a wavelength-independent speed. This implied that there must be a separate aether for each of the infinitely many frequencies.
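
The partial character of the drag is easy to see numerically. A sketch with illustrative values (the flow speed is of the order Fizeau used, not his exact figure):

    n = 1.333     # refractive index of water
    v = 7.0       # flow speed of the water, m/s
    c = 2.998e8   # speed of light in vacuum, m/s

    drag = 1 - 1 / n**2    # Fresnel's dragging coefficient, ≈ 0.44
    u = c / n + drag * v   # light speed along the flow, lab frame
    print(u - c / n)       # ≈ 3.1 m/s gained, not the full 7 m/s

Special relativity later reproduced the coefficient without any aether: relativistic addition of the velocities c/n and v gives c/n + (1 − 1/n²)v to first order in v/c.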

Negative aether-drift experiments

The key difficulty with Fresnel's aether hypothesis arose from the juxtaposition of the two well-established theories of Newtonian dynamics and Maxwell's electromagnetism. Under a Galilean transformation the equations of Newtonian dynamics are invariant, whereas those of electromagnetism are not. Basically this means that while physics should remain the same in non-accelerated experiments, light would not follow the same rules because it is travelling in the universal "aether frame". Some effect caused by this difference should be detectable.

A simple example concerns the model on which aether was originally built: sound. The speed of propagation for mechanical waves, the speed of sound, is defined by the mechanical properties of the medium. Sound travels 4.3 times faster in water than in air. This explains why a person hearing an explosion underwater and quickly surfacing can hear it again as the slower travelling sound arrives through the air. Similarly, a traveller on an airliner can still carry on a conversation with another traveller because the sound of words is travelling along with the air inside the aircraft. This effect is basic to all Newtonian dynamics, which says that everything from sound to the trajectory of a thrown baseball should all remain the same in the aircraft flying (at least at a constant speed) as if still sitting on the ground. This is the basis of the Galilean transformation, and the concept of frame of reference.

But the same was not supposed to be true for light, since Maxwell's mathematics demanded a single universal speed for the propagation of light, based, not on local conditions, but on two measured properties, the permittivity and permeability of free space, that were assumed to be the same throughout the universe. If these numbers did change, there should be noticeable effects in the sky; stars in different directions would have different colours, for instance.[verification needed]

Thus at any point there should be one special coordinate system, "at rest relative to the aether". Maxwell noted in the late 1870s that detecting motion relative to this aether should be easy enough—light travelling along with the motion of the Earth would have a different speed than light travelling backward, as they would both be moving against the unmoving aether. Even if the aether had an overall universal flow, changes in position during the day/night cycle, or over the span of seasons, should allow the drift to be detected.

First order experiments

Although the aether is almost stationary in Fresnel's theory, it predicts a positive outcome of aether drift experiments only at second order in v/c, because Fresnel's dragging coefficient would cause a negative outcome in all optical experiments capable of measuring effects at first order in v/c.
This was confirmed by the following first-order experiments, which all gave negative results. The following list is based on the description of Wilhelm Wien (1898), with changes and additional experiments according to the descriptions of Edmund Taylor Whittaker (1910) and Jakob Laub (1910):[B 5][B 1][B 6]
  • The experiment of François Arago (1810), to confirm whether refraction, and thus the aberration of light, is influenced by Earth's motion. Similar experiments were conducted by George Biddell Airy (1871) by means of a telescope filled with water, and Éleuthère Mascart (1872).[E 3][E 4][E 5]
  • The experiment of Fizeau (1860), to find whether the rotation of the polarization plane through glass columns is changed by Earth's motion. He obtained a positive result, but Lorentz could show that the results were contradictory. DeWitt Bristol Brace (1905) and Strasser (1907) repeated the experiment with improved accuracy, and obtained negative results.[E 6][E 7][E 8]
  • The experiment of Martin Hoek (1868). This experiment is a more precise variation of the famous Fizeau experiment (1851). Two light rays were sent in opposite directions – one of them traverses a path filled with resting water, the other one follows a path through air. In agreement with Fresnel's dragging coefficient, he obtained a negative result.[E 9]
  • The experiment of Wilhelm Klinkerfues (1870) investigated whether an influence of Earth's motion on the absorption line of sodium exists. He obtained a positive result, but this was shown to be an experimental error, because a repetition of the experiment by Haga (1901) gave a negative result.[E 10][E 11]
  • The experiment of Ketteler (1872), in which two rays of an interferometer were sent in opposite directions through two mutually inclined tubes filled with water. No change of the interference fringes occurred. Later, Mascart (1872) showed that the interference fringes of polarized light in calcite remained uninfluenced as well.[E 12][E 13]
  • The experiment of Éleuthère Mascart (1872) to find a change of rotation of the polarization plane in quartz. No change of rotation was found when the light rays had the direction of Earth's motion and then the opposite direction. Lord Rayleigh conducted similar experiments with improved accuracy, and obtained a negative result as well.[E 5][E 13][E 14]
Besides those optical experiments, electrodynamic first-order experiments were also conducted, which should have led to positive results according to Fresnel. However, Hendrik Antoon Lorentz (1895) modified Fresnel's theory and showed that those experiments can be explained by a stationary aether as well:[A 6]
  • The experiment of Wilhelm Röntgen (1888), to find whether a charged condenser produces magnetic forces due to Earth's motion.[E 15]
  • The experiment of Theodor des Coudres (1889), to find whether the inductive effect of two wire rolls upon a third one is influenced by the direction of Earth's motion. Lorentz showed that this effect is cancelled to first order by the electrostatic charge (produced by Earth's motion) upon the conductors.[E 16]
  • The experiment of Königsberger (1905). The plates of a condenser are located in the field of a strong electromagnet. Due to Earth's motion, the plates should have become charged. No such effect was observed.[E 17]
  • The experiment of Frederick Thomas Trouton (1902). A condenser was brought parallel to Earth's motion, and it was assumed that momentum is produced when the condenser is charged. The negative result can be explained by Lorentz's theory, according to which the electromagnetic momentum compensates the momentum due to Earth's motion. Lorentz could also show that the sensitivity of the apparatus was much too low to observe such an effect.[E 18]

Second order experiments

The Michelson–Morley experiment compared the time for light to reflect from mirrors in two orthogonal directions.

While the first-order experiments could be explained by a modified stationary aether, more precise second-order experiments were expected to give positive results; however, no such results were found.

The famous Michelson–Morley experiment compared the source light with itself after being sent in different directions, looking for changes in phase in a manner that could be measured with extremely high accuracy. In this experiment, their goal was to determine the velocity of the Earth through the aether.[E 19][E 20] The publication of their result in 1887, the null result, was the first clear demonstration that something was seriously wrong with the aether concept (Michelson's first experiment in 1881 was not entirely conclusive). In this case the MM experiment yielded a shift of the fringe pattern of about 0.01 of a fringe, corresponding to a small velocity. However, it was incompatible with the expected aether wind effect due to the Earth's (seasonally varying) velocity, which would have required a shift of 0.4 of a fringe, and the error was small enough that the value may have indeed been zero. Therefore, the null hypothesis, the hypothesis that there was no aether wind, could not be rejected. More modern experiments have since reduced the possible value to a number very close to zero, about 10⁻¹⁷.
It is obvious from what has gone before that it would be hopeless to attempt to solve the question of the motion of the solar system by observations of optical phenomena at the surface of the earth.
— A. Michelson and E. Morley. "On the Relative Motion of the Earth and the Luminiferous Æther". Phil. Mag. S. 5. Vol. 24. No. 151. Dec. 1887.[5]
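
The expected shift quoted above follows from a back-of-the-envelope estimate. A sketch using the commonly cited effective arm length of the 1887 apparatus and an assumed round value for the wavelength:

    L = 11.0       # effective optical path per arm, m
    lam = 5.5e-7   # wavelength of the light, m (assumed round value)
    beta = 1e-4    # Earth's orbital speed divided by c

    # Classical aether prediction: rotating the apparatus by 90 degrees
    # swaps the arms' roles, shifting the pattern by 2*L*beta^2/lambda.
    delta_N = 2 * L * beta**2 / lam
    print(delta_N)   # ≈ 0.4 of a fringe, against the ≈ 0.01 observed
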
A series of experiments using similar but increasingly sophisticated apparatuses all returned the null result as well. Conceptually different experiments that also attempted to detect the motion of the aether were the Trouton–Noble experiment (1903),[E 21] whose objective was to detect torsion effects caused by electrostatic fields, and the experiments of Rayleigh and Brace (1902, 1904),[E 22][E 23] to detect double refraction in various media. However, all of them obtained a null result, like Michelson–Morley (MM) previously did.

These "aether-wind" experiments led to a flurry of efforts to "save" aether by assigning to it ever more complex properties, while only few scientists, like Emil Cohn or Alfred Bucherer, considered the possibility of the abandonment of the aether concept. Of particular interest was the possibility of "aether entrainment" or "aether drag", which would lower the magnitude of the measurement, perhaps enough to explain the results of the Michelson-Morley experiment. However, as noted earlier, aether dragging already had problems of its own, notably aberration. In addition, the interference experiments of Lodge (1893, 1897) and Ludwig Zehnder (1895), aimed to show whether the aether is dragged by various, rotating masses, showed no aether drag.[E 24][E 25][E 26] A more precise measurement was made in the Hammar experiment (1935), which ran a complete MM experiment with one of the "legs" placed between two massive lead blocks.[E 27] If the aether was dragged by mass then this experiment would have been able to detect the drag caused by the lead, but again the null result was achieved. The theory was again modified, this time to suggest that the entrainment only worked for very large masses or those masses with large magnetic fields. This too was shown to be incorrect by the Michelson–Gale–Pearson experiment, which detected the Sagnac effect due to Earth's rotation (see Aether drag hypothesis).

Another, completely different attempt to save "absolute" aether was made in the Lorentz–FitzGerald contraction hypothesis, which posited that everything was affected by travel through the aether. In this theory the reason the Michelson–Morley experiment "failed" was that the apparatus contracted in length in the direction of travel. That is, the light was being affected in the "natural" manner by its travel through the aether as predicted, but so was the apparatus itself, cancelling out any difference when measured. FitzGerald had inferred this hypothesis from a paper by Oliver Heaviside. Without reference to an aether, this physical interpretation of relativistic effects was shared by Kennedy and Thorndike in 1932 as they concluded that the interferometer's arm contracts and also the frequency of its light source "very nearly" varies in the way required by relativity.[E 28][6]
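
Quantitatively, the hypothesis holds that an arm of rest length L_0 moving lengthwise at speed v contracts to

L = L_0 \sqrt{1 - \frac{v^2}{c^2}}

For the Earth's orbital motion, v/c ≈ 10⁻⁴ and the factor differs from unity by only about 5 × 10⁻⁹; an 11 m arm shortens by roughly 55 nm, about a tenth of a wavelength of visible light, yet this is precisely enough to equalize the two round-trip times and cancel the expected fringe shift.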

Similarly the Sagnac effect, observed by G. Sagnac in 1913, was immediately seen to be fully consistent with special relativity.[E 29][E 30] In fact, the Michelson–Gale–Pearson experiment in 1925 was proposed specifically as a test to confirm the relativity theory, although it was also recognized that such tests, which merely measure absolute rotation, are also consistent with non-relativistic theories.[7]

During the 1920s, the experiments pioneered by Michelson were repeated by Dayton Miller, who publicly proclaimed positive results on several occasions, although they were not large enough to be consistent with any known aether theory. However, other researchers were unable to duplicate Miller's claimed results. Over the years the experimental accuracy of such measurements has been raised by many orders of magnitude, and no trace of any violations of Lorentz invariance has been seen. (A later re-analysis of Miller's results concluded that he had underestimated the variations due to temperature.)

Since the Miller experiment and its unclear results there have been many more experimental attempts to detect the aether. Many experimenters have claimed positive results. These results have not gained much attention from mainstream science, since they contradict a large quantity of high-precision measurements, all the results of which were consistent with special relativity.[8]

Lorentz aether theory

Between 1892 and 1904, Hendrik Lorentz developed an electron-aether theory, in which he introduced a strict separation between matter (electrons) and aether. In his model the aether is completely motionless, and is not set in motion in the neighborhood of ponderable matter. Contrary to earlier electron models, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field cannot propagate faster than the speed of light. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c.[A 6] This theorem states that an observer moving relative to the aether makes the same observations as a resting observer, after a suitable change of variables. Lorentz noticed that it was necessary to change the space-time variables when changing frames and introduced concepts like physical length contraction (1892)[A 7] to explain the Michelson–Morley experiment, and the mathematical concept of local time (1895) to explain the aberration of light and the Fizeau experiment. This resulted in the formulation of the so-called Lorentz transformation by Joseph Larmor (1897, 1900)[A 8][A 9] and Lorentz (1899, 1904),[A 10][A 11] whereby (as Larmor noted) the complete formulation of local time is accompanied by some sort of time dilation of electrons moving in the aether. As Lorentz later noted (1921, 1928), he considered the time indicated by clocks resting in the aether as "true" time, while local time was seen by him as a heuristic working hypothesis and a mathematical artifice.[A 12][A 13] Therefore, Lorentz's theorem is seen by modern authors as being a mathematical transformation from a "real" system resting in the aether into a "fictitious" system in motion.[B 7][B 3][B 8]
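
In its finished form, and in modern notation, the transformation for a frame moving with speed v along the x-axis reads

x' = \gamma (x - vt), \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

Lorentz's "local time" is the vx/c² correction in t': at first order in v/c, where γ ≈ 1, the transformation reduces to t' = t − vx/c², the form used in the 1895 theorem of corresponding states.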

The work of Lorentz was mathematically perfected by Henri Poincaré, who formulated on many occasions the Principle of Relativity and tried to harmonize it with electrodynamics. He declared simultaneity only a convenient convention which depends on the speed of light, whereby the constancy of the speed of light would be a useful postulate for making the laws of nature as simple as possible. In 1900 and 1904[A 14][A 15] he physically interpreted Lorentz's local time as the result of clock synchronization by light signals. In June and July 1905[A 16][A 17] he declared the relativity principle a general law of nature, including gravitation. He corrected some mistakes of Lorentz and proved the Lorentz covariance of the electromagnetic equations. However, he used the notion of an aether as a perfectly undetectable medium and distinguished between apparent and real time, so most historians of science argue that he failed to invent special relativity.[B 7][B 9][B 3]

End of aether?

Special relativity

Aether theory was dealt another blow when the Galilean transformation and Newtonian dynamics were both modified by Albert Einstein's special theory of relativity, giving the mathematics of Lorentzian electrodynamics a new, "non-aether" context.[A 18] Unlike most major shifts in scientific thought, special relativity was adopted by the scientific community remarkably quickly, consistent with Einstein's later comment that the laws of physics described by the Special Theory were "ripe for discovery" in 1905.[B 10] Max Planck's early advocacy of the special theory, along with the elegant formulation given to it by Hermann Minkowski, contributed much to the rapid acceptance of special relativity among working scientists.

Einstein based his theory on Lorentz's earlier work. Instead of suggesting that the mechanical properties of objects changed with their constant-velocity motion through an undetectable aether, Einstein proposed to deduce the characteristics that any successful theory must possess in order to be consistent with the most basic and firmly established principles, independent of the existence of a hypothetical aether. He found that the Lorentz transformation must transcend its connection with Maxwell's equations, and must represent the fundamental relations between the space and time coordinates of inertial frames of reference. In this way he demonstrated that the laws of physics remained invariant as they had with the Galilean transformation, but that light was now invariant as well.

With the development of special relativity, the need to account for a single universal frame of reference had disappeared – and acceptance of the 19th century theory of a luminiferous aether disappeared with it. For Einstein, the Lorentz transformation implied a conceptual change: that the concept of position in space or time was not absolute, but could differ depending on the observer's location and velocity.

Moreover, in another paper published the same month in 1905, Einstein made several observations on a then-thorny problem, the photoelectric effect. In this work he demonstrated that light can be considered as particles that have a "wave-like nature". Particles obviously do not need a medium to travel, and thus, neither did light. This was the first step that would lead to the full development of quantum mechanics, in which the wave-like nature and the particle-like nature of light are both considered as valid descriptions of light. A summary of Einstein's thinking about the aether hypothesis, relativity and light quanta may be found in his 1909 (originally German) lecture "The Development of Our Views on the Composition and Essence of Radiation".[A 19]

Lorentz, for his part, continued to use the aether concept. In his lectures of around 1911 he pointed out that what "the theory of relativity has to say ... can be carried out independently of what one thinks of the aether and the time". He commented that "whether there is an aether or not, electromagnetic fields certainly exist, and so also does the energy of the electrical oscillations" so that, "if we do not like the name of 'aether', we must use another word as a peg to hang all these things upon". He concluded that "one cannot deny the bearer of these concepts a certain substantiality".[9][B 7]

Other models

In later years there have been a few individuals who advocated a neo-Lorentzian approach to physics, which is Lorentzian in the sense of positing an absolute true state of rest that is undetectable and which plays no role in the predictions of the theory. (No violations of Lorentz covariance have ever been detected, despite strenuous efforts.) Hence these theories resemble the 19th century aether theories in name only. For example, Paul Dirac, one of the founders of quantum field theory, stated in 1951 in an article in Nature titled "Is there an Aether?" that "we are rather forced to have an aether".[10][A 20] However, Dirac never formulated a complete theory, and so his speculations found no acceptance by the scientific community.

Einstein's views on the aether

When Einstein was still a student at the Zurich Polytechnic in 1900, he was very interested in the idea of aether. His initial research proposal was for an experiment to measure how fast the Earth was moving through the aether.[11] He wrote: "The velocity of a wave is proportional to the square root of the elastic forces which cause [its] propagation, and inversely proportional to the mass of the aether moved by these forces."[12]

In 1916, after Einstein completed his foundational work on general relativity, Lorentz wrote a letter to him in which he speculated that within general relativity the aether was re-introduced. In his response Einstein wrote that one can actually speak about a "new aether", but one may not speak of motion in relation to that aether. This was further elaborated by Einstein in some semi-popular articles (1918, 1920, 1924, 1930).[A 21][A 22][A 23][A 24][B 11][B 12][B 13]

In 1918 Einstein publicly alluded to that new definition for the first time.[A 21] Then, in the early 1920s, in a lecture which he was invited to give at Lorentz's university in Leiden, Einstein sought to reconcile the theory of relativity with Lorentzian aether. In this lecture Einstein stressed that special relativity took away the last mechanical property of the aether: immobility. However, he continued that special relativity does not necessarily rule out the aether, because the latter can be used to give physical reality to acceleration and rotation. This concept was fully elaborated within general relativity, in which physical properties (which are partially determined by matter) are attributed to space, but no substance or state of motion can be attributed to that "aether" (by which he meant curved space-time).[B 13][A 22][13]

In another paper of 1924, named "Concerning the Aether", Einstein argued that Newton's absolute space, in which acceleration is absolute, is the "Aether of Mechanics". And within the electromagnetic theory of Maxwell and Lorentz one can speak of the "Aether of Electrodynamics", in which the aether possesses an absolute state of motion. As regards special relativity, also in this theory acceleration is absolute as in Newton's mechanics. However, the difference from the electromagnetic aether of Maxwell and Lorentz lies in the fact, that "because it was no longer possible to speak, in any absolute sense, of simultaneous states at different locations in the aether, the aether became, as it were, four dimensional, since there was no objective way of ordering its states by time alone". Now the "aether of special relativity" is still "absolute", because matter is affected by the properties of the aether, but the aether is not affected by the presence of matter. This asymmetry was solved within general relativity. Einstein explained that the "aether of general relativity" is not absolute, because matter is influenced by the aether, just as matter influences the structure of the aether.[A 23]

The only similarity of this relativistic aether concept with the classical aether models lies in the presence of physical properties in space, which can be identified through geodesics. As historians such as John Stachel argue, Einstein's views on the "new aether" are not in conflict with his abandonment of the aether in 1905. As Einstein himself pointed out, no "substance" and no state of motion can be attributed to that new aether. Einstein's use of the word "aether" found little support in the scientific community, and played no role in the continuing development of modern physics.[B 11][B 12][B 13]

Non-standard cosmology

From Wikipedia, the free encyclopedia

A non-standard cosmology is any physical cosmological model of the universe that was, or still is, proposed as an alternative to the then-current standard model of cosmology. The term non-standard is applied to any theory that does not conform to the scientific consensus. Because the term depends on the prevailing consensus, the meaning of the term changes over time. For example, hot dark matter would not have been considered non-standard in 1990, but would be in 2010. Conversely, a non-zero cosmological constant resulting in an accelerating universe would have been considered non-standard in 1990, but is part of the standard cosmology in 2010.

Several major cosmological disputes have occurred throughout the history of cosmology. One of the earliest was the Copernican Revolution, which established the heliocentric model of the Solar System. More recent was the Great Debate of 1920, in the aftermath of which the Milky Way's status as but one of the Universe's many galaxies was established. From the 1940s to the 1960s, the astrophysical community was equally divided between supporters of the Big Bang theory and supporters of a rival steady state universe; this was eventually decided in favour of the Big Bang theory by advances in observational cosmology in the late 1960s. The current standard model of cosmology is the Lambda-CDM model, wherein the Universe is governed by General Relativity, began with a Big Bang, and is today a nearly flat universe that consists of approximately 5% baryons, 27% cold dark matter, and 68% dark energy.[1]

Lambda-CDM has been an extremely successful model, but retains some weaknesses (such as the dwarf galaxy problem). Research on extensions or modifications to Lambda-CDM, as well as fundamentally different models, is ongoing. Topics investigated include quintessence, Modified Newtonian Dynamics (MOND) and its relativistic generalization TeVeS, and warm dark matter.

The Lambda-CDM model

Before observational evidence was gathered, theorists developed frameworks based on what they understood to be the most general features of physics and philosophical assumptions about the universe. When Albert Einstein developed his general theory of relativity in 1915, this was used as a mathematical starting point for most cosmological theories.[2] In order to arrive at a cosmological model, however, theoreticians needed to make assumptions about the nature of the largest scales of the universe. The assumptions that the current standard model of cosmology, Lambda-CDM, relies upon are:
  1. the universality of physical laws – that the laws of physics don't change from one place and time to another,
  2. the cosmological principle – that the universe is roughly homogeneous and isotropic in space though not necessarily in time, and
  3. the Copernican principle – that we are not observing the universe from a preferred locale.
These assumptions when combined with General Relativity result in a universe that is governed by the Friedmann–Robertson–Walker metric (FRW metric). The FRW metric allows for a universe that is either expanding or contracting (as well as stationary but unstable universes). When Hubble's Law was discovered, most astronomers interpreted the law as a sign the universe is expanding. This implies the universe was smaller in the past, and therefore led to the following conclusions:
  1. the universe emerged from a hot, dense state at a finite time in the past,
  2. because the universe heats up as it contracts and cools as it expands, in the first moments that time existed as we know it, the temperatures were high enough for Big Bang nucleosynthesis to occur, and
  3. a cosmic microwave background pervading the entire universe should exist, which is a record of a phase transition that occurred when the atoms of the universe first formed.
These features were derived by numerous individuals over a period of years; indeed it was not until the middle of the twentieth century that accurate predictions of the last feature and observations confirming its existence were made. Non-standard theories developed either by starting from different assumptions or by contradicting the features predicted by Lambda-CDM.[3]
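
Observationally, the expansion enters through Hubble's law, v = H₀d. A minimal sketch (the value of H₀ here is an illustrative round number; measured values vary by a few km/s/Mpc):

    H0 = 70.0   # Hubble constant, km/s per megaparsec (illustrative value)

    def recession_velocity(d_mpc):
        """Hubble's law: apparent recession velocity, in km/s, at d_mpc megaparsecs."""
        return H0 * d_mpc

    for d in (10, 100, 1000):
        print(d, recession_velocity(d))   # 700.0, 7000.0, 70000.0 km/s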

History

Modern physical cosmology as it is currently studied first emerged as a scientific discipline in the period after the Shapley–Curtis debate and Edwin Hubble's development of a cosmic distance ladder, when astronomers and physicists had to come to terms with a universe that was of a much larger scale than the previously assumed galactic size. Theorists who successfully developed cosmologies applicable to the larger-scale universe are remembered today as the founders of modern cosmology. Among these scientists are Arthur Milne, Willem de Sitter, Alexander Friedmann, Georges Lemaître, and Albert Einstein himself.

After Hubble's law was confirmed by observation, the two most popular cosmological theories became the Steady State theory of Hoyle, Gold and Bondi, and the big bang theory of Ralph Alpher, George Gamow, and Robert Dicke, along with a smattering of alternatives that each had a small number of supporters. After the discovery of the cosmic microwave background radiation (CMB) by Arno Penzias and Robert Wilson in 1965, most cosmologists concluded that observations were best explained by the big bang model. Steady State theorists and other non-standard cosmologies were then tasked with providing an explanation for the phenomenon if they were to remain plausible. This led to original approaches including integrated starlight and cosmic iron whiskers, which were meant to provide a source for a pervasive, all-sky microwave background that was not due to an early universe phase transition.


Artist's depiction of the WMAP spacecraft at the L2 point. Data gathered by this spacecraft has been successfully used to parametrize the features of standard cosmology, but complete analysis of the data in the context of any non-standard cosmology has not yet been achieved.

Scepticism about the ability of non-standard cosmologies to explain the CMB caused interest in the subject to wane; however, there have since been two periods in which interest in non-standard cosmology increased due to observational data which posed difficulties for the big bang. The first occurred in the late 1970s, when there were a number of unsolved problems, such as the horizon problem, the flatness problem, and the lack of magnetic monopoles, which challenged the big bang model. These issues were eventually resolved by cosmic inflation in the 1980s. This idea subsequently became part of the understanding of the big bang, although alternatives have been proposed from time to time. The second occurred in the mid-1990s, when observations of the ages of globular clusters and the primordial helium abundance apparently disagreed with the big bang. However, by the late 1990s, most astronomers had concluded that these observations did not challenge the big bang, and additional data from COBE and WMAP provided detailed quantitative measures which were consistent with standard cosmology.

In the 1990s, the dawning of a "golden age of cosmology" was accompanied by a startling discovery: that the expansion of the universe was, in fact, accelerating. Before this, it had been assumed that matter, either in its visible or its invisible dark matter form, was the dominant energy density in the universe. This "classical" big bang cosmology was overthrown when it was discovered that nearly 70% of the energy in the universe was attributable to the cosmological constant, often referred to as "dark energy". This has led to the development of a so-called concordance ΛCDM model which combines detailed data obtained with new telescopes and techniques in observational astrophysics with an expanding, density-changing universe. Today, it is more common to find in the scientific literature proposals for "non-standard cosmologies" that actually accept the basic tenets of the big bang cosmology, while modifying parts of the concordance model. Such theories include alternative models of dark energy, such as quintessence, phantom energy and some ideas in brane cosmology; alternative models of dark matter, such as modified Newtonian dynamics; alternatives or extensions to inflation such as chaotic inflation and the ekpyrotic model; and proposals to supplement the universe with a first cause, such as the Hartle–Hawking boundary condition, the cyclic model, and the string landscape. There is no consensus about these ideas amongst cosmologists, but they are nonetheless active fields of academic inquiry.

Today, heterodox non-standard cosmologies are generally considered unworthy of consideration by cosmologists while many of the historically significant nonstandard cosmologies are considered to have been falsified. The essentials of the big bang theory have been confirmed by a wide range of complementary and detailed observations, and no non-standard cosmologies have reproduced the range of successes of the big bang model. Speculations about alternatives are not normally part of research or pedagogical discussions, except as object lessons or for their historical importance. An open letter started by some remaining advocates of non-standard cosmology has affirmed that: "today, virtually all financial and experimental resources in cosmology are devoted to big bang studies...."[4]

Alternative gravity

General relativity, upon which the FRW metric is based, is an extremely successful theory which has met every observational test so far. However, at a fundamental level it is incompatible with quantum mechanics, and by predicting singularities, it also predicts its own breakdown. Any alternative theory of gravity would immediately imply an alternative cosmological theory, since current modeling is dependent on general relativity as a framework assumption. There are many different motivations to modify general relativity, such as to eliminate the need for dark matter or dark energy, or to avoid such paradoxes as the firewall.

Machian universe

Ernst Mach proposed that inertia was due to the gravitational effects of the mass distribution of the universe, an idea that suggested a kind of extension to general relativity. This led naturally to speculation about the cosmological implications of such a proposal. Carl Brans and Robert Dicke were able to successfully incorporate Mach's principle into general relativity, which admitted cosmological solutions that would imply a variable mass. The homogeneously distributed mass of the universe would result in a roughly scalar field that permeated the universe and would serve as a source for Newton's gravitational constant, yielding a scalar–tensor theory of gravity.

MOND

Modified Newtonian Dynamics (MOND) is a relatively modern proposal to explain the galaxy rotation problem based on a modification of Newton's second law at low accelerations. This would produce a large-scale variation of Newton's universal theory of gravity. A modification of Newton's theory would also imply a modification of general relativistic cosmology inasmuch as Newtonian cosmology is the limit of Friedmann cosmology. While almost all astrophysicists today reject MOND in favor of dark matter, a small number of researchers continue to enhance it, recently incorporating Brans–Dicke theories into treatments that attempt to account for cosmological observations.
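
The core of MOND fits in a few lines. In the regime of accelerations far below Milgrom's scale a₀, the effective acceleration a is taken to satisfy a²/a₀ = GM/r², which makes the circular speed independent of radius, i.e. a flat rotation curve. A sketch with an illustrative galactic mass:

    G = 6.674e-11         # gravitational constant, SI units
    a0 = 1.2e-10          # Milgrom's acceleration scale, m/s^2
    M = 1e11 * 1.989e30   # enclosed mass, kg (illustrative: 1e11 solar masses)

    # Deep-MOND regime: a^2/a0 = G*M/r^2 combined with a = v^2/r
    # gives v^4 = G*M*a0, independent of r, hence a flat curve.
    v = (G * M * a0) ** 0.25
    print(v / 1e3)        # ≈ 200 km/s, a realistic flat rotation speed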

TeVeS

Tensor–vector–scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit, which purports to explain the galaxy rotation problem without invoking dark matter. Originated by Jacob Bekenstein in 2004, it incorporates various dynamical and non-dynamical tensor fields, vector fields and scalar fields.
The breakthrough of TeVeS over MOND is that it can explain the phenomenon of gravitational lensing, a cosmic optical illusion in which matter bends light, which has been confirmed many times. A recent preliminary finding is that it can explain structure formation without CDM, but requires neutrinos with masses of ~2 eV (these are also required to fit some clusters of galaxies, including the Bullet Cluster).[5][6] However, other authors (see Slosar, Melchiorri and Silk)[7] claim that TeVeS cannot explain cosmic microwave background anisotropies and structure formation at the same time, i.e. that those models are ruled out at high significance.

f(R) gravity

f(R) gravity is a family of theories that modify general relativity by defining a different function of the Ricci scalar. The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl[8] (although φ was used rather than f for the name of the arbitrary function). It has become an active field of research following work by Starobinsky on cosmic inflation.[9] A wide range of phenomena can be produced from this theory by adopting different functions; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems.
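
Schematically, the family replaces the Ricci scalar R in the Einstein–Hilbert action with a function of it:

S = \frac{1}{16\pi G} \int f(R) \sqrt{-g} \, \mathrm{d}^4 x + S_{\mathrm{matter}}

Setting f(R) = R recovers general relativity, while Starobinsky's inflationary model corresponds to f(R) = R + R²/(6M²) for a suitable mass scale M.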

Steady State theories

The Steady State theory extends the homogeneity assumption of the cosmological principle to a homogeneity in time as well as in space. This "perfect cosmological principle", as it would come to be called, asserted that the universe looks the same everywhere (on the large scale), the same as it always has and always will. This is in contrast to Lambda-CDM, in which the universe looked very different in the past and will look very different in the future. Steady State theory was proposed in 1948 by Fred Hoyle, Thomas Gold, Hermann Bondi and others. In order to maintain the perfect cosmological principle in an expanding universe, steady state cosmology had to posit a "matter-creation field" (the so-called C-field) that would insert matter into the universe in order to maintain a constant density.[3]

The debate between the Big Bang and the Steady State models continued for 15 years, with camps roughly evenly divided, until the discovery of the cosmic microwave background radiation. This radiation is a natural feature of the Big Bang model, which demands a "time of last scattering" at which photons decouple from baryonic matter. The Steady State model proposed that this radiation could be accounted for by so-called "integrated starlight", a background caused in part by Olbers' paradox in an infinite universe. In order to account for the uniformity of the background, steady state proponents posited a fog effect associated with microscopic iron particles that would scatter radio waves in such a manner as to produce an isotropic CMB. The proposed phenomenon was whimsically named "cosmic iron whiskers" and served as the thermalization mechanism. The Steady State theory did not have the horizon problem of the Big Bang because it assumed an infinite amount of time was available for thermalizing the background.[3]

As more cosmological data began to be collected, cosmologists began to realize that the Big Bang correctly predicted the abundance of light elements observed in the cosmos. What was a coincidental ratio of hydrogen to deuterium and helium in the steady state model was a feature of the Big Bang model. Additionally, detailed measurements of the CMB since the 1990s with the COBE, WMAP and Planck observations indicated that the spectrum of the background was closer to a blackbody than any other source in nature. The best the integrated starlight models could predict was a thermalization to the level of 10%, while the COBE satellite measured the deviation at one part in 10⁵. After this dramatic discovery, the majority of cosmologists became convinced that the steady state theory could not explain the observed CMB properties.

Although the original steady state model is now considered to be contrary to observations (particularly the CMB) even by its one-time supporters, modifications of the steady state model have been proposed, including a model that envisions the universe as originating through many little bangs rather than one big bang (the so-called "quasi-steady state cosmology"). It supposes that the universe goes through periodic expansion and contraction phases, with a soft "rebound" in place of the Big Bang. Thus the Hubble Law is explained by the fact that the universe is currently in an expansion phase. Work continues on this model (most notably by Jayant V. Narlikar), although it has yet to gain widespread mainstream acceptance.[10]

Anisotropic universe

Isotropy – the idea that the universe looks the same in all directions – is one of the core assumptions that enter the FRW equations. In 2008, however, scientists working on Wilkinson Microwave Anisotropy Probe data claimed to have detected a 600–1000 km/s flow of clusters toward a 20-degree patch of sky between the constellations of Centaurus and Vela.[11] They suggested that the motion may be a remnant of the influence of no-longer-visible regions of the universe prior to inflation. The detection is controversial, and other scientists have found that the universe is isotropic to a great degree.[12]

Exotic dark matter and dark energy

In Lambda-CDM, dark matter is an extremely inert form of matter that interacts with neither ordinary matter (baryons) nor light, but still exerts gravitational effects. To produce the large-scale structure we see today, dark matter must be "cold" (the 'C' in Lambda-CDM), i.e. non-relativistic. Dark energy is an unknown form of energy that tends to accelerate the expansion of the universe. Neither dark matter nor dark energy has been conclusively identified, and their exact nature is the subject of intense study. For example, scientists have hypothesized that dark matter could decay into dark energy, or that both could be different facets of the same underlying fluid (see dark fluid). Other theories that aim to explain one or the other, such as warm dark matter and quintessence, also fall into this category.

Proposals based on observational skepticism

As observational cosmology developed, some astronomers offered alternative interpretations of various phenomena, and these occasionally became parts of non-standard cosmologies.

Tired light

Tired light theories challenge the common interpretation of Hubble's law as a sign that the universe is expanding. The idea was proposed by Fritz Zwicky in 1929. The basic proposal amounted to light losing energy ("getting tired") over the distance it traveled, rather than any metric expansion or physical recession of sources from observers. A traditional explanation of this effect was to attribute a dynamical friction to photons: the photons' gravitational interactions with stars and other material would progressively reduce their momentum, producing a redshift. Other proposals for explaining how photons could lose energy included the scattering of light by intervening material, in a process similar to observed interstellar reddening. However, all these processes would also tend to blur images of distant objects, and no such blurring has been detected.[13]
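As an illustrative sketch (not any specific proposed mechanism), suppose each photon loses a fixed fraction of its energy per unit distance, with some attenuation length L:

    E(d) = E₀ e^(−d/L),  so  1 + z = E₀/E(d) = e^(d/L) ≈ 1 + d/L  for d ≪ L.

This reproduces a linear redshift–distance relation with an effective Hubble constant H₀ = c/L while the sources remain at rest, which is exactly why such models could mimic Hubble's law at low redshift.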

Traditional tired light has been found incompatible with the observed time dilation associated with the cosmological redshift.[14] In most astronomy and cosmology discussions, the idea is remembered chiefly as a falsified alternative explanation for Hubble's law.

Dirac large numbers hypothesis

The Dirac large numbers hypothesis uses the ratio of the size of the visible universe to the radius of a quantum particle to predict the age of the universe. The coincidence of various ratios being close in order of magnitude may ultimately prove meaningless, or may indicate a deeper connection between concepts in a future theory of everything. Nevertheless, attempts to use such ideas have been criticized as numerology.
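The coincidence can be reproduced with textbook constants; the sketch below (conventional choices of "particle radius" and constants, not Dirac's exact formulation) compares the electric-to-gravitational force ratio for a proton–electron pair with the ratio of the age of the universe to an atomic light-crossing time:

    # A sketch of Dirac's "large numbers" coincidence, using standard
    # constants (illustrative choices, not Dirac's exact formulation).
    G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c   = 2.998e8     # speed of light, m/s
    e   = 1.602e-19   # elementary charge, C
    k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
    m_p = 1.673e-27   # proton mass, kg
    m_e = 9.109e-31   # electron mass, kg
    r_e = 2.818e-15   # classical electron radius, m
    T   = 4.35e17     # approximate age of the universe, s

    # Ratio of electric to gravitational attraction between proton and electron.
    force_ratio = k_e * e**2 / (G * m_p * m_e)    # ~ 2 x 10^39

    # Ratio of the age of the universe to the light-crossing time of an electron.
    time_ratio = T / (r_e / c)                    # ~ 5 x 10^40

    print(f"{force_ratio:.1e}  {time_ratio:.1e}")

Both pure numbers come out within an order of magnitude or so of 10⁴⁰; this is the "coincidence" the hypothesis elevates to a physical principle.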

Redshift periodicity and intrinsic redshifts


Halton Arp in London, Oct 2000

Some astrophysicists were unconvinced that the cosmological redshifts are caused by universal cosmological expansion.[15][16] Skepticism and alternative explanations began appearing in the scientific literature in the 1960s. In particular, Geoffrey Burbidge, William Tifft and Halton Arp were all observational astrophysicists who proposed that there were inconsistencies in the redshift observations of galaxies and quasars. The first two were famous for suggesting that there were periodicities in the redshift distributions of galaxies and quasars. Subsequent statistical analyses of redshift surveys, however, have not confirmed the existence of these periodicities.[17]

During the quasar controversies of the 1970s, these same astronomers were of the opinion that quasars exhibited high redshifts not because of their great distance but because of unexplained intrinsic redshift mechanisms that would cause the periodicities and cast doubt on the Big Bang.[16] Arguments over how distant quasars were took the form of debates surrounding quasar energy production mechanisms, their light curves, and whether quasars exhibited any proper motion. Astronomers who believed quasars were not at cosmological distances argued that the Eddington luminosity set limits on how distant the quasars could be, since the energy output required to explain the apparent brightness of cosmologically distant quasars was far too high to be explainable by nuclear fusion alone. This objection was made moot by improved models of gravity-powered accretion disks, which for sufficiently compact objects (such as black holes) can be more efficient at energy production than nuclear reactions. The controversy was laid to rest by the 1990s when evidence became available that observed quasars were actually the ultra-luminous cores of distant active galactic nuclei and that the major components of their redshift were in fact due to the Hubble flow.[18][19]
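For reference, the Eddington luminosity in that argument is the textbook balance between radiation pressure on electrons and gravity for ionized hydrogen,

    L_Edd = 4πGMm_p c/σ_T ≈ 1.26 × 10³⁸ (M/M_⊙) erg/s,

so the ~10⁴⁶ erg/s implied by cosmological quasar distances requires central masses of order 10⁸ M_⊙ or more. Accretion onto a black hole can radiate up to roughly 10% of the rest-mass energy of infalling matter, versus about 0.7% for hydrogen fusion, which is what makes such outputs feasible.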

Throughout his career, Halton Arp maintained that there were anomalies in his observations of quasars and galaxies, and that those anomalies served as a refutation of the Big Bang.[16] In particular, Arp pointed out examples of quasars that were close to the line of sight of (relatively) nearby active galactic nuclei (AGN). He claimed that clusters of quasars were aligned around AGN cores and that quasars, rather than being the cores of distant AGN, were actually much closer: star-like objects ejected from the centers of nearby AGN with high intrinsic redshifts. Arp also contended that they gradually lost their non-cosmological redshift component and eventually evolved into full-fledged galaxies.[20][3][16] This stands in stark contradiction to the accepted models of galaxy formation.

The biggest problem with Arp's analysis is that today there are hundreds of thousands of quasars with known redshifts discovered by various sky surveys. The vast majority of these quasars are not correlated in any way with nearby AGN. Indeed, with improved observing techniques, a number of host galaxies have been observed around quasars, which indicates that those quasars at least really are at cosmological distances and are not the kind of objects Arp proposed.[21] Arp's analysis, according to most scientists, suffers from being based on small-number statistics and on hunting for peculiar coincidences and odd associations.[22] Unbiased samples of sources, taken from numerous galaxy surveys of the sky, show none of the proposed 'irregularities', nor any statistically significant correlations.[23]

In addition, it is not clear what mechanism would be responsible for intrinsic redshifts or for their gradual dissipation over time. It is also unclear how nearby quasars would explain features in the spectra of quasars which the standard model easily explains. In the standard cosmology, clouds of neutral hydrogen between the quasar and the earth create Lyman-alpha absorption lines having different redshifts up to that of the quasar itself; this feature is called the Lyman-alpha forest. Moreover, in the most distant quasars one can observe the absorption of neutral hydrogen which had not yet been reionized, in a feature known as the Gunn–Peterson trough. Most cosmologists see this missing theoretical work as sufficient reason to explain the observations as either chance or error.[24]

Halton Arp proposed an explanation for his observations via a Machian "variable mass hypothesis".[25] The variable-mass theory invokes constant matter creation from active galactic nuclei, which puts it in the class of steady-state theories. Since Arp's death, this cosmology has been regarded as a dismissed theory.[26]

Plasma cosmology

In 1965, Hannes Alfvén proposed a "plasma cosmology" theory of the universe, based in part on scaling observations of space plasma physics and laboratory plasma experiments up to cosmological scales orders of magnitude greater.[27] Taking matter–antimatter symmetry as a starting point, Alfvén and Oskar Klein proposed the Alfvén–Klein cosmology: since most of the local universe is composed of matter rather than antimatter, there may be large bubbles of matter and bubbles of antimatter that globally balance to equality. The difficulties with this model were apparent almost immediately. Matter–antimatter annihilation results in the production of high-energy photons, which were not observed. While it was possible that the local "matter-dominated" cell was simply larger than the observable universe, this proposition did not lend itself to observational tests.

Like the steady state theory, plasma cosmology includes a Strong Cosmological Principle which assumes that the universe is isotropic in time as well as in space. Matter is explicitly assumed to have always existed, or at least that it formed at a time so far in the past as to be forever beyond humanity's empirical methods of investigation.

While plasma cosmology has never had the support of most astronomers or physicists, a small number of plasma researchers have continued to promote and develop the approach, publishing in the special issues of the IEEE Transactions on Plasma Science.[28] A few papers regarding plasma cosmology were published in other mainstream journals until the 1990s. Additionally, in 1991, Eric J. Lerner, an independent researcher in plasma physics and nuclear fusion, wrote a popular-level book supporting plasma cosmology called The Big Bang Never Happened. At that time there was renewed interest in the subject among the cosmological community, along with other non-standard cosmologies, owing to anomalous results reported in 1987 by Andrew Lange and Paul Richards of UC Berkeley and Toshio Matsumoto of Nagoya University, which indicated the cosmic microwave background might not have a blackbody spectrum.[29] However, the final announcement (in April 1992) of COBE satellite data showed the background to be an almost perfect blackbody, and the popularity of plasma cosmology has since fallen.

Nucleosynthesis objections

One of the major successes of the Big Bang theory has been to provide a prediction that corresponds to the observations of the abundance of light elements in the universe. Along with the explanations provided for Hubble's law and for the cosmic microwave background, this observation has proved very difficult for alternative theories to explain.

Theories which assert that the universe has an infinite age, including many of the theories described above, fail to account for the abundance of deuterium in the cosmos, because deuterium easily undergoes nuclear fusion in stars and there are no known astrophysical processes other than the Big Bang itself that can produce it in large quantities. Hence the fact that deuterium is not an extremely rare component of the universe suggests that the universe has a finite age.
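For scale (standard figures, not from the original article): pristine gas clouds show D/H ≈ 2.5 × 10⁻⁵, and Big Bang nucleosynthesis reproduces this value for a baryon density Ω_b h² ≈ 0.022, in agreement with the value inferred independently from the CMB. Stars only destroy deuterium, so any infinite-age model must explain where this residual abundance comes from.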

Theories which assert that the universe has a finite age, but that the Big Bang did not happen, have problems with the abundance of helium-4. The observed amount of ⁴He is far larger than the amount that should have been created via stars or any other known process. By contrast, the abundance of ⁴He in Big Bang models is very insensitive to assumptions about baryon density, changing only by a few percent as the baryon density changes by several orders of magnitude. The observed value of ⁴He is within the range calculated.
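Again for scale (standard figures): Big Bang nucleosynthesis predicts a primordial helium mass fraction Y_p ≈ 0.25, i.e. about a quarter of all baryonic mass, whereas stellar fusion over the history of the galaxies could have converted only a few percent of the hydrogen into helium.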

Wednesday, May 9, 2018

Mach's principle

From Wikipedia, the free encyclopedia

In theoretical physics, particularly in discussions of gravitation theories, Mach's principle (or Mach's conjecture[1]) is the name given by Einstein to an imprecise hypothesis often credited to the physicist and philosopher Ernst Mach. The idea is that the existence of absolute rotation (the distinction of local inertial frames vs. rotating reference frames) is determined by the large-scale distribution of matter, as exemplified by this anecdote:[2]
You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?
Mach's principle says that this is not a coincidence: there is a physical law that relates the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, Mach suggests that there is some physical law which would make you feel a centrifugal force. There are a number of rival formulations of the principle. It is often stated in vague ways, like "mass out there influences inertia here". A very general statement of Mach's principle is "local physical laws are determined by the large-scale structure of the universe".[3]

This concept was a guiding factor in Einstein's development of the general theory of relativity. Einstein realized that the overall distribution of matter would determine the metric tensor, which tells you which frame is rotationally stationary. Frame-dragging and conservation of gravitational angular momentum make this into a true statement in the general theory in certain solutions. But because the principle is so vague, many distinct statements have been made that would qualify as a Mach principle, and some of these are false. The Gödel rotating universe is a solution of the field equations designed to disobey Mach's principle in the worst possible way. In this example, the distant stars seem to revolve faster and faster as one moves further away. This example does not completely settle the question, because it has closed timelike curves.

History

The basic idea also appears before Mach's time, in the writings of George Berkeley.[4] The book Absolute or Relative Motion? (1896) by Benedict Friedländer and his brother Immanuel contained ideas similar to Mach's principle.[page needed]

Einstein's use of the principle

There is a fundamental issue in relativity theory. If all motion is relative, how can we measure the inertia of a body? We must measure the inertia with respect to something else. But what if we imagine a particle completely on its own in the universe? We might hope to still have some notion of its state of motion. Mach's principle is sometimes interpreted as the statement that such a particle's state of motion has no meaning in that case.

In Mach's words, the principle is embodied as follows:[5]
[The] investigator must feel the need of... knowledge of the immediate connections, say, of the masses of the universe. There will hover before him as an ideal insight into the principles of the whole matter, from which accelerated and inertial motions will result in the same way.
Albert Einstein seemed to view Mach's principle as something along the lines of:[6]
...inertia originates in a kind of interaction between bodies...
In this sense, at least some of Mach's principles are related to philosophical holism. Mach's suggestion can be taken as the injunction that gravitation theories should be relational theories. Einstein brought the principle into mainstream physics while working on general relativity; indeed, it was Einstein who first coined the phrase Mach's principle. There is much debate as to whether Mach really intended to suggest a new physical law, since he never stated one explicitly.

The writing in which Einstein found inspiration from Mach was The Science of Mechanics, in which the philosopher criticized Newton's idea of absolute space, in particular the argument Newton gave in support of the existence of a privileged reference system: what is commonly called "Newton's bucket argument".

In his Philosophiae Naturalis Principia Mathematica, Newton tried to demonstrate that one can always decide whether one is rotating with respect to absolute space by measuring the apparent forces that arise only when an absolute rotation is performed. If a bucket is filled with water and made to rotate, the water initially remains still, but then, gradually, the walls of the vessel communicate their motion to the water, making its surface curve and climb up the sides of the bucket because of the centrifugal forces produced by the rotation. Newton said that this thought experiment demonstrates that the centrifugal forces arise only when the water is in rotation with respect to absolute space (represented here by the earth's reference frame or, better, the distant stars); when, instead, the bucket was rotating with respect to the water, no centrifugal forces were produced, indicating that the water was still with respect to absolute space.

Mach, in his book, says that the bucket experiment only demonstrates that when the water is in rotation with respect to the bucket no centrifugal forces are produced, and that we cannot know how the water would behave if the bucket's walls were increased in depth and width until they became leagues big. In Mach's view, this concept of absolute motion should be replaced by a total relativism in which every motion, uniform or accelerated, has meaning only in reference to other bodies (i.e., one cannot simply say that the water is rotating, but must specify whether it is rotating with respect to the vessel or to the earth). In this view, the apparent forces that seem to permit discrimination between relative and "absolute" motions should only be considered as an effect of the particular asymmetry in our reference system between the bodies we consider in motion, which are small (like buckets), and the bodies we believe are still (the earth and distant stars), which are overwhelmingly bigger and heavier. This same thought had been expressed by the philosopher George Berkeley in his De Motu. It is not clear, in the passages from Mach just mentioned, whether the philosopher intended to formulate a new kind of physical action between heavy bodies: a physical mechanism that would determine the inertia of bodies in such a way that the heavy and distant bodies of our universe contribute the most to the inertial forces. More likely, Mach only suggested a mere "redescription of motion in space as experiences that do not invoke the term space".[7] What is certain is that Einstein interpreted Mach's passage in the former way, giving rise to a long-lasting debate.

Most physicists believe Mach's principle was never developed into a quantitative physical theory that would explain a mechanism by which the stars can have such an effect. It was never made clear by Mach himself exactly what his principle was.[8] Although Einstein was intrigued and inspired by Mach's principle, Einstein's formulation of the principle is not a fundamental assumption of general relativity.

Mach's principle in general relativity

Because intuitive notions of distance and time no longer apply, what exactly is meant by "Mach's principle" in general relativity is even less clear than in Newtonian physics, and at least 21 formulations of Mach's principle are possible, some being considered more strongly Machian than others.[9] A relatively weak formulation is the assertion that the motion of matter in one place should affect which frames are inertial in another.

Einstein—before completing his development of the general theory of relativity—found an effect which he interpreted as being evidence of Mach's principle. We assume a fixed background for conceptual simplicity, construct a large spherical shell of mass, and set it spinning in that background. The reference frame in the interior of this shell will precess with respect to the fixed background. This effect is known as the Lense–Thirring effect. Einstein was so satisfied with this manifestation of Mach's principle that he wrote a letter to Mach expressing this:
it... turns out that inertia originates in a kind of interaction between bodies, quite in the sense of your considerations on Newton's pail experiment... If one rotates [a heavy shell of matter] relative to the fixed stars about an axis going through its center, a Coriolis force arises in the interior of the shell; that is, the plane of a Foucault pendulum is dragged around (with a practically unmeasurably small angular velocity).[6]
The Lense–Thirring effect certainly satisfies the very basic and broad notion that "matter there influences inertia here".[10] The plane of the pendulum would not be dragged around if the shell of matter were not present, or if it were not spinning. As for the statement that "inertia originates in a kind of interaction between bodies", this too could be interpreted as true in the context of the effect.
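To leading order in the weak field (a sketch, obtained by matching the exterior Lense–Thirring field, with angular momentum J = (2/3)MR²ω for a thin shell, to a uniformly dragged interior), the dragging rate inside a slowly rotating shell of mass M, radius R and angular velocity ω is

    Ω_drag ≈ 2GJ/(c²R³) = 4GMω/(3c²R),

which for any laboratory-scale shell is minuscule; this is why Einstein describes the effect as "practically unmeasurably small".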

More fundamental to the problem, however, is the very existence of a fixed background, which Einstein describes as "the fixed stars". Modern relativists see the imprints of Mach's principle in the initial-value problem. Essentially, we humans seem to wish to separate spacetime into slices of constant time. When we do this, Einstein's equations can be decomposed into one set of equations, which must be satisfied on each slice, and another set, which describe how to move between slices. The equations for an individual slice are elliptic partial differential equations. In general, this means that only part of the geometry of the slice can be given by the scientist, while the geometry everywhere else will then be dictated by Einstein's equations on the slice.[clarification needed]
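In standard 3+1 (ADM) language (a sketch, in geometric units G = c = 1, with spatial metric γ_ij and extrinsic curvature K_ij), the equations that must be satisfied on each slice are the Hamiltonian and momentum constraints,

    R⁽³⁾ + K² − K_ij K^ij = 16πρ  and  D_j (K^ij − γ^ij K) = 8π S^i,

where ρ and S^i are the energy and momentum densities measured by observers moving normal to the slice. In the usual conformal formulation these reduce to elliptic equations, which is the precise sense in which freely specified data in one region dictates the geometry everywhere else on the slice.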

In the context of an asymptotically flat spacetime, the boundary conditions are given at infinity. Heuristically, the boundary conditions for an asymptotically flat universe define a frame with respect to which inertia has meaning. By performing a Lorentz transformation on the distant universe, of course, this inertia can also be transformed.

A stronger form of Mach's principle applies in Wheeler–Mach–Einstein spacetimes, which require spacetime to be spatially compact and globally hyperbolic. In such universes Mach's principle can be stated as: the distribution of matter and field energy-momentum (and possibly other information) at a particular moment in the universe determines the inertial frame at each point in the universe (where "a particular moment in the universe" refers to a chosen Cauchy surface).[11]

There have been other attempts to formulate a theory that is more fully Machian, such as the Brans–Dicke theory and the Hoyle–Narlikar theory of gravity, but most physicists argue that none have been fully successful. At an exit poll of experts, held in Tübingen in 1993, when asked the question "Is general relativity perfectly Machian?", 3 respondents replied "yes", and 22 replied "no". To the question "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?" the result was 14 "yes" and 7 "no".[12]

However, Einstein was convinced that a valid theory of gravity would necessarily have to include the relativity of inertia.

Variations in the statement of the principle

The broad notion that "mass there influences inertia here" has been expressed in several forms. Hermann Bondi and Joseph Samuel have listed eleven distinct statements that can be called Mach principles, labelled Mach0 through Mach10.[13] Though their list is not necessarily exhaustive, it does give a flavor of the variety possible.
  • Mach0: The universe, as represented by the average motion of distant galaxies, does not appear to rotate relative to local inertial frames.
  • Mach1: Newton’s gravitational constant G is a dynamical field.
  • Mach2: An isolated body in otherwise empty space has no inertia.
  • Mach3: Local inertial frames are affected by the cosmic motion and distribution of matter.
  • Mach4: The universe is spatially closed.
  • Mach5: The total energy, angular and linear momentum of the universe are zero.
  • Mach6: Inertial mass is affected by the global distribution of matter.
  • Mach7: If you take away all matter, there is no more space.
  • Mach8: Ω ≡ 4πρGT² is a definite number, of order unity, where ρ is the mean density of matter in the universe and T is the Hubble time (a quick check follows this list).
  • Mach9: The theory contains no absolute elements.
  • Mach10: Overall rigid rotations and translations of a system are unobservable.
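As a quick check of Mach8 (a sketch, taking ρ to be the critical density ρ_c = 3H²/8πG and T = 1/H for the Hubble time):

    Ω = 4πρ_c G T² = 4πG · (3H²/8πG) · (1/H²) = 3/2,

which is indeed of order unity; in a low-density or strongly Λ-dominated universe the exact value shifts, but not the order of magnitude.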

Inequality (mathematics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Inequality...