Saturday, February 14, 2015

Speed of light


From Wikipedia, the free encyclopedia

Sunlight takes about 8 minutes 17 seconds to travel the average distance from the surface of the Sun to the Earth (about 150 million kilometres).

Exact values
  metres per second: 299792458
  Planck lengths per Planck time (i.e., Planck units): 1

Approximate values (to three significant digits)
  kilometres per hour: 1080 million (1.08×10⁹)
  miles per second: 186000
  miles per hour: 671 million (6.71×10⁸)
  astronomical units per day: 173 [Note 1]

Approximate light signal travel times
  one foot: 1.0 ns
  one metre: 3.3 ns
  from geostationary orbit to Earth: 119 ms
  the length of Earth's equator: 134 ms
  from Moon to Earth: 1.3 s
  from Sun to Earth (1 AU): 8.3 min
  one light-year: 1.0 year
  one parsec: 3.26 years
  from nearest star to Sun (1.3 pc): 4.2 years
  from the nearest galaxy (the Canis Major Dwarf Galaxy) to Earth: 25000 years
  across the Milky Way: 100000 years
  from the Andromeda Galaxy (the nearest spiral galaxy) to Earth: 2.5 million years

The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its value is exactly 299792458 metres per second, as the length of the metre is defined from this constant and the international standard for time.[1] According to special relativity, c is the maximum speed at which all matter and information in the universe can travel. It is the speed at which all massless particles and changes of the associated fields (including electromagnetic radiation such as light and gravitational waves) travel in vacuum. Such particles and waves travel at c regardless of the motion of the source or the inertial frame of reference of the observer. In the theory of relativity, c interrelates space and time, and also appears in the famous equation of mass–energy equivalence E = mc².[2]

The speed at which light propagates through transparent materials, such as glass or air, is less than c. The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material (n = c / v). For example, for visible light the refractive index of glass is typically around 1.5, meaning that light in glass travels at c / 1.5 ≈ 200000 km/s; the refractive index of air for visible light is about 1.0003, so the speed of light in air is about 299700 km/s or 90 km/s slower than c.
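The relation n = c/v above can be checked with a short Python sketch; the refractive indices used are the approximate visible-light values quoted in the text:

```python
# Phase velocity of light in a medium from its refractive index, n = c / v.
# Illustrative sketch using the approximate indices quoted above.

C = 299_792_458  # speed of light in vacuum, m/s (exact by definition)

def speed_in_medium(n: float) -> float:
    """Return the speed of light (m/s) in a medium with refractive index n."""
    return C / n

for material, n in [("vacuum", 1.0), ("air", 1.0003), ("glass", 1.5)]:
    v = speed_in_medium(n)
    print(f"{material:>6}: n = {n:<7} v ≈ {v / 1000:,.0f} km/s")
```

Running this reproduces the figures in the paragraph: roughly 200000 km/s in glass, and about 90 km/s slower than c in air.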

For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and very sensitive measurements, their finite speed has noticeable effects. In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft, or vice versa. The light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers, since information must be sent within the computer from chip to chip. The speed of light can be used with time of flight measurements to measure large distances to high precision.

Ole Rømer first demonstrated in 1676 that light travels at a finite speed (as opposed to instantaneously) by studying the apparent motion of Jupiter's moon Io. In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave, and therefore travelled at the speed c appearing in his theory of electromagnetism.[3] In 1905, Albert Einstein postulated that the speed of light with respect to any inertial frame is independent of the motion of the light source,[4] and explored the consequences of that postulate by deriving the special theory of relativity and showing that the parameter c had relevance outside of the context of light and electromagnetism. After centuries of increasingly precise measurements, in 1975 the speed of light was known to be 299792458 m/s with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units (SI) as the distance travelled by light in vacuum in 1/299792458 of a second. As a result, the numerical value of c in metres per second is now fixed exactly by the definition of the metre.[5]

Numerical value, notation, and units

The speed of light in vacuum is usually denoted by a lowercase c, for "constant" or the Latin celeritas (meaning "swiftness"). Originally, the symbol V was used for the speed of light, introduced by James Clerk Maxwell in 1865. In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant later shown to equal √2 times the speed of light in vacuum. In 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol.[6][7]

Sometimes c is used for the speed of waves in any material medium, and c₀ for the speed of light in vacuum.[8] This subscripted notation, which is endorsed in official SI literature,[5] has the same form as other related constants: namely, μ₀ for the vacuum permeability or magnetic constant, ε₀ for the vacuum permittivity or electric constant, and Z₀ for the impedance of free space. This article uses c exclusively for the speed of light in vacuum.

Since 1983, the metre has been defined in the International System of Units (SI) as the distance light travels in vacuum in 1/299792458 of a second. This definition fixes the speed of light in vacuum at exactly 299792458 m/s.[9][10][11] As a dimensional physical constant, the numerical value of c is different for different unit systems.[Note 2] In branches of physics in which c appears often, such as in relativity, it is common to use systems of natural units of measurement or the geometrized unit system where c = 1.[13][14] Using these units, c does not appear explicitly because multiplication or division by 1 does not affect the result.

Fundamental role in physics

The speed at which light waves propagate in vacuum is independent both of the motion of the wave source and of the inertial frame of reference of the observer.[Note 3] This invariance of the speed of light was postulated by Einstein in 1905,[4] after being motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous aether;[15] it has since been consistently confirmed by many experiments. It is only possible to verify experimentally that the two-way speed of light (for example, from a source to a mirror and back again) is frame-independent, because it is impossible to measure the one-way speed of light (for example, from a source to a distant detector) without some convention as to how clocks at the source and at the detector should be synchronized. However, by adopting Einstein synchronization for the clocks, the one-way speed of light becomes equal to the two-way speed of light by definition.[14][16] The special theory of relativity explores the consequences of this invariance of c with the assumption that the laws of physics are the same in all inertial frames of reference.[17][18] One consequence is that c is the speed at which all massless particles and waves, including light, must travel in vacuum.
The Lorentz factor γ as a function of velocity: it starts at 1 when v equals zero, stays nearly constant for small v, then curves sharply upwards, diverging to infinity as v approaches c.

Special relativity has many counterintuitive and experimentally verified implications.[19] These include the equivalence of mass and energy (E = mc²), length contraction (moving objects shorten),[Note 4] and time dilation (moving clocks run more slowly). The factor γ by which lengths contract and times dilate is known as the Lorentz factor and is given by γ = 1/√(1 − v²/c²), where v is the speed of the object. The difference of γ from 1 is negligible for speeds much slower than c, such as most everyday speeds—in which case special relativity is closely approximated by Galilean relativity—but it increases at relativistic speeds and diverges to infinity as v approaches c.
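The behaviour of the Lorentz factor described above can be made concrete with a few lines of Python:

```python
import math

C = 299_792_458  # speed of light in vacuum, m/s

def lorentz_factor(v: float) -> float:
    """Lorentz factor γ = 1 / sqrt(1 - v²/c²) for a speed v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# γ is indistinguishable from 1 at everyday speeds and diverges near c:
for v in (30.0, 0.5 * C, 0.99 * C):
    print(f"v = {v:>14.1f} m/s  ->  γ = {lorentz_factor(v):.6f}")
```

At highway speed (30 m/s) γ differs from 1 by about one part in 10¹⁴; at 0.5c it is about 1.155, and at 0.99c it has already grown past 7.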

The results of special relativity can be summarized by treating space and time as a unified structure known as spacetime (with c relating the units of space and time), and requiring that physical theories satisfy a special symmetry called Lorentz invariance, whose mathematical formulation contains the parameter c.[22] Lorentz invariance is an almost universal assumption for modern physical theories, such as quantum electrodynamics, quantum chromodynamics, the Standard Model of particle physics, and general relativity. As such, the parameter c is ubiquitous in modern physics, appearing in many contexts that are unrelated to light. For example, general relativity predicts that c is also the speed of gravity and of gravitational waves.[23][24] In non-inertial frames of reference (gravitationally curved space or accelerated reference frames), the local speed of light is constant and equal to c, but the speed of light along a trajectory of finite length can differ from c, depending on how distances and times are defined.[25]

It is generally assumed that fundamental constants such as c have the same value throughout spacetime, meaning that they do not depend on location and do not vary with time. However, it has been suggested in various theories that the speed of light may have changed over time.[26][27] No conclusive evidence for such changes has been found, but they remain the subject of ongoing research.[28][29]

It also is generally assumed that the speed of light is isotropic, meaning that it has the same value regardless of the direction in which it is measured. Observations of the emissions from nuclear energy levels as a function of the orientation of the emitting nuclei in a magnetic field (see Hughes–Drever experiment), and of rotating optical resonators (see Resonator experiments) have put stringent limits on the possible two-way anisotropy.[30][31]

Upper limit on speeds

According to special relativity, the energy of an object with rest mass m and speed v is given by γmc², where γ is the Lorentz factor defined above. When v is zero, γ is equal to one, giving rise to the famous E = mc² formula for mass–energy equivalence. The γ factor approaches infinity as v approaches c, and it would take an infinite amount of energy to accelerate an object with mass to the speed of light. The speed of light is the upper limit for the speeds of objects with positive rest mass. This is experimentally established in many tests of relativistic energy and momentum.[32]
In a spacetime diagram with three frames sharing the same origin, event A precedes B in the red frame, is simultaneous with B in the green frame, and follows B in the blue frame.

More generally, it is normally impossible for information or energy to travel faster than c. One argument for this follows from the counter-intuitive implication of special relativity known as the relativity of simultaneity. If the spatial distance between two events A and B is greater than the time interval between them multiplied by c then there are frames of reference in which A precedes B, others in which B precedes A, and others in which they are simultaneous. As a result, if something were travelling faster than c relative to an inertial frame of reference, it would be travelling backwards in time relative to another frame, and causality would be violated.[Note 5][34] In such a frame of reference, an "effect" could be observed before its "cause". Such a violation of causality has never been recorded,[16] and would lead to paradoxes such as the tachyonic antitelephone.[35]

Faster-than-light observations and experiments

There are situations in which it may seem that matter, energy, or information travels at speeds greater than c, but it does not. For example, as discussed in the propagation of light in a medium section below, many wave velocities can exceed c: the phase velocity of X-rays through most glasses routinely exceeds c,[36] but phase velocity does not determine the velocity at which waves convey information.[37]

If a laser beam is swept quickly across a distant object, the spot of light can move faster than c, although the initial movement of the spot is delayed because of the time it takes light to get to the distant object at the speed c. However, the only physical entities that are moving are the laser and its emitted light, which travels at the speed c from the laser to the various positions of the spot. Similarly, a shadow projected onto a distant object can be made to move faster than c, after a delay in time.[38] In neither case does any matter, energy, or information travel faster than light.[39]

The rate of change in the distance between two objects in a frame of reference with respect to which both are moving (their closing speed) may have a value in excess of c. However, this does not represent the speed of any single object as measured in a single inertial frame.[39]

Certain quantum effects appear to be transmitted instantaneously and therefore faster than c, as in the EPR paradox. An example involves the quantum states of two particles that can be entangled. Until either of the particles is observed, they exist in a superposition of two quantum states. If the particles are separated and one particle's quantum state is observed, the other particle's quantum state is determined instantaneously (i.e., faster than light could travel from one particle to the other). However, it is impossible to control which quantum state the first particle will take on when it is observed, so information cannot be transmitted in this manner.[39][40]

Another quantum effect that predicts the occurrence of faster-than-light speeds is called the Hartman effect: under certain conditions the time needed for a virtual particle to tunnel through a barrier is constant, regardless of the thickness of the barrier.[41][42] This could result in a virtual particle crossing a large gap faster than light. However, no information can be sent using this effect.[43]

So-called superluminal motion is seen in certain astronomical objects,[44] such as the relativistic jets of radio galaxies and quasars. However, these jets are not moving at speeds in excess of the speed of light: the apparent superluminal motion is a projection effect caused by objects moving near the speed of light and approaching Earth at a small angle to the line of sight: since the light which was emitted when the jet was farther away took longer to reach the Earth, the time between two successive observations corresponds to a longer time between the instants at which the light rays were emitted.[45]

In models of the expanding universe, the farther galaxies are from each other, the faster they drift apart. This receding is not due to motion through space, but rather to the expansion of space itself.[39] For example, galaxies far away from Earth appear to be moving away from the Earth with a speed proportional to their distances. Beyond a boundary called the Hubble sphere, the rate at which their distance from Earth increases becomes greater than the speed of light.[46]

Propagation of light

In classical physics, light is described as a type of electromagnetic wave. The classical behaviour of the electromagnetic field is described by Maxwell's equations, which predict that the speed c with which electromagnetic waves (such as light) propagate through the vacuum is related to the electric constant ε₀ and the magnetic constant μ₀ by the equation c = 1/√(ε₀μ₀).[47] In modern quantum physics, the electromagnetic field is described by the theory of quantum electrodynamics (QED). In this theory, light is described by the fundamental excitations (or quanta) of the electromagnetic field, called photons. In QED, photons are massless particles and thus, according to special relativity, they travel at the speed of light in vacuum.
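Maxwell's relation c = 1/√(ε₀μ₀) can be checked numerically; the sketch below uses CODATA values for the two constants (which, in the current SI, are measured rather than exact):

```python
import math

# Maxwell's relation: c = 1 / sqrt(ε0 · μ0).
# ε0 and μ0 are CODATA 2018 measured values (approximate in the current SI).
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, N/A²

c = 1.0 / math.sqrt(eps0 * mu0)
print(f"c ≈ {c:,.0f} m/s")  # close to the defined 299,792,458 m/s
```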

Extensions of QED in which the photon has a mass have been considered. In such a theory, its speed would depend on its frequency, and the invariant speed c of special relativity would then be the upper limit of the speed of light in vacuum.[25] No variation of the speed of light with frequency has been observed in rigorous testing,[48][49][50] putting stringent limits on the mass of the photon. The limit obtained depends on the model used: if the massive photon is described by Proca theory,[51] the experimental upper bound for its mass is about 10⁻⁵⁷ grams;[52] if photon mass is generated by a Higgs mechanism, the experimental upper limit is less sharp, m ≤ 10⁻¹⁴ eV/c² [51] (roughly 2×10⁻⁴⁷ g).

Another reason for the speed of light to vary with its frequency would be the failure of special relativity to apply to arbitrarily small scales, as predicted by some proposed theories of quantum gravity. In 2009, the observation of the spectrum of gamma-ray burst GRB 090510 did not find any difference in the speeds of photons of different energies, confirming that Lorentz invariance is verified at least down to the scale of the Planck length (ℓP = √(ħG/c³) ≈ 1.6163×10⁻³⁵ m) divided by 1.2.[53]

In a medium

In a medium, light usually does not propagate at a speed equal to c; further, different types of light wave will travel at different speeds. The speed at which the individual crests and troughs of a plane wave (a wave filling the whole space, with only one frequency) propagate is called the phase velocity vp. An actual physical signal with a finite extent (a pulse of light) travels at a different speed. The largest part of the pulse travels at the group velocity vg, and its earliest part travels at the front velocity vf.
In a modulated pulse, the phase velocity is the speed of the individual ripples of the carrier wave, the group velocity is the speed of the envelope, and the front velocity is the speed of the foremost part of the pulse.

The phase velocity is important in determining how a light wave travels through a material or from one material to another. It is often represented in terms of a refractive index. The refractive index of a material is defined as the ratio of c to the phase velocity vp in the material: larger indices of refraction indicate lower speeds. The refractive index of a material may depend on the light's frequency, intensity, polarization, or direction of propagation; in many cases, though, it can be treated as a material-dependent constant. The refractive index of air is approximately 1.0003.[54] Denser media, such as water,[55] glass,[56] and diamond,[57] have refractive indexes of around 1.3, 1.5 and 2.4, respectively, for visible light.

In exotic materials like Bose–Einstein condensates near absolute zero, the effective speed of light may be only a few metres per second. However, this represents absorption and re-radiation delay between atoms, as do all slower-than-c speeds in material substances. As an extreme example of light "slowing" in matter, two independent teams of physicists claimed to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium, one team at Harvard University and the Rowland Institute for Science in Cambridge, Massachusetts, and the other at the Harvard–Smithsonian Center for Astrophysics, also in Cambridge. However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrarily later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light. This type of behaviour is generally microscopically true of all transparent media which "slow" the speed of light.[58]

In transparent materials, the refractive index generally is greater than 1, meaning that the phase velocity is less than c. In other materials, it is possible for the refractive index to become smaller than 1 for some frequencies; in some exotic materials it is even possible for the index of refraction to become negative.[59] The requirement that causality is not violated implies that the real and imaginary parts of the dielectric constant of any material, corresponding respectively to the index of refraction and to the attenuation coefficient, are linked by the Kramers–Kronig relations.[60] In practical terms, this means that in a material with refractive index less than 1, the absorption of the wave is so quick that no signal can be sent faster than c.

A pulse with different group and phase velocities (which occurs if the phase velocity is not the same for all the frequencies of the pulse) smears out over time, a process known as dispersion. Certain materials have an exceptionally low (or even zero) group velocity for light waves, a phenomenon called slow light, which has been confirmed in various experiments.[61][62][63][64] The opposite, group velocities exceeding c, has also been shown in experiment.[65] It should even be possible for the group velocity to become infinite or negative, with pulses travelling instantaneously or backwards in time.[66]

None of these options, however, allow information to be transmitted faster than c. It is impossible to transmit information with a light pulse any faster than the speed of the earliest part of the pulse (the front velocity). It can be shown that this is (under certain assumptions) always equal to c.[66]

It is possible for a particle to travel through a medium faster than the phase velocity of light in that medium (but still slower than c). When a charged particle does that in a dielectric material, the electromagnetic equivalent of a shock wave, known as Cherenkov radiation, is emitted.[67]

Practical effects of finiteness

The speed of light is of relevance to communications: the one-way and round-trip delay times are greater than zero. This applies from small to astronomical scales. On the other hand, some techniques depend on the finite speed of light, for example in distance measurements.

Small scales

In supercomputers, the speed of light imposes a limit on how quickly data can be sent between processors. If a processor operates at 1 gigahertz, a signal can only travel a maximum of about 30 centimetres (1 ft) in a single cycle. Processors must therefore be placed close to each other to minimize communication latencies; this can cause difficulty with cooling. If clock frequencies continue to increase, the speed of light will eventually become a limiting factor for the internal design of single chips.[68]
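The limit described above follows directly from dividing c by the clock frequency; a quick Python check (the 4 GHz row is an added illustration, not from the text):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def distance_per_cycle(clock_hz: float) -> float:
    """Maximum distance (m) any signal can cover in one clock cycle."""
    return C / clock_hz

print(f"1 GHz: {distance_per_cycle(1e9) * 100:.1f} cm per cycle")  # ~30 cm
print(f"4 GHz: {distance_per_cycle(4e9) * 100:.1f} cm per cycle")  # ~7.5 cm
```

Real signals in copper or silicon travel slower still, so the practical limit is tighter than this vacuum figure.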

Large distances on Earth

Given that the equatorial circumference of the Earth is about 40075 km and that c is about 300000 km/s, the theoretical shortest time for a piece of information to travel half the globe along the surface is about 67 milliseconds. When light is travelling around the globe in an optical fibre, the actual transit time is longer, in part because the speed of light is slower by about 35% in an optical fibre, depending on its refractive index n.[69] Furthermore, straight lines rarely occur in global communications situations, and delays are created when the signal passes through an electronic switch or signal regenerator.[70]
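The half-globe timing above is easy to reproduce; this sketch assumes a fibre refractive index of about 1.5, consistent with the "about 35% slower" figure in the text:

```python
C = 299_792_458          # speed of light in vacuum, m/s
EQUATOR_M = 40_075_000   # equatorial circumference of the Earth, m

# Theoretical minimum for half the globe, straight along the surface:
t_vacuum = (EQUATOR_M / 2) / C
print(f"vacuum: {t_vacuum * 1000:.1f} ms")   # ~67 ms

# In optical fibre, light travels ~35% slower (assumed index n ≈ 1.5):
t_fibre = (EQUATOR_M / 2) / (C / 1.5)
print(f"fibre:  {t_fibre * 1000:.1f} ms")    # ~100 ms
```

Actual round-the-world latencies are larger still because cables do not follow great circles and switching equipment adds delay.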

Spaceflights and astronomy

A light pulse travelling between the Earth and the Moon takes 1.255 seconds at their mean orbital (surface-to-surface) distance; the relative sizes and separation of the Earth–Moon system are to scale in the usual depiction.

Similarly, communications between the Earth and spacecraft are not instantaneous. There is a brief delay from the source to the receiver, which becomes more noticeable as distances increase. This delay was significant for communications between ground control and Apollo 8 when it became the first manned spacecraft to orbit the Moon: for every question, the ground control station had to wait at least three seconds for the answer to arrive.[71] The communications delay between Earth and Mars can vary between five and twenty minutes depending upon the relative positions of the two planets. As a consequence of this, if a robot on the surface of Mars were to encounter a problem, its human controllers would not be aware of it until at least five minutes later, and possibly up to twenty minutes later; it would then take a further five to twenty minutes for instructions to travel from Earth to Mars.

NASA must wait several hours for information from a probe orbiting Jupiter, and if it needs to correct a navigation error, the fix will not arrive at the spacecraft for an equal amount of time, creating a risk of the correction not arriving in time.

Receiving light and other signals from distant astronomical sources can even take much longer. For example, it has taken 13 billion (13×109) years for light to travel to Earth from the faraway galaxies viewed in the Hubble Ultra Deep Field images.[72][73] Those photographs, taken today, capture images of the galaxies as they appeared 13 billion years ago, when the universe was less than a billion years old.[72] The fact that more distant objects appear to be younger, due to the finite speed of light, allows astronomers to infer the evolution of stars, of galaxies, and of the universe itself.

Astronomical distances are sometimes expressed in light-years, especially in popular science publications and media.[74] A light-year is the distance light travels in one year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecs. In round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion miles. Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.[75]

Distance measurement

Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. A Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's position. Because light travels about 300000 kilometres (186000 mi) in one second, these measurements of small fractions of a second must be very precise. The Lunar Laser Ranging Experiment, radar astronomy and the Deep Space Network determine distances to the Moon,[76] planets[77] and spacecraft,[78] respectively, by measuring round-trip transit times.
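The radar principle described above (distance = half the round-trip time times c) is a one-line computation; the 2.564 s echo time used here is an illustrative value roughly matching the Moon's mean distance:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def distance_from_round_trip(t_round_trip_s: float) -> float:
    """Target distance (m): half the round-trip transit time multiplied by c."""
    return C * t_round_trip_s / 2

# A lunar laser-ranging echo of ~2.564 s corresponds to ~384,000 km:
print(f"{distance_from_round_trip(2.564) / 1000:,.0f} km")
```

This also shows why GPS timing must be so precise: a one-microsecond error corresponds to about 300 m of position error.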

High-frequency trading

The speed of light has become important in high-frequency trading, where traders seek to gain minute advantages by delivering their trades to exchanges fractions of a second ahead of other traders. For example, traders have been switching to microwave communications between trading hubs, because microwaves travelling through air at close to the vacuum speed of light have an advantage over fibre-optic signals, which travel 30–40% slower at the speed of light through glass.[79]

Measurement

There are different ways to determine the value of c. One way is to measure the actual speed at which light waves propagate, which can be done in various astronomical and earth-based setups. However, it is also possible to determine c from other physical laws where it appears, for example, by determining the values of the electromagnetic constants ε₀ and μ₀ and using their relation to c.
Historically, the most accurate results have been obtained by separately determining the frequency and wavelength of a light beam, with their product equalling c.

In 1983 the metre was defined as "the length of the path travelled by light in vacuum during a time interval of 1/299792458 of a second",[80] fixing the value of the speed of light at 299792458 m/s by definition, as described below. Consequently, accurate measurements of the speed of light yield an accurate realization of the metre rather than an accurate value of c.

Astronomical measurements


Measurement of the speed of light using the eclipse of Io by Jupiter

Outer space is a convenient setting for measuring the speed of light because of its large scale and nearly perfect vacuum. Typically, one measures the time needed for light to traverse some reference distance in the solar system, such as the radius of the Earth's orbit. Historically, such measurements could be made fairly accurately, compared to how accurately the length of the reference distance is known in Earth-based units. It is customary to express the results in astronomical units (AU) per day.

Ole Christensen Rømer used an astronomical measurement to make the first quantitative estimate of the speed of light.[81][82] When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from it. The distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun. The observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distance. Rømer observed this effect for Jupiter's innermost moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.
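Rømer's deduction can be turned into a speed estimate in a few lines; the sketch below uses the modern value of the astronomical unit, which Rømer did not have:

```python
C = 299_792_458          # speed of light in vacuum, m/s (modern value)
AU_M = 149_597_870_700   # astronomical unit, m (exact since 2012)

# Rømer: light takes ~22 min to cross the diameter of Earth's orbit (2 AU).
romer_speed = 2 * AU_M / (22 * 60)
print(f"Rømer's implied speed: {romer_speed / 1000:,.0f} km/s")

# With the modern c, the same distance actually takes about 16 min 38 s:
modern_time = 2 * AU_M / C
print(f"Actual light time for 2 AU: {modern_time / 60:.1f} min")
```

Rømer's 22 minutes overestimates the true light time, so his implied speed (~227000 km/s) comes out about 25% low, but it was the first demonstration that the speed is finite at all.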
A star emits a light ray which hits the objective of a telescope. While the light travels down the telescope to its eyepiece, the telescope moves to the right. For the light to stay inside the telescope, the telescope must be tilted to the right, causing the distant source to appear at a different location to the right.
Aberration of light: light from a distant source appears to be from a different location for a moving telescope due to the finite speed of light.

Another method is to use the aberration of light, discovered and explained by James Bradley in the 18th century.[83] This effect results from the vector addition of the velocity of light arriving from a distant source (such as a star) and the velocity of its observer (see the diagram above). A moving observer thus sees the light coming from a slightly different direction and consequently sees the source at a position shifted from its original position. Since the direction of the Earth's velocity changes continuously as the Earth orbits the Sun, this effect causes the apparent position of stars to move around. From the angular difference in the position of stars (maximally 20.5 arcseconds)[84] it is possible to express the speed of light in terms of the Earth's velocity around the Sun, which with the known length of a year can be converted to the time needed to travel from the Sun to the Earth. In 1729, Bradley used this method to derive that light travelled 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.[83]
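For small angles the maximum aberration is approximately v/c radians, where v is the Earth's orbital speed; a quick check against the figures quoted above (the orbital speed of about 29.78 km/s is a standard value, not from the text):

```python
import math

C = 299_792_458   # speed of light in vacuum, m/s
V_EARTH = 29_780  # mean orbital speed of the Earth, m/s (approximate)

# Maximum aberration angle ≈ v/c radians, converted to arcseconds:
angle_arcsec = math.degrees(V_EARTH / C) * 3600
print(f"aberration ≈ {angle_arcsec:.1f} arcseconds")  # ≈ 20.5″

# Equivalently, light travels c/v times faster than the Earth in its orbit:
print(f"c / v ≈ {C / V_EARTH:,.0f}")
```

Both numbers match the article: about 20.5 arcseconds of aberration, and a speed ratio close to the modern figure of 10,066.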

Astronomical unit

An astronomical unit (AU) is approximately the average distance between the Earth and Sun. It was redefined in 2012 as exactly 149597870700 m.[85][86] Previously the AU was not defined within the International System of Units but in terms of the gravitational force exerted by the Sun in the framework of classical mechanics.[Note 6] The current definition uses the recommended value in metres for the previous definition of the astronomical unit, which was determined by measurement.[85] This redefinition is analogous to that of the metre, and likewise has the effect of fixing the speed of light to an exact value in astronomical units per second (via the exact speed of light in metres per second).

Previously, the inverse of c expressed in seconds per astronomical unit was measured by comparing the time for radio signals to reach different spacecraft in the Solar System, with their position calculated from the gravitational effects of the Sun and various planets. By combining many such measurements, a best fit value for the light time per unit distance could be obtained. For example, in 2009, the best estimate, as approved by the International Astronomical Union (IAU), was:[88][89]
light time for unit distance: 499.004783836(10) s
c = 0.00200398880410(4) AU/s = 173.144632674(3) AU/day.
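
These figures can be cross-checked against the exact SI values with a few lines of Python, using the 2012 definition of the AU:

```python
# Cross-check the IAU 2009 figures quoted above against the exact SI value,
# using the 2012 definition 1 AU = 149597870700 m.
AU = 149_597_870_700           # metres (exact, 2012 definition)
c_si = 299_792_458             # m/s (exact)

light_time_per_au = AU / c_si              # seconds per AU
c_au_per_day = 86400 / light_time_per_au   # AU per day

print(light_time_per_au)   # ≈ 499.0047838 s
print(c_au_per_day)        # ≈ 173.1446327 AU/day
```

Both results match the measured IAU values to well within their stated uncertainties, which is exactly what the redefinition was designed to achieve.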
The relative uncertainty in these measurements is 0.02 parts per billion (2×10−11), equivalent to the uncertainty in Earth-based measurements of length by interferometry.[90] Since the metre is defined to be the length travelled by light in a certain time interval, the measurement of the light time in terms of the previous definition of the astronomical unit can also be interpreted as measuring the length of an AU (old definition) in metres.[Note 7]

Time of flight techniques

A method of measuring the speed of light is to measure the time needed for light to travel to a mirror at a known distance and back. This is the working principle behind the Fizeau–Foucault apparatus developed by Hippolyte Fizeau and Léon Foucault.
A light ray passes horizontally through a half-mirror and a rotating cog wheel, is reflected back by a mirror, passes through the cog wheel, and is reflected by the half-mirror into a monocular.
Diagram of the Fizeau apparatus

The setup as used by Fizeau consists of a beam of light directed at a mirror 8 kilometres (5 mi) away. On the way from the source to the mirror, the beam passes through a rotating cogwheel. At a certain rate of rotation, the beam passes through one gap on the way out and another on the way back, but at slightly higher or lower rates, the beam strikes a tooth and does not pass through the wheel. Knowing the distance between the wheel and the mirror, the number of teeth on the wheel, and the rate of rotation, the speed of light can be calculated.[91]
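
A sketch of the calculation in Python, using approximate historical values for Fizeau's setup (about 720 teeth, an 8.6 km path, and a first eclipse near 12.6 revolutions per second; these particular numbers are illustrative):

```python
# Fizeau's cogwheel: at the first eclipse the wheel advances by half a
# tooth-plus-gap period while the light makes the round trip 2d,
# i.e. 1/(2n) of a revolution in time 2d/c, so c = 4 * n * d * f.
n_teeth = 720     # teeth on the wheel (approximate historical value)
d = 8633.0        # distance to the far mirror, metres (approximate)
f_rot = 12.6      # rotation rate at the first eclipse, rev/s (approximate)

c_est = 4 * n_teeth * d * f_rot
print(f"c ≈ {c_est/1000:.0f} km/s")   # ~313000 km/s, cf. Fizeau's 315000 km/s
```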

The method of Foucault replaces the cogwheel by a rotating mirror. Because the mirror keeps rotating while the light travels to the distant mirror and back, the light is reflected from the rotating mirror at a different angle on its way out than it is on its way back. From this difference in angle, the known speed of rotation and the distance to the distant mirror the speed of light may be calculated.[92]
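
The geometry can be sketched in Python; the mirror distance and spin rate below are invented for illustration, not Foucault's actual values:

```python
import math

# Foucault's rotating mirror: during the round trip t = 2D/c the mirror
# turns by omega*t, and the returning beam is deflected by twice that:
# dtheta = 4*omega*D/c. Inverting: c = 4*omega*D/dtheta.
D = 20.0                         # rotating mirror to fixed mirror, metres (illustrative)
omega = 2 * math.pi * 500        # mirror spin, rad/s (500 rev/s, illustrative)
c_true = 2.998e8

dtheta = 4 * omega * D / c_true  # deflection one would measure, radians
c_est = 4 * omega * D / dtheta   # recover c from the measured angle
print(dtheta, c_est)
```

The tiny deflection angle (well under a milliradian here) is why Foucault's method demanded such fast mirror rotation and careful optics.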

Nowadays, using oscilloscopes with time resolutions of less than one nanosecond, the speed of light can be directly measured by timing the delay of a light pulse from a laser or an LED reflected from a mirror. This method is less precise (with errors of the order of 1%) than other modern techniques, but it is sometimes used as a laboratory experiment in college physics classes.[93][94][95]

Electromagnetic constants

An option for deriving c that does not directly depend on a measurement of the propagation of electromagnetic waves is to use the relation between c and the vacuum permittivity ε0 and vacuum permeability μ0 established by Maxwell's theory: c2 = 1/(ε0μ0). The vacuum permittivity may be determined by measuring the capacitance and dimensions of a capacitor, whereas the value of the vacuum permeability is fixed at exactly 4π×10−7 H⋅m−1 through the definition of the ampere. Rosa and Dorsey used this method in 1907 to find a value of 299710±22 km/s.[96][97]
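
A minimal Python sketch of this relation, taking μ0 = 4π×10−7 H/m from the ampere definition and a measured-style value for ε0:

```python
import math

# c from Maxwell's relation c = 1/sqrt(eps0 * mu0). mu0 is fixed by the
# definition of the ampere; eps0 here stands in for a value obtained from
# capacitance measurements, as in Rosa and Dorsey's 1907 experiment.
mu0 = 4 * math.pi * 1e-7     # H/m, exact by definition of the ampere
eps0 = 8.854187817e-12       # F/m, as if measured from a capacitor

c_est = 1 / math.sqrt(eps0 * mu0)
print(f"c ≈ {c_est:.6e} m/s")
```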

Cavity resonance

A box with three waves in it; there are one and a half wavelengths of the top wave, one of the middle one, and half of the bottom one.
Electromagnetic standing waves in a cavity.

Another way to measure the speed of light is to independently measure the frequency f and wavelength λ of an electromagnetic wave in vacuum. The value of c can then be found by using the relation c = fλ. One option is to measure the resonance frequency of a cavity resonator. If the dimensions of the resonance cavity are also known, these can be used to determine the wavelength of the wave. In 1946, Louis Essen and A.C. Gordon-Smith established the frequency for a variety of normal modes of microwaves of a microwave cavity of precisely known dimensions. The dimensions were established to an accuracy of about ±0.8 μm using gauges calibrated by interferometry.[96] As the wavelength of the modes was known from the geometry of the cavity and from electromagnetic theory, knowledge of the associated frequencies enabled a calculation of the speed of light.[96][98]
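
As an illustration, for the TE101 mode of a rectangular cavity the theory gives f = (c/2)·sqrt((1/a)² + (1/d)²), so a measured frequency together with measured dimensions yields c. The cavity dimensions below are made up for the sketch; Essen and Gordon-Smith used precisely gauged cavities and several modes:

```python
import math

# Invert the TE101 resonance formula of a rectangular cavity to get c.
# The dimensions are illustrative; the "measured" frequency is simulated
# from the known value of c to show the inversion.
a, d = 0.10, 0.10                              # cavity side lengths, metres
c_true = 299792458.0
f_meas = (c_true / 2) * math.hypot(1/a, 1/d)   # simulated measured frequency, Hz

c_est = 2 * f_meas / math.hypot(1/a, 1/d)
print(f"c ≈ {c_est:.0f} m/s")
```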

The Essen–Gordon-Smith result, 299792±9 km/s, was substantially more precise than those found by optical techniques.[96] By 1950, repeated measurements by Essen established a result of 299792.5±3.0 km/s.[99]

A household demonstration of this technique is possible, using a microwave oven and food such as marshmallows or margarine: if the turntable is removed so that the food does not move, it will cook the fastest at the antinodes (the points at which the wave amplitude is the greatest), where it will begin to melt. The distance between two such spots is half the wavelength of the microwaves; by measuring this distance to obtain the wavelength and multiplying the wavelength by the microwave frequency (usually displayed on the back of the oven, typically 2450 MHz), the value of c can be calculated, "often with less than 5% error".[100][101]
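
In Python, with illustrative numbers (melted spots about 6.1 cm apart in a 2450 MHz oven):

```python
# Microwave-oven demonstration: the spacing between melted spots is half
# the wavelength, so c = 2 * spacing * frequency. The 6.1 cm spacing is an
# assumed example measurement; 2450 MHz is the frequency quoted in the text.
spacing = 0.061          # metres between melted spots (half a wavelength)
f = 2450e6               # oven frequency, Hz

c_est = 2 * spacing * f
print(f"c ≈ {c_est:.3e} m/s")   # within a few per cent of 3e8 m/s
```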

Interferometry

Schematic of the working of a Michelson interferometer.
An interferometric determination of length. Left: constructive interference; Right: destructive interference.

Interferometry is another method to find the wavelength of electromagnetic radiation for determining the speed of light.[102] A coherent beam of light (e.g. from a laser), with a known frequency (f), is split to follow two paths and then recombined. By adjusting the path length while observing the interference pattern and carefully measuring the change in path length, the wavelength of the light (λ) can be determined. The speed of light is then calculated using the equation c = λf.
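
A one-line check in Python, using approximate published figures for the 633 nm helium-neon laser line (not values taken from this article):

```python
# c = lambda * f, as in laser-era measurements. The wavelength and frequency
# below are approximate standard figures for the red helium-neon laser line.
wavelength = 632.99e-9   # metres (approximate)
frequency = 473.61e12    # hertz (approximate)

c_est = wavelength * frequency
print(f"c ≈ {c_est:.4e} m/s")
```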

Before the advent of laser technology, coherent radio sources were used for interferometry measurements of the speed of light.[103] However, interferometric determination of wavelength becomes less precise as the wavelength increases, and the experiments were thus limited in precision by the long wavelength (~0.4 cm) of the radiowaves. The precision can be improved by using light with a shorter wavelength, but then it becomes difficult to directly measure the frequency of the light. One way around this problem is to start with a low frequency signal of which the frequency can be precisely measured, and from this signal progressively synthesize higher frequency signals whose frequency can then be linked to the original signal. A laser can then be locked to the frequency, and its wavelength can be determined using interferometry.[104] This technique was developed by a group at the National Bureau of Standards (NBS) (which later became NIST). They used it in 1972 to measure the speed of light in vacuum with a fractional uncertainty of 3.5×10−9.[104][105]

History

History of measurements of c (in km/s)
1675 Rømer and Huygens, moons of Jupiter 220000[82][106]
1729 James Bradley, aberration of light 301000[91]
1849 Hippolyte Fizeau, toothed wheel 315000[91]
1862 Léon Foucault, rotating mirror 298000±500[91]
1907 Rosa and Dorsey, EM constants 299710±30[96][97]
1926 Albert A. Michelson, rotating mirror 299796±4[107]
1950 Essen and Gordon-Smith, cavity resonator 299792.5±3.0[99]
1958 K.D. Froome, radio interferometry 299792.50±0.10[103]
1972 Evenson et al., laser interferometry 299792.4562±0.0011[105]
1983 17th CGPM, definition of the metre 299792.458 (exact)[80]
Until the early modern period, it was not known whether light travelled instantaneously or at a very fast finite speed. The first extant recorded examination of this subject was in ancient Greece. The ancient Greeks, Muslim scholars and classical European scientists long debated this until Rømer provided the first calculation of the speed of light. Einstein's Theory of Special Relativity concluded that the speed of light is constant regardless of one's frame of reference. Since then, scientists have provided increasingly accurate measurements.

Early history

Empedocles (c. 490–430 BC) was the first to claim that light has a finite speed.[108] He maintained that light was something in motion, and therefore must take some time to travel. Aristotle argued, to the contrary, that "light is due to the presence of something, but it is not a movement".[109] Euclid and Ptolemy advanced Empedocles' emission theory of vision, where light is emitted from the eye, thus enabling sight. Based on that theory, Heron of Alexandria argued that the speed of light must be infinite because distant objects such as stars appear immediately upon opening the eyes.

Early Islamic philosophers initially agreed with the Aristotelian view that light had no speed of travel. In 1021, Alhazen (Ibn al-Haytham) published the Book of Optics, in which he presented a series of arguments dismissing the emission theory of vision in favour of the now accepted intromission theory, in which light moves from an object into the eye.[110] This led Alhazen to propose that light must have a finite speed,[109][111][112] and that the speed of light is variable, decreasing in denser bodies.[112][113] He argued that light is substantial matter, the propagation of which requires time, even if this is hidden from our senses.[114] Also in the 11th century, Abū Rayhān al-Bīrūnī agreed that light has a finite speed, and observed that the speed of light is much faster than the speed of sound.[115]

In the 13th century, Roger Bacon argued that the speed of light in air was not infinite, using philosophical arguments backed by the writing of Alhazen and Aristotle.[116][117] In the 1270s, Witelo considered the possibility of light travelling at infinite speed in vacuum, but slowing down in denser bodies.[118]

In the early 17th century, Johannes Kepler believed that the speed of light was infinite, since empty space presents no obstacle to it. René Descartes argued that if the speed of light were to be finite, the Sun, Earth, and Moon would be noticeably out of alignment during a lunar eclipse. Since such misalignment had not been observed, Descartes concluded the speed of light was infinite. Descartes speculated that if the speed of light were found to be finite, his whole system of philosophy might be demolished.[109] In Descartes' derivation of Snell's law, he assumed that even though the speed of light was instantaneous, the more dense the medium, the faster was light's speed.[119] Pierre de Fermat derived Snell's law using the opposing assumption, the more dense the medium the slower light traveled. Fermat also argued in support of a finite speed of light.[120]

First measurement attempts

In 1629, Isaac Beeckman proposed an experiment in which a person observes the flash of a cannon reflecting off a mirror about one mile (1.6 km) away. In 1638, Galileo Galilei proposed an experiment, with an apparent claim to having performed it some years earlier, to measure the speed of light by observing the delay between uncovering a lantern and its perception some distance away. He was unable to distinguish whether light travel was instantaneous or not, but concluded that if it were not, it must nevertheless be extraordinarily rapid.[121][122] Galileo's experiment was carried out by the Accademia del Cimento of Florence, Italy, in 1667, with the lanterns separated by about one mile, but no delay was observed. The actual delay in this experiment would have been about 11 microseconds.
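
The quoted 11-microsecond figure follows directly from the round-trip distance; a quick Python check:

```python
# Round-trip delay in the Accademia del Cimento version of Galileo's
# experiment: lanterns about one mile apart, so light covers ~2 miles.
d = 1609.0               # one mile in metres
c = 299792458.0

delay = 2 * d / c
print(f"delay ≈ {delay * 1e6:.0f} microseconds")   # ≈ 11 µs
```

A delay this short is far below human reaction time, which is why no delay could be observed.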
A diagram of a planet's orbit around the Sun and of a moon's orbit around another planet. The shadow of the latter planet is shaded.
Rømer's observations of the occultations of Io from Earth

The first quantitative estimate of the speed of light was made in 1676 by Rømer (see Rømer's determination of the speed of light).[81][82] From the observation that the periods of Jupiter's innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when receding from it, he concluded that light travels at a finite speed, and estimated that it takes light 22 minutes to cross the diameter of Earth's orbit. Christiaan Huygens combined this estimate with an estimate for the diameter of the Earth's orbit to obtain an estimate of the speed of light of 220000 km/s, 26% lower than the actual value.[106]
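
Huygens' arithmetic can be reproduced in Python; the modern value of the AU is used here for illustration, whereas Huygens relied on his own, smaller estimate of the orbit (hence his lower result):

```python
# Rømer's 22-minute light time across the diameter of Earth's orbit (2 AU),
# combined with the modern AU for illustration.
diameter = 2 * 149_597_870_700   # metres (modern value, not Huygens' figure)
light_time = 22 * 60             # seconds

c_est = diameter / light_time
print(f"c ≈ {c_est/1000:.0f} km/s")   # about 24% below the modern value
```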

In his 1704 book Opticks, Isaac Newton reported Rømer's calculations of the finite speed of light and gave a value of "seven or eight minutes" for the time taken for light to travel from the Sun to the Earth (the modern value is 8 minutes 19 seconds).[123] Newton queried whether Rømer's eclipse shadows were coloured; hearing that they were not, he concluded the different colours travelled at the same speed. In 1729, James Bradley discovered stellar aberration.[83] From this effect he determined that light must travel 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.[83]

Connections with electromagnetism

In the 19th century Hippolyte Fizeau developed a method to determine the speed of light based on time-of-flight measurements on Earth and reported a value of 315000 km/s. His method was improved upon by Léon Foucault who obtained a value of 298000 km/s in 1862.[91] In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the ratio of the electromagnetic and electrostatic units of charge, 1/√ε0μ0, by discharging a Leyden jar, and found that its numerical value was very close to the speed of light as measured directly by Fizeau. The following year Gustav Kirchhoff calculated that an electric signal in a resistanceless wire travels along the wire at this speed.[124] In the early 1860s, Maxwell showed that, according to the theory of electromagnetism he was working on, electromagnetic waves propagate in empty space[125][126][127] at a speed equal to the above Weber/Kohlrausch ratio, and drawing attention to the numerical proximity of this value to the speed of light as measured by Fizeau, he proposed that light is in fact an electromagnetic wave.[128]

"Luminiferous aether"


Hendrik Lorentz (right) with Albert Einstein.

It was thought at the time that empty space was filled with a background medium called the luminiferous aether in which the electromagnetic field existed. Some physicists thought that this aether acted as a preferred frame of reference for the propagation of light and therefore it should be possible to measure the motion of the Earth with respect to this medium, by measuring the isotropy of the speed of light. Beginning in the 1880s several experiments were performed to try to detect this motion, the most famous of which is the experiment performed by Albert A. Michelson and Edward W. Morley in 1887.[129] The detected motion was always less than the observational error. Modern experiments indicate that the two-way speed of light is isotropic (the same in every direction) to within 6 nanometres per second.[130] Because of this experiment Hendrik Lorentz proposed that the motion of the apparatus through the aether may cause the apparatus to contract along its length in the direction of motion, and he further assumed that the time variable for moving systems must also be changed accordingly ("local time"), which led to the formulation of the Lorentz transformation. Based on Lorentz's aether theory, Henri Poincaré (1900) showed that this local time (to first order in v/c) is indicated by clocks moving in the aether, which are synchronized under the assumption of constant light speed. In 1904, he speculated that the speed of light could be a limiting velocity in dynamics, provided that the assumptions of Lorentz's theory are all confirmed. In 1905, Poincaré brought Lorentz's aether theory into full observational agreement with the principle of relativity.[131][132]

Special relativity

In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observer. Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to light. This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.[133][134]

Increased accuracy of c and redefinition of the metre and second

In the second half of the 20th century much progress was made in increasing the accuracy of measurements of the speed of light, first by cavity resonance techniques and later by laser interferometer techniques. These were aided by new, more precise, definitions of the metre and second. In 1950, Louis Essen determined the speed as 299792.5±1 km/s, using cavity resonance.
This value was adopted by the 12th General Assembly of the Radio-Scientific Union in 1957. In 1960, the metre was redefined in terms of the wavelength of a particular spectral line of krypton-86, and, in 1967, the second was redefined in terms of the hyperfine transition frequency of the ground state of caesium-133.
In 1972, using the laser interferometer method and the new definitions, a group at NBS in Boulder, Colorado determined the speed of light in vacuum to be c = 299792456.2±1.1 m/s. This was 100 times less uncertain than the previously accepted value. The remaining uncertainty was mainly related to the definition of the metre.[Note 8][105] As similar experiments found comparable results for c, the 15th Conférence Générale des Poids et Mesures (CGPM) in 1975 recommended using the value 299792458 m/s for the speed of light.[137]

Defining the speed of light as an explicit constant

In 1983 the 17th CGPM found that wavelengths from frequency measurements and a given value for the speed of light are more reproducible than the previous standard. They kept the 1967 definition of the second, so the caesium hyperfine frequency would now determine both the second and the metre. To do this, they redefined the metre as: "The metre is the length of the path travelled by light in vacuum during a time interval of 1/299792458 of a second."[80] As a result of this definition, the value of the speed of light in vacuum is exactly 299792458 m/s[138][139] and has become a defined constant in the SI system of units.[11] Improved experimental techniques that prior to 1983 would have measured the speed of light no longer affect the known value of the speed of light in SI units, but instead allow a more precise realization of the metre by more accurately measuring the wavelength of krypton-86 and other light sources.[140][141]

In 2011, the CGPM stated its intention to redefine all seven SI base units using what it calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. It proposed a new, but completely equivalent, wording of the metre's definition: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299792458 when it is expressed in the SI unit m s−1."[142] This is one of the proposed changes to be incorporated in the next revision of the SI also termed the New SI.

Skepticism



Skepticism or scepticism (see spelling differences) is generally any questioning attitude towards knowledge, facts, or opinions/beliefs stated as facts,[1] or doubt regarding claims that are taken for granted elsewhere.[2]

Philosophical skepticism is an overall approach that requires all information to be well supported by evidence.[3] Classical philosophical skepticism derives from the 'Skeptikoi', a school who "asserted nothing".[4] Adherents of Pyrrhonism (a position more recently paralleled by fallibilism), for instance, suspend judgment in investigations.[5] Skeptics may even doubt the reliability of their own senses.[6] Religious skepticism, on the other hand, is "doubt concerning basic religious principles (such as immortality, providence, and revelation)".[7] Scientific skepticism is about testing scientific beliefs for reliability, by subjecting them to systematic investigation using the scientific method, to create empirical evidence for them.

Definition

In ordinary usage, skepticism (US) or scepticism (UK) (Greek: 'σκέπτομαι' skeptomai, to think, to look about, to consider; see also spelling differences) refers to:
  1. an attitude of doubt or a disposition to incredulity either in general or toward a particular object;
  2. the doctrine that true knowledge or knowledge in a particular area is uncertain; or
  3. the method of suspended judgment, systematic doubt, or criticism that is characteristic of skeptics (Merriam–Webster).
In philosophy, skepticism refers more specifically to any one of several propositions. These include propositions about:
  1. an inquiry,
  2. a method of obtaining knowledge through systematic doubt and continual testing,
  3. the arbitrariness, relativity, or subjectivity of moral values,
  4. the limitations of knowledge,
  5. a method of intellectual caution and suspended judgment.

Philosophical skepticism

In philosophical skepticism, Pyrrhonism is a position that refrains from making truth claims. A philosophical skeptic does not claim that truth is impossible (which would itself be a truth claim), but instead recommends "suspending belief". The label is commonly used to describe philosophies which appear similar to philosophical skepticism, such as academic skepticism, an ancient variant of Platonism that claimed knowledge of truth was impossible. Empiricism is a closely related, but not identical, position to philosophical skepticism. Empiricists see empiricism as a pragmatic compromise between philosophical skepticism and nomothetic science; philosophical skepticism is in turn sometimes referred to as "radical empiricism."

Western philosophical skepticism originated in ancient Greek philosophy.[8] The Greek Sophists of the 5th century BC were partially skeptics.

Pyrrho of Elis (365-275 BC) is usually credited with founding the "school" of skepticism. He traveled to India and studied with the "gymnosophists" (naked lovers of wisdom), which could have been any number of Indian sects. From there, he brought back the idea that nothing can be known for certain. The senses are easily fooled, and reason too easily follows our desires.[9] Pyrrhonism was a school of skepticism founded by his follower Aenesidemus in the first century BC and recorded by Sextus Empiricus in the late 2nd century or early 3rd century AD. Subsequently, in the "New Academy" Arcesilaus (c. 315-241 BC) and Carneades (c. 213-129 BC) developed more theoretical perspectives by which conceptions of absolute truth and falsity were refuted as uncertain. Carneades criticized the views of the Dogmatists, especially supporters of Stoicism, asserting that absolute certainty of knowledge is impossible. Sextus Empiricus (c. AD 200), the main authority for Greek skepticism, developed the position further, incorporating aspects of empiricism into the basis for asserting knowledge.

Greek skeptics criticized the Stoics, accusing them of dogmatism. For the skeptics, the logical mode of argument was untenable, as it relied on propositions which could not be said to be either true or false without relying on further propositions. This was the regress argument, whereby every proposition must rely on other propositions in order to maintain its validity (see the five tropes of Agrippa the Sceptic). In addition, the skeptics argued that two propositions could not rely on each other, as this would create a circular argument (as p implies q and q implies p). For the skeptics, such logic was thus an inadequate measure of truth and could create as many problems as it claimed to have solved. Truth was not, however, necessarily unobtainable, but rather an idea which did not yet exist in a pure form. Although skepticism was accused of denying the possibility of truth, in fact it appears to have mainly been a critical school which merely claimed that logicians had not discovered truth.

In Islamic philosophy, skepticism was established by Al-Ghazali (1058–1111), known in the West as "Algazel", as part of the Ash'ari school of Islamic theology, whose method of skepticism shares many similarities with Descartes' method.[10]

In an effort to avoid skepticism, René Descartes begins his Meditations on First Philosophy attempting to find indubitable truth on which to base his knowledge. He later recognizes this truth as "I think, therefore I am," but before he finds this truth, he briefly entertains the skeptical arguments from dreaming and radical deception.

David Hume has also been described as a global skeptic.

Pierre Le Morvan (2011) has distinguished between three broad philosophical approaches to skepticism. The first he calls the "Foil Approach." According to this approach, skepticism is treated as a problem to be solved, or challenge to be met, or threat to be parried; skepticism's value on this view, insofar as it is deemed to have one, accrues from its role as a foil contrastively illuminating what is required for knowledge and justified belief. The second he calls the "Bypass Approach" according to which skepticism is bypassed as a central concern of epistemology. Le Morvan advocates a third approach—he dubs it the "Health Approach"—that explores when skepticism is healthy and when it is not, or when it is virtuous and when it is vicious.

Scientific skepticism

A scientific (or empirical) skeptic is one who questions beliefs on the basis of scientific understanding. Most scientists, being scientific skeptics, test the reliability of certain kinds of claims by subjecting them to a systematic investigation using some form of the scientific method.[11] As a result, a number of claims are considered "pseudoscience" if they are found to improperly apply or ignore the fundamental aspects of the scientific method. Scientific skepticism may discard beliefs pertaining to things outside perceivable observation and thus outside the realm of systematic, empirical falsifiability/testability.

Religious skepticism


In The God Delusion, Richard Dawkins wrote (about religious texts): "The fact that something is written down is persuasive to people not used to asking questions like: ‘Who wrote it, and when?’ ‘How did they know what to write?’ ‘Did they, in their time, really mean what we, in our time, understand them to be saying?’ ‘Were they unbiased observers, or did they have an agenda that coloured their writing?’".[12]

Religious skepticism generally refers to doubting given religious beliefs or claims. Historically, religious skepticism can be traced back to Socrates, who doubted many religious claims of the time. Modern religious skepticism typically places more emphasis on scientific and historical methods or evidence, with Michael Shermer writing that it is a process for discovering the truth rather than blanket non-acceptance. For this reason a religious skeptic might believe that Jesus existed while questioning claims that he was the messiah or performed miracles (see historicity of Jesus). Religious skepticism is not the same as atheism or agnosticism, though these often do involve skeptical attitudes toward religion and philosophical theology (for example, towards divine omnipotence). Religious people are generally skeptical about claims of other religions, at least when the two denominations conflict in some stated belief. In addition, they may also be skeptical of the claims made by atheists.[13] The historian Will Durant writes that Plato was "as skeptical of atheism as of any other dogma."[14]


Drake equation




The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. Frank Drake wrote the equation in 1961 not to quantify the number of civilizations,[1] but as a way to stimulate scientific dialogue at the world's first search for extraterrestrial intelligence (SETI) meeting, in Green Bank, West Virginia. The equation summarizes the main concepts which scientists must contemplate when considering the question of other radio-communicative life.[1] The Drake equation has proved controversial since several of its factors are currently unknown, and estimates of their values span a very wide range. This has led critics to label the equation a guesstimate, or even meaningless.

History

In September 1959, physicists Giuseppe Cocconi and Philip Morrison published an article in the journal Nature with the provocative title "Searching for Interstellar Communications."[2][3] Cocconi and Morrison argued that radio telescopes had become sensitive enough to pick up transmissions that might be broadcast into space by civilizations orbiting other stars. Such messages, they suggested, might be transmitted at a wavelength of 21 centimeters (1,420.4 megahertz). This is the wavelength of radio emission by neutral hydrogen, the most common element in the universe, and they reasoned that other intelligences might see this as a logical landmark in the radio spectrum.
Seven months later, radio astronomer Frank Drake became the first person to start a systematic search for intelligent signals from the cosmos. Using the 25 meter dish of the National Radio Astronomy Observatory in Green Bank, West Virginia, Drake observed two nearby Sun-like stars: Epsilon Eridani and Tau Ceti. In this project, which he called Project Ozma, he slowly scanned frequencies close to the 21 cm wavelength for six hours a day from April to July 1960.[3] The project was well designed, cheap, and simple by today's standards, but proved unsuccessful.

Soon thereafter, Drake hosted a "search for extraterrestrial intelligence" meeting on detecting their radio signals. The meeting was held at the Green Bank facility in 1961. The equation that bears Drake's name arose out of his preparations for the meeting.[4]
As I planned the meeting, I realized a few day[s] ahead of time we needed an agenda. And so I wrote down all the things you needed to know to predict how hard it's going to be to detect extraterrestrial life. And looking at them it became pretty evident that if you multiplied all these together, you got a number, N, which is the number of detectable civilizations in our galaxy. This was aimed at the radio search, and not to search for primordial or primitive life forms. —Frank Drake.
The ten attendees were conference organiser Peter Pearman, Frank Drake, Philip Morrison, businessman and radio amateur Dana Atchley, chemist Melvin Calvin, astronomer Su-Shu Huang, neuroscientist John C. Lilly, inventor Barney Oliver, astronomer Carl Sagan and radio-astronomer Otto Struve.[5] These participants dubbed themselves "The Order of the Dolphin" (because of Lilly's work on dolphin communication), and commemorated their first meeting with a plaque at the observatory hall.[6][7]

The equation

The Drake equation is:
$ N = R_{\ast} \cdot f_p \cdot n_e \cdot f_\ell \cdot f_i \cdot f_c \cdot L $
where:
N = the number of civilizations in our galaxy with which radio-communication might be possible (i.e. which are on our current past light cone);
and
R* = the average rate of star formation in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fl = the fraction of planets that could support life that actually develop life at some point
fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space[8]
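Since the equation is a plain product of the factors defined above, it can be sketched as a small function. The parameter values in the example below are purely illustrative placeholders, not estimates endorsed by the article:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* · fp · ne · fl · fi · fc · L (the Drake equation)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical inputs: 1 star/year, half of stars with planets, 2 habitable
# planets each, life always arises, 1% develop intelligence, 10% communicate,
# and a 10,000-year signalling lifetime.
N = drake(R_star=1, f_p=0.5, n_e=2, f_l=1, f_i=0.01, f_c=0.1, L=10_000)
print(N)  # ≈ 10 detectable civilizations
```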

Usefulness


The Allen Telescope Array for SETI

Although written as an equation, Drake's formulation is not particularly useful for computing an accurate value of the number of civilizations with which we might be able to communicate. The last four parameters, $ f_\ell, f_i, f_c $, and $ L $, are not known and are very hard to estimate, with values ranging over many orders of magnitude (see criticism). Therefore, the SETI League states that the importance of the Drake equation is not in the solving, but rather in the contemplation.[1] It may be more useful to think of it as a series of questions framed as a numbers game.[8][9] The equation is quite useful for its intended application, which is to summarize all the various concepts which scientists must contemplate when considering the question of life elsewhere,[1] and it gives that question a basis for scientific analysis. The Drake equation stimulates intellectual curiosity about the universe around us, helps us to understand that life as we know it is the end product of a natural, cosmic evolution, and helps us realize how much we are a part of that universe.[10] What the equation and the search for life have done is to focus science on some of the other questions about life in the universe, specifically abiogenesis, the development of multicellular life, and the development of intelligence itself.[11]

Within the limits of our existing technology, any practical search for distant intelligent life must necessarily be a search for some manifestation of a distant technology. After about 50 years, the Drake equation is still of seminal importance because it is a 'road map' of what we need to learn in order to solve this fundamental existential question. It also formed the backbone of astrobiology as a science; although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories.[12] Some 50 years of SETI have failed to find anything, even though radio telescopes, receiver techniques, and computational abilities have improved enormously since the early 1960s. It has at least been discovered that our galaxy is not teeming with very powerful alien transmitters continuously broadcasting near the 21 cm hydrogen frequency, something no one could have said in 1961.[13]

Modifications

As many observers have pointed out, the Drake equation is a very simple model that does not include potentially relevant parameters,[14] and many changes and modifications to the equation have been proposed. One line of modification, for example, attempts to account for the uncertainty inherent in many of the terms.[15]

Others note that the Drake equation ignores many concepts that might be relevant to the odds of contacting other civilizations. For example, David Brin states: "The Drake equation merely speaks of the number of sites at which ETIs spontaneously arise. The equation says nothing directly about the contact cross-section between an ETIS and contemporary human society".[16] Because it is the contact cross-section that is of interest to the SETI community, many additional factors and modifications of the Drake equation have been proposed.
Colonization
It has been proposed to generalize the Drake equation to include additional effects of alien civilizations colonizing other star systems. Each original site expands with an expansion velocity v, and establishes additional sites that survive for a lifetime L. The result is a more complex set of 3 equations.[16]
Reappearance factor
The Drake equation may furthermore be multiplied by how many times an intelligent civilization may occur on planets where it has happened once. Even if an intelligent civilization reaches the end of its lifetime after, for example, 10,000 years, life may still prevail on the planet for billions of years, permitting the next civilization to evolve. Thus, several civilizations may come and go during the lifespan of one and the same planet. If nr is the average number of times a new civilization reappears on the same planet where a previous civilization has appeared and ended, then the total number of civilizations on such a planet would be (1 + nr), which is the reappearance factor added to the equation.

The factor depends on what generally is the cause of civilization extinction. If it is generally by temporary uninhabitability, for example a nuclear winter, then nr may be relatively high. On the other hand, if it is generally by permanent uninhabitability, such as stellar evolution, then nr may be almost zero. In the case of total life extinction, a similar factor may be applicable for fl, that is, how many times life may appear on a planet where it has appeared once.
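The reappearance modification amounts to scaling any Drake estimate by (1 + nr). A minimal sketch, with hypothetical values for N and nr:

```python
def with_reappearance(N, n_r):
    # Each qualifying planet hosts, on average, one original civilization
    # plus n_r reappearances, so the count scales by (1 + n_r).
    return N * (1 + n_r)

# Hypothetical: a baseline estimate of 20 civilizations, with each planet
# producing on average 0.5 successor civilizations after the first ends.
print(with_reappearance(N=20, n_r=0.5))  # → 30.0
```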
METI factor
Alexander Zaitsev said that to be in a communicative phase and emit dedicated messages are not the same. For example, humans, although being in a communicative phase, are not a communicative civilization; we do not practise such activities as the purposeful and regular transmission of interstellar messages. For this reason, he suggested introducing the METI factor (Messaging to Extra-Terrestrial Intelligence) to the classical Drake equation.[17] He defined the factor as "the fraction of communicative civilizations with clear and non-paranoid planetary consciousness", or alternatively expressed, the fraction of communicative civilizations that actually engage in deliberate interstellar transmission.

The METI factor is somewhat misleading since active, purposeful transmission of messages by a civilization is not required for them to receive a broadcast sent by another that is seeking first contact. It is merely required they have capable and compatible receiver systems operational; however, this is a variable humans cannot accurately estimate.
Biogenic gases
Astronomer Sara Seager proposes a revised equation that focuses on the search for planets with biosignature gases, gases produced by living organisms that can accumulate in a planet atmosphere to levels that can be detected with remote space telescopes.[18]

Estimates

Original estimates

There is considerable disagreement on the values of these parameters, but the 'educated guesses' used by Drake and his colleagues in 1961 were:[19][20]
  • R* = 1/year (1 star formed per year, on the average over the life of the galaxy; this was regarded as conservative)
  • fp = 0.2-0.5 (one fifth to one half of all stars formed will have planets)
  • ne = 1-5 (stars with planets will have between 1 and 5 planets capable of developing life)
  • fl = 1 (100% of these planets will develop life)
  • fi = 1 (100% of which will develop intelligent life)
  • fc = 0.1-0.2 (10-20% of which will be able to communicate)
  • L = 1000-100,000,000 years (civilizations will last somewhere between 1000 and 100,000,000 years)
Inserting the above minimum numbers into the equation gives a minimum N of 20. Inserting the maximum numbers gives a maximum of 50,000,000. Drake states that given the uncertainties, the original meeting concluded that N ≈ L, and there were probably between 1000 and 100,000,000 civilizations in the Milky Way galaxy.
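The quoted minimum and maximum follow directly from multiplying the low and high ends of each 1961 range, which can be checked in a few lines:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    # Plain product of the seven Drake factors.
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Low end of every 1961 range:
N_min = drake(1, 0.2, 1, 1, 1, 0.1, 1_000)        # ≈ 20
# High end of every 1961 range:
N_max = drake(1, 0.5, 5, 1, 1, 0.2, 100_000_000)  # ≈ 50,000,000
print(N_min, N_max)
```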

Range of values

As many skeptics have pointed out, the Drake equation can give a very wide range of values, depending on the assumptions. One of the few points of agreement is that the presence of humanity means the probability of intelligence arising is greater than nil.[21] Beyond this, however, the values one may attribute to each factor in this equation tell more about a person's beliefs than about scientific facts.[22]

Using the lowest values in the above estimates (and assuming the Rare Earth hypothesis implies ne*fl = 10^-11, one planet with complex life in the galaxy):
R* = 7/year,[23] fp = 0.4,[24] ne*fl = 10^-11, fi = 10^-9,[25] fc = 0.1, and L = 304 years[26]
result in
N = 7 × 0.4 × 10^-11 × 10^-9 × 0.1 × 304 ≈ 8.5 × 10^-19 (suggesting that we are probably alone in this galaxy, and likely in the observable universe)
On the other hand, with larger values for each of the parameters above, N may be much greater than 1. Using the highest values that have been proposed for each of the parameters:
R* = 7/year,[23] fp = 1,[27] ne = 0.2,[28][29] fl = 0.13,[30] fi = 1,[31] fc = 0.2 [Drake, above], and L = 10^9 years[32]
result in
N = 7 × 1 × 0.2 × 0.13 × 1 × 0.2 × 10^9 = 36.4 million
This has provided popular motivation and some funding for SETI research.
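Multiplying out the two parameter sets above makes the equation's sensitivity vivid: roughly 26 orders of magnitude separate the pessimistic and optimistic answers. A quick check:

```python
# Pessimistic set: R*=7, fp=0.4, ne*fl=1e-11, fi=1e-9, fc=0.1, L=304 years.
N_low = 7 * 0.4 * 1e-11 * 1e-9 * 0.1 * 304   # ≈ 8.5e-19: effectively alone

# Optimistic set: R*=7, fp=1, ne=0.2, fl=0.13, fi=1, fc=0.2, L=1e9 years.
N_high = 7 * 1 * 0.2 * 0.13 * 1 * 0.2 * 1e9  # ≈ 3.64e7: tens of millions

print(N_low, N_high)
```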

Monte Carlo simulations of estimates of the Drake equation factors based on a stellar and planetary model of the Milky Way have resulted in the number of civilizations varying by a factor of 100.[33]
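The Monte Carlo approach can be sketched by drawing each uncertain factor from a distribution and examining the spread of the resulting N. The log-uniform ranges below are illustrative placeholders, not the stellar and planetary model used in [33]:

```python
import math
import random

def sample_log_uniform(lo, hi):
    # Sample uniformly in log space, which suits factors whose uncertainty
    # spans orders of magnitude.
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_N():
    # Hypothetical ranges for each Drake factor.
    R  = sample_log_uniform(1, 10)       # star formation rate, per year
    fp = sample_log_uniform(0.2, 1)      # fraction of stars with planets
    ne = sample_log_uniform(0.1, 5)      # habitable planets per such star
    fl = sample_log_uniform(0.01, 1)     # fraction developing life
    fi = sample_log_uniform(1e-4, 1)     # fraction developing intelligence
    fc = sample_log_uniform(0.01, 0.2)   # fraction releasing signals
    L  = sample_log_uniform(1e3, 1e8)    # signalling lifetime, years
    return R * fp * ne * fl * fi * fc * L

random.seed(0)
samples = sorted(sample_N() for _ in range(10_000))
print(samples[len(samples) // 2])  # median N across the simulated draws
```

Even this toy version reproduces the qualitative finding: the simulated values of N span many orders of magnitude, so the median alone understates the uncertainty.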

Current estimates

This section discusses and attempts to list the best current estimates for the parameters of the Drake equation.

R* = the rate of star creation in our galaxy
Latest calculations from NASA and the European Space Agency indicate that the current rate of star formation in our galaxy is about 7 per year.[23]
fp = the fraction of those stars that have planets
Recent analysis of microlensing surveys has found that fp may approach 1; that is, stars are orbited by planets as a rule rather than the exception, and there are one or more bound planets per Milky Way star.[27][34]
ne = the average number of planets (satellites may perhaps sometimes be just as good candidates) that can potentially support life per star that has planets
In November 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf stars within the Milky Way Galaxy.[35][36] 11 billion of these estimated planets may be orbiting sun-like stars.[37] Since there are about 100 billion stars in the galaxy, this implies fp*ne is roughly 0.4. The nearest planet in the habitable zone may be as little as 12 light-years away, according to the scientists.[35][36]
Even if planets are in the habitable zone, however, the number of planets with the right proportion of elements is difficult to estimate.[38] Brad Gibson, Yeshe Fenner, and Charley Lineweaver determined that about 10% of star systems in the Milky Way galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable for a sufficient time.[39] Also, the Rare Earth hypothesis, which posits that conditions for intelligent life are quite rare, has advanced a set of arguments based on the Drake equation that the number of planets or satellites that could support life is small, and quite possibly limited to Earth alone; in this case, the estimate of ne would be almost infinitesimally small.
The discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-supporting planets commonly survive the formation of their stellar systems. In addition, most stars in our galaxy are red dwarfs, which flare violently, mostly in X-rays, a property not conducive to life as we know it. Simulations also suggest that these bursts erode planetary atmospheres.
On the other hand, the variety of star systems that might have habitable zones is not just limited to solar-type stars and Earth-sized planets; it is now estimated that even tidally locked planets close to red dwarfs might have habitable zones.[40] The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's moon Titan) adds further uncertainty to this figure.
fl = the fraction of the above that actually go on to develop life
Geological evidence from the Earth suggests that fl may be high; life on Earth appears to have begun around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively common once conditions are right. However, this evidence only looks at the Earth (a single model planet), and contains anthropic bias, as the planet of study was not chosen randomly, but by the living organisms that already inhabit it (ourselves). From a classical hypothesis testing standpoint, there are zero degrees of freedom, permitting no valid estimates to be made. If life were to be found on Mars that developed independently from life on Earth it would imply a value for fl close to one. While this would improve the degrees of freedom from zero to one, there would remain a great deal of uncertainty on any estimate due to the small sample size, and the chance they are not really independent.
Countering this argument is that there is no evidence for abiogenesis occurring more than once on the Earth; that is, all terrestrial life stems from a common origin. If abiogenesis were more common, it might be expected to have occurred more than once on Earth. Scientists have searched for this by looking for bacteria that are unrelated to other life on Earth, but none have been found yet.[41] It is also possible that life arose more than once, but that other branches were out-competed, died in mass extinctions, or were lost in other ways. Biochemists Francis Crick and Leslie Orgel laid special emphasis on this uncertainty: "At the moment we have no means at all of knowing" whether we are "likely to be alone in the galaxy (Universe)" or whether "the galaxy may be pullulating with life of many different forms."[42] As an alternative to abiogenesis on Earth, they proposed the hypothesis of directed panspermia, which states that Earth life began with "microorganisms sent here deliberately by a technological society on another planet, by means of a special long-range unmanned spaceship" (Crick and Orgel, op. cit.).
In 2002, using a statistical argument based on the length of time life took to evolve on Earth, Charles H. Lineweaver and Tamara M. Davis (at the University of New South Wales and the Australian Centre for Astrobiology) estimated fl as > 0.13 on planets that have existed for at least one billion years.[30]
fi = the fraction of the above that develops intelligent life
This value remains particularly controversial. Those who favor a low value, such as the biologist Ernst Mayr, point out that of the billions of species that have existed on Earth, only one has become intelligent and from this, infer a tiny value for fi.[25] Those who favor higher values note the generally increasing complexity of life and conclude that the eventual appearance of intelligence might be imperative,[31][43] implying an fi approaching 1. Skeptics point out that the large spread of values in this factor and others make all estimates unreliable. 
In addition, while it appears that life developed soon after the formation of Earth, the Cambrian explosion, in which a large variety of multicellular life forms came into being, occurred a considerable amount of time after the formation of Earth, which suggests the possibility that special conditions were necessary. Some scenarios such as the Snowball Earth or research into the extinction events have raised the possibility that life on Earth is relatively fragile. Research on any past life on Mars is relevant since a discovery that life did form on Mars but ceased to exist would affect estimates of these factors.
This model also has a large anthropic bias and there are still zero degrees of freedom. Note that the capacity and willingness to participate in extraterrestrial communication has come relatively recently, with the Earth having only an estimated 100,000-year history of intelligent human life, and less than a century of technological ability.
Estimates of fi have been affected by discoveries that the Solar System's orbit is circular in the galaxy, at such a distance that it remains out of the spiral arms for tens of millions of years (evading radiation from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of rotation.
fc = the fraction of the above that release detectable signs of their existence into space
For deliberate communication, the one example we have (the Earth) does not do much explicit communication, though there are some efforts covering only a tiny fraction of the stars that might look for our presence. (See Arecibo message, for example). There is considerable speculation why an extraterrestrial civilization might exist but choose not to communicate. However, deliberate communication is not required, and calculations indicate that current or near-future Earth-level technology might well be detectable to civilizations not too much more advanced than our own.[44][45] By this standard, the Earth is a communicating civilization.
L = the expected lifetime of such a civilization for the period that it can communicate across interstellar space
Michael Shermer estimated L as 420 years, based on the duration of sixty historical Earthly civilizations.[26] Using 28 civilizations more recent than the Roman Empire, he calculates a figure of 304 years for "modern" civilizations. It could be argued from Shermer's results, however, that the fall of most of these civilizations was followed by later civilizations that carried on their technologies, so it is doubtful that they count as separate civilizations in the context of the Drake equation. In the expanded version that includes the reappearance number, this lack of specificity in defining a single civilization does not affect the end result, since such civilization turnover could be described as an increase in the reappearance number rather than in L, the civilization reappearing in the form of succeeding cultures. Furthermore, since none of these civilizations could communicate over interstellar space, the method of comparing with historical civilizations could be regarded as invalid.
David Grinspoon has argued that once a civilization has developed enough, it might overcome all threats to its survival. It would then last for an indefinite period of time, making the value for L potentially billions of years. If this is the case, then he proposes that the Milky Way galaxy may have been steadily accumulating advanced civilizations since it formed.[32] He proposes that the last factor, L, be replaced with fIC*T, where fIC is the fraction of communicating civilizations that become "immortal" (in the sense that they simply do not die out), and T represents the length of time during which this process has been going on. This has the advantage that T would be a relatively easy number to discover, as it would simply be some fraction of the age of the universe.
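Grinspoon's modification, swapping the final factor L for fIC*T, can be sketched as follows. All of the numeric inputs below are hypothetical placeholders:

```python
def drake_grinspoon(R_star, f_p, n_e, f_l, f_i, f_c, f_IC, T):
    # Grinspoon's variant: the last factor L is replaced by f_IC * T, where
    # f_IC is the fraction of communicating civilizations that never die out
    # and T is the time (years) over which the galaxy has accumulated them.
    return R_star * f_p * n_e * f_l * f_i * f_c * (f_IC * T)

# Hypothetical inputs; T is taken as a large fraction of the galaxy's age.
print(drake_grinspoon(7, 0.5, 1, 0.1, 0.01, 0.1, f_IC=0.01, T=5e9))
```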
It has also been hypothesized that once a civilization has learned of a more advanced one, its longevity could increase because it can learn from the experiences of the other.[46]
The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are relatively high and the determining factor in whether there are large or small numbers of civilizations in the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in environmental issues and his efforts to warn against the dangers of nuclear warfare.
Inserting these current estimates into the original equation, using a value of 0.1 wherever the text says someone has proposed an unspecified "low value", results in a range of N from a low of 2 to a high of 280,000,000. As study of these concepts has advanced, the range has widened at both the minimum and maximum ends.

Criticism

Criticism of the Drake equation follows mostly from the observation that several terms in the equation are largely or entirely based on conjecture. Star formation rates are on solid ground, and the incidence of planets has a sound theoretical and observational basis, but as we move from left to right in the equation, estimating each succeeding factor becomes ever more speculative. The uncertainties revolve around our understanding of the evolution of life, intelligence, and civilization, not physics. No statistical estimates are possible for some of the parameters, where only one example is known. The net result is that the equation cannot be used to draw firm conclusions of any kind, and the resulting margin of error is huge, far beyond what some consider acceptable or meaningful.[47] As Michael Crichton, a science fiction author, stated in a 2003 lecture at Caltech:[48]
The problem, of course, is that none of the terms can be known, and most cannot even be estimated. The only way to work the equation is to fill in with guesses. [...] As a result, the Drake equation can have any value from "billions and billions" to zero. An expression that can mean anything, means nothing. Speaking precisely, the Drake equation is literally meaningless...
One reply to such criticisms[49] is that even though the Drake equation currently involves speculation about unmeasured parameters, it was intended as a way to stimulate dialogue on these topics. Then the focus becomes how to proceed experimentally. Indeed, Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference.[50]

Fermi paradox

The pessimists' most telling argument in the SETI debate stems not from theory or conjecture but from an actual observation: the lack of extraterrestrial contact.[3] A civilization lasting for tens of millions of years would have plenty of time to travel anywhere in the galaxy, even at the slow speeds foreseeable with our own kind of technology. Furthermore, no confirmed signs of intelligence elsewhere have been spotted, either in our galaxy or in the more than 80 billion other galaxies of the observable universe. According to this line of thinking, the tendency to fill up all available territory seems to be a universal trait of living things, so the Earth should have already been colonized, or at least visited, but no evidence of this exists. Hence Fermi's question "Where is everybody?".[51][52] A large number of explanations have been proposed for this lack of contact; a recent book elaborated on 50 different explanations.[53] In terms of the Drake equation, the explanations can be divided into three classes.
These lines of reasoning lead to the Great Filter hypothesis,[54] which states that since there are no observed extraterrestrial civilizations, despite the vast number of stars, then some step in the process must be acting as a filter to reduce the final value. According to this view, either it is very hard for intelligent life to arise, or the lifetime of such civilizations, or the period of time they reveal their existence, must be relatively short.

In fiction and popular culture

  • Frederik Pohl's Hugo Award-winning "Fermi and Frost" cites the paradox as evidence for the short lifetime of technical civilizations—that is, the possibility that once a civilization develops the power to destroy itself (perhaps by nuclear warfare), it does.
  • Optimistic results of the equation, along with the absence of observed extraterrestrials, also serve as the backdrop for humorous suggestions, such as Terry Bisson's classic short story "They're Made Out of Meat," that there are many extraterrestrial civilizations but that they are deliberately ignoring humanity.[55]
  • In The Melancholy of Haruhi Suzumiya, the Drake equation is briefly flashed during the opening theme song, a reference to Haruhi's intention to find aliens among other things.
  • The equation was cited by Gene Roddenberry as supporting the multiplicity of inhabited planets shown in Star Trek, the television show he created. However, Roddenberry did not have the equation with him, and he was forced to "invent" it for his original proposal.[56] The invented equation created by Roddenberry is:
$ Ff^2(MgE) - C^1 Ri^1 \cdot M = L/So $

Drake has gently pointed out, however, that a number raised to the first power is merely the number itself. A poster with both versions of the equation was seen in the Star Trek: Voyager episode "Future's End."
  • The equation is also cited in Michael Crichton's Sphere.
  • In James A. Michener's novel Space, several of the characters gather to discuss the equation and ponder its implications.
  • In the evolution-based game Spore, after eventually coming into contact with living beings on other planets, a picture is shown, along with the comment, "Drake's Equation was right...a living alien race!"
  • George Alec Effinger's short story "One" uses an expedition confident in the Drake equation as a backdrop to explore the psychological implications of a lone humanity.
  • Alastair Reynolds' Revelation Space trilogy and short stories focus very much on the Drake equation and the Fermi paradox, using genocidal self-replicating machines as a great filter.
  • Stephen Baxter's Manifold Trilogy explores the Drake equation and the Fermi paradox in three distinct perspectives.
  • Ian R. MacLeod's 2001 novel "New Light On The Drake Equation" concerns a man who is obsessed by the Drake equation.
  • The Ultimate Marvel comic book mini-series Ultimate Secret has Reed Richards examining the Drake equation and considering the Fermi paradox. He believes that advanced civilizations destroy themselves. In the story, it turns out that they are also destroyed by Gah Lak Tus.
  • Eleanor Ann Arroway paraphrases the Drake equation several times in the film Contact, using the magnitude of N* and its implications for the output value to justify the SETI program.
  • The band Carbon Based Lifeforms mention the Drake equation in their song "Abiogenesis" in their 2006 album World of Sleepers.[57]
  • The Drake equation has also been cited by Bill Bryson in his book titled A Short History of Nearly Everything (2003).
  • Mentioned by a character in George's Cosmic Treasure Hunt by Lucy and Stephen Hawking. (2009)
  • Mentioned in episode 602 (2009) "The Truth Is Out There" of the BBC series New Tricks.
  • Mentioned in Jupiter War, the third book of the Owner trilogy by Neal Asher, as a problem the Owner would investigate in the future. (2013)
  • The July 2013 issue of Popular Science, as a sidebar to an article about the Daleks of Doctor Who, includes an adaptation of the Drake equation, modified to include an additional factor dubbed the "Dalek Variable", rendering the equation thus:
$ N = R_{\ast} \cdot f_p \cdot n_e \cdot f_\ell \cdot f_i \cdot f_c \cdot L \cdot f_d $

The added variable at the end is defined as the "fraction of those civilizations that can survive an alien attack." (Note: in the article, the first variable is presented with the asterisk as superscript.)[58]
