
Tuesday, September 4, 2018

Interferometry

From Wikipedia, the free encyclopedia
 
Figure 1. The light path through a Michelson interferometer. The two light rays with a common source combine at the half-silvered mirror to reach the detector. They may either interfere constructively (strengthening in intensity) if their light waves arrive in phase, or interfere destructively (weakening in intensity) if they arrive out of phase, depending on the exact distances between the three mirrors.

Interferometry is a family of techniques in which waves, usually electromagnetic waves, are superimposed, causing the phenomenon of interference, in order to extract information. Interferometry is an important investigative technique in the fields of astronomy, fiber optics, engineering metrology, optical metrology, oceanography, seismology, spectroscopy (and its applications to chemistry), quantum mechanics, nuclear and particle physics, plasma physics, remote sensing, biomolecular interactions, surface profiling, microfluidics, mechanical stress/strain measurement, velocimetry, and optometry.


Interferometers are widely used in science and industry for the measurement of small displacements, refractive index changes and surface irregularities. In an interferometer, light from a single source is split into two beams that travel different optical paths, then combined again to produce interference. The resulting interference fringes give information about the difference in optical path length. In analytical science, interferometers are used to measure lengths and the shape of optical components with nanometer precision; they are the highest-precision length-measuring instruments in existence. In Fourier transform spectroscopy they are used to analyze light containing features of absorption or emission associated with a substance or mixture. An astronomical interferometer consists of two or more separate telescopes that combine their signals, offering a resolution equivalent to that of a telescope of diameter equal to the largest separation between its individual elements.

Basic principles

Figure 2. Formation of fringes in a Michelson interferometer
 
Figure 3. Colored and monochromatic fringes in a Michelson interferometer: (a) White light fringes where the two beams differ in the number of phase inversions; (b) White light fringes where the two beams have experienced the same number of phase inversions; (c) Fringe pattern using monochromatic light (sodium D lines)
 
Interferometry makes use of the principle of superposition to combine waves in a way that will cause the result of their combination to have some meaningful property that is diagnostic of the original state of the waves. This works because when two waves with the same frequency combine, the resulting intensity pattern is determined by the phase difference between the two waves: waves that are in phase will undergo constructive interference while waves that are out of phase will undergo destructive interference. Waves that are neither completely in phase nor completely out of phase will have an intermediate intensity pattern, which can be used to determine their relative phase difference. Most interferometers use light or some other form of electromagnetic wave.


Typically (see Fig. 1, the well-known Michelson configuration) a single incoming beam of coherent light will be split into two identical beams by a beam splitter (a partially reflecting mirror). Each of these beams travels a different route, called a path, and they are recombined before arriving at a detector. The path difference, the difference in the distance traveled by each beam, creates a phase difference between them. It is this introduced phase difference that creates the interference pattern between the initially identical waves. If a single beam has been split along two paths, then the phase difference is diagnostic of anything that changes the phase along the paths. This could be a physical change in the path length itself or a change in the refractive index along the path.
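
To make the relationship between path difference and detected intensity concrete, the following minimal sketch (not from the original article) computes the combined intensity of two idealized, equal-amplitude coherent beams as a function of optical path difference; the 633 nm wavelength is just an illustrative choice.

```python
import numpy as np

# Minimal sketch: intensity at the detector of an idealized two-beam
# interferometer as a function of optical path difference (OPD).
# Assumes two equal-amplitude, perfectly coherent beams (illustrative only).

wavelength = 633e-9          # HeNe laser wavelength in metres (example value)
I0 = 1.0                     # intensity of each individual beam (arbitrary units)

def detector_intensity(opd_m):
    """Combined intensity for a given optical path difference in metres."""
    phase_difference = 2 * np.pi * opd_m / wavelength
    return 2 * I0 * (1 + np.cos(phase_difference))

for opd in (0, wavelength / 4, wavelength / 2):
    print(f"OPD = {opd:.2e} m -> I = {detector_intensity(opd):.2f}")
# OPD = 0   -> constructive interference (maximum intensity)
# OPD = λ/2 -> destructive interference (minimum intensity)
```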

As seen in Fig. 2a and 2b, the observer has a direct view of mirror M1 seen through the beam splitter, and sees a reflected image M'2 of mirror M2. The fringes can be interpreted as the result of interference between light coming from the two virtual images S'1 and S'2 of the original source S. The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 2a, the optical elements are oriented so that S'1 and S'2 are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to M1 and M'2. If, as in Fig. 2b, M1 and M'2 are tilted with respect to each other, the interference fringes will generally take the shape of conic sections (hyperbolas), but if M1 and M'2 overlap, the fringes near the axis will be straight, parallel, and equally spaced. If S is an extended source rather than a point source as illustrated, the fringes of Fig. 2a must be observed with a telescope set at infinity, while the fringes of Fig. 2b will be localized on the mirrors.

Use of white light will result in a pattern of colored fringes (see Fig. 3). The central fringe representing equal path length may be light or dark depending on the number of phase inversions experienced by the two beams as they traverse the optical system.

Categories

Interferometers and interferometric techniques may be categorized by a variety of criteria:

Homodyne versus heterodyne detection

In homodyne detection, the interference occurs between two beams at the same wavelength (or carrier frequency). The phase difference between the two beams results in a change in the intensity of the light on the detector. The resulting intensity of the light after mixing of these two beams is measured, or the pattern of interference fringes is viewed or recorded. Most of the interferometers discussed in this article fall into this category.

The heterodyne technique is used for (1) shifting an input signal into a new frequency range as well as (2) amplifying a weak input signal (assuming use of an active mixer). A weak input signal of frequency f1 is mixed with a strong reference frequency f2 from a local oscillator (LO). The nonlinear combination of the input signals creates two new signals, one at the sum f1 + f2 of the two frequencies, and the other at the difference f1 − f2. These new frequencies are called heterodynes. Typically only one of the new frequencies is desired, and the other signal is filtered out of the output of the mixer. The output signal will have an intensity proportional to the product of the amplitudes of the input signals.
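
As a rough numerical illustration of the mixing described above (all frequencies and amplitudes are invented for the example), multiplying a weak tone with a strong local-oscillator tone and inspecting the spectrum shows exactly the sum and difference components:

```python
import numpy as np

# Illustrative sketch of heterodyne mixing: multiplying (a nonlinear operation)
# a weak signal at f1 with a strong local oscillator at f2 produces components
# at f1 + f2 and f1 - f2. Frequencies and amplitudes are arbitrary examples.

fs = 10_000                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # 1 second of samples
f1, f2 = 1200.0, 1000.0          # signal and local-oscillator frequencies, Hz

signal = 0.01 * np.sin(2 * np.pi * f1 * t)   # weak input signal
lo     = 1.00 * np.sin(2 * np.pi * f2 * t)   # strong local oscillator

mixed = signal * lo              # ideal multiplying mixer

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(np.sort(peaks))            # ~[ 200. 2200.]: the difference and sum frequencies
```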

The most important and widely used application of the heterodyne technique is in the superheterodyne receiver (superhet), invented by U.S. engineer Edwin Howard Armstrong in 1918. In this circuit, the incoming radio frequency signal from the antenna is mixed with a signal from a local oscillator (LO) and converted by the heterodyne technique to a lower fixed frequency signal called the intermediate frequency (IF). This IF is amplified and filtered before being applied to a detector that extracts the audio signal, which is then sent to the loudspeaker.
Optical heterodyne detection is an extension of the heterodyne technique to higher (visible) frequencies.

Double path versus common path

Figure 4. Four examples of common path interferometers

A double path interferometer is one in which the reference beam and sample beam travel along divergent paths. Examples include the Michelson interferometer, the Twyman-Green interferometer, and the Mach-Zehnder interferometer. After being perturbed by interaction with the sample under test, the sample beam is recombined with the reference beam to create an interference pattern which can then be interpreted.

A common path interferometer is a class of interferometer in which the reference beam and sample beam travel along the same path. Fig. 4 illustrates the Sagnac interferometer, the fibre optic gyroscope, the point diffraction interferometer, and the lateral shearing interferometer. Other examples of common path interferometer include the Zernike phase contrast microscope, Fresnel's biprism, the zero-area Sagnac, and the scatterplate interferometer.

Wavefront splitting versus amplitude splitting

A wavefront splitting interferometer divides a light wavefront emerging from a point or a narrow slit (i.e. spatially coherent light) and, after allowing the two parts of the wavefront to travel through different paths, allows them to recombine. Fig. 5 illustrates Young's interference experiment and Lloyd's mirror. Other examples of wavefront splitting interferometer include the Fresnel biprism, the Billet Bi-Lens, and the Rayleigh interferometer.

Figure 5. Two wavefront splitting interferometers

In 1803, Young's interference experiment played a major role in the general acceptance of the wave theory of light. If white light is used in Young's experiment, the result is a white central band of constructive interference corresponding to equal path length from the two slits, surrounded by a symmetrical pattern of colored fringes of diminishing intensity. In addition to continuous electromagnetic radiation, Young's experiment has been performed with individual photons, with electrons, and with buckyball molecules large enough to be seen under an electron microscope.

Lloyd's mirror generates interference fringes by combining direct light from a source (blue lines) and light from the source's reflected image (red lines) from a mirror held at grazing incidence. The result is an asymmetrical pattern of fringes. The band of equal path length, nearest the mirror, is dark rather than bright. In 1834, Humphrey Lloyd interpreted this effect as proof that the phase of a front-surface reflected beam is inverted.

An amplitude splitting interferometer uses a partial reflector to divide the amplitude of the incident wave into separate beams which are separated and recombined. Fig. 6 illustrates the Fizeau, Mach–Zehnder and Fabry–Pérot interferometers. Other examples of amplitude splitting interferometer include the Michelson, Twyman–Green, Laser Unequal Path, and Linnik interferometer.

Figure 6. Three amplitude-splitting interferometers: Fizeau, Mach–Zehnder, and Fabry–Pérot

The Fizeau interferometer is shown as it might be set up to test an optical flat. A precisely figured reference flat is placed on top of the flat being tested, separated by narrow spacers. The reference flat is slightly beveled (only a fraction of a degree of beveling is necessary) to prevent the rear surface of the flat from producing interference fringes. Separating the test and reference flats allows the two flats to be tilted with respect to each other. By adjusting the tilt, which adds a controlled phase gradient to the fringe pattern, one can control the spacing and direction of the fringes, so that one may obtain an easily interpreted series of nearly parallel fringes rather than a complex swirl of contour lines. Separating the plates, however, necessitates that the illuminating light be collimated. Fig 6 shows a collimated beam of monochromatic light illuminating the two flats and a beam splitter allowing the fringes to be viewed on-axis.

The Mach–Zehnder interferometer is a more versatile instrument than the Michelson interferometer. Each of the well separated light paths is traversed only once, and the fringes can be adjusted so that they are localized in any desired plane. Typically, the fringes would be adjusted to lie in the same plane as the test object, so that fringes and test object can be photographed together. If it is decided to produce fringes in white light, then, since white light has a limited coherence length, on the order of micrometers, great care must be taken to equalize the optical paths or no fringes will be visible. As illustrated in Fig. 6, a compensating cell would be placed in the path of the reference beam to match the test cell. Note also the precise orientation of the beam splitters. The reflecting surfaces of the beam splitters would be oriented so that the test and reference beams pass through an equal amount of glass. In this orientation, the test and reference beams each experience two front-surface reflections, resulting in the same number of phase inversions. The result is that light traveling an equal optical path length in the test and reference beams produces a white light fringe of constructive interference.

The heart of the Fabry–Pérot interferometer is a pair of partially silvered glass optical flats spaced several millimeters to centimeters apart with the silvered surfaces facing each other. (Alternatively, a Fabry–Pérot etalon uses a transparent plate with two parallel reflecting surfaces.) As with the Fizeau interferometer, the flats are slightly beveled. In a typical system, illumination is provided by a diffuse source set at the focal plane of a collimating lens. A focusing lens produces what would be an inverted image of the source if the paired flats were not present; i.e. in the absence of the paired flats, all light emitted from point A passing through the optical system would be focused at point A'. In Fig. 6, only one ray emitted from point A on the source is traced. As the ray passes through the paired flats, it is multiply reflected to produce multiple transmitted rays which are collected by the focusing lens and brought to point A' on the screen. The complete interference pattern takes the appearance of a set of concentric rings. The sharpness of the rings depends on the reflectivity of the flats. If the reflectivity is high, resulting in a high Q factor (i.e. high finesse), monochromatic light produces a set of narrow bright rings against a dark background.[19] In Fig. 6, the low-finesse image corresponds to a reflectivity of 0.04 (i.e. unsilvered surfaces) versus a reflectivity of 0.95 for the high-finesse image.
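
The dependence of ring sharpness on reflectivity can be illustrated with the standard Airy transmission function for an idealized, lossless Fabry–Pérot cavity. This is a generic textbook sketch, not a model of the specific instrument described above; it uses the two reflectivity values quoted for the low- and high-finesse images.

```python
import numpy as np

# Sketch of the Fabry-Perot (Airy) transmission function, illustrating why
# higher mirror reflectivity gives sharper (higher-finesse) fringes.
# Assumes lossless mirrors; delta is the round-trip phase between the flats.

def airy_transmission(delta, reflectivity):
    F = 4 * reflectivity / (1 - reflectivity) ** 2   # "coefficient of finesse"
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

delta = np.linspace(0, 4 * np.pi, 2000)
for R in (0.04, 0.95):                      # values quoted for the low/high-finesse images
    T = airy_transmission(delta, R)
    finesse = np.pi * np.sqrt(R) / (1 - R)  # approximate reflectivity finesse
    print(f"R = {R:.2f}: finesse ~ {finesse:.1f}, "
          f"fraction of phase range with T > 0.5: {np.mean(T > 0.5):.2f}")
# High R -> transmission peaks become narrow bright rings on a dark background.
```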

Michelson and Morley (1887) and other early experimentalists who used interferometric techniques in an attempt to measure the properties of the luminiferous aether used monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even though the interferometer might be set up in a basement. Since the fringes would occasionally disappear due to vibrations from passing horse traffic, distant thunderstorms and the like, it would be easy for an observer to "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulty of aligning the apparatus imposed by its low coherence length. This was an early example of the use of white light to resolve the "2 pi ambiguity".

Applications

Physics and astronomy

In physics, one of the most important experiments of the late 19th century was the famous "failed experiment" of Michelson and Morley, which provided evidence for special relativity. Recent repetitions of the Michelson–Morley experiment perform heterodyne measurements of beat frequencies of crossed cryogenic optical resonators. Fig. 7 illustrates a resonator experiment performed by Müller et al. in 2003. Two optical resonators constructed from crystalline sapphire, controlling the frequencies of two lasers, were set at right angles within a helium cryostat. A frequency comparator measured the beat frequency of the combined outputs of the two resonators. As of 2009, the precision with which anisotropy of the speed of light can be excluded in resonator experiments is at the 10⁻¹⁷ level.

Michelson interferometers are used in tunable narrow band optical filters and as the core hardware component of Fourier transform spectrometers.

When used as a tunable narrow band filter, Michelson interferometers exhibit a number of advantages and disadvantages when compared with competing technologies such as Fabry–Pérot interferometers or Lyot filters. Michelson interferometers have the largest field of view for a specified wavelength, and are relatively simple in operation, since tuning is via mechanical rotation of waveplates rather than via high voltage control of piezoelectric crystals or lithium niobate optical modulators as used in a Fabry–Pérot system. Compared with Lyot filters, which use birefringent elements, Michelson interferometers have a relatively low temperature sensitivity. On the negative side, Michelson interferometers have a relatively restricted wavelength range and require use of prefilters which restrict transmittance.


Fabry–Pérot thin-film etalons are used in narrow bandpass filters capable of selecting a single spectral line for imaging; for example, the H-alpha line or the Ca-K line of the Sun or stars. Fig. 10 shows an Extreme ultraviolet Imaging Telescope (EIT) image of the Sun at 195 Ångströms, corresponding to a spectral line of multiply-ionized iron atoms. EIT used multilayer reflective mirrors coated with alternating layers of a light "spacer" element (such as silicon) and a heavy "scatterer" element (such as molybdenum). Approximately 100 layers of each type were placed on each mirror, with a thickness of around 10 nm each. The layer thicknesses were tightly controlled so that at the desired wavelength, reflected photons from each layer interfered constructively.

The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses two 4-km Michelson-Fabry-Pérot interferometers for the detection of gravitational waves. In this application, the Fabry–Pérot cavity is used to store photons for almost a millisecond while they bounce up and down between the mirrors. This increases the time a gravitational wave can interact with the light, which results in a better sensitivity at low frequencies. Smaller cavities, usually called mode cleaners, are used for spatial filtering and frequency stabilization of the main laser. The first observation of gravitational waves occurred on September 14, 2015.

The Mach-Zehnder interferometer's relatively large and freely accessible working space, and its flexibility in locating the fringes has made it the interferometer of choice for visualizing flow in wind tunnels, and for flow visualization studies in general. It is frequently used in the fields of aerodynamics, plasma physics and heat transfer to measure pressure, density, and temperature changes in gases.

Mach-Zehnder interferometers are also used to study one of the most counterintuitive predictions of quantum mechanics, the phenomenon known as quantum entanglement.

Figure 11. The VLA interferometer

An astronomical interferometer achieves high-resolution observations using the technique of aperture synthesis, mixing signals from a cluster of comparatively small telescopes rather than a single very expensive monolithic telescope.

Early radio telescope interferometers used a single baseline for measurement. Later astronomical interferometers, such as the Very Large Array illustrated in Fig 11, used arrays of telescopes arranged in a pattern on the ground. A limited number of baselines will result in insufficient coverage. This was alleviated by using the rotation of the Earth to rotate the array relative to the sky. Thus, a single baseline could measure information in multiple orientations by taking repeated measurements, a technique called Earth-rotation synthesis. Baselines thousands of kilometers long were achieved using very long baseline interferometry.
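
A rough, idealized sketch of Earth-rotation synthesis (assuming a simple east-west baseline and ignoring many practical details) shows how one physical baseline sweeps out many projected orientations, i.e. many (u, v) samples, as the hour angle changes; the baseline length, wavelength and declination below are arbitrary examples.

```python
import numpy as np

# Rough sketch of Earth-rotation synthesis for a single east-west baseline.
# As the Earth turns, the baseline's projection toward the source (its u-v
# coordinates, in wavelengths) traces an ellipse, so one physical baseline
# samples many orientations. Idealized geometry; values are arbitrary examples.

baseline_m = 1000.0          # physical baseline length
wavelength_m = 0.21          # 21 cm hydrogen line, as an example
declination = np.radians(45) # source declination

hour_angle = np.radians(np.linspace(-90, 90, 13))   # a 12-hour observation
L = baseline_m / wavelength_m                        # baseline in wavelengths

u = L * np.cos(hour_angle)                   # east-west projected component
v = L * np.sin(hour_angle) * np.sin(declination)

for h, uu, vv in zip(np.degrees(hour_angle), u, v):
    print(f"hour angle {h:+6.1f} deg -> (u, v) = ({uu:9.1f}, {vv:9.1f}) wavelengths")
```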

ALMA is an astronomical interferometer located on the Chajnantor Plateau

Astronomical optical interferometry has had to overcome a number of technical issues not shared by radio telescope interferometry. The short wavelengths of light necessitate extreme precision and stability of construction. For example, spatial resolution of 1 milliarcsecond requires 0.5 µm stability in a 100 m baseline. Optical interferometric measurements require high sensitivity, low noise detectors that did not become available until the late 1990s. Astronomical "seeing", the turbulence that causes stars to twinkle, introduces rapid, random phase changes in the incoming light, requiring kilohertz data collection rates to be faster than the rate of turbulence. Despite these technical difficulties, roughly a dozen astronomical optical interferometers are now in operation offering resolutions down to the fractional milliarcsecond range. This linked video shows a movie assembled from aperture synthesis images of the Beta Lyrae system, a binary star system approximately 960 light-years (290 parsecs) away in the constellation Lyra, as observed by the CHARA array with the MIRC instrument. The brighter component is the primary star, or the mass donor. The fainter component is the thick disk surrounding the secondary star, or the mass gainer. The two components are separated by 1 milli-arcsecond. Tidal distortions of the mass donor and the mass gainer are both clearly visible.
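
The figures quoted above can be checked with a little order-of-magnitude arithmetic, assuming the usual estimates: resolution θ ≈ λ/B and a required path stability of roughly θ·B.

```python
import numpy as np

# Quick check of the numbers quoted above (illustrative, order-of-magnitude only).
# Angular resolution of an interferometer is roughly theta ~ lambda / B, and
# holding the fringe phase for a source offset of theta requires the optical
# path to be stable to about theta * B.

mas = np.radians(1 / 3600.0 / 1000.0)    # 1 milliarcsecond in radians
baseline = 100.0                          # metres

print(f"1 mas = {mas:.2e} rad")
print(f"required path stability ~ theta * B = {mas * baseline * 1e6:.2f} micrometres")
# ~0.48 um, consistent with the ~0.5 um figure quoted for a 100 m baseline.

wavelength = 550e-9                       # visible light, metres (example)
print(f"baseline needed for 1 mas at 550 nm ~ {wavelength / mas:.0f} m")
# ~113 m: milliarcsecond resolutions indeed require baselines of order 100 m.
```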

The wave character of matter can be exploited to build interferometers. The first examples of matter interferometers were electron interferometers, later followed by neutron interferometers. Around 1990 the first atom interferometers were demonstrated, later followed by interferometers employing molecules.

Electron holography is an imaging technique that photographically records the electron interference pattern of an object, which is then reconstructed to yield a greatly magnified image of the original object. This technique was developed to enable greater resolution in electron microscopy than is possible using conventional imaging techniques. The resolution of conventional electron microscopy is not limited by electron wavelength, but by the large aberrations of electron lenses.

Neutron interferometry has been used to investigate the Aharonov–Bohm effect, to examine the effects of gravity acting on an elementary particle, and to demonstrate a strange behavior of fermions that is at the basis of the Pauli exclusion principle: Unlike macroscopic objects, when fermions are rotated by 360° about any axis, they do not return to their original state, but develop a minus sign in their wave function. In other words, a fermion needs to be rotated 720° before returning to its original state.

Atom interferometry techniques are reaching sufficient precision to allow laboratory-scale tests of general relativity.

Interferometers are used in atmospheric physics for high-precision measurements of trace gases via remote sounding of the atmosphere. There are several examples of interferometers that utilize either absorption or emission features of trace gases. A typical use would be in continual monitoring of the column concentration of trace gases such as ozone and carbon monoxide above the instrument.

Engineering and applied science

Figure 13. Optical flat interference fringes
How interference fringes are formed by an optical flat resting on a reflective surface. The gap between the surfaces and the wavelength of the light waves are greatly exaggerated.

Newton (test plate) interferometry is frequently used in the optical industry for testing the quality of surfaces as they are being shaped and figured. Fig. 13 shows photos of reference flats being used to check two test flats at different stages of completion, showing the different patterns of interference fringes. The reference flats are resting with their bottom surfaces in contact with the test flats, and they are illuminated by a monochromatic light source. The light waves reflected from both surfaces interfere, resulting in a pattern of bright and dark bands. The surface in the left photo is nearly flat, indicated by a pattern of straight parallel interference fringes at equal intervals. The surface in the right photo is uneven, resulting in a pattern of curved fringes. Each pair of adjacent fringes represents a difference in surface elevation of half a wavelength of the light used, so differences in elevation can be measured by counting the fringes. The flatness of the surfaces can be measured to millionths of an inch by this method. To determine whether the surface being tested is concave or convex with respect to the reference optical flat, any of several procedures may be adopted. One can observe how the fringes are displaced when one presses gently on the top flat. If one observes the fringes in white light, the sequence of colors becomes familiar with experience and aids in interpretation. Finally one may compare the appearance of the fringes as one moves one's head from a normal to an oblique viewing position. These sorts of maneuvers, while common in the optical shop, are not suitable in a formal testing environment. When the flats are ready for sale, they will typically be mounted in a Fizeau interferometer for formal testing and certification.
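
A small worked example of the fringe-counting arithmetic described above (the wavelength and fringe count are invented for illustration):

```python
# Sketch of how fringe counting on an optical flat translates into surface height.
# Each adjacent pair of fringes corresponds to lambda/2 of gap change.
# The wavelength below (helium d-line region) is just an example value.

wavelength_nm = 587.6            # example test wavelength in nanometres
fringes_counted = 3              # e.g. three fringes of curvature across the part

height_difference_nm = fringes_counted * wavelength_nm / 2
height_microinches = height_difference_nm / 25.4    # 1 microinch = 25.4 nm

print(f"{fringes_counted} fringes -> {height_difference_nm:.0f} nm "
      f"~ {height_microinches:.1f} millionths of an inch")
# 3 fringes -> about 881 nm, i.e. roughly 35 microinches of departure from flat.
```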

Fabry-Pérot etalons are widely used in telecommunications, lasers and spectroscopy to control and measure the wavelengths of light. Dichroic filters are multiple layer thin-film etalons. In telecommunications, wavelength-division multiplexing, the technology that enables the use of multiple wavelengths of light through a single optical fiber, depends on filtering devices that are thin-film etalons. Single-mode lasers employ etalons to suppress all optical cavity modes except the single one of interest.

Figure 14. Twyman-Green Interferometer

The Twyman–Green interferometer, invented by Twyman and Green in 1916, is a variant of the Michelson interferometer widely used to test optical components. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator. Michelson (1918) criticized the Twyman-Green configuration as being unsuitable for the testing of large optical components, since the light sources available at the time had limited coherence length. Michelson pointed out that constraints on geometry forced by limited coherence length required the use of a reference mirror of equal size to the test mirror, making the Twyman-Green impractical for many purposes. Decades later, the advent of laser light sources answered Michelson's objections. (A Twyman-Green interferometer using a laser light source and unequal path length is known as a Laser Unequal Path Interferometer, or LUPI.) Fig. 14 illustrates a Twyman-Green interferometer set up to test a lens. Light from a monochromatic point source is expanded by a diverging lens (not shown), then is collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis.

Mach-Zehnder interferometers are being used in integrated optical circuits, in which light interferes between two branches of a waveguide that are externally modulated to vary their relative phase. A slight tilt of one of the beam splitters will result in a path difference and a change in the interference pattern. Mach-Zehnder interferometers are the basis of a wide variety of devices, from RF modulators to sensors to optical switches.

The latest proposed extremely large astronomical telescopes, such as the Thirty Meter Telescope and the Extremely Large Telescope, will be of segmented design. Their primary mirrors will be built from hundreds of hexagonal mirror segments. Polishing and figuring these highly aspheric and non-rotationally symmetric mirror segments presents a major challenge. Traditional means of optical testing compares a surface against a spherical reference with the aid of a null corrector. In recent years, computer-generated holograms (CGHs) have begun to supplement null correctors in test setups for complex aspheric surfaces. Fig. 15 illustrates how this is done. Unlike the figure, actual CGHs have line spacing on the order of 1 to 10 µm. When laser light is passed through the CGH, the zero-order diffracted beam experiences no wavefront modification. The wavefront of the first-order diffracted beam, however, is modified to match the desired shape of the test surface. In the illustrated Fizeau interferometer test setup, the zero-order diffracted beam is directed towards the spherical reference surface, and the first-order diffracted beam is directed towards the test surface in such a way that the two reflected beams combine to form interference fringes. The same test setup can be used for the innermost mirrors as for the outermost, with only the CGH needing to be exchanged.

Figure 15. Optical testing with a Fizeau interferometer and a computer-generated hologram
 
Ring laser gyroscopes (RLGs) and fibre optic gyroscopes (FOGs) are interferometers used in navigation systems. They operate on the principle of the Sagnac effect. The distinction between RLGs and FOGs is that in an RLG, the entire ring is part of the laser, while in a FOG, an external laser injects counter-propagating beams into an optical fiber ring, and rotation of the system then causes a relative phase shift between those beams. In an RLG, the observed phase shift is proportional to the accumulated rotation, while in a FOG, the observed phase shift is proportional to the angular velocity.
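
A hedged sketch of the Sagnac phase shift for a fibre optic gyroscope, using the commonly quoted approximation Δφ ≈ 2πLDΩ/(λc) for a coil of fibre length L and diameter D; the parameter values are arbitrary examples, not those of any particular instrument.

```python
import numpy as np

# Hedged sketch of the Sagnac effect in a fibre optic gyroscope (FOG).
# For a coil of fibre length L and diameter D rotating at angular rate Omega,
# the counter-propagating beams acquire a phase difference of approximately
#   delta_phi = 2 * pi * L * D * Omega / (lambda * c).
# All parameter values below are arbitrary, illustrative choices.

c = 299_792_458.0        # speed of light, m/s
wavelength = 1550e-9     # telecom-band source, m
L = 1000.0               # fibre length, m
D = 0.10                 # coil diameter, m

earth_rate = 7.292e-5    # Earth's rotation rate, rad/s

def sagnac_phase(omega_rad_s):
    return 2 * np.pi * L * D * omega_rad_s / (wavelength * c)

print(f"phase shift at Earth rate: {sagnac_phase(earth_rate):.2e} rad")
print(f"phase shift at 1 deg/s:    {sagnac_phase(np.radians(1.0)):.2e} rad")
# The FOG reads angular velocity directly; a ring laser gyro instead integrates
# rotation, so its output is proportional to the accumulated angle.
```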

In telecommunication networks, heterodyning is used to move frequencies of individual signals to different channels which may share a single physical transmission line. This is called frequency division multiplexing (FDM). For example, a coaxial cable used by a cable television system can carry 500 television channels at the same time because each one is given a different frequency, so they don't interfere with one another. Continuous wave (CW) Doppler radar detectors are basically heterodyne detection devices that compare transmitted and reflected beams.

Optical heterodyne detection is used for coherent Doppler lidar measurements capable of detecting very weak light scattered in the atmosphere and monitoring wind speeds with high accuracy. It has application in optical fiber communications, in various high resolution spectroscopic techniques, and the self-heterodyne method can be used to measure the linewidth of a laser.

Figure 16. Frequency comb of a mode-locked laser. The dashed lines represent an extrapolation of the mode frequencies towards the frequency of the carrier–envelope offset (CEO). The vertical grey line represents an unknown optical frequency. The horizontal black lines indicate the two lowest beat frequency measurements.

Optical heterodyne detection is an essential technique used in high-accuracy measurements of the frequencies of optical sources, as well as in the stabilization of their frequencies. Until relatively recently, lengthy frequency chains were needed to connect the microwave frequency of a cesium or other atomic time source to optical frequencies. At each step of the chain, a frequency multiplier would be used to produce a harmonic of the frequency of that step, which would be compared by heterodyne detection with the next step (the output of a microwave source, far infrared laser, infrared laser, or visible laser). Each measurement of a single spectral line required several years of effort in the construction of a custom frequency chain. Currently, optical frequency combs have provided a much simpler method of measuring optical frequencies. If a mode-locked laser is modulated to form a train of pulses, its spectrum is seen to consist of the carrier frequency surrounded by a closely spaced comb of optical sideband frequencies with a spacing equal to the pulse repetition frequency (Fig. 16). The pulse repetition frequency is locked to that of the frequency standard, and the frequencies of the comb elements at the red end of the spectrum are doubled and heterodyned with the frequencies of the comb elements at the blue end of the spectrum, thus allowing the comb to serve as its own reference. In this manner, locking of the frequency comb output to an atomic standard can be performed in a single step. To measure an unknown frequency, the frequency comb output is dispersed into a spectrum. The unknown frequency is overlapped with the appropriate spectral segment of the comb and the frequency of the resultant heterodyne beats is measured.
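
The self-referencing scheme described above can be sketched numerically. Writing comb line n as f_n = f_CEO + n·f_rep, doubling a line from the red end of an octave-spanning spectrum and beating it against the line with twice the mode number isolates the carrier-envelope offset; every number below is invented for illustration.

```python
# Sketch of f-2f self-referencing of a frequency comb, as described above.
# Comb line n sits at f_n = f_ceo + n * f_rep. Frequency-doubling a line from
# the red end (2 * f_n) and beating it against the line 2n from the blue end
# gives a beat at exactly f_ceo, tying the comb to the repetition rate alone.
# All values are invented for illustration.

f_rep = 250e6          # pulse repetition rate, Hz (locked to an atomic clock)
f_ceo = 35e6           # unknown carrier-envelope offset we want to measure, Hz

def comb_line(n):
    return f_ceo + n * f_rep

n_red = 1_000_000                       # a line near the red end of the octave
doubled_red = 2 * comb_line(n_red)      # after a second-harmonic crystal
blue = comb_line(2 * n_red)             # the comb line at twice the mode number

beat = abs(doubled_red - blue)
print(f"measured beat = {beat/1e6:.1f} MHz (equals f_ceo = {f_ceo/1e6:.1f} MHz)")

# Measuring an unknown optical frequency then reduces to finding the nearest
# comb line and measuring a radio-frequency beat against it:
unknown = comb_line(1_234_567) + 12e6   # pretend laser 12 MHz above line 1,234,567
n_nearest = round((unknown - f_ceo) / f_rep)
print(f"unknown ~ line {n_nearest} + {(unknown - comb_line(n_nearest))/1e6:.1f} MHz beat")
```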

One of the most common industrial applications of optical interferometry is as a versatile measurement tool for the high precision examination of surface topography. Popular interferometric measurement techniques include Phase Shifting Interferometry (PSI) and Vertical Scanning Interferometry (VSI), also known as scanning white light interferometry (SWLI) or by the ISO term Coherence Scanning Interferometry (CSI). CSI exploits coherence to extend the range of capabilities for interference microscopy. These techniques are widely used in micro-electronic and micro-optic fabrication. PSI uses monochromatic light and provides very precise measurements; however, it is only usable for surfaces that are very smooth. CSI often uses white light and high numerical apertures, and rather than looking at the phase of the fringes, as does PSI, looks for the best position of maximum fringe contrast or some other feature of the overall fringe pattern. In its simplest form, CSI provides less precise measurements than PSI but can be used on rough surfaces. Some configurations of CSI, variously known as Enhanced VSI (EVSI), high-resolution SWLI or Frequency Domain Analysis (FDA), use coherence effects in combination with interference phase to enhance precision.

Figure 17. Phase shifting and coherence scanning interferometers

Phase Shifting Interferometry addresses several issues associated with the classical analysis of static interferograms. Classically, one measures the positions of the fringe centers. As seen in Fig. 13, fringe deviations from straightness and equal spacing provide a measure of the aberration. Errors in determining the location of the fringe centers provide the inherent limit to precision of the classical analysis, and any intensity variations across the interferogram will also introduce error. There is a trade-off between precision and number of data points: closely spaced fringes provide many data points of low precision, while widely spaced fringes provide a low number of high precision data points. Since fringe center data is all that one uses in the classical analysis, all of the other information that might theoretically be obtained by detailed analysis of the intensity variations in an interferogram is thrown away. Finally, with static interferograms, additional information is needed to determine the polarity of the wavefront: In Fig. 13, one can see that the tested surface on the right deviates from flatness, but one cannot tell from this single image whether this deviation from flatness is concave or convex. Traditionally, this information would be obtained using non-automated means, such as by observing the direction that the fringes move when the reference surface is pushed.

Phase shifting interferometry overcomes these limitations by not relying on finding fringe centers, but rather by collecting intensity data from every point of the CCD image sensor. As seen in Fig. 17, multiple interferograms (at least three) are analyzed with the reference optical surface shifted by a precise fraction of a wavelength between each exposure using a piezoelectric transducer (PZT). Alternatively, precise phase shifts can be introduced by modulating the laser frequency. The captured images are processed by a computer to calculate the optical wavefront errors. The precision and reproducibility of PSI is far greater than possible in static interferogram analysis, with measurement repeatabilities of a hundredth of a wavelength being routine. Phase shifting technology has been adapted to a variety of interferometer types such as Twyman-Green, Mach–Zehnder, laser Fizeau, and even common path configurations such as point diffraction and lateral shearing interferometers.[71][73] More generally, phase shifting techniques can be adapted to almost any system that uses fringes for measurement, such as holographic and speckle interferometry.
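
A minimal sketch of one standard phase shifting algorithm, the four-step method, on synthetic data: the reference is shifted by a quarter wave (π/2 of phase) between exposures, and the wavefront phase follows from an arctangent at each pixel. This illustrates the principle rather than any particular commercial implementation.

```python
import numpy as np

# Minimal sketch of four-step phase shifting interferometry on synthetic data.
# Four interferograms are recorded with the reference shifted by pi/2 of phase
# between exposures; the wavefront phase at each pixel then follows from a
# simple arctangent. One standard algorithm, used here on a made-up wavefront.

ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx] / nx
true_phase = 2 * np.pi * (0.5 * x + 0.2 * (x - 0.5) ** 2 + 0.1 * y)  # invented wavefront
bias, modulation = 1.0, 0.8

shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = [bias + modulation * np.cos(true_phase + s) for s in shifts]

measured = np.arctan2(I4 - I2, I1 - I3)          # wrapped phase, -pi..pi
wrapped_truth = np.angle(np.exp(1j * true_phase))
error = np.angle(np.exp(1j * (measured - wrapped_truth)))
print("max wrapped-phase error:", np.max(np.abs(error)))
# ~1e-15 rad: for ideal data the four frames recover the wavefront exactly.
```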

Figure 18. Lunate cells of Nepenthes khasiana visualized by Scanning White Light Interferometry (SWLI)
 
Figure 19. Twyman-Green interferometer set up as a white light scanner

In coherence scanning interferometry, interference is only achieved when the path length delays of the interferometer are matched within the coherence time of the light source. CSI monitors the fringe contrast rather than the phase of the fringes. Fig. 17 illustrates a CSI microscope using a Mirau interferometer in the objective; other forms of interferometer used with white light include the Michelson interferometer (for low magnification objectives, where the reference mirror in a Mirau objective would interrupt too much of the aperture) and the Linnik interferometer (for high magnification objectives with limited working distance). The sample (or alternatively, the objective) is moved vertically over the full height range of the sample, and the position of maximum fringe contrast is found for each pixel. The chief benefit of coherence scanning interferometry is that systems can be designed that do not suffer from the 2 pi ambiguity of coherent interferometry, and as seen in Fig. 18, which scans a 180 μm × 140 μm × 10 μm volume, it is well suited to profiling steps and rough surfaces. The axial resolution of the system is determined in part by the coherence length of the light source.[80][81] Industrial applications include in-process surface metrology, roughness measurement, 3D surface metrology in hard-to-reach spaces and in hostile environments, profilometry of surfaces with high aspect ratio features (grooves, channels, holes), and film thickness measurement (semi-conductor and optical industries, etc.).
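
A single-pixel sketch of the coherence scanning idea, assuming an invented surface height, source wavelength and coherence length: fringes appear only where the path lengths match, and a crude estimate of the envelope peak locates the surface to within a fraction of a fringe.

```python
import numpy as np

# Sketch of the coherence scanning idea for a single pixel: as the objective is
# scanned in z, fringes appear only where the path lengths match to within the
# source coherence length; the surface height is taken where the fringe
# envelope peaks. Source parameters and surface height are invented examples.

wavelength = 600e-9           # mean wavelength of the white-light source, m
coherence_length = 1.2e-6     # coherence length, m
surface_height = 2.35e-6      # true height of this pixel, m

z = np.arange(0, 10e-6, 20e-9)              # scan positions, 20 nm steps
opd = 2 * (z - surface_height)              # round-trip path mismatch
envelope = np.exp(-(opd / coherence_length) ** 2)
signal = 1 + envelope * np.cos(2 * np.pi * opd / wavelength)

# Crude envelope estimate: take the scan position of maximum fringe excursion.
# This locates the peak only to within a fraction of a fringe; real instruments
# interpolate or fit the envelope for finer results.
contrast = np.abs(signal - 1)
estimated_height = z[np.argmax(contrast)]
print(f"true height {surface_height*1e6:.3f} um, "
      f"estimated {estimated_height*1e6:.3f} um")
```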

Fig. 19 illustrates a Twyman–Green interferometer set up for white light scanning of a macroscopic object.

Holographic interferometry is a technique which uses holography to monitor small deformations in single wavelength implementations. In multi-wavelength implementations, it is used to perform dimensional metrology of large parts and assemblies and to detect larger surface defects.

Holographic interferometry was discovered by accident as a result of mistakes committed during the making of holograms. Early lasers were relatively weak and photographic plates were insensitive, necessitating long exposures during which vibrations or minute shifts might occur in the optical system. The resultant holograms, which showed the holographic subject covered with fringes, were considered ruined.

Eventually, several independent groups of experimenters in the mid-1960s realized that the fringes encoded important information about dimensional changes occurring in the subject, and began intentionally producing holographic double exposures. The main Holographic interferometry article covers the disputes over priority of discovery that occurred during the issuance of the patent for this method.

Double- and multi-exposure holography is one of three methods used to create holographic interferograms. A first exposure records the object in an unstressed state. Subsequent exposures on the same photographic plate are made while the object is subjected to some stress. The composite image depicts the difference between the stressed and unstressed states.

Real-time holography is a second method of creating holographic interferograms. A hologram of the unstressed object is created. This hologram is illuminated with a reference beam to generate a hologram image of the object directly superimposed over the original object itself while the object is being subjected to some stress. The object waves from this hologram image will interfere with new waves coming from the object. This technique allows real time monitoring of shape changes.

The third method, time-average holography, involves creating a hologram while the object is subjected to a periodic stress or vibration. This yields a visual image of the vibration pattern.

Interferometric synthetic aperture radar (InSAR) is a radar technique used in geodesy and remote sensing. Satellite synthetic aperture radar images of a geographic feature are taken on separate days, and changes that have taken place between radar images taken on the separate days are recorded as fringes similar to those obtained in holographic interferometry. The technique can monitor centimeter- to millimeter-scale deformation resulting from earthquakes, volcanoes and landslides, and also has uses in structural engineering, in particular for the monitoring of subsidence and structural stability. Fig. 20 shows Kilauea, an active volcano in Hawaii. Data acquired using the space shuttle Endeavour's X-band Synthetic Aperture Radar on April 13, 1994 and October 4, 1994 were used to generate interferometric fringes, which were overlaid on the X-SAR image of Kilauea.

Electronic speckle pattern interferometry (ESPI), also known as TV holography, uses video detection and recording to produce an image of the object upon which is superimposed a fringe pattern which represents the displacement of the object between recordings. (see Fig. 21) The fringes are similar to those obtained in holographic interferometry.

When lasers were first invented, laser speckle was considered to be a severe drawback in using lasers to illuminate objects, particularly in holographic imaging because of the grainy image produced. It was later realized that speckle patterns could carry information about the object's surface deformations. Butters and Leendertz developed the technique of speckle pattern interferometry in 1970, and since then, speckle has been exploited in a variety of other applications. A photograph is made of the speckle pattern before deformation, and a second photograph is made of the speckle pattern after deformation. Digital subtraction of the two images results in a correlation fringe pattern, where the fringes represent lines of equal deformation. Short laser pulses in the nanosecond range can be used to capture very fast transient events. A phase problem exists: In the absence of other information, one cannot tell the difference between contour lines indicating a peak versus contour lines indicating a trough. To resolve the issue of phase ambiguity, ESPI may be combined with phase shifting methods.
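
The subtraction step can be sketched with synthetic speckle data (everything below is invented for illustration): two speckle interferograms differing by a smooth deformation phase are subtracted, and the result is bright where the deformation phase is near an odd multiple of π and dark where it is near a multiple of 2π, producing the correlation fringes described above.

```python
import numpy as np

# Sketch of the subtraction step in electronic speckle pattern interferometry.
# Two speckle interferograms are recorded, before and after a deformation that
# adds a phase change delta(x, y); subtracting them leaves "correlation
# fringes" wherever delta is an odd multiple of pi. Purely synthetic example.

rng = np.random.default_rng(0)
ny, nx = 256, 256

speckle_phase = rng.uniform(0, 2 * np.pi, (ny, nx))    # random object speckle phase
x = np.linspace(0, 1, nx)
deformation_phase = 6 * np.pi * x[None, :]              # tilt-like deformation (3 fringes)

def frame(extra_phase):
    # intensity of the object speckle field interfering with a smooth reference beam
    return np.abs(1.0 + np.exp(1j * (speckle_phase + extra_phase))) ** 2

before = frame(0.0)
after = frame(deformation_phase)
correlation = np.abs(after - before)     # bright where delta ~ pi, dark where delta ~ 0

# Column-averaged profile shows the three fringes across the field:
profile = correlation.mean(axis=0)
print(np.round(profile[::32], 2))
```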

A method of establishing precise geodetic baselines, invented by Yrjö Väisälä, exploited the low coherence length of white light. Initially, white light was split in two, with the reference beam "folded", bouncing back-and-forth six times between a mirror pair spaced precisely 1 m apart. Only if the test path was precisely 6 times the reference path would fringes be seen. Repeated applications of this procedure allowed precise measurement of distances up to 864 meters. Baselines thus established were used to calibrate geodetic distance measurement equipment, leading to a metrologically traceable scale for geodetic networks measured by these instruments. (This method has been superseded by GPS.)

Other uses of interferometers have been to study dispersion of materials, measurement of complex indices of refraction, and thermal properties. They are also used for three-dimensional motion mapping including mapping vibrational patterns of structures.

Biology and medicine

Optical interferometry, applied to biology and medicine, provides sensitive metrology capabilities for the measurement of biomolecules, subcellular components, cells and tissues. Many forms of label-free biosensors rely on interferometry because the direct interaction of electromagnetic fields with local molecular polarizability eliminates the need for fluorescent tags or nanoparticle markers. At a larger scale, cellular interferometry shares aspects with phase-contrast microscopy, but comprises a much larger class of phase-sensitive optical configurations that rely on optical interference among cellular constituents through refraction and diffraction. At the tissue scale, partially-coherent forward-scattered light propagation through the micro aberrations and heterogeneity of tissue structure provides opportunities to use phase-sensitive gating (optical coherence tomography) as well as phase-sensitive fluctuation spectroscopy to image subtle structural and dynamical properties.

Figure 22. Typical optical setup of single point OCT

Figure 23. Central serous retinopathy, imaged using optical coherence tomography

Optical coherence tomography (OCT) is a medical imaging technique using low-coherence interferometry to provide tomographic visualization of internal tissue microstructures. As seen in Fig. 22, the core of a typical OCT system is a Michelson interferometer. One interferometer arm is focused onto the tissue sample and scans the sample in an X-Y longitudinal raster pattern. The other interferometer arm is bounced off a reference mirror. Reflected light from the tissue sample is combined with reflected light from the reference. Because of the low coherence of the light source, interferometric signal is observed only over a limited depth of sample. X-Y scanning therefore records one thin optical slice of the sample at a time. By performing multiple scans, moving the reference mirror between each scan, an entire three-dimensional image of the tissue can be reconstructed. Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry.

Phase contrast and differential interference contrast (DIC) microscopy are important tools in biology and medicine. Most animal cells and single-celled organisms have very little color, and their intracellular organelles are almost totally invisible under simple bright field illumination. These structures can be made visible by staining the specimens, but staining procedures are time-consuming and kill the cells. As seen in Figs. 24 and 25, phase contrast and DIC microscopes allow unstained, living cells to be studied. DIC also has non-biological applications, for example in the analysis of planar silicon semiconductor processing.

Angle-resolved low-coherence interferometry (a/LCI) uses scattered light to measure the sizes of subcellular objects, including cell nuclei. This allows interferometry depth measurements to be combined with density measurements. Various correlations have been found between the state of tissue health and the measurements of subcellular objects. For example, it has been found that as tissue changes from normal to cancerous, the average cell nuclei size increases.

Phase-contrast X-ray imaging (Fig. 26) refers to a variety of techniques that use phase information of a coherent x-ray beam to image soft tissues. (For an elementary discussion, see Phase-contrast x-ray imaging (introduction). For a more in-depth review, see Phase-contrast X-ray imaging.) It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for x-ray phase-contrast imaging, all utilizing different principles to convert phase variations in the x-rays emerging from an object into intensity variations. These include propagation-based phase contrast, Talbot interferometry, moiré-based far-field interferometry, refraction-enhanced imaging, and x-ray interferometry. These methods provide higher contrast compared to normal absorption-contrast x-ray imaging, making it possible to see smaller details. A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus x-ray sources, x-ray optics, or high resolution x-ray detectors.

Artificial gravity

From Wikipedia, the free encyclopedia
Gemini 11 Agena tethered operations
 
Proposed Nautilus-X International space station centrifuge demo

Artificial gravity (sometimes referred to as pseudogravity) is the creation of an inertial force that mimics the effects of a gravitational force, usually by rotation. Artificial gravity, or rotational gravity, is thus the appearance of a centrifugal force in a rotating frame of reference (the transmission of centripetal acceleration via normal force in the non-rotating frame of reference), as opposed to the force experienced in linear acceleration, which by the principle of equivalence is indistinguishable from gravity. In a more general sense, "artificial gravity" may also refer to the effect of linear acceleration, e.g. by means of a rocket engine.

Rotational simulated gravity has been used in simulations to help astronauts train for extreme conditions. Rotational simulated gravity has been proposed as a solution in manned spaceflight to the adverse health effects caused by prolonged weightlessness. However, there are no current practical outer space applications of artificial gravity for humans due to concerns about the size and cost of a spacecraft necessary to produce a useful centripetal acceleration comparable to the gravitational field strength on Earth (g).

Centrifugal

Artificial gravity space station. 1969 NASA concept. This design is flawed because the astronauts would be walking back and forth between gravity and weightlessness.

Artificial gravity can be created using a centripetal force. A centripetal force directed towards the center of the turn is required for any object to move in a circular path. In the context of a rotating space station it is the normal force provided by the spacecraft's hull that acts as centripetal force. Thus, the "gravity" force felt by an object is the centrifugal force perceived in the rotating frame of reference as pointing "downwards" towards the hull. In accordance with Newton's Third Law, the value of little g (the perceived "downward" acceleration) is equal in magnitude and opposite in direction to the centripetal acceleration.

Mechanism

Balls in a rotating spacecraft

From the point of view of people rotating with the habitat, artificial gravity by rotation behaves in some ways similarly to normal gravity but with the following differences:
  • Centripetal force: Unlike real gravity, which pulls towards the center of the planet, the centripetal force pushes towards the axis of rotation. For a given angular velocity the amount of artificial gravity depends linearly on the radius. With a small radius of rotation, the amount of gravity felt at one's head would be significantly different from the amount felt at one's feet (see the sketch after this list). This could make movement and changing body position awkward. In accordance with the physics involved, slower rotations or larger rotational radii would reduce or eliminate this problem. Similarly, the linear velocity of the habitat should be significantly higher than the relative velocities with which an astronaut will change position within it. Otherwise moving in the direction of the rotation will increase the felt gravity (while moving in the opposite direction will decrease it) to the point that it could cause problems.
  • The Coriolis effect gives an apparent force that acts on objects that move relative to a rotating reference frame. This apparent force acts at right angles to the motion and the rotation axis and tends to curve the motion in the opposite sense to the habitat's spin. If an astronaut inside a rotating artificial gravity environment moves towards or away from the axis of rotation, he or she will feel a force pushing him or her towards or away from the direction of spin. These forces act on the inner ear and can cause dizziness, nausea and disorientation. Lengthening the period of rotation (slower spin rate) reduces the Coriolis force and its effects. It is generally believed that at 2 rpm or less, no adverse effects from the Coriolis forces will occur, although humans have been shown to adapt to rates as high as 23 rpm.[4] It is not yet known whether very long exposures to high levels of Coriolis forces can increase the likelihood of becoming accustomed. The nausea-inducing effects of Coriolis forces can also be mitigated by restraining movement of the head.
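
As a rough illustration of the head-to-feet gradient mentioned in the first point above (all radii and spin rates are invented examples), artificial gravity a = ω²r at a fixed spin rate falls off linearly toward the axis:

```python
import numpy as np

# Sketch of the head-to-feet gravity gradient in a small rotating habitat.
# At a fixed spin rate omega, artificial gravity a = omega**2 * r grows linearly
# with radius r, so a standing astronaut's head (closer to the axis) feels less
# "gravity" than their feet. The numbers below are illustrative only.

astronaut_height = 1.8        # metres

def gradient(radius_at_feet_m, rpm):
    omega = rpm * 2 * np.pi / 60.0
    g_feet = omega ** 2 * radius_at_feet_m
    g_head = omega ** 2 * (radius_at_feet_m - astronaut_height)
    return g_feet / 9.80665, (g_feet - g_head) / g_feet * 100

for radius, rpm in [(4.0, 15.0), (10.0, 9.5), (56.0, 4.0), (224.0, 2.0)]:
    g, drop = gradient(radius, rpm)
    print(f"r = {radius:5.1f} m at {rpm:4.1f} rpm: {g:4.2f} g at the feet, "
          f"{drop:4.1f}% less at the head")
```
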
This form of artificial gravity has additional engineering issues:
  • Kinetic energy and angular momentum: Spinning up (or down) parts or all of the habitat requires energy, while angular momentum must be conserved. This would require a propulsion system and expendable propellant, or could be achieved without expending mass, by an electric motor and a counterweight, such as a reaction wheel or possibly another living area spinning in the opposite direction.
  • Extra strength is needed in the structure to keep it from flying apart because of the rotation. However, the amount of structure needed over and above that to hold a breathable atmosphere (10 tons force per square meter at 1 atmosphere) is relatively modest for most structures.
  • If parts of the structure are intentionally not spinning, friction and similar torques will cause the rates of spin to converge (as well as causing the otherwise stationary parts to spin), requiring motors and power to be used to compensate for the losses due to friction.
  • A traversable interface between parts of the station spinning relative to each other requires large vacuum-tight axial seals.
Formulae
R = a (T / 2π)²,
a = R (2π / T)²   (T > 0),
T = 2π √(R / a)   (a > 0),

where:

R = Radius from center of rotation
a = Artificial gravity
T = Rotating spacecraft period
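
A minimal sketch implementing these formulae; the example values reproduce the radius and period figures quoted in the next section.

```python
import math

# Direct implementation of the formulae above: R = a (T / 2 pi)^2 and its
# inverses, used here to compute the spin needed for a given artificial
# gravity at a given radius, and vice versa.

g0 = 9.80665   # standard gravity, m/s^2

def radius_for(period_s, accel=g0):
    return accel * (period_s / (2 * math.pi)) ** 2

def period_for(radius_m, accel=g0):
    return 2 * math.pi * math.sqrt(radius_m / accel)

def rpm_for(radius_m, accel=g0):
    return 60.0 / period_for(radius_m, accel)

print(f"T = 15 s -> R = {radius_for(15):.0f} m")     # ~56 m, as quoted below
print(f"T = 30 s -> R = {radius_for(30):.0f} m")     # ~224 m
print(f"R = 10 m -> T = {period_for(10):.1f} s, {rpm_for(10):.1f} rpm for 1 g")
print(f"R = 22 m, 0.1 g -> T = {period_for(22, 0.1*g0):.0f} s")   # ~30 s
```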

Rotation speed in rpm for a centrifuge of various radii to achieve a given g-force

Manned spaceflight

The engineering challenges of creating a rotating spacecraft are modest compared with those of any other proposed approach. Theoretical spacecraft designs using artificial gravity have a great number of variants with intrinsic problems and advantages. The formula for the centripetal force implies that the radius of rotation grows with the square of the rotating spacecraft period: a doubling of the period requires a fourfold increase in the radius of rotation. For example, to produce standard gravity, ɡ0 = 9.80665 m/s², with a rotating spacecraft period of 15 s, the radius of rotation would have to be 56 m (184 ft), while a period of 30 s would require it to be 224 m (735 ft). To reduce mass, the support along the diameter could consist of nothing but a cable connecting two sections of the spaceship, possibly a habitat module and a counterweight consisting of every other part of the spacecraft.
It is not yet known whether exposure to high gravity for short periods of time is as beneficial to health as continuous exposure to normal gravity. It is also not known how effective low levels of gravity would be at countering the adverse effects on health of weightlessness. Artificial gravity at 0.1g and a rotating spacecraft period of 30 s would require a radius of only 22 m (72 ft). Likewise, at a radius of 10 m, a period of just over 6 s would be required to produce standard gravity (at the hips; gravity would be 11% higher at the feet), while 4.5 s would produce 2g. If brief exposure to high gravity can negate the harmful effects of weightlessness, then a small centrifuge could be used as an exercise area.

Gemini missions

The Gemini 11 mission attempted to produce artificial gravity by rotating the capsule around the Agena Target Vehicle to which it was attached by a 36-meter tether. They were able to generate a small amount of artificial gravity, about 0.00015 g, by firing their side thrusters to slowly rotate the combined craft like a slow-motion pair of bolas. The resultant force was too small to be felt by either astronaut, but objects were observed moving towards the "floor" of the capsule.

The Gemini 8 mission also achieved artificial gravity for a few minutes, although this was due to an accident. The acceleration forces upon the crew were so high (roughly 4 g) that the mission had to be urgently terminated.

Health benefits

Artificial gravity has been suggested for an interplanetary journey to Mars

Artificial gravity has been suggested as a solution to the various health risks associated with spaceflight. In 1964, the Soviet space program believed that a human could not survive more than 14 days in space, fearing that the heart and blood vessels would be unable to adapt to the weightless conditions. This fear was eventually discovered to be unfounded, as spaceflights have now lasted up to 438 consecutive days, with missions aboard the International Space Station commonly lasting 6 months. However, the question of human safety in space did launch an investigation into the physical effects of prolonged exposure to weightlessness. In June 1991, the Spacelab Life Sciences 1 flight performed 18 experiments on two men and two women over a period of nine days. It was concluded that in an environment without gravity, the white blood cell response and muscle mass decreased. Additionally, within the first 24 hours spent in a weightless environment, blood volume decreased by 10%. Upon return to Earth, the effects of prolonged weightlessness continue to affect the human body as fluids pool back to the lower body, the heart rate rises, blood pressure drops, and the ability to exercise is reduced.

Artificial gravity, with its ability to mimic the effect of gravity on the human body, has been suggested as one of the most comprehensive ways of combating the physical effects inherent in weightless environments. Other measures that have been suggested as symptomatic treatments include exercise, diet, and penguin suits. Criticism of those methods, however, is that they do not fully eliminate the health problems and require a variety of solutions to address all issues. Artificial gravity, in contrast, would remove the weightlessness inherent in space travel. By implementing artificial gravity, space travelers would never have to experience weightlessness or the associated side effects. Especially for a modern-day six-month journey to Mars, exposure to artificial gravity in either continuous or intermittent form has been suggested to prevent extreme debilitation of the astronauts during travel.

Proposals

Rotating Mars spacecraft - 1989 NASA concept.

A number of proposals have incorporated artificial gravity into their design:
  • Discovery II: a 2005 vehicle proposal capable of delivering a 172-metric-ton crewed payload to Jupiter's orbit in 118 days. A very small portion of the 1,690-metric-ton craft would incorporate a centrifugal crew station.
  • Multi-Mission Space Exploration Vehicle (MMSEV): a 2011 NASA proposal for a long-duration crewed space transport vehicle; it included a rotational artificial gravity space habitat intended to promote crew-health for a crew of up to six persons on missions of up to two years in duration. The torus-ring centrifuge would utilize both standard metal-frame and inflatable spacecraft structures and would provide 0.11 to 0.69g if built with the 40 feet (12 m) diameter option.
  • ISS Centrifuge Demo: a 2011 NASA proposal for a demonstration project preparatory to the final design of the larger torus centrifuge space habitat for the Multi-Mission Space Exploration Vehicle. The structure would have an outside diameter of 30 feet (9.1 m) with a ring interior cross-section diameter of 30 inches (760 mm). It would provide 0.08 to 0.51g partial gravity. This test and evaluation centrifuge would have the capability to become a Sleep Module for ISS crew.
Artist view of TEMPO³ in orbit.
  • Mars Direct: A plan for a manned Mars mission created by NASA engineers Robert Zubrin and David Baker in 1990, later expanded upon in Zubrin's 1996 book The Case for Mars. The "Mars Habitat Unit", which would carry astronauts to Mars to join the previously-launched "Earth Return Vehicle", would have had artificial gravity generated during flight by tying the spent upper stage of the booster to the Habitat Unit, and setting them both rotating about a common axis.
  • The proposed Tempo3 mission would rotate two halves of a spacecraft connected by a tether to test the feasibility of simulating gravity on a manned mission to Mars.
  • The Mars Gravity Biosatellite was a proposed mission meant to study the effect of artificial gravity on mammals. An artificial gravity field of 0.38 g (equivalent to Mars's surface gravity) was to be produced by rotation (32 rpm, radius of ca. 30 cm). Fifteen mice would have orbited Earth (low Earth orbit) for five weeks and then been landed alive. However, the program was canceled on 24 June 2009, due to lack of funding and shifting priorities at NASA.

Issues with implementation

Some of the reasons that artificial gravity remains unused in spaceflight today trace back to the problems inherent in implementation. One realistic method of creating artificial gravity is rotation, in which the floor of a spinning spacecraft supplies a centripetal force that the occupant experiences as weight toward that relative floor. In that model, however, issues arise with the size of the spacecraft. As expressed by John Page and Matthew Francis, the smaller a spacecraft, the more rapid the rotation that is required; thus, to simulate gravity, it would be preferable to use a larger spacecraft that rotates very slowly. The size requirement arises because, if the rotation is too tight, the body experiences noticeably different magnitudes of force (for example between head and feet) and disorienting Coriolis effects. Additionally, questions remain as to the best way to set the rotation in motion initially without disturbing the stability of the spacecraft's orbit. At the moment, no spacecraft is large enough to meet the rotation requirements, and the costs associated with building, maintaining, and launching such a craft are extensive.

In general, given the limited health effects seen in shorter spaceflights, as well as the high cost of research, the application of artificial gravity has been limited and sporadic.

In science fiction

Several science fiction novels, films and series have featured artificial gravity production. In the movie 2001: A Space Odyssey, a rotating centrifuge in the Discovery spacecraft provides artificial gravity. In the novel The Martian, the Hermes spacecraft achieves artificial gravity by design; it employs a ringed structure, at whose periphery forces around 40% of Earth's gravity are experienced, similar to Mars's gravity. The movie Interstellar features a spacecraft called the Endurance that can rotate on its center axis to create artificial gravity, controlled by retro thrusters on the ship.

Centrifuges

High-G training is done by aviators and astronauts who are subject to high levels of acceleration ('G') in large-radius centrifuges. It is designed to prevent g-induced loss of consciousness (abbreviated G-LOC), a situation in which g-forces move the blood away from the brain to the extent that consciousness is lost. Incidents of acceleration-induced loss of consciousness have caused fatal accidents in aircraft capable of sustaining high g for considerable periods.

In amusement parks, pendulum rides and centrifuges provide rotational force. Roller coasters also do, whenever they go over dips, humps, or loops. When going over a hill, the time during which zero or negative g-force is felt is called air time, or "airtime", which can be divided into "floater air time" (for zero gravity) and "ejector air time" (for negative gravity).

Linear acceleration

Linear acceleration, even at a low level, can provide sufficient g-force to provide useful benefits. A spacecraft under constant acceleration in a straight line would give the appearance of a gravitational pull in the direction opposite of the acceleration. This "pull" that would cause a loose object to "fall" towards the hull of the spacecraft is actually a manifestation of the inertia of the objects inside the spacecraft, in accordance with Newton's first law. Further, the "gravity" felt by an object pressed against the hull of the spacecraft is simply the reaction force of the object on the hull reacting to the acceleration force of the hull on the object, in accordance with Newton's Third Law and somewhat similar to the effect on an object pressed against the hull of a spacecraft rotating as outlined above. Unlike an artificial gravity based on rotation, linear acceleration gives the appearance of a gravity field which is both uniform throughout the spacecraft and without the disadvantage of additional fictitious forces.

Some chemical reaction rockets can at least temporarily provide enough acceleration to overcome Earth's gravity and could thus provide linear acceleration to emulate Earth's g-force. However, since all such rockets provide this acceleration by expelling reaction mass, the acceleration would only be temporary, lasting until the limited supply of propellant had been spent.

Nevertheless, constant linear acceleration is desirable since, in addition to providing artificial gravity, it could theoretically provide relatively short flight times around the Solar System. For example, if a propulsion technique able to support 1g of acceleration continuously were available, a spaceship accelerating at 1g (and then decelerating for the second half of the journey) would reach Mars within a few days. Similarly, a hypothetical spacecraft under a constant acceleration of 1g for one year would reach relativistic speeds and allow for a round trip to the nearest star, Proxima Centauri.
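A back-of-the-envelope check of the "few days" figure, as a hedged sketch: for a constant-thrust "accelerate halfway, flip, decelerate" trajectory (ignoring planetary motion and relativity), the travel time is t = 2·sqrt(d/a). The Earth–Mars distance used here is illustrative, near a close approach:

```python
import math

AU = 1.496e11          # metres
g = 9.80665            # 1 g, m/s^2

def brachistochrone_time(distance_m, accel=g):
    """Accelerate for half the distance, then decelerate for the other half."""
    return 2 * math.sqrt(distance_m / accel)

d = 0.5 * AU                 # illustrative Earth-Mars separation
t = brachistochrone_time(d)
print(t / 86400)             # ~2 days at a constant 1 g
```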

As such, low-thrust but long-duration linear acceleration has been proposed for various interplanetary missions. For example, even heavy (100-ton) cargo payloads could be transported to Mars in 27 months while retaining approximately 55 percent of the LEO vehicle mass upon arrival into Mars orbit, providing a low level of artificial gravity to the spacecraft during the entire journey.

A propulsion system with a very high specific impulse (that is, good efficiency in the use of reaction mass that must be carried along and used for propulsion on the journey) could accelerate more slowly, producing useful levels of artificial gravity for long periods of time. A variety of electric propulsion systems provide examples. Two examples of this long-duration, low-thrust, high-specific-impulse propulsion that have either been used practically on spacecraft or are planned for near-term in-space use are Hall-effect thrusters and Variable Specific Impulse Magnetoplasma Rockets (VASIMR). Both provide very high specific impulse but relatively low thrust compared to typical chemical reaction rockets. They are thus well suited to long-duration firings, which would provide low but sustained, milli-g-level artificial gravity in spacecraft.
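To see why such thrusters yield only milli-g (or lower) levels of apparent gravity, a sketch with purely illustrative thrust and mass values (not the specifications of any particular thruster or spacecraft):

```python
g0 = 9.80665

def apparent_gravity(thrust_n, mass_kg):
    """Acceleration from a steady thrust, expressed as a fraction of Earth gravity."""
    return (thrust_n / mass_kg) / g0

# Illustrative numbers only: a few newtons of electric-propulsion thrust
# pushing a spacecraft of a few tonnes.
print(apparent_gravity(5.0, 5000.0))   # ~1e-4 g, i.e. about a tenth of a milli-g
print(apparent_gravity(0.1, 1000.0))   # ~1e-5 g
```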

In a number of science fiction plots, acceleration is used to produce artificial gravity for interstellar spacecraft, propelled by as yet theoretical or hypothetical means.

This effect of linear acceleration is well understood and is routinely used for 0g cryogenic fluid management: a brief period of thrust settles the propellants before post-launch in-space firings of upper-stage rockets.

Roller coasters, especially launched roller coasters or those that rely on electromagnetic propulsion, can provide linear acceleration "gravity", and so can relatively high acceleration vehicles, such as sports cars. Linear acceleration can be used to provide air-time on roller coasters and other thrill rides.

Weightlessness/levitation

Diamagnetism

A live frog levitates inside a 32 mm diameter vertical bore of a Bitter solenoid in a magnetic field of about 16 teslas.

An effect similar to gravity can be created through diamagnetism. It requires magnets with extremely powerful magnetic fields. Such devices have been able to levitate, at most, a small mouse, producing a 1 g field to cancel that of the Earth.
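For a sense of the field strengths involved, the standard levitation condition for a diamagnetic material is |χ|·B·(dB/dz)/μ₀ = ρ·g. A minimal sketch, assuming water-like tissue (χ ≈ −9×10⁻⁶, ρ ≈ 1000 kg/m³), shows the required product of field and field gradient:

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
chi = 9.0e-6               # magnitude of water's magnetic susceptibility (dimensionless)
rho = 1000.0               # density of water-like tissue, kg/m^3
g = 9.80665                # m/s^2

# Levitation requires |chi| * B * dB/dz / mu0 = rho * g
required_B_dBdz = mu0 * rho * g / chi
print(required_B_dBdz)     # ~1400 T^2/m
```

A field of order 16 T varying over a bore of roughly 10 cm comfortably exceeds this product, which is consistent with the frog and mouse levitation experiments mentioned here.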

Sufficiently powerful magnets require either expensive cryogenics to keep them superconductive or several megawatts of power.

With such extremely strong magnetic fields, safety for use with humans is unclear. It would also require keeping any ferromagnetic or paramagnetic materials away from the strong magnetic field needed for the diamagnetic effect to be evident.

Facilities using diamagnetism may prove workable for laboratories simulating low gravity conditions here on Earth. A mouse has been levitated against Earth's gravity, creating a condition similar to microgravity. Lower forces may also be generated to simulate a condition similar to lunar or Martian gravity with small model organisms.

Parabolic flight

Weightless Wonder is the nickname for the NASA aircraft that flies parabolic trajectories and briefly provides a nearly weightless environment in which to train astronauts, conduct research, and film motion pictures. The parabolic trajectory creates a vertical linear acceleration matching that of gravity, giving zero-g for a short time, usually 20–30 seconds, followed by approximately 1.8g for a similar period. The aircraft is also nicknamed the Vomit Comet, a reference to the motion sickness often experienced by passengers during these parabolic trajectories. Such reduced-gravity aircraft are nowadays operated by several organizations worldwide.
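The duration of weightlessness follows from simple ballistics: if the aircraft enters the parabola climbing with vertical speed v_z and then flies the free-fall arc, the near-zero-g phase lasts roughly t = 2·v_z/g. A sketch with illustrative entry conditions (the speeds and pitch angles are not figures for any specific aircraft):

```python
import math

g = 9.80665

def zero_g_duration(speed_ms, climb_angle_deg):
    """Approximate weightless time for a ballistic arc entered at the given speed and climb angle."""
    v_z = speed_ms * math.sin(math.radians(climb_angle_deg))
    return 2 * v_z / g

print(zero_g_duration(220, 45))   # ~32 s for an illustrative 220 m/s entry at 45 degrees
print(zero_g_duration(180, 40))   # ~24 s
```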

Neutral buoyancy

A Neutral Buoyancy Laboratory (NBL) is an astronaut training facility, such as the Sonny Carter Training Facility at the NASA Johnson Space Center in Houston, Texas. The NBL is a large indoor pool of water, the largest in the world, in which astronauts may perform simulated EVA tasks in preparation for space missions. The NBL contains full-sized mock-ups of the Space Shuttle cargo bay, flight payloads, and the International Space Station (ISS).

The principle of neutral buoyancy is used to simulate the weightless environment of space. The suited astronauts are lowered into the pool using an overhead crane, and their weight is adjusted by support divers so that the buoyant force exactly balances their weight, leaving no net force and no rotational moment about their center of mass. The suits worn in the NBL are down-rated from fully flight-rated EMU suits like those in use on the Space Shuttle and International Space Station.

The NBL tank is 202 feet (62 m) in length, 102 feet (31 m) wide, and 40 feet 6 inches (12.34 m) deep, and contains 6.2 million gallons (23.5 million litres) of water. Divers breathe nitrox while working in the tank.
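The quoted volume is consistent with the tank dimensions, as a quick arithmetic check (simple rectangular-volume approximation):

```python
# Quoted dimensions in feet
length_ft, width_ft, depth_ft = 202, 102, 40.5

volume_ft3 = length_ft * width_ft * depth_ft
gallons = volume_ft3 * 7.48052          # US gallons per cubic foot
litres = gallons * 3.78541              # litres per US gallon

print(round(gallons / 1e6, 1))          # ~6.2 million gallons
print(round(litres / 1e6, 1))           # ~23.6 million litres, close to the quoted figure
```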

Neutral buoyancy in a pool is not weightlessness, since the balance organs in the inner ear still sense the up-down direction of gravity. Also, there is a significant amount of drag presented by water. Generally, drag effects are minimized by doing tasks slowly in the water. Another difference between neutral buoyancy simulation in a pool and actual EVA during spaceflight is that the temperature of the pool and the lighting conditions are maintained constant.

Speculative or fictional mechanisms

In science fiction, artificial gravity (or cancellation of gravity), sometimes called "paragravity", is often present in spacecraft that are neither rotating nor accelerating. At present, there is no confirmed technique that can simulate gravity other than actual mass or acceleration. There have been many claims over the years of such a device. Eugene Podkletnov, a Russian engineer, has claimed since the early 1990s to have made such a device consisting of a spinning superconductor producing a powerful "gravitomagnetic field", but there has been no independent verification, and third parties have even reported negative results. In 2006, a research group funded by ESA claimed to have created a similar device that demonstrated positive results for the production of gravitomagnetism, although it produced only 0.0001g. This result has not been replicated. String theory predicts that gravity and electromagnetism unify in hidden dimensions and that sufficiently short-wavelength photons can enter those dimensions.

Speed of gravity

From Wikipedia, the free encyclopedia
In classical theories of gravitation, changes in a gravitational field propagate. A change in the distribution of energy and momentum of matter results in a subsequent alteration, at a distance, of the gravitational field it produces. In the relativistic sense, the "speed of gravity" refers to the speed of a gravitational wave, which, as predicted by general relativity and confirmed by observation of the GW170817 neutron star merger, is the same as the speed of light (c).

Introduction

The speed of gravitational waves in the general theory of relativity is equal to the speed of light in a vacuum, c. Within the theory of special relativity, the constant c is not exclusively about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity. Thus, the speed of "light" is also the speed of gravitational waves and any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons which make up the associated field particles of gravity (however a theory of the graviton requires a theory of quantum gravity).

Static fields

The speed of physical changes in a gravitational or electromagnetic field should not be confused with "changes" in the behavior of static fields that are due to pure observer-effects. These changes in direction of a static field, because of relativistic considerations, are the same for an observer when a distant charge is moving, as when an observer (instead) decides to move with respect to a distant charge. Thus, constant motion of an observer with regard to a static charge and its extended static field (either a gravitational or electric field) does not change the field. For static fields, such as the electrostatic field connected with electric charge, or the gravitational field connected to a massive object, the field extends to infinity, and does not propagate. Motion of an observer does not cause the direction of such a field to change, and by symmetrical considerations, changing the observer frame so that the charge appears to be moving at a constant rate, also does not cause the direction of its field to change, but requires that it continue to "point" in the direction of the charge, at all distances from the charge.

The consequence of this is that static fields (either electric or gravitational) always point directly to the actual position of the bodies that they are connected to, without any delay that is due to any "signal" traveling (or propagating) from the charge, over a distance to an observer. This remains true if the charged bodies and their observers are made to "move" (or not), by simply changing reference frames. This fact sometimes causes confusion about the "speed" of such static fields, which sometimes appear to change infinitely quickly when the changes in the field are mere artifacts of the motion of the observer, or of observation.

In such cases, nothing actually changes infinitely quickly, save the point of view of an observer of the field. For example, when an observer begins to move with respect to a static field that already extends over light years, it appears as though "immediately" the entire field, along with its source, has begun moving at the speed of the observer. This, of course, includes the extended parts of the field. However, this "change" in the apparent behavior of the field source, along with its distant field, does not represent any sort of propagation that is faster than light.

Newtonian gravitation

Isaac Newton's formulation of a gravitational force law requires that each particle with mass respond instantaneously to every other particle with mass, irrespective of the distance between them. In modern terms, Newtonian gravitation is described by the Poisson equation, according to which, when the mass distribution of a system changes, its gravitational field instantaneously adjusts. Therefore, the theory assumes the speed of gravity to be infinite. This assumption was adequate to account for all phenomena with the observational accuracy of that time. It was not until the 19th century that an anomaly in astronomical observations which could not be reconciled with the Newtonian gravitational model of instantaneous action was noted: the French astronomer Urbain Le Verrier determined in 1859 that the elliptical orbit of Mercury precesses at a significantly different rate from that predicted by Newtonian theory.
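For reference, the Poisson equation mentioned here can be written as follows (standard notation: Φ is the gravitational potential, G is Newton's constant, ρ is the mass density):

```latex
\nabla^{2}\Phi(\mathbf{r},t) \;=\; 4\pi G\,\rho(\mathbf{r},t)
```

No time derivative of Φ appears, so the potential everywhere adjusts instantly to any change in ρ; this is the formal sense in which the Newtonian speed of gravity is infinite.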

Laplace

The first attempt to combine a finite gravitational speed with Newton's theory was made by Laplace in 1805. Based on Newton's force law he considered a model in which the gravitational field is defined as a radiation field or fluid. Changes in the motion of the attracting body are transmitted by some sort of waves. Therefore, the movements of the celestial bodies should be modified in the order v/c, where v is the relative speed between the bodies and c is the speed of gravity. The effect of a finite speed of gravity goes to zero as c goes to infinity, but not as 1/c² as it does in modern theories. This led Laplace to conclude that the speed of gravitational interactions is at least 7×10⁶ times the speed of light. This velocity was used by many in the 19th century to criticize any model based on a finite speed of gravity, like electrical or mechanical explanations of gravitation.

From a modern point of view, Laplace's analysis is incorrect. Not knowing about the Lorentz invariance of static fields, Laplace assumed that when an object like the Earth is moving around the Sun, the attraction of the Earth would be not toward the instantaneous position of the Sun, but toward where the Sun had been if its position were retarded using the relative velocity (this retardation actually does happen with the optical position of the Sun, and is called annual solar aberration). With the Sun immobile at the origin, the Earth moving in an orbit of radius R with velocity v, and the gravitational influence presumed to move with velocity c, the Sun's true position is ahead of its optical position by an amount equal to vR/c, which is the travel time of gravity from the Sun to the Earth multiplied by the relative velocity of the Sun and the Earth. The pull of gravity (if it behaved like a wave, such as light) would then always be displaced in the direction of the Earth's velocity, so that the Earth would always be pulled toward the optical position of the Sun, rather than its actual position. This would cause a pull ahead of the Earth, which would cause the orbit of the Earth to spiral outward. Such an outspiral would be suppressed by an amount v/c compared to the force which keeps the Earth in orbit; and since the Earth's orbit is observed to be stable, Laplace's c must be very large. As is now known, it may be considered to be infinite in the limit of straight-line motion, since as a static influence it is instantaneous at a distance when seen by observers at constant transverse velocity. For orbits in which velocity (direction of speed) changes slowly, it is almost infinite.
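To put numbers on the aberration Laplace was considering, a short sketch for the Earth–Sun case (round orbital values, purely illustrative) computes the displacement vR/c and the corresponding angle v/c:

```python
c = 2.998e8          # speed of light, m/s
R = 1.496e11         # Earth-Sun distance, m
v = 2.98e4           # Earth's orbital speed, m/s

displacement = v * R / c                 # how far "ahead" the true Sun would be, in metres
angle_arcsec = (v / c) * 206265          # aberration angle in arcseconds

print(displacement / 1000)   # ~15,000 km ahead of the optical position
print(angle_arcsec)          # ~20.5 arcsec, the familiar annual aberration of light
```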

The attraction toward an object moving with a steady velocity is toward its instantaneous position with no delay, for both gravity and electric charge. In a field equation consistent with special relativity (i.e., a Lorentz invariant equation), the attraction between charges moving with constant relative velocity is always toward the instantaneous position of the charge (in this case, the "gravitational charge" of the Sun), not the time-retarded position of the Sun. When an object is moving in orbit at a steady speed but changing velocity v, the effect on the orbit is of order v²/c², and the effect preserves energy and angular momentum, so that orbits do not decay.

Electrodynamical analogies

Early theories

At the end of the 19th century, many tried to combine Newton's force law with the established laws of electrodynamics, like those of Wilhelm Eduard Weber, Carl Friedrich Gauss, Bernhard Riemann and James Clerk Maxwell. Those theories are not invalidated by Laplace's critique, because although they are based on finite propagation speeds, they contain additional terms which maintain the stability of the planetary system. Those models were used to explain the perihelion advance of Mercury, but they could not provide exact values. One exception was Maurice Lévy in 1890, who succeeded in doing so by combining the laws of Weber and Riemann, with the speed of gravity equal to the speed of light. However, because the underlying electrodynamic laws of Weber and Riemann were themselves later rejected, those gravitational hypotheses were rejected as well.

However, a more important variation of those attempts was the theory of Paul Gerber, who in 1898 derived, for the perihelion advance, a formula identical to the one later derived by Einstein. Based on that formula, Gerber calculated a propagation speed for gravity of 305,000 km/s, i.e. practically the speed of light. But Gerber's derivation of the formula was faulty, i.e., his conclusions did not follow from his premises, and therefore many (including Einstein) did not consider it a meaningful theoretical effort. Additionally, the value it predicted for the deflection of light in the gravitational field of the Sun was too high by a factor of 3/2.

Lorentz

In 1900 Hendrik Lorentz tried to explain gravity on the basis of his ether theory and the Maxwell equations. After proposing (and rejecting) a Le Sage-type model, he assumed, like Ottaviano Fabrizio Mossotti and Johann Karl Friedrich Zöllner, that the attraction of oppositely charged particles is stronger than the repulsion of equally charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. This leads to a conflict with Isaac Newton's law of gravitation, in which it was shown by Pierre Simon Laplace that a finite speed of gravity leads to some sort of aberration and therefore makes the orbits unstable. However, Lorentz showed that the theory is not affected by Laplace's critique, because due to the structure of the Maxwell equations only effects of order v²/c² arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too low. He wrote:
The special form of these terms may perhaps be modified. Yet, what has been said is sufficient to show that gravitation may be attributed to actions which are propagated with no greater velocity than that of light.
In 1908 Henri Poincaré examined the gravitational theory of Lorentz and classified it as compatible with the relativity principle, but (like Lorentz) he criticized its inaccurate prediction of the perihelion advance of Mercury.

Lorentz covariant models

Henri Poincaré argued in 1904 that a propagation speed of gravity which is greater than c would contradict the concept of local time (based on synchronization by light signals) and the principle of relativity. He wrote:
What would happen if we could communicate by signals other than those of light, the velocity of propagation of which differed from that of light? If, after having regulated our watches by the optimal method, we wished to verify the result by means of these new signals, we should observe discrepancies due to the common translatory motion of the two stations. And are such signals inconceivable, if we take the view of Laplace, that universal gravitation is transmitted with a velocity a million times as great as that of light?
However, in 1905 Poincaré calculated that changes in the gravitational field can propagate with the speed of light if it is presupposed that such a theory is based on the Lorentz transformation. He wrote:
Laplace showed in effect that the propagation is either instantaneous or much faster than that of light. However, Laplace examined the hypothesis of finite propagation velocity ceteris non mutatis; here, on the contrary, this hypothesis is conjoined with many others, and it may be that between them a more or less perfect compensation takes place. The application of the Lorentz transformation has already provided us with numerous examples of this.
Similar models were also proposed by Hermann Minkowski (1907) and Arnold Sommerfeld (1910). However, those attempts were quickly superseded by Einstein's theory of general relativity. Whitehead's theory of gravitation (1922) explains gravitational red shift, light bending, perihelion shift and Shapiro delay.

General relativity

Background

General relativity predicts that gravitational radiation should exist and propagate as a wave at lightspeed; a slowly evolving, weak gravitational field will produce, according to general relativity, effects like those of Newtonian gravitation. (These results do not depend on the existence of gravitons, mentioned above, or of any similar force-carrying particles.)

Suddenly displacing one of two gravitoelectrically interacting particles would, after a delay corresponding to lightspeed, cause the other to feel the displaced particle's absence: accelerations due to the change in the quadrupole moment of star systems, like the Hulse–Taylor binary, have removed much energy (almost 2% of the energy of our own Sun's output) as gravitational waves, which would theoretically travel at the speed of light.

Two gravitoelectrically interacting particle ensembles, e.g., two planets or stars moving at constant velocity with respect to each other, each feel a force toward the instantaneous position of the other body without a speed-of-light delay because Lorentz invariance demands that what a moving body in a static field sees and what a moving body that emits that field sees be symmetrical.

A moving body's seeing no aberration in a static field emanating from a "motionless body" therefore causes Lorentz invariance to require that in the previously moving body's reference frame the (now moving) emitting body's field lines must not at a distance be retarded or aberred. Moving charged bodies (including bodies that emit static gravitational fields) exhibit static field lines that do not bend with distance and show no speed-of-light delay effects, as seen from bodies moving with regard to them.

In other words, since the gravitoelectric field is, by definition, static and continuous, it does not propagate. If such a source of a static field is accelerated (for example stopped) with regard to its formerly constant velocity frame, its distant field continues to be updated as though the charged body continued with constant velocity. This effect causes the distant fields of unaccelerated moving charges to appear to be "updated" instantly for their constant velocity motion, as seen from distant positions, in the frame where the source-object is moving at constant velocity. However, as discussed, this is an effect which can be removed at any time, by transitioning to a new reference frame in which the distant charged body is now at rest.

The static and continuous gravitoelectric component of a gravitational field is not a gravitomagnetic component (gravitational radiation); see Petrov classification. The gravitoelectric field is a static field and therefore cannot superluminally transmit quantized (discrete) information, i.e., it could not constitute a well-ordered series of impulses carrying a well-defined meaning (this is the same for gravity and electromagnetism).

Aberration of field direction in general relativity, for a weakly accelerated observer

The finite speed of gravitational interaction in general relativity does not lead to the sorts of problems with the aberration of gravity that Newton was originally concerned with, because there is no such aberration in static field effects. Because the acceleration of the Earth with regard to the Sun is small (meaning, to a good approximation, the two bodies can be regarded as traveling in straight lines past each other with unchanging velocity) the orbital results calculated by general relativity are the same as those of Newtonian gravity with instantaneous action at a distance, because they are modelled by the behavior of a static field with constant-velocity relative motion, and no aberration for the forces involved. Although the calculations are considerably more complicated, one can show that a static field in general relativity does not suffer from aberration problems as seen by an unaccelerated observer (or a weakly accelerated observer, such as the Earth). Analogously, the "static term" in the electromagnetic Liénard–Wiechert potential theory of the fields from a moving charge, does not suffer from either aberration or positional-retardation. Only the term corresponding to acceleration and electromagnetic emission in the Liénard–Wiechert potential shows a direction toward the time-retarded position of the emitter.

It is in fact not very easy to construct a self-consistent gravity theory in which gravitational interaction propagates at a speed other than the speed of light, which complicates discussion of this possibility.

Formulaic conventions

In general relativity the metric tensor symbolizes the gravitational potential, and Christoffel symbols of the spacetime manifold symbolize the gravitational force field. The tidal gravitational field is associated with the curvature of spacetime.

Measurements

The speed of gravity (more correctly, the speed of gravitational waves) can be calculated from observations of the orbital decay rate of the binary pulsars PSR 1913+16 (the Hulse–Taylor binary system noted above) and PSR B1534+12. The orbits of these binary pulsars are decaying due to loss of energy in the form of gravitational radiation. The rate of this energy loss ("gravitational damping") can be measured, and since it depends on the speed of gravity, comparing the measured values to theory shows that the speed of gravity is equal to the speed of light to within 1%. However, within the parameterized post-Newtonian (PPN) formalism, measuring the speed of gravity by comparing theoretical results with experimental results depends on the theory; use of a theory other than that of general relativity could in principle show a different speed, although the existence of gravitational damping at all implies that the speed cannot be infinite.

In September 2002, Sergei Kopeikin and Edward Fomalont announced that they had made an indirect measurement of the speed of gravity, using their data from VLBI measurement of the retarded position of Jupiter on its orbit during Jupiter's transit across the line-of-sight of the bright radio source quasar QSO J0842+1835. Kopeikin and Fomalont concluded that the speed of gravity is between 0.8 and 1.2 times the speed of light, which would be fully consistent with the theoretical prediction of general relativity that the speed of gravity is exactly the same as the speed of light.

Several physicists, including Clifford M. Will and Steve Carlip, have criticized these claims on the grounds that they have allegedly misinterpreted the results of their measurements. Notably, prior to the actual transit, Hideki Asada argued in a paper to the Astrophysical Journal Letters that the proposed experiment was essentially a roundabout confirmation of the speed of light instead of the speed of gravity. However, Kopeikin and Fomalont continue to argue their case vigorously, and to defend their means of presenting the result at an AAS press conference, which was offered after the results of the Jovian experiment had been peer reviewed by experts of the AAS scientific organizing committee. In a later publication, Kopeikin and Fomalont used a bi-metric formalism that splits the space-time null cone in two – one cone for gravity and another for light – and claimed that Asada's argument was theoretically unsound. The two null cones overlap in general relativity, which makes tracking the speed-of-gravity effects difficult and requires a special mathematical technique of gravitational retarded potentials, which was worked out by Kopeikin and co-authors but was never properly employed by Asada or the other critics.

Stuart Samuel also suggested that the experiment did not actually measure the speed of gravity because the effects were too small to have been measured. A response by Kopeikin and Fomalont challenges this opinion.

It is important to understand that none of the participants in this controversy are claiming that general relativity is "wrong". Rather, the debate concerns whether or not Kopeikin and Fomalont have really provided yet another verification of one of its fundamental predictions. A comprehensive review of the definition of the speed of gravity and its measurement with high-precision astrometric and other techniques appears in the textbook Relativistic Celestial Mechanics in the Solar System.

The detection of the neutron star inspiral GW170817 in 2017, observed through both gravitational waves and gamma rays, provides by far the best limit to date on the difference between the speed of light and that of gravity. Photons were detected 1.7 seconds after peak gravitational wave emission; assuming an intrinsic emission delay of zero to ten seconds, the difference between the speeds of gravitational and electromagnetic waves, vGW − vEM, is constrained to between −3×10⁻¹⁵ and +7×10⁻¹⁶ times the speed of light.
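The quoted bounds follow from dividing the allowed arrival-time difference by the light travel time from the source. A sketch assuming a conservative source distance of roughly 26 Mpc (commonly used as the lower bound on the distance; treat the exact numbers as illustrative):

```python
Mpc = 3.086e22           # metres
c = 2.998e8              # m/s

distance = 26 * Mpc                  # conservative lower bound on the source distance
travel_time = distance / c           # light travel time, seconds

# Observed: gamma rays arrived ~1.7 s after the gravitational-wave peak.
# Assume the intrinsic emission delay of the gamma rays was between 0 and 10 s.
upper = 1.7 / travel_time            # if emitted simultaneously, gravity was at most this much faster
lower = -(10.0 - 1.7) / travel_time  # if the gamma rays left 10 s late, gravity was at most this much slower

print(upper)    # ~ +7e-16 (fractional speed difference)
print(lower)    # ~ -3e-15
```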

The GW170817 measurement also excluded some alternatives to general relativity, including variants of scalar–tensor theory, instances of Horndeski's theory, and Hořava–Lifshitz gravity.
