Tuesday, November 5, 2019

Charge-coupled device

From Wikipedia, the free encyclopedia
 
A specially developed CCD in a wire-bonded package used for ultraviolet imaging
 
A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, such as conversion into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, each clock cycle transferring the charge from one bin to the next.

CCD is a major technology for digital imaging. In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time.

History

The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices. MOS technology was originally invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.

In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices".

The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s. 

The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent (U.S. Patent 4,085,456) on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971.

The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2-D 100 x 100 pixel device. Steven Sasson, an electrical engineer working for Kodak, invented the first digital still camera using a Fairchild 100 x 100 CCD in 1975.

The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981.

The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array (800 x 800 pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982; subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution.

Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.

In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics, for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation, for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers".

Basics of operation

The charge packets (electrons, blue) are collected in potential wells (yellow) created by applying positive voltage at the gate electrodes (G). Applying positive voltage to the gate electrode in the correct sequence transfers the charge packets.
 
In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking). 

An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing.
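As a rough illustration of this readout sequence, the Python sketch below clocks a toy charge array out of a simulated sensor through a hypothetical charge amplifier and ADC. The gain, full-scale voltage and bit depth are assumed values chosen only to make the arithmetic concrete, not figures from any real sensor.

    import numpy as np

    def read_out_ccd(charge, gain_uV_per_e=2.0, adc_bits=12, full_scale_uV=2000.0):
        # Illustrative bucket-brigade readout of a 2-D charge array.
        # Each outer step reads the bottom row through a serial register,
        # then shifts every remaining row one bin toward that register.
        rows, cols = charge.shape
        digital = np.zeros((rows, cols), dtype=np.int32)
        frame = charge.astype(float).copy()
        for r in range(rows):
            serial = frame[-1, :].copy()          # bottom row enters the serial register
            frame[1:, :] = frame[:-1, :]          # parallel shift: rows move one bin down
            frame[0, :] = 0.0
            for c in range(cols):
                electrons = serial[-1]            # last bin dumps into the charge amplifier
                serial[1:] = serial[:-1]          # serial shift
                serial[0] = 0.0
                voltage = electrons * gain_uV_per_e            # charge-to-voltage conversion
                code = int(round(voltage / full_scale_uV * (2 ** adc_bits - 1)))
                digital[r, c] = min(code, 2 ** adc_bits - 1)   # sample and digitize
        return digital

    # Toy 4x4 "image" with one 500-electron pixel.
    print(read_out_ccd(np.pad([[500.0]], ((1, 2), (2, 1)))))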

"One-dimensional" CCD image sensor from a fax machine

Detailed physics of operation

Charge generation

Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled to low temperatures. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified:
  • photo-generation (up to 95% of quantum efficiency),
  • generation in the depletion region,
  • generation at the surface, and
  • generation in the neutral bulk.
The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10⁵ electrons per pixel.
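For a feel of how dark current limits integration, the short sketch below works through an illustrative budget. The well depth matches the typical figure above, while the generation rates are assumptions rather than measured values for any particular sensor.

    # Rough, illustrative integration budget; the numbers below are assumptions.
    well_depth_e = 1e5           # full-well capacity ("well depth"), electrons per pixel
    dark_current_e_s = 0.05      # dark-current generation rate, electrons/pixel/s (assumed)
    photo_rate_e_s = 20.0        # photo-generation rate from the scene, electrons/pixel/s (assumed)

    t_full = well_depth_e / (photo_rate_e_s + dark_current_e_s)   # time until the well is full
    dark_e = dark_current_e_s * t_full                            # dark charge collected by then
    print(f"well fills after about {t_full:.0f} s; "
          f"dark current contributes about {dark_e:.0f} e- "
          f"(shot noise about {dark_e ** 0.5:.0f} e-)")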

Design and manufacturing

The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device:
This thin layer (≈ 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD.
The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate. 

Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region. 

Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions. 

Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible). 

The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete near the p–n junction, and will collect and move the charge packets beneath the gates—and within the channels—of the device. 

CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices. 

Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets.

Architecture

The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is its approach to the problem of shuttering. 

In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out. 

With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.

The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design. 

CCD from a 2.1 megapixel Argus digital camera
 
The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for applications that require the best possible light collection, and where issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device. 

CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light. 

CCD from a 2.1 megapixel Hewlett-Packard digital camera
 
Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers.

Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels.

Frame transfer CCD

A frame transfer CCD sensor
 
The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness. 

The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time has passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as "vertical smear" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. Unfortunately, faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level. 

A frame transfer CCD solves both problems: it has a shielded area, not sensitive to light, containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures.
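The sketch below makes the trade-off concrete: the smear each pixel picks up while passing a bright image point scales with the row-shift time, so a fast frame transfer reduces it by orders of magnitude. All the rates and timings are assumed for illustration only.

    # Illustrative smear comparison. Each pixel that passes "under" a bright image
    # point during a vertical shift collects charge for one row-shift time, so the
    # smear per pixel scales directly with how slowly the array is shifted.
    rate_e_s = 1e6        # electrons/s delivered by a bright star to one pixel location (assumed)
    n_rows = 1024         # number of rows the charge must cross (assumed)
    exposure_s = 0.1      # nominal exposure time (assumed)

    for t_row_s in (1e-3, 1e-6):          # slow readout shift vs. fast frame transfer
        signal_e = rate_e_s * exposure_s           # charge in the star's own pixel
        smear_e = rate_e_s * t_row_s               # extra charge each passing pixel picks up
        transfer_ms = n_rows * t_row_s * 1e3
        print(f"row shift {t_row_s * 1e6:8.1f} us -> transfer {transfer_ms:7.1f} ms, "
              f"smear {smear_e:8.1f} e-/pixel vs. signal {signal_e:.0f} e-")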

The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed.

Intensified charge-coupled device

An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD.

An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen, mounted one close behind the other in that sequence. Photons coming from the light source fall onto the photocathode, generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage applied between the photocathode and the MCP. The electrons are multiplied inside the MCP and then accelerated towards the phosphor screen, which converts the multiplied electrons back into photons that are guided to the CCD by a fiber optic or a lens. 

An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras. 

Besides the extremely high sensitivity of ICCD cameras, which enables single-photon detection, gateability is one of the major advantages of the ICCD over EMCCD cameras. The highest-performing ICCD cameras enable shutter times as short as 200 picoseconds.

ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around 170 K. This cooling system adds cost to the EMCCD camera and often causes condensation problems in the application.

ICCDs are used in night vision devices and in various scientific applications.

Electron-multiplying CCD

Electrons are transferred serially through the gain stages making up the multiplication register of an EMCCD. The high voltages used in these serial transfers induce the creation of additional charge carriers through impact ionisation.
 
In an EMCCD there is a dispersion (variation) in the number of electrons output by the multiplication register for a given (fixed) number of input electrons (shown in the legend on the right). The probability distribution for the number of output electrons is plotted logarithmically on the vertical axis for a simulation of a multiplication register. Also shown are results from the empirical fit equation shown on this page.
 
An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (g = (1 + P)^N), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described in U.S. Patent 3,761,744 (1973) by George E. Smith/Bell Telephone Laboratories. 

EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron — or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation:

P(n) = (n − m + 1)^(m−1) · exp(−(n − m + 1) / (g − 1)) / ((m − 1)! · (g − 1)^m)   for n ≥ m,

where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g.
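The sketch below evaluates this distribution numerically for an assumed case of three input electrons and a mean gain of 100, and shows that the gain dispersion contributes about as much relative fluctuation as the input shot noise, which is the "as if QE were halved" effect described above.

    import numpy as np
    from math import exp, factorial

    def p_output(n, m, g):
        # Probability of n output electrons for m input electrons and mean gain g,
        # using the fitted model quoted above (valid for n >= m).
        if n < m:
            return 0.0
        x = n - m + 1
        return x ** (m - 1) * exp(-x / (g - 1.0)) / (factorial(m - 1) * (g - 1.0) ** m)

    m, g = 3, 100.0                               # assumed: 3 input electrons, mean gain 100
    n = np.arange(m, 20 * int(m * g))
    p = np.array([p_output(k, m, g) for k in n])
    mean = float((n * p).sum())
    std = float(np.sqrt(((n - mean) ** 2 * p).sum()))
    print(f"total probability {p.sum():.3f}; mean output {mean:.0f} (m*g = {m * g:.0f})")
    print(f"relative dispersion {std / mean:.3f} vs. input shot noise {1 / m ** 0.5:.3f}")
    # The multiplication adds roughly as much relative fluctuation as the input
    # shot noise itself, which roughly doubles the variance of the signal.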

Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras need a cooling system — using either thermoelectric cooling or liquid nitrogen — to cool the chip down to temperatures in the range of −65 to −95 °C (−85 to −139 °F). This cooling system adds cost to the EMCCD imaging system and may cause condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues. 

The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs. 

In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device.

Use in astronomy

Due to the high quantum efficiencies of CCDs (for a quantum efficiency of 100%, one count equals one photon), linearity of their outputs, ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications. 

Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter both closed and open. The average of the images taken with the shutter closed is used to lower the random noise; this dark-frame average is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD.
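A minimal sketch of this calibration step is shown below, using synthetic frames with a fabricated hot pixel purely for illustration.

    import numpy as np

    def subtract_master_dark(light_frame, dark_frames):
        # Average several closed-shutter exposures into a "master dark" and
        # subtract it from an open-shutter exposure, as described above.
        master_dark = np.mean(np.stack(dark_frames), axis=0)
        return light_frame - master_dark

    rng = np.random.default_rng(0)
    darks = [rng.poisson(10, (4, 4)).astype(float) for _ in range(20)]
    for d in darks:
        d[2, 2] += 500.0                          # the same hot pixel in every dark frame
    light = rng.poisson(50, (4, 4)).astype(float)
    light[2, 2] += 500.0                          # hot pixel also present in the exposure
    print(subtract_master_dark(light, darks).round(1))   # hot pixel is removed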

The Hubble Space Telescope, in particular, has a highly developed series of steps (“data reduction pipeline”) to convert the raw CCD data to useful images.

CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them. 

Array of 30 CCDs used on the Sloan Digital Sky Survey telescope imaging camera, an example of "drift-scanning".
 
An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to survey over a quarter of the sky. 
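As a back-of-the-envelope illustration, the sketch below computes the row-shift rate needed to follow the sidereal drift for an assumed plate scale; the numbers are illustrative and not taken from the Sloan camera.

    import math

    plate_scale_arcsec_per_px = 0.4   # assumed plate scale, arcsec per pixel
    declination_deg = 20.0            # assumed pointing declination

    sky_rate_arcsec_s = 15.041 * math.cos(math.radians(declination_deg))  # sidereal drift rate
    rows_per_second = sky_rate_arcsec_s / plate_scale_arcsec_per_px       # required row-shift rate
    print(f"clock the parallel shifts at about {rows_per_second:.1f} rows/s "
          f"to keep the charge moving with the sky")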

In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers.

Color cameras

A Bayer filter on a CCD
 
CCD color sensor
 
×80 microscope view of an RGGB Bayer filter on the CCD sensor of a 240-line Sony PAL camcorder
 
Digital color cameras generally use a Bayer mask over the CCD. Each square of four pixels has one filtered red, one blue, and two green (the human eye is more sensitive to green than either red or blue). The result of this is that luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
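The sketch below shows one simple way the missing colour samples can be interpolated (bilinear demosaicing). It assumes SciPy is available and an RGGB layout; real cameras use considerably more sophisticated, edge-aware algorithms.

    import numpy as np
    from scipy.ndimage import convolve   # assumes SciPy is available

    def bilinear_demosaic(raw):
        # Very simple bilinear demosaic of an RGGB mosaic (illustration only).
        r = np.zeros(raw.shape, dtype=float)
        g = np.zeros(raw.shape, dtype=float)
        b = np.zeros(raw.shape, dtype=float)
        r[0::2, 0::2] = raw[0::2, 0::2]          # R samples: even rows, even columns
        g[0::2, 1::2] = raw[0::2, 1::2]          # G samples: the other two sites
        g[1::2, 0::2] = raw[1::2, 0::2]
        b[1::2, 1::2] = raw[1::2, 1::2]          # B samples: odd rows, odd columns
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0
        return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

    rgb = bilinear_demosaic(np.arange(36, dtype=float).reshape(6, 6))
    print(rgb.shape)   # (6, 6, 3): full-resolution luminance, interpolated colour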

Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism, which splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (and therefore higher light sensitivity for a given aperture size). This is because in a 3CCD device most of the light entering the aperture is captured by a sensor, while a Bayer mask absorbs a high proportion (about 2/3) of the light falling on each CCD pixel.

For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels becomes equivalent (the resolutions of the red and blue channels are quadrupled while the green channel is doubled).

Sensor sizes

Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch fraction designation such as 1/1.8″ or 2/3″ called the optical format. This measurement actually originates back in the 1950s and the time of Vidicon tubes.

Blooming

Vertical smear
 
When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking.

Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity.

Adaptive optics

From Wikipedia, the free encyclopedia
 
A deformable mirror can be used to correct wavefront errors in an astronomical telescope.
 
Illustration of a (simplified) adaptive optics system. The light first hits a tip–tilt (TT) mirror and then a deformable mirror (DM) which corrects the wavefront. Part of the light is tapped off by a beamsplitter (BS) to the wavefront sensor and the control hardware which sends updated signals to the DM and TT mirrors.
 
The wavefront of an aberrated image (left) can be measured using a wavefront sensor (center) and then corrected for using a deformable mirror (right)
 
Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of incoming wavefront distortions by deforming a mirror in order to compensate for the distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, in microscopy, optical fabrication and in retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors such as a deformable mirror or a liquid crystal array. 

Adaptive optics should not be confused with active optics, which works on a longer timescale to correct the primary mirror geometry. 

Other methods can achieve resolving power exceeding the limit imposed by atmospheric distortion, such as speckle imaging, aperture synthesis, and lucky imaging, or by moving outside the atmosphere with space telescopes, such as the Hubble Space Telescope.

History

Adaptive thin shell mirror.
 
Adaptive optics was first envisioned by Horace W. Babcock in 1953, and was also considered in science fiction, as in Poul Anderson's novel Tau Zero (1970), but it did not come into common usage until advances in computer technology during the 1990s made the technique practical. 

Some of the initial development work on adaptive optics was done by the US military during the Cold War and was intended for use in tracking Soviet satellites.

Microelectromechanical systems (MEMS) deformable mirrors and magnetics concept deformable mirrors are currently the most widely used technology in wavefront shaping applications for adaptive optics given their versatility, stroke, maturity of technology and the high resolution wavefront correction that they afford.

Tip–tilt correction

The simplest form of adaptive optics is tip-tilt correction, which corresponds to correction of the tilts of the wavefront in two dimensions (equivalent to correction of the position offsets for the image). This is performed using a rapidly moving tip–tilt mirror that makes small rotations around two of its axes. A significant fraction of the aberration introduced by the atmosphere can be removed in this way.

Tip–tilt mirrors are effectively segmented mirrors having only one segment which can tip and tilt, rather than having an array of multiple segments that can tip and tilt independently. Because such mirrors are relatively simple and have a large stroke, meaning they have large correcting power, most AO systems use them first to correct low-order aberrations. Higher-order aberrations may then be corrected with deformable mirrors.
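A minimal sketch of such a tip–tilt loop is given below: the mirror command is nudged each cycle by a fraction of the measured image-position error. The loop gain, disturbance values and units are illustrative assumptions.

    def tip_tilt_update(mirror_tilt, measured_offset_arcsec, loop_gain=0.4):
        # One step of a simple tip-tilt loop (an integrator controller).
        # measured_offset_arcsec: (x, y) image-position error from the sensor;
        # loop_gain < 1 keeps the loop stable when the measurement is noisy.
        tx, ty = mirror_tilt
        ex, ey = measured_offset_arcsec
        return (tx - loop_gain * ex, ty - loop_gain * ey)

    tilt = (0.0, 0.0)
    for _ in range(5):
        # Assume a static image offset of (0.8", -0.3") imposed by the atmosphere,
        # partially cancelled by whatever tilt the mirror already applies.
        residual = (0.8 + tilt[0], -0.3 + tilt[1])
        tilt = tip_tilt_update(tilt, residual)
    print(tilt)   # converges toward (-0.8, +0.3), cancelling the mean tilt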

In astronomy

Astronomers at the Very Large Telescope site in Chile use adaptive optics.

Atmospheric seeing

When light from a star passes through the Earth's atmosphere, the wavefront is perturbed.
 

Negative images of a star through a telescope. The left-hand panel shows the slow-motion movie of a star when the adaptive optics system is switched off. The right-hand panel shows the slow motion movie of the same star when the AO system is switched on.
 
When light from a star or another astronomical object enters the Earth's atmosphere, atmospheric turbulence (introduced, for example, by different temperature layers and different wind speeds interacting) can distort and move the image in various ways. Visual images produced by any telescope larger than approximately 20 centimeters are blurred by these distortions.

Wavefront sensing and correction

An adaptive optics system tries to correct these distortions, using a wavefront sensor which takes some of the astronomical light, a deformable mirror that lies in the optical path, and a computer that receives input from the detector. The wavefront sensor measures the distortions the atmosphere has introduced on the timescale of a few milliseconds; the computer calculates the optimal mirror shape to correct the distortions and the surface of the deformable mirror is reshaped accordingly. For example, an 8–10 m telescope (like the VLT or Keck) can produce AO-corrected images with an angular resolution of 30–60 milliarcseconds (mas) at infrared wavelengths, while the resolution without correction is of the order of 1 arcsecond.

In order to perform adaptive optics correction, the shape of the incoming wavefronts must be measured as a function of position in the telescope aperture plane. Typically the circular telescope aperture is split up into an array of pixels in a wavefront sensor, either using an array of small lenslets (a Shack–Hartmann wavefront sensor), or using a curvature or pyramid sensor which operates on images of the telescope aperture. The mean wavefront perturbation in each pixel is calculated. This pixelated map of the wavefronts is fed into the deformable mirror and used to correct the wavefront errors introduced by the atmosphere. It is not necessary for the shape or size of the astronomical object to be known – even Solar System objects which are not point-like can be used in a Shack–Hartmann wavefront sensor, and time-varying structure on the surface of the Sun is commonly used for adaptive optics at solar telescopes. The deformable mirror corrects incoming light so that the images appear sharp.
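As a toy illustration of turning a slope map into a wavefront, the sketch below integrates slopes measured along a single row of subapertures by least squares; real systems use two-dimensional zonal or modal reconstructors, and the numbers here are assumptions.

    import numpy as np

    def reconstruct_1d(mean_slopes, spacing_m):
        # Toy zonal reconstruction across one row of subapertures: each measured
        # slope approximates the phase difference between two neighbouring points
        # divided by their spacing, so the wavefront is recovered (up to an
        # uninteresting constant "piston" term) by least-squares integration.
        n_points = len(mean_slopes) + 1
        d = np.zeros((len(mean_slopes), n_points))
        for i in range(len(mean_slopes)):
            d[i, i], d[i, i + 1] = -1.0, 1.0                 # finite-difference operator
        rhs = np.asarray(mean_slopes) * spacing_m
        phase, *_ = np.linalg.lstsq(d, rhs, rcond=None)
        return phase - phase.mean()                          # remove piston

    # A pure tilt (constant slope) reconstructs as a straight wavefront ramp.
    print(reconstruct_1d([1e-6, 1e-6, 1e-6, 1e-6], spacing_m=0.5))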

Using guide stars

Natural guide stars

Because a science target is often too faint to be used as a reference star for measuring the shape of the optical wavefronts, a nearby brighter guide star can be used instead. The light from the science target has passed through approximately the same atmospheric turbulence as the reference star's light and so its image is also corrected, although generally to a lower accuracy. 

A laser beam directed toward the centre of the Milky Way. This laser beam can then be used as a guide star for the AO.
 
The necessity of a reference star means that an adaptive optics system cannot work everywhere on the sky, but only where a guide star of sufficient luminosity (for current systems, about magnitude 12–15) can be found very near to the object of the observation. This severely limits the application of the technique for astronomical observations. Another major limitation is the small field of view over which the adaptive optics correction is good. As the angular distance from the guide star increases, the image quality degrades. A technique known as "multiconjugate adaptive optics" uses several deformable mirrors to achieve a greater field of view.

Artificial guide stars

An alternative is the use of a laser beam to generate a reference light source (a laser guide star, LGS) in the atmosphere. There are two kinds of LGSs: Rayleigh guide stars and sodium guide stars. Rayleigh guide stars work by propagating a laser, usually at near-ultraviolet wavelengths, and detecting the backscatter from air at altitudes of 15–25 km (49,000–82,000 ft). Sodium guide stars use laser light at 589 nm to resonantly excite sodium atoms higher in the mesosphere and thermosphere, which then appear to "glow". The LGS can then be used as a wavefront reference in the same way as a natural guide star – except that (much fainter) natural reference stars are still required for image position (tip/tilt) information. The lasers are often pulsed, with measurement of the atmosphere being limited to a window occurring a few microseconds after the pulse has been launched. This allows the system to ignore most scattered light at ground level; only light which has travelled for several microseconds high up into the atmosphere and back is actually detected.

In retinal imaging

Artist's impression of the European Extremely Large Telescope deploying lasers for adaptive optics
 
Ocular aberrations are distortions in the wavefront passing through the pupil of the eye. These optical aberrations diminish the quality of the image formed on the retina, sometimes necessitating the wearing of spectacles or contact lenses. In the case of retinal imaging, light passing out of the eye carries similar wavefront distortions, leading to an inability to resolve the microscopic structure (cells and capillaries) of the retina. Spectacles and contact lenses correct "low-order aberrations", such as defocus and astigmatism, which tend to be stable in humans for long periods of time (months or years). While correction of these is sufficient for normal visual functioning, it is generally insufficient to achieve microscopic resolution. Additionally, "high-order aberrations", such as coma, spherical aberration, and trefoil, must also be corrected in order to achieve microscopic resolution. High-order aberrations, unlike low-order, are not stable over time, and may change over time scales of 0.1s to 0.01s. The correction of these aberrations requires continuous, high-frequency measurement and compensation.

Measurement of ocular aberrations

Ocular aberrations are generally measured using a wavefront sensor, and the most commonly used type of wavefront sensor is the Shack–Hartmann. Ocular aberrations are caused by spatial phase nonuniformities in the wavefront exiting the eye. In a Shack-Hartmann wavefront sensor, these are measured by placing a two-dimensional array of small lenses (lenslets) in a pupil plane conjugate to the eye's pupil, and a CCD chip at the back focal plane of the lenslets. The lenslets cause spots to be focused onto the CCD chip, and the positions of these spots are calculated using a centroiding algorithm. The positions of these spots are compared with the positions of reference spots, and the displacements between the two are used to determine the local curvature of the wavefront allowing one to numerically reconstruct the wavefront information—an estimate of the phase nonuniformities causing aberration.
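A minimal sketch of the centroiding and slope-conversion steps is given below. The pixel size and lenslet focal length are illustrative assumptions, and practical systems use more careful background handling and windowing than this.

    import numpy as np

    def spot_centroid(window):
        # Centre-of-mass centroid of one lenslet's spot on the CCD (sketch only;
        # here the median is subtracted as a crude background estimate).
        w = np.clip(window - np.median(window), 0.0, None)
        ys, xs = np.indices(w.shape)
        total = w.sum()
        return (ys * w).sum() / total, (xs * w).sum() / total

    def local_slope(centroid_px, reference_px, pixel_size_m, lenslet_focal_m):
        # Spot displacement from its reference position, converted to the local
        # mean wavefront slope (radians) over that lenslet's subaperture.
        dy = (centroid_px[0] - reference_px[0]) * pixel_size_m
        dx = (centroid_px[1] - reference_px[1]) * pixel_size_m
        return dx / lenslet_focal_m, dy / lenslet_focal_m

    # Example (assumed values): a spot displaced by (+0.5, -0.2) pixels with 10 um
    # pixels behind a 5 mm lenslet gives a slope of about (1e-3, -4e-4) rad.
    print(local_slope((9.8, 10.5), (10.0, 10.0), 10e-6, 5e-3))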

Correction of ocular aberrations

Once the local phase errors in the wavefront are known, they can be corrected by placing a phase modulator such as a deformable mirror at yet another plane in the system conjugate to the eye's pupil. The phase errors can be used to reconstruct the wavefront, which can then be used to control the deformable mirror. Alternatively, the local phase errors can be used directly to calculate the deformable mirror instructions.

Open loop vs. closed loop operation

If the wavefront error is measured before it has been corrected by the wavefront corrector, then operation is said to be "open loop". If the wavefront error is measured after it has been corrected by the wavefront corrector, then operation is said to be "closed loop". In the latter case, the wavefront errors measured will be small, and errors in the measurement and correction are more likely to be removed. Closed-loop correction is the norm.
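The difference can be seen in a few lines: the sketch below runs a simple closed-loop integrator against an assumed static disturbance and shows the measured residual shrinking each cycle.

    def integrator_step(command, measured_error, gain=0.5):
        # One cycle of a closed-loop integrator: only a fraction of the small
        # residual error seen by the sensor is added to the corrector command.
        return command + gain * measured_error

    disturbance = 1.0        # assumed static wavefront error, arbitrary units
    command = 0.0
    for cycle in range(6):
        residual = disturbance - command      # closed loop: sensor sits after the corrector
        command = integrator_step(command, residual)
        print(f"cycle {cycle}: residual {residual:.3f}")
    # The residual shrinks geometrically toward zero. In open loop the sensor
    # would see the full disturbance every cycle, so miscalibration of the
    # corrector would never be measured or removed.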

Applications

Adaptive optics was first applied to flood-illumination retinal imaging to produce images of single cones in the living human eye. It has also been used in conjunction with scanning laser ophthalmoscopy to produce (also in living human eyes) the first images of retinal microvasculature and associated blood flow and retinal pigment epithelium cells in addition to single cones. Combined with optical coherence tomography, adaptive optics has allowed the first three-dimensional images of living cone photoreceptors to be collected.

In microscopy

In microscopy, adaptive optics is used to correct for sample-induced aberrations. The required wavefront correction is either measured directly using a wavefront sensor or estimated using sensorless AO techniques.

Other uses

GRAAL is a ground layer adaptive optics instrument assisted by lasers.
 
Besides its use for improving nighttime astronomical imaging and retinal imaging, adaptive optics technology has also been used in other settings. Adaptive optics is used for solar astronomy at observatories such as the Swedish 1-m Solar Telescope and Big Bear Solar Observatory. It is also expected to play a military role by allowing ground-based and airborne laser weapons to reach and destroy targets at a distance including satellites in orbit. The Missile Defense Agency Airborne Laser program is the principal example of this.

Adaptive optics has been used to enhance the performance of free-space optical communication systems and to control the spatial output of optical fibers.

Medical applications include imaging of the retina, where it has been combined with optical coherence tomography. The development of the adaptive optics scanning laser ophthalmoscope (AOSLO) has also made it possible to correct for the aberrations of the wavefront reflected from the human retina and to take diffraction-limited images of the human rods and cones. Development of an Adaptive Scanning Optical Microscope (ASOM) was announced by Thorlabs in April 2007. Adaptive and active optics are also being developed for use in glasses to achieve better than 20/20 vision, initially for military applications.

After propagation of a wavefront, parts of it may overlap, leading to interference and preventing adaptive optics from correcting it. Propagation of a curved wavefront always leads to amplitude variation. This needs to be considered if a good beam profile is to be achieved in laser applications. In material processing using lasers, adjustments can be made on the fly to vary the focus depth during piercing and to compensate for changes in focal length across the working surface. Beam width can also be adjusted to switch between piercing and cutting mode. This eliminates the need for the optics of the laser head to be switched, cutting down on overall processing time for more dynamic modifications.

Adaptive optics, especially wavefront-coding spatial light modulators, are frequently used in optical trapping applications to multiplex and dynamically reconfigure laser foci that are used to micro-manipulate biological specimens.

Beam stabilization

A rather simple example is the stabilization of the position and direction of a laser beam between modules in a large free-space optical communication system. Fourier optics is used to control both direction and position. The actual beam position is measured by photodiodes. This signal is fed into analog-to-digital converters, and a microcontroller runs a PID control algorithm. The controller drives digital-to-analog converters which drive stepper motors attached to mirror mounts.

If the beam is to be centered onto 4-quadrant diodes, no analog-to-digital converter is needed; operational amplifiers are sufficient.
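A minimal sketch of the control step described here is given below; the PID gains and sample time are assumed values, not those of any particular system.

    class PID:
        # Minimal PID controller of the kind described above.
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Each control cycle: read the beam position from the photodiodes (directly
    # from a 4-quadrant diode via op-amps, or through an ADC), form the error
    # relative to the desired position, and send the controller output to the
    # DACs or stepper drivers that tilt the mirror mounts.
    pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=1e-3)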

Mauna Kea Observatories

From Wikipedia, the free encyclopedia
 
Mauna Kea Observatories
Alternative names: MKO
Observatory code: 568
Location: Mauna Kea, Hawaii County, US
Coordinates: 19°49′20″N 155°28′30″W
Altitude: 4,205 m (13,796 ft)
Website: www.ifa.hawaii.edu/mko/

Telescopes:
  • CSO (closed 2015): 10.4 m submillimeter
  • CFHT: 3.58 m visible/infrared
  • Gemini North: 8.1 m visible/infrared
  • IRTF: 3.0 m infrared
  • JCMT: 15 m submillimeter
  • Subaru Telescope: 8.2 m visible/infrared
  • SMA: 8 × 6 m arrayed radio telescopes
  • UKIRT: 3.8 m infrared
  • VLBA receiver: 25 m radio telescope
  • Keck Observatory: 2 × 10 m visible/infrared telescopes
  • UH88: 2.2 m visible/infrared
  • UH Hilo Hoku Ke'a Telescope: 0.9 m visible

The Mauna Kea Observatories (MKO) are a number of independent astronomical research facilities and large telescope observatories that are located at the summit of Mauna Kea on the Big Island of Hawaiʻi, United States. The facilities are located in a 525-acre (212 ha) special land use zone known as the "Astronomy Precinct", which is located within the 11,228-acre (4,544 ha) Mauna Kea Science Reserve. The Astronomy Precinct was established in 1967 and is located on land protected by the Historical Preservation Act for its significance to Hawaiian culture.

The location is near ideal because of its dark skies from lack of light pollution, good astronomical seeing, low humidity, high elevation of 4,205 meters (13,796 ft), position above most of the water vapor in the atmosphere, clean air, good weather and low latitude location.

Origin and background

After studying photos for NASA's Apollo program that contained greater detail than any ground-based telescope, Gerard Kuiper began seeking an arid site for infrared studies. While he first began looking in Chile, he also made the decision to perform tests in the Hawaiian Islands. Tests on Maui's Haleakalā were promising, but the mountain was too low in the inversion layer and often covered by clouds. On the "Big Island" of Hawaiʻi, Mauna Kea is considered the highest island mountain in the world. While the summit is often covered with snow, the air is extremely dry. Kuiper began looking into the possibility of an observatory on Mauna Kea. After testing, he discovered the low humidity was perfect for infrared signals. He persuaded Hawaiʻi Governor John A. Burns to bulldoze a dirt road to the summit, where he built a small telescope on Puʻu Poliʻahu, a cinder cone peak. The peak was the second highest on the mountain, with the highest peak being holy ground, so Kuiper avoided it.

Next, Kuiper tried enlisting NASA to fund a larger facility with a large telescope, housing and other needed structures. NASA, in turn, decided to make the project open to competition. John Jefferies, a professor of physics at the University of Hawaii, placed a bid on behalf of the university. Jefferies had gained his reputation through observations at Sacramento Peak Observatory. The proposal was for a two-meter telescope to serve both the needs of NASA and the university. While large telescopes are not ordinarily awarded to universities without well-established astronomers, Jefferies and UH were awarded the NASA contract, infuriating Kuiper, who felt that "his mountain" had been "stolen" from him. Kuiper would abandon his site (the very first telescope on Mauna Kea) over the competition and begin work in Arizona on a different NASA project.

After considerable testing by Jefferies' team, the best locations were determined to be near the summit at the top of the cinder cones. Testing also determined Mauna Kea to be superb for nighttime viewing due to many factors, including the thin air, constant trade winds and being surrounded by sea. Jefferies would build a 2.24-meter telescope, with the State of Hawaiʻi agreeing to build a reliable, all-weather roadway to the summit. Building began in 1967 and first light was seen in 1970.

Other groups began requesting subleases on the newly accessible mountaintop. By 1970, two 24 in (0.6 m) telescopes had been constructed by the United States Air Force and Lowell Observatory. In 1973, Canada and France agreed to build the 3.6 m CFHT on Mauna Kea. However, local organizations started to raise concerns about the environmental impact of the observatory. This led the Department of Land and Natural Resources to prepare an initial management plan, drafted in 1977 and supplemented in 1980. In January 1982, the University of Hawaiʻi Board of Regents approved a plan to support the continued development of scientific facilities at the site. In 1998, 2,033 acres (823 ha) were transferred from the observatory lease to supplement the Mauna Kea Ice Age Reserve. The 1982 plan was replaced in 2000 by an extension designed to serve until 2020: it instituted an Office of Mauna Kea Management, designated 525 acres (212 ha) for astronomy, and shifted the remaining 10,763 acres (4,356 ha) to "natural and cultural preservation". This plan was further revised to address concern expressed in the Hawaiian community that a lack of respect was being shown toward the cultural value the mountain embodied to the region's indigenous people.

As of 2012, the Mauna Kea Science Reserve has 13 observation facilities, each funded by as many as 11 countries. It is one of the world's premier observatories for optical, infrared, and submillimeter astronomy, and in 2009 was the largest measured by light gathering power. There are nine telescopes working in the visible and infrared spectrum, three in the submillimeter spectrum, and one in the radio spectrum, with mirrors or dishes ranging from 0.9 to 25 m (3 to 82 ft). In comparison, the Hubble Space Telescope has a 2.4 m (7.9 ft) mirror, similar in size to the UH88, now the second smallest telescope on the mountain.

Controversies

Planned new telescopes, including the Thirty Meter Telescope, have attracted controversy due to their potential cultural and ecological impact. The multi-telescope "outrigger" extension to the Keck telescopes, which required new sites, was eventually canceled. Three or four of the mountain's 13 existing telescopes must be dismantled over the next decade, and under the TMT proposal its site is to be the last area on Mauna Kea on which any telescope would ever be built.

Management

The Reserve was established in 1968, and is leased by the State of Hawaiʻi's Department of Land and Natural Resources (DLNR). The University of Hawaiʻi manages the site and leases land to several multi-national facilities, which have invested more than $2 billion in science and technology. The lease expires in 2033 and after that 40 of 45 square kilometers (25 of 28 square miles) revert to the state of Hawaii.

Location

Mauna Kea Observatories seen from the base of Mauna Kea
 
The altitude and isolation in the middle of the Pacific Ocean make Mauna Kea one of the best locations on Earth for ground-based astronomy. It is an ideal location for submillimeter, infrared and optical observations. The seeing statistics show that Mauna Kea is the best site in terms of optical and infrared image quality; for example, the CFHT site has a median seeing of 0.43 arcseconds.

Accommodations for research astronomers are located at the Onizuka Center for International Astronomy (often called Hale Pōhaku), 7 miles (11 km) by unpaved steep road from the summit at 9,300 feet (2,800 m) above sea level.

An adjacent visitor information station is located at 9,200 feet (2,800 m). The summit of Mauna Kea is so high that tourists are advised to stop at the visitor station for at least 30 minutes to acclimate to atmospheric conditions before continuing to the summit, and scientists often stay at Hale Pōhaku for eight hours or more before spending a full night at observatories on the summit, with some telescopes requiring observers to spend one full night at Hale Pōhaku before working at the summit.

Telescopes

The Submillimeter Array of radio telescopes at night, lit by flash.
 
From left-to-right: United Kingdom Infrared Telescope, Caltech Sub-Millimeter Observatory (closed 2015), James Clerk Maxwell Telescope, Smithsonian Sub-Millimeter Array, Subaru Telescope, W.M. Keck Observatory (I & II), NASA Infrared Telescope Facility, Gemini North Telescope
 
Telescopes found at the summit of Mauna Kea are funded by government agencies of various nations. The University of Hawaiʻi directly administers two telescopes. In total, there are twelve facilities housing thirteen telescopes at or around the summit of Mauna Kea.
CSO, UKIRT, and Hoku Kea are scheduled for decommissioning as part of the Mauna Kea Comprehensive Management Plan.

Opposition and protests

In Honolulu, the governor and legislature, enthusiastic about the development, set aside an even larger area for the observatory after the initial project, causing opposition on the Big Island, in the city of Hilo. Native Hawaiians (kānaka ʻōiwi) believed the entire site was sacred and that developing the mountain, even for science, would spoil the area. Environmentalists were concerned about rare native bird populations and other citizens of Hilo were concerned about the sight of the domes from the city. Using town hall meetings, Jefferies was able to overcome opposition by weighing the economic advantage and prestige the island would receive. There has been substantial opposition to the Mauna Kea observatories that continues to grow. Over the years, the opposition to the observatories may have become the most visible example of the conflict science has encountered over access and use of environmental and culturally significant sites. Opposition to development grew shortly after expansion of the observatories commenced. Once access was opened up by the roadway to the summit, skiers began using it for recreation and objected when the road was closed as a precaution against vandalism when the telescopes were being built. Hunters voiced concerns, as did the Hawaiian Audubon Society, which was supported by Governor George Ariyoshi.

The Audubon Society objected to further development on Mauna Kea over concerns to habitat of the endangered Palila, a species endemic to only specific parts of this mountain. The bird is the last of the finch-billed honeycreepers existing on the island. Over 50% of native bird species had been killed off due to loss of habitat from early western settlers or the introduction of non-native species competing for resources. Hunters and sportsmen were concerned that the hunting of feral animals would be affected by the telescope operations. A "Save Mauna Kea" movement was inspired by the proliferation of telescopes, with opposition believing development of the mountain to be sacrilegious. Native Hawaiian non-profit groups, such as Kahea, whose goals are the protection of cultural heritage and the environment, oppose development on Mauna Kea as a sacred space to the Hawaiian religion. Today, Mauna Kea hosts the world's largest location for telescope observations in infrared and submillimeter astronomy. The land is protected by the United States Historical Preservation Act due to its significance to Hawaiian culture, but development was still permitted.

2006 Kiholo Bay earthquake

A number of the telescopes sustained minor damage during the October 15, 2006 Kiholo Bay earthquake and aftershocks. JCMT was performing an inclinometry run and recorded the earthquake on its tilt sensors. Both CFHT and W. M. Keck Observatory were operational and back online by October 19.

Operator (computer programming)

From Wikipedia, the free encyclopedia