Active-pixel sensor

From Wikipedia, the free encyclopedia

An active-pixel sensor (APS) is an image sensor where each pixel sensor unit cell has a photodetector (typically a pinned photodiode) and one or more active transistors. In a metal–oxide–semiconductor (MOS) active-pixel sensor, MOS field-effect transistors (MOSFETs) are used as amplifiers. There are different types of APS, including the early NMOS APS and the much more common complementary MOS (CMOS) APS, also known as the CMOS sensor, which is widely used in digital camera technologies such as cell phone cameras, web cameras, most modern digital pocket cameras, most digital single-lens reflex cameras (DSLRs), and mirrorless interchangeable-lens cameras (MILCs). CMOS sensors emerged as an alternative to charge-coupled device (CCD) image sensors and eventually outsold them by the mid-2000s.

CMOS image sensor.

The term 'active pixel sensor' is also used to refer to the individual pixel sensor itself, as opposed to the image sensor. In this case, the image sensor is sometimes called an active pixel sensor imager, or active-pixel image sensor.

History

Background

While researching metal–oxide–semiconductor (MOS) technology, Willard Boyle and George E. Smith realized that an electric charge could be stored on a tiny MOS capacitor, which became the basic building block of the charge-coupled device (CCD), which they invented in 1969. An issue with CCD technology was that it required nearly perfect charge transfer, which, according to Eric Fossum, "makes their radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response."

At RCA Laboratories, a research team including Paul K. Weimer, W.S. Pike and G. Sadasiv in 1969 proposed a solid-state image sensor with scanning circuits using thin-film transistors (TFTs), with photoconductive film used for the photodetector. A low-resolution "mostly digital" N-channel MOSFET (NMOS) imager with intra-pixel amplification, for an optical mouse application, was demonstrated by Richard F. Lyon in 1981. Another type of image sensor technology that is related to the APS is the hybrid infrared focal plane array (IRFPA), designed to operate at cryogenic temperatures in the infrared spectrum. The devices are two chips that are put together like a sandwich: one chip contains detector elements made in InGaAs or HgCdTe, and the other chip is typically made of silicon and is used to read out the photodetectors. The exact date of origin of these devices is classified, but they were in use by the mid-1980s.

A key element of the modern CMOS sensor is the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980, and then publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The pinned photodiode is a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD sensors, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.

Passive-pixel sensor

The precursor to the APS was the passive-pixel sensor (PPS), a type of photodiode array (PDA). A passive-pixel sensor consists of passive pixels which are read out without amplification, with each pixel consisting of a photodiode and a MOSFET switch. In a photodiode array, pixels contain a p-n junction, integrated capacitor, and MOSFETs as selection transistors. A photodiode array was proposed by G. Weckler in 1968, predating the CCD. This was the basis for the PPS, which had image sensor elements with in-pixel selection transistors, proposed by Peter J.W. Noble in 1968, and by Savvas G. Chamberlain in 1969.

Passive-pixel sensors were being investigated as a solid-state alternative to vacuum-tube imaging devices. The MOS passive-pixel sensor used just a simple switch in the pixel to read out the photodiode integrated charge. Pixels were arrayed in a two-dimensional structure, with an access enable wire shared by pixels in the same row, and an output wire shared by pixels in the same column. At the end of each column was a transistor. Passive-pixel sensors suffered from many limitations, such as high noise, slow readout, and lack of scalability. Early photodiode arrays were complex and impractical, requiring selection transistors to be fabricated within each pixel, along with on-chip multiplexer circuits. The noise of photodiode arrays was also a limitation to performance, as the photodiode readout bus capacitance resulted in increased noise level. Correlated double sampling (CDS) could also not be used with a photodiode array without external memory. It was not possible to fabricate active pixel sensors with a practical pixel size in the 1970s, due to limited microlithography technology at the time. Because the MOS process was so variable and MOS transistors had characteristics that changed over time (Vth instability), the CCD's charge-domain operation was more manufacturable than MOS passive pixel sensors.

Active-pixel sensor

The active-pixel sensor consists of active pixels, each containing one or more MOSFET amplifiers which convert the photo-generated charge to a voltage, amplify the signal voltage, and reduce noise. The concept of an active-pixel device was proposed by Peter Noble in 1968. He created sensor arrays with active MOS readout amplifiers per pixel, in essentially the modern three-transistor configuration: the buried photodiode structure, selection transistor, and MOS amplifier.

The MOS active-pixel concept was implemented as the charge modulation device (CMD) by Olympus in Japan during the mid-1980s. This was enabled by advances in MOSFET semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels during the 1980s to early 1990s. The first MOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The term active pixel sensor (APS) was coined by Nakamura while working on the CMD active-pixel sensor at Olympus. The CMD imager had a vertical APS structure, which increases fill-factor (or reduces pixel size) by storing the signal charge under an output NMOS transistor. Other Japanese semiconductor companies soon followed with their own active pixel sensors during the late 1980s to early 1990s. Between 1988 and 1991, Toshiba developed the "double-gate floating surface transistor" sensor, which had a lateral APS structure, with each pixel containing a buried-channel MOS photogate and a PMOS output amplifier. Between 1989 and 1992, Canon developed the base-stored image sensor (BASIS), which used a vertical APS structure similar to the Olympus sensor, but with bipolar transistors rather than MOSFETs.

In the early 1990s, American companies began developing practical MOS active pixel sensors. In 1991, Texas Instruments developed the bulk CMD (BCMD) sensor, which was fabricated at the company's Japanese branch and had a vertical APS structure similar to the Olympus CMD sensor, but was more complex and used PMOS rather than NMOS transistors.

CMOS sensor

By the late 1980s to early 1990s, the CMOS process was well-established as a well-controlled stable semiconductor manufacturing process and was the baseline process for almost all logic and microprocessors. There was a resurgence in the use of passive-pixel sensors for low-end imaging applications, while active-pixel sensors began being used for low-resolution high-function applications such as retina simulation and high-energy particle detectors. However, CCDs continued to have much lower temporal noise and fixed-pattern noise and were the dominant technology for consumer applications such as camcorders as well as for broadcast cameras, where they were displacing video camera tubes.

In 1993, the first practical APS to be successfully fabricated outside of Japan was developed at NASA's Jet Propulsion Laboratory (JPL): a CMOS-compatible APS whose development was led by Eric Fossum. It had a lateral APS structure similar to the Toshiba sensor, but was fabricated with CMOS rather than PMOS transistors. It was the first CMOS sensor with intra-pixel charge transfer.

Fossum, who worked at JPL, led the development of an image sensor that used intra-pixel charge transfer along with an in-pixel amplifier to achieve true correlated double sampling (CDS) and low temporal noise operation, and on-chip circuits for fixed-pattern noise reduction. He also published an extensive 1993 article predicting the emergence of APS imagers as the commercial successor of CCDs. In this paper, Fossum broadly defined the active pixel sensor (APS) and classified two types of APS structures, the lateral APS and the vertical APS. He also gave an overview of the history of APS technology, from the first APS sensors in Japan to the development of the CMOS sensor at JPL.

In 1994, Fossum proposed an improvement to the CMOS sensor: the integration of the pinned photodiode (PPD). A CMOS sensor with PPD technology was first fabricated in 1995 by a joint JPL and Kodak team that included Fossum along with P. P. K. Lee, R. C. Gee, R. M. Guidash and T. H. Lee. Between 1993 and 1995, the Jet Propulsion Laboratory developed a number of prototype devices, which validated the key features of the technology. Though primitive, these devices demonstrated good image performance with high readout speed and low power consumption.

In 1995, being frustrated by the slow pace of the technology's adoption, Fossum and his then-wife Dr. Sabrina Kemeny co-founded Photobit Corporation to commercialize the technology. It continued to develop and commercialize APS technology for a number of applications, such as web cams, high speed and motion capture cameras, digital radiography, endoscopy (pill) cameras, digital single-lens reflex cameras (DSLRs) and camera-phones. Many other small image sensor companies also sprang to life shortly thereafter due to the accessibility of the CMOS process and all quickly adopted the active pixel sensor approach.

Photobit's CMOS sensors found their way into webcams manufactured by Logitech and Intel, before Photobit was purchased by Micron Technology in 2001. The early CMOS sensor market was initially led by American manufacturers such as Micron and OmniVision, allowing the United States to briefly recapture a portion of the overall image sensor market from Japan, before the CMOS sensor market eventually came to be dominated by Japan, South Korea and China. The CMOS sensor with PPD technology was further advanced and refined by R. M. Guidash in 1997, K. Yonemoto and H. Sumi in 2000, and I. Inoue in 2003. This enabled CMOS sensors to achieve imaging performance on par with CCD sensors, and later to exceed them.

By 2000, CMOS sensors were used in a variety of applications, including low-cost cameras, PC cameras, fax, multimedia, security, surveillance, and videophones.

The video industry switched to CMOS cameras with the advent of high-definition video (HD video), as the large number of pixels would have required significantly higher power consumption with CCD sensors, which would overheat and drain batteries. In 2007, Sony commercialized CMOS sensors with an original column A/D conversion circuit for fast, low-noise performance, followed in 2009 by the back-illuminated CMOS sensor (BI sensor), with twice the sensitivity of conventional image sensors, exceeding that of the human eye.

CMOS sensors went on to have a significant cultural impact, leading to the mass proliferation of digital cameras and camera phones, which bolstered the rise of social media and selfie culture, and impacted social and political movements around the world. By 2007, sales of CMOS active-pixel sensors had surpassed CCD sensors, with CMOS sensors accounting for 54% of the global image sensor market at the time. By 2012, CMOS sensors increased their share to 74% of the market. As of 2017, CMOS sensors account for 89% of global image sensor sales. In recent years, the CMOS sensor technology has spread to medium-format photography with Phase One being the first to launch a medium format digital back with a Sony-built CMOS sensor.

In 2012, Sony introduced the stacked CMOS BI sensor. Fossum now performs research on Quanta Image Sensor (QIS) technology, a fundamental change in the way cameras collect images that is being developed at Dartmouth. In the QIS, the goal is to count every photon that strikes the image sensor, to provide a resolution of 1 billion or more specialized photoelements (called jots) per sensor, and to read out jot bit planes hundreds or thousands of times per second, resulting in terabits per second of data.

Boyd Fowler of OmniVision is known for his work in CMOS image sensor development. His contributions include the first digital-pixel CMOS image sensor in 1994; the first scientific linear CMOS image sensor with single-electron RMS read noise in 2003; and the first multi-megapixel scientific area CMOS image sensor with simultaneous high dynamic range (86 dB), fast readout (100 frames/second) and ultra-low read noise (1.2 e- RMS) (sCMOS) in 2010. He also patented the first CMOS image sensor for intraoral dental X-rays with clipped corners for better patient comfort.

By the late 2010s, CMOS sensors had largely if not completely replaced CCD sensors, as they can be made in existing semiconductor production lines, reducing costs, and consume less power, among other advantages.

Comparison to CCDs

APS pixels solve the speed and scalability issues of the passive-pixel sensor. They generally consume less power than CCDs, have less image lag, and require less specialized manufacturing facilities. Unlike CCDs, APS sensors can combine the image sensor function and image processing functions within the same integrated circuit. APS sensors have found markets in many consumer applications, especially camera phones. They have also been used in other fields including digital radiography, military ultra high speed image acquisition, security cameras, and optical mice. Manufacturers include Aptina Imaging (independent spinout from Micron Technology, which purchased Photobit in 2001), Canon, Samsung, STMicroelectronics, Toshiba, OmniVision Technologies, Sony, and Foveon, among others. CMOS-type APS sensors are typically suited to applications in which packaging, power management, and on-chip processing are important. CMOS-type sensors are widely used, from high-end digital photography down to mobile-phone cameras.

Advantages of CMOS compared with CCD

Blooming in a CCD image

A primary advantage of a CMOS sensor is that it is typically less expensive to produce than a CCD sensor, as the image capturing and image processing elements can be combined onto the same IC, with simpler construction required.

A CMOS sensor also typically has better control of blooming (that is, of bleeding of photo-charge from an over-exposed pixel into other nearby pixels).

In three-sensor camera systems that use separate sensors to resolve the red, green, and blue components of the image in conjunction with beam splitter prisms, the three CMOS sensors can be identical, whereas most splitter prisms require that one of the CCD sensors be a mirror image of the other two in order to read out the image in a compatible order. Unlike CCD sensors, CMOS sensors have the ability to reverse the addressing of the sensor elements. CMOS sensors with a film speed of ISO 4 million exist.

Disadvantages of CMOS compared with CCD

Distortion caused by a rolling shutter

Since a CMOS sensor typically captures a row at a time over approximately 1/60 or 1/50 of a second (depending on refresh rate), it may produce a "rolling shutter" effect, where the image is skewed (tilted to the left or right, depending on the direction of camera or subject movement). For example, when tracking a car moving at high speed, the car will not be distorted but the background will appear to be tilted. A frame-transfer CCD sensor or "global shutter" CMOS sensor does not have this problem; instead it captures the entire image at once into a frame store.
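
To make the skew concrete, the following minimal Python sketch (all timing and scene parameters invented for illustration) captures a moving vertical bar row by row; because each row samples the scene slightly later, the bar comes out tilted in the rolling-shutter frame but not in the global-shutter frame:

import numpy as np

# Rolling-shutter illustration: a vertical bar moves horizontally while
# the sensor reads one row at a time, so each row sees the scene at a
# slightly later instant. All constants here are hypothetical.
ROWS, COLS = 8, 16
ROW_TIME = 1.0 / (60 * ROWS)   # time to read one row (1/60 s frame)
BAR_SPEED = 240.0              # bar velocity, columns per second

def scene(t):
    """Scene at time t: a 2-column-wide bright bar at a moving x position."""
    img = np.zeros((ROWS, COLS))
    x = int(BAR_SPEED * t) % COLS
    img[:, x:x + 2] = 1.0
    return img

# Global shutter samples every row at t = 0; rolling shutter samples
# row r at t = r * ROW_TIME, skewing the bar.
global_frame = scene(0.0)
rolling_frame = np.stack([scene(r * ROW_TIME)[r] for r in range(ROWS)])

for label, frame in [("global shutter", global_frame),
                     ("rolling shutter", rolling_frame)]:
    print(label)
    for row in frame:
        print("".join("#" if v else "." for v in row))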

A long-standing advantage of CCD sensors has been their capability for capturing images with lower noise. With improvements in CMOS technology, this gap had closed by 2020, with modern CMOS sensors capable of outperforming CCD sensors.

The active circuitry in CMOS pixels takes some area on the surface which is not light-sensitive, reducing the photon-detection efficiency of the device (back-illuminated sensors can mitigate this problem). But the frame-transfer CCD also has about half non-sensitive area for the frame store nodes, so the relative advantages depend on which types of sensors are being compared.

Architecture

Pixel

A three-transistor active pixel sensor.

The standard CMOS APS pixel consists of a photodetector (pinned photodiode), a floating diffusion, and the so-called 4T cell consisting of four CMOS (complementary metal–oxide–semiconductor) transistors: a transfer gate, reset gate, selection gate and source-follower readout transistor. The pinned photodiode was originally used in interline transfer CCDs due to its low dark current and good blue response; when coupled with the transfer gate, it allows complete charge transfer from the pinned photodiode to the floating diffusion (which is further connected to the gate of the read-out transistor), eliminating lag. The use of intrapixel charge transfer can offer lower noise by enabling the use of correlated double sampling (CDS).

The Noble 3T pixel is still sometimes used, since its fabrication requirements are less complex. The 3T pixel comprises the same elements as the 4T pixel except the transfer gate, and uses a conventional rather than a pinned photodiode. The reset transistor, Mrst, acts as a switch to reset the floating diffusion, which here acts as the gate of the Msf transistor, to VRST. When the reset transistor is turned on, the photodiode is effectively connected to the power supply, VRST, clearing all integrated charge. Since the reset transistor is n-type, the pixel operates in soft reset. The read-out transistor, Msf, acts as a buffer (specifically, a source follower), an amplifier which allows the pixel voltage to be observed without removing the accumulated charge. Its power supply, VDD, is typically tied to the power supply of the reset transistor, VRST. The select transistor, Msel, allows a single row of the pixel array to be read by the read-out electronics.

Other pixel variants, such as 5T and 6T pixels, also exist. By adding extra transistors, functions such as global shutter, as opposed to the more common rolling shutter, are possible. To increase pixel density, shared-row, four-way and eight-way shared readout, and other architectures can be employed.

A variant of the 3T active pixel is the Foveon X3 sensor invented by Dick Merrill. In this device, three photodiodes are stacked on top of each other using planar fabrication techniques, each photodiode having its own 3T circuit. Each successive layer acts as a filter for the layer below it, shifting the spectrum of absorbed light in successive layers. By deconvolving the response of each layered detector, red, green, and blue signals can be reconstructed.
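
As a rough illustration of the 3T cycle described above, here is a toy Python model of one reset-integrate-read sequence. The capacitance, voltages, and gains are hypothetical values chosen only to produce plausible numbers; soft-reset behavior and transistor nonidealities are not modeled:

import random

# Toy model of a 3T active-pixel cycle (reset -> integrate -> read).
V_RST = 3.3          # reset voltage, volts (hypothetical)
C_FD = 5e-15         # sense-node capacitance, farads (hypothetical)
Q_E = 1.602e-19      # electron charge, coulombs
SF_GAIN = 0.85       # source-follower gain (illustrative)

def expose_and_read(photoelectrons, reset_noise_e=30):
    # Reset: the node returns to V_RST plus random kTC reset noise.
    v_reset = V_RST + random.gauss(0, reset_noise_e * Q_E / C_FD)
    # Integrate: collected photoelectrons discharge the sense node.
    v_signal = v_reset - photoelectrons * Q_E / C_FD
    # Read through the source follower (row-select transistor assumed on).
    return SF_GAIN * v_signal

print(f"output for 10,000 photoelectrons: {expose_and_read(10_000):.3f} V")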

Array

A typical two-dimensional array of pixels is organized into rows and columns. Pixels in a given row share reset lines, so that a whole row is reset at a time. The row select lines of each pixel in a row are tied together as well. The outputs of each pixel in any given column are tied together. Since only one row is selected at a given time, no competition for the output line occurs. Further amplifier circuitry is typically on a column basis.
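
A Python sketch of this row-at-a-time readout, with hypothetical source-follower output voltages and a hypothetical column amplifier gain, might look like this:

import numpy as np

# Row-wise APS array readout: one row-select line is asserted at a time,
# so each column bus carries exactly one pixel's output, and the
# column-parallel amplifiers condition all columns at once.
rng = np.random.default_rng(0)
pixel_voltages = rng.uniform(1.0, 2.5, size=(4, 6))   # 4 rows x 6 columns
COLUMN_AMP_GAIN = 2.0                                  # per-column gain

frame = []
for row in range(pixel_voltages.shape[0]):
    column_bus = pixel_voltages[row, :]        # selected row drives the buses
    frame.append(COLUMN_AMP_GAIN * column_bus) # column amplifiers, in parallel

frame = np.stack(frame)
print(frame.shape)   # (4, 6): rows read sequentially, columns in parallel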

Size

The size of the pixel sensor is often given as height and width, but also as an optical format.

Lateral and vertical structures

There are two types of active-pixel sensor (APS) structures, the lateral APS and vertical APS. Eric Fossum defines the lateral APS as follows:

A lateral APS structure is defined as one that has part of the pixel area used for photodetection and signal storage, and the other part is used for the active transistor(s). The advantage of this approach, compared to a vertically integrated APS, is that the fabrication process is simpler, and is highly compatible with state-of-the-art CMOS and CCD device processes.

Fossum defines the vertical APS as follows:

A vertical APS structure increases fill-factor (or reduces pixel size) by storing the signal charge under the output transistor.

Thin-film transistors

A two-transistor active/passive pixel sensor

For applications such as large-area digital X-ray imaging, thin-film transistors (TFTs) can also be used in APS architecture. However, because of the larger size and lower transconductance gain of TFTs compared with CMOS transistors, it is necessary to have fewer on-pixel TFTs to maintain image resolution and quality at an acceptable level. A two-transistor APS/PPS architecture has been shown to be promising for APS using amorphous silicon TFTs. In the two-transistor APS architecture on the right, TAMP is used as a switched amplifier integrating the functions of both Msf and Msel in the three-transistor APS. This results in a reduced transistor count per pixel, as well as increased pixel transconductance gain. Here, Cpix is the pixel storage capacitance, and it is also used to capacitively couple the "Read" addressing pulse to the gate of TAMP for on-off switching. Such pixel readout circuits work best with low-capacitance photoconductor detectors such as amorphous selenium.

Design variants

Many different pixel designs have been proposed and fabricated. The standard pixel uses the fewest wires and the fewest, most tightly packed transistors possible for an active pixel. It is important that the active circuitry in a pixel take up as little space as possible to allow more room for the photodetector. High transistor count hurts fill factor, that is, the percentage of the pixel area that is sensitive to light. Pixel size can be traded for desirable qualities such as noise reduction or reduced image lag. Noise is a measure of the accuracy with which the incident light can be measured. Lag occurs when traces of a previous frame remain in future frames, i.e. the pixel is not fully reset. The voltage noise variance in a soft-reset (gate-voltage regulated) pixel is kT/(2C), but image lag and fixed pattern noise may be problematic. In rms electrons, the noise is √(kTC/2)/q.

Hard reset

Operating the pixel via hard reset results in a Johnson–Nyquist noise on the photodiode of √(kT/C) in volts, or equivalently √(kTC)/q in rms electrons, but prevents image lag, sometimes a desirable tradeoff. One way to use hard reset is to replace Mrst with a p-type transistor and invert the polarity of the RST signal. The presence of the p-type device reduces fill factor, as extra space is required between p- and n-devices; it also removes the possibility of using the reset transistor as an overflow anti-blooming drain, which is a commonly exploited benefit of the n-type reset FET. Another way to achieve hard reset, with the n-type FET, is to lower the voltage of VRST relative to the on-voltage of RST. This reduction may reduce headroom, or full-well charge capacity, but does not affect fill factor, unless VDD is then routed on a separate wire with its original voltage.
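
To put numbers on the hard- and soft-reset noise expressions above, here is a short Python check; the 300 K temperature and 5 fF sense capacitance are assumed illustrative values, not figures from the text:

import math

k = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0            # temperature, K (assumed)
C = 5e-15            # sense-node capacitance, F (assumed)
q = 1.602176634e-19  # electron charge, C

v_hard = math.sqrt(k * T / C)          # hard reset: rms voltage noise
e_hard = math.sqrt(k * T * C) / q      # hard reset: rms electrons
e_soft = math.sqrt(k * T * C / 2) / q  # soft reset: half the variance

print(f"hard reset: {v_hard * 1e3:.2f} mV rms = {e_hard:.1f} e- rms")
print(f"soft reset: {e_soft:.1f} e- rms")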

Combinations of hard and soft reset

Techniques such as flushed reset, pseudo-flash reset, and hard-to-soft reset combine soft and hard reset. The details of these methods differ, but the basic idea is the same. First, a hard reset is done, eliminating image lag. Next, a soft reset is done, causing a low noise reset without adding any lag. Pseudo-flash reset requires separating VRST from VDD, while the other two techniques add more complicated column circuitry. Specifically, pseudo-flash reset and hard-to-soft reset both add transistors between the pixel power supplies and the actual VDD. The result is lower headroom, without affecting fill factor.

Active reset

A more radical pixel design is the active-reset pixel. Active reset can result in much lower noise levels. The tradeoff is a complicated reset scheme, as well as either a much larger pixel or extra column-level circuitry.

Charge-coupled device

From Wikipedia, the free encyclopedia
 
A specially developed CCD in a wire-bonded package used for ultraviolet imaging

A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.

In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time, and since the late 2010s CMOS sensors have been the dominant technology, having largely if not completely replaced CCD image sensors.

History

The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices.

In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices".

The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s.

The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent (U.S. Patent 4,085,456) on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971.

The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight-pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2-D 100 × 100 pixel device. Steven Sasson, an electrical engineer working for Kodak, invented the first digital still camera using a Fairchild 100 × 100 CCD in 1975.

The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981.

The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array (800 × 800 pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982; subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981.

Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.

In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics, for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation, for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers".

Basics of operation

The charge packets (electrons, blue) are collected in potential wells (yellow) created by applying positive voltage at the gate electrodes (G). Applying positive voltage to the gate electrode in the correct sequence transfers the charge packets.

In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking).

An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing.
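
The shift-register readout can be sketched in Python as follows. The charge values and conversion gain are invented, and a real device interleaves vertical and serial shifts under precise clock control rather than buffering a whole row at once:

import numpy as np

# CCD readout as a shift register: each vertical shift moves every charge
# packet down one row; the bottom row enters a serial register whose
# packets are converted to voltages one by one at the output amplifier.
charges = np.array([[100, 200, 150],
                    [ 50, 250,  80],
                    [300,  10, 120]], dtype=float)   # electrons per pixel

CONVERSION_GAIN = 5e-6   # volts per electron (hypothetical)

video_signal = []
nrows, ncols = charges.shape
for _ in range(nrows):
    serial_register = charges[-1].copy()                       # bottom row out
    charges = np.vstack([np.zeros((1, ncols)), charges[:-1]])  # vertical shift
    for packet in serial_register:                 # serial (horizontal) shift
        video_signal.append(packet * CONVERSION_GAIN)

print([f"{v * 1e3:.2f} mV" for v in video_signal])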

Detailed physics of operation

Sony ICX493AQA 10.14-megapixel APS-C (23.4 × 15.6 mm) CCD from digital camera Sony α DSLR-A200 or DSLR-A300, sensor side

Charge generation

Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified:

  • photo-generation (up to 95% of quantum efficiency),
  • generation in the depletion region,
  • generation at the surface, and
  • generation in the neutral bulk.

The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10^5 electrons per pixel.
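
A back-of-envelope Python calculation shows how dark current bounds the usable integration time. The well depth is the figure quoted above; the room-temperature dark-current rate is an assumed illustrative number:

# How long until dark current alone fills a pixel well?
WELL_DEPTH_E = 1e5          # electrons per pixel (from the text)
DARK_RATE_E_PER_S = 50.0    # dark-current rate, electrons/s (assumed)

t_fill = WELL_DEPTH_E / DARK_RATE_E_PER_S
print(f"well saturates from dark current alone after ~{t_fill:.0f} s "
      f"(~{t_fill / 60:.0f} min)")
# Cooling suppresses dark current sharply with temperature, which is why
# long astronomical exposures use cooled sensors.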

Design and manufacturing

The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device:

This thin layer (≈ 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD.

The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate.

Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region.

Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions.

Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible).

The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device.

CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices.

Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets.

Architecture

CCD from a 2.1-megapixel Argus digital camera
 
One-dimensional CCD image sensor from a fax machine

The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering.

In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out.

With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.

The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more, depending on pixel size and the overall system's optical design.
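
The fill-factor arithmetic can be made explicit with a short Python calculation, combining the percentages above with the roughly 70 percent intrinsic quantum efficiency quoted later in this article:

# Effective QE of an interline CCD ~ intrinsic QE x fill factor.
INTRINSIC_QE = 0.70   # typical CCD quantum efficiency (quoted later)

for fill_factor, label in [(0.50, "bare interline (opaque strips)"),
                           (0.90, "with microlenses")]:
    print(f"{label}: effective QE ~ {INTRINSIC_QE * fill_factor:.0%}")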

The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device.

CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light.

Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers.

Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels.

Frame transfer CCD

A frame transfer CCD sensor

The frame transfer CCD imager was the first imaging structure proposed for CCD imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness.

The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time has passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as "vertical smear" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. Unfortunately, a faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level.

A frame transfer CCD solves both problems: it has a shielded (not light-sensitive) area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures.

The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed.

Intensified charge-coupled device

An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD.

An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. The photons which are coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage, applied between photocathode and MCP. The electrons are multiplied inside of the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons which are guided to the CCD by a fiber optic or a lens.

An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras.

Besides the extremely high sensitivity of ICCD cameras, which enables single-photon detection, gateability is one of the major advantages of the ICCD over EMCCD cameras. The highest-performing ICCD cameras enable shutter times as short as 200 picoseconds.

ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around 170 K (−103 °C). This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application.

ICCDs are used in night vision devices and in various scientific applications.

Electron-multiplying CCD

Electrons are transferred serially through the gain stages making up the multiplication register of an EMCCD. The high voltages used in these serial transfers induce the creation of additional charge carriers through impact ionisation.
 
In an EMCCD there is a dispersion (variation) in the number of electrons output by the multiplication register for a given (fixed) number of input electrons (shown in the legend on the right). The probability distribution for the number of output electrons is plotted logarithmically on the vertical axis for a simulation of a multiplication register. Also shown are results from the empirical fit equation shown on this page.

An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (g = (1 + P)^N), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described in U.S. Patent 3,761,744 in 1973 by George E. Smith/Bell Telephone Laboratories.

EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation:

P(n) = (n − m + 1)^(m−1) · exp(−(n − m + 1) / (g − 1 + 1/m)) / [(m − 1)! · (g − 1 + 1/m)^m]   for n ≥ m

where P(n) is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g.
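
A short Python sketch evaluates these expressions; the per-stage probability and stage count are illustrative values consistent with the bounds quoted above:

import math

P_STAGE = 0.015   # per-stage multiplication probability (< 2%)
N_STAGES = 536    # number of gain stages (illustrative)

g = (1 + P_STAGE) ** N_STAGES          # total mean gain, g = (1 + P)^N
print(f"total mean gain g ~ {g:.0f}")

def p_output(n, m, g):
    """Probability of n output electrons for m inputs at mean gain g."""
    s = g - 1 + 1 / m
    return ((n - m + 1) ** (m - 1) * math.exp(-(n - m + 1) / s)
            / (math.factorial(m - 1) * s ** m))

# Probability of ~2000 output electrons from a single input electron:
print(f"P(n=2000 | m=1) ~ {p_output(2000, 1, g):.2e}")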

Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras need a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of −65 to −95 °C (−85 to −139 °F). This cooling system unfortunately adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues.

The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs.

In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device.

Use in astronomy

Array of 30 CCDs used on the Sloan Digital Sky Survey telescope imaging camera, an example of "drift-scanning".

Due to the high quantum efficiencies of charge-coupled devices (CCDs) (the ideal quantum efficiency is 100%, one generated electron per incident photon), the linearity of their outputs, their ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications.

Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter both closed and open. The average of images taken with the shutter closed is necessary to lower the random noise. Once developed, the dark-frame average image is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by measuring the same collected charge multiple times, and have applications in precision light dark-matter searches and neutrino measurements.

The Hubble Space Telescope, in particular, has a highly developed series of steps (“data reduction pipeline”) to convert the raw CCD data to useful images.

CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them.

An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky.

In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers.

Color cameras

A Bayer filter on a CCD
 
×80 microscope view of an RGGB Bayer filter on a 240-line Sony PAL camcorder CCD sensor

Digital color cameras generally use a Bayer mask over the CCD. Each square of four pixels has one red filter, one blue, and two green (the human eye is more sensitive to green than to either red or blue). The result is that luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
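
A minimal Python sketch of RGGB Bayer sampling, using random data in place of a real image, shows how each pixel retains only one color channel:

import numpy as np

def bayer_sample(rgb):
    """Keep one color channel per pixel according to an RGGB mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R: even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G: even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G: odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B: odd rows, odd cols
    return mosaic

rgb = np.random.rand(4, 4, 3)                 # stand-in for a real image
print(bayer_sample(rgb))   # half of all samples are green, as in the text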

Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than 2/3) of the light falling on each pixel location.

For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels become equivalent (the resolutions of red and blue channels are quadrupled while the green channel is doubled).
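
A toy Python model of co-site sampling (with a random array standing in for the scene and ideal, noiseless sampling assumed) shows that four one-pixel shifts of an RGGB mosaic sample every color at every pixel:

import numpy as np

# Color co-site sampling: shift the sensor by whole pixels between
# exposures so every scene point is seen through R, G and B mosaic
# elements. A 2x2 RGGB tile needs four shifted acquisitions.
scene = np.random.rand(4, 4, 3)                # ground-truth RGB (stand-in)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]      # sensor offsets per frame
color_of = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}   # RGGB tile

recovered = np.zeros_like(scene)
counts = np.zeros_like(scene)
for dy, dx in shifts:
    for y in range(scene.shape[0]):
        for x in range(scene.shape[1]):
            c = color_of[((y + dy) % 2, (x + dx) % 2)]  # color seen here
            recovered[y, x, c] += scene[y, x, c]
            counts[y, x, c] += 1

recovered /= counts          # green is sampled twice, red and blue once
print(np.allclose(recovered, scene))   # True: full color at every pixel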

Sensor sizes

Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch-fraction designation such as 1/1.8″ or 2/3″, called the optical format. This measurement dates back to the 1950s and the time of vidicon tubes.

Blooming

Vertical smear

When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking.

Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity.
