Monday, November 29, 2021

Charge-coupled device

From Wikipedia, the free encyclopedia
 
A specially developed CCD in a wire-bonded package used for ultraviolet imaging

A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.

In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used. However, the large quality advantage CCDs enjoyed early on has narrowed over time and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors.

History

The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices.

In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices".

The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s.

The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent (U.S. Patent 4,085,456) on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971.

The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight-pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first to reach commercial devices; by 1974 it had a linear 500-element device and a 2-D 100 × 100 pixel device. Steven Sasson, an electrical engineer working for Kodak, invented the first digital still camera using a Fairchild 100 × 100 CCD in 1975.

The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981.

The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array (800 × 800 pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982; subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981.

Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.

In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize in Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation for pioneering work and electronic technologies, including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers".

Basics of operation

The charge packets (electrons, blue) are collected in potential wells (yellow) created by applying positive voltage at the gate electrodes (G). Applying positive voltage to the gate electrode in the correct sequence transfers the charge packets.

In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking).

An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing.
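
The readout just described can be sketched in a few lines of code. The following Python fragment is a minimal illustration of the bucket-brigade readout, not taken from any real device driver; the conversion factor and charge values are arbitrary:

# A minimal sketch of CCD readout: each clock cycle shifts every charge
# packet one capacitor toward the output, and the last capacitor dumps
# its charge into an amplifier that converts it to a voltage.

def read_out(charges, volts_per_electron=2e-6):
    """Shift all charge packets to the output and return their voltages."""
    charges = list(charges)                  # charge per capacitor, in electrons
    voltages = []
    for _ in range(len(charges)):
        voltages.append(charges[-1] * volts_per_electron)  # charge amplifier
        charges = [0] + charges[:-1]         # one clocked transfer toward output
    return voltages

# Example: a one-dimensional "image" of five pixels
print(read_out([100, 2000, 50000, 800, 10]))

Note that the voltages emerge in reverse spatial order, because the capacitor nearest the amplifier is read first.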

Detailed physics of operation

Sony ICX493AQA 10.14-megapixel APS-C (23.4 × 15.6 mm) CCD from digital camera Sony α DSLR-A200 or DSLR-A300, sensor side

Charge generation

Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled to low temperatures. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified:

  • photo-generation (up to 95% of quantum efficiency),
  • generation in the depletion region,
  • generation at the surface, and
  • generation in the neutral bulk.

The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10^5 electrons per pixel.
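
As a worked example of what the well depth implies for dynamic range, the sketch below uses the 10^5-electron figure from the text together with an assumed read noise of 10 electrons RMS (an illustrative value, not from the article):

import math

full_well = 1e5     # well depth, electrons per pixel (figure from the text)
read_noise = 10.0   # assumed read noise, electrons RMS (illustrative)

dynamic_range = full_well / read_noise
print(f"{dynamic_range:.0f}:1, i.e. {20 * math.log10(dynamic_range):.0f} dB")
# -> 10000:1, i.e. 80 dB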

Design and manufacturing

The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p-doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion-implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device:

This thin layer (≈ 0.2–0.3 μm) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD.

The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate.

Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region.

Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions.

Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible).

The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device.

CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer, it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method reportedly reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices.

Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets.

Architecture

CCD from a 2.1-megapixel Argus digital camera
 
One-dimensional CCD image sensor from a fax machine

CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering.

In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out.

With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.

The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more, depending on pixel size and the overall system's optical design.
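
This fill-factor arithmetic can be made concrete with a short calculation; the 70 percent intrinsic quantum efficiency is the typical figure cited later in this article, and the fill factors are the approximate values quoted above:

# Effective quantum efficiency = intrinsic QE x fill factor.
intrinsic_qe = 0.70   # typical CCD QE (figure cited later in the article)

for fill_factor in (0.50, 0.90):   # bare interline mask vs. with microlenses
    print(f"fill factor {fill_factor:.0%} -> "
          f"effective QE {intrinsic_qe * fill_factor:.0%}")
# -> 35% without microlenses, 63% with them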

The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for applications that require the best possible light collection, and where money, power, and time matter less, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device.

CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light.

Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to wavelengths shorter than about 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers.

Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels.

Frame transfer CCD

A frame transfer CCD sensor

The frame transfer CCD imager was the first imaging structure proposed for CCD imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and accuracy.

The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time has passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as "vertical smear" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. Unfortunately, faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level.
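
A minimal numerical model makes the smear mechanism concrete. The sketch below is my illustration, with arbitrary timing values: a column of cells is exposed, then clocked toward the readout one row at a time while the scene keeps illuminating whichever charge packet is passing underneath.

# Vertical smear in a single CCD column: during each row shift, light
# keeps falling, so charge packets that transit a bright scene position
# pick up extra, spurious charge.

def read_column(scene_flux, t_expose=1.0, t_shift=0.01):
    cells = [f * t_expose for f in scene_flux]        # charge after exposure
    out = []
    for _ in range(len(cells)):
        out.append(cells[-1])                         # bottom cell is read out
        cells = [0.0] + cells[:-1]                    # shift one row down
        cells = [c + f * t_shift                      # light falls during shift
                 for c, f in zip(cells, scene_flux)]
    return out[::-1]                                  # restore top-to-bottom order

column = [1, 1, 1000, 1, 1]    # a bright source in the middle of the column
print(read_column(column))     # cells that transited the source carry extra charge

Making t_shift small relative to t_expose, as a frame-transfer or interline device effectively does, shrinks the smear term toward zero.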

A frame transfer CCD solves both problems: it has a shielded, non-light-sensitive area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures.

The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed.

Intensified charge-coupled device

An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD.

An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. Photons from the light source fall onto the photocathode, generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage applied between photocathode and MCP. The electrons are multiplied inside the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons, which are guided to the CCD by a fiber optic or a lens.

An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras.

Besides the extremely high sensitivity of ICCD cameras, which enables single-photon detection, the gateability is one of the major advantages of the ICCD over EMCCD cameras. The highest performing ICCD cameras enable shutter times as short as 200 picoseconds.

ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around 170 K (−103 °C). This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application.

ICCDs are used in night vision devices and in various scientific applications.

Electron-multiplying CCD

Electrons are transferred serially through the gain stages making up the multiplication register of an EMCCD. The high voltages used in these serial transfers induce the creation of additional charge carriers through impact ionisation.
 
In an EMCCD there is a dispersion (variation) in the number of electrons output by the multiplication register for a given (fixed) number of input electrons (shown in the legend on the right). The probability distribution for the number of output electrons is plotted logarithmically on the vertical axis for a simulation of a multiplication register. Also shown are results from the empirical fit equation shown on this page.

An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (g = (1 + P)^N), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described in U.S. Patent 3,761,744 in 1973 by George E. Smith/Bell Telephone Laboratories.

EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation:

P(n) = \frac{(n - m + 1)^{m - 1}}{(m - 1)! \, (g - 1 + 1/m)^m} \exp\left(-\frac{n - m + 1}{g - 1 + 1/m}\right), \qquad n \ge m,

where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g.
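
The stochastic multiplication can be checked with a small Monte-Carlo sketch (my illustration; the stage count and per-stage probability are example values consistent with the N > 500 and P < 2% figures above). Each stage adds a binomially distributed number of impact-ionisation electrons, so the mean gain is (1 + P)^N:

import numpy as np

rng = np.random.default_rng(0)

def em_register(m, n_stages=500, p=0.015):
    """Pass m electrons through an N-stage multiplication register."""
    electrons = m
    for _ in range(n_stages):
        electrons += rng.binomial(electrons, p)   # impact-ionisation gains
    return electrons

print("expected mean gain:", round((1 + 0.015) ** 500))       # ~1700
samples = [em_register(1) for _ in range(2000)]
print("simulated mean gain:", round(float(np.mean(samples))))  # close to it

The wide spread of the individual samples around that mean is exactly the gain dispersion plotted in the figure and modelled by the equation above.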

Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras require a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of −65 to −95 °C (−85 to −139 °F). This cooling system adds cost to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues.

The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs.

In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device.

Use in astronomy

Array of 30 CCDs used on the Sloan Digital Sky Survey telescope imaging camera, an example of "drift-scanning".

Due to the high quantum efficiencies of charge-coupled devices (CCDs) (the ideal quantum efficiency is 100%, one generated electron per incident photon), the linearity of their outputs, their ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications.

Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter both closed and open. The average of images taken with the shutter closed lowers the random noise; this dark-frame average is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by measuring the same collected charge multiple times, and have applications in precision searches for light dark matter and in neutrino measurements.
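
The dark-frame calibration just described amounts to a couple of array operations. The following sketch uses synthetic data with illustrative noise values; the function name is mine:

import numpy as np

def calibrate(light_frame, dark_frames):
    """Subtract the average of several closed-shutter frames."""
    master_dark = np.mean(dark_frames, axis=0)   # averaging lowers random noise
    return light_frame - master_dark             # removes dark current, hot pixels

rng = np.random.default_rng(1)
dark_pattern = rng.exponential(5.0, size=(64, 64))          # fixed dark current
darks = [dark_pattern + rng.normal(0, 2, (64, 64)) for _ in range(10)]
light = dark_pattern + rng.normal(0, 2, (64, 64)) + 100.0   # dark + sky signal
print(calibrate(light, darks).mean())   # ~100: the dark pattern is removed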

The Hubble Space Telescope, in particular, has a highly developed series of steps (“data reduction pipeline”) to convert the raw CCD data to useful images.

CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them.

An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky.
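
The timing of drift-scanning reduces to matching the row-shift rate to the sky's drift rate across the sensor. In the back-of-envelope sketch below, the sidereal rate is the standard figure for a target on the celestial equator; the pixel scale is an assumed SDSS-like value, not taken from the article:

sidereal_rate = 15.04   # arcseconds of sky drift per second, at the equator
pixel_scale = 0.396     # arcseconds per pixel (assumed, SDSS-like)

rows_per_second = sidereal_rate / pixel_scale
print(f"shift one row every {1000 / rows_per_second:.1f} ms "
      f"({rows_per_second:.1f} rows/s)")
# -> roughly one row every 26 ms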

In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers.

Color cameras

A Bayer filter on a CCD
 
80× microscope view of an RGGB Bayer filter on the sensor of a 240-line Sony PAL camcorder CCD

Digital color cameras generally use a Bayer mask over the CCD. In each square of four pixels, one is filtered red, one blue, and two green (the human eye is more sensitive to green than to either red or blue). As a result, luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
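
To see why the color resolution is lower, note that each pixel records only one of the three colors, and the two missing values must be interpolated from neighbors. The sketch below (my illustration, not a production algorithm) performs a naive demosaic of an RGGB mosaic by averaging the known samples of each color over each 3 × 3 neighbourhood:

import numpy as np

def demosaic_rggb(raw):
    """Naive bilinear-style demosaic of an RGGB Bayer mosaic."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True     # red sites
    masks[0::2, 1::2, 1] = True     # green sites (two per 2x2 quad)
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True     # blue sites
    for c in range(3):
        known = np.where(masks[..., c], raw, 0.0)
        count = masks[..., c].astype(float)
        # sum the known samples, and how many there are, in each 3x3 window
        ksum = sum(np.roll(known, (dy, dx), axis=(0, 1))
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        csum = sum(np.roll(count, (dy, dx), axis=(0, 1))
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[..., c] = ksum / np.maximum(csum, 1.0)
    return rgb

raw = np.random.default_rng(2).uniform(0, 255, (8, 8))   # toy sensor data
print(demosaic_rggb(raw).shape)                          # -> (8, 8, 3)

Green, sampled at half the pixels, is recovered at better resolution than red or blue, which are each sampled at only a quarter of the pixels.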

Better color separation can be reached by three-CCD devices (3CCD) with a dichroic beam-splitter prism that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than two-thirds) of the light falling on each pixel location.

For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels become equivalent (the resolutions of red and blue channels are quadrupled while the green channel is doubled).

Sensor sizes

Sensors (CCD/CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch-fraction designation, such as 1/1.8″ or 2/3″, called the optical format. This designation dates back to the 1950s and the era of Vidicon tubes.

Blooming

Vertical smear

When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking.

Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity.

LASIK

From Wikipedia, the free encyclopedia

LASIK
Capt. Joseph Pasternak, an ophthalmology surgeon at National Naval Medical Center Bethesda, lines up the laser on Marine Corps Lt. Col. Lawrence Ryder's eye before beginning LASIK IntraLase surgery (U.S. Navy photo)
Specialty: Ophthalmology, optometry
ICD-9-CM: 11.71
MeSH: D020731
MedlinePlus: 007018

LASIK or Lasik (laser-assisted in situ keratomileusis), commonly referred to as laser eye surgery or laser vision correction, is a type of refractive surgery for the correction of myopia, hyperopia, and astigmatism. LASIK surgery is performed by an ophthalmologist who uses a laser or microkeratome to reshape the eye's cornea in order to improve visual acuity. For most people, LASIK provides a long-lasting alternative to eyeglasses or contact lenses.

LASIK is most similar to another surgical corrective procedure, photorefractive keratectomy (PRK), and to LASEK. All represent advances over radial keratotomy in the surgical treatment of refractive errors of vision. For patients with moderate to high myopia or with thin corneas that cannot be treated with LASIK or PRK, the phakic intraocular lens is an alternative. As of 2018, roughly 9.5 million Americans have had LASIK and, globally, between 1991 and 2016, more than 40 million procedures were performed. However, the procedure seems to be a declining option for many in recent years.

Effectiveness

In 2006, the British National Health Service's National Institute for Health and Clinical Excellence (NICE) considered evidence of the effectiveness and the potential risks of the laser surgery stating "current evidence suggests that photorefractive (laser) surgery for the correction of refractive errors is safe and effective for use in appropriately selected patients. Clinicians undertaking photorefractive (laser) surgery for the correction of refractive errors should ensure that patients understand the benefits and potential risks of the procedure. Risks include failure to achieve the expected improvement in unaided vision, development of new visual disturbances, corneal infection and flap complications. These risks should be weighed against those of wearing spectacles or contact lenses." The FDA reports "The safety and effectiveness of refractive procedures has not been determined in patients with some diseases."

Satisfaction

Surveys of LASIK surgery find rates of patient satisfaction between 92 and 98 percent. In March 2008, the American Society of Cataract and Refractive Surgery published a patient satisfaction meta-analysis of over 3,000 peer-reviewed articles from international clinical journals. A systematic literature review covering 1988 to 2008, drawing on 309 peer-reviewed articles about "properly conducted, well-designed, randomized clinical trials", found a 95.4 percent patient satisfaction rate among LASIK patients.

A 2017 JAMA study found that, overall, preoperative symptoms decreased significantly and visual acuity improved. A meta-analysis found that 97% of patients achieved uncorrected visual acuity (UCVA) of 20/40, while 62% achieved 20/20. The increase in visual acuity allows individuals to enter occupations that were previously not an option due to their vision.

Dissatisfaction

Some people with poor outcomes from LASIK surgical procedures report a significantly reduced quality of life because of vision problems or physical pain associated with the surgery. A small percentage of patients may need to have another surgery because their condition is over- or under-corrected. Some patients need to wear contact lenses or glasses even after treatment.

The most common reason for dissatisfaction in LASIK patients is chronic severe dry eye. Independent research indicates that 95% of patients experience dry eye in the initial post-operative period; figures as high as 60% have been reported one month after surgery. Symptoms begin to improve in the vast majority of patients in the 6 to 12 months following the surgery. However, 30% of post-LASIK referrals to tertiary ophthalmology care centers have been shown to be due to chronic dry eye.

Morris Waxler, a former FDA official who was involved in the approval of LASIK, has subsequently criticized its widespread use. In 2010, Waxler made media appearances and claimed that the procedure had a failure rate greater than 50%. The FDA responded that Waxler's information was "filled with false statements, incorrect citations" and "mischaracterization of results".

A 2016 JAMA study indicates that the prevalence of complications from LASIK is higher than previously indicated, with many patients in the study experiencing glare, halos or other visual symptoms. Forty-three percent of participants in a JAMA study (published in 2017) reported new visual symptoms they had not experienced before.

Presbyopia

A type of LASIK, known as presbyLasik, may be used in presbyopia. Results are, however, more variable and some people have a decrease in visual acuity.

Risks

Higher-order aberrations

Higher-order aberrations are visual problems that require special testing for diagnosis and are not corrected with normal spectacles (eyeglasses). These aberrations include 'starbursts', 'ghosting', 'halos' and others. Some patients describe these symptoms post-operatively and associate them with the LASIK technique including the formation of the flap and the tissue ablation.

There is a correlation between pupil size and aberrations. This correlation may be the result of irregularity in the corneal tissue between the untouched part of the cornea and the reshaped part. Daytime post-LASIK vision is optimal, since the pupil size is smaller than the LASIK flap.

Others propose that higher-order aberrations are present preoperatively. They can be measured in micrometers (µm) whereas the smallest laser-beam size approved by the FDA is about 1000 times larger, at 0.65 mm. In situ keratomileusis effected at a later age increases the incidence of corneal higher-order wavefront aberrations. These factors demonstrate the importance of careful patient selection for LASIK treatment.

A subconjunctival hemorrhage is a common and minor post-LASIK complication.

Dry eyes

95% of patients report dry-eye symptoms after LASIK. Although it is usually temporary, it can develop into chronic and severe dry eye syndrome. Quality of life can be severely affected by dry-eye syndrome.

Underlying conditions with dry eye such as Sjögren's syndrome are considered contraindications to Lasik.

Treatments include artificial tears, prescription tears, and punctal occlusion. Punctal occlusion is accomplished by placing a collagen or silicone plug in the tear duct, which normally drains fluid from the eye. Some patients complain of ongoing dry-eye symptoms despite such treatments and dry-eye symptoms may be permanent.

Halos

Some post-LASIK patients see halos and starbursts around bright lights at night. At night, the pupil may dilate to be larger than the flap leading to the edge of the flap or stromal changes causing visual distortion of light that does not occur during the day when the pupil is smaller. The eyes can be examined for large pupils pre-operatively and the risk of this symptom assessed.

Complications due to LASIK have been classified as those that occur due to preoperative, intraoperative, early postoperative, or late postoperative sources. According to the UK National Health Service, complications occur in fewer than 5% of cases.

Other complications

  • Flap complications – The incidence of flap complications is about 0.244%. Flap complications (such as displaced flaps or folds in the flaps that necessitate repositioning, diffuse lamellar keratitis, and epithelial ingrowth) are common in lamellar corneal surgeries but rarely lead to permanent loss of visual acuity. The incidence of these microkeratome-related complications decreases with increased physician experience.
  • Slipped flap – is a corneal flap that detaches from the rest of the cornea. The chances of this are greatest immediately after surgery, so patients typically are advised to go home and sleep to let the flap adhere and heal. Patients are usually given sleep goggles or eye shields to wear for several nights to prevent them from dislodging the flap in their sleep. A short operation time may decrease the chance of this complication, as there is less time for the flap to dry.
  • Flap interface particles – are a finding whose clinical significance is undetermined. Particles of various sizes and reflectivity are clinically visible in about 38.7% of eyes examined via slit lamp biomicroscopy and in 100% of eyes examined by confocal microscopy.
  • Diffuse lamellar keratitis  – an inflammatory process that involves an accumulation of white blood cells at the interface between the LASIK corneal flap and the underlying stroma. It is known colloquially as "sands of Sahara syndrome" because on slit lamp exam, the inflammatory infiltrate appears similar to waves of sand. The USAeyes organisation reports an incidence of 2.3% after LASIK. It is most commonly treated with steroid eye drops. Sometimes it is necessary for the eye surgeon to lift the flap and manually remove the accumulated cells. DLK has not been reported with photorefractive keratectomy due to the absence of flap creation.
  • Infection – the incidence of infection responsive to treatment has been estimated at 0.04%.
  • Post-LASIK corneal ectasia – a condition where the cornea starts to bulge forwards at a variable time after LASIK, causing irregular astigmatism. The condition is similar to keratoconus.
  • Subconjunctival hemorrhage – A report shows the incidence of subconjunctival hemorrhage has been estimated at 10.5%.
  • Corneal scarring – or permanent problems with cornea's shape making it impossible to wear contact lenses.
  • Epithelial ingrowth – estimated at 0.01%.
  • Traumatic flap dislocations – Cases of late traumatic flap dislocations have been reported up to thirteen years after LASIK.
  • Retinal detachment: estimated at 0.36 percent.
  • Choroidal neovascularization: estimated at 0.33 percent.
  • Uveitis: estimated at 0.18 percent.
  • For climbers – Although the cornea usually is thinner after LASIK, because of the removal of part of the stroma, refractive surgeons strive to maintain the maximum thickness to avoid structurally weakening the cornea. Decreased atmospheric pressure at higher altitudes has not been demonstrated as extremely dangerous to the eyes of LASIK patients. However, some mountain climbers have experienced a myopic shift at extreme altitudes.
  • Late postoperative complications – A large body of evidence on the chances of long-term complications is not yet established and may be changing due to advances in operator experience, instruments and techniques.
  • Loss of best corrected vision may occur a year after the surgery regardless of the use of eyewear.
  • Eye floaters – ocular mechanical stress created by LASIK has the potential to damage the vitreous, retina, and macula, causing floaters as a result.
  • Ocular neuropathic pain (corneal neuralgia); rare

FDA's position

In October 2009, the FDA, the National Eye Institute (NEI), and the Department of Defense (DoD) launched the LASIK Quality of Life Collaboration Project (LQOLCP) to better understand the potential risk of severe problems that can result from LASIK, in response to widespread reports of problems experienced by patients after LASIK laser eye surgery. This project examined patient-reported outcomes with LASIK (PROWL). The project consisted of three phases: a pilot phase, PROWL-1, and PROWL-2. The last two phases were completed in 2014.

The results of the LASIK Quality of Life Study were published in October 2014.

The FDA's initial analyses of the studies found:
  • Up to 46 percent of participants, who had no visual symptoms before surgery, reported at least one visual symptom at three months after surgery.
  • Participants who developed new visual symptoms after surgery, most often developed halos. Up to 40 percent of participants with no halos before LASIK had halos three months following surgery.
  • Up to 28 percent of participants with no symptoms of dry eyes before LASIK, reported dry eye symptoms at three months after their surgery.
  • Less than 1 percent of study participants experienced "a lot of" difficulty with or inability to do usual activities without corrective lenses because of their visual symptoms (halos, glare, et al.) after LASIK surgery.
  • Participants who were not satisfied with the LASIK surgery reported all types of visual symptoms the questionnaire measured (double vision/ghosting, starbursts, glare, and halos).

The FDA's director of the Division of Ophthalmic Devices, said about the LASIK study "Given the large number of patients undergoing LASIK annually, dissatisfaction and disabling symptoms may occur in a significant number of patients". Also in 2014, FDA published an article highlighting the risks and a list of factors and conditions individuals should consider when choosing a doctor for their refractive surgery.

Contraindications

Not everyone is eligible to receive LASIK. Severe keratoconus or thin corneas may disqualify patients from LASIK, though other procedures may be viable options. Those with Fuchs' corneal endothelial dystrophy, corneal epithelial basement membrane dystrophy, retinal tears, autoimmune diseases, severe dry eyes, and significant blepharitis should be treated before consideration for LASIK. Women who are pregnant or nursing are generally not eligible to undergo LASIK.

Large pupils: These can cause symptoms such as glare, halos, starbursts, and ghost images (double vision) after surgery. Because the laser can only correct a central section of the cornea, the outer ring is left uncorrected. At night or in the dark, the pupil dilates, and the mismatch between the uncorrected outer section and the corrected inner section creates these problems.

Process

The planning and analysis of corneal reshaping techniques such as LASIK have been standardized by the American National Standards Institute, an approach based on the Alpins method of astigmatism analysis. The FDA website on LASIK states,

"Before undergoing a refractive procedure, you should carefully weigh the risks and benefits based on your own personal value system, and try to avoid being influenced by friends that have had the procedure or doctors encouraging you to do so."

The procedure involves creating a thin flap on the eye, folding it to enable remodeling of the tissue beneath with a laser and repositioning the flap.

Preoperative procedures

Contact lenses

Patients wearing soft contact lenses are instructed to stop wearing them 5 to 21 days before surgery. One industry body recommends that patients wearing hard contact lenses should stop wearing them for a minimum of six weeks, plus another six weeks for every three years the hard contacts have been worn. The cornea is avascular because it must be transparent to function normally; its cells absorb oxygen from the tear film. Thus, low-oxygen-permeable contact lenses reduce the cornea's oxygen absorption, sometimes resulting in corneal neovascularization—the growth of blood vessels into the cornea. This causes slightly longer inflammation and healing times, and some pain during surgery because of greater bleeding. Although some contact lenses (notably modern RGP and soft silicone hydrogel lenses) are made of materials with greater oxygen permeability that help reduce the risk of corneal neovascularization, patients considering LASIK are warned to avoid over-wearing their contact lenses.
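
The hard-lens rule quoted above can be stated directly as a calculation; this short sketch is my illustration, and the function name is hypothetical:

def hard_lens_discontinuation_weeks(years_worn):
    """Six weeks minimum, plus six more for every full three years of wear."""
    return 6 + 6 * (years_worn // 3)

print(hard_lens_discontinuation_weeks(9))   # nine years of wear -> 24 weeks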

Pre-operative examination and education

In the United States, the FDA has approved LASIK for people aged 18 (or, for some devices, 22) and over, because vision needs to have stabilized; more importantly, the patient's eye prescription should be stable for at least one year prior to surgery. The patient may be examined with pupillary dilation, and education is given prior to the procedure. Before the surgery, the patient's corneas are examined with a pachymeter to determine their thickness, and with a topographer, or corneal topography machine, to measure their surface contour. Using low-power lasers, a topographer creates a topographic map of the cornea. The procedure is contraindicated if the topographer finds problems such as keratoconus. The preparatory process also detects astigmatism and other irregularities in the shape of the cornea. Using this information, the surgeon calculates the amount and the location of corneal tissue to be removed. The patient is prescribed and self-administers an antibiotic beforehand to minimize the risk of infection after the procedure, and is sometimes offered a short-acting oral sedative medication as a pre-medication. Prior to the procedure, anaesthetic eye drops are instilled. Factors that may rule out LASIK for some patients include large pupils, thin corneas and extremely dry eyes.

Operative procedure

Flap creation

Flap creation with femtosecond laser
 
Flaporhexis as an alternative method to lift a femtosecond laser flap

A soft corneal suction ring is applied to the eye, holding the eye in place. This step in the procedure can sometimes cause small blood vessels to burst, resulting in bleeding or subconjunctival hemorrhage into the white (sclera) of the eye, a harmless side effect that resolves within several weeks. Increased suction causes a transient dimming of vision in the treated eye. Once the eye is immobilized, a flap is created by cutting through the corneal epithelium and Bowman's layer. This process is achieved with a mechanical microkeratome using a metal blade, or a femtosecond laser that creates a series of tiny closely arranged bubbles within the cornea. A hinge is left at one end of this flap. The flap is folded back, revealing the stroma, the middle section of the cornea. The process of lifting and folding back the flap can sometimes be uncomfortable.

Laser remodeling

The second step of the procedure uses an excimer laser (193 nm) to remodel the corneal stroma. The laser vaporizes the tissue in a finely controlled manner without damaging the adjacent stroma. No burning with heat or actual cutting is required to ablate the tissue. The layers of tissue removed are tens of micrometers thick.

Performing the laser ablation in the deeper corneal stroma provides for more rapid visual recovery and less pain than the earlier technique, photorefractive keratectomy (PRK).

During this second step, once the flap is lifted, the patient's vision becomes blurry. They will be able to see only white light surrounding the orange light of the laser, which can lead to mild disorientation. The excimer laser uses an eye-tracking system that follows the patient's eye position up to 4,000 times per second, redirecting laser pulses for precise placement within the treatment zone. Typical pulses deliver around 1 millijoule (mJ) of energy in 10 to 20 nanoseconds.
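
Those pulse figures imply a considerable instantaneous power, even though each pulse removes only a sliver of tissue; a back-of-envelope calculation using the numbers quoted above:

# Peak power of a single excimer pulse: energy divided by duration.
energy_j = 1e-3                        # 1 mJ, figure from the text
for duration_s in (10e-9, 20e-9):     # 10-20 ns, figures from the text
    peak_kw = energy_j / duration_s / 1e3
    print(f"{duration_s * 1e9:.0f} ns pulse -> ~{peak_kw:.0f} kW peak power")
# -> 100 kW and 50 kW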

Repositioning of the flap

After the laser has reshaped the stromal layer, the LASIK flap is carefully repositioned over the treatment area by the surgeon and checked for the presence of air bubbles, debris, and proper fit on the eye. The flap remains in position by natural adhesion until healing is completed.

Postoperative care

Patients are usually given a course of antibiotic and anti-inflammatory eye drops. These are continued in the weeks following surgery. Patients are told to rest and are given dark eyeglasses to protect their eyes from bright lights and occasionally protective goggles to prevent rubbing of the eyes when asleep and to reduce dry eyes. They also are required to moisturize the eyes with preservative-free tears and follow directions for prescription drops. Occasionally after the procedure a bandage contact lens is placed to aid the healing, and typically removed after 3–4 days. Patients should be adequately informed by their surgeons of the importance of proper post-operative care to minimize the risk of complications.

Wavefront-guided

Wavefront-guided LASIK is a variation of LASIK surgery in which, rather than applying a simple correction of only long/short-sightedness and astigmatism (only lower order aberrations as in traditional LASIK), an ophthalmologist applies a spatially varying correction, guiding the computer-controlled excimer laser with measurements from a wavefront sensor. The goal is to achieve a more optically perfect eye, though the final result still depends on the physician's success at predicting changes that occur during healing and other factors that may have to do with the regularity/irregularity of the cornea and the axis of any residual astigmatism. Another important factor is whether the excimer laser can correctly register eye position in 3 dimensions, and to track the eye in all the possible directions of eye movement. If a wavefront guided treatment is performed with less than perfect registration and tracking, pre-existing aberrations can be worsened. In older patients, scattering from microscopic particles (cataract or incipient cataract) may play a role that outweighs any benefit from wavefront correction.

When treating a patient with preexisting astigmatism, most wavefront-guided LASIK lasers are designed to treat regular astigmatism as determined externally by corneal topography. In patients who have an element of internally induced astigmatism, therefore, the wavefront-guided astigmatism correction may leave regular astigmatism behind (a cross-cylinder effect). If the patient has preexisting irregular astigmatism, wavefront-guided approaches may leave both regular and irregular astigmatism behind. This can result in less-than-optimal visual acuity compared with a wavefront-guided approach combined with vector planning, as shown in a 2008 study. Thus, vector-planning offers a better alignment between corneal astigmatism and laser treatment, and leaves less regular astigmatism behind on the cornea, which is advantageous whether irregular astigmatism coexists or not.

The "leftover" astigmatism after a purely surface-guided laser correction can be calculated beforehand, and is called ocular residual astigmatism (ORA). ORA is a calculation of astigmatism due to the noncorneal surface (internal) optics. The purely refraction-based approach represented by wavefront analysis actually conflicts with corneal surgical experience developed over many years.

The pathway to "super vision" thus may require a more customized approach to corneal astigmatism than is usually attempted, and any remaining astigmatism ought to be regular (as opposed to irregular), which are both fundamental principles of vector planning overlooked by a purely wavefront-guided treatment plan. This was confirmed by the 2008 study mentioned above, which found a greater reduction in corneal astigmatism and better visual outcomes under mesopic conditions using wavefront technology combined with vector analysis than using wavefront technology alone, and also found equivalent higher-order aberrations (see below). Vector planning also proved advantageous in patients with keratoconus.

No good data can be found that compare the percentage of LASIK procedures that employ wavefront guidance versus the percentage that do not, nor the percentage of refractive surgeons who have a preference one way or the other. Wavefront technology continues to be positioned as an "advance" in LASIK with putative advantages; however, it is clear that not all LASIK procedures are performed with wavefront guidance.

Still, surgeons claim patients are generally more satisfied with this technique than with previous methods, particularly regarding lowered incidence of "halos," the visual artifact caused by spherical aberration induced in the eye by earlier methods. A meta-analysis of eight trials showed a lower incidence of these higher order aberrations in patients who had wavefront-guided LASIK compared to non-wavefront-guided LASIK. Based on their experience, the United States Air Force has described WFG-Lasik as giving "superior vision results".

Topography-assisted

Topography-assisted LASIK is intended to be an advancement in precision and reduce night-vision side effects. The first topography-assisted device received FDA approval September 13, 2013.

History

Barraquer's early work

In the 1950s, the microkeratome and keratomileusis technique were developed in Bogotá, Colombia, by the Spanish ophthalmologist Jose Barraquer. In his clinic, he would cut thin (one hundredth of a mm thick) flaps in the cornea to alter its shape. Barraquer also investigated how much of the cornea had to be left unaltered in order to provide stable long-term results. This work was followed by that of the Russian scientist, Svyatoslav Fyodorov, who developed radial keratotomy (RK) in the 1970s and designed the first posterior chamber implantable contact lenses (phakic intraocular lens) in the 1980s.

Laser refractive surgery

In 1980, Rangaswamy Srinivasan, at the IBM Research laboratory, discovered that an ultraviolet excimer laser could etch living tissue, with precision and with no thermal damage to the surrounding area. He named the phenomenon "ablative photo-decomposition" (APD). Five years later, in 1985, Steven Trokel at the Edward S. Harkness Eye Institute, Columbia University in New York City, published his work using the excimer laser in radial keratotomy. He wrote,

"The central corneal flattening obtained by radial diamond knife incisions has been duplicated by radial laser incisions in 18 enucleated human eyes. The incisions, made by 193 nm far-ultraviolet light radiation emitted by the excimer laser, produced corneal flattening ranging from 0.12 to 5.35 diopters. Both the depth of the corneal incisions and the degree of central corneal flattening correlated with the laser energy applied. Histopathology revealed the remarkably smooth edges of the laser incisions."

Together with his colleagues, Charles Munnerlyn and Terry Clapham, Trokel founded VISX USA inc. Marguerite B. MacDonald MD performed the first human VISX refractive laser eye surgery in 1989.

Patent

A number of patents have been issued for several techniques related to LASIK. Rangaswamy Srinivasan and James Wynne filed a patent application on the ultraviolet excimer laser, in 1986, issued in 1988. In 1989, Gholam A. Peyman was granted a US patent for using an excimer laser to modify corneal curvature. It was,

"A method and apparatus for modifying the curvature of a live cornea via use of an excimer laser. The live cornea has a thin layer removed therefrom, leaving an exposed internal surface thereon. Then, either the surface or thin layer is exposed to the laser beam along a predetermined pattern to ablate desired portions. The thin layer is then replaced onto the surface. Ablating a central area of the surface or thin layer makes the cornea less curved, while ablating an annular area spaced from the center of the surface or layer makes the cornea more curved. The desired predetermined pattern is formed by use of a variable diaphragm, a rotating orifice of variable size, a movable mirror or a movable fiber optic cable through which the laser beam is directed towards the exposed internal surface or removed thin layer."

The patents related to so-called broad-beam LASIK and PRK technologies were granted to US companies including Visx and Summit during 1990–1995 based on the fundamental US patent issued to IBM (1988) which claimed the use of UV laser for the ablation of organic tissues.

Implementation in the U.S.

The LASIK technique was implemented in the U.S. after its successful application elsewhere. The Food and Drug Administration (FDA) commenced a trial of the excimer laser in 1989. The first enterprise to receive FDA approval to use an excimer laser for photo-refractive keratectomy was Summit Technology (founder and CEO, Dr. David Muller). In 1992, under the direction of the FDA, Greek ophthalmologist Ioannis Pallikaris introduced LASIK to ten VISX centres. In 1998, the "Kremer Excimer Laser", serial number KEA 940202, received FDA approval for its singular use for performing LASIK. Subsequently, Summit Technology was the first company to receive FDA approval to mass manufacture and distribute excimer lasers. VISX and other companies followed.

The excimer laser that was used for the first LASIK surgeries by I. Pallikaris

Pallikaris suggested a flap of cornea could be raised by microkeratome prior to the performing of PRK with the excimer laser. The addition of a flap to PRK became known as LASIK.

Recent years

The procedure seems to be a declining option for many in the United States, dropping more than 50 percent, from about 1.5 million surgeries in 2007 to 604,000 in 2015, according to the eye-care data source Market Scope. A study in the journal Cornea determined the frequency with which LASIK was searched on Google from 2007 to 2011. Within this time frame, LASIK searches declined by 40% in the United States. Countries such as the U.K. and India also showed a decline, 22% and 24% respectively. Canada, however, showed an increase in LASIK searches by 8%. This decrease in interest can be attributed to several factors: the emergence of refractive cataract surgery, the economic recession in 2008, and unfavorable media coverage from the FDA's 2008 press release on LASIK.

Further research

Since 1991, there have been further developments such as faster lasers; larger spot areas; bladeless flap incisions; intraoperative corneal pachymetry; and "wavefront-optimized" and "wavefront-guided" techniques which were introduced by the University of Michigan's Center for Ultrafast Optical Science. The goal of replacing standard LASIK in refractive surgery is to avoid permanently weakening the cornea with incisions and to deliver less energy to the surrounding tissues. More recently, techniques like Epi-Bowman Keratectomy have been developed that avoid touching the epithelial basement membrane or Bowman's layer.

Experimental techniques

  • "plain" LASIK: LASEK, Epi-LASIK,
  • Wavefront-guided PRK,
  • advanced intraocular lenses.
  • Femtosecond laser intrastromal vision correction: using all-femtosecond correction, for example, Femtosecond Lenticule EXtraction, FLIVC, or IntraCOR),
  • Keraflex: a thermobiochemical solution which has received the CE Mark for refractive correction. and is in European clinical trials for the correction of myopia and keratoconus.
  • Technolas FEMTEC laser: for incisionless IntraCOR ablation for presbyopia, with trials ongoing for myopia and other conditions.
  • LASIK with the IntraLase femtosecond laser: early trials comparing to the «LASIK with microkeratomes for the correction of myopia suggest no significant differences in safety or efficacy. However, the femtosecond laser has a potential advantage in predictability, although this finding was not significant».

Comparison to photorefractive keratectomy

A systematic review that compared PRK and LASIK concluded that LASIK has a shorter recovery time and less pain. After a period of one year, the two techniques have similar results.

A 2017 systematic review found uncertainty in visual acuity, but found that in one study those receiving PRK were less likely to have residual refractive error and less likely to have an over-correction, compared with LASIK.
