Sunday, November 4, 2018

Optical aberration

From Wikipedia, the free encyclopedia
In optics, aberration is a property of optical systems such as lenses that causes light to be spread out over some region of space rather than focused to a point. Aberrations cause the image formed by a lens to be blurred or distorted, with the nature of the distortion depending on the type of aberration. Aberration can be defined as a departure of the performance of an optical system from the predictions of paraxial optics. In an imaging system, it occurs when light from one point of an object does not converge into (or does not diverge from) a single point after transmission through the system. Aberrations occur because the simple paraxial theory is not a completely accurate model of the effect of an optical system on light, rather than due to flaws in the optical elements.

An image-forming optical system with aberration will produce an image which is not sharp. Makers of optical instruments need to correct optical systems to compensate for aberration.

Aberration can be analyzed with the techniques of geometrical optics. The articles on reflection, refraction and caustics discuss the general features of reflected and refracted rays.

Overview

Reflection from a spherical mirror. Incident rays (red) away from the center of the mirror produce reflected rays (green) that miss the focal point, F. This is due to spherical aberration.

With an ideal lens, light from any given point on an object would pass through the lens and come together at a single point in the image plane (or, more generally, the image surface). Real lenses do not focus light exactly to a single point, however, even when they are perfectly made. These deviations from the idealized lens performance are called aberrations of the lens.

Aberrations fall into two classes: monochromatic and chromatic. Monochromatic aberrations are caused by the geometry of the lens or mirror and occur both when light is reflected and when it is refracted. They appear even when using monochromatic light, hence the name.

Chromatic aberrations are caused by dispersion, the variation of a lens's refractive index with wavelength. Because of dispersion, different wavelengths of light come to focus at different points. Chromatic aberration does not appear when monochromatic light is used.
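The wavelength dependence described above can be sketched numerically. This is a minimal illustration, assuming a simple Cauchy dispersion model n(λ) = A + B/λ² with roughly crown-glass-like coefficients (illustrative values, not measured data for any catalog glass):

```python
# Sketch: chromatic focal shift of a thin lens under a simple Cauchy
# dispersion model n(λ) = A + B/λ² (coefficients are illustrative).
def refractive_index(wavelength_um, A=1.5046, B=0.0042):
    return A + B / wavelength_um**2

def thin_lens_focal_length(n, r1_mm, r2_mm):
    # Lensmaker's equation for a thin lens in air: 1/f = (n - 1)(1/r1 - 1/r2)
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

# Blue (F), yellow (d) and red (C) Fraunhofer lines, in micrometres.
for wl in (0.486, 0.589, 0.656):
    n = refractive_index(wl)
    f = thin_lens_focal_length(n, r1_mm=50.0, r2_mm=-50.0)
    print(f"wavelength {wl} um: n = {n:.4f}, f = {f:.2f} mm")
```

Blue light, seeing the higher index, focuses shorter than red; the gap between the two foci is the axial chromatic aberration.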

Monochromatic aberrations

The most common monochromatic aberrations are:
  • Defocus
  • Spherical aberration
  • Coma
  • Astigmatism
  • Field curvature
  • Distortion

Although defocus is technically the lowest-order optical aberration, it is usually not considered a lens aberration, since it can be corrected by moving the lens (or the image plane) to bring the image plane to the optical focus of the lens.

In addition to these aberrations, piston and tilt are effects which shift the position of the focal point. Piston and tilt are not true optical aberrations, since when an otherwise perfect wavefront is altered by piston and tilt, it will still form a perfect, aberration-free image, only shifted to a different position.

Chromatic aberrations

Chromatic aberration occurs when different wavelengths are not focused to the same point. Types of chromatic aberration are:
  • Axial (or "longitudinal") chromatic aberration
  • Lateral (or "transverse") chromatic aberration

Theory of monochromatic aberration

A perfect optical system would follow the theorem: Rays of light proceeding from any object point unite in an image point; and therefore the object space is reproduced in an image space. The introduction of simple auxiliary terms, due to Gauss, named the focal lengths and focal planes, permits the determination of the image of any object for any system. The Gaussian theory, however, is only true so long as the angles made by all rays with the optical axis (the symmetrical axis of the system) are infinitely small, i.e. with infinitesimal objects, images and lenses; in practice these conditions may not be realized, and the images projected by uncorrected systems are, in general, ill-defined and often completely blurred, if the aperture or field of view exceeds certain limits.

The investigations of James Clerk Maxwell and Ernst Abbe showed that the properties of these reproductions, i.e. the relative position and magnitude of the images, are not special properties of optical systems, but necessary consequences of the supposition (per Abbe) of the reproduction of all points of a space in image points, and are independent of the manner in which the reproduction is effected. These authors showed, however, that no optical system can justify these suppositions, since they are contradictory to the fundamental laws of reflection and refraction. Consequently, the Gaussian theory only supplies a convenient method of approximating to reality; and actual optical systems can only attempt to realize this unattainable ideal. At present, all that can be attempted is to reproduce a single plane in another plane; but even this has not been altogether satisfactorily accomplished: aberrations always occur, and it is improbable that these will ever be entirely corrected.

The classical theory of optics and related systems has been analyzed by numerous authors.

Aberration of axial points (spherical aberration in the restricted sense)

Figure 1

Let S (fig. 1) be any optical system; rays proceeding from an axis point O under an angle u1 will unite in the axis point O'1, and those under an angle u2 in the axis point O'2. If there is refraction at a collective spherical surface, or through a thin positive lens, O'2 will lie in front of O'1 so long as the angle u2 is greater than u1 (under correction); and conversely with a dispersive surface or lenses (over correction). The caustic, in the first case, resembles the sign > (greater than); in the second < (less than). If the angle u1 is very small, O'1 is the Gaussian image; and O'1 O'2 is termed the longitudinal aberration, and O'1R the lateral aberration of the pencils with aperture u2. If the pencil with the angle u2 is that of the maximum aberration of all the pencils transmitted, then in a plane perpendicular to the axis at O'1 there is a circular disk of confusion of radius O'1R, and in a parallel plane at O'2 another one of radius O'2R2; between these two is situated the disk of least confusion.
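The longitudinal aberration of the spherical mirror shown in the overview can be computed exactly from elementary geometry. A small sketch, assuming a concave mirror of radius R and a ray parallel to the axis at height h (so sin θ = h/R), which recrosses the axis at z = R − R/(2 cos θ) from the vertex while the paraxial focus sits at R/2:

```python
import math

# Sketch: rays striking a concave spherical mirror (radius R) parallel to
# the axis at height h recross the axis at z = R - R/(2 cos(theta)), where
# sin(theta) = h/R. The paraxial focus is at R/2; marginal rays cross short.
def axis_crossing(h, R):
    theta = math.asin(h / R)              # angle of incidence on the mirror
    return R - R / (2.0 * math.cos(theta))

R = 100.0                                 # radius of curvature, arbitrary units
for h in (1.0, 10.0, 30.0):
    z = axis_crossing(h, R)
    print(f"h = {h:4.1f}: axis crossing at z = {z:.3f}  (paraxial focus {R/2:.1f})")
```

The pencil with the largest h has the largest aberration, exactly as in the description of O'1 O'2 above.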

The largest opening of the pencils, which take part in the reproduction of O, i.e. the angle u, is generally determined by the margin of one of the lenses or by a hole in a thin plate placed between, before, or behind the lenses of the system. This hole is termed the stop or diaphragm; Abbe used the term aperture stop for both the hole and the limiting margin of the lens. The component S1 of the system, situated between the aperture stop and the object O, projects an image of the diaphragm, termed by Abbe the entrance pupil; the exit pupil is the image formed by the component S2, which is placed behind the aperture stop. All rays which issue from O and pass through the aperture stop also pass through the entrance and exit pupils, since these are images of the aperture stop. Since the maximum aperture of the pencils issuing from O is the angle u subtended by the entrance pupil at this point, the magnitude of the aberration will be determined by the position and diameter of the entrance pupil. If the system be entirely behind the aperture stop, then this is itself the entrance pupil (front stop); if entirely in front, it is the exit pupil (back stop).

If the object point be infinitely distant, all rays received by the first member of the system are parallel, and their intersections, after traversing the system, vary according to their perpendicular height of incidence, i.e. their distance from the axis. This distance replaces the angle u in the preceding considerations; and the aperture, i.e. the radius of the entrance pupil, is its maximum value.

Aberration of elements, i.e. smallest objects at right angles to the axis

If rays issuing from O (fig. 1) are concurrent, it does not follow that points in a portion of a plane perpendicular at O to the axis will be also concurrent, even if the part of the plane be very small. As the diameter of the lens increases (i.e., with increasing aperture), the neighboring point N will be reproduced, but attended by aberrations comparable in magnitude to ON. These aberrations are avoided if, according to Abbe, the sine condition, sin u'1/sin u1 = sin u'2/sin u2, holds for all rays reproducing the point O. If the object point O is infinitely distant, u1 and u2 are to be replaced by h1 and h2, the perpendicular heights of incidence; the sine condition then becomes sin u'1/h1 = sin u'2/h2. A system fulfilling this condition and free from spherical aberration is called aplanatic (Greek a-, privative, planē, a wandering). This word was first used by Robert Blair to characterize a superior achromatism, and, subsequently, by many writers to denote freedom from spherical aberration as well.

Since the aberration increases with the distance of the ray from the center of the lens, the aberration increases as the lens diameter increases (or, correspondingly, with the diameter of the aperture), and hence can be minimized by reducing the aperture, at the cost of also reducing the amount of light reaching the image plane.
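The scaling with aperture can be checked numerically. A sketch, again assuming a concave spherical mirror of radius R (not a case treated explicitly in the text): the longitudinal spherical aberration LSA(h) = R/(2 cos θ) − R/2 with sin θ = h/R grows roughly as h²/(4R), so halving the aperture radius cuts the aberration to about one quarter:

```python
import math

# Sketch: longitudinal spherical aberration of a concave mirror (radius R)
# grows roughly quadratically with ray height h, which is why stopping down
# the aperture reduces the blur so effectively.
def lsa(h, R):
    theta = math.asin(h / R)
    return R / (2.0 * math.cos(theta)) - R / 2.0

R = 100.0
for h in (5.0, 10.0, 20.0):
    print(f"h = {h:4.1f}: LSA = {lsa(h, R):.4f}")
```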

Aberration of lateral object points (points beyond the axis) with narrow pencils. Astigmatism.

Figure 2

A point O (fig. 2) at a finite distance from the axis (or with an infinitely distant object, a point which subtends a finite angle at the system) is, in general, even then not sharply reproduced if the pencil of rays issuing from it and traversing the system is made infinitely narrow by reducing the aperture stop; such a pencil consists of the rays which can pass from the object point through the now infinitely small entrance pupil. It is seen (ignoring exceptional cases) that the pencil does not meet the refracting or reflecting surface at right angles; therefore it is astigmatic (Gr. a-, privative, stigma, a point). Naming the central ray passing through the entrance pupil the axis of the pencil or principal ray, it can be said: the rays of the pencil intersect, not in one point, but in two focal lines, which can be assumed to be at right angles to the principal ray; of these, one lies in the plane containing the principal ray and the axis of the system, i.e. in the first principal section or meridional section, and the other at right angles to it, i.e. in the second principal section or sagittal section. We receive, therefore, in no single intercepting plane behind the system, as, for example, a focusing screen, an image of the object point; on the other hand, in each of two planes lines O' and O" are separately formed (in neighboring planes ellipses are formed), and in a plane between O' and O" a circle of least confusion. The interval O'O", termed the astigmatic difference, increases, in general, with the angle W made by the principal ray OP with the axis of the system, i.e. with the field of view. Two astigmatic image surfaces correspond to one object plane; and these are in contact at the axis point; on the one lie the focal lines of the first kind, on the other those of the second. Systems in which the two astigmatic surfaces coincide are termed anastigmatic or stigmatic.

Sir Isaac Newton was probably the discoverer of astigmatism; the position of the astigmatic image lines was determined by Thomas Young; and the theory was developed by Allvar Gullstrand. A bibliography by P. Culmann is given in Moritz von Rohr's Die Bilderzeugung in optischen Instrumenten.

Aberration of lateral object points with broad pencils. Coma.

By opening the stop wider, similar deviations arise for lateral points as have been already discussed for axial points; but in this case they are much more complicated. The course of the rays in the meridional section is no longer symmetrical to the principal ray of the pencil; and on an intercepting plane there appears, instead of a luminous point, a patch of light, not symmetrical about a point, and often exhibiting a resemblance to a comet having its tail directed towards or away from the axis. From this appearance it takes its name. The unsymmetrical form of the meridional pencil—formerly the only one considered—is coma in the narrower sense only; other errors of coma have been treated by Arthur König and Moritz von Rohr, and later by Allvar Gullstrand.

Curvature of the field of the image

If the above errors be eliminated, the two astigmatic surfaces united, and a sharp image obtained with a wide aperture—there remains the necessity to correct the curvature of the image surface, especially when the image is to be received upon a plane surface, e.g. in photography. In most cases the surface is concave towards the system.

Distortion of the image

Fig. 3a: Barrel distortion
 
Fig. 3b: Pincushion distortion

Even if the image is sharp, it may be distorted compared to ideal pinhole projection. In pinhole projection, the magnification of an object is inversely proportional to its distance to the camera along the optical axis so that a camera pointing directly at a flat surface reproduces that flat surface. Distortion can be thought of as stretching the image non-uniformly, or, equivalently, as a variation in magnification across the field. While "distortion" can include arbitrary deformation of an image, the most pronounced mode of distortion produced by conventional imaging optics is "barrel distortion", in which the center of the image is magnified more than the perimeter (figure 3a). The reverse, in which the perimeter is magnified more than the center, is known as "pincushion distortion" (figure 3b). This effect is called lens distortion or image distortion, and there are algorithms to correct it.
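The correction algorithms mentioned above usually start from a radial polynomial model. A minimal sketch, assuming a one-term Brown-Conrady polynomial (real calibrations fit more coefficients) with image coordinates normalized so the edge of the frame is near r = 1:

```python
# Sketch of a simple radial distortion model: a point at radius r is mapped
# to r * (1 + k1 * r^2). With this convention, k1 < 0 compresses the edges
# (barrel distortion) and k1 > 0 stretches them (pincushion distortion).
def distort(x, y, k1):
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# Barrel distortion: a point near the edge is magnified less than the center.
xd, yd = distort(0.8, 0.0, k1=-0.2)
print(xd)  # ~0.6976, pulled inward from 0.8
```

Undistorting an image amounts to inverting this mapping, typically by iteration or by fitting the inverse polynomial.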

Systems free of distortion are called orthoscopic (orthos, right, skopein to look) or rectilinear (straight lines).

Figure 4

This aberration is quite distinct from that of the sharpness of reproduction; in unsharp reproduction, the question of distortion arises if only parts of the object can be recognized in the figure. If, in an unsharp image, a patch of light corresponds to an object point, the center of gravity of the patch may be regarded as the image point, this being the point where the plane receiving the image, e.g., a focusing screen, intersects the ray passing through the middle of the stop. This assumption is justified if a poor image on the focusing screen remains stationary when the aperture is diminished; in practice, this generally occurs. This ray, named by Abbe a principal ray (not to be confused with the principal rays of the Gaussian theory), passes through the center of the entrance pupil before the first refraction, and the center of the exit pupil after the last refraction. From this it follows that correctness of drawing depends solely upon the principal rays; and is independent of the sharpness or curvature of the image field. Referring to fig. 4, we have O'Q'/OQ = a' tan w'/a tan w = 1/N, where N is the scale or magnification of the image. For N to be constant for all values of w, a' tan w'/a tan w must also be constant. If the ratio a'/a be sufficiently constant, as is often the case, the above relation reduces to the condition of Airy, i.e. tan w'/tan w = a constant. This simple relation (see Camb. Phil. Trans., 1830, 3, p. 1) is fulfilled in all systems which are symmetrical with respect to their diaphragm (briefly named symmetrical or holosymmetrical objectives), or which consist of two like, but different-sized, components, placed from the diaphragm in the ratio of their size, and presenting the same curvature to it (hemisymmetrical objectives); in these systems tan w'/tan w = 1.

The constancy of a'/a necessary for this relation to hold was pointed out by R. H. Bow (Brit. Journ. Photog., 1861), and Thomas Sutton (Photographic Notes, 1862); it has been treated by O. Lummer and by M. von Rohr (Zeit. f. Instrumentenk., 1897, 17, and 1898, 18, p. 4). It requires the middle of the aperture stop to be reproduced in the centers of the entrance and exit pupils without spherical aberration. M. von Rohr showed that for systems fulfilling neither the Airy nor the Bow-Sutton condition, the ratio a' cos w'/a tan w will be constant for one distance of the object. This combined condition is exactly fulfilled by holosymmetrical objectives reproducing with the scale 1, and by hemisymmetrical, if the scale of reproduction be equal to the ratio of the sizes of the two components.

Zernike model of aberrations

Circular wavefront profiles associated with aberrations may be mathematically modeled using Zernike polynomials. Developed by Frits Zernike in the 1930s, Zernike's polynomials are orthogonal over a circle of unit radius. A complex, aberrated wavefront profile may be curve-fitted with Zernike polynomials to yield a set of fitting coefficients that individually represent different types of aberrations. These Zernike coefficients are linearly independent, thus individual aberration contributions to an overall wavefront may be isolated and quantified separately.

There are even and odd Zernike polynomials. The even Zernike polynomials are defined as

  Z_n^m(ρ, φ) = R_n^m(ρ) cos(mφ)

and the odd Zernike polynomials as

  Z_n^(−m)(ρ, φ) = R_n^m(ρ) sin(mφ),

where m and n are nonnegative integers with n ≥ m, φ is the azimuthal angle in radians, and ρ is the normalized radial distance. The radial polynomials R_n^m have no azimuthal dependence, and are defined as

  R_n^m(ρ) = Σ_(k=0)^((n−m)/2) [(−1)^k (n−k)! / (k! ((n+m)/2 − k)! ((n−m)/2 − k)!)] ρ^(n−2k)

if n − m is even, and R_n^m(ρ) = 0 if n − m is odd.
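A small sketch of the Zernike radial polynomial R_n^m(ρ), implemented directly from the standard series definition:

```python
from math import factorial

# Sketch: the Zernike radial polynomial R_n^m(rho), evaluated term by term
# from its factorial series. R_n^m vanishes identically when n - m is odd.
def zernike_radial(n, m, rho):
    if (n - m) % 2 != 0:
        return 0.0
    return sum(
        (-1) ** k
        * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# R_2^0(rho) = 2*rho^2 - 1, the radial part of the defocus term:
print(zernike_radial(2, 0, 0.0))  # -1.0
print(zernike_radial(2, 0, 1.0))  # 1.0
```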

The first few Zernike polynomials are:

  • "Piston": Z = 1, equal to the mean value of the wavefront
  • "X-Tilt": Z = ρ cos φ, the deviation of the overall beam in the sagittal direction
  • "Y-Tilt": Z = ρ sin φ, the deviation of the overall beam in the tangential direction
  • "Defocus": Z = 2ρ² − 1, a parabolic wavefront resulting from being out of focus
  • "0° Astigmatism": Z = ρ² cos 2φ, a cylindrical shape along the X or Y axis
  • "45° Astigmatism": Z = ρ² sin 2φ, a cylindrical shape oriented at ±45° from the X axis
  • "X-Coma": Z = (3ρ³ − 2ρ) cos φ, comatic image flaring in the horizontal direction
  • "Y-Coma": Z = (3ρ³ − 2ρ) sin φ, comatic image flaring in the vertical direction
  • "Third-order spherical aberration": Z = 6ρ⁴ − 6ρ² + 1

where ρ is the normalized pupil radius with 0 ≤ ρ ≤ 1, φ is the azimuthal angle around the pupil with 0 ≤ φ ≤ 2π, and the fitting coefficients are the wavefront errors in wavelengths.

As in Fourier synthesis using sines and cosines, a wavefront may be perfectly represented by a sufficiently large number of higher-order Zernike polynomials. However, wavefronts with very steep gradients or very high spatial frequency structure, such as produced by propagation through atmospheric turbulence or aerodynamic flowfields, are not well modeled by Zernike polynomials, which tend to low-pass filter fine spatial definition in the wavefront. In this case, other fitting methods such as fractals or singular value decomposition may yield improved fitting results.
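The isolation of individual aberration contributions mentioned above can be sketched directly from orthogonality: since distinct Zernike terms are orthogonal over the unit pupil, each coefficient can be recovered by an inner product with the wavefront. A minimal illustration (a synthetic wavefront built from 0.5 waves of defocus plus 0.2 waves of x-tilt, decomposed back into those numbers; in practice one fits sampled data by least squares):

```python
import math

# Sketch: recovering Zernike coefficients of a wavefront by inner products
# over the unit disk, exploiting the orthogonality of the polynomials.
def defocus(rho, phi):   # Z = 2*rho^2 - 1
    return 2.0 * rho**2 - 1.0

def x_tilt(rho, phi):    # Z = rho*cos(phi)
    return rho * math.cos(phi)

def inner(f, g, nr=100, nphi=100):
    # Midpoint-rule integral of f*g over the unit disk (area element rho drho dphi).
    total = 0.0
    for i in range(nr):
        rho = (i + 0.5) / nr
        for j in range(nphi):
            phi = 2.0 * math.pi * (j + 0.5) / nphi
            total += f(rho, phi) * g(rho, phi) * rho
    return total * (1.0 / nr) * (2.0 * math.pi / nphi)

def wavefront(rho, phi):
    return 0.5 * defocus(rho, phi) + 0.2 * x_tilt(rho, phi)

for name, mode in (("defocus", defocus), ("x-tilt", x_tilt)):
    coeff = inner(wavefront, mode) / inner(mode, mode)
    print(f"{name}: {coeff:.3f} waves")
```

Because the defocus and tilt terms are orthogonal, each inner product isolates one coefficient without disturbing the other.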

The circle polynomials were introduced by Frits Zernike to evaluate the point image of an aberrated optical system taking into account the effects of diffraction. The perfect point image in the presence of diffraction had already been described by Airy, as early as 1835. It took almost a hundred years to arrive at a comprehensive theory and modeling of the point image of aberrated systems (Zernike and Nijboer). The analysis by Nijboer and Zernike describes the intensity distribution close to the optimum focal plane. An extended theory that allows the calculation of the point image amplitude and intensity over a much larger volume in the focal region was recently developed (Extended Nijboer-Zernike theory). This Extended Nijboer-Zernike theory of point image or 'point-spread function' formation has found applications in general research on image formation, especially for systems with a high numerical aperture, and in characterizing optical systems with respect to their aberrations.

Analytic treatment of aberrations

The preceding review of the several errors of reproduction belongs to the Abbe theory of aberrations, in which definite aberrations are discussed separately; it is well suited to practical needs, for in the construction of an optical instrument certain errors are sought to be eliminated, the selection of which is justified by experience. In the mathematical sense, however, this selection is arbitrary; the reproduction of a finite object with a finite aperture entails, in all probability, an infinite number of aberrations. This number is only finite if the object and aperture are assumed to be infinitely small of a certain order; and with each order of infinite smallness, i.e. with each degree of approximation to reality (to finite objects and apertures), a certain number of aberrations is associated. This connection is only supplied by theories which treat aberrations generally and analytically by means of indefinite series.
Figure 5

A ray proceeding from an object point O (fig. 5) can be defined by the coordinates (ξ, η) of this point O in an object plane I, at right angles to the axis, and two other coordinates (x, y), the point in which the ray intersects the entrance pupil, i.e. the plane II. Similarly the corresponding image ray may be defined by the points (ξ', η'), and (x', y'), in the planes I' and II'. The origins of these four plane coordinate systems may be collinear with the axis of the optical system; and the corresponding axes may be parallel. Each of the four coordinates ξ', η', x', y' are functions of ξ, η, x, y; and if it be assumed that the field of view and the aperture be infinitely small, then ξ, η, x, y are of the same order of infinitesimals; consequently by expanding ξ', η', x', y' in ascending powers of ξ, η, x, y, series are obtained in which it is only necessary to consider the lowest powers. It is readily seen that if the optical system be symmetrical, the origins of the coordinate systems collinear with the optical axis and the corresponding axes parallel, then by changing the signs of ξ, η, x, y, the values ξ', η', x', y' must likewise change their sign, but retain their arithmetical values; this means that the series are restricted to odd powers of the unmarked variables.

The nature of the reproduction consists in the rays proceeding from a point O being united in another point O'; in general, this will not be the case, for ξ', η' vary if ξ, η be constant, but x, y variable. It may be assumed that the planes I' and II' are drawn where the images of the planes I and II are formed by rays near the axis by the ordinary Gaussian rules; and by an extension of these rules, not, however, corresponding to reality, the Gauss image point O'0, with coordinates ξ'0, η'0, of the point O at some distance from the axis could be constructed. Writing Dξ'=ξ'-ξ'0 and Dη'=η'-η'0, then Dξ' and Dη' are the aberrations belonging to ξ, η and x, y, and are functions of these magnitudes which, when expanded in series, contain only odd powers, for the same reasons as given above. On account of the aberrations of all rays which pass through O, a patch of light, depending in size on the lowest powers of ξ, η, x, y which the aberrations contain, will be formed in the plane I'. These degrees, named by J. Petzval (Bericht uber die Ergebnisse einiger dioptrischer Untersuchungen, Buda Pesth, 1843; Akad. Sitzber., Wien, 1857, vols. xxiv. xxvi.) the numerical orders of the image, are consequently only odd powers; the condition for the formation of an image of the mth order is that in the series for Dξ' and Dη' the coefficients of the powers of the 3rd, 5th…(m-2)th degrees must vanish. The images of the Gauss theory being of the third order, the next problem is to obtain an image of 5th order, or to make the coefficients of the powers of 3rd degree zero. This necessitates the satisfying of five equations; in other words, there are five aberrations of the 3rd order, the vanishing of which produces an image of the 5th order.

The expression for these coefficients in terms of the constants of the optical system, i.e. the radii, thicknesses, refractive indices and distances between the lenses, was solved by L. Seidel (Astr. Nach., 1856, p. 289); in 1840, J. Petzval constructed his portrait objective, from similar calculations which have never been published (see M. von Rohr, Theorie und Geschichte des photographischen Objectivs, Berlin, 1899, p. 248). The theory was elaborated by S. Finsterwalder (Munchen. Acad. Abhandl., 1891, 17, p. 519), who also published a posthumous paper of Seidel containing a short view of his work (München. Akad. Sitzber., 1898, 28, p. 395); a simpler form was given by A. Kerber (Beiträge zur Dioptrik, Leipzig, 1895-6-7-8-9). A. Konig and M. von Rohr (see M. von Rohr, Die Bilderzeugung in optischen Instrumenten, pp. 317–323) have represented Kerber's method, and have deduced the Seidel formulae from geometrical considerations based on the Abbe method, and have interpreted the analytical results geometrically (pp. 212–316).

The aberrations can also be expressed by means of the characteristic function of the system and its differential coefficients, instead of by the radii, &c., of the lenses; these formulae are not immediately applicable, but give, however, the relation between the number of aberrations and the order. Sir William Rowan Hamilton (British Assoc. Report, 1833, p. 360) thus derived the aberrations of the third order; and in later times the method was pursued by Clerk Maxwell (Proc. London Math. Soc., 1874–1875; see also the treatises of R. S. Heath and L. A. Herman), M. Thiesen (Berlin. Akad. Sitzber., 1890, 35, p. 804), H. Bruns (Leipzig. Math. Phys. Ber., 1895, 21, p. 410), and particularly successfully by K. Schwarzschild (Göttingen. Akad. Abhandl., 1905, 4, No. 1), who thus discovered the aberrations of the 5th order (of which there are nine), and possibly the shortest proof of the practical (Seidel) formulae. A. Gullstrand (vide supra, and Ann. d. Phys., 1905, 18, p. 941) founded his theory of aberrations on the differential geometry of surfaces.

The aberrations of the third order are: (1) aberration of the axis point; (2) aberration of points whose distance from the axis is very small, less than of the third order — the deviation from the sine condition and coma here fall together in one class; (3) astigmatism; (4) curvature of the field; (5) distortion.
(1) Aberration of the third order of axis points is dealt with in all text-books on optics. It is very important in telescope design. In telescopes aperture is usually taken as the linear diameter of the objective. It is not the same as microscope aperture which is based on the entrance pupil or field of view as seen from the object and is expressed as an angular measurement. Higher order aberrations in telescope design can be mostly neglected. For microscopes it cannot be neglected. For a single lens of very small thickness and given power, the aberration depends upon the ratio of the radii r:r', and is a minimum (but never zero) for a certain value of this ratio; it varies inversely with the refractive index (the power of the lens remaining constant). The total aberration of two or more very thin lenses in contact, being the sum of the individual aberrations, can be zero. This is also possible if the lenses have the same algebraic sign. Of thin positive lenses with n=1.5, four are necessary to correct spherical aberration of the third order. These systems, however, are not of great practical importance. In most cases, two thin lenses are combined, one of which has just so strong a positive aberration (under-correction, vide supra) as the other a negative; the first must be a positive lens and the second a negative lens; the powers, however, may differ, so that the desired effect of the lens is maintained. It is generally an advantage to secure a great refractive effect by several weaker than by one high-power lens. By one, and likewise by several, and even by an infinite number of thin lenses in contact, no more than two axis points can be reproduced without aberration of the third order. Freedom from aberration for two axis points, one of which is infinitely distant, is known as Herschel's condition. All these rules are valid, inasmuch as the thicknesses and distances of the lenses are not to be taken into account.
(2) The condition for freedom from coma in the third order is also of importance for telescope objectives; it is known as Fraunhofer's condition. (4) After eliminating the aberration on the axis, coma and astigmatism, the relation for the flatness of the field in the third order is expressed by the Petzval equation, Σ 1/r(n'-n) = 0, where r is the radius of a refracting surface, n and n' the refractive indices of the neighboring media, and Σ the sign of summation for all refracting surfaces.
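The Petzval condition can be sketched numerically in its thin-lens form, Σ 1/(n_i f_i) = 0 (the standard thin-lens specialization of the surface-by-surface sum above). A minimal illustration, with purely illustrative indices and focal lengths rather than a real design: pairing a positive element of high-index glass with a negative element of low-index glass makes the Petzval sum vanish while keeping net positive power.

```python
# Sketch: thin-lens Petzval condition, sum over elements of 1/(n_i * f_i) = 0.
# A high-index positive lens paired with a low-index negative lens flattens
# the field without cancelling all of the combination's focusing power.
n1, f1 = 1.75, 60.0          # positive element, dense-flint-like index (illustrative)
n2 = 1.50                    # negative element, crown-like index (illustrative)
f2 = -f1 * n1 / n2           # chosen so 1/(n1*f1) + 1/(n2*f2) = 0  (= -70 mm)
petzval = 1.0 / (n1 * f1) + 1.0 / (n2 * f2)
power = 1.0 / f1 + 1.0 / f2  # net power of the pair in thin contact
print(f"Petzval sum = {petzval:.6g}, combined focal length = {1.0/power:.1f} mm")
```

With equal indices the same cancellation would also cancel the power; the index difference is what makes a flat-field lens of finite focal length possible.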

Practical elimination of aberrations

Laser guide stars assist in the elimination of atmospheric distortion.

The classical imaging problem is to reproduce perfectly a finite plane (the object) onto another plane (the image) through a finite aperture. It is impossible to do so perfectly for more than one such pair of planes (this was proven with increasing generality by Maxwell in 1858, by Bruns in 1895, and by Carathéodory in 1926, see summary in Walther, A., J. Opt. Soc. Am. A 6, 415–422 (1989)). For a single pair of planes (e.g. for a single focus setting of an objective), however, the problem can in principle be solved perfectly. Examples of such a theoretically perfect system include the Luneburg lens and the Maxwell fish-eye.

Practical methods solve this problem with an accuracy which mostly suffices for the special purpose of each species of instrument. The problem of finding a system which reproduces a given object upon a given plane with given magnification (insofar as aberrations must be taken into account) could be dealt with by means of the approximation theory; in most cases, however, the analytical difficulties were too great for older calculation methods but may be ameliorated by application of modern computer systems. Solutions, however, have been obtained in special cases (see A. Konig in M. von Rohr's Die Bilderzeugung, p. 373; K. Schwarzschild, Göttingen. Akad. Abhandl., 1905, 4, Nos. 2 and 3). At the present time constructors almost always employ the inverse method: they compose a system from certain, often quite personal experiences, and test, by the trigonometrical calculation of the paths of several rays, whether the system gives the desired reproduction (examples are given in A. Gleichen, Lehrbuch der geometrischen Optik, Leipzig and Berlin, 1902). The radii, thicknesses and distances are continually altered until the errors of the image become sufficiently small. By this method only certain errors of reproduction are investigated, especially individual members, or all, of those named above. The analytical approximation theory is often employed provisionally, since its accuracy does not generally suffice.

In order to render spherical aberration and the deviation from the sine condition small throughout the whole aperture, there is given to a ray with a finite angle of aperture u* (with infinitely distant objects: with a finite height of incidence h*) the same distance of intersection, and the same sine ratio as to one neighboring the axis (u* or h* may not be much smaller than the largest aperture U or H to be used in the system). The rays with an angle of aperture smaller than u* would not have the same distance of intersection and the same sine ratio; these deviations are called zones, and the constructor endeavors to reduce these to a minimum. The same holds for the errors depending upon the angle of the field of view, w: astigmatism, curvature of field and distortion are eliminated for a definite value, w*; zones of astigmatism, curvature of field and distortion attend smaller values of w. The practical optician names such systems: corrected for the angle of aperture u* (the height of incidence h*) or the angle of field of view w*. Spherical aberration and changes of the sine ratios are often represented graphically as functions of the aperture, in the same way as the deviations of two astigmatic image surfaces of the image plane of the axis point are represented as functions of the angles of the field of view.

The final form of a practical system consequently rests on compromise; enlargement of the aperture results in a diminution of the available field of view, and vice versa. But the larger aperture will give the larger resolution. The following may be regarded as typical:
(1) Largest aperture; necessary corrections are — for the axis point, and sine condition; errors of the field of view are almost disregarded; example — high-power microscope objectives.
(2) Wide angle lens; necessary corrections are — for astigmatism, curvature of field and distortion; errors of the aperture only slightly regarded; examples — photographic widest angle objectives and oculars.
Between these extreme examples stands the normal lens: this is corrected more with regard to aperture; objectives for groups more with regard to the field of view.
(3) Long focus lenses have small fields of view and aberrations on axis are very important. Therefore zones will be kept as small as possible and design should emphasize simplicity. Because of this these lenses are the best for analytical computation.

Chromatic or color aberration

In optical systems composed of lenses, the position, magnitude and errors of the image depend upon the refractive indices of the glass employed (see Lens (optics) and Monochromatic aberration, above). Since the index of refraction varies with the color or wavelength of the light (see dispersion), it follows that a system of lenses (uncorrected) projects images of different colors in somewhat different places and sizes and with different aberrations; i.e. there are chromatic differences of the distances of intersection, of magnifications, and of monochromatic aberrations. If mixed light be employed (e.g. white light) all these images are formed and they cause a confusion, named chromatic aberration; for instance, instead of a white margin on a dark background, there is perceived a colored margin, or narrow spectrum. The absence of this error is termed achromatism, and an optical system so corrected is termed achromatic. A system is said to be chromatically under-corrected when it shows the same kind of chromatic error as a thin positive lens, otherwise it is said to be overcorrected.

If, in the first place, monochromatic aberrations be neglected — in other words, the Gaussian theory be accepted — then every reproduction is determined by the positions of the focal planes, and the magnitude of the focal lengths, or, if the focal lengths, as ordinarily happens, be equal, by three constants of reproduction. These constants are determined by the data of the system (radii, thicknesses, distances, indices, etc., of the lenses); therefore their dependence on the refractive index, and consequently on the color, is calculable. The refractive indices for different wavelengths must be known for each kind of glass made use of. In this manner the conditions are maintained that any one constant of reproduction is equal for two different colors, i.e. this constant is achromatized. For example, it is possible, with one thick lens in air, to achromatize the position of a focal plane or the magnitude of the focal length. If all three constants of reproduction be achromatized, then the Gaussian image for all distances of objects is the same for the two colors, and the system is said to be in stable achromatism.

In practice it is more advantageous (after Abbe) to determine the chromatic aberration (for instance, that of the distance of intersection) for a fixed position of the object, and express it by a sum in which each component contains the amount due to each refracting surface. In a plane containing the image point of one color, another color produces a disk of confusion; this is similar to the confusion caused by two zones in spherical aberration. For infinitely distant objects the radius of the chromatic disk of confusion is proportional to the linear aperture, and independent of the focal length (vide supra, Monochromatic Aberration of the Axis Point); and since this disk becomes the less harmful with an increasing image of a given object, or with increasing focal length, it follows that the deterioration of the image is proportional to the ratio of the aperture to the focal length, i.e. the relative aperture. (This explains the gigantic focal lengths in vogue before the discovery of achromatism.)

Examples:
(a) In a very thin lens, in air, only one constant of reproduction is to be observed, since the focal length and the distance of the focal point are equal. If the refractive index for one color be n, and for another n + dn, and the powers, or reciprocals of the focal lengths, be φ and φ + dφ, then (1) dφ/φ = dn/(n − 1) = 1/ν; dn is called the dispersion, and ν the dispersive power of the glass.
(b) Two thin lenses in contact: let φ1 and φ2 be the powers corresponding to the lenses of refractive indices n1 and n2 and radii r′1, r″1, and r′2, r″2, respectively; let φ denote the total power, and dφ, dn1, dn2 the changes of φ, n1, and n2 with the color. Then the following relations hold:
(2) φ = φ1 + φ2, where φ1 = (n1 − 1)(1/r′1 − 1/r″1) and φ2 = (n2 − 1)(1/r′2 − 1/r″2); and
(3) dφ = φ1·dn1/(n1 − 1) + φ2·dn2/(n2 − 1) = φ1/ν1 + φ2/ν2. For achromatism dφ = 0, hence, from (3),
(4) φ1/ν1 + φ2/ν2 = 0, or φ1 : φ2 = −ν1 : ν2. Therefore φ1 and φ2 must have different algebraic signs, or the system must be composed of a collective and a dispersive lens. Consequently the powers of the two must be different (in order that the total power φ be not zero (equation 2)), and the dispersive powers must also be different (according to 4).
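Under the thin-lens relations (2)–(4) above, the split of a desired total power between the two glasses follows directly. A minimal numerical sketch (Python; the dispersive powers ν are illustrative round numbers, not catalog data):

```python
def achromat_powers(phi_total, v1, v2):
    """Split a total power phi_total between two thin lenses in contact so
    that phi1/v1 + phi2/v2 = 0 (the achromatism condition (4)).
    v1 and v2 are the dispersive powers (nu) of the two glasses."""
    phi1 = phi_total * v1 / (v1 - v2)   # collective element (crown)
    phi2 = -phi_total * v2 / (v1 - v2)  # dispersive element (flint)
    return phi1, phi2

# Illustrative values: nu ~ 60 for a crown, ~ 36 for a flint glass;
# phi_total = 2 diopters (a 500 mm doublet).
phi1, phi2 = achromat_powers(2.0, 60.0, 36.0)
# phi1 = 5.0 D and phi2 = -3.0 D: opposite signs, as the text requires.
```

The opposite signs and unequal magnitudes of the two powers fall straight out of equation (4).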
Newton failed to perceive the existence of media of different dispersive powers required by achromatism; consequently he constructed large reflectors instead of refractors. James Gregory and Leonhard Euler arrived at the correct view from a false conception of the achromatism of the eye; this was determined by Chester More Hall in 1728, Klingenstierna in 1754 and by Dollond in 1757, who constructed the celebrated achromatic telescopes.

Glass with weaker dispersive power (greater ν) is named crown glass; that with greater dispersive power, flint glass. For the construction of an achromatic collective lens (φ positive) it follows, by means of equation (4), that a collective lens I. of crown glass and a dispersive lens II. of flint glass must be chosen; the latter, although the weaker, corrects the other chromatically by its greater dispersive power. For an achromatic dispersive lens the converse must be adopted. This is, at the present day, the ordinary type, e.g., of telescope objective; the values of the four radii must satisfy the equations (2) and (4). Two other conditions may also be postulated: one is always the elimination of the aberration on the axis; the second is either the Herschel or the Fraunhofer condition, the latter being the best (vide supra, Monochromatic Aberration). In practice, however, it is often more useful to avoid the second condition by making the lenses have contact, i.e. equal radii. According to P. Rudolph (Eder's Jahrb. f. Photog., 1891, 5, p. 225; 1893, 7, p. 221), cemented objectives of thin lenses permit the elimination of spherical aberration on the axis, if, as above, the collective lens has a smaller refractive index; on the other hand, they permit the elimination of astigmatism and curvature of the field, if the collective lens has a greater refractive index (this follows from the Petzval equation; see L. Seidel, Astr. Nachr., 1856, p. 289). Should the cemented system be positive, then the more powerful lens must be positive; and, according to (4), to the greater power belongs the weaker dispersive power (greater ν), that is to say, crown glass; consequently the crown glass must have the greater refractive index for astigmatic and plane images. In all earlier kinds of glass, however, the dispersive power increased with the refractive index; that is, ν decreased as n increased; but some of the Jena glasses by E. Abbe and O. Schott were crown glasses of high refractive index, and achromatic systems from such crown glasses, with flint glasses of lower refractive index, are called the new achromats, and were employed by P. Rudolph in the first anastigmats (photographic objectives).

Instead of making dφ vanish, a certain value can be assigned to it which will produce, by the addition of the two lenses, any desired chromatic deviation, e.g. sufficient to eliminate one present in other parts of the system. If the lenses I. and II. be cemented and have the same refractive index for one color, then its effect for that one color is that of a lens of one piece; by such decomposition of a lens it can be made chromatic or achromatic at will, without altering its spherical effect. If its chromatic effect (dφ) be greater than that of the same lens, this being made of the more dispersive of the two glasses employed, it is termed hyper-chromatic.

For two thin lenses separated by a distance D the condition for achromatism is D = (ν1f1 + ν2f2)/(ν1 + ν2); if ν1 = ν2 (e.g. if the lenses be made of the same glass), this reduces to D = (f1 + f2)/2, known as the condition for oculars.
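The separated-lens condition is easy to check numerically. A sketch (Python; the focal lengths and ν values are arbitrary illustrations):

```python
def achromatic_separation(f1, f2, v1, v2):
    """Separation D of two thin lenses (focal lengths f1, f2 in metres,
    dispersive powers v1, v2) at which the combination is achromatic:
    D = (v1*f1 + v2*f2) / (v1 + v2)."""
    return (v1 * f1 + v2 * f2) / (v1 + v2)

# Same glass (v1 == v2): D reduces to (f1 + f2)/2, the condition for oculars.
D = achromatic_separation(0.06, 0.03, 60.0, 60.0)
# D = (0.06 + 0.03) / 2 = 0.045 m
```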

If a constant of reproduction, for instance the focal length, be made equal for two colors, then it is not the same for other colors, if two different glasses are employed. For example, the condition for achromatism (4) for two thin lenses in contact is fulfilled in only one part of the spectrum, since the dispersion dn varies within the spectrum. This fact was first ascertained by J. Fraunhofer, who defined the colors by means of the dark lines in the solar spectrum; and showed that the ratio of the dispersion of two glasses varied about 20% from the red to the violet (the variation for glass and water is about 50%). If, therefore, for two colors, a and b, fa = fb = f, then for a third color, c, the focal length is different; that is, if c lies between a and b, then fc < f, and vice versa; these algebraic results follow from the fact that towards the red the dispersion of the positive crown glass preponderates, towards the violet that of the negative flint. These chromatic errors of systems, which are achromatic for two colors, are called the secondary spectrum, and depend upon the aperture and focal length in the same manner as the primary chromatic errors do.

In fig. 6, taken from M. von Rohr's Theorie und Geschichte des photographischen Objectivs, the abscissae are focal lengths, and the ordinates wavelengths. The Fraunhofer lines used are shown in the adjacent table:

  Line:            A′      C       D       Green Hg.  F       G′      Violet Hg.
  Wavelength (nm): 767.7   656.3   589.3   546.1      486.2   454.1   405.1

Figure 6

The focal lengths are made equal for the lines C and F. In the neighborhood of 550 nm the tangent to the curve is parallel to the axis of wavelengths, and the focal length varies least over a fairly large range of color; therefore in this neighborhood the color union is at its best. Moreover, this region of the spectrum is that which appears brightest to the human eye, and consequently this curve of the secondary spectrum, obtained by making fC = fF, is, according to the experiments of Sir G. G. Stokes (Proc. Roy. Soc., 1878), the most suitable for visual instruments (optical achromatism). In a similar manner, for systems used in photography, the vertex of the color curve must be placed in the position of the maximum sensibility of the plates; this is generally supposed to be at G'; and to accomplish this the F and violet mercury lines are united. This artifice is specially adopted in objectives for astronomical photography (pure actinic achromatism). For ordinary photography, however, there is this disadvantage: the image on the focusing-screen and the correct adjustment of the photographic sensitive plate are not in register; in astronomical photography this difference is constant, but in other kinds it depends on the distance of the objects. On this account the lines D and G' are united for ordinary photographic objectives; the optical as well as the actinic image is chromatically inferior, but both lie in the same place; and consequently the best correction lies in F (this is known as the actinic correction or freedom from chemical focus).

Should there be in two lenses in contact the same focal lengths for three colours a, b, and c, i.e. fa = fb = fc = f, then the relative partial dispersion (nc − nb)/(na − nb) must be equal for the two kinds of glass employed. This follows by considering equation (4) for the two pairs of colors ac and bc. Until recently no glasses were known with a proportional degree of absorption; but R. Blair (Trans. Edin. Soc., 1791, 3, p. 3), P. Barlow, and F. S. Archer overcame the difficulty by constructing fluid lenses between glass walls. Fraunhofer prepared glasses which reduced the secondary spectrum; but permanent success was only assured on the introduction of the Jena glasses by E. Abbe and O. Schott. In using glasses not having proportional dispersion, the deviation of a third colour can be eliminated by two lenses, if an interval be allowed between them; or by three lenses in contact, which may not all consist of the old glasses. In uniting three colors an achromatism of a higher order is derived; there is yet a residual tertiary spectrum, but it can always be neglected.
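The three-color condition can be expressed compactly: for each glass, form the relative partial dispersion from its indices at the three colors and compare the two values. A sketch (Python; the index triples are hypothetical, not measured glass data):

```python
def partial_dispersion(n_a, n_b, n_c):
    """Relative partial dispersion (n_c - n_b) / (n_a - n_b) for one glass,
    from its refractive indices at three colors a, b, c."""
    return (n_c - n_b) / (n_a - n_b)

# Hypothetical index triples at colors a, b, c (illustrative only):
P_crown = partial_dispersion(1.5230, 1.5157, 1.5198)
P_flint = partial_dispersion(1.6321, 1.6161, 1.6252)
# The closer P_crown and P_flint are, the smaller the residual (secondary)
# spectrum of a doublet achromatized for colors a and b.
```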

The Gaussian theory is only an approximation; monochromatic or spherical aberrations still occur, which will be different for different colors; and should they be compensated for one color, the image of another color would prove disturbing. The most important is the chromatic difference of aberration of the axis point, which is still present to disturb the image, after par-axial rays of different colors are united by an appropriate combination of glasses. If a collective system be corrected for the axis point for a definite wavelength, then, on account of the greater dispersion in the negative components — the flint glasses, — overcorrection will arise for the shorter wavelengths (this being the error of the negative components), and under-correction for the longer wavelengths (the error of crown glass lenses preponderating in the red). This error was treated by Jean le Rond d'Alembert, and, in special detail, by C. F. Gauss. It increases rapidly with the aperture, and is more important with medium apertures than the secondary spectrum of par-axial rays; consequently, spherical aberration must be eliminated for two colors, and if this be impossible, then it must be eliminated for those particular wavelengths which are most effectual for the instrument in question (a graphical representation of this error is given in M. von Rohr, Theorie und Geschichte des photographischen Objectivs).

The condition for the reproduction of a surface element in the place of a sharply reproduced point — the constancy of the sine ratio — must also be fulfilled with large apertures for several colors. E. Abbe succeeded in computing microscope objectives free from error of the axis point and satisfying the sine condition for several colors, which therefore, according to his definition, were aplanatic for several colors; such systems he termed apochromatic. While, however, the magnification of the individual zones is the same, it is not the same for red as for blue; and there is a chromatic difference of magnification. This is produced in the same amount, but in the opposite sense, by the oculars which Abbe used with these objectives (compensating oculars), so that it is eliminated in the image of the whole microscope. The best telescope objectives, and photographic objectives intended for three-color work, are also apochromatic, even if they do not possess quite the same quality of correction as microscope objectives do. The chromatic differences of other errors of reproduction seldom have practical importance.

Microelectromechanical systems

From Wikipedia, the free encyclopedia

Proposal submitted to DARPA in 1986 first introducing the term "microelectromechanical systems"
MEMS microcantilever resonating inside a scanning electron microscope

Microelectromechanical systems (MEMS, also written as micro-electro-mechanical, MicroElectroMechanical or microelectronic and microelectromechanical systems and the related micromechatronics) is the technology of microscopic devices, particularly those with moving parts. It merges at the nano-scale into nanoelectromechanical systems (NEMS) and nanotechnology. MEMS are also referred to as micromachines in Japan, or micro systems technology (MST) in Europe.

MEMS are made up of components between 1 and 100 micrometres in size (i.e., 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometres to a millimetre (i.e., 0.02 to 1.0 mm), although components arranged in arrays (e.g., digital micromirror devices) can be more than 1000 mm². They usually consist of a central unit that processes data (the microprocessor) and several components that interact with the surroundings, such as microsensors. Because of the large surface-area-to-volume ratio of MEMS, forces produced by ambient electromagnetism (e.g., electrostatic charges and magnetic moments) and fluid dynamics (e.g., surface tension and viscosity) are more important design considerations than with larger-scale mechanical devices. MEMS technology is distinguished from molecular nanotechnology or molecular electronics in that the latter must also consider surface chemistry.
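The scaling argument behind this is easy to quantify: for a cube, the surface-area-to-volume ratio grows as the inverse of the edge length, so shrinking a part by three orders of magnitude boosts the relative weight of surface forces by the same factor. A quick illustration (Python):

```python
def surface_to_volume(side_m):
    """Surface-area-to-volume ratio of a cube of edge length side_m (metres).
    6*s^2 / s^3 = 6/s, in units of 1/m."""
    return 6.0 * side_m**2 / side_m**3

# A 10 um MEMS feature vs. a 10 mm macroscopic part: the ratio grows by
# a factor of 1000, which is why electrostatics and surface tension
# dominate over inertia and gravity at MEMS scale.
ratio_mems = surface_to_volume(10e-6)   # 600000 per metre
ratio_macro = surface_to_volume(10e-3)  # 600 per metre
```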

The potential of very small machines was appreciated before the technology existed that could make them (see, for example, Richard Feynman's famous 1959 lecture There's Plenty of Room at the Bottom). MEMS became practical once they could be fabricated using modified semiconductor device fabrication technologies, normally used to make electronics. These include molding and plating, wet etching (KOH, TMAH) and dry etching (RIE and DRIE), electro discharge machining (EDM), and other technologies capable of manufacturing small devices. An early example of a MEMS device is the resonistor, an electromechanical monolithic resonator patented by Raymond J. Wilfinger, and the resonant gate transistor developed by Harvey C. Nathanson.

Materials for MEMS manufacturing

The fabrication of MEMS evolved from the process technology in semiconductor device fabrication, i.e. the basic techniques are deposition of material layers, patterning by photolithography and etching to produce the required shapes.

Silicon

Silicon is the material used to create most integrated circuits used in consumer electronics in the modern industry. The economies of scale, ready availability of inexpensive high-quality materials, and ability to incorporate electronic functionality make silicon attractive for a wide variety of MEMS applications. Silicon also has significant advantages engendered through its material properties. In single crystal form, silicon is an almost perfect Hookean material, meaning that when it is flexed there is virtually no hysteresis and hence almost no energy dissipation. As well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking.

Polymers

Even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. Polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. MEMS devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges.

Metals

Metals can also be used to create MEMS elements. While metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. Metals can be deposited by electroplating, evaporation, and sputtering processes. Commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver.

Ceramics

The nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in MEMS fabrication due to advantageous combinations of material properties.  AlN crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. TiN, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic MEMS actuation schemes with ultrathin membranes. Moreover, the high resistance of TiN against biocorrosion qualifies the material for applications in biogenic environments and in biosensors.

MEMS basic processes

Deposition processes

One of the basic building blocks in MEMS processing is the ability to deposit thin films of material with a thickness anywhere from one micrometre to about 100 micrometres. The NEMS process is the same, although the measurement of film deposition ranges from a few nanometres to one micrometre. There are two types of deposition processes, as follows.

Physical deposition

Physical vapor deposition ("PVD") consists of a process in which a material is removed from a target, and deposited on a surface. Techniques to do this include the process of sputtering, in which an ion beam liberates atoms from a target, allowing them to move through the intervening space and deposit on the desired substrate, and evaporation, in which a material is evaporated from a target using either heat (thermal evaporation) or an electron beam (e-beam evaporation) in a vacuum system.

Chemical deposition

Chemical deposition techniques include chemical vapor deposition ("CVD"), in which a stream of source gas reacts on the substrate to grow the material desired. This can be further divided into categories depending on the details of the technique, for example, LPCVD (Low Pressure chemical vapor deposition) and PECVD (Plasma-enhanced chemical vapor deposition).

Oxide films can also be grown by the technique of thermal oxidation, in which the (typically silicon) wafer is exposed to oxygen and/or steam, to grow a thin surface layer of silicon dioxide.

Patterning

Patterning in MEMS is the transfer of a pattern into a material.

Lithography

Lithography in the MEMS context is typically the transfer of a pattern into a photosensitive material by selective exposure to a radiation source such as light. A photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. If a photosensitive material is selectively exposed to radiation (e.g. by masking some of the radiation) the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differ.

This exposed region can then be removed or treated, providing a mask for the underlying substrate. Photolithography is typically used with metal or other thin-film deposition, and with wet and dry etching. Sometimes photolithography is used to create structures without any kind of post-etching. One example is an SU8-based lens, where SU8 square blocks are generated and the photoresist is then melted to form a semi-sphere which acts as a lens.

Electron beam lithography

Electron beam lithography (often abbreviated as e-beam lithography) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film (called the resist), ("exposing" the resist) and of selectively removing either exposed or non-exposed regions of the resist ("developing"). The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. It was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures.
The primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. This form of maskless lithography has found wide usage in photomask-making used in photolithography, low-volume production of semiconductor components, and research & development.

The key limitation of electron beam lithography is throughput, i.e., the very long time it takes to expose an entire silicon wafer or glass substrate. A long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. Also, the turn-around time for reworking or re-design is lengthened unnecessarily if the pattern is not being changed the second time.

Ion beam lithography

It is known that focused-ion beam lithography has the capability of writing extremely fine lines (less than 50 nm line and space has been achieved) without proximity effect. However, because the writing field in ion-beam lithography is quite small, large area patterns must be created by stitching together the small fields.

Ion track technology

Ion track technology is a deep cutting tool with a resolution limit around 8 nm applicable to radiation-resistant minerals, glasses and polymers. It is capable of generating holes in thin films without any development process. Structural depth can be defined either by ion range or by material thickness. Aspect ratios up to several 10⁴ can be reached. The technique can shape and texture materials at a defined inclination angle. Random patterns, single-ion track structures and aimed patterns consisting of individual single tracks can be generated.

X-ray lithography

X-ray lithography is a process used in electronic industry to selectively remove parts of a thin film. It uses X-rays to transfer a geometric pattern from a mask to a light-sensitive chemical photoresist, or simply "resist", on the substrate. A series of chemical treatments then engraves the produced pattern into the material underneath the photoresist.

Diamond patterning

A simple way to carve or create patterns on the surface of nanodiamonds without damaging them could lead to new photonic devices.

Diamond patterning is a method of forming diamond MEMS. It is achieved by the lithographic application of diamond films to a substrate such as silicon. The patterns can be formed by selective deposition through a silicon dioxide mask, or by deposition followed by micromachining or focused ion beam milling.

Etching processes

There are two basic categories of etching processes: wet etching and dry etching. In the former, the material is dissolved when immersed in a chemical solution. In the latter, the material is sputtered or dissolved using reactive ions or a vapor phase etchant.

Wet etching

Wet chemical etching consists in selective removal of material by dipping a substrate into a solution that dissolves it. The chemical nature of this etching process provides a good selectivity, which means the etching rate of the target material is considerably higher than the mask material if selected carefully.
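Selectivity has a direct practical consequence: it sets the minimum mask thickness that will survive a given etch depth. A sketch (Python; the numbers are hypothetical, chosen only to illustrate the relation):

```python
def min_mask_thickness(etch_depth_um, selectivity):
    """Minimum mask thickness (same units as etch_depth_um) that survives
    an etch of the given depth, where
    selectivity = (etch rate of target) / (etch rate of mask)."""
    return etch_depth_um / selectivity

# Hypothetical example: etching 200 um deep through a mask material that
# etches 400x more slowly than the target requires at least 0.5 um of mask.
t = min_mask_thickness(200.0, 400.0)
```

In practice a safety margin is added on top of this minimum, since mask erosion is rarely perfectly uniform.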
Isotropic etching
Etching progresses at the same speed in all directions. Long and narrow holes in a mask will produce v-shaped grooves in the silicon. The surface of these grooves can be atomically smooth if the etch is carried out correctly, with dimensions and angles being extremely accurate.
Anisotropic etching
Some single crystal materials, such as silicon, will have different etching rates depending on the crystallographic orientation of the substrate. This is known as anisotropic etching and one of the most common examples is the etching of silicon in KOH (potassium hydroxide), where Si <111> planes etch approximately 100 times slower than other planes (crystallographic orientations). Therefore, etching a rectangular hole in a (100)-Si wafer results in a pyramid shaped etch pit with 54.7° walls, instead of a hole with curved sidewalls as with isotropic etching.
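The 54.7° sidewall angle fixes the geometry of such an etch pit: a square mask opening in a (100) wafer self-terminates in a pyramidal pit whose depth follows from tan 54.74° = √2. A sketch (Python):

```python
import math

def koh_pit_depth(opening_um):
    """Depth of a self-terminating pyramidal pit etched through a square
    mask opening of width opening_um in (100) silicon, where the {111}
    sidewalls meet the surface at 54.74 degrees (tan 54.74 deg = sqrt 2)."""
    return (opening_um / 2.0) * math.sqrt(2.0)

# A 100 um square opening terminates at a depth of about 70.7 um.
depth = koh_pit_depth(100.0)
```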
HF etching
Hydrofluoric acid is commonly used as an aqueous etchant for silicon dioxide (SiO2, also known as BOX for SOI), usually in 49% concentrated form, or as 5:1, 10:1 or 20:1 BOE (buffered oxide etchant) or BHF (buffered HF). It was first used in medieval times for glass etching, and was used in IC fabrication for patterning the gate oxide until that process step was replaced by RIE.

Hydrofluoric acid is considered one of the more dangerous acids in the cleanroom. It penetrates the skin upon contact and it diffuses straight to the bone. Therefore, the damage is not felt until it is too late.
Electrochemical etching
Electrochemical etching (ECE) for dopant-selective removal of silicon is a common method to automate and to selectively control etching. An active p-n diode junction is required, and either type of dopant can be the etch-resistant ("etch-stop") material. Boron is the most common etch-stop dopant. In combination with wet anisotropic etching as described above, ECE has been used successfully for controlling silicon diaphragm thickness in commercial piezoresistive silicon pressure sensors. Selectively doped regions can be created either by implantation, diffusion, or epitaxial deposition of silicon.

Dry etching

Vapor etching
Xenon difluoride
Xenon difluoride (XeF2) is a dry vapor-phase isotropic etch for silicon originally applied for MEMS in 1995 at the University of California, Los Angeles. Primarily used for releasing metal and dielectric structures by undercutting silicon, XeF2 has the advantage of a stiction-free release, unlike wet etchants. Its etch selectivity to silicon is very high, allowing it to work with photoresist, SiO2, silicon nitride, and various metals for masking. Its reaction with silicon is "plasmaless", purely chemical and spontaneous, and it is often operated in pulsed mode. Models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach.
Plasma etching
Modern VLSI processes avoid wet etching, and use plasma etching instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic.

Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching. The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching.

The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride. A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal.

Ion milling, or sputter etching, uses lower pressures, often as low as 10⁻⁴ Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar+, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10⁻³ and 10⁻¹ Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features.
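The pressure regimes quoted for plasma etching, ion milling and RIE are easier to compare once converted to SI units (1 Torr ≈ 133.3 Pa, as noted earlier). A small conversion sketch (Python):

```python
PA_PER_TORR = 133.322  # approximate conversion used in vacuum engineering

def torr_to_pa(torr):
    """Convert a pressure from Torr to pascals."""
    return torr * PA_PER_TORR

# The operating regimes quoted in the text:
plasma_low, plasma_high = torr_to_pa(0.1), torr_to_pa(5.0)  # ~13.3 Pa to ~667 Pa
ion_milling = torr_to_pa(1e-4)                              # ~0.013 Pa (13 mPa)
rie_low, rie_high = torr_to_pa(1e-3), torr_to_pa(1e-1)      # ~0.13 Pa to ~13.3 Pa
```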
Reactive ion etching (RIE)
In reactive-ion etching (RIE), the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture using an RF power source, which breaks the gas molecules into ions. The ions accelerate towards, and react with, the surface of the material being etched, forming another gaseous material. This is known as the chemical part of reactive ion etching. There is also a physical part, which is similar to the sputtering deposition process. If the ions have high enough energy, they can knock atoms out of the material to be etched without a chemical reaction. It is a very complex task to develop dry etch processes that balance chemical and physical etching, since there are many parameters to adjust. By changing the balance it is possible to influence the anisotropy of the etching: since the chemical part is isotropic and the physical part highly anisotropic, the combination can form sidewalls that have shapes from rounded to vertical.

Deep RIE (DRIE) is a special subclass of RIE that is growing in popularity. In this process, etch depths of hundreds of micrometres are achieved with almost vertical sidewalls. The primary technology is based on the so-called "Bosch process", named after the German company Robert Bosch, which filed the original patent, in which two different gas compositions alternate in the reactor. Currently there are two variations of DRIE. The first variation consists of three distinct steps (the original Bosch process), while the second consists of only two.

In the first variation, the etch cycle is as follows:

(i) SF6 isotropic etch;

(ii) C4F8 passivation;

(iii) SF6 anisotropic etch for floor cleaning.

In the second variation, steps (i) and (iii) are combined.

Both variations operate similarly. The C4F8 creates a polymer on the surface of the substrate, and the second gas composition (SF6 and O2) etches the substrate. The polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. Since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. As a result, etching aspect ratios of 50 to 1 can be achieved. The process can easily be used to etch completely through a silicon substrate, and etch rates are 3–6 times higher than wet etching.
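The cycle count and aspect-ratio limit described above lend themselves to simple back-of-the-envelope arithmetic. In the sketch below, the etch-per-cycle figure is an illustrative assumption, not process data:

```python
import math

# Back-of-the-envelope Bosch-process arithmetic. The etch-per-cycle and
# aspect-ratio figures are illustrative assumptions, not process data.
def cycles_needed(target_depth_um, etch_per_cycle_um=0.5):
    """Number of alternating SF6-etch / C4F8-passivation cycles to reach depth."""
    return math.ceil(target_depth_um / etch_per_cycle_um)

def max_depth_um(trench_width_um, aspect_ratio=50):
    """Deepest trench permitted by an aspect-ratio limit such as 50:1."""
    return trench_width_um * aspect_ratio

print(cycles_needed(200.0))  # 400
print(max_depth_um(4.0))     # 200.0
```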

Die preparation

After preparing a large number of MEMS devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. For some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. Wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing.

MEMS manufacturing technologies

Bulk micromachining

Bulk micromachining is the oldest paradigm of silicon-based MEMS. The whole thickness of a silicon wafer is used for building the micro-mechanical structures. Silicon is machined using various etching processes. Anodic bonding of glass plates or additional silicon wafers is used for adding features in the third dimension and for hermetic encapsulation. Bulk micromachining has been essential in enabling the high-performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s.

Surface micromachining

Surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. Surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining MEMS and integrated circuits on the same silicon wafer. The original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. Interdigital comb electrodes were used to produce in-plane forces and to detect in-plane movement capacitively. This MEMS paradigm has enabled the manufacturing of low-cost accelerometers for, e.g., automotive airbag systems and other applications where low performance and/or high g-ranges are sufficient. Analog Devices has pioneered the industrialization of surface micromachining and has realized the co-integration of MEMS and integrated circuits.

High aspect ratio (HAR) silicon micromachining

Both bulk and surface silicon micromachining are used in the industrial production of sensors, ink-jet nozzles, and other devices. But in many cases the distinction between these two has diminished. A new etching technology, deep reactive-ion etching, has made it possible to combine good performance typical of bulk micromachining with comb structures and in-plane operation typical of surface micromachining. While it is common in surface micromachining to have structural layer thickness in the range of 2 µm, in HAR silicon micromachining the thickness can be from 10 to 100 µm. The materials commonly used in HAR silicon micromachining are thick polycrystalline silicon, known as epi-poly, and bonded silicon-on-insulator (SOI) wafers, although processes for bulk silicon wafers have also been created (SCREAM). Bonding a second wafer by glass frit bonding, anodic bonding or alloy bonding is used to protect the MEMS structures. Integrated circuits are typically not combined with HAR silicon micromachining.

Microelectromechanical systems chip, sometimes called "lab on a chip"

Applications

A Texas Instruments DMD chip for cinema projection
Measuring mechanical properties of a gold stripe (width ~1 µm) using MEMS inside a transmission electron microscope.

Some common commercial applications of MEMS include:

Industry structure

The global market for micro-electromechanical systems, which includes products such as automobile airbag systems, display systems, and inkjet cartridges, totaled $40 billion in 2006, according to Global MEMS/Microsystems Markets and Opportunities, a research report from SEMI and Yole Developpement, and is forecast to reach $72 billion by 2011.

Companies with strong MEMS programs come in many sizes. Larger firms specialize in manufacturing high volume inexpensive components or packaged solutions for end markets such as automobiles, biomedical, and electronics. Smaller firms provide value in innovative solutions and absorb the expense of custom fabrication with high sales margins. Both large and small companies typically invest in R&D to explore new MEMS technology.

The market for materials and equipment used to manufacture MEMS devices topped $1 billion worldwide in 2006. Materials demand is driven by substrates, which make up over 70 percent of the market; packaging and coatings; and the increasing use of chemical mechanical planarization (CMP). While MEMS manufacturing continues to be dominated by used semiconductor equipment, there is a migration to 200 mm lines and select new tools, including etch and bonding for certain MEMS applications.

Adaptive optics


A deformable mirror can be used to correct wavefront errors in an astronomical telescope.
 
Illustration of a (simplified) adaptive optics system. The light first hits a tip–tilt (TT) mirror and then a deformable mirror (DM) which corrects the wavefront. Part of the light is tapped off by a beamsplitter (BS) to the wavefront sensor and the control hardware which sends updated signals to the DM and TT mirrors.
 
An artist's impression of adaptive optics.

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of incoming wavefront distortions, typically by deforming a mirror to compensate for them. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, and in microscopy, optical fabrication and retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors, such as a deformable mirror or a liquid crystal array.

Adaptive optics should not be confused with active optics, which works on a longer timescale to correct the primary mirror geometry.

Other methods can achieve resolving power exceeding the limit imposed by atmospheric distortion, such as speckle imaging, aperture synthesis, and lucky imaging, or by moving outside the atmosphere with space telescopes, such as the Hubble Space Telescope.

History

Adaptive thin shell mirror.

Adaptive optics was first envisioned by Horace W. Babcock in 1953, and was also considered in science fiction, as in Poul Anderson's novel Tau Zero (1970), but it did not come into common usage until advances in computer technology during the 1990s made the technique practical.

Some of the initial development work on adaptive optics was done by the US military during the Cold War and was intended for use in tracking Soviet satellites.

Microelectromechanical systems (MEMS) deformable mirrors and magnetics-concept deformable mirrors are currently the most widely used technology in wavefront-shaping applications for adaptive optics, given their versatility, stroke, technological maturity, and the high-resolution wavefront correction that they afford.

Tip–tilt correction

The simplest form of adaptive optics is tip–tilt correction, which corresponds to correction of the tilts of the wavefront in two dimensions (equivalent to correction of the position offsets for the image). This is performed using a rapidly moving tip–tilt mirror that makes small rotations around two of its axes. A significant fraction of the aberration introduced by the atmosphere can be removed in this way.

Tip–tilt mirrors are effectively segmented mirrors having only one segment which can tip and tilt, rather than an array of multiple segments that can tip and tilt independently. Because such mirrors are relatively simple and have a large stroke (i.e., large correcting power), most AO systems use them first to correct low-order aberrations; higher-order aberrations may then be corrected with deformable mirrors.

In astronomy

Astronomers at the Very Large Telescope site in Chile use adaptive optics.
 
Laser being launched into the night sky from the VLT Adaptive Optics Facility.

Atmospheric seeing

When light from a star passes through the Earth's atmosphere, the wavefront is perturbed.
 
The Shack-Hartmann sensor is one type of wavefront sensor used for adaptive optics.
 
Negative images of a star through a telescope. The left-hand panel shows the slow-motion movie of a star when the adaptive optics system is switched off. The right-hand panel shows the slow motion movie of the same star when the AO system is switched on.

When light from a star or another astronomical object enters the Earth's atmosphere, atmospheric turbulence (introduced, for example, by different temperature layers and different wind speeds interacting) can distort and move the image in various ways. Visual images produced by any telescope larger than approximately 20 centimeters are blurred by these distortions.

Wavefront sensing and correction

An adaptive optics system tries to correct these distortions, using a wavefront sensor which takes some of the astronomical light, a deformable mirror that lies in the optical path, and a computer that receives input from the detector. The wavefront sensor measures the distortions the atmosphere has introduced on the timescale of a few milliseconds; the computer calculates the optimal mirror shape to correct the distortions, and the surface of the deformable mirror is reshaped accordingly. For example, an 8–10 m telescope (like the VLT or Keck) can produce AO-corrected images with an angular resolution of 30–60 milliarcseconds (mas) at infrared wavelengths, while the resolution without correction is of the order of 1 arcsecond.
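The quoted 30–60 mas figure corresponds roughly to the diffraction limit λ/D of an 8–10 m aperture in the infrared. A quick check (the wavelength and aperture values are chosen for illustration):

```python
import math

# Diffraction-limited angular resolution lambda/D, converted from
# radians to milliarcseconds. Wavelength and aperture are illustrative
# (infrared H band, 8 m telescope).
def diffraction_limit_mas(wavelength_m, aperture_m):
    radians = wavelength_m / aperture_m
    return radians * (180.0 / math.pi) * 3600.0 * 1000.0

print(round(diffraction_limit_mas(1.65e-6, 8.0)))  # ~43 mas
```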

In order to perform adaptive optics correction, the shape of the incoming wavefronts must be measured as a function of position in the telescope aperture plane. Typically the circular telescope aperture is split up into an array of pixels in a wavefront sensor, either using an array of small lenslets (a Shack–Hartmann sensor), or using a curvature or pyramid sensor which operates on images of the telescope aperture. The mean wavefront perturbation in each pixel is calculated. This pixelated map of the wavefronts is fed into the deformable mirror and used to correct the wavefront errors introduced by the atmosphere. It is not necessary for the shape or size of the astronomical object to be known – even Solar System objects which are not point-like can be used in a Shack–Hartmann wavefront sensor, and time-varying structure on the surface of the Sun is commonly used for adaptive optics at solar telescopes. The deformable mirror corrects incoming light so that the images appear sharp.

Using guide stars

Natural guide stars

Because a science target is often too faint to be used as a reference star for measuring the shape of the optical wavefronts, a nearby brighter guide star can be used instead. The light from the science target has passed through approximately the same atmospheric turbulence as the reference star's light and so its image is also corrected, although generally to a lower accuracy.

A laser beam directed toward the centre of the Milky Way. This laser beam can then be used as a guide star for the AO.

The necessity of a reference star means that an adaptive optics system cannot work everywhere on the sky, but only where a guide star of sufficient luminosity (for current systems, about magnitude 12–15) can be found very near to the object of the observation. This severely limits the application of the technique for astronomical observations. Another major limitation is the small field of view over which the adaptive optics correction is good. As the angular distance from the guide star increases, the image quality degrades. A technique known as "multiconjugate adaptive optics" uses several deformable mirrors to achieve a greater field of view.

Artificial guide stars

An alternative is the use of a laser beam to generate a reference light source (a laser guide star, LGS) in the atmosphere. LGSs come in two flavors: Rayleigh guide stars and sodium guide stars. Rayleigh guide stars work by propagating a laser, usually at near-ultraviolet wavelengths, and detecting the backscatter from air at altitudes between 15 and 25 km. Sodium guide stars use laser light at 589 nm to excite sodium atoms higher in the mesosphere and thermosphere, which then appear to "glow". The LGS can then be used as a wavefront reference in the same way as a natural guide star, except that (much fainter) natural reference stars are still required for image position (tip/tilt) information. The lasers are often pulsed, with measurement of the atmosphere limited to a time window beginning on the order of a hundred microseconds after the pulse has been launched, corresponding to the round-trip light travel time to the backscattering altitude. This range gating allows the system to ignore most light scattered at ground level; only light which has travelled up to the target altitude and back is actually detected.

In retinal imaging

Artist's impression of the European Extremely Large Telescope deploying lasers for adaptive optics

Ocular aberrations are distortions in the wavefront passing through the pupil of the eye. These optical aberrations diminish the quality of the image formed on the retina, sometimes necessitating the wearing of spectacles or contact lenses. In the case of retinal imaging, light passing out of the eye carries similar wavefront distortions, leading to an inability to resolve the microscopic structure (cells and capillaries) of the retina. Spectacles and contact lenses correct "low-order aberrations", such as defocus and astigmatism, which tend to be stable in humans for long periods of time (months or years). While correction of these is sufficient for normal visual functioning, it is generally insufficient to achieve microscopic resolution. Additionally, "high-order aberrations", such as coma, spherical aberration, and trefoil, must also be corrected in order to achieve microscopic resolution. High-order aberrations, unlike low-order, are not stable over time, and may change over time scales of 0.1s to 0.01s. The correction of these aberrations requires continuous, high-frequency measurement and compensation.

Measurement of ocular aberrations

Ocular aberrations are generally measured using a wavefront sensor, and the most commonly used type of wavefront sensor is the Shack–Hartmann. Ocular aberrations are caused by spatial phase nonuniformities in the wavefront exiting the eye. In a Shack–Hartmann wavefront sensor, these are measured by placing a two-dimensional array of small lenses (lenslets) in a pupil plane conjugate to the eye's pupil, and a CCD chip at the back focal plane of the lenslets. The lenslets cause spots to be focused onto the CCD chip, and the positions of these spots are calculated using a centroiding algorithm. The positions of these spots are compared with the positions of reference spots, and the displacements between the two are used to determine the local slope of the wavefront, allowing one to numerically reconstruct the wavefront information: an estimate of the phase nonuniformities causing aberration.
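The centroiding-and-displacement step can be sketched as follows for a single lenslet; the pixel pitch and lenslet focal length are made-up numbers, and NumPy is used for the array arithmetic:

```python
import numpy as np

# Sketch of the Shack-Hartmann measurement for one lenslet: find the
# intensity-weighted centroid of the focused spot, then convert its
# displacement from the reference position into a local wavefront slope.
# The pixel pitch and lenslet focal length are made-up numbers.
def centroid(spot_image):
    """Intensity-weighted centre (row, col) of a 2-D spot image."""
    total = spot_image.sum()
    rows, cols = np.indices(spot_image.shape)
    return (float((rows * spot_image).sum() / total),
            float((cols * spot_image).sum() / total))

def local_slope(measured, reference, pixel_pitch_um=5.0, focal_length_um=5000.0):
    """Local wavefront slope (radians) from the spot displacement."""
    dy_um = (measured[0] - reference[0]) * pixel_pitch_um
    dx_um = (measured[1] - reference[1]) * pixel_pitch_um
    return dy_um / focal_length_um, dx_um / focal_length_um

spot = np.zeros((5, 5))
spot[2, 3] = 1.0  # spot displaced one pixel in x from the reference (2, 2)
print(centroid(spot))                           # (2.0, 3.0)
print(local_slope(centroid(spot), (2.0, 2.0)))  # (0.0, 0.001)
```

In a real sensor this is repeated for every lenslet, and the full grid of slopes is fed to a reconstruction algorithm.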

Correction of ocular aberrations

Once the local phase errors in the wavefront are known, they can be corrected by placing a phase modulator such as a deformable mirror at yet another plane in the system conjugate to the eye's pupil. The phase errors can be used to reconstruct the wavefront, which can then be used to control the deformable mirror. Alternatively, the local phase errors can be used directly to calculate the deformable mirror instructions.

Open loop vs. closed loop operation

If the wavefront error is measured before it has been corrected by the wavefront corrector, then operation is said to be "open loop". If the wavefront error is measured after it has been corrected by the wavefront corrector, then operation is said to be "closed loop". In the latter case, the measured wavefront errors will be small, and errors in the measurement and correction are more likely to be removed. Closed-loop correction is the norm.
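The closed-loop case can be illustrated with a toy scalar model in which the corrector command is built up by integrating the small residual error measured after each correction; the gain and step count are illustrative assumptions:

```python
# Toy scalar model of closed-loop correction: the corrector command is
# accumulated by integrating the small residual error measured *after*
# each correction. Gain and step count are illustrative assumptions.
def closed_loop(true_aberration, gain=0.5, steps=20):
    command = 0.0
    for _ in range(steps):
        residual = true_aberration - command  # sensor sees post-correction error
        command += gain * residual            # integrator update
    return command, true_aberration - command

command, residual = closed_loop(1.0)
print(abs(residual) < 1e-5)  # True: residual shrinks as (1 - gain)**steps
```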

Applications

Adaptive optics was first applied to flood-illumination retinal imaging to produce images of single cones in the living human eye. It has also been used in conjunction with scanning laser ophthalmoscopy to produce (also in living human eyes) the first images of retinal microvasculature and associated blood flow and retinal pigment epithelium cells in addition to single cones. Combined with optical coherence tomography, adaptive optics has allowed the first three-dimensional images of living cone photoreceptors to be collected.

In microscopy

In microscopy, adaptive optics is used to correct for sample-induced aberrations. The required wavefront correction is either measured directly using a wavefront sensor or estimated using sensorless AO techniques.

Other uses

GRAAL is a ground layer adaptive optics instrument assisted by lasers.

Besides its use for improving nighttime astronomical imaging and retinal imaging, adaptive optics technology has also been used in other settings. Adaptive optics is used for solar astronomy at observatories such as the Swedish 1-m Solar Telescope and Big Bear Solar Observatory. It is also expected to play a military role by allowing ground-based and airborne laser weapons to reach and destroy targets at a distance including satellites in orbit. The Missile Defense Agency Airborne Laser program is the principal example of this.

Adaptive optics has been used to enhance the performance of free space optical communication systems and to control the spatial output of optical fibers.

Medical applications include imaging of the retina, where it has been combined with optical coherence tomography. The development of the Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) has enabled correction of the aberrations of the wavefront reflected from the human retina, allowing diffraction-limited images of the human rods and cones to be taken. Development of an Adaptive Scanning Optical Microscope (ASOM) was announced by Thorlabs in April 2007. Adaptive and active optics are also being developed for use in glasses to achieve better than 20/20 vision, initially for military applications.

After propagation of a wavefront, parts of it may overlap leading to interference and preventing adaptive optics from correcting it. Propagation of a curved wavefront always leads to amplitude variation. This needs to be considered if a good beam profile is to be achieved in laser applications.

Adaptive optics, especially wavefront-coding spatial light modulators, are frequently used in optical trapping applications to multiplex and dynamically reconfigure laser foci that are used to micro-manipulate biological specimens.

Beam stabilization

A rather simple example is the stabilization of the position and direction of a laser beam between modules in a large free-space optical communication system. Fourier optics is used to control both direction and position. The actual beam is measured by photodiodes. This signal is fed into analog-to-digital converters, and a microcontroller runs a PID control algorithm. The controller drives digital-to-analog converters, which drive stepper motors attached to the mirror mounts.
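One axis of such a loop might be sketched as below; the gains, sample time, and toy "beam" response model are illustrative assumptions, not parameters of a real system:

```python
# One axis of the beam-stabilization loop described above: a PID
# controller turns the photodiode position error into a mirror command.
# Gains, sample time, and the toy beam model are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.01)
position = 1.0  # beam starts one unit off-centre on the diode
for _ in range(3000):
    position -= 0.01 * pid.update(position)  # mirror nudges the beam back
print(abs(position) < 1e-3)  # loop has re-centred the beam
```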

If the beam is to be centered onto four-quadrant diodes, no analog-to-digital converter is needed; operational amplifiers are sufficient.

Operator (computer programming)
