Saturday, October 10, 2020

Negative feedback

From Wikipedia, the free encyclopedia
 
A simple negative feedback system descriptive, for example, of some electronic amplifiers. The feedback is negative if the loop gain AB is negative.

Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances.

Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing can be very stable, accurate, and responsive.

Negative feedback is widely used in mechanical and electronic engineering, and also within living organisms, and can be seen in many other fields from chemistry and economics to physical systems such as the climate. General negative feedback systems are studied in control systems engineering.

Negative feedback loops also play an integral role in maintaining the atmospheric balance in various systems on Earth. One such feedback system is the interaction between solar radiation, cloud cover, and planet temperature.

Blood glucose levels are maintained at a constant level in the body by a negative feedback mechanism. When the blood glucose level is too high, the pancreas secretes insulin and when the level is too low, the pancreas then secretes glucagon. The flat line shown represents the homeostatic set point. The sinusoidal line represents the blood glucose level.

Examples

  • Mercury thermostats (circa 1600) using expansion and contraction of columns of mercury in response to temperature changes were used in negative feedback systems to control vents in furnaces, maintaining a steady internal temperature.
  • In the invisible hand of the market metaphor of economic theory (1776), reactions to price movements provide a feedback mechanism to match supply and demand.
  • In centrifugal governors (1788), negative feedback is used to maintain a near-constant speed of an engine, irrespective of the load or fuel-supply conditions.
  • In a steering engine (1866), power assistance is applied to the rudder with a feedback loop, to maintain the direction set by the steersman.
  • In servomechanisms, the speed or position of an output, as determined by a sensor, is compared to a set value, and any error is reduced by negative feedback to the input.
  • In audio amplifiers, negative feedback reduces distortion, minimises the effect of manufacturing variations in component parameters, and compensates for changes in characteristics due to temperature change.
  • In analog computing, feedback around operational amplifiers is used to generate mathematical functions such as addition, subtraction, integration, differentiation, logarithm, and antilog functions.
  • In a phase locked loop (1932) feedback is used to maintain a generated alternating waveform in a constant phase to a reference signal. In many implementations the generated waveform is the output, but when used as a demodulator in an FM radio receiver, the error feedback voltage serves as the demodulated output signal. If there is a frequency divider between the generated waveform and the phase comparator, the device acts as a frequency multiplier.
  • In organisms, feedback enables various measures (e.g. body temperature, or blood sugar level) to be maintained within a desired range by homeostatic processes.

History

Negative feedback as a control technique may be seen in the refinements of the water clock introduced by Ktesibios of Alexandria in the 3rd century BCE. Self-regulating mechanisms have existed since antiquity, and were used to maintain a constant level in the reservoirs of water clocks as early as 200 BCE.

The fly-ball governor is an early example of negative feedback.

Negative feedback was implemented in the 17th century. Cornelius Drebbel had built thermostatically controlled incubators and ovens in the early 1600s, and centrifugal governors were used to regulate the distance and pressure between millstones in windmills. James Watt patented a form of governor in 1788 to control the speed of his steam engine, and James Clerk Maxwell in 1868 described "component motions" associated with these governors that lead to a decrease in a disturbance or the amplitude of an oscillation.

The term "feedback" was well established by the 1920s, in reference to a means of boosting the gain of an electronic amplifier. Friis and Jensen described this action as "positive feedback" and made passing mention of a contrasting "negative feed-back action" in 1924. Harold Stephen Black came up with the idea of using negative feedback in electronic amplifiers in 1927, submitted a patent application in 1928, and detailed its use in his paper of 1934, where he defined negative feedback as a type of coupling that reduced the gain of the amplifier, in the process greatly increasing its stability and bandwidth.

Karl Küpfmüller published papers on a negative-feedback-based automatic gain control system and a feedback system stability criterion in 1928.

Nyquist and Bode built on Black's work to develop a theory of amplifier stability.

Early researchers in the area of cybernetics subsequently generalized the idea of negative feedback to cover any goal-seeking or purposeful behavior.

All purposeful behavior may be considered to require negative feed-back. If a goal is to be attained, some signals from the goal are necessary at some time to direct the behavior.

Cybernetics pioneer Norbert Wiener helped to formalize the concepts of feedback control, defining feedback in general as "the chain of the transmission and return of information", and negative feedback as the case when:

The information fed back to the control center tends to oppose the departure of the controlled from the controlling quantity...(p97)

While the view of feedback as any "circularity of action" helped to keep the theory simple and consistent, Ashby pointed out that, while it may clash with definitions that require a "materially evident" connection, "the exact definition of feedback is nowhere important". Ashby pointed out the limitations of the concept of "feedback":

The concept of 'feedback', so simple and natural in certain elementary cases, becomes artificial and of little use when the interconnections between the parts become more complex...Such complex systems cannot be treated as an interlaced set of more or less independent feedback circuits, but only as a whole. For understanding the general principles of dynamic systems, therefore, the concept of feedback is inadequate in itself. What is important is that complex systems, richly cross-connected internally, have complex behaviors, and that these behaviors can be goal-seeking in complex patterns.(p54)

To reduce confusion, later authors have suggested alternative terms such as degenerative, self-correcting, balancing, or discrepancy-reducing in place of "negative".

Overview

Feedback loops in the human body

In many physical and biological systems, qualitatively different influences can oppose each other. For example, in biochemistry, one set of chemicals drives the system in a given direction, whereas another set of chemicals drives it in an opposing direction. If one or both of these opposing influences are non-linear, equilibrium point(s) result.

In biology, this process (in general, biochemical) is often referred to as homeostasis; whereas in mechanics, the more common term is equilibrium.

In engineering, mathematics and the physical, and biological sciences, common terms for the points around which the system gravitates include: attractors, stable states, eigenstates/eigenfunctions, equilibrium points, and setpoints.

In control theory, negative refers to the sign of the multiplier in mathematical models for feedback. In delta notation, −Δoutput is added to or mixed into the input. In multivariate systems, vectors help to illustrate how several influences can both partially complement and partially oppose each other.

Some authors, in particular with respect to modelling business systems, use negative to refer to the reduction in difference between the desired and actual behavior of a system. In a psychology context, on the other hand, negative refers to the valence of the feedback – attractive versus aversive, or praise versus criticism.

In contrast, positive feedback is feedback in which the system responds so as to increase the magnitude of any particular perturbation, resulting in amplification of the original signal instead of stabilization. Any system in which there is positive feedback together with a gain greater than one will result in a runaway situation. Both positive and negative feedback require a feedback loop to operate.

However, negative feedback systems can still be subject to oscillations. This is caused by a phase shift around any loop. Due to these phase shifts the feedback signal of some frequencies can ultimately become in phase with the input signal and thus turn into positive feedback, creating a runaway condition. Even before the point where the phase shift becomes 180 degrees, stability of the negative feedback loop will become compromised, leading to increasing under- and overshoot following a disturbance. This problem is often dealt with by attenuating or changing the phase of the problematic frequencies in a design step called compensation. Unless the system naturally has sufficient damping, many negative feedback systems have low pass filters or dampers fitted.
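As a rough illustration of this effect, the sketch below (Python, with a hypothetical three-pole loop whose DC loop gain and pole frequency are invented for the example) sweeps the loop gain over frequency and reports the phase lag at the unity-gain crossover, i.e., the phase margin that a compensation step tries to protect.

```python
import numpy as np

# Illustrative loop-gain sweep for a negative feedback loop with three identical
# poles; each pole adds up to 90 degrees of phase lag, so the total lag can
# approach 180 degrees. All values are hypothetical.

A0 = 5.0                         # DC loop gain (beta * A)
f_pole = 1e4                     # Hz, location of each of the three poles
freqs = np.logspace(2, 7, 2000)  # 100 Hz to 10 MHz

loop = A0 / (1 + 1j * freqs / f_pole) ** 3
mag = np.abs(loop)
lag = -np.degrees(np.angle(loop))

# Crossover: first frequency where the loop-gain magnitude falls below 1.
idx = np.argmax(mag < 1.0)
print(f"crossover frequency ~ {freqs[idx]:.2e} Hz")
print(f"phase lag at crossover ~ {lag[idx]:.0f} deg, "
      f"phase margin ~ {180 - lag[idx]:.0f} deg")
# A larger loop gain, or additional poles, would drive the margin toward zero
# or below, i.e., toward the runaway condition described above.
```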

Some specific implementations

There are a large number of different examples of negative feedback and some are discussed below.

Error-controlled regulation

Basic error-controlled regulator loop
 
A regulator R adjusts the input to a system T so the monitored essential variables E are held to set-point values S that result in the desired system output despite disturbances D.

One use of feedback is to make a system (say T) self-regulating to minimize the effect of a disturbance (say D). Using a negative feedback loop, a measurement of some variable (for example, a process variable, say E) is subtracted from a required value (the 'set point') to estimate an operational error in system status, which is then used by a regulator (say R) to reduce the gap between the measurement and the required value. The regulator modifies the input to the system T according to its interpretation of the error in the status of the system. This error may be introduced by a variety of possible disturbances or 'upsets', some slow and some rapid. The regulation in such systems can range from a simple 'on-off' control to a more complex processing of the error signal.

It may be noted that the physical form of the signals in the system may change from point to point. So, for example, a change in weather may cause a disturbance to the heat input to a house (as an example of the system T) that is monitored by a thermometer as a change in temperature (as an example of an 'essential variable' E), converted by the thermostat (a 'comparator') into an electrical error in status compared to the 'set point' S, and subsequently used by the regulator (containing a 'controller' that commands gas control valves and an ignitor) ultimately to change the heat provided by a furnace (an 'effector') to counter the initial weather-related disturbance in heat input to the house.
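A minimal sketch of such an on-off regulator is given below; the temperatures, heat-leak rate, and furnace power are illustrative assumptions, not values from the text, but the loop structure (measure the essential variable, compare against the set point, actuate to oppose the disturbance) is the one described above.

```python
# Minimal sketch of on-off (thermostat-style) error-controlled regulation.
# All numbers are illustrative, not from the article.

set_point = 20.0          # desired indoor temperature, deg C
temperature = 20.0        # essential variable E, deg C
outdoor = 5.0             # cold weather acts as the disturbance D

for minute in range(180):
    # Disturbance: heat leaks out in proportion to the indoor/outdoor difference.
    temperature -= 0.02 * (temperature - outdoor)

    # Comparator: the regulator R switches the furnace (effector) on or off.
    furnace_on = temperature < set_point - 0.5   # 0.5 deg C hysteresis
    if furnace_on:
        temperature += 0.3                       # heat added by the furnace

    if minute % 30 == 0:
        print(f"t={minute:3d} min  T={temperature:5.2f} C  furnace={'on' if furnace_on else 'off'}")
```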

Error-controlled regulation is typically carried out using a proportional-integral-derivative controller (PID controller). The regulator signal is derived from a weighted sum of the error signal, integral of the error signal, and derivative of the error signal. The weights of the respective components depend on the application.

Mathematically, the regulator signal is given by:

u(t) = Kp · [ e(t) + (1/Ti) ∫ e(τ) dτ + Td · de(t)/dt ]

where

e(t) is the error signal and Kp is the proportional gain,
Ti is the integral time, and
Td is the derivative time.
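The sketch below implements this regulator law directly and drives a simple first-order plant with it; the gains Kp, Ti, Td, the plant model, and the disturbance are invented for illustration only.

```python
# Sketch of a PID regulator in the ideal (Kp, Ti, Td) form given above.
# Plant: a first-order process whose output drifts under a constant disturbance.
# All numbers are illustrative.

dt = 0.1                  # time step, s
Kp, Ti, Td = 2.0, 5.0, 0.5
set_point = 1.0

y = 0.0                   # process variable
integral = 0.0
prev_error = 0.0

for step in range(300):
    error = set_point - y
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * (error + integral / Ti + Td * derivative)   # regulator signal
    prev_error = error

    # Simple first-order plant with a constant disturbance of -0.2.
    y += dt * (-y + u - 0.2)

    if step % 50 == 0:
        print(f"t={step*dt:5.1f} s  y={y:6.3f}  u={u:6.3f}")
```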

Negative feedback amplifier

The negative feedback amplifier was invented by Harold Stephen Black at Bell Laboratories in 1927, and granted a patent in 1937 (US Patent 2,102,671 "a continuation of application Serial No. 298,155, filed August 8, 1928 ...").

"The patent is 52 pages long plus 35 pages of figures. The first 43 pages amount to a small treatise on feedback amplifiers!"

There are many advantages to feedback in amplifiers. In design, the type of feedback and amount of feedback are carefully selected to weigh and optimize these various benefits.

Advantages of negative voltage feedback in amplifiers are:

  1. It reduces non-linear distortion, i.e., it gives higher fidelity.
  2. It increases circuit stability: the gain remains stable despite variations in ambient temperature, frequency and signal amplitude.
  3. It increases bandwidth, i.e., the frequency response is improved.
  4. It is possible to modify the input and output impedances.
  5. Harmonic distortion and phase distortion are reduced.
  6. Amplitude distortion and frequency distortion are reduced.
  7. Noise is reduced considerably.
  8. An important advantage of negative feedback is that it stabilizes the gain.

Though negative feedback has many advantages, amplifiers with feedback can ring or oscillate (see the article on step response), and may even become unstable. Harry Nyquist of Bell Laboratories proposed the Nyquist stability criterion and the Nyquist plot that identify stable feedback systems, including amplifiers and control systems.

Negative feedback amplifier with external disturbance. The feedback is negative if βA >0.

The figure shows a simplified block diagram of a negative feedback amplifier.

The feedback sets the overall (closed-loop) amplifier gain at a value:

Afb = A / (1 + βA) ≈ 1/β

where the approximate value assumes βA >> 1. This expression shows that a gain greater than one requires β < 1. Because the approximate gain 1/β is independent of the open-loop gain A, the feedback is said to 'desensitize' the closed-loop gain to variations in A (for example, due to manufacturing variations between units, or temperature effects upon components), provided only that the gain A is sufficiently large. In this context, the factor (1+βA) is often called the 'desensitivity factor', and in the broader context of feedback effects that include other matters like electrical impedance and bandwidth, the 'improvement factor'.

If the disturbance D is included, the amplifier output becomes:

O = A·I / (1 + βA) + D / (1 + βA)

which shows that the feedback reduces the effect of the disturbance by the 'improvement factor' (1+βA). The disturbance D might arise from fluctuations in the amplifier output due to noise and nonlinearity (distortion) within this amplifier, or from other noise sources such as power supplies.

The difference signal I − βO at the amplifier input is sometimes called the "error signal". According to the diagram, the error signal is:

I − βO = I / (1 + βA) − βD / (1 + βA)

From this expression, it can be seen that a large 'improvement factor' (or a large loop gain βA) tends to keep this error signal small.
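These relations can be checked numerically. The snippet below uses illustrative values of A, β, I and D (not taken from the article) and evaluates the improvement factor, the closed-loop gain, the output with a disturbance, and the error signal.

```python
# Numerical check of the closed-loop gain, disturbance rejection, and error
# signal of an idealized negative feedback amplifier. All values illustrative.

A = 10_000.0      # open-loop gain
beta = 0.01       # feedback fraction
I = 1.0e-3        # input signal
D = 0.5           # disturbance referred to the output

improvement = 1 + beta * A
closed_loop_gain = A / improvement          # ~ 1/beta when beta*A >> 1
output = closed_loop_gain * I + D / improvement
error = I / improvement - beta * D / improvement

print(f"improvement factor (1 + beta*A): {improvement:.1f}")
print(f"closed-loop gain A/(1+beta*A):  {closed_loop_gain:.3f}  (1/beta = {1/beta:.3f})")
print(f"output with disturbance:        {output:.5f}")
print(f"error signal I - beta*O:        {error:.6e}")
```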

Although the diagram illustrates the principles of the negative feedback amplifier, modeling a real amplifier as a unilateral forward amplification block and a unilateral feedback block has significant limitations. For methods of analysis that do not make these idealizations, see the article Negative feedback amplifier.

Operational amplifier circuits

A feedback voltage amplifier using an op amp with finite gain but infinite input impedances and zero output impedance.

The operational amplifier was originally developed as a building block for the construction of analog computers, but is now used almost universally in all kinds of applications including audio equipment and control systems.

Operational amplifier circuits typically employ negative feedback to get a predictable transfer function. Since the open-loop gain of an op-amp is extremely large, a small differential input signal would drive the output of the amplifier to one rail or the other in the absence of negative feedback. A simple example of the use of feedback is the op-amp voltage amplifier shown in the figure.

The idealized model of an operational amplifier assumes that the gain is infinite, the input impedance is infinite, the output resistance is zero, and the input offset currents and voltages are zero. Such an ideal amplifier draws no current from the resistor divider. Ignoring dynamics (transient effects and propagation delay), the infinite gain of the ideal op-amp means this feedback circuit drives the voltage difference between the two op-amp inputs to zero. Consequently, the voltage gain of the circuit in the diagram, assuming an ideal op amp, is the reciprocal of the feedback voltage division ratio β:

Vout / Vin = 1/β.

A real op-amp has a high but finite gain A at low frequencies, decreasing gradually at higher frequencies. In addition, it exhibits a finite input impedance and a non-zero output impedance. Although practical op-amps are not ideal, the model of an ideal op-amp often suffices to understand circuit operation at low enough frequencies. As discussed in the previous section, the feedback circuit stabilizes the closed-loop gain and desensitizes the output to fluctuations generated inside the amplifier itself.
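As a small illustration of this desensitization, the sketch below assumes a hypothetical feedback divider (resistor values invented for the example) and compares the ideal gain 1/β with the finite-gain result A/(1 + βA) for several values of the open-loop gain A.

```python
# Non-inverting op-amp voltage amplifier: feedback divider beta = R2/(R1 + R2).
# Compares the ideal gain 1/beta with the finite-gain result A/(1 + beta*A).
# Resistor and gain values are illustrative.

R1 = 9_000.0   # ohms, from output to inverting input
R2 = 1_000.0   # ohms, from inverting input to ground
beta = R2 / (R1 + R2)          # feedback voltage division ratio
ideal_gain = 1 / beta          # = 10 for these values

for A in (1e3, 1e5, 1e7):      # open-loop gains at decreasing frequency
    closed_loop = A / (1 + beta * A)
    error_pct = 100 * (ideal_gain - closed_loop) / ideal_gain
    print(f"A = {A:8.0e}  closed-loop gain = {closed_loop:8.4f}  "
          f"deviation from 1/beta = {error_pct:.3f}%")
```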

Mechanical engineering

The ballcock or float valve uses negative feedback to control the water level in a cistern.

An example of the use of negative feedback control is the ballcock control of water level (see diagram), or a pressure regulator. In modern engineering, negative feedback loops are found in engine governors, fuel injection systems and carburettors. Similar control mechanisms are used in heating and cooling systems, such as those involving air conditioners, refrigerators, or freezers.

Biology

Control of endocrine hormones by negative feedback.

Some biological systems exhibit negative feedback such as the baroreflex in blood pressure regulation and erythropoiesis. Many biological processes (e.g., in the human anatomy) use negative feedback. Examples of this are numerous, from the regulating of body temperature, to the regulating of blood glucose levels. The disruption of feedback loops can lead to undesirable results: in the case of blood glucose levels, if negative feedback fails, the glucose levels in the blood may begin to rise dramatically, thus resulting in diabetes.

For hormone secretion regulated by the negative feedback loop: when gland X releases hormone X, this stimulates target cells to release hormone Y. When there is an excess of hormone Y, gland X "senses" this and inhibits its release of hormone X. As shown in the figure, most endocrine hormones are controlled by a physiologic negative feedback inhibition loop, such as the glucocorticoids secreted by the adrenal cortex. The hypothalamus secretes corticotropin-releasing hormone (CRH), which directs the anterior pituitary gland to secrete adrenocorticotropic hormone (ACTH). In turn, ACTH directs the adrenal cortex to secrete glucocorticoids, such as cortisol. Glucocorticoids not only perform their respective functions throughout the body but also negatively affect the release of further stimulating secretions of both the hypothalamus and the pituitary gland, effectively reducing the output of glucocorticoids once a sufficient amount has been released.

Chemistry

Closed systems containing substances undergoing a reversible chemical reaction can also exhibit negative feedback in accordance with Le Chatelier's principle, which shifts the chemical equilibrium to the opposite side of the reaction in order to reduce a stress. For example, in the reaction

N2 + 3 H2 ⇌ 2 NH3 + 92 kJ/mol

If a mixture of the reactants and products exists at equilibrium in a sealed container and nitrogen gas is added to this system, then the equilibrium will shift toward the product side in response. If the temperature is raised, then the equilibrium will shift toward the reactant side which, because the reverse reaction is endothermic, will partially reduce the temperature.

Self-organization

Self-organization is the capability of certain systems "of organizing their own behavior or structure". There are many possible factors contributing to this capacity, and most often positive feedback is identified as a possible contributor. However, negative feedback also can play a role.

Economics

In economics, automatic stabilisers are government programs that are intended to work as negative feedback to dampen fluctuations in real GDP.

Mainstream economics asserts that the market pricing mechanism operates to match supply and demand, because mismatches between them feed back into the decision-making of suppliers and demanders of goods, altering prices and thereby reducing any discrepancy. However Norbert Wiener wrote in 1948:

"There is a belief current in many countries and elevated to the rank of an official article of faith in the United States that free competition is itself a homeostatic process... Unfortunately the evidence, such as it is, is against this simple-minded theory."

The notion of economic equilibrium being maintained in this fashion by market forces has also been questioned by numerous heterodox economists such as financier George Soros and leading ecological economist and steady-state theorist Herman Daly, who was with the World Bank in 1988–1994.

Environmental Applications

A basic and common example of a negative feedback system in the environment is the interaction among cloud cover, plant growth, solar radiation, and planet temperature. As incoming solar radiation increases, planet temperature increases. As the temperature increases, the amount of plant life that can grow increases. This plant life can then make products such as sulfur which produce more cloud cover. An increase in cloud cover leads to higher albedo, or surface reflectivity, of the Earth. As albedo increases, however, the amount of solar radiation decreases. This, in turn, affects the rest of the cycle.

Cloud cover, and in turn planet albedo and temperature, is also influenced by the hydrological cycle. As planet temperature increases, more water vapor is produced, creating more clouds. The clouds then block incoming solar radiation, lowering the temperature of the planet. The lower temperature in turn produces less water vapor and therefore less cloud cover, and the cycle repeats. In this way, negative feedback loops in the environment have a stabilizing effect.

Finite-difference time-domain method

From Wikipedia, the free encyclopedia
In the finite-difference time-domain method, the "Yee lattice" is used to discretize Maxwell's equations in space. This scheme involves the placement of electric and magnetic fields on a staggered grid.

Finite-difference time-domain (FDTD) or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics (finding approximate solutions to the associated system of differential equations). Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.

The FDTD method belongs in the general class of grid-based differential numerical modeling methods (finite difference methods). The time-dependent Maxwell's equations (in partial differential form) are discretized using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behavior is fully evolved.

History

Finite difference schemes for time-dependent partial differential equations (PDEs) have been employed for many years in computational fluid dynamics problems, including the idea of using centered finite difference operators on staggered grids in space and time to achieve second-order accuracy. The novelty of Kane Yee's FDTD scheme, presented in his seminal 1966 paper, was to apply centered finite difference operators on staggered grids in space and time for each electric and magnetic vector field component in Maxwell's curl equations. The descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym were originated by Allen Taflove in 1980. Since about 1990, FDTD techniques have emerged as primary means to computationally model many scientific and engineering problems dealing with electromagnetic wave interactions with material structures. Current FDTD modeling applications range from near-DC (ultralow-frequency geophysics involving the entire Earth-ionosphere waveguide) through microwaves (radar signature technology, antennas, wireless communications devices, digital interconnects, biomedical imaging/treatment) to visible light (photonic crystals, nanoplasmonics, solitons, and biophotonics). In 2006, an estimated 2,000 FDTD-related publications appeared in the science and engineering literature (see Popularity). As of 2013, there are at least 25 commercial/proprietary FDTD software vendors; 13 free-software/open-source-software FDTD projects; and 2 freeware/closed-source FDTD projects, some not for commercial use (see External links).

Development of FDTD and Maxwell's equations

An appreciation of the basis, technical development, and possible future of FDTD numerical techniques for Maxwell's equations can be developed by first considering their history. The following lists some of the key publications in this area.

Partial chronology of FDTD techniques and applications for Maxwell's equations.
year event
1928 Courant, Friedrichs, and Lewy (CFL) publish seminal paper with the discovery of conditional stability of explicit time-dependent finite difference schemes, as well as the classic FD scheme for solving the second-order wave equation in 1-D and 2-D.
1950 First appearance of von Neumann's method of stability analysis for implicit/explicit time-dependent finite difference methods.
1966 Yee described the FDTD numerical technique for solving Maxwell's curl equations on grids staggered in space and time.
1969 Lam reported the correct numerical CFL stability condition for Yee's algorithm by employing von Neumann stability analysis.
1975 Taflove and Brodwin reported the first sinusoidal steady-state FDTD solutions of two- and three-dimensional electromagnetic wave interactions with material structures; and the first bioelectromagnetics models.
1977 Holland and Kunz & Lee applied Yee's algorithm to EMP problems.
1980 Taflove coined the FDTD acronym and published the first validated FDTD models of sinusoidal steady-state electromagnetic wave penetration into a three-dimensional metal cavity.
1981 Mur published the first numerically stable, second-order accurate, absorbing boundary condition (ABC) for Yee's grid.
1982–83 Taflove and Umashankar developed the first FDTD electromagnetic wave scattering models computing sinusoidal steady-state near-fields, far-fields, and radar cross-section for two- and three-dimensional structures.
1984 Liao et al reported an improved ABC based upon space-time extrapolation of the field adjacent to the outer grid boundary.
1985 Gwarek introduced the lumped equivalent circuit formulation of FDTD.
1986 Choi and Hoefer published the first FDTD simulation of waveguide structures.
1987–88 Kriegsmann et al and Moore et al published the first articles on ABC theory in IEEE Transactions on Antennas and Propagation.
1987–88, 1992 Contour-path subcell techniques were introduced by Umashankar et al to permit FDTD modeling of thin wires and wire bundles, by Taflove et al to model penetration through cracks in conducting screens, and by Jurgens et al to conformally model the surface of a smoothly curved scatterer.
1988 Sullivan et al published the first 3-D FDTD model of sinusoidal steady-state electromagnetic wave absorption by a complete human body.
1988 FDTD modeling of microstrips was introduced by Zhang et al.
1990–91 FDTD modeling of frequency-dependent dielectric permittivity was introduced by Kashiwa and Fukai, Luebbers et al, and Joseph et al.
1990–91 FDTD modeling of antennas was introduced by Maloney et al, Katz et al, and Tirkas, and Balanis.
1990 FDTD modeling of picosecond optoelectronic switches was introduced by Sano and Shibata, and El-Ghazaly et al.
1992–94 FDTD modeling of the propagation of optical pulses in nonlinear dispersive media was introduced, including the first temporal solitons in one dimension by Goorjian and Taflove; beam self-focusing by Ziolkowski and Judkins; the first temporal solitons in two dimensions by Joseph et al; and the first spatial solitons in two dimensions by Joseph and Taflove.
1992 FDTD modeling of lumped electronic circuit elements was introduced by Sui et al.
1993 Toland et al published the first FDTD models of gain devices (tunnel diodes and Gunn diodes) exciting cavities and antennas.
1993 Aoyagi et al present a hybrid Yee algorithm/scalar-wave equation and demonstrate equivalence of Yee scheme to finite difference scheme for electromagnetic wave equation.
1994 Thomas et al introduced a Norton's equivalent circuit for the FDTD space lattice, which permits the SPICE circuit analysis tool to implement accurate subgrid models of nonlinear electronic components or complete circuits embedded within the lattice.
1994 Berenger introduced the highly effective, perfectly matched layer (PML) ABC for two-dimensional FDTD grids, which was extended to non-orthogonal meshes by Navarro et al, and three dimensions by Katz et al, and to dispersive waveguide terminations by Reuter et al.
1994 Chew and Weedon introduced the coordinate stretching PML that is easily extended to three dimensions, other coordinate systems and other physical equations.
1995–96 Sacks et al and Gedney introduced a physically realizable, uniaxial perfectly matched layer (UPML) ABC.
1997 Liu introduced the pseudospectral time-domain (PSTD) method, which permits extremely coarse spatial sampling of the electromagnetic field at the Nyquist limit.
1997 Ramahi introduced the complementary operators method (COM) to implement highly effective analytical ABCs.
1998 Maloney and Kesler introduced several novel means to analyze periodic structures in the FDTD space lattice.
1998 Nagra and York introduced a hybrid FDTD-quantum mechanics model of electromagnetic wave interactions with materials having electrons transitioning between multiple energy levels.
1998 Hagness et al introduced FDTD modeling of the detection of breast cancer using ultrawideband radar techniques.
1999 Schneider and Wagner introduced a comprehensive analysis of FDTD grid dispersion based upon complex wavenumbers.
2000–01 Zheng, Chen, and Zhang introduced the first three-dimensional alternating-direction implicit (ADI) FDTD algorithm with provable unconditional numerical stability.
2000 Roden and Gedney introduced the advanced convolutional PML (CPML) ABC.
2000 Rylander and Bondeson introduced a provably stable FDTD - finite-element time-domain hybrid technique.
2002 Hayakawa et al and Simpson and Taflove independently introduced FDTD modeling of the global Earth-ionosphere waveguide for extremely low-frequency geophysical phenomena.
2003 DeRaedt introduced the unconditionally stable, “one-step” FDTD technique.
2004 Soriano and Navarro derived the stability condition for Quantum FDTD technique.
2008 Ahmed, Chua, Li and Chen introduced the three-dimensional locally one-dimensional (LOD) FDTD method and proved unconditional numerical stability.
2008 Taniguchi, Baba, Nagaoka and Ametani introduced a thin wire representation for FDTD computations in conductive media.
2009 Oliveira and Sobrinho applied the FDTD method for simulating lightning strokes in a power substation.
2010 Chaudhury and Boeuf demonstrated the numerical procedure to couple FDTD and plasma fluid model for studying microwave-plasma interaction.
2012 Moxley et al developed a generalized finite-difference time-domain quantum method for the N-body interacting Hamiltonian.
2013 Moxley et al developed a generalized finite-difference time-domain scheme for solving nonlinear Schrödinger equations.
2014 Moxley et al developed an implicit generalized finite-difference time-domain scheme for solving nonlinear Schrödinger equations.

FDTD models and methods

When Maxwell's differential equations are examined, it can be seen that the change in the E-field in time (the time derivative) is dependent on the change in the H-field across space (the curl). This results in the basic FDTD time-stepping relation that, at any point in space, the updated value of the E-field in time is dependent on the stored value of the E-field and the numerical curl of the local distribution of the H-field in space.

The H-field is time-stepped in a similar manner. At any point in space, the updated value of the H-field in time is dependent on the stored value of the H-field and the numerical curl of the local distribution of the E-field in space. Iterating the E-field and H-field updates results in a marching-in-time process wherein sampled-data analogs of the continuous electromagnetic waves under consideration propagate in a numerical grid stored in the computer memory.

Illustration of a standard Cartesian Yee cell used for FDTD, about which electric and magnetic field vector components are distributed. Visualized as a cubic voxel, the electric field components form the edges of the cube, and the magnetic field components form the normals to the faces of the cube. A three-dimensional space lattice consists of a multiplicity of such Yee cells. An electromagnetic wave interaction structure is mapped into the space lattice by assigning appropriate values of permittivity to each electric field component, and permeability to each magnetic field component.

This description holds true for 1-D, 2-D, and 3-D FDTD techniques. When multiple dimensions are considered, calculating the numerical curl can become complicated. Kane Yee's seminal 1966 paper proposed spatially staggering the vector components of the E-field and H-field about rectangular unit cells of a Cartesian computational grid so that each E-field vector component is located midway between a pair of H-field vector components, and conversely. This scheme, now known as a Yee lattice, has proven to be very robust, and remains at the core of many current FDTD software constructs.

Furthermore, Yee proposed a leapfrog scheme for marching in time wherein the E-field and H-field updates are staggered so that E-field updates are conducted midway during each time-step between successive H-field updates, and conversely. On the plus side, this explicit time-stepping scheme avoids the need to solve simultaneous equations, and furthermore yields dissipation-free numerical wave propagation. On the minus side, this scheme mandates an upper bound on the time-step to ensure numerical stability. As a result, certain classes of simulations can require many thousands of time-steps for completion.
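A minimal one-dimensional sketch of this staggered leapfrog update is given below. It uses normalized units, hard reflecting boundaries at the grid ends, and a Courant factor below the one-dimensional stability limit; the grid size, source shape, and step count are illustrative assumptions rather than values from Yee's paper.

```python
import numpy as np

# Minimal 1-D FDTD (Yee leapfrog) sketch in normalized units (c = 1, dz = 1).
# Ez and Hy are staggered in space and updated alternately in time.
# All sizes and source parameters are illustrative.

nz, nt = 200, 400
courant = 0.5                      # dt/dz, kept below the 1-D stability limit of 1
ez = np.zeros(nz)
hy = np.zeros(nz - 1)

for n in range(nt):
    # Update H from the spatial difference (curl) of E.
    hy += courant * (ez[1:] - ez[:-1])
    # Update interior E from the spatial difference (curl) of H.
    ez[1:-1] += courant * (hy[1:] - hy[:-1])
    # Soft Gaussian-pulse source injected at the center of the grid.
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())
```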

Using the FDTD method

To implement an FDTD solution of Maxwell's equations, a computational domain must first be established. The computational domain is simply the physical region over which the simulation will be performed. The E and H fields are determined at every point in space within that computational domain. The material of each cell within the computational domain must be specified. Typically, the material is either free-space (air), metal, or dielectric. Any material can be used as long as the permeability, permittivity, and conductivity are specified.

The permittivity of dispersive materials in tabular form cannot be directly substituted into the FDTD scheme. Instead, it can be approximated using multiple Debye, Drude, Lorentz or critical point terms. This approximation can be obtained using open fitting programs and does not necessarily have physical meaning.
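As a sketch of what such a fitted model looks like once it is in a form an FDTD update can use, the snippet below evaluates a single-pole Debye permittivity over a few frequencies; the pole parameters are invented for illustration and, as noted above, need not have physical meaning.

```python
import numpy as np

# Single-pole Debye model of a dispersive relative permittivity,
# eps(omega) = eps_inf + delta_eps / (1 + j*omega*tau).
# The parameters below are invented for illustration only.

eps_inf = 2.0       # high-frequency permittivity
delta_eps = 30.0    # pole strength
tau = 10e-12        # relaxation time, s

freqs = np.array([1e8, 1e9, 1e10, 1e11])   # Hz
omega = 2 * np.pi * freqs
eps = eps_inf + delta_eps / (1 + 1j * omega * tau)

for f, e in zip(freqs, eps):
    print(f"f = {f:.0e} Hz   eps' = {e.real:6.2f}   eps'' = {-e.imag:6.3f}")
```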

Once the computational domain and the grid materials are established, a source is specified. The source can be current on a wire, an applied electric field or an impinging plane wave. In the last case FDTD can be used to simulate light scattering from arbitrarily shaped objects, planar periodic structures at various incident angles, and the photonic band structure of infinite periodic structures.

Since the E and H fields are determined directly, the output of the simulation is usually the E or H field at a point or a series of points within the computational domain. The simulation evolves the E and H fields forward in time.

Processing may be done on the E and H fields returned by the simulation. Data processing may also occur while the simulation is ongoing.

While the FDTD technique computes electromagnetic fields within a compact spatial region, scattered and/or radiated far fields can be obtained via near-to-far-field transformations.

Strengths of FDTD modeling

Every modeling technique has strengths and weaknesses, and the FDTD method is no different.

  • FDTD is a versatile modeling technique used to solve Maxwell's equations. It is intuitive, so users can easily understand how to use it and know what to expect from a given model.
  • FDTD is a time-domain technique, and when a broadband pulse (such as a Gaussian pulse) is used as the source, then the response of the system over a wide range of frequencies can be obtained with a single simulation. This is useful in applications where resonant frequencies are not exactly known, or anytime that a broadband result is desired.
  • Since FDTD calculates the E and H fields everywhere in the computational domain as they evolve in time, it lends itself to providing animated displays of the electromagnetic field movement through the model. This type of display is useful in understanding what is going on in the model, and to help ensure that the model is working correctly.
  • The FDTD technique allows the user to specify the material at all points within the computational domain. A wide variety of linear and nonlinear dielectric and magnetic materials can be naturally and easily modeled.
  • FDTD allows the effects of apertures to be determined directly. Shielding effects can be found, and the fields both inside and outside a structure can be found directly or indirectly.
  • FDTD uses the E and H fields directly. Since most EMI/EMC modeling applications are interested in the E and H fields, it is convenient that no conversions must be made after the simulation has run to get these values.

Weaknesses of FDTD modeling

  • Since FDTD requires that the entire computational domain be gridded, and the grid spatial discretization must be sufficiently fine to resolve both the smallest electromagnetic wavelength and the smallest geometrical feature in the model, very large computational domains can result, which leads to very long solution times. Models with long, thin features (like wires) are difficult to model in FDTD because of the excessively large computational domain required. Methods such as eigenmode expansion can offer a more efficient alternative as they do not require a fine grid along the z-direction.
  • There is no way to determine unique values for permittivity and permeability at a material interface.
  • Space and time steps must satisfy the CFL condition, or the leapfrog integration used to solve the partial differential equation is likely to become unstable.
  • FDTD finds the E/H fields directly everywhere in the computational domain. If the field values at some distance are desired, it is likely that this distance will force the computational domain to be excessively large. Far-field extensions are available for FDTD, but require some amount of postprocessing.
  • Since FDTD simulations calculate the E and H fields at all points within the computational domain, the computational domain must be finite to permit its residence in the computer memory. In many cases this is achieved by inserting artificial boundaries into the simulation space. Care must be taken to minimize errors introduced by such boundaries. There are a number of available highly effective absorbing boundary conditions (ABCs) to simulate an infinite unbounded computational domain. Most modern FDTD implementations instead use a special absorbing "material", called a perfectly matched layer (PML) to implement absorbing boundaries.
  • Because FDTD is solved by propagating the fields forward in the time domain, the electromagnetic time response of the medium must be modeled explicitly. For an arbitrary response, this involves a computationally expensive time convolution, although in most cases the time response of the medium (or Dispersion (optics)) can be adequately and simply modeled using either the recursive convolution (RC) technique, the auxiliary differential equation (ADE) technique, or the Z-transform technique. An alternative way of solving Maxwell's equations that can treat arbitrary dispersion easily is the pseudo-spectral spatial domain (PSSD), which instead propagates the fields forward in space.

Grid truncation techniques

The most commonly used grid truncation techniques for open-region FDTD modeling problems are the Mur absorbing boundary condition (ABC), the Liao ABC, and various perfectly matched layer (PML) formulations. The Mur and Liao techniques are simpler than PML. However, PML (which is technically an absorbing region rather than a boundary condition per se) can provide orders-of-magnitude lower reflections. The PML concept was introduced by J.-P. Berenger in a seminal 1994 paper in the Journal of Computational Physics. Since 1994, Berenger's original split-field implementation has been modified and extended to the uniaxial PML (UPML), the convolutional PML (CPML), and the higher-order PML. The latter two PML formulations have increased ability to absorb evanescent waves, and therefore can in principle be placed closer to a simulated scattering or radiating structure than Berenger's original formulation.

To reduce undesired numerical reflection from the PML, an additional technique of back absorbing layers can be used.

Popularity

Notwithstanding both the general increase in academic publication throughput during the same period and the overall expansion of interest in all Computational electromagnetics (CEM) techniques, there are seven primary reasons for the tremendous expansion of interest in FDTD computational solution approaches for Maxwell's equations:

  1. FDTD does not require a matrix inversion. Being a fully explicit computation, FDTD avoids the difficulties with matrix inversions that limit the size of frequency-domain integral-equation and finite-element electromagnetics models to generally fewer than 10⁹ electromagnetic field unknowns. FDTD models with as many as 10⁹ field unknowns have been run; there is no intrinsic upper bound to this number.
  2. FDTD is accurate and robust. The sources of error in FDTD calculations are well understood, and can be bounded to permit accurate models for a very large variety of electromagnetic wave interaction problems.
  3. FDTD treats impulsive behavior naturally. Being a time-domain technique, FDTD directly calculates the impulse response of an electromagnetic system. Therefore, a single FDTD simulation can provide either ultrawideband temporal waveforms or the sinusoidal steady-state response at any frequency within the excitation spectrum.
  4. FDTD treats nonlinear behavior naturally. Being a time-domain technique, FDTD directly calculates the nonlinear response of an electromagnetic system. This allows natural hybridizing of FDTD with sets of auxiliary differential equations that describe nonlinearities from either the classical or semi-classical standpoint. One research frontier is the development of hybrid algorithms which join FDTD classical electrodynamics models with phenomena arising from quantum electrodynamics, especially vacuum fluctuations, such as the Casimir effect.
  5. FDTD is a systematic approach. With FDTD, specifying a new structure to be modeled is reduced to a problem of mesh generation rather than the potentially complex reformulation of an integral equation. For example, FDTD requires no calculation of structure-dependent Green functions.
  6. Parallel-processing computer architectures have come to dominate supercomputing. FDTD scales with high efficiency on parallel-processing CPU-based computers, and extremely well on recently developed GPU-based accelerator technology.
  7. Computer visualization capabilities are increasing rapidly. While this trend positively influences all numerical techniques, it is of particular advantage to FDTD methods, which generate time-marched arrays of field quantities suitable for use in color videos to illustrate the field dynamics.

Taflove has argued that these factors combine to suggest that FDTD will remain one of the dominant computational electrodynamics techniques (as well as potentially other multiphysics problems).

Implementations

There are hundreds of simulation tools (e.g. OmniSim, XFdtd, Lumerical, CST Studio Suite, OptiFDTD etc.) that implement FDTD algorithms, many optimized to run on parallel-processing clusters.

Frederick Moxley suggests further applications with computational quantum mechanics and simulations.

Rayleigh scattering

From Wikipedia, the free encyclopedia
 
 
Rayleigh scattering causes the blue color of the daytime sky and the reddening of the Sun at sunset.

Rayleigh scattering (/ˈreɪli/ RAY-lee), named after the nineteenth-century British physicist Lord Rayleigh (John William Strutt), is the predominantly elastic scattering of light or other electromagnetic radiation by particles much smaller than the wavelength of the radiation. For light frequencies well below the resonance frequency of the scattering particle (normal dispersion regime), the amount of scattering is inversely proportional to the fourth power of the wavelength.

Rayleigh scattering results from the electric polarizability of the particles. The oscillating electric field of a light wave acts on the charges within a particle, causing them to move at the same frequency. The particle, therefore, becomes a small radiating dipole whose radiation we see as scattered light. The particles may be individual atoms or molecules; it can occur when light travels through transparent solids and liquids, but is most prominently seen in gases.

Rayleigh scattering of sunlight in Earth's atmosphere causes diffuse sky radiation, which is the reason for the blue color of the daytime and twilight sky, as well as the yellowish to reddish hue of the low Sun. Sunlight is also subject to Raman scattering, which changes the rotational state of the molecules and gives rise to polarization effects.

Scattering by particles similar to, or larger than, the wavelength of light is typically treated by the Mie theory, the discrete dipole approximation and other computational techniques. Rayleigh scattering applies to particles that are small with respect to wavelengths of light, and that are optically "soft" (i.e., with a refractive index close to 1). Anomalous diffraction theory applies to optically soft but larger particles.

History

In 1869, while attempting to determine whether any contaminants remained in the purified air he used for infrared experiments, John Tyndall discovered that bright light scattering off nanoscopic particulates was faintly blue-tinted. He conjectured that a similar scattering of sunlight gave the sky its blue hue, but he could not explain the preference for blue light, nor could atmospheric dust explain the intensity of the sky's color.

In 1871, Lord Rayleigh published two papers on the color and polarization of skylight to quantify Tyndall's effect in water droplets in terms of the tiny particulates' volumes and refractive indices.  In 1881 with the benefit of James Clerk Maxwell's 1865 proof of the electromagnetic nature of light, he showed that his equations followed from electromagnetism. In 1899, he showed that they applied to individual molecules, with terms containing particulate volumes and refractive indices replaced with terms for molecular polarizability.

Small size parameter approximation

The size of a scattering particle is often parameterized by the ratio

x = 2πr / λ

where r is the particle's radius, λ is the wavelength of the light and x is a dimensionless parameter that characterizes the particle's interaction with the incident radiation such that: Objects with x ≫ 1 act as geometric shapes, scattering light according to their projected area. At the intermediate x ≃ 1 of Mie scattering, interference effects develop through phase variations over the object's surface. Rayleigh scattering applies to the case when the scattering particle is very small (x ≪ 1, with a particle size less than about 1/10 of the wavelength) and the whole surface re-radiates with the same phase. Because the particles are randomly positioned, the scattered light arrives at a particular point with a random collection of phases; it is incoherent and the resulting intensity is just the sum of the squares of the amplitudes from each particle and therefore proportional to the inverse fourth power of the wavelength and the sixth power of its size. The wavelength dependence is characteristic of dipole scattering and the volume dependence will apply to any scattering mechanism. In detail, the intensity I of light scattered by any one of the small spheres of diameter d and refractive index n from a beam of unpolarized light of wavelength λ and intensity I0 is given by

I = I0 · ((1 + cos²θ) / (2R²)) · (2π/λ)⁴ · ((n² − 1) / (n² + 2))² · (d/2)⁶

where R is the distance to the particle and θ is the scattering angle. Averaging this over all angles gives the Rayleigh scattering cross-section

σs = (2π⁵/3) · (d⁶/λ⁴) · ((n² − 1) / (n² + 2))²
The fraction of light scattered by scattering particles over the unit travel length (e.g., meter) is the number of particles per unit volume N times the cross-section. For example, the major constituent of the atmosphere, nitrogen, has a Rayleigh cross section of 5.1×10⁻³¹ m² at a wavelength of 532 nm (green light). This means that at atmospheric pressure, where there are about 2×10²⁵ molecules per cubic meter, about a fraction 10⁻⁵ of the light will be scattered for every meter of travel.
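That estimate can be reproduced with a couple of lines of arithmetic; the snippet below multiplies the quoted cross-section by the quoted number density and also illustrates the λ⁻⁴ scaling between blue and red light.

```python
# Fraction of light Rayleigh-scattered per meter of travel in air, using the
# nitrogen cross-section and number density quoted above.

sigma_532 = 5.1e-31      # m^2, Rayleigh cross-section of N2 at 532 nm
N = 2e25                 # molecules per cubic meter at atmospheric pressure

fraction_per_meter = N * sigma_532
print(f"scattered fraction per meter at 532 nm: {fraction_per_meter:.1e}")  # ~1e-5

# lambda^-4 scaling: relative scattering of blue (450 nm) vs red (700 nm) light
ratio = (700 / 450) ** 4
print(f"blue (450 nm) is scattered ~{ratio:.1f}x more strongly than red (700 nm)")
```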

The strong wavelength dependence of the scattering (~λ⁻⁴) means that shorter (blue) wavelengths are scattered more strongly than longer (red) wavelengths.

From molecules

Figure showing the greater proportion of blue light scattered by the atmosphere relative to red light.

The expression above can also be written in terms of individual molecules by expressing the dependence on refractive index in terms of the molecular polarizability α, proportional to the dipole moment induced by the electric field of the light. 

Effect of fluctuations

When the dielectric constant ε of a certain region of volume differs from the average dielectric constant of the medium, any incident light will be scattered; the scattered intensity is proportional to the variance σ² of the fluctuation in the dielectric constant ε of that region and, as for particle scattering, varies as λ⁻⁴.

Cause of the blue color of the sky

Scattered blue light is polarized. The picture on the right is shot through a polarizing filter: the polarizer transmits light that is linearly polarized in a specific direction.

The strong wavelength dependence of the scattering (~λ⁻⁴) means that shorter (blue) wavelengths are scattered more strongly than longer (red) wavelengths. This results in the indirect blue light coming from all regions of the sky. Rayleigh scattering is a good approximation of the manner in which light scattering occurs within various media for which the scattering particles have a small size parameter.

A portion of the beam of light coming from the sun scatters off molecules of gas and other small particles in the atmosphere. Here, Rayleigh scattering primarily occurs through sunlight's interaction with randomly located air molecules. It is this scattered light that gives the surrounding sky its brightness and its color. As previously stated, Rayleigh scattering is inversely proportional to the fourth power of wavelength, so that shorter wavelength violet and blue light will scatter more than the longer wavelengths (yellow and especially red light). However, the Sun, like any star, has its own spectrum and so I0 in the scattering formula above is not constant but falls away in the violet. In addition the oxygen in the Earth's atmosphere absorbs wavelengths at the edge of the ultra-violet region of the spectrum. The resulting color, which appears like a pale blue, actually is a mixture of all the scattered colors, mainly blue and green. Conversely, glancing toward the sun, the colors that were not scattered away — the longer wavelengths such as red and yellow light — are directly visible, giving the sun itself a slightly yellowish hue. Viewed from space, however, the sky is black and the sun is white.

The reddening of the sun is intensified when it is near the horizon because the light being received directly from it must pass through more of the atmosphere. The effect is further increased because the sunlight must pass through a greater proportion of the atmosphere nearer the earth's surface, where it is denser. This removes a significant proportion of the shorter wavelength (blue) and medium wavelength (green) light from the direct path to the observer. The remaining unscattered light is therefore mostly of longer wavelengths and appears more red.

Some of the scattering can also be from sulfate particles. For years after large Plinian eruptions, the blue cast of the sky is notably brightened by the persistent sulfate load of the stratospheric gases. Some works of the artist J. M. W. Turner may owe their vivid red colours to the eruption of Mount Tambora in his lifetime.

In locations with little light pollution, the moonlit night sky is also blue, because moonlight is reflected sunlight, with a slightly lower color temperature due to the brownish color of the moon. The moonlit sky is not perceived as blue, however, because at low light levels human vision comes mainly from rod cells that do not produce any color perception (Purkinje effect).

In amorphous solids

Rayleigh scattering is also an important mechanism of wave scattering in amorphous solids such as glass, and is responsible for acoustic wave damping and phonon damping in glasses and granular matter at low or not too high temperatures.

In optical fibers

Rayleigh scattering is an important component of the scattering of optical signals in optical fibers. Silica fibers are glasses, disordered materials with microscopic variations of density and refractive index. These give rise to energy losses due to the scattered light, with the following coefficient:

α = (8π³ / 3λ⁴) · n⁸ · p² · k · Tf · β

where n is the refraction index, p is the photoelastic coefficient of the glass, k is the Boltzmann constant, and β is the isothermal compressibility. Tf is a fictive temperature, representing the temperature at which the density fluctuations are "frozen" in the material.

In porous materials

Rayleigh scattering in opalescent glass: it appears blue from the side, but orange light shines through.

Rayleigh-type λ⁻⁴ scattering can also be exhibited by porous materials. An example is the strong optical scattering by nanoporous materials. The strong contrast in refractive index between pores and solid parts of sintered alumina results in very strong scattering, with light completely changing direction every five micrometers on average. The λ⁻⁴-type scattering is caused by the nanoporous structure (a narrow pore size distribution around ~70 nm) obtained by sintering monodisperse alumina powder.

 

Delayed-choice quantum eraser

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser A delayed-cho...