
Tuesday, April 4, 2017

Feedback

From Wikipedia, the free encyclopedia

Feedback exists between two parts when each affects the other.[1](p53)
A feedback loop where all outputs of a process are available as causal inputs to that process

Feedback occurs when outputs of a system are routed back as inputs as part of a chain of cause-and-effect that forms a circuit or loop.[2] The system can then be said to feed back into itself. The notion of cause-and-effect has to be handled carefully when applied to feedback systems:
"Simple causal reasoning about a feedback system is difficult because the first system influences the second and second system influences the first, leading to a circular argument. This makes reasoning based upon cause and effect tricky, and it is necessary to analyze the system as a whole." [3]

History

Self-regulating mechanisms have existed since antiquity, and the idea of feedback had started to enter economic theory in Britain by the eighteenth century, but it wasn't at that time recognized as a universal abstraction and so didn't have a name.[4]

The verb phrase "to feed back", in the sense of returning to an earlier position in a mechanical process, was in use in the US by the 1860s,[5][6] and in 1909, Nobel laureate Karl Ferdinand Braun used the term "feed-back" as a noun to refer to (undesired) coupling between components of an electronic circuit.[7]

By the end of 1912, researchers using early electronic amplifiers (audions) had discovered that deliberately coupling part of the output signal back to the input circuit would boost the amplification (through regeneration), but would also cause the audion to howl or sing.[8] This action of feeding back of the signal from output to input gave rise to the use of the term "feedback" as a distinct word by 1920.[8]

Over the years there has been some dispute as to the best definition of feedback. According to Ashby (1956), mathematicians and theorists interested in the principles of feedback mechanisms prefer the definition of circularity of action, which keeps the theory simple and consistent. For those with more practical aims, feedback should be a deliberate effect via some more tangible connection.
"[Practical experimenters] object to the mathematician's definition, pointing out that this would force them to say that feedback was present in the ordinary pendulum ... between its position and its momentum—a 'feedback' that, from the practical point of view, is somewhat mystical. To this the mathematician retorts that if feedback is to be considered present only when there is an actual wire or nerve to represent it, then the theory becomes chaotic and riddled with irrelevancies."[1](p54)
Focusing on uses in management theory, Ramaprasad (1983) defines feedback generally as "...information about the gap between the actual level and the reference level of a system parameter" that is used to "alter the gap in some way." He emphasizes that the information by itself is not feedback unless translated into action.[9]

Types

Positive and negative feedback

Maintaining a desired system performance despite disturbance using negative feedback to reduce system error.

There are two types of feedback: positive feedback and negative feedback.

As an example of negative feedback, the diagram might represent a cruise control system in a car that matches a target speed such as the speed limit. The controlled system is the car; its input includes the combined torque from the engine and from the changing slope of the road (the disturbance). The car's speed (status) is measured by a speedometer. The error signal is the difference between the speed measured by the speedometer and the target speed (set point). The controller interprets this measured error to adjust the accelerator, commanding the fuel flow to the engine (the effector). The resulting change in engine torque, the feedback, combines with the torque exerted by the changing road grade to reduce the error in speed, minimizing the effect of the road disturbance.
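As a rough illustration of this kind of negative-feedback loop (a minimal sketch, not taken from the article), the Python snippet below simulates a purely proportional cruise controller; the gain, time step, vehicle model, and grade disturbance are all invented for the example.

```python
# Toy negative-feedback loop: proportional control of vehicle speed (illustrative values only).
def simulate_cruise(set_point=25.0, steps=300, dt=0.1):
    speed = 20.0                                 # measured speed (m/s), the "speedometer" reading
    kp = 2.0                                     # controller gain (assumed value)
    for step in range(steps):
        grade = -1.0 if step > 150 else 0.0      # deceleration from an uphill grade (m/s^2), the disturbance
        error = set_point - speed                # error signal = set point - measured speed
        accel = kp * error + grade               # controller command plus disturbance
        speed += accel * dt                      # the "plant" integrates acceleration into speed
    return speed

print(round(simulate_cruise(), 2))  # close to 25, with a small steady-state offset left by the grade
```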

The terms "positive" and "negative" were first applied to feedback prior to WWII. The idea of positive feedback was already current in the 1920s with the introduction of the regenerative circuit.[10] Friis and Jensen (1924) described regeneration in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mention only in passing.[11] Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black:
"Positive feed-back increases the gain of the amplifier, negative feed-back reduces it."[12]
According to Mindell (2002) confusion in the terms arose shortly after this:
"...Friis and Jensen had made the same distinction Black used between 'positive feed-back' and 'negative feed-back', based not on the sign of the feedback itself but rather on its effect on the amplifier’s gain. In contrast, Nyquist and Bode, when they built on Black’s work, referred to negative feedback as that with the sign reversed. Black had trouble convincing others of the utility of his invention in part because confusion existed over basic matters of definition."[10](p121)
Even prior to the terms being applied, James Clerk Maxwell had described several kinds of "component motions" associated with the centrifugal governors used in steam engines, distinguishing between those that lead to a continual increase in a disturbance or the amplitude of an oscillation, and those that lead to a decrease of the same.[13]

Terminology

The terms positive and negative feedback are defined in different ways within different disciplines:
  1. the altering of the gap between reference and actual values of a parameter, based on whether the gap is widening (positive) or narrowing (negative).[9]
  2. the valence of the action or effect that alters the gap, based on whether it has a happy (positive) or unhappy (negative) emotional connotation to the recipient or observer.[14]
The two definitions may cause confusion, such as when an incentive (reward) is used to boost poor performance (narrow a gap). Referring to definition 1, some authors use alternative terms, replacing positive/negative with self-reinforcing/self-correcting,[15] reinforcing/balancing,[16] discrepancy-enhancing/discrepancy-reducing[17] or regenerative/degenerative[18] respectively. And for definition 2, some authors advocate describing the action or effect as positive/negative reinforcement or punishment rather than feedback.[9][19] Yet even within a single discipline an example of feedback can be called either positive or negative, depending on how values are measured or referenced.[20]
This confusion may arise because feedback can be used for either informational or motivational purposes, and often has both a qualitative and a quantitative component. As Connellan and Zemke (1993) put it:
"Quantitative feedback tells us how much and how many. Qualitative feedback tells us how good, bad or indifferent."[21](p102)

Limitations of negative and positive feedback

While simple systems can sometimes be described as one or the other type, many systems with feedback loops cannot be so easily designated as simply positive or negative, and this is especially true when multiple loops are present.
"When there are only two parts joined so that each affects the other, the properties of the feedback give important and useful information about the properties of the whole. But when the parts rise to even as few as four, if every one affects the other three, then twenty circuits can be traced through them; and knowing the properties of all the twenty circuits does not give complete information about the system."[1](p54)

Other types of feedback

In general, feedback systems can have many signals fed back, and feedback loops frequently contain mixtures of positive and negative feedback, with one or the other dominating at different frequencies or at different points in the state space of the system.

The term bipolar feedback has been coined to refer to biological systems where positive and negative feedback systems can interact, the output of one affecting the input of another, and vice versa.[22]

Some systems with feedback can have very complex behaviors such as chaotic behaviors in non-linear systems, while others have much more predictable behaviors, such as those that are used to make and design digital systems.

Feedback is used extensively in digital systems. For example, binary counters and similar devices employ feedback where the current state and inputs are used to calculate a new state which is then fed back and clocked back into the device to update it.
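As an illustrative sketch of that state-update idea (the counter width and update rule here are assumptions, not from the article), each clock tick computes a new state from the fed-back current state:

```python
# Toy model of a clocked binary counter: the current state is fed back into the next-state logic.
def next_state(state, width=4):
    """Next-state logic: increment, wrapping around at the register width."""
    return (state + 1) % (2 ** width)

state = 0                        # the stored (fed-back) state
for tick in range(6):            # each iteration stands in for one clock edge
    state = next_state(state)    # the new state is clocked back into the register
    print(format(state, "04b"))  # 0001, 0010, 0011, 0100, 0101, 0110
```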

Applications

Biology

In biological systems such as organisms, ecosystems, or the biosphere, most parameters must stay under control within a narrow range around a certain optimal level under certain environmental conditions. Deviation from the optimal value of the controlled parameter can result from changes in the internal and external environments. A change in some environmental conditions may also require that range to shift for the system to keep functioning. The value of the parameter to maintain is recorded by a reception system and conveyed to a regulation module via an information channel. An example of this is insulin oscillations.
Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts, positive and negative do not imply that the feedback causes good or bad effects. A negative feedback loop is one that tends to slow down a process, whereas a positive feedback loop tends to accelerate it. Mirror neurons are part of a social feedback system: when an action is observed, the brain "mirrors" it much as if it were a self-performed action.

Feedback is also central to the operations of genes and gene regulatory networks. Repressor (see Lac repressor) and activator proteins are used to create genetic operons, which were identified by François Jacob and Jacques Monod in 1961 as feedback loops. These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell), or negative (as is often the case in metabolic consumption).

On a larger scale, feedback can have a stabilizing effect on animal populations even when profoundly affected by external changes, although time lags in feedback response can give rise to predator-prey cycles.[23]

In zymology, feedback serves as regulation of activity of an enzyme by its direct product(s) or downstream metabolite(s) in the metabolic pathway (see Allosteric regulation).

The hypothalamic–pituitary–adrenal axis is largely controlled by positive and negative feedback, much of which is still unknown.

In psychology, the body receives a stimulus from the environment or internally that causes the release of hormones. Release of hormones then may cause more of those hormones to be released, causing a positive feedback loop. This cycle is also found in certain behaviour. For example, "shame loops" occur in people who blush easily. When they realize that they are blushing, they become even more embarrassed, which leads to further blushing, and so on.[24]

Climate science

The climate system is characterized by strong positive and negative feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is the ice-albedo positive feedback loop whereby melting snow exposes more dark ground (of lower albedo), which in turn absorbs heat and causes more snow to melt.

Control theory

Feedback is extensively used in control theory, using a variety of methods including state space (controls), full state feedback (also known as pole placement), and so forth. Note that in the context of control theory, "feedback" is traditionally assumed to specify "negative feedback".[25]
Further information: PID controller

The most common general-purpose controller using a control-loop feedback mechanism is a proportional-integral-derivative (PID) controller. Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on the present error, the integral term on the accumulation of past errors, and the derivative term is a prediction of future error, based on current rate of change.[26]
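A minimal sketch of this interpretation in Python (the gains, time step, and toy process below are arbitrary choices for illustration, not a reference implementation):

```python
# Minimal PID controller: proportional = present error, integral = accumulated past errors,
# derivative = current rate of change of the error (a rough prediction of its future trend).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, set_point, measurement):
        error = set_point - measurement
        self.integral += error * self.dt                  # accumulate past errors
        derivative = (error - self.prev_error) / self.dt  # current rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a simple first-order process toward a set point of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
value = 0.0
for _ in range(400):
    control = pid.update(1.0, value)
    value += (control - value) * 0.05    # toy process dynamics (assumed)
print(round(value, 3))                   # close to 1.0
```

Note how the integral term keeps accumulating until the error is driven to zero, which removes the steady-state offset a purely proportional controller would leave.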

Mechanical engineering

In ancient times, the float valve was used to regulate the flow of water in Greek and Roman water clocks; similar float valves are used to regulate fuel in a carburettor and to regulate the tank water level in the flush toilet.

The Dutch inventor Cornelius Drebbel (1572-1633) built thermostats (c1620) to control the temperature of chicken incubators and chemical furnaces. In 1745, the windmill was improved by blacksmith Edmund Lee, who added a fantail to keep the face of the windmill pointing into the wind. In 1787, Thomas Mead regulated the rotation speed of a windmill by using a centrifugal pendulum to adjust the distance between the bedstone and the runner stone (i.e., to adjust the load).

The use of the centrifugal governor by James Watt in 1788 to regulate the speed of his steam engine was one factor leading to the Industrial Revolution. Steam engines also use float valves and pressure release valves as mechanical regulation devices. A mathematical analysis of Watt's governor was done by James Clerk Maxwell in 1868.[13]

The Great Eastern was one of the largest steamships of its time and employed a steam powered rudder with feedback mechanism designed in 1866 by John McFarlane Gray. Joseph Farcot coined the word servo in 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns. Elmer Ambrose Sperry of the Sperry Corporation designed the first autopilot in 1912. Nicolas Minorsky published a theoretical analysis of automatic ship steering in 1922 and described the PID controller.[27]

Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as the vacuum timing advance but mechanical feedback was replaced by electronic engine management systems once small, robust and powerful single-chip microcontrollers became affordable.

Electronic engineering

The simplest form of a feedback amplifier can be represented by the ideal block diagram made up of unilateral elements.[28]

The use of feedback is widespread in the design of electronic amplifiers, oscillators, and stateful logic circuit elements such as flip-flops and counters. Electronic feedback systems are also very commonly used to control mechanical, thermal and other physical processes.

If the signal is inverted on its way round the control loop, the system is said to have negative feedback;[29] otherwise, the feedback is said to be positive. Negative feedback is often deliberately introduced to increase the stability and accuracy of a system by correcting or reducing the influence of unwanted changes. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in arrival of the correcting signal can result in overcorrection, causing the output to oscillate or "hunt".[30] While often an unwanted consequence of system behaviour, this effect is used deliberately in electronic oscillators.

Harry Nyquist contributed the Nyquist plot for assessing the stability of feedback systems. An easier assessment, but less general, is based upon gain margin and phase margin using Bode plots (contributed by Hendrik Bode). Design to ensure stability often involves frequency compensation, one method of compensation being pole splitting.

Electronic feedback loops are used to control the output of electronic devices, such as amplifiers. A feedback loop is created when all or some portion of the output is fed back to the input. A device is said to be operating open loop if no output feedback is being employed and closed loop if feedback is being used.[31]

When two or more amplifiers are cross-coupled using positive feedback, complex behaviors can be created. These multivibrators are widely used and include:
  • astable circuits, which act as oscillators
  • monostable circuits, which can be pushed into a state, and will return to the stable state after some time
  • bistable circuits, which have two stable states that the circuit can be switched between

Negative feedback

Negative feedback occurs when the fed-back output signal has a relative phase of 180° with respect to the input signal (upside down). This situation is sometimes referred to as being out of phase, but that term is also used to indicate other phase separations, as in "90° out of phase". Negative feedback can be used to correct output errors or to desensitize a system to unwanted fluctuations.[32] In feedback amplifiers, this correction is generally used to reduce waveform distortion or to establish a specified gain level. A general expression for the gain of a negative feedback amplifier is given by the asymptotic gain model.
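For illustration (not part of the article), the familiar ideal relation for a negative feedback amplifier, closed-loop gain = A / (1 + Aβ), can be checked numerically; the gain values and feedback fraction below are arbitrary:

```python
# Ideal negative-feedback amplifier: closed-loop gain = A / (1 + A*beta).
# With a large open-loop gain A, the result is set almost entirely by the feedback
# network (about 1/beta), which is why negative feedback desensitizes the gain to A.
def closed_loop_gain(open_loop_gain, beta):
    return open_loop_gain / (1 + open_loop_gain * beta)

beta = 0.01                                       # feedback fraction (assumed)
for a in (1e4, 1e5, 1e6):                         # widely varying open-loop gains
    print(a, round(closed_loop_gain(a, beta), 2)) # all close to 1/beta = 100
```

Because the result is pinned near 1/β, large variations in the open-loop gain A barely change the closed-loop gain, which is the desensitizing effect described above.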

Positive feedback

Positive feedback occurs when the fed-back signal is in phase with the input signal. Under certain gain conditions, positive feedback reinforces the input signal to the point where the output of the device oscillates between its maximum and minimum possible states. Positive feedback may also introduce hysteresis into a circuit. This can cause the circuit to ignore small signals and respond only to large ones. It is sometimes used to eliminate noise from a digital signal. Under some circumstances, positive feedback may cause a device to latch, i.e., to reach a condition in which the output is locked to its maximum or minimum state. This fact is very widely used in digital electronics to make bistable circuits for volatile storage of information.
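A sketch of that hysteresis effect in Python (the thresholds are arbitrary, and this is a behavioural toy rather than a circuit model): the output only flips when the input crosses one of two widely separated thresholds, so small noise in between is ignored.

```python
# Toy Schmitt-trigger-like comparator: positive feedback creates two switching thresholds,
# so small noise between them cannot toggle the output (hysteresis).
def schmitt(samples, low=0.3, high=0.7):
    state = False                      # current output, latched by the feedback
    out = []
    for x in samples:
        if state and x < low:          # only a clearly low input switches the output off
            state = False
        elif not state and x > high:   # only a clearly high input switches it on
            state = True
        out.append(state)
    return out

noisy = [0.1, 0.45, 0.55, 0.9, 0.6, 0.4, 0.75, 0.2]
print(schmitt(noisy))   # small excursions around the middle do not cause extra toggles
```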

The loud squeals that sometimes occur in audio systems, PA systems, and rock music are known as audio feedback. If a microphone is in front of a loudspeaker that it is connected to, sound that the microphone picks up comes out of the speaker, is picked up again by the microphone, and is re-amplified. If the loop gain is sufficient, howling or squealing at the maximum power of the amplifier is possible.

Oscillator


An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave.[33][34] Oscillators convert direct current (DC) from a power supply to an alternating current signal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast by radio and television transmitters, clock signals that regulate computers and quartz clocks, and the sounds produced by electronic beepers and video games.[33]

Oscillators are often characterized by the frequency of their output signal:
  • A low-frequency oscillator (LFO) is an electronic oscillator that generates a frequency below ≈20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
  • An audio oscillator produces frequencies in the audio range, about 16 Hz to 20 kHz.[34]
  • An RF oscillator produces signals in the radio frequency (RF) range of about 100 kHz to 100 GHz.[34]
Oscillators designed to produce a high-power AC output from a DC supply are usually called inverters.

There are two main types of electronic oscillator: the linear or harmonic oscillator and the nonlinear or relaxation oscillator.[34][35]

Latches and flip-flops

An SR latch, constructed from a pair of cross-coupled NOR gates.

A latch or a flip-flop is a circuit that has two stable states and can be used to store state information. They are typically constructed using feedback that crosses over between two arms of the circuit, providing the circuit with a state. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Latches and flip-flops are fundamental building blocks of digital electronic systems used in computers, communications, and many other types of systems.
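A behavioural sketch (an illustration, not a hardware description) of the cross-coupled NOR latch in Python; it iterates the two gates until the feedback loop settles:

```python
# Behavioural model of an SR latch built from two cross-coupled NOR gates.
def nor(a, b):
    return not (a or b)

def sr_latch(s, r, q=False, q_bar=True):
    """Iterate the cross-coupled gates until the feedback loop settles."""
    for _ in range(4):                 # a few passes are enough for this loop to stabilize
        q_new = nor(r, q_bar)          # Q  = NOR(R, Q-bar)
        q_bar_new = nor(s, q_new)      # Q-bar = NOR(S, Q)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, qb = sr_latch(s=True, r=False)                    # set
print(q, qb)                                         # True False
q, qb = sr_latch(s=False, r=False, q=q, q_bar=qb)    # hold: the state is retained by feedback
print(q, qb)                                         # True False
q, qb = sr_latch(s=False, r=True, q=q, q_bar=qb)     # reset
print(q, qb)                                         # False True
```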

Latches and flip-flops are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on the current input, but also on the current state (and hence, on previous inputs). Latches and flip-flops can also be used for counting pulses and for synchronizing variably timed input signals to some reference timing signal.

Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the term flip-flop exclusively for discussing clocked circuits; the simple ones are commonly called latches.[36][37]

Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when a latch is enabled it becomes transparent, while a flip-flop's output only changes on a single type (positive-going or negative-going) of clock edge.
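That distinction can be sketched in Python as a behavioural toy model (the class names and input stream are invented for the example): the latch copies its input whenever the enable is high, whereas the flip-flop samples it only on a rising clock edge.

```python
# Level-sensitive D latch: the output follows the input whenever the enable is high ("transparent").
class DLatch:
    def __init__(self):
        self.q = 0
    def step(self, d, enable):
        if enable:
            self.q = d
        return self.q

# Edge-triggered D flip-flop: the output changes only on a rising edge of the clock.
class DFlipFlop:
    def __init__(self):
        self.q = 0
        self.prev_clk = 0
    def step(self, d, clk):
        if clk and not self.prev_clk:   # rising edge detected
            self.q = d
        self.prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
stream = [(1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]   # (data, enable-or-clock) samples
for d, c in stream:
    print(latch.step(d, c), ff.step(d, c))
# The latch tracks D while enabled; the flip-flop only updates on 0 -> 1 clock transitions.
```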

Software

Feedback loops provide generic mechanisms for controlling the running, maintenance, and evolution of software and computing systems.[38] Feedback loops are important models in the engineering of adaptive software, as they define the behaviour of the interactions among the control elements over the adaptation process, guaranteeing system properties at run-time. Feedback loops and the foundations of control theory have been successfully applied to computing systems.[39] In particular, they have been applied to the development of products such as IBM's Universal Database server and IBM Tivoli. From a software perspective, the autonomic MAPE (monitor-analyze-plan-execute) loop proposed by IBM researchers is another valuable contribution to the application of feedback loops to the control of dynamic properties and the design and evolution of autonomic software systems.[40][41]
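A minimal, hypothetical skeleton of such a monitor-analyze-plan-execute loop in Python (the metric, thresholds, and function names are invented for illustration; the real MAPE-K reference model is considerably richer):

```python
import random

# Hypothetical MAPE-style feedback loop for an adaptive software system.
TARGET_LATENCY_MS = 100
workers = 2

def monitor():
    """Collect a metric from the managed system (simulated here)."""
    return random.uniform(50, 200) * 2 / workers

def analyze(latency):
    """Compare the observation against the desired property."""
    return latency - TARGET_LATENCY_MS        # positive means too slow

def plan(deviation):
    """Decide an adaptation: scale the worker pool up or down."""
    if deviation > 10:
        return +1
    if deviation < -10:
        return -1
    return 0

def execute(delta):
    """Apply the adaptation back to the managed system, closing the loop."""
    global workers
    workers = max(1, workers + delta)

for _ in range(5):                            # a few iterations of the feedback loop
    latency = monitor()
    execute(plan(analyze(latency)))
    print(f"latency={latency:.0f} ms, workers={workers}")
```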

User interface design

Feedback is also a useful design principle for designing user interfaces.

Video feedback

Video feedback is the video equivalent of acoustic feedback. It involves a loop between a video camera input and a video output, e.g., a television screen or monitor. Aiming the camera at the display produces a complex video image based on the feedback.[42]

Social sciences

For "feedback" in a performance or educational context, see performance appraisal and corrective feedback.

Economics and finance

The stock market is an example of a system prone to oscillatory "hunting", governed by positive and negative feedback resulting from cognitive and emotional factors among market participants. For example,
  • When stocks are rising (a bull market), the belief that further rises are probable gives investors an incentive to buy (positive feedback—reinforcing the rise, see also stock market bubble); but the increased price of the shares, and the knowledge that there must be a peak after which the market falls, ends up deterring buyers (negative feedback—stabilizing the rise).
  • Once the market begins to fall regularly (a bear market), some investors may expect further losing days and refrain from buying (positive feedback—reinforcing the fall), but others may buy because stocks become more and more of a bargain (negative feedback—stabilizing the fall).
George Soros used the word reflexivity to describe feedback in the financial markets and developed an investment theory based on this principle.

The conventional economic equilibrium model of supply and demand supports only ideal linear negative feedback and was heavily criticized by Paul Ormerod in his book The Death of Economics, which, in turn, was criticized by traditional economists. This book was part of a change of perspective as economists started to recognise that chaos theory applied to nonlinear feedback systems including financial markets.

Monday, April 3, 2017

Joseph Fourier

From Wikipedia, the free encyclopedia

Joseph Fourier
Jean-Baptiste Joseph Fourier
Born: 21 March 1768, Auxerre, Burgundy, Kingdom of France (now in Yonne, France)
Died: 16 May 1830 (aged 62), Paris, Kingdom of France
Residence: France
Nationality: French
Fields: Mathematics, physics, history
Institutions: École Normale, École Polytechnique
Alma mater: École Normale
Academic advisors: Joseph-Louis Lagrange
Notable students: Peter Gustav Lejeune Dirichlet, Claude-Louis Navier, Giovanni Plana
Known for: Fourier series, Fourier transform, Fourier's law of conduction, Fourier–Motzkin elimination

Jean-Baptiste Joseph Fourier (/ˈfʊərieɪ, -iər/; French: [fuʁje]; 21 March 1768 – 16 May 1830) was a French mathematician and physicist born in Auxerre and best known for initiating the investigation of Fourier series and their applications to problems of heat transfer and vibrations. The Fourier transform and Fourier's law are also named in his honour. Fourier is also generally credited with the discovery of the greenhouse effect.[2]

Biography

Fourier was born at Auxerre (now in the Yonne département of France), the son of a tailor. He was orphaned at age nine. Fourier was recommended to the Bishop of Auxerre, and through this introduction, he was educated by the Benedictine Order of the Convent of St. Mark. The commissions in the scientific corps of the army were reserved for those of good birth, and being thus ineligible, he accepted a military lectureship on mathematics. He took a prominent part in his own district in promoting the French Revolution, serving on the local Revolutionary Committee. He was imprisoned briefly during the Terror but in 1795 was appointed to the École Normale, and subsequently succeeded Joseph-Louis Lagrange at the École Polytechnique.

Fourier accompanied Napoleon Bonaparte on his Egyptian expedition in 1798, as scientific adviser, and was appointed secretary of the Institut d'Égypte. Cut off from France by the English fleet, he organized the workshops on which the French army had to rely for their munitions of war. He also contributed several mathematical papers to the Egyptian Institute (also called the Cairo Institute) which Napoleon founded at Cairo, with a view of weakening English influence in the East. After the British victories and the capitulation of the French under General Menou in 1801, Fourier returned to France.
1820 watercolor caricatures of French mathematicians Adrien-Marie Legendre (left) and Joseph Fourier (right) by French artist Julien-Léopold Boilly, watercolor portraits numbers 29 and 30 of the Album de 73 Portraits-Charge Aquarellés des Membres de l'Institut.[3]

In 1801,[4] Napoleon appointed Fourier Prefect (Governor) of the Department of Isère in Grenoble, where he oversaw road construction and other projects. However, Fourier had previously returned home from the Egyptian expedition to resume his academic post as professor at the École Polytechnique, when Napoleon decided otherwise with this remark:

... the Prefect of the Department of Isère having recently died, I would like to express my confidence in citizen Fourier by appointing him to this place.[4]

Hence being faithful to Napoleon, he took the office of Prefect.[4] It was while at Grenoble that he began to experiment on the propagation of heat. He presented his paper On the Propagation of Heat in Solid Bodies to the Paris Institute on December 21, 1807. He also contributed to the monumental Description de l'Égypte.[5]

Fourier moved to England in 1816. Later, he returned to France, and in 1822 succeeded Jean Baptiste Joseph Delambre as Permanent Secretary of the French Academy of Sciences. In 1830, he was elected a foreign member of the Royal Swedish Academy of Sciences.

In 1830, his diminished health began to take its toll:
Fourier had already experienced, in Egypt and Grenoble, some attacks of aneurism of the heart. At Paris, it was impossible to be mistaken with respect to the primary cause of the frequent suffocations which he experienced. A fall, however, which he sustained on the 4th of May 1830, while descending a flight of stairs, aggravated the malady to an extent beyond what could have been ever feared.[6]
Shortly after this event, he died in his bed on 16 May 1830.

Fourier was buried in the Père Lachaise Cemetery in Paris, in a tomb decorated with an Egyptian motif to reflect his position as secretary of the Cairo Institute and his collation of the Description de l'Égypte. His name is one of the 72 names inscribed on the Eiffel Tower.

A bronze statue was erected in Auxerre in 1849, but it was melted down for armaments during World War II.[7] Joseph Fourier University in Grenoble is named after him.

The Analytic Theory of Heat

Sketch of Fourier, circa 1820.

In 1822 Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytical Theory of Heat),[8] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. This book was translated,[9] with editorial 'corrections',[10] into English 56 years later by Freeman (1878).[11] The book was also edited, with many editorial corrections, by Darboux and republished in French in 1888.[10]

There were three important contributions in this work, one purely mathematical, two essentially physical. In mathematics, Fourier claimed that any function of a variable, whether continuous or discontinuous, can be expanded in a series of sines of multiples of the variable. Though this result is not correct without additional conditions, Fourier's observation that some discontinuous functions are the sum of infinite series was a breakthrough. The question of determining when a Fourier series converges has been fundamental for centuries. Joseph-Louis Lagrange had given particular cases of this (false) theorem, and had implied that the method was general, but he had not pursued the subject. Peter Gustav Lejeune Dirichlet was the first to give a satisfactory demonstration of it with some restrictive conditions. This work provides the foundation for what is today known as the Fourier transform.
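As a worked illustration of that claim (using the now-standard theory rather than Fourier's own text), the sketch below sums the first terms of the well-known sine series for a square wave; even this discontinuous function is approximated ever more closely by a finite sum of sines.

```python
import math

# Partial sums of the classic Fourier sine series of a square wave on (0, 2*pi):
#   f(x) = (4/pi) * sum over odd k of sin(k*x)/k, which equals +1 on (0, pi) and -1 on (pi, 2*pi).
def square_wave_partial_sum(x, terms=25):
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, 2 * terms, 2))

for x in (0.5, 1.5, 2.5):                         # points inside the "+1" half of the wave
    print(round(square_wave_partial_sum(x), 3))   # each value is close to 1.0
```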

One important physical contribution in the book was the concept of dimensional homogeneity in equations; i.e. an equation can be formally correct only if the dimensions match on either side of the equality; Fourier made important contributions to dimensional analysis.[12] The other physical contribution was Fourier's proposal of his partial differential equation for conductive diffusion of heat. This equation is now taught to every student of mathematical physics.
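For reference, the heat equation he proposed is, written in its modern one-dimensional form with u the temperature and α the thermal diffusivity:
\[
\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}
\]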

Determinate equations

Bust of Fourier in Grenoble

Fourier left an unfinished work on determinate equations which was edited by Claude-Louis Navier and published in 1831. This work contains much original matter — in particular, there is a demonstration of Fourier's theorem on the position of the roots of an algebraic equation. Joseph-Louis Lagrange had shown how the roots of an algebraic equation might be separated by means of another equation whose roots were the squares of the differences of the roots of the original equation. François Budan, in 1807 and 1811, had enunciated the theorem generally known by the name of Fourier, but the demonstration was not altogether satisfactory. Fourier's proof[13] is the same as that usually given in textbooks on the theory of equations. The final solution of the problem was given in 1829 by Jacques Charles François Sturm.

Discovery of the greenhouse effect

Fourier's grave, Père Lachaise Cemetery

In the 1820s Fourier calculated that an object the size of the Earth, and at its distance from the Sun, should be considerably colder than the planet actually is if warmed by only the effects of incoming solar radiation. He examined various possible sources of the additional observed heat in articles published in 1824[14] and 1827.[15] While he ultimately suggested that interstellar radiation might be responsible for a large portion of the additional warmth, Fourier's consideration of the possibility that the Earth's atmosphere might act as an insulator of some kind is widely recognized as the first proposal of what is now known as the greenhouse effect,[16] although Fourier never called it that.[17][18]

In his articles, Fourier referred to an experiment by de Saussure, who lined a vase with blackened cork. Into the cork, he inserted several panes of transparent glass, separated by intervals of air. Midday sunlight was allowed to enter at the top of the vase through the glass panes. The temperature became more elevated in the more interior compartments of this device. Fourier concluded that gases in the atmosphere could form a stable barrier like the glass panes.[19] This conclusion may have contributed to the later use of the metaphor of the 'greenhouse effect' to refer to the processes that determine atmospheric temperatures.[20] Fourier noted that the actual mechanisms that determine the temperatures of the atmosphere included convection, which was not present in de Saussure's experimental device.

Works

  • Fourier, Joseph (1821). Rapport sur les tontines. 5. Paris: Memoirs of the Royal Academy of Sciences of the Institut de France. pp. 26–43.

Half-life

From Wikipedia, the free encyclopedia

Half-life (abbreviated t1⁄2) is the time required for a quantity to reduce to half its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay or how long stable atoms survive. The term is also used more generally to characterize any type of exponential or non-exponential decay. For example, the medical sciences refer to the biological half-life of drugs and other chemicals in the body. The converse of half-life is doubling time.

The original term, half-life period, dating to Ernest Rutherford's discovery of the principle in 1907, was shortened to half-life in the early 1950s.[1] Rutherford applied the principle of a radioactive element's half-life to studies of age determination of rocks by measuring the decay period of radium to lead-206.

Half-life is constant over the lifetime of an exponentially decaying quantity, and it is a characteristic unit for the exponential decay equation. After n half-lives have elapsed, the fraction of the original quantity remaining is (1/2)^n.

Probabilistic nature

Simulation of many identical atoms undergoing radioactive decay, starting with either 4 atoms per box (left) or 400 (right). The number at the top is how many half-lives have elapsed. Note the consequence of the law of large numbers: with more atoms, the overall decay is more regular and more predictable.

A half-life usually describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there are 3 radioactive atoms with a half-life of one second, there will not be "1.5 atoms" left after one second.

Instead, the half-life is defined in terms of probability: "Half-life is the time required for exactly half of the entities to decay on average". In other words, the probability of a radioactive atom decaying within its half-life is 50%.

For example, the image on the right is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately, because of the random variation in the process. Nevertheless, when there are many identical atoms decaying (right boxes), the law of large numbers suggests that it is a very good approximation to say that half of the atoms remain after one half-life.

There are various simple exercises that demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program.[2][3][4]
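One such computer exercise, sketched in Python (the atom counts are arbitrary): each atom independently survives a half-life with probability 1/2, and the surviving fraction is only approximately one half, with the approximation improving as the number of atoms grows.

```python
import random

# Monte Carlo demonstration of probabilistic decay: each atom survives one half-life
# with probability 1/2, independently of the others.
def simulate(n_atoms, half_lives):
    remaining = n_atoms
    for _ in range(half_lives):
        remaining = sum(1 for _ in range(remaining) if random.random() < 0.5)
    return remaining

for n in (4, 400, 40000):
    left = simulate(n, 1)
    print(n, left, round(left / n, 3))   # the fraction approaches 0.5 as n grows (law of large numbers)
```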

Formulas for half-life in exponential decay

An exponential decay can be described by any of the following three equivalent formulas:
\[
\begin{aligned}
N(t) &= N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}} \\
N(t) &= N_0\, e^{-t/\tau} \\
N(t) &= N_0\, e^{-\lambda t}
\end{aligned}
\]
where
  • N0 is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.),
  • N(t) is the quantity that still remains and has not yet decayed after a time t,
  • t1⁄2 is the half-life of the decaying quantity,
  • τ is a positive number called the mean lifetime of the decaying quantity,
  • λ is a positive number called the decay constant of the decaying quantity.
The three parameters t1⁄2, τ, and λ are all directly related in the following way:
\[
t_{1/2} = \frac{\ln(2)}{\lambda} = \tau \ln(2)
\]
By plugging in and manipulating these relationships, we get all of the following equivalent descriptions of exponential decay, in terms of the half-life:
\[
\begin{aligned}
N(t) &= N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}} = N_0\, 2^{-t/t_{1/2}} = N_0\, e^{-t\ln(2)/t_{1/2}} \\
t_{1/2} &= \frac{t}{\log_2\!\left(N_0/N(t)\right)} = \frac{t}{\log_2 N_0 - \log_2 N(t)} \\
&= \frac{1}{\log_{2^t} N_0 - \log_{2^t} N(t)} = \frac{t\ln(2)}{\ln N_0 - \ln N(t)}
\end{aligned}
\]
Regardless of how it's written, we can plug into the formula to get
  • N(0) = N_0, as expected (this is the definition of "initial quantity")
  • N(t_{1/2}) = \tfrac{1}{2} N_0, as expected (this is the definition of half-life)
  • \lim_{t\to\infty} N(t) = 0; i.e., the amount approaches zero as t approaches infinity, as expected (the longer we wait, the less remains).
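A quick numerical check of these relationships (a sketch in Python with an arbitrary half-life of 10 time units and an arbitrary initial quantity):

```python
import math

t_half = 10.0                       # assumed half-life
lam = math.log(2) / t_half          # decay constant: lambda = ln(2) / t_half
tau = t_half / math.log(2)          # mean lifetime: tau = t_half / ln(2)
n0 = 1000.0                         # assumed initial quantity

def n(t):
    return n0 * 0.5 ** (t / t_half)

print(round(n(0), 3))                          # 1000.0 -> the initial quantity
print(round(n(t_half), 3))                     # 500.0  -> half remains after one half-life
print(round(n0 * math.exp(-lam * t_half), 3))  # 500.0  -> same result via the decay-constant form
print(round(n0 * math.exp(-t_half / tau), 3))  # 500.0  -> same result via the mean-lifetime form
```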

Decay by two or more processes

Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life T1⁄2 can be related to the half-lives t1 and t2 that the quantity would have if each of the decay processes acted in isolation:
\[
\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2}
\]
For three or more processes, the analogous formula is:
\[
\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2} + \frac{1}{t_3} + \cdots
\]
For a proof of these formulas, see Exponential decay § Decay by two or more processes.
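As a quick worked check with hypothetical half-lives of 3 and 6 time units, the combined half-life comes out shorter than either individual one, as expected when two channels remove material at once:
\[
\frac{1}{T_{1/2}} = \frac{1}{3} + \frac{1}{6} = \frac{1}{2} \quad\Longrightarrow\quad T_{1/2} = 2
\]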

Examples

Half life demonstrated using dice in a classroom experiment

There is a half-life describing any exponential-decay process. For example:
  • The current flowing through an RC circuit or RL circuit decays with a half-life of RCln(2) or ln(2)L/R, respectively. For this example, the term half time might be used instead of "half life", but they mean the same thing.
  • In a first-order chemical reaction, the half-life of the reactant is ln(2)/λ, where λ is the reaction rate constant.
  • In radioactive decay, the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally. See List of nuclides.
The half-life of a chemical species is the time it takes for the concentration of the substance to fall to half of its initial value.

In non-exponential decay

The decay of many physical quantities is not exponential—for example, the evaporation of water from a puddle, or (often) the chemical reaction of a molecule. In such cases, the half-life is defined the same way as before: as the time elapsed before half of the original quantity has decayed. However, unlike in an exponential decay, the half-life depends on the initial quantity, and the prospective half-life will change over time as the quantity decays.
As an example, the radioactive decay of carbon-14 is exponential with a half-life of 5,730 years. A quantity of carbon-14 will decay to half of its original amount (on average) after 5,730 years, regardless of how big or small the original quantity was. After another 5,730 years, one-quarter of the original will remain. On the other hand, the time it will take a puddle to half-evaporate depends on how deep the puddle is. Perhaps a puddle of a certain size will evaporate down to half its original volume in one day. But on the second day, there is no reason to expect that one-quarter of the puddle will remain; in fact, it will probably be much less than that. This is an example where the half-life reduces as time goes on. (In other non-exponential decays, it can increase instead.)

The decay of a mixture of two or more materials which each decay exponentially, but with different half-lives, is not exponential. Mathematically, the sum of two exponential functions is not a single exponential function. A common example of such a situation is the waste of nuclear power stations, which is a mix of substances with vastly different half-lives. Consider a mixture of a rapidly decaying element A, with a half-life of 1 second, and a slowly decaying element B, with a half-life of 1 year. In a couple of minutes, almost all atoms of element A will have decayed after repeated halving of the initial number of atoms, but very few of the atoms of element B will have done so as only a tiny fraction of its half-life has elapsed. Thus, the mixture taken as a whole will not decay by halves.
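A brief Python sketch of this effect (the two half-lives are those of the example; equal initial amounts are an added assumption): for a true exponential the remaining amount would shrink by the same factor over every equal time interval, but for the mixture that factor drifts as the fast component disappears.

```python
# Mixture of two exponentially decaying components with very different half-lives.
T_A = 1.0                    # half-life of element A: 1 second
T_B = 365 * 24 * 3600.0      # half-life of element B: 1 year, in seconds
N_A0 = N_B0 = 1000.0         # equal initial amounts (an assumed, arbitrary choice)

def total(t):
    return N_A0 * 0.5 ** (t / T_A) + N_B0 * 0.5 ** (t / T_B)

# For a true exponential, the ratio total(t + dt) / total(t) would be the same for every t.
dt = 1.0
for t in (0.0, 1.0, 2.0, 10.0, 100.0):
    print(t, round(total(t + dt) / total(t), 4))
# The ratio climbs from 0.75 toward ~1.0: the mixture does not decay by a fixed factor per interval.
```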

In biology and pharmacology

A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration of a substance in blood plasma to reach one-half of its steady-state value (the "plasma half-life"). The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions.[5]
While a radioactive isotope decays almost perfectly according to so-called "first order kinetics" where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics.

For example, the biological half-life of water in a human being is about 9 to 10 days, though this can be altered by behavior and various other conditions. The biological half-life of cesium in human beings is between one and four months.

Self-awareness

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Self-awareness The Painter and the...