
Wednesday, October 7, 2020

Autocatalysis

From Wikipedia, the free encyclopedia
 

A single chemical reaction is said to be autocatalytic if one of the reaction products is also a catalyst for the same or a coupled reaction. Such a reaction is called an autocatalytic reaction.

A set of chemical reactions can be said to be "collectively autocatalytic" if a number of those reactions produce, as reaction products, catalysts for enough of the other reactions that the entire set of chemical reactions is self-sustaining given an input of energy and food molecules.

Chemical reactions

A chemical reaction of two reactants and two products can be written as

α A + β B ⇌ σ S + τ T

where the Greek letters are stoichiometric coefficients and the capital Latin letters represent chemical species. The chemical reaction proceeds in both the forward and reverse direction. This equation is easily generalized to any number of reactants, products, and reactions.

Chemical equilibrium

In chemical equilibrium the forward and reverse reaction rates are such that each chemical species is being created at the same rate it is being destroyed. In other words, the rate of the forward reaction is equal to the rate of the reverse reaction:

k₊ [A]^α [B]^β = k₋ [S]^σ [T]^τ

Here, the brackets indicate the concentration of the chemical species, in moles per liter, and k₊ and k₋ are the rate constants of the forward and reverse reactions.

Far from equilibrium

Far from equilibrium, the forward and reverse reaction rates no longer balance and the concentration of reactants and products is no longer constant. For every forward reaction, α molecules of A are destroyed. For every reverse reaction, α molecules of A are created. In the case of an elementary reaction step the reaction order in each direction equals the molecularity, so that the rate of change in the number of moles of A is then

d[A]/dt = −α k₊ [A]^α [B]^β + α k₋ [S]^σ [T]^τ

with analogous equations for the other species.

This system of equations has a single stable fixed point when the forward rates and the reverse rates are equal (when d[X]/dt = 0 for every species X). This means that the system evolves to the equilibrium state, and this is the only state to which it evolves.
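
To make this concrete, here is a minimal numerical sketch (an illustrative example, not from the article) of a reversible reaction relaxing to its equilibrium fixed point; the species, rate constants, and time step are all assumptions chosen for readability.

    # Illustrative sketch: relaxation of the reversible reaction A <-> S
    # with forward rate k_plus*[A] and reverse rate k_minus*[S].
    # All values are assumptions chosen for demonstration.
    k_plus, k_minus = 2.0, 1.0   # rate constants
    A, S = 1.0, 0.0              # initial concentrations, mol/L
    dt = 0.001                   # integration time step

    for _ in range(20_000):
        net = k_plus * A - k_minus * S   # net forward rate
        A -= net * dt
        S += net * dt

    # At the fixed point the forward and reverse rates balance:
    # k_plus*[A] = k_minus*[S], so [S]/[A] = k_plus/k_minus = 2.
    print(f"[A] = {A:.4f}, [S] = {S:.4f}, [S]/[A] = {S/A:.3f}")

Whatever the starting concentrations, the trajectory converges to the single state in which each species is created exactly as fast as it is destroyed.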

Autocatalytic reactions

Sigmoid variation of product concentration in autocatalytic reactions

Autocatalytic reactions are those in which at least one of the products is a reactant. Perhaps the simplest autocatalytic reaction can be written

A + B ⇌ 2B

with the rate equations (for an elementary reaction)

d[A]/dt = −k₊ [A][B] + k₋ [B]²

d[B]/dt = +k₊ [A][B] − k₋ [B]².

This reaction is one in which a molecule of species A interacts with a molecule of species B. The A molecule is converted into a B molecule. The final product consists of the original B molecule plus the B molecule created in the reaction.

The key feature of these rate equations is that they are nonlinear; the second term on the right varies as the square of the concentration of B. This feature can lead to multiple fixed points of the system, much like a quadratic equation can have multiple roots. Multiple fixed points allow for multiple states of the system. A system existing in multiple macroscopic states is more orderly (has lower entropy) than a system in a single state.

The concentrations of A and B vary in time according to

[A] = ([A]₀ + [B]₀) / (1 + ([B]₀/[A]₀) e^(([A]₀+[B]₀) k₊ t))

and

[B] = ([A]₀ + [B]₀) / (1 + ([A]₀/[B]₀) e^(−([A]₀+[B]₀) k₊ t)),

where the subscript 0 denotes the initial concentration and the reverse reaction has been neglected.

The graph for these equations is a sigmoid curve (specifically a logistic function), which is typical for autocatalytic reactions: these chemical reactions proceed slowly at the start (the induction period) because there is little catalyst present; the rate increases progressively as the amount of catalyst grows; and then it slows again as the reactant is depleted. If the concentration of a reactant or product in an experiment follows a sigmoid curve, the reaction may be autocatalytic.
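
A short numerical sketch (illustrative values, reverse reaction neglected) shows the sigmoid directly: integrating d[B]/dt = k[A][B] from a small seed of B reproduces the induction period, the acceleration, and the final slowdown.

    # Illustrative sketch of the autocatalytic step A + B -> 2B with
    # rate k[A][B]; the reverse reaction is neglected and all values
    # are assumptions.
    k = 1.0
    A, B = 1.0, 0.01   # a small seed of the catalyst B
    dt = 0.01

    for step in range(2_501):
        rate = k * A * B
        A -= rate * dt
        B += rate * dt
        if step % 500 == 0:
            print(f"t = {step * dt:5.1f}   [B] = {B:.4f}")
    # [B] traces a logistic (sigmoid) curve: slow at first while little
    # catalyst exists, fastest near [A] = [B], slow again as A runs out.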

These kinetic equations apply, for example, to the acid-catalyzed hydrolysis of some esters to carboxylic acids and alcohols. There must be at least some acid present initially to start the catalyzed mechanism; if not, the reaction must begin by an alternate uncatalyzed path, which is usually slower. Otherwise, the above equations for the catalyzed mechanism would imply that the concentration of acid product remains zero forever.

Creation of order

Background

The second law of thermodynamics states that the disorder (entropy) of a physical or chemical system together with its surroundings (a closed system) must increase with time. Systems left to themselves become increasingly random, and the orderly energy of a system, such as uniform motion, eventually degrades to the random motion of particles in a heat bath.

There are, however, many instances in which physical systems spontaneously become orderly. For example, despite the destruction they cause, hurricanes have a very orderly vortex motion when compared to the random motion of the air molecules in a closed room. Even more spectacular is the order created by chemical systems, the most dramatic being the order associated with life.

This is consistent with the Second Law, which requires that the total disorder of a system and its surroundings must increase with time. Order can be created in a system by an even greater decrease in order of the system's surroundings. In the hurricane example, hurricanes are formed from unequal heating within the atmosphere. The Earth's atmosphere is then far from thermal equilibrium. The order of the Earth's atmosphere increases, but at the expense of the order of the sun. The sun is becoming more disorderly as it ages and throws off light and material to the rest of the universe. The total disorder of the sun and the earth increases despite the fact that orderly hurricanes are generated on earth.

A similar example exists for living chemical systems. The sun provides energy to green plants. The green plants are food for other living chemical systems. The energy absorbed by plants and converted into chemical energy generates a system on earth that is orderly and far from chemical equilibrium. Here, the difference from chemical equilibrium is determined by an excess of reactants over the equilibrium amount. Once again, order on earth is generated at the expense of entropy increase of the sun. The total entropy of the earth and the rest of the universe increases, consistent with the Second Law.

Some autocatalytic reactions also generate order in a system at the expense of its surroundings. For example, clock reactions have intermediates whose concentrations oscillate in time, corresponding to temporal order. Other reactions generate spatial separation of chemical species, corresponding to spatial order. More complex reactions are involved in metabolic pathways and metabolic networks in biological systems.

The transition to order as the distance from equilibrium increases is not usually continuous. Order typically appears abruptly. The threshold between the disorder of chemical equilibrium and order is known as a phase transition. The conditions for a phase transition can be determined with the mathematical machinery of non-equilibrium thermodynamics.

Temporal order

A chemical reaction cannot oscillate about a position of final equilibrium because the second law of thermodynamics requires that a thermodynamic system approach equilibrium and not recede from it. For a closed system at constant temperature and pressure, the Gibbs free energy must decrease continuously and not oscillate. However it is possible that the concentrations of some reaction intermediates oscillate, and also that the rate of formation of products oscillates.

Idealized example: Lotka–Volterra equation

The Lotka–Volterra equation is isomorphic with the predator–prey model and with the two-reaction autocatalytic model: prey and predator populations play the role of the two chemical species X and Y in the autocatalytic reactions.

Consider a coupled set of two autocatalytic reactions in which the concentration of one of the reactants, A, is much larger than its equilibrium value. In this case the forward reaction rates are so much larger than the reverse rates that the reverse rates can be neglected:

A + X → 2X

X + Y → 2Y

Y → P

with the rate equations

d[X]/dt = k₁ [A][X] − k₂ [X][Y]

d[Y]/dt = k₂ [X][Y] − k₃ [Y].

Here, we have neglected the depletion of the reactant A, since its concentration is so large. The rate constants for the three reactions are k₁, k₂, and k₃, respectively.

This system of rate equations is known as the Lotka–Volterra equation and is most closely associated with population dynamics in predator–prey relationships. This system of equations can yield oscillating concentrations of the reaction intermediates X and Y. The amplitude of the oscillations depends on the concentration of A (which decreases without oscillation). Such oscillations are a form of emergent temporal order that is not present in equilibrium.
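
The oscillations are easy to reproduce numerically. The sketch below (illustrative rate constants and initial values, all assumptions) integrates the two rate equations with [A] held fixed:

    # Illustrative sketch of the Lotka-Volterra rate equations
    #   d[X]/dt = k1*[A][X] - k2*[X][Y]
    #   d[Y]/dt = k2*[X][Y] - k3*[Y]
    # with [A] held constant; constants and initial values are assumptions.
    k1, k2, k3 = 1.0, 1.0, 1.0
    A = 1.0
    X, Y = 1.5, 1.0
    dt = 0.001

    for step in range(20_001):
        dX = k1 * A * X - k2 * X * Y
        dY = k2 * X * Y - k3 * Y
        X += dX * dt
        Y += dY * dt
        if step % 2_000 == 0:
            print(f"t = {step * dt:5.1f}   X = {X:.3f}   Y = {Y:.3f}")
    # X and Y cycle around the fixed point (k3/k2, k1*A/k2) = (1, 1);
    # neither settles to a steady value.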

Another idealized example: Brusselator

Another example of a system that demonstrates temporal order is the Brusselator (see the Prigogine reference). It is characterized by the reactions

A → X

2X + Y → 3X

B + X → Y + D

X → E

with the rate equations

d[X]/dt = [A] + [X]²[Y] − [B][X] − [X]

d[Y]/dt = [B][X] − [X]²[Y]

where, for convenience, the rate constants have been set to 1.

The Brusselator in the unstable regime (A = 1, B = 2.5, X(0) = 1, Y(0) = 0): the system approaches a limit cycle. For B < 1 + A² the system is instead stable and approaches a fixed point.

The Brusselator has a fixed point at

[X] = A, [Y] = B/A.

The fixed point becomes unstable when

B > 1 + A²,

leading to an oscillation of the system. Unlike the Lotka–Volterra equation, the oscillations of the Brusselator do not depend on the amount of reactant present initially. Instead, after sufficient time, the oscillations approach a limit cycle.[6]
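
A direct integration of the Brusselator rate equations, using the parameters from the figure caption above, shows the approach to the limit cycle (a sketch; the explicit Euler stepping is an implementation choice):

    # Sketch of the Brusselator with rate constants set to 1:
    #   d[X]/dt = A + X^2*Y - B*X - X
    #   d[Y]/dt = B*X - X^2*Y
    # Parameters follow the caption: A = 1, B = 2.5 (so B > 1 + A^2).
    A, B = 1.0, 2.5
    X, Y = 1.0, 0.0
    dt = 0.001

    for step in range(40_001):
        dX = A + X * X * Y - B * X - X
        dY = B * X - X * X * Y
        X += dX * dt
        Y += dY * dt
        if step % 5_000 == 0:
            print(f"t = {step * dt:5.1f}   X = {X:.3f}   Y = {Y:.3f}")
    # The fixed point (X, Y) = (A, B/A) = (1, 2.5) is unstable here, and
    # the trajectory settles onto a limit cycle whose amplitude does not
    # depend on the initial conditions.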

Spatial order

An idealized example of spatial spontaneous symmetry breaking is the case in which we have two boxes of material separated by a permeable membrane so that material can diffuse between the two boxes. It is assumed that identical Brusselators are in each box with nearly identical initial conditions (see the Prigogine reference):

d[X₁]/dt = [A] + [X₁]²[Y₁] − [B][X₁] − [X₁] + D([X₂] − [X₁])

d[Y₁]/dt = [B][X₁] − [X₁]²[Y₁] + D([Y₂] − [Y₁])

and likewise for box 2. Here, the numerical subscripts indicate which box the material is in. There are additional terms, proportional to the diffusion coefficient D, that account for the exchange of material between the boxes.

If the system is initiated with the same conditions in each box, then a small fluctuation will lead to separation of materials between the two boxes. One box will have a predominance of X, and the other will have a predominance of Y.

Real examples

Real examples of clock reactions are the Belousov–Zhabotinsky reaction (BZ reaction), the Briggs–Rauscher reaction, the Bray–Liebhafsky reaction and the iodine clock reaction. These are oscillatory reactions, and the concentration of products and reactants can be approximated in terms of damped oscillations.

The best-known reaction, the BZ reaction, can be created with a mixture of potassium bromate (KBrO₃), malonic acid (CH₂(COOH)₂), and manganese sulfate (MnSO₄) prepared in a heated solution with sulfuric acid (H₂SO₄) as solvent.

Optics example

Another autocatalytic system is one driven by light coupled to photo-polymerization reactions. In a process termed optical autocatalysis, positive feedback is created between light intensity and photo-polymerization rate via polymerization-induced increases in the refractive index. Light's preference to occupy regions of higher refractive index results in leakage of light into regions of higher molecular weight, thereby amplifying the photo-chemical reaction. The positive feedback may be expressed schematically as:

higher light intensity → faster photo-polymerization → higher molecular weight → higher refractive index → higher light intensity.

Noting that the photo-polymerization rate is proportional to intensity and that the refractive index is proportional to molecular weight, the positive feedback between intensity and photo-polymerization establishes the autocatalytic behavior. Optical autocatalysis has been shown to result in spontaneous pattern formation in photopolymers. Hosein and co-workers discovered that optical autocatalysis can also occur in photoreactive polymer blends, and that the process can induce binary phase morphologies with the same pattern as the light profile. The light undergoes optical modulation instability, spontaneously dividing into a multitude of optical filaments, and the polymer system thereby forms filaments within the blend structure. The result is a new system that couples optical autocatalytic behavior to spinodal decomposition.

Biological example

It is known that an important metabolic cycle, glycolysis, displays temporal order. Glycolysis consists of the degradation of one molecule of glucose and the overall production of two molecules of ATP. The process is therefore of great importance to the energetics of living cells. The global glycolysis reaction involves glucose, ADP, NAD, pyruvate, ATP, and NADH.

glucose + 2 ADP + 2 Pᵢ + 2 NAD⁺ → 2 pyruvate + 2 ATP + 2 NADH + 2 H⁺.

The details of the process are quite involved; however, a section of the process is autocatalyzed by phosphofructokinase (PFK). This portion of the pathway is responsible for oscillations that drive it between an active and an inactive form. Thus, the autocatalytic reaction can modulate the process.

Shape tailoring of thin layers

It is possible to use the results from an autocatalytic reaction coupled with reaction–diffusion system theory to tailor the design of a thin layer. The autocatalytic process allows controlling the nonlinear behavior of the oxidation front, which is used to establish the initial geometry needed to generate the arbitrary final geometry. It has been successfully done in the wet oxidation of AlGaAs to obtain arbitrarily shaped layers of aluminum oxide (AlOₓ).

Phase transitions

The initial amounts of reactants determine the distance from chemical equilibrium of the system: the greater the initial concentrations, the further the system is from equilibrium. As the initial concentration increases, an abrupt change in order occurs; this abrupt change is known as a phase transition. At the phase transition, fluctuations in macroscopic quantities, such as chemical concentrations, increase as the system oscillates between the more ordered state (lower entropy, such as ice) and the more disordered state (higher entropy, such as liquid water). Also, at the phase transition, macroscopic equations, such as the rate equations, fail. Rate equations can be derived from microscopic considerations, typically via a mean field approximation to the microscopic dynamical equations; mean field theory breaks down in the presence of large fluctuations, and such fluctuations occur precisely in the neighborhood of a phase transition. As the initial concentration increases further, the system settles into an ordered state in which fluctuations are again small.

Asymmetric autocatalysis

Asymmetric autocatalysis occurs when the reaction product is chiral and thus acts as a chiral catalyst for its own production. Reactions of this type, such as the Soai reaction, have the property that they can amplify a very small enantiomeric excess into a large one. This has been proposed as an important step in the origin of biological homochirality.

Role in origin of life

In 1995 Stuart Kauffman proposed that life initially arose as autocatalytic chemical networks.

British ethologist Richard Dawkins wrote about autocatalysis as a potential explanation for abiogenesis in his 2004 book The Ancestor's Tale. He cites experiments performed by Julius Rebek and his colleagues at the Scripps Research Institute in California, in which they combined amino adenosine and pentafluorophenyl ester with the autocatalyst amino adenosine triacid ester (AATE). One system from the experiment contained variants of AATE that catalyzed the synthesis of themselves. The experiment demonstrated that autocatalysts could exhibit competition within a population of entities with heredity, which can be interpreted as a rudimentary form of natural selection, and that certain environmental changes (such as irradiation) could alter the chemical structure of some of these self-replicating molecules (an analog of mutation) in ways that either boosted or interfered with their ability to react, and hence with their ability to replicate and spread in the population.

Autocatalysis plays a major role in the processes of life. Two researchers who have emphasized its role in the origins of life are Robert Ulanowicz  and Stuart Kauffman.

Autocatalysis occurs in the initial transcripts of rRNA. The introns are capable of excising themselves by the process of two nucleophilic transesterification reactions. The RNA able to do this is sometimes referred to as a ribozyme. Additionally, the citric acid cycle is an autocatalytic cycle run in reverse.

Ultimately, biological metabolism itself can be seen as a vast autocatalytic set, in that all of the molecular constituents of a biological cell are produced by reactions involving this same set of molecules.


Positive feedback

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Positive_feedback

Alarm or panic can sometimes be spread by positive feedback among a herd of animals to cause a stampede.

A causal loop diagram can depict the causes of a stampede as a positive feedback loop.

In sociology, a network effect can quickly create the positive feedback of a bank run, such as the 2007 run on the UK's Northern Rock.

Positive feedback (exacerbating feedback, self-reinforcing feedback) is a process that occurs in a feedback loop and exacerbates the effects of a small disturbance: the effects of a perturbation on a system include an increase in the magnitude of the perturbation itself. That is, A produces more of B, which in turn produces more of A. In contrast, a system in which the results of a change act to reduce or counteract it has negative feedback. Both concepts play an important role in science and engineering, including biology, chemistry, and cybernetics.

Mathematically, positive feedback is defined as a positive loop gain around a closed loop of cause and effect. That is, positive feedback is in phase with the input, in the sense that it adds to make the input larger. Positive feedback tends to cause system instability. When the loop gain is positive and above 1, there will typically be exponential growth, increasing oscillations, chaotic behavior or other divergences from equilibrium. System parameters will typically accelerate towards extreme values, which may damage or destroy the system, or may end with the system latched into a new stable state. Positive feedback may be controlled by signals in the system being filtered, damped, or limited, or it can be cancelled or reduced by adding negative feedback.

Positive feedback is used in digital electronics to force voltages away from intermediate voltages into '0' and '1' states. On the other hand, thermal runaway is a type of positive feedback that can destroy semiconductor junctions. Positive feedback in chemical reactions can increase the rate of reactions, and in some cases can lead to explosions. Positive feedback in mechanical design causes tipping-point, or 'over-centre', mechanisms to snap into position, for example in switches and locking pliers. Out of control, it can cause bridges to collapse. Positive feedback in economic systems can cause boom-then-bust cycles. A familiar example of positive feedback is the loud squealing or howling sound produced by audio feedback in public address systems: the microphone picks up sound from its own loudspeakers, amplifies it, and sends it through the speakers again.

Platelet clotting demonstrates positive feedback. The damaged blood vessel wall releases chemicals that initiate the formation of a blood clot through platelet congregation. As more platelets gather, more chemicals are released that speed up the process. The process gets faster and faster until the blood vessel wall is completely sealed and the positive feedback loop has ended; a plot of clotting activity against time accordingly has an exponential form.

Overview

Positive feedback enhances or amplifies an effect by having an influence on the process that gave rise to it. For example, when part of an electronic output signal returns to the input and is in phase with it, the system gain is increased. The feedback from the outcome to the originating process can be direct, or it can be via other state variables. Such systems can give rich qualitative behaviors, but whether the feedback is instantaneously positive or negative in sign has an extremely important influence on the results: positive feedback reinforces, and negative feedback moderates, the original process. Positive and negative in this sense refer to loop gains greater than or less than zero, and do not imply any value judgements as to the desirability of the outcomes or effects. A key feature of positive feedback is thus that small disturbances get bigger: when a change occurs in a system, positive feedback causes further change in the same direction.

Basic

A basic feedback system can be represented by this block diagram. In the diagram the + symbol is an adder and A and B are arbitrary causal functions.

A simple feedback loop is shown in the diagram. If the loop gain AB is positive, then a condition of positive or regenerative feedback exists.

If the functions A and B are linear and AB is smaller than unity, then the overall system gain from the input to output is finite, but can be very large as AB approaches unity. In that case, it can be shown that the overall or "closed loop" gain from input to output is:

G = A / (1 − AB).

When AB > 1, the system is unstable, so it does not have a well-defined gain; the gain may be called infinite.
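
A few numbers make the behaviour near AB = 1 vivid (a toy calculation with assumed gain values):

    # Toy calculation of the closed-loop gain G = A / (1 - A*B) for
    # positive feedback; the gain values are assumptions.
    def closed_loop_gain(A, B):
        """Input-to-output gain of the loop while A*B < 1."""
        return A / (1.0 - A * B)

    for B in (0.0, 0.05, 0.09, 0.099):
        G = closed_loop_gain(10.0, B)
        print(f"A = 10, B = {B:5.3f}   loop gain A*B = {10 * B:4.2f}   G = {G:8.2f}")
    # G climbs 10 -> 20 -> 100 -> 1000 as A*B creeps toward 1. Past
    # A*B = 1 the linear formula no longer applies: the system is unstable.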

Thus depending on the feedback, state changes can be convergent, or divergent. The result of positive feedback is to augment changes, so that small perturbations may result in big changes.

A system in equilibrium in which there is positive feedback to any change from its current state may be unstable, in which case the system is said to be in an unstable equilibrium. The magnitude of the forces that act to move such a system away from its equilibrium is an increasing function of the "distance" of the state from the equilibrium.

Positive feedback does not necessarily imply instability of an equilibrium, for example stable on and off states may exist in positive-feedback architectures.

Hysteresis

Hysteresis causes the output value to depend on the history of the input
 
In a Schmitt trigger circuit, feedback to the non-inverting input of an amplifier pushes the output directly away from the applied voltage towards the maximum or minimum voltage the amplifier can generate.

In the real world, positive feedback loops typically do not cause ever-increasing growth, but are modified by limiting effects of some sort. According to Donella Meadows:

"Positive feedback loops are sources of growth, explosion, erosion, and collapse in systems. A system with an unchecked positive loop ultimately will destroy itself. That’s why there are so few of them. Usually a negative loop will kick in sooner or later."

Hysteresis, in which the starting point affects where the system ends up, can be generated by positive feedback. When the gain of the feedback loop is above 1, then the output moves away from the input: if it is above the input, then it moves towards the nearest positive limit, while if it is below the input then it moves towards the nearest negative limit.

Once it reaches the limit, it will be stable. However, if the input goes past the limit, then the feedback will change sign and the output will move in the opposite direction until it hits the opposite limit. The system therefore shows bistable behaviour.
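
A small state-machine sketch (output limits, thresholds, and input signal are all assumed, illustrative values) captures this bistable, history-dependent behaviour:

    import math

    # Sketch of hysteresis from positive feedback: an idealized Schmitt
    # trigger whose effective threshold depends on its current output.
    V_HIGH, V_LOW = 1.0, -1.0   # output limits
    T_UP, T_DOWN = 0.3, -0.3    # upper and lower switching thresholds
    out = V_LOW

    for i in range(101):
        v_in = math.sin(2 * math.pi * i / 100)   # slowly swept input
        if out == V_LOW and v_in > T_UP:
            out = V_HIGH                          # latch to the high limit
        elif out == V_HIGH and v_in < T_DOWN:
            out = V_LOW                           # latch to the low limit
        # For T_DOWN < v_in < T_UP the output keeps its previous value:
        # the state depends on the input's history, not just its value.
        if i % 20 == 0:
            print(f"v_in = {v_in:+.2f}   out = {out:+.1f}")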

Terminology

The terms positive and negative were first applied to feedback before World War II. The idea of positive feedback was already current in the 1920s with the introduction of the regenerative circuit.

Friis & Jensen (1924) described regeneration in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mention only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black:

"Positive feed-back increases the gain of the amplifier, negative feed-back reduces it."

According to Mindell (2002) confusion in the terms arose shortly after this:

"...Friis and Jensen had made the same distinction Black used between 'positive feed-back' and 'negative feed-back', based not on the sign of the feedback itself but rather on its effect on the amplifier’s gain. In contrast, Nyquist and Bode, when they built on Black’s work, referred to negative feedback as that with the sign reversed. Black had trouble convincing others of the utility of his invention in part because confusion existed over basic matters of definition."

Examples and applications

In electronics

A vintage style regenerative radio receiver. Due to the controlled use of positive feedback, sufficient amplification can be derived from a single vacuum tube or valve (centre).

Regenerative circuits were invented and patented in 1914 for the amplification and reception of very weak radio signals. Carefully controlled positive feedback around a single transistor amplifier can multiply its gain by 1,000 or more, so a stage that would normally have a gain of only 20 to 50 can amplify a signal 20,000 or even 100,000 times. The problem with regenerative amplifiers working at these very high gains is that they easily become unstable and start to oscillate, so the radio operator has to be prepared to tweak the amount of feedback fairly continuously for good reception. Modern radio receivers use the superheterodyne design, with many more amplification stages but much more stable operation and no positive feedback.

The oscillation that can break out in a regenerative radio circuit is used in electronic oscillators. By the use of tuned circuits or a piezoelectric crystal (commonly quartz), the signal that is amplified by the positive feedback remains linear and sinusoidal. There are several designs for such harmonic oscillators, including the Armstrong oscillator, Hartley oscillator, Colpitts oscillator, and the Wien bridge oscillator. They all use positive feedback to create oscillations.

Many electronic circuits, especially amplifiers, incorporate negative feedback. This reduces their gain but improves their linearity, input impedance, output impedance, and bandwidth, and stabilises all of these parameters, including the closed-loop gain. These parameters also become less dependent on the details of the amplifying device itself and more dependent on the feedback components, which are less likely to vary with manufacturing tolerance, age, and temperature.

The difference between positive and negative feedback for AC signals is one of phase: if the signal is fed back out of phase, the feedback is negative, and if it is in phase, the feedback is positive. One problem for amplifier designers who use negative feedback is that some of the components of the circuit will introduce phase shift in the feedback path. If there is a frequency (usually a high frequency) where the phase shift reaches 180°, then the designer must ensure that the amplifier gain at that frequency is very low (usually by low-pass filtering). If the loop gain (the product of the amplifier gain and the extent of the positive feedback) at any frequency is greater than one, then the amplifier will oscillate at that frequency (the Barkhausen stability criterion).

Such oscillations are sometimes called parasitic oscillations. An amplifier that is stable in one set of conditions can break into parasitic oscillation in another. This may be due to changes in temperature, supply voltage, adjustment of front-panel controls, or even the proximity of a person or other conductive item.

Amplifiers may oscillate gently in ways that are hard to detect without an oscilloscope, or the oscillations may be so extensive that only a very distorted signal, or none at all, gets through, or that damage occurs. Low-frequency parasitic oscillations have been called 'motorboating' due to their similarity to the sound of a low-revving exhaust note.

 

The effect of using a Schmitt trigger (B) instead of a comparator (A)

Many common digital electronic circuits employ positive feedback. Normal simple boolean logic gates usually rely simply on gain to push digital signal voltages away from intermediate values to the values that are meant to represent boolean '0' and '1', but many more complex gates use feedback. When an input voltage is expected to vary in an analogue way, but sharp thresholds are required for later digital processing, the Schmitt trigger circuit uses positive feedback to ensure that if the input voltage creeps gently above the threshold, the output is forced smartly and rapidly from one logic state to the other. One of the corollaries of the Schmitt trigger's use of positive feedback is that, should the input voltage move gently down again past the same threshold, the positive feedback will hold the output in the same state with no change. This effect is called hysteresis: the input voltage has to drop past a different, lower threshold to 'un-latch' the output and reset it to its original digital value. By reducing the extent of the positive feedback, the hysteresis width can be reduced, but it cannot be entirely eliminated. The Schmitt trigger is, to some extent, a latching circuit.

Positive feedback is a mechanism by which an output, such as a protein level, is enhanced. To avoid fluctuation in the protein level, the mechanism is inhibited stochastically (I); when the concentration of the activated protein (A) passes the threshold ([I]), the loop mechanism is activated and the concentration of A grows exponentially, following d[A]/dt = k[A].
 
Illustration of an R-S ('reset-set') flip-flop made from two digital nor gates with positive feedback. Red and black mean logical '1' and '0', respectively.

An electronic flip-flop, or "latch", or "bistable multivibrator", is a circuit that due to high positive feedback is not stable in a balanced or intermediate state. Such a bistable circuit is the basis of one bit of electronic memory. The flip-flop uses a pair of amplifiers, transistors, or logic gates connected to each other so that positive feedback maintains the state of the circuit in one of two unbalanced stable states after the input signal has been removed, until a suitable alternative signal is applied to change the state. Computer random access memory (RAM) can be made in this way, with one latching circuit for each bit of memory.

Thermal runaway occurs in electronic systems because some aspect of a circuit is allowed to pass more current when it gets hotter: the hotter it gets, the more current it passes, which heats it further, so it passes yet more current. The effects are usually catastrophic for the device in question. If devices have to be used near their maximum power-handling capacity, and thermal runaway is possible or likely under certain conditions, improvements can usually be achieved by careful design.

A phonograph turntable is prone to acoustic feedback.

Audio and video systems can demonstrate positive feedback. If a microphone picks up the amplified sound output of loudspeakers in the same circuit, then howling and screeching sounds of audio feedback (at up to the maximum power capacity of the amplifier) will be heard, as random noise is re-amplified by positive feedback and filtered by the characteristics of the audio system and the room.

Audio and live music

Audio feedback (also known as acoustic feedback, simply as feedback, or the Larsen effect) is a special kind of positive feedback which occurs when a sound loop exists between an audio input (for example, a microphone or guitar pickup) and an audio output (for example, a loudly-amplified loudspeaker). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting sound is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. For small PA systems the sound is readily recognized as a loud squeal or screech.

Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers use various electronic devices, such as equalizers and, since the 1990s, automatic feedback detection devices to prevent these unwanted squeals or screeching sounds, which detract from the audience's enjoyment of the event. On the other hand, since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers and distortion effects have intentionally created guitar feedback to create a desirable musical effect. "I Feel Fine" by the Beatles marks one of the earliest examples of the use of feedback as a recording effect in popular music. It starts with a single, percussive feedback note produced by plucking the A string on Lennon's guitar. Artists such as the Kinks and the Who had already used feedback live, but Lennon remained proud of the fact that the Beatles were perhaps the first group to deliberately put it on vinyl. In one of his last interviews, he said, "I defy anybody to find a record—unless it's some old blues record in 1922—that uses feedback that way."

The principles of audio feedback were first discovered by Danish scientist Søren Absalon Larsen. Microphones are not the only transducers subject to this effect. Record deck pickup cartridges can do the same, usually in the low frequency range below about 100 Hz, manifesting as a low rumble. Jimi Hendrix was an innovator in the intentional use of guitar feedback in his guitar solos to create unique sound effects. He helped develop the controlled and musical use of audio feedback in electric guitar playing, and later Brian May was a famous proponent of the technique.

Video

Similarly, if a video camera is pointed at a monitor screen that is displaying the camera's own signal, then repeating patterns can be formed on the screen by positive feedback. This video feedback effect was used in the opening sequences to the first ten series of the television program Doctor Who.

Switches

In electrical switches, including bimetallic-strip-based thermostats, the switch usually has hysteresis in the switching action. In these cases hysteresis is mechanically achieved via positive feedback within a tipping-point mechanism. The positive feedback action minimises the time for which arcing occurs during switching and also holds the contacts in an open or closed state.

In biology

Positive feedback is the amplification of a body's response to a stimulus. For example, in childbirth, when the head of the fetus pushes up against the cervix (1), it stimulates a nerve impulse from the cervix to the brain (2). When the brain is notified, it signals the pituitary gland to release a hormone called oxytocin (3). Oxytocin is then carried via the bloodstream to the uterus (4), causing contractions that push the fetus towards the cervix, eventually inducing childbirth.

In physiology

A number of examples of positive feedback systems may be found in physiology.

  • One example is the onset of contractions in childbirth, known as the Ferguson reflex. When a contraction occurs, the hormone oxytocin causes a nerve stimulus, which stimulates the hypothalamus to produce more oxytocin, which increases uterine contractions. This results in contractions increasing in amplitude and frequency.
  • Another example is the process of blood clotting. The loop is initiated when injured tissue releases signal chemicals that activate platelets in the blood. An activated platelet releases chemicals to activate more platelets, causing a rapid cascade and the formation of a blood clot.
  • Lactation also involves positive feedback in that as the baby suckles on the nipple there is a nerve response into the spinal cord and up into the hypothalamus of the brain, which then stimulates the pituitary gland to produce more prolactin to produce more milk.
  • A spike in estrogen during the follicular phase of the menstrual cycle causes ovulation.
  • The generation of nerve signals is another example, in which the membrane of a nerve fibre causes slight leakage of sodium ions through sodium channels, resulting in a change in the membrane potential, which in turn causes more opening of channels, and so on (Hodgkin cycle). So a slight initial leakage results in an explosion of sodium leakage which creates the nerve action potential.
  • In excitation–contraction coupling of the heart, an increase in intracellular calcium ions to the cardiac myocyte is detected by ryanodine receptors in the membrane of the sarcoplasmic reticulum which transport calcium out into the cytosol in a positive feedback physiological response.

In most cases, such feedback loops culminate in counter-signals being released that suppress or break the loop. Childbirth contractions stop when the baby is out of the mother's body. Chemicals break down the blood clot. Lactation stops when the baby no longer nurses.

In gene regulation

Positive feedback is a well-studied phenomenon in gene regulation, where it is most often associated with bistability. Positive feedback occurs when a gene activates itself directly, or indirectly via a double-negative feedback loop. Genetic engineers have constructed and tested simple positive feedback networks in bacteria to demonstrate the concept of bistability. A classic example of positive feedback is the lac operon in E. coli. Positive feedback plays an integral role in cellular differentiation, development, and cancer progression, and therefore positive feedback in gene regulation can have significant physiological consequences. Random motions in molecular dynamics coupled with positive feedback can trigger interesting effects, such as creating a population of phenotypically different cells from the same parent cell. This happens because noise can be amplified by positive feedback. Positive feedback can also occur in other forms of cell signaling, such as enzyme kinetics or metabolic pathways.
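
A toy model (an assumed illustration, not a specific operon) shows how cooperative positive autoregulation yields bistability: a protein that activates its own production via a Hill-type term has an "off" state and an "on" state, and which one a cell reaches depends on where noise happens to put it.

    # Toy bistable switch from positive autoregulation (all parameters
    # are assumptions): d[A]/dt = beta*A^2/(K^2 + A^2) - d*A.
    beta, K, d = 4.0, 1.0, 1.0
    dt = 0.01

    for A0 in (0.1, 0.5, 5.0):
        A = A0
        for _ in range(10_000):
            A += (beta * A * A / (K * K + A * A) - d * A) * dt
        print(f"A(0) = {A0:3.1f}   steady state A = {A:.3f}")
    # Below the unstable threshold A = 2 - sqrt(3) ~ 0.27 the system
    # relaxes to the "off" state A = 0; above it, to the "on" state
    # A = 2 + sqrt(3) ~ 3.73. Noise that nudges A across the threshold
    # can thus commit genetically identical cells to different fates.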

In evolutionary biology

Positive feedback loops have been used to describe aspects of the dynamics of change in biological evolution. For example, beginning at the macro level, Alfred J. Lotka (1945) argued that the evolution of a species was most essentially a matter of selection that fed back energy flows to capture more and more energy for use by living systems. At the human level, Richard D. Alexander (1989) proposed that social competition between and within human groups fed back to the selection of intelligence, thus constantly producing more and more refined human intelligence. Crespi (2004) discussed several other examples of positive feedback loops in evolution. Evolutionary arms races provide further examples of positive feedback in biological systems.

During the Phanerozoic, biodiversity shows a steady but not monotonic increase from near zero to several thousand genera.

It has been shown that changes in biodiversity through the Phanerozoic correlate much better with a hyperbolic model (widely used in demography and macrosociology) than with exponential and logistic models (traditionally used in population biology and extensively applied to fossil biodiversity as well). The latter models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) and/or a negative feedback arising from resource limitation. The hyperbolic model implies a second-order positive feedback. The hyperbolic pattern of world population growth has been demonstrated (see below) to arise from a second-order positive feedback between the population size and the rate of technological growth. The hyperbolic character of biodiversity growth can be similarly accounted for by a positive feedback between diversity and community structure complexity. It has been suggested that the similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend (produced by the positive feedback) with cyclical and stochastic dynamics.

Immune system

A cytokine storm, or hypercytokinemia is a potentially fatal immune reaction consisting of a positive feedback loop between cytokines and immune cells, with highly elevated levels of various cytokines. In normal immune function, positive feedback loops can be utilized to enhance the action of B lymphocytes. When a B cell binds its antibodies to an antigen and becomes activated, it begins releasing antibodies and secreting a complement protein called C3. Both C3 and a B cell's antibodies can bind to a pathogen, and when a B cell has its antibodies bind to a pathogen with C3, it speeds up that B cell's secretion of more antibodies and more C3, thus creating a positive feedback loop.

Cell death

Apoptosis is a caspase-mediated process of cellular death whose aim is the removal of long-lived or damaged cells. A failure of this process has been implicated in prominent conditions such as cancer and Parkinson's disease. The very core of the apoptotic process is the auto-activation of caspases, which may be modeled via a positive feedback loop. This positive feedback exerts an auto-activation of the effector caspase by means of intermediate caspases. When isolated from the rest of the apoptotic pathway, this positive feedback loop presents only one stable steady state, regardless of the number of intermediate activation steps of the effector caspase. When this core process is complemented with inhibitors and enhancers of caspase effects, the process presents bistability, thereby modeling the alive and dying states of a cell.

In psychology

Winner (1996) described gifted children as driven by positive feedback loops involving setting their own learning course, this feeding back satisfaction, thus further setting their learning goals to higher levels, and so on. Winner termed this positive feedback loop a "rage to master". Vandervert (2009a, 2009b) proposed that the child prodigy can be explained in terms of a positive feedback loop between the output of thinking/performing in working memory, which is fed to the cerebellum where it is streamlined, and then fed back to working memory, thus steadily increasing the quantitative and qualitative output of working memory. Vandervert also argued that this working memory/cerebellar positive feedback loop was responsible for language evolution in working memory.

In economics

Markets with social influence

Product recommendations and information about past purchases have been shown to influence consumers' choices significantly, whether for music, movies, books, technology, or other types of products. Social influence often induces a rich-get-richer phenomenon (the Matthew effect) where popular products tend to become even more popular.

Market dynamics

According to the theory of reflexivity advanced by George Soros, price changes are driven by a positive feedback process whereby investors' expectations are influenced by price movements so their behaviour acts to reinforce movement in that direction until it becomes unsustainable, whereupon the feedback drives prices in the opposite direction.

Systemic risk

Systemic risk is the risk that an amplification or leverage or positive feedback process presents to a system. This is usually unknown, and under certain conditions this process can amplify exponentially and rapidly lead to destructive or chaotic behavior. A Ponzi scheme is a good example of a positive-feedback system: funds from new investors are used to pay out unusually high returns, which in turn attract more new investors, causing rapid growth toward collapse. W. Brian Arthur has also studied and written on positive feedback in the economy (e.g. W. Brian Arthur, 1990). Hyman Minsky proposed a theory that certain credit expansion practices could make a market economy into "a deviation amplifying system" that could suddenly collapse, sometimes called a "Minsky moment".

Simple systems that clearly separate the inputs from the outputs are not prone to systemic risk. This risk is more likely as the complexity of the system increases, because it becomes more difficult to see or analyze all the possible combinations of variables in the system even under careful stress testing conditions. The more efficient a complex system is, the more likely it is to be prone to systemic risks, because it takes only a small amount of deviation to disrupt the system. Therefore, well-designed complex systems generally have built-in features to avoid this condition, such as a small amount of friction, or resistance, or inertia, or time delay to decouple the outputs from the inputs within the system. These factors amount to an inefficiency, but they are necessary to avoid instabilities.

The 2010 Flash Crash incident was blamed on the practice of high-frequency trading (HFT), although whether HFT really increases systemic risk remains controversial.

Human population growth

Agriculture and human population can be considered to be in a positive feedback mode, which means that one drives the other with increasing intensity. It is suggested that this positive feedback system will end sometime with a catastrophe, as modern agriculture is using up all of the easily available phosphate and is resorting to highly efficient monocultures which are more susceptible to systemic risk.

Technological innovation and human population can be similarly considered, and this has been offered as an explanation for the apparent hyperbolic growth of the human population in the past, instead of a simpler exponential growth. It is proposed that the growth rate is accelerating because of second-order positive feedback between population and technology. Technological growth increases the carrying capacity of land for people, which leads to a growing population, and this in turn drives further technological growth.
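
The difference between first-order and second-order positive feedback is easy to see numerically (a sketch with assumed constants): first-order feedback gives exponential growth, while second-order feedback gives hyperbolic growth with a finite-time singularity.

    # Sketch (assumed constants): first-order feedback dN/dt = k*N versus
    # second-order feedback dN/dt = k*N^2.
    k = 0.1
    N_exp = N_hyp = 1.0
    dt = 0.001

    for step in range(1, 9_001):
        N_exp += k * N_exp * dt
        N_hyp += k * N_hyp * N_hyp * dt
        if step % 3_000 == 0:
            t = step * dt
            print(f"t = {t:4.1f}   exponential = {N_exp:7.3f}   hyperbolic = {N_hyp:8.3f}")
    # The hyperbolic solution is N(t) = N0/(1 - k*N0*t), which blows up
    # at t = 1/(k*N0) = 10 here; the exponential never does. Hyperbolic
    # growth is the signature of second-order positive feedback.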

Prejudice, social institutions and poverty

Gunnar Myrdal described a vicious circle of increasing inequality and poverty, which is known as "circular cumulative causation".

In meteorology

Drought intensifies through positive feedback. A lack of rain decreases soil moisture, which kills plants and/or causes them to release less water through transpiration. Both factors limit evapotranspiration, the process by which water vapor is added to the atmosphere from the surface, and add dry dust to the atmosphere, which absorbs water. Less water vapor means both a lower dew point temperature and more efficient daytime heating, decreasing the chance that humidity in the atmosphere will lead to cloud formation. Lastly, without clouds there cannot be rain, and the loop is complete.

In climatology

Climate "forcings" may push a climate system in the direction of warming or cooling, for example, increased atmospheric concentrations of greenhouse gases cause warming at the surface. Forcings are external to the climate system and feedbacks are internal processes of the system. Some feedback mechanisms act in relative isolation to the rest of the climate system while others are tightly coupled. Forcings, feedbacks and the dynamics of the climate system determine how much and how fast the climate changes. The main positive feedback in global warming is the tendency of warming to increase the amount of water vapor in the atmosphere, which in turn leads to further warming. The main negative feedback comes from the Stefan–Boltzmann law, the amount of heat radiated from the Earth into space is proportional to the fourth power of the temperature of Earth's surface and atmosphere.

Other examples of positive feedback subsystems in climatology include:

  • A warmer atmosphere will melt ice and this changes the albedo which further warms the atmosphere.
  • Methane hydrates can be unstable so that a warming ocean could release more methane, which is also a greenhouse gas.
  • Peat, occurring naturally in peat bogs, contains carbon. When peat dries it decomposes, and may additionally burn. Peat also releases nitrous oxide.
  • Global warming affects the cloud distribution. Clouds at higher altitudes enhance the greenhouse effects, while low clouds mainly reflect back sunlight, having opposite effects on temperature.

The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report states that "Anthropogenic warming could lead to some effects that are abrupt or irreversible, depending upon the rate and magnitude of the climate change."

In sociology

A self-fulfilling prophecy is a social positive feedback loop between beliefs and behavior: if enough people believe that something is true, their behavior can make it true, and observations of their behavior may in turn increase belief. A classic example is a bank run.

Another sociological example of positive feedback is the network effect. When more people are encouraged to join a network, the reach of the network increases, and therefore the network expands ever more quickly. A viral video is an example of the network effect: links to a popular video are shared and redistributed, ensuring that more people see the video and then re-publish the links. This is the basis for many social phenomena, including Ponzi schemes and chain letters. In many cases population size is the limiting factor on the feedback effect.

In chemistry

If a chemical reaction causes the release of heat, and the reaction itself happens faster at higher temperatures, then there is a high likelihood of positive feedback. If the heat produced is not removed from the reactants fast enough, thermal runaway can occur and very quickly lead to a chemical explosion.
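
A minimal sketch of this loop (Arrhenius-type rate, all constants assumed and illustrative) shows the characteristic slow-then-sudden temperature rise:

    import math

    # Sketch of thermal runaway (illustrative constants): released heat
    # raises T, and the Arrhenius factor exp(-Ea_R/T) makes the heating
    # rate grow with T, closing the positive feedback loop.
    q = 1.0e6        # heating scale, K/s
    Ea_R = 10_000.0  # activation temperature Ea/R, K
    T, t, dt = 600.0, 0.0, 0.1

    while T < 2_000.0 and t < 2_000.0:
        T += q * math.exp(-Ea_R / T) * dt
        t += dt
    print(f"T reached {T:.0f} K after t = {t:.0f} s")
    # Heating is imperceptible at first, then accelerates sharply: a
    # finite-time runaway unless cooling removes heat fast enough.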

In conservation

Many wild animals are hunted for their body parts, which can be quite valuable. The closer to extinction a targeted species becomes, the higher the price its parts command. This is an example of positive feedback.

Matthew effect


From Wikipedia, the free encyclopedia

The Matthew effect of accumulated advantage, Matthew principle, or Matthew effect for short, is sometimes summarized by the adage "the rich get richer and the poor get poorer". The concept is applicable to matters of fame or status, but may also be applied literally to cumulative advantage of economic capital. Early work on Matthew effects focused primarily on inequality in the way scientists were recognized for their work. However, Norman Storer, of Columbia University, led a new wave of research, finding that the inequality that existed in the social sciences also existed in other institutions.

The term was coined by sociologist Robert K. Merton in 1968 and takes its name from the parable of the talents or minas in the biblical Gospel of Matthew. Merton credited his collaborator and wife, sociologist Harriet Zuckerman, as co-author of the concept of the Matthew effect.

Etymology

The concept is named according to two of the parables of Jesus in the synoptic Gospels (Table 2 of the Eusebian Canons).

The concept concludes both synoptic versions of the parable of the talents:

For to every one who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away.

— Matthew 25:29, RSV.

I tell you, that to every one who has will more be given; but from him who has not, even what he has will be taken away.

— Luke 19:26, RSV.

The concept concludes two of the three synoptic versions of the parable of the lamp under a bushel (absent in the version of Matthew):

For to him who has will more be given; and from him who has not, even what he has will be taken away.

— Mark 4:25, RSV.

Take heed then how you hear; for to him who has will more be given, and from him who has not, even what he thinks that he has will be taken away.

— Luke 8:18, RSV.

The concept is presented again in Matthew outside of a parable during Christ's explanation to his disciples of the purpose of parables:

And he answered them, "To you it has been given to know the secrets of the kingdom of heaven, but to them it has not been given. For to him who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away."

— Matthew 13:11–12, RSV.

Sociology of science

In the sociology of science, "Matthew effect" was a term coined by Robert K. Merton to describe how, among other things, eminent scientists will often get more credit than a comparatively unknown researcher, even if their work is similar; it also means that credit will usually be given to researchers who are already famous. For example, a prize will almost always be awarded to the most senior researcher involved in a project, even if all the work was done by a graduate student. This was later formulated by Stephen Stigler as Stigler's law of eponymy – "No scientific discovery is named after its original discoverer" – with Stigler explicitly naming Merton as the true discoverer, making his "law" an example of itself.

Merton furthermore argued that in the scientific community the Matthew effect reaches beyond simple reputation to influence the wider communication system, playing a part in social selection processes and resulting in a concentration of resources and talent. He gave as an example the disproportionate visibility given to articles from acknowledged authors, at the expense of equally valid or superior articles written by unknown authors. He also noted that the concentration of attention on eminent individuals can lead to an increase in their self-assurance, pushing them to perform research in important but risky problem areas.

Examples

As credit is valued in science, specific claims of the Matthew effect are contentious. Many examples below exemplify more famous scientists getting credit for discoveries due to their fame, even as other less notable scientists had preempted their work.

  • Ray Solomonoff ... introduced [what is now known as] "Kolmogorov complexity" in a long journal paper in 1964. ... This makes Solomonoff the first inventor and raises the question whether we should talk about Solomonoff complexity. ...
  • There are many uncontroversial examples of the Matthew effect in mathematics, where a concept is due to one mathematician (and well-documented as such), but is attributed to a later (possibly much later), more famous mathematician who worked on it. For instance, the Poincaré disk model and Poincaré half-plane model of hyperbolic space are both named for Henri Poincaré, but were introduced by Eugenio Beltrami in 1868 (when Poincaré was 14 and had not as yet contributed to hyperbolic geometry).
  • A model for career progress quantitatively incorporates the Matthew effect in order to predict the distribution of individual career length in competitive professions. The model predictions are validated by analyzing the empirical distributions of career length for careers in science and professional sports (e.g. Major League Baseball). As a result, the disparity between the large number of short careers and the relatively small number of extremely long careers can be explained by the "rich-get-richer" mechanism, which, in this framework, provides more experienced and more reputable individuals with a competitive advantage in obtaining new career opportunities.
  • In his 2011 book The Better Angels of Our Nature: Why Violence Has Declined, cognitive psychologist Steven Pinker refers to the Matthew Effect in societies, whereby everything seems to go right in some, and wrong in others. He speculates in Chapter 9 that this could be the result of a positive feedback loop in which reckless behavior by some individuals creates a chaotic environment that encourages reckless behavior by others. He cites research by Martin Daly and Margo Wilson showing that the more unstable the environment, the more steeply people discount the future, and thus the less forward-looking their behavior.
  • A large Matthew effect was discovered in a study of science funding in the Netherlands, where winners just above the funding threshold were found to accumulate more than twice as much funding during the subsequent eight years as non-winners with near-identical review scores that fell just below the threshold.

In science, dramatic differences in productivity may be explained by three phenomena: sacred spark, cumulative advantage, and search-cost minimization by journal editors. The sacred spark paradigm suggests that scientists differ in their initial abilities, talent, skills, persistence, work habits, etc., which provide particular individuals with an early advantage. These factors have a multiplicative effect that helps these scholars succeed later. The cumulative advantage model argues that an initial success helps a researcher gain access to resources (e.g., teaching release, the best graduate students, funding, facilities, etc.), which in turn results in further success. Search-cost minimization by journal editors takes place when editors try to save time and effort by consciously or subconsciously selecting articles from well-known scholars. Whereas the exact mechanism underlying these phenomena is yet unknown, it is documented that a minority of all academics produce the most research output and attract the most citations.

Education

In education, the term "Matthew effect" has been adopted by psychologist Keith Stanovich to describe a phenomenon observed in research on how new readers acquire the skills to read: early success in acquiring reading skills usually leads to later successes in reading as the learner grows, while failing to learn to read before the third or fourth year of schooling may be indicative of lifelong problems in learning new skills.

This is because children who fall behind in reading read less, increasing the gap between them and their peers. Later, when students need to "read to learn" (where before they were learning to read), their reading difficulty creates difficulty in most other subjects. In this way they fall further and further behind in school, dropping out at a much higher rate than their peers.

In the words of Stanovich:

Slow reading acquisition has cognitive, behavioral, and motivational consequences that slow the development of other cognitive skills and inhibit performance on many academic tasks. In short, as reading develops, other cognitive processes linked to it track the level of reading skill. Knowledge bases that are in reciprocal relationships with reading are also inhibited from further development. The longer this developmental sequence is allowed to continue, the more generalized the deficits will become, seeping into more and more areas of cognition and behavior. Or to put it more simply – and sadly – in the words of a tearful nine-year-old, already falling frustratingly behind his peers in reading progress, "Reading affects everything you do."

The Matthew effect plays a role in today's educational system.

Students around the United States take the SAT every year and then send those scores to the colleges to which they are applying. The College Board, which administers the SAT, conducted a study based on the income earned by the families of the test takers. The results showed the Matthew effect is prevalent in family economic earnings: "Students from families earning more than $200,000 a year average a combined score of 1,714, while students from families earning under $20,000 a year average a combined score of 1,326."

Not only do students with a wealthier family score better, but statistics show that students with parents that have accomplished more in school perform better as well. A student with a parent with a graduate degree, for example, averages 300 points higher on their SAT compared to a student with a parent with only a high school degree.

Network science

In network science, the Matthew effect is used to describe the preferential attachment of earlier nodes in a network, which explains that these nodes tend to attract more links early on. "Because of preferential attachment, a node that acquires more connections than another one will increase its connectivity at a higher rate, and thus an initial difference in the connectivity between two nodes will increase further as the network grows, while the degree of individual nodes will grow proportional with the square root of time." The Matthew Effect therefore explains the growth of some nodes in vast networks such as the Internet.
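
A short simulation (an assumed Barabási–Albert-style sketch, not from the article) makes the rich-get-richer dynamics concrete: each new node attaches to an existing node chosen with probability proportional to its degree, and early nodes end up with far more links than late arrivals.

    import random

    # Sketch of preferential attachment: new nodes link to an existing
    # node with probability proportional to its current degree.
    # Sampling a uniform entry from the edge-endpoint list is equivalent
    # to sampling a node in proportion to its degree.
    random.seed(0)
    degrees = [1, 1]      # two initial nodes joined by one edge
    endpoints = [0, 1]    # one list entry per edge endpoint

    for new_node in range(2, 10_000):
        target = random.choice(endpoints)
        degrees.append(1)
        degrees[target] += 1
        endpoints += [new_node, target]

    print("max degree:", max(degrees))
    print("median degree:", sorted(degrees)[len(degrees) // 2])
    # A handful of early nodes accumulate hundreds of links while the
    # typical node keeps one or two: the Matthew effect in network form.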

Markets with social influence

Social influence often induces a rich-get-richer phenomenon where popular products tend to become even more popular. An example of the Matthew effect's role in social influence is MUSICLAB, an experimental virtual market created by Salganik, Dodds, and Watts. In MUSICLAB, people could listen to music and choose to download the songs they enjoyed the most. The song choices were unknown songs produced by unknown bands. Two groups were tested: one group was given no additional information on the songs, and the other was told the popularity of each song and the number of times it had previously been downloaded. As a result, the group that saw which songs were most popular and most downloaded was biased to choose those songs as well. The songs that were most popular and most downloaded stayed at the top of the list and consistently received the most plays. To summarize the experiment's findings, the performance rankings had the largest effect, boosting expected downloads the most; download rankings had a decent effect, though not as impactful as the performance rankings. Also, Abeliuk et al. (2016) showed that when utilizing “performance rankings”, a monopoly will be created for the most popular songs.

Political science

Liberalization in autocracies is more likely to succeed in countries with the advantage of a better starting point concerning political institutions, GDP, and education. These more privileged countries can also carry out key reforms more rapidly, and are able to do so even in areas with no initial advantage.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...