
Tuesday, November 8, 2022

Exponential decay

From Wikipedia, the free encyclopedia
 
A quantity undergoing exponential decay. Larger decay constants make the quantity vanish much more rapidly. This plot shows decay for decay constant (λ) of 25, 5, 1, 1/5, and 1/25 for x from 0 to 5.

A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant:

dN(t)/dt = −λN(t).

The solution to this equation (see derivation below) is:

N(t) = N0 e^(−λt),

where N(t) is the quantity at time t, N0 = N(0) is the initial quantity, that is, the quantity at time t = 0, and the constant λ is called the decay constant, disintegration constant, rate constant, or transformation constant.
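As a quick numerical illustration (a minimal sketch, not part of the original article; the function name and sample values are invented), the solution can be evaluated directly:

```python
import math

def exponential_decay(n0, decay_constant, t):
    """Quantity remaining at time t for initial quantity n0 and decay constant λ."""
    return n0 * math.exp(-decay_constant * t)

# Example: N0 = 1000 units, λ = 0.5 per unit time.
for t in (0, 1, 2, 5):
    print(t, round(exponential_decay(1000, 0.5, t), 1))
# 0 1000.0
# 1 606.5
# 2 367.9
# 5 82.1
```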

Measuring rates of decay

Mean lifetime

If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime), where the exponential time constant, τ, relates to the decay rate, λ, in the following way:

τ = 1/λ.

The mean lifetime can be looked at as a "scaling time", because the exponential decay equation can be written in terms of the mean lifetime, τ, instead of the decay constant, λ:

N(t) = N0 e^(−t/τ),

and τ is the time at which the population of the assembly is reduced to 1/e ≈ 0.367879441 times its initial value.

For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368.
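That figure can be checked in one line (again just a sketch; the value of τ is arbitrary because N(τ)/N0 is always 1/e):

```python
import math

N0 = 1000
tau = 2.0                               # any mean lifetime; the ratio N(tau)/N0 is always 1/e
N_at_tau = N0 * math.exp(-tau / tau)    # N(t) = N0 * exp(-t/tau) evaluated at t = tau
print(round(N_at_tau))                  # 368
```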

A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life".

Half-life

A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. (If N(t) is discrete, then this is the median life-time rather than the mean life-time.) This time is called the half-life, and often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as:

t1/2 = ln(2)/λ = τ ln(2).

When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equation becomes:

N(t) = N0 2^(−t/t1/2).

Thus, the amount of material left is 2⁻¹ = 1/2 raised to the (whole or fractional) number of half-lives that have passed. For example, after 3 half-lives there will be 1/2³ = 1/8 of the original material left.

Therefore, the mean lifetime is equal to the half-life divided by the natural log of 2, or:

τ = t1/2 / ln(2) ≈ 1.44 · t1/2.

For example, polonium-210 has a half-life of 138 days, and a mean lifetime of 200 days.
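Those polonium-210 figures convert as follows (a small sketch using the rounded 138-day half-life quoted above):

```python
import math

half_life_days = 138.0                            # polonium-210, as quoted above
decay_constant = math.log(2) / half_life_days     # λ = ln 2 / t1/2, per day
mean_lifetime = 1 / decay_constant                # τ = 1/λ = t1/2 / ln 2

print(round(decay_constant, 5))   # 0.00502 per day
print(round(mean_lifetime, 1))    # 199.1 days, i.e. roughly 200 days
```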

Solution of the differential equation

The equation that describes exponential decay is

dN(t)/dt = −λN(t),

or, by rearranging (applying the technique called separation of variables),

dN/N = −λ dt.

Integrating, we have

ln N = −λt + C,

where C is the constant of integration, and hence

N(t) = e^C e^(−λt) = N0 e^(−λt),

where the final substitution, N0 = e^C, is obtained by evaluating the equation at t = 0, as N0 is defined as being the quantity at t = 0.

This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue. In this case, λ is the eigenvalue of the negative of the differential operator with N(t) as the corresponding eigenfunction. The units of the decay constant are s⁻¹.
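The closed-form solution can also be checked numerically. The sketch below (an illustration with made-up values, using a plain forward-Euler step rather than any particular library) integrates dN/dt = −λN and compares the result with N0 e^(−λt):

```python
import math

lam, N0, T, steps = 1.0, 1000.0, 2.0, 100_000
dt = T / steps

N = N0
for _ in range(steps):
    N += -lam * N * dt              # forward-Euler step of dN/dt = -λN

exact = N0 * math.exp(-lam * T)
print(round(N, 3), round(exact, 3))  # 135.333 135.335
```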

Derivation of the mean lifetime

Given an assembly of elements, the number of which decreases ultimately to zero, the mean lifetime, τ (also called simply the lifetime), is the expected value of the amount of time before an object is removed from the assembly. Specifically, if the individual lifetime of an element of the assembly is the time elapsed between some reference time and the removal of that element from the assembly, the mean lifetime is the arithmetic mean of the individual lifetimes.

Starting from the population formula

N = N0 e^(−λt),

first let c be the normalizing factor to convert to a probability density function:

1 = ∫₀^∞ c N0 e^(−λt) dt = c N0 / λ,

or, on rearranging,

c = λ / N0.

Exponential decay is a scalar multiple of the exponential distribution (i.e. the individual lifetime of each object is exponentially distributed), which has a well-known expected value. We can compute it here using integration by parts.
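Carrying out that integration by parts explicitly (a short worked derivation, using the normalized density λ e^(−λt) obtained above):

```latex
\tau = \langle t \rangle
     = \int_0^\infty t\, \lambda e^{-\lambda t}\, \mathrm{d}t
     = \left[ -t\, e^{-\lambda t} \right]_0^\infty + \int_0^\infty e^{-\lambda t}\, \mathrm{d}t
     = 0 + \frac{1}{\lambda}
     = \frac{1}{\lambda}.
```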

Decay by two or more processes

A quantity may decay via two or more different processes simultaneously. In general, these processes (often called "decay modes", "decay channels", "decay routes" etc.) have different probabilities of occurring, and thus occur at different rates with different half-lives, in parallel. The total decay rate of the quantity N is given by the sum of the decay routes; thus, in the case of two processes:

dN(t)/dt = −(λ1 + λ2) N(t).

The solution to this equation is given in the previous section, where the sum λ1 + λ2 is treated as a new total decay constant λc.

The partial mean life associated with an individual process is by definition the multiplicative inverse of the corresponding partial decay constant: τ1 = 1/λ1 and τ2 = 1/λ2. A combined τc can be given in terms of the λs (or, equivalently, the partial mean lives):

1/τc = λc = λ1 + λ2 = 1/τ1 + 1/τ2, i.e. τc = τ1 τ2 / (τ1 + τ2).

Since half-lives differ from mean life by a constant factor, the same equation holds in terms of the two corresponding half-lives:

1/T1/2 = 1/t1 + 1/t2,

where T1/2 is the combined or total half-life for the process, and t1 and t2 are the so-named partial half-lives of the corresponding processes. The terms "partial half-life" and "partial mean life" denote quantities derived from a decay constant as if the given decay mode were the only decay mode for the quantity. The term "partial half-life" is misleading, because it cannot be measured as a time interval for which a certain quantity is halved.

In terms of separate decay constants, the total half-life can be shown to be

T1/2 = ln(2) / λc = ln(2) / (λ1 + λ2) = t1 t2 / (t1 + t2).

For a decay by three simultaneous exponential processes the total half-life can be computed as above:

T1/2 = ln(2) / (λ1 + λ2 + λ3) = t1 t2 t3 / (t1 t2 + t1 t3 + t2 t3).
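A small sketch of that combination rule (the partial half-lives here are arbitrary illustrative numbers):

```python
def combined_half_life(partial_half_lives):
    """Total half-life for decay by several independent exponential processes.

    Each partial half-life t_i contributes a partial decay constant ln2/t_i;
    the constants add, so the ln2 factors cancel and the combined half-life
    is 1 / (1/t_1 + 1/t_2 + ...).
    """
    return 1.0 / sum(1.0 / t for t in partial_half_lives)

print(combined_half_life([10.0, 15.0]))        # 6.0
print(combined_half_life([10.0, 15.0, 30.0]))  # 5.0
```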

Decay series / coupled decay

In nuclear science and pharmacokinetics, the agent of interest might be situated in a decay chain, where the accumulation is governed by exponential decay of a source agent, while the agent of interest itself decays by means of an exponential process.

These systems are solved using the Bateman equation.
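For the simplest two-member chain (a parent feeding a daughter that itself decays), the Bateman solution for the daughter takes a closed form. The sketch below is only an illustration of that special case with invented decay constants, not a general Bateman implementation:

```python
import math

def daughter_population(n1_0, lam1, lam2, t):
    """Daughter nuclei at time t for the chain 1 -> 2 -> ..., starting from
    n1_0 parent nuclei and no daughters (two-member Bateman solution)."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Illustrative values only: parent decays slowly (λ1), daughter decays faster (λ2).
for t in (0.0, 1.0, 5.0, 20.0):
    print(t, round(daughter_population(1000.0, 0.1, 0.5, t), 1))
```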

In the pharmacology setting, some ingested substances might be absorbed into the body by a process reasonably modeled as exponential decay, or might be deliberately formulated to have such a release profile.

Applications and examples

Exponential decay occurs in a wide variety of situations. Most of these fall into the domain of the natural sciences.

Many decay processes that are often treated as exponential are really only exponential so long as the sample is large and the law of large numbers holds. For small samples, a more general analysis is necessary, accounting for a Poisson process.

Natural sciences

  • Chemical reactions: The rates of certain types of chemical reactions depend on the concentration of one or another reactant. Reactions whose rate depends only on the concentration of one reactant (known as first-order reactions) consequently follow exponential decay. For instance, many enzyme-catalyzed reactions behave this way.
  • Electrostatics: The electric charge (or, equivalently, the potential) contained in a capacitor (capacitance C) changes exponentially if the capacitor experiences a constant external load (resistance R). The exponential time-constant τ for the process is R C, and the half-life is therefore R C ln 2 (see the numeric sketch after this list). This applies to both charging and discharging, i.e. a capacitor charges or discharges according to the same law. The same equations can be applied to the current in an inductor. (Furthermore, the particular case of a capacitor or inductor changing through several parallel resistors makes an interesting example of multiple decay processes, with each resistor representing a separate process. In fact, the expression for the equivalent resistance of two resistors in parallel mirrors the equation for the half-life with two decay processes.)
  • Geophysics: Atmospheric pressure decreases approximately exponentially with increasing height above sea level, at a rate of about 12% per 1000m.
  • Heat transfer: If an object at one temperature is exposed to a medium of another temperature, the temperature difference between the object and the medium follows exponential decay (in the limit of slow processes; equivalent to "good" heat conduction inside the object, so that its temperature remains relatively uniform through its volume). See also Newton's law of cooling.
  • Luminescence: After excitation, the emission intensity – which is proportional to the number of excited atoms or molecules – of a luminescent material decays exponentially. Depending on the number of mechanisms involved, the decay can be mono- or multi-exponential.
  • Pharmacology and toxicology: It is found that many administered substances are distributed and metabolized (see clearance) according to exponential decay patterns. The biological half-lives "alpha half-life" and "beta half-life" of a substance measure how quickly a substance is distributed and eliminated.
  • Physical optics: The intensity of electromagnetic radiation such as light or X-rays or gamma rays in an absorbent medium, follows an exponential decrease with distance into the absorbing medium. This is known as the Beer-Lambert law.
  • Radioactivity: In a sample of a radionuclide that undergoes radioactive decay to a different state, the number of atoms in the original state follows exponential decay as long as the remaining number of atoms is large. The decay product is termed a radiogenic nuclide.
  • Thermoelectricity: The decline in resistance of a Negative Temperature Coefficient Thermistor as temperature is increased.
  • Vibrations: Some vibrations may decay exponentially; this characteristic is often found in damped mechanical oscillators, and used in creating ADSR envelopes in synthesizers. An overdamped system will simply return to equilibrium via an exponential decay.
  • Beer froth: Arnd Leike, of the Ludwig Maximilian University of Munich, won an Ig Nobel Prize for demonstrating that beer froth obeys the law of exponential decay.
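The capacitor example above (see the electrostatics item) is easy to put in numbers; this is only a sketch with arbitrary component values:

```python
import math

R = 10_000.0        # ohms
C = 100e-6          # farads
tau = R * C         # exponential time constant of the RC circuit, in seconds
half_life = tau * math.log(2)

V0 = 5.0            # initial capacitor voltage, volts
def voltage(t):
    """Capacitor voltage while discharging through R."""
    return V0 * math.exp(-t / tau)

print(tau)                            # 1.0 (second)
print(round(half_life, 3))            # 0.693 (seconds)
print(round(voltage(half_life), 2))   # 2.5 volts, i.e. half the initial value
```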

Social sciences

  • Finance: a retirement fund will decay roughly exponentially when it is subject to discrete payout amounts, usually monthly, and to an input from a continuous interest rate. A differential equation dA/dt = input − output can be written and solved to find the time to reach any amount A remaining in the fund (see the sketch after this list).
  • In simple glottochronology, the (debatable) assumption of a constant decay rate in languages allows one to estimate the age of single languages. (To compute the time of split between two languages requires additional assumptions, independent of exponential decay).
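The retirement-fund model in the finance item above can be sketched as follows, treating the monthly payout as a continuous withdrawal rate (an approximation) and using invented numbers:

```python
import math

# dA/dt = r*A - w: continuous interest at rate r, continuous withdrawal w.
A0 = 500_000.0    # initial balance
r = 0.04          # annual interest rate
w = 40_000.0      # annual withdrawal (here larger than r*A0, so the fund is depleted)

# Closed-form solution: A(t) = w/r + (A0 - w/r) * exp(r*t).
# Setting A(t) = 0 and solving for t gives the time until the fund runs out.
t_empty = math.log((w / r) / (w / r - A0)) / r
print(round(t_empty, 1))   # 17.3 years
```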

Computer science

  • The core routing protocol on the Internet, BGP, has to maintain a routing table in order to remember the paths a packet can be routed along. When one of these paths repeatedly changes its state from available to unavailable (and vice versa), the BGP router controlling that path has to repeatedly add and remove the path record from its routing table (it "flaps" the path), thus spending local resources such as CPU and RAM and, even worse, broadcasting useless information to peer routers. To prevent this undesired behavior, an algorithm named route flap damping assigns each route a weight that gets bigger each time the route changes its state and decays exponentially with time. When the weight exceeds a certain limit, the route is suppressed until the weight has decayed back down (a toy sketch of such a decaying penalty appears below).
Graphs comparing doubling times and half-lives of exponential growth (bold lines) and decay (faint lines), and their 70/t and 72/t approximations.
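A toy model of the exponentially decaying flap penalty described above might look like the following; the thresholds and half-life are invented values, not taken from any particular router implementation:

```python
import math

PENALTY_PER_FLAP = 1000.0
SUPPRESS_LIMIT = 2000.0
REUSE_LIMIT = 750.0
HALF_LIFE = 15.0 * 60.0              # seconds
LAMBDA = math.log(2) / HALF_LIFE

def decayed(penalty, seconds):
    """Penalty remaining after `seconds` of exponential decay."""
    return penalty * math.exp(-LAMBDA * seconds)

penalty = 0.0
for gap in (0.0, 60.0, 60.0, 60.0):          # four flaps, one minute apart
    penalty = decayed(penalty, gap) + PENALTY_PER_FLAP

print(round(penalty))                        # 3737, above SUPPRESS_LIMIT -> route suppressed
print(round(decayed(penalty, 45 * 60)))      # 467, back below REUSE_LIMIT after 45 quiet minutes
```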

Energy level

From Wikipedia, the free encyclopedia
 
Energy levels for an electron in an atom: ground state and excited states. After absorbing energy, an electron may "jump" from the ground state to a higher energy excited state.

A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.

In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N...).

Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n² electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration.
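A one-line check of that 2n² rule (just an illustration):

```python
shell_capacities = [2 * n**2 for n in range(1, 5)]
print(shell_capacities)   # [2, 8, 18, 32]
```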

If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy.

If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, as are any electrons that have higher energy than the ground state. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it.

Explanation

Wavefunctions of a hydrogen atom, showing the probability of finding the electron in the space around the nucleus. Each stationary state defines a specific energy level of the atom.

Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator.

Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy.

History

The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926.

Atoms

Intrinsic energy levels

In the formulas below for the energy of electrons at various levels in an atom, the zero point for energy is set when the electron in question has completely left the atom, i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom with any smaller value of n, the electron's energy is lower and is considered negative.

Orbital state energy level: atom/ion with nucleus + one electron

Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by:

En = −h c R Z² / n²

(typically between 1 eV and 10³ eV), where R is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is Planck's constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n.

This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = h ν = h c / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data.
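Putting numbers to the formula above (a sketch that uses the rounded Rydberg energy hcR ≈ 13.606 eV and ignores finite nuclear mass and other corrections):

```python
RYDBERG_EV = 13.606   # hcR in electronvolts, rounded

def level_energy_ev(Z, n):
    """Energy of level n for a hydrogen-like atom/ion of atomic number Z: En = -hcR * Z**2 / n**2."""
    return -RYDBERG_EV * Z**2 / n**2

print(level_energy_ev(1, 1))                                     # -13.606 eV, hydrogen ground state
print(round(level_energy_ev(1, 2), 2))                           # -3.4 eV, first excited level
print(round(level_energy_ev(2, 1), 1))                           # -54.4 eV, He+ ground state
print(round(level_energy_ev(1, 2) - level_energy_ev(1, 1), 2))   # 10.2 eV for the n = 2 -> n = 1 gap
```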

An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants.

Electron-electron interactions in atoms

If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low.

For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number.

In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the molecule affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule.

Fine structure splitting

Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10⁻³ eV.

Hyperfine structure

This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a change in the energy levels by a typical order of magnitude of 10⁻⁴ eV.

Energy levels due to external fields

Zeeman effect

There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by

U = −μL · B,

with

μL = −e/(2mₑ) L = −μB L/ħ,

where μB = eħ/(2mₑ) is the Bohr magneton.

Additionally, the magnetic moment arising from the electron spin must be taken into account.

Due to relativistic effects (Dirac equation), there is a magnetic moment, μS, arising from the electron spin,

μS = −gS μB S/ħ,

with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ,

μ = μL + μS.

The interaction energy therefore becomes

U = −μ · B = μB B (mℓ + gS ms),

where mℓ and ms are the magnetic and spin quantum numbers and the field B is taken along the z-axis.
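To get a feel for the scale (a rough order-of-magnitude sketch; the 1 tesla field is an arbitrary example and fine/hyperfine corrections are ignored):

```python
# Order-of-magnitude Zeeman shift: ΔE ≈ μB * B for a 1 T field.
MU_B = 9.274e-24     # Bohr magneton, J/T
EV = 1.602e-19       # joules per electronvolt
B = 1.0              # tesla, arbitrary example field

shift_ev = MU_B * B / EV
print(round(shift_ev * 1e6, 1))   # ≈ 57.9 micro-eV, far smaller than typical level spacings
```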

Stark effect

Molecules

Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. In polyatomic molecules, different vibrational and rotational energy levels are also involved.

Roughly speaking, a molecular energy state, i.e. an eigenstate of the molecular Hamiltonian, is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that:

E = Eelectronic + Evibrational + Erotational + Enuclear + Etranslational,

where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule.

The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance.

Energy level diagrams

There are various types of energy level diagrams for bonds between atoms in a molecule.

Examples
Molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams.

Energy level transitions

An increase in energy level from E1 to E2 resulting from absorption of a photon, represented by the red squiggly arrow, whose energy is hν.

A decrease in energy level from E2 to E1 resulting in emission of a photon, represented by the red squiggly arrow, whose energy is hν.

Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no further effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved.

If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, as are any electrons that have higher energy than the ground state. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon whose energy equals the energy difference. A photon's energy is equal to Planck's constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely to its wavelength (λ).

ΔE = h f = h c / λ,

since c, the speed of light, equals f λ.
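As a worked example (a sketch with rounded constants; the 10.2 eV value corresponds roughly to the hydrogen n = 2 → n = 1 transition from the atomic-levels section above):

```python
# Wavelength of a photon for a transition with energy difference ΔE:
# ΔE = h c / λ  =>  λ = h c / ΔE.  Constants are rounded.
H = 6.626e-34        # Planck's constant, J·s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

delta_E_ev = 10.2    # hydrogen n = 2 -> n = 1 transition, approximately
wavelength_m = H * C / (delta_E_ev * EV)
print(round(wavelength_m * 1e9, 1))   # 121.6 nm, ultraviolet (Lyman-alpha)
```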

Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons; analyzing the spectrum provides information on the material, including its energy levels and electronic structure.

An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n.

A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics.

Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly colored glow.

An electron farther from the nucleus has higher potential energy than an electron closer to the nucleus; it is therefore less tightly bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus.

Crystalline materials

Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal.

Butane

From Wikipedia, the free encyclopedia ...