
Saturday, March 8, 2025

Schrödinger equation

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation

The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system. Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, an Austrian physicist, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.

Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.

The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics".

The equation given by Schrödinger is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space and time are not on equal footing. Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. Another partial differential equation, the Klein–Gordon equation, led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square root of the Klein–Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein–Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles.

Definition

Preliminaries

Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension:

  iħ ∂Ψ(x,t)/∂t = −(ħ²/2m) ∂²Ψ(x,t)/∂x² + V(x,t) Ψ(x,t)

Here, Ψ(x,t) is a wave function, a function that assigns a complex number to each point x at each time t. The parameter m is the mass of the particle, and V(x,t) is the potential that represents the environment in which the particle exists. The constant i is the imaginary unit, and ħ is the reduced Planck constant, which has units of action (energy multiplied by time).

Complex plot of a wave function that satisfies the nonrelativistic free Schrödinger equation with V = 0. For more details see wave packet

Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector |ψ⟩ belonging to a separable complex Hilbert space H. This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys ⟨ψ|ψ⟩ = 1. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable functions L², while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space C² with the usual inner product.

Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ is non-degenerate and the probability is given by |⟨λ|ψ⟩|², where |λ⟩ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ψ|P_λ|ψ⟩, where P_λ is the projector onto its associated eigenspace.

A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates, composed of elements outside the Hilbert space, as "generalized eigenvectors". These are used for calculational convenience and do not represent physical states. Thus, a position-space wave function Ψ(x,t) as used above can be written as the inner product of a time-dependent state vector |Ψ(t)⟩ with unphysical but convenient "position eigenstates" |x⟩:

  Ψ(x,t) = ⟨x|Ψ(t)⟩

Time-dependent equation

Each of these three rows is a wave function which satisfies the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wave function. Right: The probability distribution of finding the particle with this wave function at a given position. The top two rows are examples of stationary states, which correspond to standing waves. The bottom row is an example of a state which is not a stationary state.

The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:

Time-dependent Schrödinger equation (general)

  iħ (d/dt)|Ψ(t)⟩ = Ĥ|Ψ(t)⟩

where t is time, |Ψ(t)⟩ is the state vector of the quantum system (Ψ being the Greek letter psi), and Ĥ is an observable, the Hamiltonian operator.

The term "Schrödinger equation" can refer to either the general equation or the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory).

To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function. For example, given a wave function in position space Ψ(x,t) as above, we have

  Pr(x,t) = |Ψ(x,t)|²
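The Born-rule density above can be checked numerically. The following is a minimal Python sketch, assuming an arbitrary normalized Gaussian wave packet (an example state, not one singled out by the article):

```python
import numpy as np

# Sketch: treat |psi|^2 as a probability density and check that it
# integrates to 1.  The Gaussian wave packet below is an assumed example.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
sigma = 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

density = np.abs(psi) ** 2          # Born-rule probability density |psi|^2
total = np.sum(density) * dx        # Riemann sum standing in for the integral
```

For a normalized state the sum over all space should come out very close to 1.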

Time-independent equation

The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.

Time-independent Schrödinger equation (general)

  Ĥ|Ψ⟩ = E|Ψ⟩

where E is the energy of the system. This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) E.

Properties

Linearity

The Schrödinger equation is a linear differential equation, meaning that if two state vectors |ψ₁⟩ and |ψ₂⟩ are solutions, then so is any linear combination of the two state vectors

  |ψ⟩ = a|ψ₁⟩ + b|ψ₂⟩

where a and b are any complex numbers. Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. In this basis, a time-dependent state vector |Ψ(t)⟩ can be written as the linear combination

  |Ψ(t)⟩ = Σₙ Aₙ e^(−iEₙt/ħ) |ψ_Eₙ⟩

where Aₙ are complex numbers and the vectors |ψ_Eₙ⟩ are solutions of the time-independent equation Ĥ|ψ_Eₙ⟩ = Eₙ|ψ_Eₙ⟩.

Unitarity

Holding the Hamiltonian Ĥ constant, the Schrödinger equation has the solution

  |Ψ(t)⟩ = e^(−iĤt/ħ) |Ψ(0)⟩

The operator Û(t) = e^(−iĤt/ħ) is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is |Ψ(0)⟩, then the state at a later time will be given by

  |Ψ(t)⟩ = Û(t)|Ψ(0)⟩

for some unitary operator Û(t). Conversely, suppose that Û(t) is a continuous family of unitary operators parameterized by t. Without loss of generality, the parameterization can be chosen so that Û(0) is the identity operator. Then Û(t) depends upon the parameter t in such a way that

  Û(t) = e^(−iĜt)

for some self-adjoint operator Ĝ, called the generator of the family Û(t). A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units). To see that the generator is Hermitian, note that with Û(δt) ≈ Û(0) − iĜ δt, we have

  Û(δt)†Û(δt) ≈ (Û(0)† + iĜ†δt)(Û(0) − iĜδt) = 1 + iδt(Ĝ† − Ĝ) + O(δt²)

so Û(δt) is unitary only if, to first order, its derivative is Hermitian.
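The time-evolution operator can be illustrated numerically. A sketch in Python, assuming natural units ħ = 1 and a small randomly generated Hermitian matrix standing in for the Hamiltonian:

```python
import numpy as np

# Sketch: build U(t) = exp(-i H t) for a toy Hermitian "Hamiltonian" via
# its eigendecomposition, then check unitarity and norm preservation.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                       # Hermitian by construction

t = 0.7
evals, V = np.linalg.eigh(H)                   # H = V diag(evals) V†
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

# Unitarity: U†U = I, so inner products (and norms) are preserved.
identity_err = np.abs(U.conj().T @ U - np.eye(4)).max()

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)                   # normalized initial state
norm_err = abs(np.linalg.norm(U @ psi0) - 1.0)
```

Both error measures should be at machine precision, reflecting that e^(−iĤt) is exactly unitary for Hermitian Ĥ.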

Changes of basis

The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle. The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term:

  Ĥ = (1/2m) p̂·p̂ + V̂

Writing r for a three-dimensional position vector and p for a three-dimensional momentum vector, the position-space Schrödinger equation is

  iħ ∂Ψ(r,t)/∂t = −(ħ²/2m) ∇²Ψ(r,t) + V(r) Ψ(r,t)

The momentum-space counterpart involves the Fourier transforms of the wave function and the potential:

  iħ ∂Ψ̃(p,t)/∂t = (p²/2m) Ψ̃(p,t) + (2πħ)^(−3/2) ∫ d³p′ Ṽ(p − p′) Ψ̃(p′,t)

The functions Ψ(r,t) and Ψ̃(p,t) are derived from |Ψ(t)⟩ by

  Ψ(r,t) = ⟨r|Ψ(t)⟩,  Ψ̃(p,t) = ⟨p|Ψ(t)⟩

where |r⟩ and |p⟩ do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space.

When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables x and p are promoted to self-adjoint operators x̂ and p̂ that satisfy the canonical commutation relation

  [x̂, p̂] = iħ

This implies that

  ⟨x|p̂|Ψ⟩ = −iħ (∂/∂x) Ψ(x)

so the action of the momentum operator in the position-space representation is −iħ ∂/∂x. Thus, p̂² becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian ∇².

The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform. In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples Ψ̃(p) with Ψ̃(p + K) for only the discrete reciprocal lattice vectors K. This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.

Probability current

The Schrödinger equation is consistent with local probability conservation. It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. By contrast, for the Klein–Gordon equation, although a redefined inner product of a wavefunction can be time independent, the total volume integral of the modulus squared of the wavefunction need not be time independent.

The continuity equation for probability in non-relativistic quantum mechanics is stated as:

  ∂ρ/∂t + ∇·j = 0

where ρ = |Ψ|² is the probability density and j is the probability current or probability flux (flow per unit area).

If the wavefunction is represented as

  Ψ(x,t) = √(ρ(x,t)) e^(iS(x,t)/ħ)

where S(x,t) is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as:

  j = (ρ/m) ∇S

Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Although the term (∇S)/m appears to play the role of velocity, it does not represent velocity at a point, since simultaneous measurement of position and velocity violates the uncertainty principle.
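For a plane wave Ψ = e^(ikx) the flux works out to j = (ħk/m)|Ψ|². A numerical Python sketch of the flux formula j = (ħ/m) Im(Ψ* ∂Ψ/∂x), assuming ħ = m = 1 and an arbitrary wave number k = 2:

```python
import numpy as np

# Sketch: probability current j = Im(psi* dpsi/dx) (hbar = m = 1 assumed)
# for a plane wave psi = exp(i k x).  For this state, j = k everywhere.
k = 2.0
x = np.linspace(0.0, 10.0, 10001)
psi = np.exp(1j * k * x)

dpsi = np.gradient(psi, x)              # numerical derivative of psi
j = np.imag(np.conj(psi) * dpsi)        # probability current on the grid
# Interior values of j should be close to k = 2.0.
```

The constant current reflects a steady probability flow with no accumulation, consistent with the continuity equation above.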

Separation of variables

If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads:

  iħ (∂/∂t) Ψ(r,t) = [−(ħ²/2m) ∇² + V(r)] Ψ(r,t)

The operator on the left side depends only on time; the one on the right side depends only on space. Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts

  Ψ(r,t) = ψ(r) τ(t)

where ψ(r) is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and τ(t) is a function of time only. Substituting this expression for Ψ into the time dependent left hand side shows that τ(t) is a phase factor:

  Ψ(r,t) = ψ(r) e^(−iEt/ħ)

A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.

The spatial part of the full wave function solves the equation

  [−(ħ²/2m) ∇² + V(r)] ψ(r) = E ψ(r)

where the energy E appears in the phase factor.

This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is an example of the spectral theorem, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.

Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated, as in

  ψ(r) = ψ_x(x) ψ_y(y) ψ_z(z)

or radial and angular coordinates might be separated:

  ψ(r) = ψ_r(r) ψ_θ(θ) ψ_φ(φ)

Examples

Particle in a box

1-dimensional potential energy box (or infinite potential well)

The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside. For the one-dimensional case in the x direction, the time-independent Schrödinger equation may be written

  −(ħ²/2m) d²ψ/dx² = Eψ

With the differential operator defined by

  p̂_x = −iħ d/dx

the previous equation is evocative of the classic kinetic energy analogue

  (1/2m) p̂_x² = E

with state ψ in this case having energy E coincident with the kinetic energy of the particle.

The general solutions of the Schrödinger equation for the particle in a box are

  ψ(x) = A e^(ikx) + B e^(−ikx),  with  E = ħ²k²/2m

or, from Euler's formula,

  ψ(x) = C sin(kx) + D cos(kx)

The infinite potential walls of the box determine the values of C, D, and k at x = 0 and x = L, where ψ must be zero. Thus, at x = 0,

  ψ(0) = 0 = C sin(0) + D cos(0) = D

and D = 0. At x = L,

  ψ(L) = 0 = C sin(kL)

in which C cannot be zero as this would conflict with the postulate that ψ has norm 1. Therefore, since sin(kL) = 0, kL must be an integer multiple of π,

  k = nπ/L,  n = 1, 2, 3, …

This constraint on k implies a constraint on the energy levels, yielding

  Eₙ = ħ²π²n²/(2mL²) = n²h²/(8mL²)
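These energy levels are easy to evaluate directly. A short Python sketch, assuming an electron in a box of width 1 nm (example numbers, not from the article):

```python
import math

# Sketch of the particle-in-a-box spectrum E_n = n^2 pi^2 hbar^2 / (2 m L^2).
# The electron mass and the 1 nm box width are assumed example values.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
L = 1e-9                 # box width, m
eV = 1.602176634e-19     # J per electronvolt

def box_energy(n):
    """Energy of the n-th level of the infinite well, in joules."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * L**2)

levels_eV = [box_energy(n) / eV for n in (1, 2, 3)]
# The levels grow like n^2: E_2 = 4 E_1 and E_3 = 9 E_1.
```

For these assumed parameters the ground-state energy comes out near 0.38 eV, and the n² scaling of the spectrum is exact.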

A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.

Harmonic oscillator

A harmonic oscillator in classical mechanics (A–B) and quantum mechanics (C–H). In (A–B), a ball, attached to a spring, oscillates back and forth. (C–H) are six solutions to the Schrödinger Equation for this situation. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wave function. Stationary states, or energy eigenstates, which are solutions to the time-independent Schrödinger equation, are shown in C, D, E, F, but not G or H.

The Schrödinger equation for this situation is

  Eψ = −(ħ²/2m) d²ψ/dx² + (1/2) mω²x² ψ

where x is the displacement and ω the angular frequency. Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics.

The solutions in position space are

  ψₙ(x) = (1/√(2ⁿ n!)) (mω/(πħ))^(1/4) e^(−mωx²/2ħ) Hₙ(√(mω/ħ) x)

where n ∈ {0, 1, 2, …}, and the functions Hₙ are the Hermite polynomials of order n. The solution set may be generated by

  ψₙ(x) = (1/√(n!)) (√(mω/2ħ))ⁿ (x − (ħ/mω) d/dx)ⁿ ψ₀(x)

The eigenvalues are

  Eₙ = (n + 1/2) ħω

The case n = 0 is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian.

The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.
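This discreteness can be seen numerically: diagonalizing a finite-difference approximation of the oscillator Hamiltonian recovers Eₙ = (n + 1/2)ħω. A Python sketch, assuming natural units ħ = m = ω = 1 and an arbitrary grid:

```python
import numpy as np

# Sketch: diagonalize H = -1/2 d^2/dx^2 + x^2/2 on a grid and compare
# with the exact spectrum E_n = n + 1/2 (hbar = m = omega = 1 assumed).
N, xmax = 1000, 10.0
x = np.linspace(-xmax, xmax, N)
dx = x[1] - x[0]

# Central finite difference for the kinetic term; potential on the diagonal.
main = np.full(N, 1.0 / dx**2) + 0.5 * x**2
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:3]   # three lowest eigenvalues, ascending
# Expected: close to 0.5, 1.5, 2.5.
```

The small deviations from the exact values shrink as the grid is refined, since the finite-difference Laplacian has O(dx²) error.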

Hydrogen atom

Wave functions of the electron in a hydrogen atom at different energy levels. They are plotted according to solutions of the Schrödinger equation.

The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is

  Eψ = −(ħ²/2μ) ∇²ψ − (e²/(4πε₀ r)) ψ

where e is the electron charge, r is the position of the electron relative to the nucleus, r = |r| is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein ε₀ is the permittivity of free space and

  μ = m_e m_p / (m_e + m_p)

is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass m_p and the electron of mass m_e. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.

The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus,

  ψ(r, θ, φ) = R(r) Y_ℓ^m(θ, φ)

where R are radial functions and Y_ℓ^m(θ, φ) are spherical harmonics of degree ℓ and order m. This is the only atom for which the Schrödinger equation has been solved for exactly. Multi-electron atoms require approximate methods. The family of solutions are:

  ψ_{nℓm}(r, θ, φ) = √( (2/(na₀))³ (n − ℓ − 1)! / (2n [(n + ℓ)!]) ) e^(−r/(na₀)) (2r/(na₀))^ℓ L_{n−ℓ−1}^{2ℓ+1}(2r/(na₀)) Y_ℓ^m(θ, φ)

where a₀ = 4πε₀ħ²/(m_e e²) is the Bohr radius, L_{n−ℓ−1}^{2ℓ+1} are the generalized Laguerre polynomials of degree n − ℓ − 1, and n, ℓ, m are the principal, azimuthal, and magnetic quantum numbers, which take the values n = 1, 2, 3, …; ℓ = 0, 1, …, n − 1; m = −ℓ, …, ℓ.
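The corresponding energy eigenvalues, Eₙ = −μe⁴/(2ħ²(4πε₀)²n²) ≈ −13.6 eV/n², can be evaluated directly from the constants above. A Python sketch using CODATA constant values:

```python
import math

# Sketch: the hydrogen energy levels E_n = -mu e^4 / (2 hbar^2 (4 pi eps0)^2 n^2)
# that the separated solutions reproduce.  Only the SI arithmetic is shown,
# not the full wave functions.
hbar = 1.054571817e-34        # J*s
e = 1.602176634e-19           # C
eps0 = 8.8541878128e-12       # F/m
m_e = 9.1093837015e-31        # electron mass, kg
m_p = 1.67262192369e-27       # proton mass, kg
mu = m_e * m_p / (m_e + m_p)  # two-body reduced mass

def hydrogen_level(n):
    """Bound-state energy of level n, in joules (negative: bound)."""
    return -mu * e**4 / (2 * hbar**2 * (4 * math.pi * eps0)**2 * n**2)

ground_eV = hydrogen_level(1) / e   # about -13.6 eV
```

The 1/n² scaling means the levels crowd together toward the ionization threshold at E = 0, matching the Bohr model formula that de Broglie had reproduced.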

Approximate solutions

It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. Accordingly, approximate solutions are obtained using techniques like variational methods and the WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory.

Semiclassical limit

One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential V, the Ehrenfest theorem says

  m (d/dt)⟨x⟩ = ⟨p⟩;  (d/dt)⟨p⟩ = −⟨V′(x)⟩

Although the first of these equations is consistent with the classical behavior, the second is not: If the pair (⟨x⟩, ⟨p⟩) were to satisfy Newton's second law, the right-hand side of the second equation would have to be −V′(⟨x⟩), which is typically not the same as −⟨V′(x)⟩. For a general V′, therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however, V′ is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories.

For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point x₀, then V′(⟨x⟩) and ⟨V′(x)⟩ will be almost the same, since both will be approximately equal to V′(x₀). In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.

The Schrödinger equation in its general form is closely related to the Hamilton–Jacobi equation (HJE)

  −(∂/∂t) S(qᵢ, t) = H(qᵢ, ∂S/∂qᵢ, t)

where S is the classical action and H is the Hamiltonian function (not operator). Here the generalized coordinates qᵢ for i = 1, 2, 3 (used in the context of the HJE) can be set to the position in Cartesian coordinates as r = (q₁, q₂, q₃) = (x, y, z).

Substituting

  Ψ = √(ρ(r,t)) e^(iS(r,t)/ħ)

where ρ is the probability density, into the Schrödinger equation and then taking the limit ħ → 0 in the resulting equation yields the Hamilton–Jacobi equation.

Density matrices

Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead. A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written

  ρ̂ = |ψ⟩⟨ψ|

The density-matrix analogue of the Schrödinger equation for wave functions is

  iħ (∂ρ̂/∂t) = [Ĥ, ρ̂]

where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices. If the Hamiltonian is time-independent, this equation can be easily solved to yield

  ρ̂(t) = e^(−iĤt/ħ) ρ̂(0) e^(iĤt/ħ)

More generally, if the unitary operator Û(t) describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by

  ρ̂(t) = Û(t) ρ̂(0) Û(t)†

Unitary evolution of a density matrix conserves its von Neumann entropy.
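These conservation properties can be checked numerically. A Python sketch, assuming ħ = 1, a toy 3×3 Hamiltonian, and an arbitrary mixed state:

```python
import numpy as np

# Sketch: evolve a density matrix as rho -> U rho U† and check that the
# trace and the von Neumann entropy are unchanged.  The mixed state and
# the Hamiltonian below are assumed toy examples (hbar = 1).
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                      # Hermitian Hamiltonian

p = np.array([0.5, 0.3, 0.2])                 # mixture weights, sum to 1
rho = np.diag(p).astype(complex)              # a mixed state

evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * 0.9)) @ V.conj().T
rho_t = U @ rho @ U.conj().T                  # unitary evolution

def von_neumann_entropy(r):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues."""
    lam = np.linalg.eigvalsh(r)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

trace_err = abs(np.trace(rho_t).real - 1.0)
entropy_err = abs(von_neumann_entropy(rho_t) - von_neumann_entropy(rho))
```

Both quantities are invariant because a unitary conjugation leaves the eigenvalue spectrum of ρ̂ untouched; only the eigenvectors rotate.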

Relativistic quantum physics and quantum field theory

The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. For one reason, it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method.

Klein–Gordon and Dirac equations

Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation

  E² = (pc)² + (m₀c²)²

instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation,

  (1/c²) ∂²ψ/∂t² − ∇²ψ + (mc/ħ)² ψ = 0

was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices α₁, α₂, α₃, β. Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read

  (βmc² + c Σₙ αₙ pₙ) ψ = iħ ∂ψ/∂t

This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is:

  Ĥ_Dirac = γ⁰ [c γ·(p̂ − qA) + mc²] + qφ

in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1/2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle.

For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass).

In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields.

Fock space

As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function . This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways.

History

Erwin Schrödinger

Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum p of a photon is inversely proportional to its wavelength λ, or proportional to its wave number k:

  p = h/λ = ħk

where h is the Planck constant and ħ = h/(2π) is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed. These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum L according to

  L = n h/(2π) = nħ

According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:

  nλ = 2πr

This approach essentially confined the electron wave in one dimension, along a circular orbit of radius r.

In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the Physical Review, according to Kamen.

Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.

Schrödinger's equation inscribed on the gravestone of Annemarie and Erwin Schrödinger. (Newton's dot notation for the time derivative is used.)

The equation he found is

  iħ (∂/∂t) Ψ(r,t) = −(ħ²/2m) ∇²Ψ(r,t) + V(r) Ψ(r,t)

By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):

  (E + e²/r)² ψ(x) = −∇²ψ(x) + m²ψ(x)

He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925.

While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl), Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926. Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ(r,t), moving in a potential well V, created by the proton. This computation accurately reproduced the energy levels of the Bohr model.

The Schrödinger equation details the behavior of the wave function ψ but says nothing of its nature. Schrödinger tried to interpret the real part of ψ as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of ψ is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted ψ as the probability amplitude, whose modulus squared is equal to a probability density. Later, Schrödinger himself explained this interpretation as follows:

The already ... mentioned psi-function ... is now the means for predicting probability of measurement results. In it is embodied the momentarily attained sum of theoretically based future expectation, somewhat as laid down in a catalog.

— Erwin Schrödinger

Interpretation

The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts.

In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule.[26][51][note 4] Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort.
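The Born rule referred to above assigns each measurement outcome the normalized squared modulus of its complex amplitude. A toy calculation (the helper below is a hypothetical illustration, not part of any quoted formalism) makes the rule concrete:

```python
def born_probabilities(amplitudes):
    """Born rule: the probability of each outcome is |amplitude|^2,
    normalized so that the probabilities sum to 1."""
    norms = [abs(a) ** 2 for a in amplitudes]
    total = sum(norms)
    return [p / total for p in norms]

# A three-outcome state with amplitudes (1+i), (1-i), 0:
# only the squared moduli, not the phases, are observable as probabilities.
probs = born_probabilities([1 + 1j, 1 - 1j, 0])
print(probs)
```

Note that the two nonzero amplitudes differ in phase yet yield equal probabilities; the phase information matters only through interference, not in a single measurement distribution.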

Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful.

Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation.

Electromagnetic absorption by water

Absorption spectrum (attenuation coefficient vs. wavelength) of liquid water (red), atmospheric water vapor (green) and ice (blue line) between 667 nm and 200 μm. The plot for vapor is a transformation of the data "Synthetic spectrum for gas mixture 'Pure H2O' (296 K, 1 atm)" retrieved from the Hitran on the Web information system.
Liquid water absorption spectrum across a wide wavelength range

The absorption of electromagnetic radiation by water depends on the state of the water.

The absorption in the gas phase occurs in three regions of the spectrum. Rotational transitions are responsible for absorption in the microwave and far-infrared, vibrational transitions in the mid-infrared and near-infrared. Vibrational bands have rotational fine structure. Electronic transitions occur in the vacuum ultraviolet regions.

Its weak absorption in the visible spectrum results in the pale blue color of water.

Overview

The water molecule, in the gaseous state, has three types of transition that can give rise to absorption of electromagnetic radiation:

  • Rotational transitions, in which the molecule gains a quantum of rotational energy. Atmospheric water vapour at ambient temperature and pressure gives rise to absorption in the far-infrared region of the spectrum, from about 200 cm−1 (50 μm) to longer wavelengths towards the microwave region.
  • Vibrational transitions in which a molecule gains a quantum of vibrational energy. The fundamental transitions give rise to absorption in the mid-infrared in the regions around 1650 cm−1 (μ band, 6 μm) and 3500 cm−1 (so-called X band, 2.9 μm).
  • Electronic transitions in which a molecule is promoted to an excited electronic state. The lowest energy transition of this type is in the vacuum ultraviolet region.

In reality, vibrations of molecules in the gaseous state are accompanied by rotational transitions, giving rise to a vibration-rotation spectrum. Furthermore, vibrational overtones and combination bands occur in the near-infrared region. The HITRAN spectroscopy database lists more than 37,000 spectral lines for gaseous H216O, ranging from the microwave region to the visible spectrum.

In liquid water the rotational transitions are effectively quenched, but absorption bands are affected by hydrogen bonding. In crystalline ice the vibrational spectrum is also affected by hydrogen bonding and there are lattice vibrations causing absorption in the far-infrared. Electronic transitions of gaseous molecules will show both vibrational and rotational fine structure.

Units

Infrared absorption band positions may be given either in wavelength (usually in micrometers, μm) or wavenumber (usually in reciprocal centimeters, cm−1) scale.
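The two scales are related by λ[μm] = 10⁴ / ν̃[cm⁻¹], so the conversion is its own inverse up to scale. A small helper (function names ours) shows the arithmetic:

```python
def wavenumber_to_um(wavenumber_cm: float) -> float:
    """Convert wavenumber in cm^-1 to wavelength in micrometres:
    lambda[um] = 10^4 / nu[cm^-1]."""
    return 1e4 / wavenumber_cm

def um_to_wavenumber(wavelength_um: float) -> float:
    """Convert wavelength in micrometres back to wavenumber in cm^-1."""
    return 1e4 / wavelength_um

# The nu2 bending fundamental of water vapour: 1595 cm^-1 is about 6.27 um
print(f"{wavenumber_to_um(1595):.3f} um")
```

Wavenumber is proportional to transition energy, which is why spectroscopists favor it for band positions.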

Rotational spectrum

Part of the pure rotation absorption spectrum of water vapour
Rotating water molecule

The water molecule is an asymmetric top, that is, it has three independent moments of inertia. Rotation about the 2-fold symmetry axis is illustrated at the left. Because of the low symmetry of the molecule, a large number of transitions can be observed in the far infrared region of the spectrum. Measurements of microwave spectra have provided a very precise value for the O−H bond length, 95.84 ± 0.05 pm and H−O−H bond angle, 104.5 ± 0.3°.
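Given the quoted bond length and angle, the three independent moments of inertia can be computed directly from the geometry; by C2v symmetry the symmetry axes are already the principal axes. The sketch below (standard atomic masses assumed, not taken from the text) illustrates the calculation:

```python
import math

U = 1.66053906660e-27  # atomic mass unit, kg
M_H, M_O = 1.008 * U, 15.999 * U

# Microwave-derived geometry quoted in the text (treated as exact here)
R_OH = 95.84e-12             # O-H bond length, m
ANGLE = math.radians(104.5)  # H-O-H bond angle

def principal_moments():
    """Principal moments of inertia of H2O (kg m^2) from its geometry.
    The molecule is placed in the xy-plane with the C2 axis along y,
    so the symmetry axes coincide with the principal axes."""
    half = ANGLE / 2
    atoms = [(M_O, 0.0, 0.0),
             (M_H, R_OH * math.sin(half), R_OH * math.cos(half)),
             (M_H, -R_OH * math.sin(half), R_OH * math.cos(half))]
    m_total = sum(m for m, _, _ in atoms)
    y_com = sum(m * y for m, _, y in atoms) / m_total
    atoms = [(m, x, y - y_com) for m, x, y in atoms]  # shift to centre of mass
    i_a = sum(m * y * y for m, _, y in atoms)  # rotation about x (in-plane)
    i_b = sum(m * x * x for m, x, _ in atoms)  # rotation about the C2 axis
    i_c = i_a + i_b  # planar molecule: perpendicular-axis theorem
    return i_a, i_b, i_c

for label, i in zip("abc", principal_moments()):
    print(f"I_{label} = {i:.3e} kg m^2")
```

The three moments are all different and of comparable size, which is exactly what makes water an asymmetric top with a dense, irregular rotational spectrum.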

Vibrational spectrum

The three fundamental vibrations of the water molecule
ν1, O-H symmetric stretching 3657 cm−1 (2.734 μm)
ν2, H-O-H bending 1595 cm−1 (6.269 μm)
ν3, O-H asymmetric stretching 3756 cm−1 (2.662 μm)

The water molecule has three fundamental molecular vibrations. The O-H stretching vibrations give rise to absorption bands with band origins at 3657 cm−1 (ν1, 2.734 μm) and 3756 cm−1 (ν3, 2.662 μm) in the gas phase. The asymmetric stretching vibration, of B2 symmetry in the point group C2v, is a normal vibration. The H-O-H bending mode origin is at 1595 cm−1 (ν2, 6.269 μm). Both the symmetric stretching and bending vibrations have A1 symmetry, but the frequency difference between them is so large that mixing is effectively zero. In the gas phase all three bands show extensive rotational fine structure. In the near-infrared spectrum, ν3 has a series of overtones at wavenumbers somewhat less than n·ν3, n = 2, 3, 4, 5, ... Combination bands, such as ν2 + ν3, are also easily observed in the near-infrared region. The presence of water vapor in the atmosphere is important for atmospheric chemistry, especially as the infrared and near-infrared spectra are easy to observe. Standard (atmospheric optical) codes are assigned to absorption bands as follows: 0.718 μm (visible): α; 0.810 μm: μ; 0.935 μm: ρστ; 1.13 μm: φ; 1.38 μm: ψ; 1.88 μm: Ω; 2.68 μm: X. The gaps between the bands define the infrared window in the Earth's atmosphere.

The infrared spectrum of liquid water is dominated by the intense absorption due to the fundamental O-H stretching vibrations. Because of the high intensity, very short path lengths, usually less than 50 μm, are needed to record the spectra of aqueous solutions. There is no rotational fine structure, but the absorption bands are broader than might be expected, because of hydrogen bonding. Peak maxima for liquid water are observed at 3450 cm−1 (2.898 μm), 3615 cm−1 (2.766 μm) and 1640 cm−1 (6.097 μm). Direct measurement of the infrared spectra of aqueous solutions requires that the cuvette windows be made of substances such as calcium fluoride which are water-insoluble. This difficulty can alternatively be overcome by using an attenuated total reflectance (ATR) device rather than transmission.

In the near-infrared range liquid water has absorption bands around 1950 nm (5128 cm−1), 1450 nm (6896 cm−1), 1200 nm (8333 cm−1) and 970 nm (10300 cm−1). The regions between these bands can be used in near-infrared spectroscopy to measure the spectra of aqueous solutions, with the advantage that glass is transparent in this region, so glass cuvettes can be used. The absorption intensity is weaker than for the fundamental vibrations, but this is not important as longer path-length cuvettes can be used. The absorption band at 698 nm (14300 cm−1) is a third overtone (n=4). It tails off into the visible region and is responsible for the intrinsic blue color of water. This can be observed with a standard UV/vis spectrophotometer, using a 10 cm path length. The colour can be seen by eye by looking through a column of water about 10 m in length; the water must be passed through an ultrafilter to eliminate color due to Rayleigh scattering, which also can make water appear blue.

The spectrum of ice is similar to that of liquid water, with peak maxima at 3400 cm−1 (2.941 μm), 3220 cm−1 (3.105 μm) and 1620 cm−1 (6.17 μm)

In both liquid water and ice clusters, low-frequency vibrations occur which involve the stretching (TS) or bending (TB) of intermolecular hydrogen bonds (O–H•••O). Bands at wavelengths λ = 50-55 μm or 182-200 cm−1 (44 μm, 227 cm−1 in ice) have been attributed to TS, the intermolecular stretch, and bands at 200 μm or 50 cm−1 (166 μm, 60 cm−1 in ice) to TB, the intermolecular bend.

Visible region

Predicted wavelengths of overtones and combination bands of liquid water in the visible region

ν1, ν3 quanta   ν2 quanta   Wavelength /nm
4               0           742
4               1           662
5               0           605
5               1           550
6               0           514
6               1           474
7               0           449
7               1           418
8               0           401
8               1           376
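A zeroth-order estimate of these band positions simply adds the fundamental wavenumbers. The sketch below uses the gas-phase fundamentals quoted earlier (the table itself is for liquid water, and real overtone levels lie lower because of anharmonicity, so this harmonic estimate is systematically short of the tabulated wavelengths):

```python
# Gas-phase fundamentals of water quoted earlier in the text
NU_STRETCH = 3756.0  # cm^-1, nu3 asymmetric stretch
NU_BEND = 1595.0     # cm^-1, nu2 bend

def harmonic_wavelength_nm(n_stretch: int, n_bend: int) -> float:
    """Zeroth-order (harmonic) wavelength of an overtone/combination band:
    quanta energies simply add, and lambda[nm] = 10^7 / total wavenumber."""
    return 1e7 / (n_stretch * NU_STRETCH + n_bend * NU_BEND)

# Harmonic estimate for the (4, 0) polyad: ~666 nm, versus 742 nm in the table.
# Anharmonicity lowers the real overtone energies, shifting bands to the red.
print(f"{harmonic_wavelength_nm(4, 0):.0f} nm")
```

The growing gap between the harmonic estimate and the observed position at higher quanta is a direct measure of the anharmonicity of the O-H stretch.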

Very weak light absorption in the visible region by liquid water has been measured using an integrating cavity absorption meter (ICAM). The absorption was attributed to a sequence of overtone and combination bands whose intensity decreases at each step, giving rise to an absolute minimum at 418 nm, at which wavelength the attenuation coefficient is about 0.0044 m−1, corresponding to an attenuation length of about 227 meters. Absorption coefficients at 200 nm and 900 nm are almost equal, at 6.9 m−1 (an attenuation length of 14.5 cm). These values correspond to pure absorption without scattering effects; the attenuation of, e.g., a laser beam would be slightly stronger.
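The relation between attenuation coefficient and attenuation length follows the Beer–Lambert law, I/I₀ = exp(−αx), with the attenuation length defined as 1/α. A short sketch (function names ours) reproduces the figures quoted above:

```python
import math

def transmitted_fraction(alpha_per_m: float, path_m: float) -> float:
    """Beer-Lambert attenuation for pure absorption: I/I0 = exp(-alpha * x)."""
    return math.exp(-alpha_per_m * path_m)

def attenuation_length_m(alpha_per_m: float) -> float:
    """Distance over which intensity falls to 1/e: simply 1/alpha."""
    return 1.0 / alpha_per_m

# Values quoted in the text: alpha = 0.0044 m^-1 at the 418 nm minimum
print(f"{attenuation_length_m(0.0044):.0f} m")      # ~227 m
print(f"{transmitted_fraction(0.0044, 10.0):.3f}")  # a 10 m column transmits ~96%
```

The ~4% loss over 10 m is why the blue tint only becomes visible by eye over a column of water roughly that long.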

Visible light absorption spectrum of pure water (absorption coefficient vs. wavelength)

Electronic spectrum

The electronic transitions of the water molecule lie in the vacuum ultraviolet region. For water vapor the bands have been assigned as follows.

  • 65 nm band — many different electronic transitions, photoionization, photodissociation
  • discrete features between 115 and 180 nm
    • set of narrow bands between 115 and 125 nm
      Rydberg series: 1b1 (n2) → many different Rydberg states and 3a1 (n1) → 3sa1 Rydberg state
    • 128 nm band
      Rydberg series: 3a1 (n1) → 3sa1 Rydberg state and 1b1 (n2) → 3sa1 Rydberg state
    • 166.5 nm band
      1b1 (n2) → 4a1 (σ1*-like orbital)

Microwaves and radio waves

Dielectric permittivity and dielectric loss of water between 0 °C and 100 °C, the arrows showing the effect of increasing temperature

The pure rotation spectrum of water vapor extends into the microwave region.

Liquid water has a broad absorption spectrum in the microwave region, which has been explained in terms of changes in the hydrogen bond network giving rise to a broad, featureless, microwave spectrum. The absorption (equivalent to dielectric loss) is used in microwave ovens to heat food that contains water molecules. A frequency of 2.45 GHz, wavelength 122 mm, is commonly used.
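The quoted wavelength follows directly from λ = c/f. A one-line check (function name ours):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(frequency_ghz: float) -> float:
    """Free-space wavelength in millimetres for a frequency in GHz."""
    return C / (frequency_ghz * 1e9) * 1e3

# The common microwave-oven frequency: 2.45 GHz corresponds to about 122 mm
print(f"{wavelength_mm(2.45):.1f} mm")
```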

Radiocommunication at GHz frequencies is very difficult in fresh waters and even more so in salt waters.

Atmospheric effects

Synthetic stick absorption spectrum of a simple gas mixture corresponding to the Earth's atmosphere composition based on HITRAN data created using Hitran on the Web system. Green color - water vapor, WN – wavenumber (caution: lower wavelengths on the right, higher on the left). Water vapor concentration for this gas mixture is 0.4%.

Water vapor is a greenhouse gas in the Earth's atmosphere, responsible for 70% of the known absorption of incoming sunlight, particularly in the infrared region, and about 60% of the atmospheric absorption of thermal radiation by the Earth known as the greenhouse effect. It is also an important factor in multispectral imaging and hyperspectral imaging used in remote sensing because water vapor absorbs radiation differently in different spectral bands. Its effects are also an important consideration in infrared astronomy and radio astronomy in the microwave or millimeter wave bands. The South Pole Telescope was constructed in Antarctica in part because the elevation and low temperatures there mean there is very little water vapor in the atmosphere.

Similarly, carbon dioxide absorption bands occur around 1400, 1600 and 2000 nm, but its presence in the Earth's atmosphere accounts for just 26% of the greenhouse effect. Carbon dioxide gas absorbs energy in some small segments of the thermal infrared spectrum that water vapor misses. This extra absorption within the atmosphere causes the air to warm just a bit more and the warmer the atmosphere the greater its capacity to hold more water vapor. This extra water vapor absorption further enhances the Earth's greenhouse effect.

In the atmospheric window between approximately 8000 and 14000 nm, in the far-infrared spectrum, carbon dioxide and water absorption is weak. This window allows most of the thermal radiation in this band to be radiated out to space directly from the Earth's surface. This band is also used for remote sensing of the Earth from space, for example with thermal Infrared imaging.

As well as absorbing radiation, water vapour occasionally emits radiation in all directions, according to the black-body emission curve for its current temperature overlaid on the water absorption spectrum. Much of this energy will be recaptured by other water molecules, but at higher altitudes, radiation sent towards space is less likely to be recaptured, as there is less water available to recapture radiation of water-specific absorbing wavelengths. By the top of the troposphere, about 12 km above sea level, most water vapor condenses to liquid water or ice as it releases its heat of vapourization. Once they have changed state, liquid water and ice fall away to lower altitudes. This is balanced by incoming water vapour rising via convection currents.

Liquid water and ice emit radiation at a higher rate than water vapour (see graph above). Water at the top of the troposphere, particularly in liquid and solid states, cools as it emits net photons to space. Neighboring gas molecules other than water (e.g. nitrogen) are cooled by passing their heat kinetically to the water. This is why temperatures at the top of the troposphere (known as the tropopause) are about -50 degrees Celsius.

Synaptic vesicle

From Wikipedia, the free encyclopedia
Synaptic vesicle
Neuron A (transmitting) to neuron B (receiving).
1. Mitochondrion;
2. Synaptic vesicle with neurotransmitters;
3. Autoreceptor;
4. Synapse with neurotransmitter released (serotonin);
5. Postsynaptic receptors activated by neurotransmitter (induction of a postsynaptic potential);
6. Calcium channel;
7. Exocytosis of a vesicle;
8. Recaptured neurotransmitter.

In a neuron, synaptic vesicles (or neurotransmitter vesicles) store various neurotransmitters that are released at the synapse. The release is regulated by a voltage-dependent calcium channel. Vesicles are essential for propagating nerve impulses between neurons and are constantly recreated by the cell. The area in the axon that holds groups of vesicles is an axon terminal or "terminal bouton". Up to 130 vesicles can be released per bouton over a ten-minute period of stimulation at 0.2 Hz. In the visual cortex of the human brain, synaptic vesicles have an average diameter of 39.5 nanometers (nm) with a standard deviation of 5.1 nm.

Structure

Primary hippocampal neurons observed at 10 days in vitro by confocal microscopy. In both images neurons are stained with a somatodendritic marker, microtubule associated protein (red). In the right image, synaptic vesicles are stained in green (yellow where the green and red overlap). Scale bar = 25 μm.

Synaptic vesicles are relatively simple because only a limited number of proteins fit into a sphere of 40 nm diameter. Purified vesicles have a protein:phospholipid ratio of 1:3 with a lipid composition of 40% phosphatidylcholine, 32% phosphatidylethanolamine, 12% phosphatidylserine, 5% phosphatidylinositol, and 10% cholesterol.

Synaptic vesicles contain two classes of obligatory components: transport proteins involved in neurotransmitter uptake, and trafficking proteins that participate in synaptic vesicle exocytosis, endocytosis, and recycling.

  • Transport proteins are composed of proton pumps that generate electrochemical gradients, which allow for neurotransmitter uptake, and neurotransmitter transporters that regulate the actual uptake of neurotransmitters. The necessary proton gradient is created by V-ATPase, which breaks down ATP for energy. Vesicular transporters move neurotransmitters from the cells' cytoplasm into the synaptic vesicles. Vesicular glutamate transporters, for example, sequester glutamate into vesicles by this process.
  • Trafficking proteins are more complex. They include intrinsic membrane proteins, peripherally bound proteins, and proteins such as SNAREs. These proteins do not share a characteristic that would make them identifiable as synaptic vesicle proteins, and little is known about how these proteins are specifically deposited into synaptic vesicles. Many but not all of the known synaptic vesicle proteins interact with non-vesicular proteins and are linked to specific functions.

The stoichiometry for the movement of different neurotransmitters into a vesicle is given in the following table.

Neurotransmitter type(s)                                            Inward movement            Outward movement
norepinephrine, dopamine, histamine, serotonin and acetylcholine    neurotransmitter+          2 H+
GABA and glycine                                                    neurotransmitter           1 H+
glutamate                                                           neurotransmitter− + Cl−    1 H+

Recently, it has been discovered that synaptic vesicles also contain small RNA molecules, including transfer RNA fragments, Y RNA fragments and miRNAs. This discovery is believed to have a broad impact on the study of chemical synapses.

Effects of neurotoxins

Some neurotoxins, such as batrachotoxin, are known to destroy synaptic vesicles. The tetanus toxin damages vesicle-associated membrane proteins (VAMP), a type of v-SNARE, while botulinum toxins damage t-SNAREs and v-SNAREs and thus inhibit synaptic transmission. A spider toxin called alpha-latrotoxin binds to neurexins, damaging vesicles and causing massive release of neurotransmitters.

Vesicle pools

Vesicles in the nerve terminal are grouped into three pools: the readily releasable pool, the recycling pool, and the reserve pool. These pools are distinguished by their function and position in the nerve terminal. Vesicles in the readily releasable pool are docked to the cell membrane, making them the first group to be released on stimulation. The readily releasable pool is small and is quickly exhausted. The recycling pool is proximate to the cell membrane, and tends to be cycled at moderate stimulation, so that the rate of vesicle release is the same as, or lower than, the rate of vesicle formation. This pool is larger than the readily releasable pool, but it takes longer to become mobilised. The reserve pool contains vesicles that are not released under normal conditions. This reserve pool can be quite large (~50%) in neurons grown on a glass substrate, but is very small or absent at mature synapses in intact brain tissue.
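The pool dynamics described above can be caricatured as a simple two-compartment kinetic model: the readily releasable pool (RRP) empties on stimulation while being refilled from the recycling pool. This is a hypothetical sketch with made-up rate constants, not measured values:

```python
def simulate(release_rate=0.5, refill_rate=0.05, steps=100, dt=0.1):
    """Euler integration of a toy model:
    dRRP/dt = refill_rate * recycling - release_rate * RRP.
    Released vesicles fuse and are not tracked further here."""
    rrp, recycling = 10.0, 100.0  # vesicle counts (arbitrary units)
    for _ in range(steps):
        released = release_rate * rrp * dt
        refilled = refill_rate * recycling * dt
        rrp += refilled - released
        recycling -= refilled
    return rrp, recycling

rrp, recycling = simulate()
print(f"RRP: {rrp:.1f}, recycling pool: {recycling:.1f}")
```

Because the release rate constant exceeds the refill rate constant, sustained stimulation drains the small RRP toward a quasi-steady level set by the slower refill flux, mirroring the "quickly exhausted, slowly mobilised" behavior described in the text.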

Physiology

Synaptic vesicle cycle

The events of the synaptic vesicle cycle can be divided into a few key steps:

1. Trafficking to the synapse

Synaptic vesicle components in the presynaptic neuron are initially trafficked to the synapse using members of the kinesin motor family. In C. elegans the major motor for synaptic vesicles is UNC-104. There is also evidence that other proteins such as UNC-16/Sunday Driver regulate the use of motors for transport of synaptic vesicles.

2. Transmitter loading

Once at the synapse, synaptic vesicles are loaded with a neurotransmitter. Loading of transmitter is an active process requiring a neurotransmitter transporter and a proton pump ATPase that provides an electrochemical gradient. These transporters are selective for different classes of transmitters. To date, the genes unc-17 and unc-47, which encode the vesicular acetylcholine transporter and the vesicular GABA transporter respectively, have been characterized.

3. Docking

The loaded synaptic vesicles must dock near release sites; however, docking is a step of the cycle about which little is known. Many proteins on synaptic vesicles and at release sites have been identified, but none of the identified protein interactions between the vesicle proteins and release site proteins can account for the docking phase of the cycle. Mutants in rab-3 and munc-18 alter vesicle docking or vesicle organization at release sites, but they do not completely disrupt docking. SNARE proteins now also appear to be involved in the docking step of the cycle.

4. Priming

After the synaptic vesicles initially dock, they must be primed before they can begin fusion. Priming prepares the synaptic vesicles so that they are able to fuse rapidly in response to a calcium influx. This priming step is thought to involve the formation of partially assembled SNARE complexes. The proteins Munc13, RIM, and RIM-BP participate in this event. Munc13 is thought to stimulate the change of the t-SNARE syntaxin from a closed conformation to an open conformation, which stimulates the assembly of v-SNARE/t-SNARE complexes. RIM also appears to regulate priming, but is not essential for the step.

5. Fusion

Primed vesicles fuse very quickly with the cell membrane in response to calcium elevations in the cytoplasm. This releases the stored neurotransmitter into the synaptic cleft. The fusion event is thought to be mediated directly by the SNAREs and driven by the energy provided from SNARE assembly. The calcium-sensing trigger for this event is the calcium-binding synaptic vesicle protein synaptotagmin. The ability of SNAREs to mediate fusion in a calcium-dependent manner recently has been reconstituted in vitro. Consistent with SNAREs being essential for the fusion process, v-SNARE and t-SNARE mutants of C. elegans are lethal. Similarly, mutants in Drosophila and knockouts in mice indicate that these SNARES play a critical role in synaptic exocytosis.

6. Endocytosis

This accounts for the re-uptake of synaptic vesicles in the full contact fusion model. However, other studies have been compiling evidence suggesting that this type of fusion and endocytosis is not always the case.

Vesicle recycling

Two leading mechanisms of action are thought to be responsible for synaptic vesicle recycling: full collapse fusion and the "kiss-and-run" method. Both mechanisms begin with the formation of the synaptic pore that releases transmitter to the extracellular space. After release of the neurotransmitter, the pore can either dilate fully so that the vesicle collapses completely into the synaptic membrane, or it can close rapidly and pinch off the membrane to generate kiss-and-run fusion.

Full collapse fusion

It has been shown that periods of intense stimulation at neural synapses deplete vesicle count as well as increase cellular capacitance and surface area.[19] This indicates that after synaptic vesicles release their neurotransmitter payload, they merge with, and become part of, the cellular membrane. After tagging synaptic vesicles with HRP (horseradish peroxidase), Heuser and Reese found that portions of the cellular membrane at the frog neuromuscular junction were taken up by the cell and converted back into synaptic vesicles. Studies suggest that the entire cycle of exocytosis, retrieval, and reformation of the synaptic vesicles requires less than 1 minute.

In full collapse fusion, the synaptic vesicle merges and becomes incorporated into the cell membrane. The formation of the new membrane is a protein-mediated process and can only occur under certain conditions. After an action potential, Ca2+ floods to the presynaptic membrane. Ca2+ binds to specific proteins in the cytoplasm, one of which is synaptotagmin, which in turn triggers the complete fusion of the synaptic vesicle with the cellular membrane. This complete fusion of the pore is assisted by SNARE proteins. This large family of proteins mediates docking of synaptic vesicles in an ATP-dependent manner. With the help of synaptobrevin on the synaptic vesicle, the t-SNARE complex on the membrane, made up of syntaxin and SNAP-25, can dock, prime, and fuse the synaptic vesicle into the membrane.

The mechanism behind full collapse fusion has been shown to be the target of the botulinum and tetanus toxins. The botulinum toxin has protease activity which degrades the SNAP-25 protein. The SNAP-25 protein is required for vesicle fusion that releases neurotransmitters, in particular acetylcholine. Botulinum toxin essentially cleaves these SNARE proteins, and in doing so, prevents synaptic vesicles from fusing with the cellular synaptic membrane and releasing their neurotransmitters. Tetanus toxin follows a similar pathway, but instead attacks the protein synaptobrevin on the synaptic vesicle. In turn, these neurotoxins prevent synaptic vesicles from completing full collapse fusion. Without this mechanism in effect, muscle spasms, paralysis, and death can occur.

"Kiss-and-run"

The second mechanism by which synaptic vesicles are recycled is known as kiss-and-run fusion. In this case, the synaptic vesicle "kisses" the cellular membrane, opening a small pore for its neurotransmitter payload to be released through, then closes the pore and is recycled back into the cell. The kiss-and-run mechanism has been a hotly debated topic. Its effects have been observed and recorded; however, the reason behind its use as opposed to full collapse fusion is still being explored. It has been speculated that kiss-and-run is often employed to conserve scarce vesicular resources, as well as being utilized to respond to high-frequency inputs. Experiments have shown that kiss-and-run events do occur. First observed by Katz and del Castillo, it was later observed that the kiss-and-run mechanism was different from full collapse fusion in that cellular capacitance did not increase in kiss-and-run events. This reinforces the idea that, in kiss-and-run fashion, the synaptic vesicle releases its payload and then separates from the membrane.

Modulation

Cells thus appear to have at least two mechanisms to follow for membrane recycling. Under certain conditions, cells can switch from one mechanism to the other. Slow, conventional, full collapse fusion predominates at the synaptic membrane when Ca2+ levels are low, and the fast kiss-and-run mechanism is followed when Ca2+ levels are high.

Ales et al. showed that raised concentrations of extracellular calcium ions shift the preferred mode of recycling and synaptic vesicle release to the kiss-and-run mechanism in a calcium-concentration-dependent manner. It has been proposed that during secretion of neurotransmitters at synapses, the mode of exocytosis is modulated by calcium to attain optimal conditions for coupled exocytosis and endocytosis according to synaptic activity.

Experimental evidence suggests that kiss-and-run is the dominant mode of synaptic release at the beginning of stimulus trains. In this context, kiss-and-run reflects a high vesicle release probability. The incidence of kiss-and-run is also increased by rapid firing and stimulation of the neuron, suggesting that the kinetics of this type of release is faster than other forms of vesicle release.

History

With the advent of the electron microscope in the early 1950s, nerve endings were found to contain a large number of electron-lucent (transparent to electrons) vesicles. The term synaptic vesicle was first introduced by De Robertis and Bennett in 1954. This was shortly after transmitter release at the frog neuromuscular junction was found to induce postsynaptic miniature end-plate potentials that were ascribed to the release of discrete packages of neurotransmitter (quanta) from the presynaptic nerve terminal. It was thus reasonable to hypothesize that the transmitter substance (acetylcholine) was contained in such vesicles, which by a secretory mechanism would release their contents into the synaptic cleft (vesicle hypothesis).

The missing link was the demonstration that the neurotransmitter acetylcholine is actually contained in synaptic vesicles. About ten years later, the application of subcellular fractionation techniques to brain tissue permitted the isolation first of nerve endings (synaptosomes), and subsequently of synaptic vesicles from mammalian brain. Two competing laboratories were involved in this work, that of Victor P. Whittaker at the Institute of Animal Physiology, Agricultural Research Council, Babraham, Cambridge, UK and that of Eduardo de Robertis at the Instituto de Anatomía General y Embriología, Facultad de Medicina, Universidad de Buenos Aires, Argentina. Whittaker's work demonstrating acetylcholine in vesicle fractions from guinea-pig brain was first published in abstract form in 1960 and then in more detail in 1963 and 1964, and the paper of the de Robertis group demonstrating an enrichment of bound acetylcholine in synaptic vesicle fractions from rat brain appeared in 1963. Both groups released synaptic vesicles from isolated synaptosomes by osmotic shock. The content of acetylcholine in a vesicle was originally estimated to be 1000–2000 molecules. Subsequent work identified the vesicular localization of other neurotransmitters, such as amino acids, catecholamines, serotonin, and ATP. Later, synaptic vesicles could also be isolated from other tissues such as the superior cervical ganglion, or the octopus brain. The isolation of highly purified fractions of cholinergic synaptic vesicles from the ray Torpedo electric organ was an important step forward in the study of vesicle biochemistry and function.

Hormesis

From Wikipedia, the free encyclopedia
Hormesis is a biological phenomenon where a low dose of a potentially harmful stressor, such as a toxin or environmental factor, stimulates a beneficial adaptive response in an organism. In other words, small doses of stressors that would be damaging in larger amounts can actually enhance resilience, stimulate growth, or improve health at lower levels. 

Hormesis is a biphasic dose-response relationship to an environmental agent whereby low-dose amounts have a beneficial effect and high-dose amounts are either inhibitory to function or toxic. Within the hormetic zone, the biological response to low-dose amounts of some stressors is generally favorable. An example is oxygen, which living animals require at the low concentration found in air for respiration, but which can be toxic at high concentrations, even in a managed clinical setting.[5]

In toxicology, hormesis is a dose-response phenomenon to xenobiotics or other stressors. In physiology and nutrition, hormesis has regions extending from low-dose deficiencies to homeostasis, and potential toxicity at high levels. Physiological concentrations of an agent above or below homeostasis may adversely affect an organism, where the hormetic zone is a region of homeostasis of balanced nutrition. In pharmacology, the hormetic zone is similar to the therapeutic window.
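The biphasic shape described above can be illustrated with a toy dose-response function. This is a minimal sketch: the functional form and constants are illustrative assumptions, not a fitted model from the hormesis literature.

```python
import math

def net_effect(dose, stim=1.0, decay=1.0, tox=0.05):
    """Toy biphasic (hormetic) dose-response curve.

    A low-dose stimulation term (stim * dose * exp(-dose/decay)) competes
    with a linear toxicity term (tox * dose): the net effect is positive
    inside the hormetic zone and negative once toxicity dominates.
    The functional form and all constants are illustrative assumptions.
    """
    return stim * dose * math.exp(-dose / decay) - tox * dose

print(net_effect(0.0))   # 0.0 -> no agent, no effect (control)
print(net_effect(0.5))   # positive -> hormetic zone
print(net_effect(5.0))   # negative -> toxic zone
```

Plotting this function over a range of doses yields the characteristic inverted-U (or, for harm, U-shaped) curve discussed below.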

In the context of toxicology, the hormesis model of dose response is vigorously debated. The biochemical mechanisms by which hormesis works (particularly in applied cases pertaining to behavior and toxins) remain under early laboratory research and are not well understood.

Etymology

The term "hormesis" derives from the Greek hórmēsis, "rapid motion, eagerness", itself from the ancient Greek hormáein, "to excite". The same Greek root provides the word hormone. The term "hormetics" is used for the study of hormesis. The word "hormesis" was first recorded in English in 1943.

History

A form of hormesis famous in antiquity was Mithridatism, the practice whereby Mithridates VI of Pontus supposedly made himself immune to a variety of toxins by regular exposure to small doses. Mithridate and theriac, polypharmaceutical electuaries claiming descent from his formula and initially including flesh from poisonous animals, were consumed for centuries by emperors, kings, and queens as protection against poison and ill health. In the Renaissance, the Swiss doctor Paracelsus said, "All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison."

German pharmacologist Hugo Schulz first described such a phenomenon in 1888 following his own observations that the growth of yeast could be stimulated by small doses of poisons. This was coupled with the work of German physician Rudolph Arndt, who studied animals given low doses of drugs, eventually giving rise to the Arndt–Schulz rule. Arndt's advocacy of homeopathy contributed to the rule's diminished credibility in the 1920s and 1930s. The term "hormesis" was coined and used for the first time in a scientific paper by Chester M. Southam and J. Ehrlich in 1943 in the journal Phytopathology, volume 33, pp. 517–541.

In 2004, Edward Calabrese evaluated the concept of hormesis. Over 600 substances show a U-shaped dose–response relationship; Calabrese and Baldwin wrote: "One percent (195 out of 20,285) of the published articles contained 668 dose-response relationships that met the entry criteria [of a U-shaped response indicative of hormesis]."

Examples

Carbon monoxide

Carbon monoxide is produced in small quantities across phylogenetic kingdoms, where it has essential roles as a neurotransmitter (subcategorized as a gasotransmitter). The majority of endogenous carbon monoxide is produced by heme oxygenase; the loss of heme oxygenase and subsequent loss of carbon monoxide signaling has catastrophic implications for an organism. In addition to physiological roles, small amounts of carbon monoxide can be inhaled or administered in the form of carbon monoxide-releasing molecules as a therapeutic agent.

Regarding the hormetic curve graph:

  • Deficiency zone: an absence of carbon monoxide signaling has toxic implications
  • Hormetic zone / region of homeostasis: small amount of carbon monoxide has a positive effect:
    • essential as a neurotransmitter
    • beneficial as a pharmaceutical
  • Toxicity zone: excessive exposure results in carbon monoxide poisoning
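The three zones listed above can be sketched as a simple classifier. The numeric thresholds here are hypothetical placeholders, not measured boundaries: real zone limits depend on the agent and the organism.

```python
def dose_zone(dose, deficiency_max=0.1, hormetic_max=10.0):
    """Classify a dose (arbitrary units) into the three hormetic zones.

    The thresholds deficiency_max and hormetic_max are hypothetical
    placeholders, not measured boundaries for any real agent.
    """
    if dose < deficiency_max:
        return "deficiency"   # absence of signaling is itself harmful
    if dose <= hormetic_max:
        return "hormetic"     # homeostasis: neurotransmitter / therapeutic range
    return "toxicity"         # e.g. carbon monoxide poisoning

print(dose_zone(0.01))   # deficiency
print(dose_zone(1.0))    # hormetic
print(dose_zone(50.0))   # toxicity
```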

Oxygen

Many organisms maintain a hormetic relationship with oxygen, which follows a dose-response curve similar to that of carbon monoxide.

Physical exercise

Physical exercise intensity may exhibit a hormetic curve. Individuals with low levels of physical activity are at risk for some diseases; however, individuals engaged in moderate, regular exercise may experience less disease risk.

Mitohormesis

The possible effect of small amounts of oxidative stress is under laboratory research. Mitochondria are sometimes described as "cellular power plants" because they generate most of the cell's supply of adenosine triphosphate (ATP), a source of chemical energy. Reactive oxygen species (ROS) have been regarded as unwanted byproducts of oxidative phosphorylation in mitochondria by proponents of the free-radical theory of aging promoted by Denham Harman. The free-radical theory states that compounds inactivating ROS would reduce oxidative stress and thereby increase lifespan, although support for this theory comes mainly from basic research. Indeed, in over 19 clinical trials, "nutritional and genetic interventions to boost antioxidants have generally failed to increase life span."

Whether this concept applies to humans remains to be shown, although a 2007 epidemiological study supports the possibility of mitohormesis, indicating that supplementation with beta-carotene, vitamin A, or vitamin E may increase disease prevalence in humans. More recent studies have reported that rapamycin exhibits hormesis: low doses can enhance cellular longevity by partially inhibiting mTOR, unlike higher doses, which are toxic due to complete inhibition. This partial inhibition of mTOR by low-dose rapamycin modulates mTOR–mitochondria cross-talk, reducing oxidative damage, metabolic dysregulation, and mitochondrial dysfunction, thus slowing cellular aging and demonstrating mitohormesis.

Alcohol

Alcohol is believed to be hormetic in preventing heart disease and stroke, although the benefits of light drinking may have been exaggerated. The gut microbiome of a typical healthy individual naturally ferments small amounts of ethanol, and in rare cases dysbiosis leads to auto-brewery syndrome. It therefore remains unclear whether the benefits of alcohol derive from the behavior of consuming alcoholic drinks or from metabolites of commensal microbiota acting as a homeostatic factor in normal physiology.

In 2012, researchers at UCLA found that tiny amounts (1 mM, or 0.005%) of ethanol doubled the lifespan of Caenorhabditis elegans, a roundworm frequently used in biological studies, that were starved of other nutrients. Higher doses of 0.4% provided no longevity benefit. However, worms exposed to 0.005% did not develop normally (their development was arrested). The authors argue that the worms were using ethanol as an alternative energy source in the absence of other nutrition, or had initiated a stress response. They did not test the effect of ethanol on worms fed a normal diet.

Methylmercury

In 2010, a paper in the journal Environmental Toxicology & Chemistry showed that low doses of methylmercury, a potent neurotoxic pollutant, improved the hatching rate of mallard eggs. The author of the study, Gary Heinz, who led the study for the U.S. Geological Survey at the Patuxent Wildlife Research Center in Beltsville, stated that other explanations are possible. For instance, the flock he studied might have harbored some low, subclinical infection and that mercury, well known to be antimicrobial, might have killed the infection that otherwise hurt reproduction in the untreated birds.

Radiation

Ionizing radiation

Hormesis has been observed in a number of cases in humans and animals exposed to chronic low doses of ionizing radiation. A-bomb survivors who received high doses exhibited shortened lifespan and increased cancer mortality, but those who received low doses had lower cancer mortality than the Japanese average.

In Taiwan, recycled radiocontaminated steel was inadvertently used in the construction of over 100 apartment buildings, causing the long-term exposure of 10,000 people. The average dose rate was 50 mSv/year, and a subset of the population (1,000 people) received a total dose of over 4,000 mSv over ten years. Under the linear no-threshold (LNT) model widely used by regulatory bodies, the expected number of cancer deaths in this population would have been 302, with 70 caused by the extra ionizing radiation and the remainder by natural background radiation. The observed number, however, was only 7 cancer deaths, compared with the 232 the LNT model predicts from background radiation alone. This result has been cited as evidence of ionizing radiation hormesis.
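The comparison in the paragraph above can be laid out numerically. The population figures are those quoted in the text; the `lnt_excess` helper is a generic linear-scaling sketch, not the original study's calculation.

```python
# Figures quoted for the Taiwan apartment cohort.
population = 10_000
lnt_expected_total = 302    # predicted cancer deaths under LNT (background + radiation)
lnt_expected_excess = 70    # portion attributed to the extra ionizing radiation
background_expected = lnt_expected_total - lnt_expected_excess   # 232 from background alone
observed_deaths = 7

def lnt_excess(collective_dose_person_sv, deaths_per_person_sv):
    """Generic LNT scaling: excess deaths grow linearly with collective dose."""
    return collective_dose_person_sv * deaths_per_person_sv

print(background_expected)                              # 232
print(round(observed_deaths / background_expected, 3))  # ~0.03 of the background prediction
```

The linearity of `lnt_excess` is the point of contention: under LNT, halving the dose always halves the predicted excess deaths, leaving no room for a beneficial low-dose zone.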

Chemical and ionizing radiation combined

No experiment can be performed in perfect isolation. A chemical dose experiment is not surrounded by thick lead shielding to rule out the effects of ionizing radiation, even in a rigorously controlled laboratory, and certainly not in the field; the same limitation applies, in reverse, to ionizing radiation studies. Ionizing radiation is released when an unstable nucleus decays, producing new substances and energy in the form of particles or electromagnetic waves. The resulting materials are then free to interact with elements of the environment, and the released energy can drive further interactions.

The resulting confusion in the low-dose exposure field (radiation and chemical) arises from a lack of consideration of this concept, as described by Mothersill and Seymour.

Nucleotide excision repair

Veterans of the Gulf War (1991) who suffered from the persistent symptoms of Gulf War Illness (GWI) were likely exposed to stresses from toxic chemicals and/or radiation. The DNA damaging (genotoxic) effects of such exposures can be, at least partially, overcome by the DNA nucleotide excision repair (NER) pathway. Lymphocytes from GWI veterans exhibited a significantly elevated level of NER repair. It was suggested that this increased NER capability in exposed veterans was likely a hormetic response, that is, an induced protective response resulting from battlefield exposure.

Applications

Effects in aging

One of the areas where the concept of hormesis has been explored extensively for its applicability is aging. Since the basic survival capacity of any biological system depends on its homeostatic ability, biogerontologists have proposed that exposing cells and organisms to mild stress should produce an adaptive, hormetic response with various biological benefits. Preliminary evidence suggests that repetitive mild stress exposure may have anti-aging effects in laboratory models. Mild stresses used in such studies of hormesis in aging research and interventions include heat shock, irradiation, prooxidants, hypergravity, and food restriction.

The example of heat shock involves the proteostasis network. Applying mild stress to the cell can activate signaling pathways and unfolded protein response pathways that upregulate chaperones, downregulate translation, and trigger other processes that allow the cell to respond to stress. The activation of these pathways thereby prepares the cell for other stressors, since the pathways are already active. However, too much stress, or prolonged stress, can damage the cell and in some cases lead to cell death. Compounds that modulate stress responses in cells in this way have been termed "hormetins".

Controversy

Hormesis suggests that dangerous substances can be beneficial at low doses. Concerns exist that the concept has been leveraged by lobbyists to weaken environmental regulation of some well-known toxic substances in the US.

Radiation controversy

The hypothesis of hormesis has generated the most controversy when applied to ionizing radiation. This hypothesis is called radiation hormesis. For policy-making purposes, the commonly accepted model of dose response in radiobiology is the linear no-threshold model (LNT), which assumes a strictly linear dependence between the risk of radiation-induced adverse health effects and radiation dose, implying that there is no safe dose of radiation for humans.

Nonetheless, many countries, including the Czech Republic, Germany, Austria, Poland, and the United States, have radon therapy centers whose primary operating principle is the assumption of radiation hormesis, i.e. a beneficial effect of small doses of radiation on human health. At the same time, countries such as Germany and Austria have imposed very strict antinuclear regulations, which has been described as a radiophobic inconsistency.

The United States National Research Council (part of the National Academy of Sciences), the National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress) and the United Nations Scientific Committee on the Effects of Ionizing Radiation all agree that radiation hormesis is not clearly shown, nor clearly the rule for radiation doses.

The United States-based National Council on Radiation Protection and Measurements stated in 2001 that evidence for radiation hormesis is insufficient and that radiation protection authorities should continue to apply the LNT model for purposes of risk estimation.

A 2005 report commissioned by the French National Academy concluded that there is sufficient evidence for hormesis occurring at low doses and that the LNT model should be reconsidered as the methodology used to estimate risks from low-level radiation sources, such as deep geological repositories for nuclear waste.

Policy consequences

Hormesis remains largely unknown to the public; taking the possible effects of small doses of a toxin into account when assessing exposure risk would require a change in regulatory policy.
