Mathematical formulation of quantum mechanics

From Wikipedia, the free encyclopedia

The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. Such are distinguished from mathematical formalisms for theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces and operators on these spaces. Many of these structures are drawn from functional analysis, a research area within pure mathematics that was influenced in part by the needs of quantum mechanics. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely as spectral values of linear operators in Hilbert space.[1]

These formulations of quantum mechanics continue to be used today. At the heart of the description are ideas of quantum state and quantum observables which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables.

Prior to the emergence of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the emergence of quantum theory (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space.

History of the formalism

The "old quantum theory" and the need for new mathematics

In 1900, Planck derived the blackbody spectrum by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could be exchanged only in discrete units, which he called quanta; this assumption was later used to avoid the classical ultraviolet catastrophe. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, h, is now called Planck's constant in his honor.

In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons.

All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of Planck's constant were actually allowed. The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time.

In 1923 de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system.

The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity.

The "new quantum theory"

Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent.

Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac[2] discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization.

To be more precise, already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics and the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index terminology of the experimentalists of that time, not even aware that his "index schemes" were matrices, as Born soon pointed out to him. In fact, in these early years, linear algebra was not generally popular with physicists in its present form.

Although Schrödinger himself after a year proved the equivalence of his wave mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (and he was soon the one to discover a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in all kinds of generalizations of the field.

The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations.

Later developments

The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases.

On a different front, von Neumann originally dispatched quantum measurement with his infamous postulate on the collapse of the wavefunction, raising a host of philosophical problems. Over the intervening 70 years, the problem of measurement became an active research area and itself spawned some new formulations of quantum mechanics.

A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself.

Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. The issue of hidden variables has become in part an experimental issue with the help of quantum optics.

Mathematical structure of quantum mechanics

A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a symplectic phase space, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states, observables are self adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations.

Postulates of quantum mechanics

The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms.
  • Each physical system is associated with a (topologically) separable complex Hilbert space H with inner product ⟨φ|ψ⟩. Rays (that is, subspaces of complex dimension 1) in H are associated with quantum states of the system. In other words, quantum states can be identified with equivalence classes of vectors of length 1 in H, where two vectors represent the same state if they differ only by a phase factor. Separability is a mathematically convenient hypothesis, with the physical interpretation that countably many observations are enough to uniquely determine the state. "A quantum mechanical state is a ray in projective Hilbert space, not a vector. Many textbooks fail to make this distinction, which could be partly a result of the fact that the Schrödinger equation itself involves Hilbert-space "vectors", with the result that the imprecise use of "state vector" rather than ray is very difficult to avoid."[4]
  • The Hilbert space of a composite system is the Hilbert space tensor product of the state spaces associated with the component systems (for instance, J. M. Jauch, Foundations of quantum mechanics, section 11.7). For a non-relativistic system consisting of a finite number of distinguishable particles, the component systems are the individual particles.
  • The expectation value (in the sense of probability theory) of the observable A for the system in state represented by the unit vector ψ ∈ H is
\langle \psi \mid A\mid \psi \rangle
  • By spectral theory, we can associate a probability measure to the values of A in any state ψ. We can also show that the possible values of the observable A in any state must belong to the spectrum of A. In the special case A has only discrete spectrum, the possible outcomes of measuring A are its eigenvalues. More precisely, if we represent the state ψ in the basis formed by the eigenvectors of A, then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue.
  • More generally, a state can be represented by a so-called density operator, which is a trace class, nonnegative self-adjoint operator ρ normalized to be of trace 1. The expected value of A in the state ρ is
\operatorname {tr} (A\rho )
  • If ρψ is the orthogonal projector onto the one-dimensional subspace of H spanned by |ψ⟩, then
\operatorname {tr} (A\rho _{\psi })=\left\langle \psi \mid A\mid \psi \right\rangle
  • Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and other density operators mixed states.
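As an illustrative sketch (not from the article), the finite-dimensional versions of these postulates are easy to check numerically; the observable and states below are arbitrary choices:

```python
# Hypothetical finite-dimensional check of the postulates above: states as
# rays, the Born rule for a discrete spectrum, and tr(A rho) for a pure state.
import numpy as np

A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -1.0]])            # an arbitrary Hermitian observable

psi = np.array([1.0, 1.0j]) / np.sqrt(2)      # a unit vector in C^2
phi = np.exp(0.7j) * psi                      # same ray: differs only by a phase

# Expectation values depend only on the ray, not on the representative vector.
exp_psi = np.vdot(psi, A @ psi).real
exp_phi = np.vdot(phi, A @ phi).real

# Born rule: probabilities are squared moduli of components in the eigenbasis.
evals, evecs = np.linalg.eigh(A)
probs = np.abs(evecs.conj().T @ psi) ** 2     # nonnegative, sums to 1

# Density-operator form: tr(A rho) reproduces <psi|A|psi> for the projector.
rho = np.outer(psi, psi.conj())               # pure state, trace 1
exp_rho = np.trace(A @ rho).real
```

The expectation value also equals the probability-weighted sum of eigenvalues, tying the three statements together.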
One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article.

Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle, see below.

Pictures of dynamics

In the so-called Schrödinger picture of quantum mechanics, the dynamics is given as follows:
  • The time evolution of the state is given by a differentiable function from the real numbers R, representing instants of time, to the Hilbert space of system states. This map is characterized by a differential equation as follows: If |ψ(t)⟩ denotes the state of the system at any one time t, the following Schrödinger equation holds:
Schrödinger equation (general) i\hbar {\frac {d}{dt}}\left|\psi (t)\right\rangle =H\left|\psi (t)\right\rangle
where H is a densely defined self-adjoint operator, called the system Hamiltonian, i is the imaginary unit and ħ is the reduced Planck constant. As an observable, H corresponds to the total energy of the system.

Alternatively, by Stone's theorem one can state that there is a strongly continuous one-parameter unitary map U(t): HH such that
\left|\psi (t+s)\right\rangle =U(t)\left|\psi (s)\right\rangle
for all times s, t. The existence of a self-adjoint Hamiltonian H such that
U(t)=e^{-(i/\hbar )tH}
is a consequence of Stone's theorem on one-parameter unitary groups. It is assumed that H does not depend on time and that the perturbation starts at t0 = 0; otherwise one must use the Dyson series, formally written as
U(t)={\mathcal {T}}\left[\exp \left(-{\frac {i}{\hbar }}\int _{t_{0}}^{t}\,{\rm {d}}t'\,H(t')\right)\right]\,,
where {\mathcal {T}} is Dyson's time-ordering symbol.

This symbol permutes a product of noncommuting operators of the form
B_{1}(t_{1})\cdot B_{2}(t_{2})\cdot \dots \cdot B_{n}(t_{n})
into the uniquely determined re-ordered expression
B_{i_{1}}(t_{i_{1}})\cdot B_{i_{2}}(t_{i_{2}})\cdot \dots \cdot B_{i_{n}}(t_{i_{n}}) with t_{i_{1}}\geq t_{i_{2}}\geq \dots \geq t_{i_{n}}\,.
The result is a causal chain, with the primary cause in the past at the far right and the final, present effect at the far left.
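For a time-independent Hamiltonian, the one-parameter unitary group can be sketched numerically; the 2×2 Hamiltonian below is an arbitrary toy choice, and ħ is set to 1:

```python
# Sketch: U(t) = exp(-iHt) built from the eigendecomposition of a toy
# self-adjoint H (hbar = 1); U(t) is unitary and satisfies the group law.
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                   # assumed toy Hamiltonian
evals, V = np.linalg.eigh(H)

def U(t):
    """Time-evolution operator exp(-iHt)."""
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0])
psi_t = U(0.4) @ psi0                         # |psi(t)> = U(t)|psi(0)>

group_law = np.allclose(U(0.3) @ U(0.1), U(0.4))        # U(t)U(s) = U(t+s)
is_unitary = np.allclose(U(0.4) @ U(0.4).conj().T, np.eye(2))
norm_preserved = np.isclose(np.vdot(psi_t, psi_t).real, 1.0)
```

The group law U(t)U(s) = U(t+s) is exactly the property that Stone's theorem trades for the existence of a self-adjoint generator H.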
  • The Heisenberg picture of quantum mechanics focuses on observables and instead of considering states as varying in time, it regards the states as fixed and the observables as changing. To go from the Schrödinger to the Heisenberg picture one needs to define time-independent states and time-dependent operators thus:
\left|\psi \right\rangle =\left|\psi (0)\right\rangle
A(t)=U(-t)AU(t).\quad
It is then easily checked that the expected values of all observables are the same in both pictures
\langle \psi \mid A(t)\mid \psi \rangle =\langle \psi (t)\mid A\mid \psi (t)\rangle
and that the time-dependent Heisenberg operators satisfy
Heisenberg picture (general) {\frac {d}{dt}}A(t)={\frac {i}{\hbar }}[H,A(t)]+{\frac {\partial A(t)}{\partial t}},
which is true for time-dependent A = A(t). Notice the commutator expression is purely formal when one of the operators is unbounded. One would specify a representation for the expression to make sense of it.
  • The so-called Dirac picture or interaction picture has time-dependent states and observables, evolving with respect to different Hamiltonians. This picture is most useful when the evolution of the observables can be solved exactly, confining any complications to the evolution of the states. For this reason, the Hamiltonian for the observables is called "free Hamiltonian" and the Hamiltonian for the states is called "interaction Hamiltonian". In symbols:
Dirac picture i\hbar {\frac {d}{dt}}\left|\psi (t)\right\rangle ={H}_{\rm {int}}(t)\left|\psi (t)\right\rangle
i\hbar {d \over dt}A(t)=[A(t),H_{0}].
The interaction picture does not always exist, though. In interacting quantum field theories, Haag's theorem states that the interaction picture does not exist. This is because the Hamiltonian cannot be split into a free and an interacting part within a superselection sector. Moreover, even if in the Schrödinger picture the Hamiltonian does not depend on time, e.g. H = H0 + V, in the interaction picture it does, at least, if V does not commute with H0, since
H_{\rm {int}}(t)\equiv e^{{(i/\hbar })tH_{0}}\,V\,e^{{(-i/\hbar })tH_{0}}.
So the above-mentioned Dyson-series has to be used anyhow.
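Both points above can be verified in a toy model; H0, V and the observable A below are arbitrary 2×2 matrices chosen for illustration, with ħ = 1:

```python
# Sketch: (1) Schrödinger- and Heisenberg-picture expectation values agree;
# (2) H_int(t) = e^{iH0 t} V e^{-iH0 t} is time-dependent when [V, H0] != 0.
import numpy as np

H0 = np.diag([0.0, 1.0])                      # assumed free Hamiltonian
V = np.array([[0.0, 0.5],
              [0.5, 0.0]])                    # interaction; [V, H0] != 0
H = H0 + V

evals, W = np.linalg.eigh(H)
U = lambda t: W @ np.diag(np.exp(-1j * evals * t)) @ W.conj().T

A = np.diag([1.0, -1.0])                      # an observable
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)
t = 0.8

schrodinger = np.vdot(U(t) @ psi0, A @ (U(t) @ psi0)).real
heisenberg = np.vdot(psi0, U(t).conj().T @ A @ U(t) @ psi0).real

ev0, W0 = np.linalg.eigh(H0)
exp0 = lambda s: W0 @ np.diag(np.exp(1j * ev0 * s)) @ W0.conj().T
H_int = lambda s: exp0(s) @ V @ exp0(-s)      # interaction-picture Hamiltonian

time_dependent = not np.allclose(H_int(0.0), H_int(1.0))
```

At t = 0 the interaction-picture Hamiltonian reduces to V itself; its later time dependence is entirely due to the non-commutativity of V with H0.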

The Heisenberg picture is the closest to classical Hamiltonian mechanics (for example, the commutators appearing in the above equations directly translate into the classical Poisson brackets); but this is already rather "high-browed", and the Schrödinger picture is considered easiest to visualize and understand by most people, to judge from pedagogical accounts of quantum mechanics. The Dirac picture is the one used in perturbation theory, and is specially associated to quantum field theory and many-body physics.

Similar equations can be written for any one-parameter unitary group of symmetries of the physical system. Time would be replaced by a suitable coordinate parameterizing the unitary group (for instance, a rotation angle, or a translation distance) and the Hamiltonian would be replaced by the conserved quantity associated to the symmetry (for instance, angular or linear momentum).

Representations

The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem states that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, and hence with a more intuitive link to the classical limit. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics.

The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent.
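The Fock (number) representation is easy to set up explicitly in a truncated basis; the truncation dimension below is an arbitrary choice, and ħ = ω = 1:

```python
# Sketch: ladder operators in a truncated Fock basis for the harmonic
# oscillator; H = a†a + 1/2 then has eigenvalues n + 1/2 (hbar = omega = 1).
import numpy as np

N = 8                                          # assumed truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
adag = a.conj().T                              # creation operator

number_op = adag @ a                           # exactly diag(0, 1, ..., N-1)
H = number_op + 0.5 * np.eye(N)
energies = np.sort(np.linalg.eigvalsh(H))      # the familiar ladder n + 1/2
```

In this truncation the number operator is exactly diagonal, so the low-lying spectrum n + 1/2 comes out with no approximation error.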

Time as an operator

The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated to a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter s, and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in s would be generated by a "Hamiltonian" H − E, where E is the energy operator and H is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of H − E (this requires the use of a rigged Hilbert space and a renormalization of the norm).

This is related to the quantization of constrained systems and quantization of gauge theories. It is also possible to formulate a quantum theory of "events" where time becomes an observable (see D. Edwards).

Spin

In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position r and time t as continuous variables, ψ = ψ(r, t); for spin wavefunctions the spin is an additional discrete variable, ψ = ψ(r, t, σ), where σ takes the values:
\sigma =-S\hbar ,-(S-1)\hbar ,\dots ,0,\dots ,+(S-1)\hbar ,+S\hbar \,.
That is, the state of a single particle with spin S is represented by a (2S + 1)-component spinor of complex-valued wave functions.
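For the simplest nontrivial case, S = 1/2, the two-component spinor carries spin operators given by half the Pauli matrices (a standard fact, sketched here with ħ = 1):

```python
# Sketch for S = 1/2: spin operators S_k = (1/2) * Pauli_k (hbar = 1),
# obeying [S_x, S_y] = i S_z and S^2 = S(S+1) I = (3/4) I.
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sy - sy @ sx                 # should equal i * sz
S_squared = sx @ sx + sy @ sy + sz @ sz        # should equal (3/4) * identity

spin_up = np.array([1.0, 0.0])                 # a 2-component spinor, sigma = +1/2
```

The eigenvalue S(S+1) = 3/4 of S² is the operator counterpart of the statement that the spinor has 2S + 1 = 2 components.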

Two classes of particles with very different behaviour are bosons, which have integer spin (S = 0, 1, 2, ...), and fermions, which possess half-integer spin (S = 1/2, 3/2, 5/2, ...).

Pauli's principle

The property of spin relates to another basic property concerning systems of N identical particles: Pauli's exclusion principle, which is a consequence of the following permutation behaviour of an N-particle wave function; again in the position representation one must postulate that for the transposition of any two of the N particles one always should have
Pauli principle \psi (\dots ,\,\mathbf {r} _{i},\sigma _{i},\,\dots ,\,\mathbf {r} _{j},\sigma _{j},\,\dots )=(-1)^{2S}\cdot \psi (\dots ,\,\mathbf {r} _{j},\sigma _{j},\,\dots ,\mathbf {r} _{i},\sigma _{i},\,\dots )
i.e., on transposition of the arguments of any two particles the wavefunction should reproduce, apart from a prefactor (−1)2S which is +1 for bosons, but (−1) for fermions. Electrons are fermions with S = 1/2; quanta of light are bosons with S = 1. In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories also "supersymmetric" theories exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in dimension d = 2 can one construct entities where (−1)2S is replaced by an arbitrary complex number with magnitude 1, called anyons.
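The fermionic case can be sketched with a Slater determinant, the standard antisymmetrized product of single-particle orbitals; the orbitals below are hypothetical choices for illustration:

```python
# Sketch: a two-fermion wavefunction antisymmetrized as a Slater determinant;
# exchanging the particles flips the sign, and putting both particles in the
# same orbital gives zero -- the exclusion principle.
import numpy as np

def slater(phi_a, phi_b, x1, x2):
    """Unnormalized antisymmetric two-particle wavefunction."""
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

phi_a = lambda x: np.exp(-x ** 2)              # hypothetical orbital
phi_b = lambda x: x * np.exp(-x ** 2)          # a second, distinct orbital

antisymmetric = np.isclose(slater(phi_a, phi_b, 0.3, 1.1),
                           -slater(phi_a, phi_b, 1.1, 0.3))
excluded = np.isclose(slater(phi_a, phi_a, 0.3, 1.1), 0.0)
```

Setting phi_b = phi_a makes the determinant vanish identically, which is exactly the exclusion of two fermions from the same single-particle state.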

Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. In particular, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of the two properties.

The problem of measurement

The picture given in the preceding paragraphs is sufficient for description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement.[5] The von Neumann description of quantum measurement of an observable A, when the system is prepared in a pure state ψ is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain):
  • Let A have spectral resolution
A=\int \lambda \,d\operatorname {E} _{A}(\lambda ),
where EA is the resolution of the identity (also called projection-valued measure) associated to A. Then the probability of the measurement outcome lying in an interval B of R is ‖EA(B)ψ‖^2. In other words, the probability is obtained by integrating the characteristic function of B against the countably additive measure
\langle \psi \mid \operatorname {E} _{A}\psi \rangle .
  • If the measured value is contained in B, then immediately after the measurement, the system will be in the (generally non-normalized) state EA(B)ψ. If the measured value does not lie in B, replace B by its complement for the above state.
For example, suppose the state space is the n-dimensional complex Hilbert space Cn and A is a Hermitian matrix with eigenvalues λi, with corresponding eigenvectors ψi. The projection-valued measure associated with A, EA, is then
\operatorname {E} _{A}(B)=|\psi _{i}\rangle \langle \psi _{i}|,
where B is a Borel set containing only the single eigenvalue λi. If the system is prepared in state
|\psi \rangle \,
then the probability of a measurement returning the value λi can be calculated by integrating the spectral measure
\langle \psi \mid \operatorname {E} _{A}\psi \rangle
over Bi. This gives trivially
\langle \psi |\psi _{i}\rangle \langle \psi _{i}\mid \psi \rangle =|\langle \psi \mid \psi _{i}\rangle |^{2}.
The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate.
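The finite-dimensional example above can be sketched directly; the observable below is an arbitrary Hermitian matrix chosen for illustration:

```python
# Sketch: Born probabilities from the spectral projectors of a Hermitian A,
# collapse onto an eigenspace, and the projection postulate (an immediate
# repetition of the measurement gives the same outcome with certainty).
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])                    # assumed toy observable
evals, evecs = np.linalg.eigh(A)

psi = np.array([1.0, 0.0])
probs = np.abs(evecs.conj().T @ psi) ** 2      # P(lambda_i) = |<psi_i|psi>|^2

# Suppose the outcome lambda_0 occurred: project onto its eigenspace and
# renormalize the (generally non-normalized) post-measurement state.
P0 = np.outer(evecs[:, 0], evecs[:, 0].conj())
post = P0 @ psi
post = post / np.linalg.norm(post)

repeat_prob = np.abs(np.vdot(evecs[:, 0], post)) ** 2   # equals 1 after collapse
```

The repeat probability of 1 is the numerical content of the projection postulate.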

A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections
|\psi _{i}\rangle \langle \psi _{i}|\,
by a finite set of positive operators
F_{i}F_{i}^{*}\,
whose sum is still the identity operator as before (the resolution of identity). Just as a set of possible outcomes {λ1 ... λn} is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is λi. Instead of collapsing to the (unnormalized) state
|\psi _{i}\rangle \langle \psi _{i}|\psi \rangle \,
after the measurement, the system now will be in the state
F_{i}|\psi \rangle .\,
Since the Fi Fi* operators need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds.
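A concrete illustration (not from the article) is the symmetric three-outcome qubit POVM sometimes called the "trine" measurement; it is a hypothetical example chosen because its effects visibly fail to be projections:

```python
# Sketch: a three-outcome qubit POVM. The effects are positive, sum to the
# identity, but are NOT orthogonal projections, so the projection postulate
# of von Neumann no longer applies.
import numpy as np

def trine(k):
    """k-th trine state, k = 0, 1, 2 (real unit vectors 120 degrees apart)."""
    theta = 2.0 * np.pi * k / 3.0
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

effects = [(2.0 / 3.0) * np.outer(trine(k), trine(k)) for k in range(3)]

resolution = np.allclose(sum(effects), np.eye(2))   # effects sum to identity

psi = np.array([1.0, 0.0])
probs = np.array([np.vdot(psi, E @ psi).real for E in effects])

not_projective = not np.allclose(effects[0] @ effects[0], effects[0])  # E^2 != E
```

Because the three effects overlap, no outcome pins the state to an orthogonal subspace, which is exactly why repeated measurements need not agree.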

The same formulation applies to general mixed states.

In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary. However, since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps which do not increase the trace.

In any case it seems that the above-mentioned problems can only be resolved if the time evolution included not only the quantum system, but also, and essentially, the classical measurement apparatus (see above).

The relative state interpretation

An alternative interpretation of measurement is Everett's relative state interpretation, which was later dubbed the "many-worlds interpretation" of quantum physics.

Latent heat

From Wikipedia, the free encyclopedia

Latent heat is thermal energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process — usually a first-order phase transition.

Latent heat can be understood as heat energy in hidden form which is supplied or extracted to change the state of a substance without changing its temperature. Examples are latent heat of fusion and latent heat of vaporization involved in phase changes, i.e. a substance condensing or vaporizing at a specified temperature and pressure.[1][2]

The term was introduced around 1762 by British chemist Joseph Black. It is derived from the Latin latere (to lie hidden). Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.

In contrast to latent heat, sensible heat is a heat transfer that results in a temperature change in a body.

Usage

The terms "sensible heat" and "latent heat" refer to types of heat transfer between a body and its surroundings; they depend on the properties of the body. "Sensible heat" is "sensed" or felt in a process as a change in the body's temperature. "Latent heat" is heat transferred in a process without change of the body's temperature, for example, in a phase change (solid / liquid / gas).

Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.

The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related latent heats. These latent heats are defined independently of the conceptual framework of thermodynamics.[3]

When a body is heated at constant temperature, for example by thermal radiation in a microwave field, it may expand by an amount described by its latent heat with respect to volume (latent heat of expansion), or increase its pressure by an amount described by its latent heat with respect to pressure.[4] Two common forms of latent heat are latent heat of fusion (melting) and latent heat of vaporization (boiling). These names describe the direction of energy flow when changing from one phase to the next: from solid to liquid, and from liquid to gas.

In both cases the change is endothermic, meaning that the system absorbs energy. For example, when water evaporates, energy is required for the water molecules to overcome the forces of attraction between them; the transition from water to vapor therefore requires an input of energy.

If the vapor then condenses to a liquid on a surface, then the vapor's latent energy absorbed during evaporation is released as the liquid's sensible heat onto the surface.

The large value of the enthalpy of condensation of water vapor is the reason that steam is a far more effective heating medium than boiling water, and is more hazardous.

Meteorology

In meteorology, latent heat flux is the flux of heat from the Earth's surface to the atmosphere that is associated with evaporation or transpiration of water at the surface and subsequent condensation of water vapor in the troposphere. It is an important component of Earth's surface energy budget. Latent heat flux has commonly been measured with the Bowen ratio technique or, more recently, with the eddy covariance method.

History

The English word latent comes from Latin latēns, meaning lying hidden.[5][6] The term latent heat was introduced into calorimetry around 1750 when Joseph Black, commissioned by producers of Scotch whisky in search of ideal quantities of fuel and water for their distilling process,[7] began studying system changes, such as of volume and pressure, when the thermodynamic system was held at constant temperature in a thermal bath. James Prescott Joule characterised latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and sensible heat as an energy that was indicated by the thermometer,[8] relating the latter to thermal energy.

Specific latent heat

A specific latent heat (L) expresses the amount of energy in the form of heat (Q) required to completely effect a phase change of a unit of mass (m), usually 1 kg, of a substance as an intensive property:
L = \frac {Q}{m}.
Intensive properties are material characteristics and are not dependent on the size or extent of the sample. Commonly quoted and tabulated in the literature are the specific latent heat of fusion and the specific latent heat of vaporization for many substances.

From this definition, the latent heat for a given mass of a substance is calculated by
Q = mL
where:
Q is the amount of energy released or absorbed during the change of phase of the substance (in kJ or in BTU),
m is the mass of the substance (in kg or in lb), and
L is the specific latent heat for a particular substance (kJ kg−1 or in BTU lb−1), either Lf for fusion, or Lv for vaporization.
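A short worked example of Q = mL, using the (approximate) tabulated values for water that appear in the table below:

```python
# Energy to melt and then fully vaporize 2 kg of water at standard pressure,
# using Q = m * L with tabulated (approximate) specific latent heats.
m = 2.0               # kg of water
L_fusion = 334.0      # kJ/kg, specific latent heat of fusion of water
L_vapor = 2264.705    # kJ/kg, specific latent heat of vaporization of water

Q_melt = m * L_fusion          # kJ absorbed while the ice melts
Q_boil = m * L_vapor           # kJ absorbed while the liquid vaporizes
Q_total = Q_melt + Q_boil      # total latent heat absorbed, in kJ
```

Note that vaporizing the water takes roughly seven times as much energy as melting it, which is why the enthalpy of vaporization dominates such budgets.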

Table of specific latent heats

The following table shows the specific latent heats and change of phase temperatures (at standard pressure) of some common fluids and gases.

Substance           S.L.H. of fusion   Melting point   S.L.H. of vaporization   Boiling point
                    (kJ/kg)            (°C)            (kJ/kg)                  (°C)
Ethyl alcohol       108                −114            855                      78.3
Ammonia             332.17             −77.74          1369                     −33.34
Carbon dioxide      184                −78             574                      −57
Helium              –                  –               21                       −268.93
Hydrogen(2)         58                 −259            455                      −253
Lead[9]             23.0               327.5           871                      1750
Nitrogen            25.7               −210            200                      −196
Oxygen              13.9               −219            213                      −183
Refrigerant R134a   –                  −101            215.9                    −26.6
Refrigerant R152a   –                  −116            326.5                    −25
Toluene             72.1               −93             351                      110.6
Turpentine          –                  –               293                      –
Water               334                0               2264.705                 100

Specific latent heat for condensation of water in clouds

The specific latent heat of condensation of water in the temperature range from −25 °C to 40 °C is approximated by the following empirical cubic function:
L_\text{water}(T) = (2500.8 - 2.36 T + 0.0016 T^2 - 0.00006 T^3)~\text{J/g},
where the temperature T is taken to be the numerical value in °C.

For sublimation and deposition from and into ice, the specific latent heat is almost constant in the temperature range from −40 °C to 0 °C and can be approximated by the following empirical quadratic function:
L_\text{ice}(T) = (2834.1 - 0.29 T - 0.004 T^2)~\text{J/g}.[10]
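The two empirical fits above can be evaluated directly (temperature as a number in °C, result in J/g):

```python
# Evaluating the two empirical fits for the specific latent heat of water.
def L_water(T):
    """Condensation of water vapor, valid roughly for -25 C to 40 C (J/g)."""
    return 2500.8 - 2.36 * T + 0.0016 * T ** 2 - 0.00006 * T ** 3

def L_ice(T):
    """Sublimation/deposition for ice, valid roughly for -40 C to 0 C (J/g)."""
    return 2834.1 - 0.29 * T - 0.004 * T ** 2

L0 = L_water(0.0)                      # 2500.8 J/g at 0 deg C
falls_with_T = L_water(30.0) < L0      # condensation heat decreases as T rises
```

The decrease with temperature is consistent with the next section: the heat of vaporization falls toward zero as the critical point is approached.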

Variation with temperature (or pressure)

Figure: temperature dependence of the heats of vaporization for water, methanol, benzene, and acetone.

As the temperature (or pressure) rises to the critical point, the latent heat of vaporization falls to zero.
