
Tuesday, June 5, 2018

Chemical thermodynamics

From Wikipedia, the free encyclopedia

Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes.

The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.[1]

History


J. Willard Gibbs - founder of chemical thermodynamics

In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics.[2] Building on the work of Clausius, between 1873 and 1876 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be applied graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions and their tendencies to occur or proceed. Gibbs' collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot.

During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes, and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the term chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the Methods of Willard Gibbs, written by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.[1]

Overview

The primary objective of chemical thermodynamics is the establishment of a criterion for the determination of the feasibility or spontaneity of a given transformation.[3] In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes:
  1. Chemical reactions
  2. Phase changes
  3. The formation of solutions
The following state functions are of primary concern in chemical thermodynamics: internal energy (U), enthalpy (H), entropy (S), and Gibbs free energy (G).
Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions.

The three laws of thermodynamics:
  1. The energy of the universe is constant.
  2. In any spontaneous process, there is always an increase in the entropy of the universe. (DJS -- of the universe! Many errors are made here by forgetting that.)
  3. The entropy of a perfect (well-ordered) crystal at 0 kelvin is zero.

Chemical energy

Chemical energy is the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. Breaking or making of chemical bonds involves energy or heat, which may be either absorbed or evolved from a chemical system.

Energy that can be released (or absorbed) because of a reaction between a set of chemical substances is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical reaction:
\Delta U = \Delta_{\rm f}U^{\rm o}_{\mathrm{products}} - \Delta_{\rm f}U^{\rm o}_{\mathrm{reactants}}
where \Delta_{\rm f}U^{\rm o}_{\mathrm{reactants}} is the internal energy of formation of the reactant molecules, which can be calculated from the bond energies of the various chemical bonds of the molecules under consideration, and \Delta_{\rm f}U^{\rm o}_{\mathrm{products}} is the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume, as in a closed rigid container such as a bomb calorimeter. However, under conditions of constant pressure, as in reactions in vessels open to the atmosphere, the measured heat change is not always equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case, the enthalpy of formation.)
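To make the constant-volume vs. constant-pressure distinction concrete, here is a minimal sketch. The reaction (A + 2 B → AB2) and all formation values are hypothetical, chosen only for illustration; for ideal gases the enthalpy change differs from the internal energy change by Δn_gas·R·T.

```python
# Sketch: internal energy vs. enthalpy change for a gas-phase reaction.
# The species and formation values below are made up, for illustration only.
R = 8.314    # J/(mol*K), gas constant
T = 298.15   # K

# Hypothetical standard internal energies of formation, J/mol
dU_f = {"A": -50_000.0, "B": -30_000.0, "AB2": -160_000.0}

# Reaction: A + 2 B -> AB2 (stoichiometric coefficients: products +, reactants -)
nu = {"A": -1, "B": -2, "AB2": +1}

# Change in internal energy: sum over species of nu_i * dU_f,i
dU_rxn = sum(nu[s] * dU_f[s] for s in nu)   # -50000.0 J/mol for these numbers

# At constant pressure, pressure-volume work also contributes.
# For ideal gases: dH = dU + (change in moles of gas) * R * T
dn_gas = sum(nu.values())                   # here: 1 - 3 = -2
dH_rxn = dU_rxn + dn_gas * R * T            # slightly more negative than dU_rxn
```

Here the mole count of gas decreases, so the constant-pressure heat release (enthalpy) exceeds the constant-volume one by |Δn_gas|·R·T.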

Another useful term is the heat of combustion, which is the energy released due to a combustion reaction and often applied in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized, its caloric content is similar (though not assessed in the same way as a hydrocarbon fuel; see food energy).

In chemical thermodynamics, the term used for the chemical potential energy is chemical potential, and for chemical transformation the equation most often used is the Gibbs–Duhem equation.

Chemical reactions

In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which always create entropy unless they are at equilibrium, or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" materials, the free energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { Ni }, the number of chemical species, are omitted from the formulae, it is impossible to describe compositional changes.

Gibbs function or Gibbs energy

For a "bulk" (unstructured) system, the amounts { Ni } are the last remaining extensive variables. For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables { Ni } that G depends on, which specify the composition: the amounts of each chemical substance, expressed as the numbers of molecules present or (dividing by Avogadro's number, 6.022 × 10^23) the numbers of moles
G = G(T, P, \{N_i\})\,.
For the case where only PV work is possible
\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P + \sum_i \mu_i\,\mathrm{d}N_i
in which μi is the chemical potential for the i-th component in the system
\mu_i = \left(\frac{\partial G}{\partial N_i}\right)_{T,P,N_{j\neq i}}\,.
The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the conditions in living creatures
(\mathrm{d}G)_{T,P} = \sum_i \mu_i\,\mathrm{d}N_i\,.
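As a numerical check of this definition, the sketch below builds an ideal-mixture Gibbs function (the species and standard potentials μ°_i are assumptions, set to zero for simplicity) and compares μ_i obtained as a finite-difference ∂G/∂N_i at constant T, P, and N_j with the analytic ideal-solution expression μ_i = μ°_i + RT ln(x_i P/P°):

```python
import math

# Sketch: chemical potential as the partial derivative of G with respect
# to the amount of one component. mu0 values are hypothetical; pressure
# is in bar with the standard pressure taken as 1 bar.
R, T, P = 8.314, 298.15, 1.0
mu0 = {"N2": 0.0, "O2": 0.0}

def G(N):
    """Ideal-mixture Gibbs function G(T, P, {N_i})."""
    Ntot = sum(N.values())
    return sum(n * (mu0[s] + R * T * math.log(n / Ntot * P)) for s, n in N.items())

N = {"N2": 0.79, "O2": 0.21}           # moles of each component
h = 1e-6                               # small change in moles of O2, N_j fixed
Np = dict(N); Np["O2"] += h
mu_O2_numeric = (G(Np) - G(N)) / h     # finite-difference (dG/dN_O2)_{T,P,N_j}
mu_O2_analytic = mu0["O2"] + R * T * math.log(N["O2"] / sum(N.values()) * P)
# the two agree to within roughly 0.01 J/mol
```

The cross-terms from Σ N_j ln x_j cancel in the derivative, which is why the simple analytic form matches the numerical slope.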

Chemical affinity

While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components ( Ni ) can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind. Whatever molecules are transferred to or from should be considered part of the "system".

Consequently, we introduce an explicit variable to represent the degree of advancement of a process, a progress variable ξ for the extent of reaction (Prigogine & Defay, p. 18; Prigogine, pp. 4–7; Guggenheim, pp. 37, 62), and use the partial derivative ∂G/∂ξ (in place of the widely used "ΔG", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction
(\mathrm{d}G)_{T,P} = \left(\frac{\partial G}{\partial \xi}\right)_{T,P}\,\mathrm{d}\xi\,.
If we introduce the stoichiometric coefficient for the i-th component in the reaction
\nu_i = \partial N_i/\partial \xi\,
which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative
\left(\frac{\partial G}{\partial \xi}\right)_{T,P} = \sum_i \mu_i \nu_i = -\mathbb{A}\,
where (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp. 37, 240) we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. The minus sign comes from the fact that the affinity was defined so that spontaneous changes ensue only when the change in the Gibbs free energy of the process is negative, meaning that the chemical species have a positive affinity for each other. The differential of G then takes on a simple form which displays its dependence on compositional change
(\mathrm{d}G)_{T,P} = -\mathbb{A}\,\mathrm{d}\xi\,.
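A quick numerical illustration of the affinity, using a made-up reaction (A + 2 B → AB2) and hypothetical chemical potentials:

```python
# Sketch: De Donder affinity for a single reaction. The chemical
# potentials (J/mol) and the reaction itself are hypothetical.
mu = {"A": -10_000.0, "B": -5_000.0, "AB2": -35_000.0}
nu = {"A": -1, "B": -2, "AB2": +1}        # products +, reactants -

dG_dxi = sum(mu[s] * nu[s] for s in nu)   # (dG/dxi)_{T,P}
A = -dG_dxi                               # affinity, J/mol of reaction

# A > 0 (equivalently dG/dxi < 0): the forward reaction is spontaneous.
print(A)  # 15000.0
```

With these numbers the affinity is positive, so advancing ξ lowers G and the reaction tends to run forward.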
If there are a number of chemical reactions going on simultaneously, as is usually the case, then
(\mathrm{d}G)_{T,P} = -\sum_k \mathbb{A}_k\,\mathrm{d}\xi_k
for a set of reaction coordinates { ξk }, avoiding the notion that the amounts of the components ( Ni ) can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while in the general case for real systems they are negative, because all chemical reactions proceeding at a finite rate produce entropy. This can be made even more explicit by introducing the reaction rates dξk/dt. For each and every physically independent process (Prigogine & Defay, p. 38; Prigogine, p. 24)
\mathbb{A}\,\dot{\xi} \geq 0\,.
This is a remarkable result since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether the temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless is a general criterion for (−T times) the entropy production from that spontaneous process; or at least any part of it that is not captured as external work. (See Constraints below.)

We now relax the requirement of a homogeneous “bulk” system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the inequality for dG is now replaced by an equality
\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P - \sum_k \mathbb{A}_k\,\mathrm{d}\xi_k + W'\,
or
\mathrm{d}G_{T,P} = -\sum_k \mathbb{A}_k\,\mathrm{d}\xi_k + W'\,.
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or the energy may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings. Or it may go partly toward doing external work and partly toward creating entropy. The important point is that the extent of reaction for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other one also does. The coupling may occasionally be rigid, but it is often flexible and variable.

Solutions

In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (−T times) the entropy produced by spontaneous chemical reactions in situations where there is no work being done; or at least no "useful" work; i.e., other than perhaps some ± P dV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the fundamental thermodynamic relation, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When there is no useful work being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant T, or for constant T and P, the Massieu functions −F/T and −G/T respectively.
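As a small arithmetic sketch of this point (the Gibbs energy value is hypothetical): when no useful work is captured at constant T and P, the entropy produced in system plus surroundings is −ΔG/T, which restates the spontaneity criterion in entropy units.

```python
# Sketch: entropy production from a spontaneous step at constant T and P
# with no useful work captured. The Gibbs energy change is hypothetical.
T = 298.15            # K
dG = -5_000.0         # J, Gibbs energy change of the step (spontaneous: < 0)
dS_universe = -dG / T # J/K, entropy produced in system + surroundings
# dG < 0 if and only if dS_universe > 0: the same criterion, expressed
# in entropy units rather than energy units.
```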

Non-equilibrium

Generally the systems treated with conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear, and irreversible thermodynamics has found surprising applications in a wide variety of fields.

Non-equilibrium thermodynamics has been applied to explain how ordered structures, e.g. biological systems, can develop from disorder. Even if Onsager's relations are utilized, the classical principles of equilibrium thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations, and so cannot explain the occurrence of ordered structures.

Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment.

The method which Prigogine used to study the stability of the dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells to mention but a few examples.

System constraints

In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only PdV work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same, overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a “thought experiment” in chemical kinetics, but actual examples exist.

A gas reaction which results in an increase in the number of molecules will lead to an increase in volume at constant external pressure. If it occurs inside a cylinder closed with a piston, the equilibrated reaction can proceed only by doing work against an external force on the piston. The extent variable for the reaction can increase only if the piston moves, and conversely, if the piston is pushed inward, the reaction is driven backwards.

Similarly, a redox reaction might occur in an electrochemical cell with the passage of current in wires connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work.

The hydrolysis of ATP to ADP and phosphate can drive the force times distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work"; a misnomer for the free energy of another chemical process.

T-symmetry

T-symmetry or time reversal symmetry is the theoretical symmetry of physical laws under the transformation of time reversal:
T: t \mapsto -t.
Although in restricted contexts one may find this symmetry, the observable universe itself does not show symmetry under time reversal, primarily due to the second law of thermodynamics. Hence time is said to be non-symmetric, or asymmetric, except for equilibrium states when the second law of thermodynamics predicts the time symmetry to hold. However, quantum noninvasive measurements are predicted to violate time symmetry even in equilibrium,[1] contrary to their classical counterparts, although it has not yet been experimentally confirmed.

Time asymmetries are generally distinguished as among those:
  1. intrinsic to the dynamic physical law (e.g., for the weak force)
  2. due to the initial conditions of our universe (e.g., for the second law of thermodynamics)
  3. due to measurements (e.g., for the noninvasive measurements)

Invariance

A toy called the teeter-totter illustrates, in cross-section, the two aspects of time reversal invariance. When set into motion atop a pedestal (rocking side to side, as in the image), the figure oscillates for a very long time. The toy is engineered to minimize friction and illustrate the reversibility of Newton's laws of motion. However, the mechanically stable state of the toy is when the figure falls down from the pedestal into one of arbitrarily many positions. This is an illustration of the law of increase of entropy through Boltzmann's identification of the logarithm of the number of states with the entropy.
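The reversibility of frictionless Newtonian motion mentioned above can be demonstrated numerically. This sketch (a unit-mass harmonic oscillator, an assumption chosen for simplicity) integrates forward with the velocity-Verlet scheme, which is itself time-reversible, then flips the velocity and integrates again, recovering the initial state:

```python
# Sketch: time-reversal invariance of frictionless Newtonian motion.
# Velocity-Verlet is a time-reversible integrator, so running forward,
# negating the velocity, and running again returns the starting state.
def accel(x):
    return -x  # harmonic oscillator with unit mass and spring constant

def verlet(x, v, dt, steps):
    for _ in range(steps):
        a = accel(x)
        x += v * dt + 0.5 * a * dt * dt
        v += 0.5 * (a + accel(x)) * dt
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, 0.01, 1000)    # integrate forward
x2, v2 = verlet(x1, -v1, 0.01, 1000)   # reverse the velocity, run again
# (x2, -v2) matches (x0, v0) to machine precision
```

A dissipative term (odd in v, such as friction) would break this exact retracing, which is the macroscopic asymmetry the toy illustrates when it finally falls off its pedestal.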

Physicists also discuss the time-reversal invariance of local and/or macroscopic descriptions of physical systems, independent of the invariance of the underlying microscopic physical laws. For example, Maxwell's equations with material absorption or Newtonian mechanics with friction are not time-reversal invariant at the macroscopic level where they are normally applied, even if they are invariant at the microscopic level; when one includes the atomic motions, the "lost" energy is translated into heat.

Macroscopic phenomena: the second law of thermodynamics

Our daily experience shows that T-symmetry does not hold for the behavior of bulk materials. Of these macroscopic laws, most notable is the second law of thermodynamics. Many other phenomena, such as the relative motion of bodies with friction, or viscous motion of fluids, reduce to this, because the underlying mechanism is the dissipation of usable energy (for example, kinetic energy) into heat.

The question of whether this time-asymmetric dissipation is really inevitable has been considered by many physicists, often in the context of Maxwell's demon. The name comes from a thought experiment described by James Clerk Maxwell in which a microscopic demon guards a gate between two halves of a room. It only lets slow molecules into one half, only fast ones into the other. By eventually making one side of the room cooler than before and the other hotter, it seems to reduce the entropy of the room, and reverse the arrow of time. Many analyses have been made of this; all show that when the entropy of room and demon are taken together, this total entropy does increase. Modern analyses of this problem have taken into account Claude E. Shannon's relation between entropy and information. Many interesting results in modern computing are closely related to this problem — reversible computing, quantum computing and physical limits to computing, are examples. These seemingly metaphysical questions are today, in these ways, slowly being converted into hypotheses of the physical sciences.

The current consensus hinges upon the Boltzmann-Shannon identification of the logarithm of phase space volume with the negative of Shannon information, and hence to entropy. In this notion, a fixed initial state of a macroscopic system corresponds to relatively low entropy because the coordinates of the molecules of the body are constrained. As the system evolves in the presence of dissipation, the molecular coordinates can move into larger volumes of phase space, becoming more uncertain, and thus leading to increase in entropy.

One can, however, equally well imagine a state of the universe in which the motions of all of the particles at one instant were the reverse (strictly, the CPT reverse). Such a state would then evolve in reverse, so presumably entropy would decrease (Loschmidt's paradox). Why is 'our' state preferred over the other?

One position is to say that the constant increase of entropy we observe happens only because of the initial state of our universe. Other possible states of the universe (for example, a universe at heat death equilibrium) would actually result in no increase of entropy. In this view, the apparent T-asymmetry of our universe is a problem in cosmology: why did the universe start with a low entropy? This view, if it remains viable in the light of future cosmological observation, would connect this problem to one of the big open questions beyond the reach of today's physics — the question of initial conditions of the universe.

Macroscopic phenomena: black holes

An object can cross through the event horizon of a black hole from the outside, and then fall rapidly to the central region where our understanding of physics breaks down. Since within a black hole the forward light-cone is directed towards the center and the backward light-cone is directed outward, it is not even possible to define time-reversal in the usual manner. The only way anything can escape from a black hole is as Hawking radiation.

The time reversal of a black hole would be a hypothetical object known as a white hole. From the outside they appear similar. While a black hole has a beginning and is inescapable, a white hole has an ending and cannot be entered. The forward light-cones of a white hole are directed outward; and its backward light-cones are directed towards the center.

The event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. The event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. They are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out.

The modern view of black hole irreversibility is to relate it to the second law of thermodynamics, since black holes are viewed as thermodynamic objects. Indeed, according to the Gauge–gravity duality conjecture, all microscopic processes in a black hole are reversible, and only the collective behavior is irreversible, as in any other macroscopic, thermal system.[citation needed]

Kinetic consequences: detailed balance and Onsager reciprocal relations

In physical and chemical kinetics, T-symmetry of the mechanical microscopic equations implies two important laws: the principle of detailed balance and the Onsager reciprocal relations. T-symmetry of the microscopic description together with its kinetic consequences are called microscopic reversibility.
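Detailed balance can be illustrated with a toy three-state Markov jump process: if the rates satisfy π_i k_ij = π_j k_ji for a stationary distribution π, every elementary forward flow is balanced by its reverse. All numbers below are arbitrary.

```python
# Sketch: detailed balance for a toy 3-state jump process.
# pi is a chosen stationary distribution; forward rates are arbitrary,
# and reverse rates are filled in from the detailed-balance condition.
pi = [0.5, 0.3, 0.2]
k = [[0.0, 1.0, 2.0],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0]]

for i in range(3):
    for j in range(i):
        k[i][j] = k[j][i] * pi[j] / pi[i]   # pi_i k_ij = pi_j k_ji

# every pairwise flow is now balanced by its reverse
for i in range(3):
    for j in range(3):
        assert abs(pi[i] * k[i][j] - pi[j] * k[j][i]) < 1e-12
```

Microscopic reversibility guarantees exactly this pairwise balancing at equilibrium, rather than mere overall stationarity (which would also allow cyclic probability currents).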

Effect of time reversal on some variables of classical physics

Even

Classical variables that do not change upon time reversal include:
\vec{x}, Position of a particle in three-space
\vec{a}, Acceleration of the particle
\vec{F}, Force on the particle
E, Energy of the particle
\phi, Electric potential (voltage)
\vec{E}, Electric field
\vec{D}, Electric displacement
\rho, Density of electric charge
\vec{P}, Electric polarization
Energy density of the electromagnetic field
Maxwell stress tensor
All masses, charges, coupling constants, and other physical constants, except those associated with the weak force.

Odd

Classical variables that time reversal negates include:
t, The time when an event occurs
\vec{v}, Velocity of a particle
\vec{p}, Linear momentum of a particle
\vec{l}, Angular momentum of a particle (both orbital and spin)
\vec{A}, Electromagnetic vector potential
\vec{B}, Magnetic field
\vec{H}, Magnetic auxiliary field
\vec{j}, Density of electric current
\vec{M}, Magnetization
\vec{S}, Poynting vector
Power (rate of work done).

Microscopic phenomena: time reversal invariance

Since most systems are asymmetric under time reversal, it is interesting to ask whether there are phenomena that do have this symmetry. In classical mechanics, a velocity v reverses under the operation of T, but an acceleration does not. Therefore, one models dissipative phenomena through terms that are odd in v. However, delicate experiments in which known sources of dissipation are removed reveal that the laws of mechanics are time reversal invariant. Dissipation itself originates in the second law of thermodynamics.

The motion of a charged body in a magnetic field, B involves the velocity through the Lorentz force term v×B, and might seem at first to be asymmetric under T. A closer look assures us that B also changes sign under time reversal. This happens because a magnetic field is produced by an electric current, J, which reverses sign under T. Thus, the motion of classical charged particles in electromagnetic fields is also time reversal invariant. (Despite this, it is still useful to consider the time-reversal non-invariance in a local sense when the external field is held fixed, as when the magneto-optic effect is analyzed. This allows one to analyze the conditions under which optical phenomena that locally break time-reversal, such as Faraday isolators and directional dichroism, can occur.) The laws of gravity also seem to be time reversal invariant in classical mechanics.
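This point can be checked numerically. The crude simulation below (simple semi-implicit Euler steps, arbitrary units with q/m = 1, my own toy setup rather than anything from the text) runs a charged particle forward in a uniform B field, then applies time reversal by flipping both v and B, and approximately retraces the path back to the starting point:

```python
# Sketch: a charged particle in a uniform magnetic field retraces its
# path under time reversal only when B is flipped along with v.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def run(x, v, B, dt, n, q_over_m=1.0):
    for _ in range(n):
        a = tuple(q_over_m * c for c in cross(v, B))   # Lorentz force v x B
        v = tuple(vi + ai * dt for vi, ai in zip(v, a))
        x = tuple(xi + vi * dt for xi, vi in zip(x, v))
    return x, v

B = (0.0, 0.0, 1.0)
x0, v0 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
x1, v1 = run(x0, v0, B, 1e-3, 2000)                          # forward
x2, v2 = run(x1, tuple(-c for c in v1),                      # time reversal:
             tuple(-c for c in B), 1e-3, 2000)               # flip v AND B
# x2 is close to x0 and v2 close to -v0 (up to integration error)
```

Flipping v alone would send the particle around a new circle instead of retracing the old one, since v × B changes sign; flipping both leaves the acceleration invariant, as the text argues.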

In physics one separates the laws of motion, called kinematics, from the laws of force, called dynamics. Following the classical kinematics of Newton's laws of motion, the kinematics of quantum mechanics is built in such a way that it presupposes nothing about the time reversal symmetry of the dynamics. In other words, if the dynamics are invariant, then the kinematics will allow it to remain invariant; if the dynamics are not, then the kinematics will also show this. The structure of the quantum laws of motion is richer, and we examine these next.

Time reversal in quantum mechanics

Two-dimensional representations of parity are given by a pair of quantum states that go into each other under parity. However, this representation can always be reduced to linear combinations of states, each of which is either even or odd under parity. One says that all irreducible representations of parity are one-dimensional.  Kramers' theorem states that time reversal need not have this property because it is represented by an anti-unitary operator.

This section contains a discussion of the three most important properties of time reversal in quantum mechanics; chiefly,
  1. that it must be represented as an anti-unitary operator,
  2. that it protects non-degenerate quantum states from having an electric dipole moment,
  3. that it has two-dimensional representations with the property T² = −1.
The strangeness of this result is clear if one compares it with parity. If parity transforms a pair of quantum states into each other, then the sum and difference of these two basis states are states of good parity. Time reversal does not behave like this. It seems to violate the theorem that all abelian groups are represented by one-dimensional irreducible representations. The reason it does this is that it is represented by an anti-unitary operator. It thus opens the way to spinors in quantum mechanics.

Anti-unitary representation of time reversal

Eugene Wigner showed that a symmetry operation S of a Hamiltonian is represented, in quantum mechanics, either by a unitary operator, S = U, or an antiunitary one, S = UK, where U is unitary and K denotes complex conjugation. These are the only operations that act on Hilbert space so as to preserve the length of the projection of any one state-vector onto another state-vector.

Consider the parity operator. Acting on the position, it reverses the directions of space, so that PxP−1 = −x. Similarly, it reverses the direction of momentum, so that PpP−1 = −p, where x and p are the position and momentum operators. This preserves the canonical commutator [x, p] = iħ, where ħ is the reduced Planck constant, only if P is chosen to be unitary, PiP−1 = i.

On the other hand, the time reversal operator T does nothing to the x-operator, TxT−1 = x, but it reverses the direction of p, so that TpT−1 = −p. The canonical commutator is invariant only if T is chosen to be anti-unitary, i.e., TiT−1 = −i.

Another argument involves energy, the time-component of the momentum. If time reversal were implemented as a unitary operator, it would reverse the sign of the energy just as space-reversal reverses the sign of the momentum. This is not possible, because, unlike momentum, energy is always positive. Since energy in quantum mechanics is defined as the phase factor exp(-iEt) that one gets when one moves forward in time, the way to reverse time while preserving the sign of the energy is to also reverse the sense of "i", so that the sense of phases is reversed.

Similarly, any operation that reverses the sense of phase, which changes the sign of i, will turn positive energies into negative energies unless it also changes the direction of time. So every antiunitary symmetry in a theory with positive energy must reverse the direction of time. Every antiunitary operator can be written as the product of the time reversal operator and a unitary operator that does not reverse time.

For a particle with spin J, one can use the representation
T=e^{-i\pi J_{y}/\hbar }K,
where Jy is the y-component of the spin, and use of TJT−1 = −J has been made.
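For spin 1/2 this representation can be checked directly: e^{−iπJ_y/ħ} = e^{−iπσ_y/2} = −iσ_y, so T = UK with U = −iσ_y, and applying T twice gives T² = −1. A small numpy sketch:

```python
# Sketch: T = exp(-i*pi*Jy/hbar) K for spin 1/2. Here U = -i*sigma_y
# (the matrix exponential evaluated in closed form) and K is complex
# conjugation, so T acts as psi -> U @ conj(psi).
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])   # Pauli sigma_y
U = -1j * sy                         # equals exp(-i*pi*sigma_y/2)

def T(psi):
    return U @ np.conj(psi)

psi = np.array([0.6, 0.8j])          # an arbitrary spin-1/2 state
# applying time reversal twice gives minus the original state: T^2 = -1
assert np.allclose(T(T(psi)), -psi)
# equivalently, U @ conj(U) = -identity
assert np.allclose(U @ np.conj(U), -np.eye(2))
```

The sign T² = −1 is exactly the half-integer-spin case discussed below in connection with Kramers' theorem.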

Electric dipole moments

This has an interesting consequence for the electric dipole moment (EDM) of any particle. The EDM is defined through the shift in the energy of a state when it is put in an external electric field: Δε = d·E + E·δ·E, where d is called the EDM and δ the induced dipole moment. One important property of an EDM is that the energy shift due to it changes sign under a parity transformation. However, since d is a vector, its expectation value in a state |ψ> must be proportional to <ψ| J |ψ>. Thus, under time reversal, an invariant state must have vanishing EDM. In other words, a non-vanishing EDM signals both P and T symmetry-breaking.[2]

It is interesting to examine this argument further, since one feels that some molecules, such as water, must have EDM irrespective of whether T is a symmetry. This is correct: if a quantum system has degenerate ground states that transform into each other under parity, then time reversal need not be broken to give EDM.

Experimentally observed bounds on the electric dipole moment of the nucleon currently set stringent limits on the violation of time reversal symmetry in the strong interactions, and their modern theory: quantum chromodynamics. Then, using the CPT invariance of a relativistic quantum field theory, this puts strong bounds on strong CP violation.

Experimental bounds on the electron electric dipole moment also place limits on theories of particle physics and their parameters.[3][4]

Kramers' theorem

For T, which is an anti-unitary Z2 symmetry generator,
T² = UKUK = U U* = U (Uᵀ)⁻¹ = Φ,
where Φ is a diagonal matrix of phases. As a result, U = ΦUᵀ and Uᵀ = UΦ, showing that
U = Φ U Φ.
This means that the entries in Φ are ±1, as a result of which one may have either T² = +1 or T² = −1. This is specific to the anti-unitarity of T. For a unitary operator, such as the parity, any phase is allowed.

Next, take a Hamiltonian invariant under T. Let |a> and T|a> be two quantum states of the same energy. Now, if T² = −1, then one finds that the states are orthogonal: a result called Kramers' theorem. This implies that if T² = −1, then there is a twofold degeneracy in the state. This result in non-relativistic quantum mechanics presages the spin-statistics theorem of quantum field theory.
Quantum states that give unitary representations of time reversal, i.e., have T² = 1, are characterized by a multiplicative quantum number, sometimes called the T-parity.
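The orthogonality behind Kramers' theorem can be verified for spin 1/2, where T = −iσ_y K and T² = −1: the overlap <ψ|Tψ> vanishes identically, whatever the state. A short numpy check:

```python
# Sketch: Kramers orthogonality for spin 1/2. With T^2 = -1, a state
# and its time reverse are orthogonal for every choice of psi.
import numpy as np

U = np.array([[0.0, -1.0], [1.0, 0.0]])  # -i*sigma_y, a real matrix

def T(psi):
    return U @ np.conj(psi)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)  # random spinor
# <psi|T psi> = psi1* (-psi2*) + psi2* (psi1*) = 0 identically
assert abs(np.vdot(psi, T(psi))) < 1e-12
```

Since |ψ> and T|ψ> are orthogonal yet degenerate for a T-invariant Hamiltonian, every energy level of such a half-integer-spin system is at least doubly degenerate.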

Time reversal for fermions in quantum field theories can be represented by an 8-component spinor in which the above-mentioned T-parity can be a complex number with unit radius. CPT invariance is not a theorem but a better-to-have property in this class of theories.

Time reversal of the known dynamical laws

Particle physics codified the basic laws of dynamics into the standard model. This is formulated as a quantum field theory that has CPT symmetry, i.e., the laws are invariant under simultaneous operation of time reversal, parity and charge conjugation. However, time reversal itself is seen not to be a symmetry (this is usually called CP violation). There are two possible origins of this asymmetry, one through the mixing of different flavours of quarks in their weak decays, the second through a direct CP violation in strong interactions. The first is seen in experiments, the second is strongly constrained by the non-observation of the EDM of a neutron.

It is important to stress that this time reversal violation is unrelated to the second law of thermodynamics, because due to the conservation of the CPT symmetry, the effect of time reversal is to rename particles as antiparticles and vice versa. Thus the second law of thermodynamics is thought to originate in the initial conditions in the universe.

Time reversal of noninvasive measurements

Strong measurements (both classical and quantum) are certainly disturbing, causing asymmetry due to the second law of thermodynamics. However, noninvasive measurements should not disturb the evolution, so they are expected to be time-symmetric. Surprisingly, this holds only in classical physics, not in quantum physics, even in a thermodynamically invariant equilibrium state.[1] This type of asymmetry is independent of CPT symmetry but has not yet been confirmed experimentally due to the extreme conditions of the checking proposal.
