
Monday, July 17, 2017

Hydrogen bond

From Wikipedia, the free encyclopedia
 
AFM image of naphthalenetetracarboxylic diimide molecules on silver-terminated silicon, interacting via hydrogen bonding, taken at 77 K.[1] ("Hydrogen bonds" in the top image are exaggerated by artifacts of the imaging technique.[2][3])
Model of hydrogen bonds (1) between molecules of water

A hydrogen bond is the electrostatic attraction between two polar groups that occurs when a hydrogen (H) atom covalently bound to a highly electronegative atom such as nitrogen (N), oxygen (O), or fluorine (F) experiences the electrostatic field of another highly electronegative atom nearby.
Hydrogen bonds can occur between molecules (intermolecular) or between different parts of a single molecule (intramolecular).[4] Depending on geometry and environment, the hydrogen bond free energy content is between 1 and 5 kcal/mol. This makes it stronger than a van der Waals interaction, but weaker than covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins.

Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides that have much weaker hydrogen bonds.[5] Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
In 2011, an IUPAC Task Group recommended a modern evidence-based definition of hydrogen bonding, which was published in the IUPAC journal Pure and Applied Chemistry. This definition specifies:
The hydrogen bond is an attractive interaction between a hydrogen atom from a molecule or a molecular fragment X–H in which X is more electronegative than H, and an atom or a group of atoms in the same or a different molecule, in which there is evidence of bond formation.[6]
An accompanying detailed technical report provides the rationale behind the new definition.[7]

Bonding

An example of intermolecular hydrogen bonding in a self-assembled dimer complex reported by Meijer and coworkers.[8] The hydrogen bonds are represented by dotted lines.
Intramolecular hydrogen bonding in acetylacetone helps stabilize the enol tautomer.

A hydrogen atom attached to a relatively electronegative atom will play the role of the hydrogen bond donor.[9] This electronegative atom is usually fluorine, oxygen, or nitrogen. A hydrogen attached to carbon can also participate in hydrogen bonding when the carbon atom is bound to electronegative atoms, as is the case in chloroform, CHCl3.[10][11][12] An example of a hydrogen bond donor is the hydrogen from the hydroxyl group of ethanol, which is bonded to an oxygen.

In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named the proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor.
Examples of hydrogen bond donating (donors) and hydrogen bond accepting groups (acceptors)
Cyclic dimer of acetic acid; dashed green lines represent hydrogen bonds

In the donor molecule, the electronegative atom attracts the electron cloud from around the hydrogen nucleus of the donor, and, by decentralizing the cloud, leaves the atom with a positive partial charge. Because of the small size of hydrogen relative to other atoms and molecules, the resulting charge, though only partial, represents a large charge density. A hydrogen bond results when this strong positive charge density attracts a lone pair of electrons on another heteroatom, which then becomes the hydrogen-bond acceptor.

The hydrogen bond is often described as an electrostatic dipole-dipole interaction. However, it also has some features of covalent bonding: it is directional and strong, produces interatomic distances shorter than the sum of the van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a type of valence. These covalent features are more substantial when acceptors bind hydrogens from more electronegative donors.

The partially covalent nature of a hydrogen bond raises the following questions: "To which molecule or atom does the hydrogen nucleus belong?" and "Which should be labeled 'donor' and which 'acceptor'?" Usually, this is simple to determine on the basis of interatomic distances in the X−H···Y system, where the dots represent the hydrogen bond: the X−H distance is typically ≈110 pm, whereas the H···Y distance is ≈160 to 200 pm. Liquids that display hydrogen bonding (such as water) are called associated liquids.

Hydrogen bonds can vary in strength from very weak (1–2 kJ mol−1) to extremely strong (161.5 kJ mol−1 in the bifluoride ion, [HF2]−).[13][14] Typical enthalpies in vapor include:
  • F−H···:F (161.5 kJ/mol or 38.6 kcal/mol)
  • O−H···:N (29 kJ/mol or 6.9 kcal/mol)
  • O−H···:O (21 kJ/mol or 5.0 kcal/mol)
  • N−H···:N (13 kJ/mol or 3.1 kcal/mol)
  • N−H···:O (8 kJ/mol or 1.9 kcal/mol)
  • HO−H···:OH3+ (18 kJ/mol[15] or 4.3 kcal/mol; data obtained using molecular dynamics as detailed in the reference, and should be compared to 7.9 kJ/mol for bulk water, obtained using the same molecular dynamics.)
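The paired kJ/mol and kcal/mol figures in the list above can be cross-checked with the standard thermochemical conversion factor, 1 kcal = 4.184 kJ; a minimal sketch:

```python
# Cross-check the kJ/mol vs kcal/mol figures listed above using the
# standard thermochemical conversion 1 kcal = 4.184 kJ.
KJ_PER_KCAL = 4.184

enthalpies_kj = {
    "F-H...F": 161.5,
    "O-H...N": 29.0,
    "O-H...O": 21.0,
    "N-H...N": 13.0,
    "N-H...O": 8.0,
}

for bond, kj in enthalpies_kj.items():
    kcal = kj / KJ_PER_KCAL
    print(f"{bond}: {kj:6.1f} kJ/mol = {kcal:4.1f} kcal/mol")
```

Rounding each result to one decimal place reproduces the kcal/mol values quoted in the list.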
Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed large differences between individual H bonds of the same type. For example, the central interresidue N−H···N hydrogen bond between guanine and cytosine is much stronger than the N−H···N bond between the adenine-thymine pair.[16]

The length of hydrogen bonds depends on bond strength, temperature, and pressure. The bond strength itself is dependent on temperature, pressure, bond angle, and environment (usually characterized by local dielectric constant). The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor. The following hydrogen bond angles between a hydrofluoric acid donor and various acceptors have been determined experimentally:[17]

Acceptor···donor   VSEPR geometry    Angle (°)
HCN···HF           linear            180
H2CO···HF          trigonal planar   120
H2O···HF           pyramidal         46
H2S···HF           pyramidal         89
SO2···HF           trigonal          142

History

In the book The Nature of the Chemical Bond, Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912.[18][19] Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush.[20] In that paper, Latimer and Rodebush cite work by a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds."

Hydrogen bonds in water

Crystal structure of hexagonal ice. Gray dashed lines indicate hydrogen bonds

The most ubiquitous and perhaps simplest example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. Two molecules of water can form a hydrogen bond between them; the simplest case, when only two molecules are present, is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules, as shown in the figure (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances.

Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four. For example, hydrogen fluoride—which has three lone pairs on the F atom but only one H atom—can form only two bonds (ammonia has the opposite problem: three hydrogen atoms but only one lone pair).
H−F···H−F···H−F
The exact number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and depends on the temperature.[21] From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69.[21] A more recent study found a much smaller number of hydrogen bonds: 2.357 at 25 °C.[22] The differences may be due to the use of a different method for defining and counting the hydrogen bonds.
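As the paragraph above notes, the counts depend on how a hydrogen bond is defined and detected. One common family of definitions is purely geometric; the sketch below uses an illustrative distance-and-angle criterion (the cutoff values are typical of the literature but are assumptions here, and published analyses vary):

```python
def is_hydrogen_bond(d_oo_angstrom, angle_deg,
                     d_cutoff=3.5, angle_cutoff=30.0):
    """Toy geometric hydrogen-bond criterion for water.

    A donor-acceptor pair counts as hydrogen bonded when the
    oxygen-oxygen distance is below d_cutoff (angstroms) and the
    angle between the O-H bond and the O...O axis is below
    angle_cutoff (degrees). The cutoffs are illustrative only;
    different studies use different values, which is one reason
    reported hydrogen-bond counts per molecule disagree.
    """
    return d_oo_angstrom < d_cutoff and angle_deg < angle_cutoff

# Typical liquid-water pair: O...O near 2.8 angstroms, nearly linear
print(is_hydrogen_bond(2.8, 10.0))   # True
# Too far apart to count as hydrogen bonded
print(is_hydrogen_bond(4.0, 10.0))   # False
```

Tightening or loosening these cutoffs changes the average number of bonds counted per molecule, consistent with the spread between the TIP4P estimates and the more recent study cited above.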

Where the bond strengths are more equivalent, one might instead find the atoms of two interacting water molecules partitioned into two polyatomic ions of opposite charge, specifically hydroxide (OH−) and hydronium (H3O+). (Hydronium ions are also known as "hydroxonium" ions.)
2 H2O ⇌ OH− + H3O+
Indeed, in pure water under conditions of standard temperature and pressure, this latter formulation is applicable only rarely; on average about one in every 5.5 × 10^8 molecules gives up a proton to another water molecule, in accordance with the value of the dissociation constant for water under such conditions. It is a crucial part of the uniqueness of water.
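The one-in-5.5 × 10^8 figure follows directly from the ion product of water; a quick back-of-the-envelope check (Kw and the molar concentration of pure water are standard textbook values):

```python
# Back-of-the-envelope check of the "one in 5.5e8 molecules" figure.
# Kw and the molarity of pure water are standard textbook values.
Kw = 1.0e-14          # ion product of water at 25 C (mol^2/L^2)
c_water = 55.5        # molar concentration of pure water (mol/L)

c_h3o = Kw ** 0.5     # [H3O+] = 1e-7 mol/L in pure water
fraction_ionized = c_h3o / c_water
print(f"about 1 molecule in {1 / fraction_ionized:.2e} donates a proton")
# -> roughly 1 in 5.5e8, matching the figure quoted above
```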

Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes.[23] Hydrogen bonds between water molecules have an average lifetime of 10^−11 seconds, or 10 picoseconds.[24]

Bifurcated and over-coordinated hydrogen bonds in water

A single hydrogen atom can participate in two hydrogen bonds, rather than one. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex natural or synthetic organic molecules.[25] It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation.[26]

Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form a bifurcation (this is called an overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, which begin on the same oxygen's hydrogens.[27]

Hydrogen bonds in DNA and proteins

The structure of part of a DNA double helix
Hydrogen bonding between guanine and cytosine, one of two types of base pairs in DNA.

Hydrogen bonding also plays an important role in determining the three-dimensional structures adopted by proteins and nucleic acids. In these macromolecules, bonding between parts of the same macromolecule causes it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication.

In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions i and i + 4, an alpha helix is formed. When the spacing is less, between positions i and i + 3, a 3₁₀ helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of proteins through interaction of R-groups. (See also protein folding.)
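The spacing rule above can be captured in a tiny lookup. This is a sketch only: real secondary-structure assignment (e.g. the DSSP algorithm) also weighs hydrogen-bond energies and backbone geometry, not residue spacing alone:

```python
def helix_type_from_spacing(offset):
    """Map the residue offset of a backbone i -> i+n hydrogen bond
    to the helix type it characterizes. Illustrative only: actual
    secondary-structure assignment (e.g. DSSP) also uses
    hydrogen-bond energies and backbone geometry.
    """
    return {
        3: "3_10 helix",   # i -> i+3 hydrogen bonds
        4: "alpha helix",  # i -> i+4 hydrogen bonds
        5: "pi helix",     # i -> i+5 hydrogen bonds (rarer)
    }.get(offset, "not a standard helix spacing")

print(helix_type_from_spacing(4))  # alpha helix
print(helix_type_from_spacing(3))  # 3_10 helix
```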

The role of hydrogen bonds in protein folding has also been linked to osmolyte-induced protein stabilization. Protective osmolytes, such as trehalose and sorbitol, shift the protein folding equilibrium toward the folded state in a concentration-dependent manner. While the prevalent explanation for osmolyte action relies on excluded-volume effects, which are entropic in nature, recent circular dichroism (CD) experiments have shown osmolytes to act through an enthalpic effect.[28] The molecular mechanism for their role in protein stabilization is still not well established, though several mechanisms have been proposed. Recently, computer molecular dynamics simulations suggested that osmolytes stabilize proteins by modifying the hydrogen bonds in the protein hydration layer.[29]

Several studies have shown that hydrogen bonds play an important role for the stability between subunits in multimeric proteins. For example, a study of sorbitol dehydrogenase displayed an important hydrogen bonding network which stabilizes the tetrameric quaternary structure within the mammalian sorbitol dehydrogenase protein family.[30]

A protein backbone hydrogen bond incompletely shielded from water attack is a dehydron. Dehydrons promote the removal of surrounding water through protein or ligand binding. This exogenous dehydration enhances the electrostatic interaction between the amide and carbonyl groups by de-shielding their partial charges. Furthermore, the dehydration stabilizes the hydrogen bond by destabilizing the nonbonded state consisting of dehydrated isolated charges.[31]

Hydrogen bonds in polymers

Para-aramid structure
A strand of cellulose (conformation Iα), showing the hydrogen bonds (dashed) within and between cellulose molecules.

Many polymers are strengthened by hydrogen bonds in their main chains. Among the synthetic polymers, the best known example is nylon, where hydrogen bonds occur in the repeat unit and play a major role in crystallization of the material. The bonds occur between carbonyl and amine groups in the amide repeat unit. They effectively link adjacent chains to create crystals, which help reinforce the material. The effect is greatest in aramid fibre, where hydrogen bonds stabilize the linear chains laterally. The chain axes are aligned along the fibre axis, making the fibres extremely stiff and strong. Hydrogen bonds are also important in the structure of cellulose and derived polymers in its many different forms in nature, such as wood and natural fibres such as cotton and flax.

The hydrogen bond networks make both natural and synthetic polymers sensitive to humidity levels in the atmosphere because water molecules can diffuse into the surface and disrupt the network. Some polymers are more sensitive than others: nylons are more sensitive than aramids, and nylon 6 is more sensitive than nylon 11.

Symmetric hydrogen bond

A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a three-center four-electron bond. This type of bond is much stronger than a "normal" hydrogen bond. The effective bond order is 0.5, so its strength is comparable to a covalent bond. It is seen in ice at high pressure, and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion, [F−H−F]−.

Symmetric hydrogen bonds have been observed recently spectroscopically in formic acid at high pressure (>GPa). Each hydrogen atom forms a partial covalent bond with two atoms rather than one. Symmetric hydrogen bonds have been postulated in ice at high pressure (Ice X). Low-barrier hydrogen bonds form when the distance between two heteroatoms is very small.

Dihydrogen bond

The hydrogen bond can be compared with the closely related dihydrogen bond, which is also an intermolecular bonding interaction involving hydrogen atoms. These structures have been known for some time, and well characterized by crystallography;[32] however, an understanding of their relationship to the conventional hydrogen bond, ionic bond, and covalent bond remains unclear. Generally, the hydrogen bond is characterized by a proton acceptor that is a lone pair of electrons in nonmetallic atoms (most notably in the nitrogen, and chalcogen groups). In some cases, these proton acceptors may be pi-bonds or metal complexes. In the dihydrogen bond, however, a metal hydride serves as a proton acceptor, thus forming a hydrogen-hydrogen interaction. Neutron diffraction has shown that the molecular geometry of these complexes is similar to hydrogen bonds, in that the bond length is very adaptable to the metal complex/hydrogen donor system.[32]

Advanced theory of the hydrogen bond

In 1999, Isaacs et al.[33] showed from interpretations of the anisotropies in the Compton profile of ordinary ice that the hydrogen bond is partly covalent. However, this interpretation was challenged by Ghanty et al.,[34] who concluded that considering electrostatic forces alone could explain the experimental results. Some NMR data on hydrogen bonds in proteins also indicate covalent bonding.
Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds; however, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This remained a controversial conclusion until the late 1990s when NMR techniques were employed by F. Cordier et al. to transfer information between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character.[35] While much experimental data has been recovered for hydrogen bonds in water, for example, that provide good resolution on the scale of intermolecular distances and molecular thermodynamics, the kinetic and dynamical properties of the hydrogen bond in dynamic systems remain less well understood.

Dynamics probed by spectroscopic means

The dynamics of hydrogen bond structures in water can be probed by the IR spectrum of OH stretching vibration.[36] In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations.[37] The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions.[37]

Hydrogen bonding phenomena

  • Dramatically higher boiling points of NH3, H2O, and HF compared to the heavier analogues PH3, H2S, and HCl.
  • Increase in the melting point, boiling point, solubility, and viscosity of many compounds can be explained by the concept of hydrogen bonding.
  • Occurrence of proton tunneling during DNA replication is believed to be responsible for cell mutations.[38]
  • Viscosity of anhydrous phosphoric acid and of glycerol
  • Dimer formation in carboxylic acids and hexamer formation in hydrogen fluoride, which occur even in the gas phase, resulting in gross deviations from the ideal gas law.
  • Pentamer formation of water and alcohols in apolar solvents.
  • High water solubility of many compounds such as ammonia is explained by hydrogen bonding with water molecules.
  • Negative azeotropy of mixtures of HF and water
  • Deliquescence of NaOH is caused in part by reaction of OH− with moisture to form hydrogen-bonded H3O2− species. An analogous process happens between NaNH2 and NH3, and between NaF and HF.
  • The fact that ice is less dense than liquid water is due to a crystal structure stabilized by hydrogen bonds.
  • The presence of hydrogen bonds can cause an anomaly in the normal succession of states of matter for certain mixtures of chemical compounds as temperature increases or decreases. These compounds can be liquid until a certain temperature, then solid even as the temperature increases, and finally liquid again as the temperature rises over the "anomaly interval".[39]
  • Smart rubber utilizes hydrogen bonding as its sole means of bonding, so that it can "heal" when torn, because hydrogen bonding can occur on the fly between two surfaces of the same polymer.
  • Strength of nylon and cellulose fibres.
  • Wool, being a protein fibre, is held together by hydrogen bonds, causing wool to recoil when stretched. However, washing at high temperatures can permanently break the hydrogen bonds and a garment may permanently lose its shape.

Saturday, July 15, 2017

Carnot cycle

From Wikipedia, the free encyclopedia

The Carnot cycle is a theoretical thermodynamic cycle proposed by Nicolas Léonard Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. It provides an upper limit on the efficiency that any classical thermodynamic engine can achieve during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference by the application of work to the system. It is not an actual thermodynamic cycle but a theoretical construct.

Every single thermodynamic system exists in a particular state. When a system is taken through a series of different states and finally returned to its initial state, a thermodynamic cycle is said to have occurred. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine. A system undergoing a Carnot cycle is called a Carnot heat engine, although such a "perfect" engine is only a theoretical construct and cannot be built in practice.[1] However, a microscopic Carnot heat engine has been designed and run.[2]

Essentially, there are two systems at temperatures Th and Tc (hot and cold respectively), which are so large that their temperatures are practically unaffected by a single cycle. As such, they are called "heat reservoirs". Since the cycle is reversible, there is no generation of entropy during the cycle; entropy is conserved. During the cycle, an arbitrary amount of entropy ΔS is extracted from the hot reservoir, and deposited in the cold reservoir. Since there is no volume change in either reservoir, they do no work, and during the cycle, an amount of energy ThΔS is extracted from the hot reservoir and a smaller amount of energy TcΔS is deposited in the cold reservoir. The difference in the two energies, (Th − Tc)ΔS, is equal to the work done by the engine.

Stages

Figure 1: A Carnot cycle illustrated on a PV diagram, showing the work done.

The Carnot cycle when acting as a heat engine consists of the following steps:
  1. Reversible isothermal expansion of the gas at the "hot" temperature, T1 (isothermal heat addition or absorption). During this step (1 to 2 on Figure 1, A to B in Figure 2) the gas is allowed to expand and it does work on the surroundings. The temperature of the gas does not change during the process, and thus the expansion is isothermal. The gas expansion is propelled by absorption of heat energy Q1 from the high-temperature reservoir and results in an increase of entropy of the gas in the amount ΔS1 = Q1/T1.
  2. Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (2 to 3 on Figure 1, B to C in Figure 2) the mechanisms of the engine are assumed to be thermally insulated, thus they neither gain nor lose heat (an adiabatic process). The gas continues to expand, doing work on the surroundings, and losing an amount of internal energy equal to the work that leaves the system. The gas expansion causes it to cool to the "cold" temperature, T2. The entropy remains unchanged.
  3. Reversible isothermal compression of the gas at the "cold" temperature, T2 (isothermal heat rejection) (3 to 4 on Figure 1, C to D on Figure 2). Now the surroundings do work on the gas, causing an amount of heat energy Q2 to leave the system to the low-temperature reservoir, and the entropy of the system decreases in the amount ΔS2 = Q2/T2. (This is the same amount of entropy absorbed in step 1, as can be seen from the Clausius inequality.)
  4. Isentropic compression of the gas (isentropic work input). (4 to 1 on Figure 1, D to A on Figure 2) Once again the mechanisms of the engine are assumed to be thermally insulated, and frictionless, hence reversible. During this step, the surroundings do work on the gas, increasing its internal energy and compressing it, causing the temperature to rise to T1 due solely to the work added to the system, but the entropy remains unchanged. At this point the gas is in the same state as at the start of step 1.
In this case,
ΔS1 = ΔS2,
or,
Q1/T1 = Q2/T2.
This is true as Q2 and T2 are both lower than Q1 and T1, and in fact are in the same ratio.

The pressure-volume graph

When the Carnot cycle is plotted on a pressure-volume diagram, the isothermal stages follow the isotherm lines for the working fluid, the adiabatic stages move between isotherms, and the area bounded by the complete cycle path represents the total work that can be done during one cycle.

Properties and significance

The temperature-entropy diagram

Figure 2: A Carnot cycle acting as a heat engine, illustrated on a temperature-entropy diagram. The cycle takes place between a hot reservoir at temperature TH and a cold reservoir at temperature TC. The vertical axis is temperature, the horizontal axis is entropy.
A generalized thermodynamic cycle taking place between a hot reservoir at temperature TH and a cold reservoir at temperature TC. By the second law of thermodynamics, the cycle cannot extend outside the temperature band from TC to TH. The area in red QC is the amount of energy exchanged between the system and the cold reservoir. The area in white W is the amount of work energy exchanged by the system with its surroundings. The amount of heat exchanged with the hot reservoir is the sum of the two. If the system is behaving as an engine, the process moves clockwise around the loop, and moves counter-clockwise if it is behaving as a refrigerator. The efficiency of the cycle is the ratio of the white area (work) divided by the sum of the white and red areas (heat absorbed from the hot reservoir).

The behaviour of a Carnot engine or refrigerator is best understood by using a temperature-entropy diagram (TS diagram), in which the thermodynamic state is specified by a point on a graph with entropy (S) as the horizontal axis and temperature (T) as the vertical axis. For a simple closed system (control mass analysis), any point on the graph will represent a particular state of the system. A thermodynamic process will consist of a curve connecting an initial state (A) and a final state (B). The area under the curve will be:
Q = ∫_A^B T dS        (1)
which is the amount of thermal energy transferred in the process. If the process moves to greater entropy, the area under the curve will be the amount of heat absorbed by the system in that process. If the process moves towards lesser entropy, it will be the amount of heat removed. For any cyclic process, there will be an upper portion of the cycle and a lower portion. For a clockwise cycle, the area under the upper portion will be the thermal energy absorbed during the cycle, while the area under the lower portion will be the thermal energy removed during the cycle. The area inside the cycle will then be the difference between the two, but since the internal energy of the system must have returned to its initial value, this difference must be the amount of work done by the system over the cycle. Referring to figure 1, mathematically, for a reversible process we may write the amount of work done over a cyclic process as:
W = ∮P dV = ∮(dQ − dU) = ∮(T dS − dU) = ∮T dS − ∮dU = ∮T dS        (2)
Since dU is an exact differential, its integral over any closed loop is zero and it follows that the area inside the loop on a T-S diagram is equal to the total work performed if the loop is traversed in a clockwise direction, and is equal to the total work done on the system as the loop is traversed in a counterclockwise direction.
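The claim that the enclosed area equals the work can be checked numerically for the rectangular loop a Carnot cycle traces on the T-S plane. The reservoir temperatures and entropy bounds below are arbitrary illustrative values:

```python
# Traverse a rectangular Carnot loop clockwise on the T-S plane and
# accumulate the loop integral of T dS by the trapezoid rule.
# Temperatures (K) and entropies (J/K) are arbitrary example values.
T_H, T_C = 500.0, 300.0   # hot and cold reservoir temperatures
S_A, S_B = 1.0, 3.0       # minimum and maximum system entropy

# Clockwise vertices: isotherm at T_H, adiabat, isotherm at T_C, adiabat
vertices = [(S_A, T_H), (S_B, T_H), (S_B, T_C), (S_A, T_C), (S_A, T_H)]

work = 0.0
for (s0, t0), (s1, t1) in zip(vertices, vertices[1:]):
    work += 0.5 * (t0 + t1) * (s1 - s0)  # trapezoid rule for T dS

print(work)                        # 400.0
print((T_H - T_C) * (S_B - S_A))   # 400.0, the enclosed rectangular area
```

The two adiabatic legs contribute nothing (dS = 0), so the loop integral reduces to the difference of the two isothermal terms, exactly the enclosed area.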
A Carnot cycle taking place between a hot reservoir at temperature TH and a cold reservoir at temperature TC.

The Carnot cycle

A visualization of the Carnot cycle

Evaluation of the above integral is particularly simple for the Carnot cycle. The amount of energy transferred as work is
W = ∮P dV = ∮T dS = (TH − TC)(SB − SA)
The total amount of thermal energy transferred from the hot reservoir to the system will be
QH = TH(SB − SA)
and the total amount of thermal energy transferred from the system to the cold reservoir will be
QC = TC(SB − SA)
The efficiency η is defined to be:
η = W/QH = 1 − TC/TH        (3)
where
W is the work done by the system (energy exiting the system as work),
QC is the heat taken from the system (heat energy leaving the system),
QH is the heat put into the system (heat energy entering the system),
TC is the absolute temperature of the cold reservoir,
TH is the absolute temperature of the hot reservoir,
SB is the maximum system entropy, and
SA is the minimum system entropy.
This definition of efficiency makes sense for a heat engine, since it is the fraction of the heat energy extracted from the hot reservoir and converted to mechanical work. A Rankine cycle is usually the practical approximation.
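The energy bookkeeping above can be sketched numerically; the reservoir temperatures and entropy change per cycle are arbitrary example values:

```python
def carnot(T_H, T_C, dS):
    """Energy bookkeeping for one reversible Carnot cycle.

    T_H, T_C: absolute reservoir temperatures (K); dS: entropy
    transferred per cycle (J/K). Values are illustrative only.
    """
    Q_H = T_H * dS            # heat absorbed from the hot reservoir
    Q_C = T_C * dS            # heat rejected to the cold reservoir
    W = Q_H - Q_C             # net work output per cycle
    eta = 1 - T_C / T_H       # Equation 3
    assert abs(W / Q_H - eta) < 1e-12  # W/Q_H matches the efficiency
    return Q_H, Q_C, W, eta

print(carnot(500.0, 300.0, 2.0))  # (1000.0, 600.0, 400.0, 0.4)
```

The internal assertion checks that the work fraction W/QH reproduces Equation 3, as the derivation above requires.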

The Reversed Carnot cycle

The Carnot heat-engine cycle described is a totally reversible cycle. That is, all the processes that comprise it can be reversed, in which case it becomes the Carnot refrigeration cycle. This time, the cycle remains exactly the same except that the directions of any heat and work interactions are reversed. Heat is absorbed from the low-temperature reservoir, heat is rejected to a high-temperature reservoir, and a work input is required to accomplish all this. The P-V diagram of the reversed Carnot cycle is the same as for the Carnot cycle except that the directions of the processes are reversed.[3]

Carnot's theorem

It can be seen from the above diagram that, of all cycles operating between temperatures TH and TC, none can exceed the efficiency of a Carnot cycle.

A real engine (left) compared to the Carnot cycle (right). The entropy of a real material changes with temperature. This change is indicated by the curve on a T-S diagram. For this figure, the curve indicates a vapor-liquid equilibrium (See Rankine cycle). Irreversible systems and losses of energy (for example, work due to friction and heat losses) prevent the ideal from taking place at every step.

Carnot's theorem is a formal statement of this fact: no engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. Thus, Equation 3 gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Rearranging the right side of the equation gives a form that may be easier to interpret: the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoirs divided by the absolute temperature of the hot reservoir. An interesting consequence of this formula is that lowering the temperature of the cold reservoir raises the efficiency ceiling of a heat engine more than raising the temperature of the hot reservoir by the same amount. In practice this can be difficult to exploit, since the cold reservoir is often at an existing ambient temperature.
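A quick numerical check of that last point, with made-up reservoir temperatures:

```python
def eta(t_hot, t_cold):                  # Equation (3)
    return 1.0 - t_cold / t_hot

T_H, T_C, dT = 600.0, 300.0, 50.0        # kelvin, chosen for illustration
base        = eta(T_H, T_C)              # 0.500
cooler_sink = eta(T_H, T_C - dT)         # 1 - 250/600 ≈ 0.583
hotter_src  = eta(T_H + dT, T_C)         # 1 - 300/650 ≈ 0.538

print(base, cooler_sink, hotter_src)
assert cooler_sink > hotter_src          # cooling the sink raises the ceiling more
```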

In other words, maximum efficiency is achieved if and only if no new entropy is created in the cycle. New entropy is created when, for example, friction dissipates work into heat; in that case the cycle is not reversible and the Clausius theorem becomes an inequality rather than an equality. Since entropy is a state function, the required dumping of heat into the environment to dispose of excess entropy then leads to a (minimal) reduction in efficiency. So Equation 3 gives the efficiency of any reversible heat engine.

In mesoscopic heat engines, the work per cycle of operation fluctuates due to thermal noise. When work and heat fluctuations are accounted for, there is an exact equality relating the average of exponents of the work performed by any heat engine to the heat transfer from the hotter heat bath.[4]

Efficiency of real heat engines

Carnot realized that in reality it is not possible to build a thermodynamically reversible engine, so real heat engines are even less efficient than indicated by Equation 3. In addition, real engines that operate along this cycle are rare. Nevertheless, Equation 3 is extremely useful for determining the maximum efficiency that could ever be expected for a given set of thermal reservoirs.

Although Carnot's cycle is an idealisation, the expression for the Carnot efficiency is still useful. Consider the average temperatures

    〈T_H〉 = (1/ΔS) ∫ T dS  (integral over the heat input)
    〈T_C〉 = (1/ΔS) ∫ T dS  (integral over the heat output)

at which heat is input and output, respectively. Replace T_H and T_C in Equation (3) by 〈T_H〉 and 〈T_C〉 respectively.

For the Carnot cycle, or its equivalent, the average value 〈TH〉 will equal the highest temperature available, namely TH, and 〈TC〉 the lowest, namely TC. For other less efficient cycles, 〈TH〉 will be lower than TH, and 〈TC〉 will be higher than TC. This can help illustrate, for example, why a reheater or a regenerator can improve the thermal efficiency of steam power plants—and why the thermal efficiency of combined-cycle power plants (which incorporate gas turbines operating at even higher temperatures) exceeds that of conventional steam plants. The first prototype of the diesel engine was based on the Carnot cycle.
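The mean-temperature idea can be sketched numerically; the linear T(S) heating profile below is an assumed toy example, not from the text:

```python
n = 10_000                       # trapezoid steps over the heat-input path
dS = 1.0 / n                     # total entropy change ΔS = 1 (arbitrary units)

def T(s):                        # assumed linear warm-up, 400 K -> 800 K
    return 400.0 + 400.0 * s

integral = sum(0.5 * (T(i * dS) + T((i + 1) * dS)) * dS for i in range(n))
T_H_avg = integral / 1.0         # <T_H> = (1/ΔS) ∫ T dS, here ≈ 600 K

T_C = 300.0                      # isothermal heat rejection
print(T_H_avg, 1.0 - T_C / T_H_avg)
# ≈ 600.0 K and ≈ 0.5: below the 1 - 300/800 = 0.625 that a true Carnot
# cycle touching the peak temperature would achieve.
```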

Entropy (arrow of time)

From Wikipedia, the free encyclopedia

Entropy is the only quantity in the physical sciences (apart from certain rare interactions in particle physics; see below) that requires a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Hence, from one perspective, entropy measurement is a way of distinguishing the past from the future. However, in thermodynamic systems that are not closed, entropy can decrease with time: many systems, including living systems, reduce local entropy at the expense of an environmental increase, resulting in a net increase in entropy. Examples of such systems and phenomena include the formation of typical crystals, the workings of a refrigerator, and living organisms.

Entropy, like temperature, is an abstract concept, yet, like temperature, everyone has an intuitive sense of the effects of entropy. Watching a movie, it is usually easy to determine whether it is being run forward or in reverse. When run in reverse, broken glasses spontaneously reassemble; smoke goes down a chimney; wood "unburns", cooling the environment; and ice "unmelts", warming the environment. No physical laws are broken in the reverse movie except the second law of thermodynamics, which reflects the time-asymmetry of entropy. An intuitive understanding of the irreversibility of certain physical phenomena (and subsequent creation of entropy) allows one to make this determination.

By contrast, the laws governing physical processes at the atomic level, such as Newtonian mechanics, do not pick out an arrow of time. Going forward in time, an atom might move to the left, whereas going backward in time the same atom might move to the right; the behavior of the atom is not qualitatively different in either case. It would, however, be an astronomically improbable event if a macroscopic amount of gas that originally filled a container evenly spontaneously shrank to occupy only half the container.

Certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely.[citation needed] According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time nor related to our daily experience of time irreversibility.[1]
Unsolved problem in physics (arrow of time): Why did the universe have such low entropy in the past, resulting in the distinction between past and future and the second law of thermodynamics?

Overview

The Second Law of Thermodynamics allows for the entropy to remain the same regardless of the direction of time. If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction only, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion of our world.

The Second Law of Thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 10^23 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely, so unlikely that no macroscopic violation of the Second Law has ever been observed. T-symmetry is the symmetry of physical laws under a time reversal transformation. Although in restricted contexts one may find this symmetry, the observable universe itself does not show symmetry under time reversal, primarily due to the second law of thermodynamics.
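The "fantastically unlikely" can be quantified: for N molecules, each independently equally likely to be in either half of the container, the probability of all being in one given half is (1/2)^N. A sketch:

```python
import math

for n in (10.0, 100.0, 6.022e23):     # ten molecules, a hundred, one mole
    log10_p = -n * math.log10(2.0)    # P = (1/2)**n, kept as a base-10 log
    print(f"N = {n:.3g}: P = 10^({log10_p:.4g})")
# One mole gives P ≈ 10^(-1.8e23), which is why the violation is never seen.
```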

The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy available to perform work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was that of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again grow hotter, but its entropy would also continue to increase due to the continued growth of perturbations and the eventual black hole formation,[2] until the latter stages of the Big Crunch when entropy would be lower than now.[citation needed]

An example of apparent irreversibility

Consider the situation in which a large container is filled with two separated liquids, for example a dye on one side and water on the other. With no barrier between the two liquids, the random jostling of their molecules will result in them becoming more mixed as time passes. However, if the dye and water are mixed then one does not expect them to separate out again when left to themselves. A movie of the mixing would seem realistic when played forwards, but unrealistic when played backwards.

If the large container is observed early on in the mixing process, it might be found only partially mixed. It would be reasonable to conclude that, without outside intervention, the liquid reached this state because it was more ordered in the past, when there was greater separation, and will be more disordered, or mixed, in the future.

Now imagine that the experiment is repeated, this time with only a few molecules, perhaps ten, in a very small container. One can easily imagine that, by watching the random jostling of the molecules, it might occur, by chance alone, that the molecules became neatly segregated, with all dye molecules on one side and all water molecules on the other. That this can be expected to occur from time to time follows from the fluctuation theorem; thus it is not impossible for the molecules to segregate themselves. However, for a large number of molecules it is so unlikely that one would have to wait, on average, many times longer than the age of the universe for it to occur. Thus a movie that showed a large number of molecules segregating themselves as described above would appear unrealistic, and one would be inclined to say that the movie was being played in reverse. See Boltzmann's Second Law as a law of disorder.
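A toy Monte Carlo version of the ten-molecule experiment (the 5+5 split and the sampling scheme are illustrative assumptions, not from the text):

```python
import random

random.seed(0)                      # fixed seed so the run is reproducible
trials, segregated = 200_000, 0
for _ in range(trials):
    dye   = [random.random() < 0.5 for _ in range(5)]   # True = left half
    water = [random.random() < 0.5 for _ in range(5)]
    # fully segregated: dye all left and water all right, or the reverse
    if (all(dye) and not any(water)) or (not any(dye) and all(water)):
        segregated += 1

print(segregated / trials)   # close to the exact value 2 * (1/2)**10 ≈ 0.00195
```

With ten molecules the "reverse-movie" state shows up at a measurable rate; with a macroscopic number it effectively never does.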

Mathematics of the arrow

The mathematics behind the arrow of time, entropy, and basis of the second law of thermodynamics derive from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854):
[Entropy diagram: heat Q flows from a hot body T1, through a working body of fluid, to a cold body T2.]
Here, as common experience demonstrates, when a hot body T1, such as a furnace, is put into physical contact, such as being connected via a body of fluid (working body), with a cold body T2, such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat Q, and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation.

In this diagram, one can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature T2. Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water.

Next, if we make the assignment, as originally done by Clausius:

    S = Q/T

Then the entropy change or "equivalence-value" for this transformation is:

    ΔS = S_final - S_initial

which equals:

    ΔS = Q/T2 - Q/T1

and by factoring out Q, we have the following form, as was derived by Clausius:

    ΔS = Q (1/T2 - 1/T1)
Thus, for example, if Q was 50 units, T1 was initially 100 degrees, and T2 was initially 1 degree, then the entropy change for this process would be 49.5. Hence entropy increased for this process, the process took a certain amount of "time", and the increase in entropy can be correlated with the passage of time. For this system configuration, this is an "absolute rule". The rule rests on the fact that all natural processes are irreversible, because the molecules of a system, for example two molecules in a tank, not only do external work (such as pushing a piston) but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the existence of this internal inter-molecular friction.
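The arithmetic of that example, checked directly:

```python
# delta_S = Q (1/T2 - 1/T1), with the numbers from the text
Q, T1, T2 = 50.0, 100.0, 1.0
delta_S = Q * (1.0 / T2 - 1.0 / T1)
print(delta_S)   # 49.5, matching the text
```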

Maxwell's demon

In 1867, James Clerk Maxwell introduced a now-famous thought experiment that highlighted the contrast between the statistical nature of entropy and the deterministic nature of the underlying physical processes. This experiment, known as Maxwell's demon, consists of a hypothetical "demon" that guards a trapdoor between two containers filled with gases at equal temperatures. By allowing fast molecules through the trapdoor in only one direction and only slow molecules in the other direction, the demon raises the temperature of one gas and lowers the temperature of the other, apparently violating the Second Law.

Maxwell's thought experiment was only resolved in the 20th century by Leó Szilárd, Charles H. Bennett, Seth Lloyd and others. The key idea is that the demon itself necessarily possesses a non-negligible amount of entropy that increases even as the gases lose entropy, so that the entropy of the system as a whole increases. This is because the demon has to contain many internal "parts" (essentially, a memory space to store information on the gas molecules) if it is to perform its job reliably, and therefore must be considered a macroscopic system with non-vanishing entropy. An equivalent way of saying this is that the information the demon possesses about which atoms are considered fast or slow can be considered a form of entropy known as information entropy.
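The bookkeeping can be made concrete with Landauer's principle (erasing one bit of memory generates at least k_B ln 2 of entropy); the principle is not named in the text above, and the bit count below is an illustrative assumption:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact since the 2019 SI)
bits_erased = 1_000_000     # assumed: one recorded-and-erased bit per sorted molecule

demon_cost = bits_erased * k_B * math.log(2)   # minimum entropy the demon generates
print(f"demon generates at least {demon_cost:.3e} J/K of entropy")
# Whatever entropy the sorting removes from the gases, once this cost is
# counted the total entropy of gases + demon cannot decrease.
```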

Correlations

An important difference between the past and the future is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated.[3] For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the Second Law of Thermodynamics.

Take for example (experiment A) a closed box that is, at the beginning, half-filled with ideal gas. As time passes, the gas obviously expands to fill the whole box, so that the final state is a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it does not become only half-full later, except for the very unlikely situation where the gas particles have very special locations and speeds. But this is precisely because we always assume that the initial conditions are such that the particles have random locations and speeds. This is not correct for the final conditions of the system, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular, that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact turn them into (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning.[citation needed]

In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles' states are uncorrelated. To do this, one must acknowledge the difference between the measured entropy of a system, which depends only on its macrostate (its volume, temperature etc.), and its information entropy (also called Kolmogorov complexity),[4] which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lower the amount of information needed to describe it.[5] Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy is smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations.

Now, by Liouville's theorem, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information-theoretic joint entropy) is constant in time. This joint entropy is equal to the marginal entropy (the entropy assuming no correlations) plus the entropy of correlation (the mutual entropy, or its negative, the mutual information). If we assume no correlations between the particles initially, then this joint entropy is just the marginal entropy, which is just the initial thermodynamic entropy of the system divided by Boltzmann's constant. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations form with time. In other words, the mutual entropy decreases (the mutual information increases), and for a time that is not too long the correlations (mutual information) between particles only increase with time. Therefore, the thermodynamic entropy, which is proportional to the marginal entropy, must also increase with time.[6] (Note that "not too long" here is relative to the time needed, in a classical version of the system, for it to pass through all its possible microstates, a time that can be roughly estimated as τ·e^S, where τ is the time between particle collisions and S is the system's entropy. In any practical case this time is huge compared to everything else.) Note also that the correlation between particles is not a fully objective quantity: one cannot measure the mutual entropy, one can only measure its change, assuming one can measure a microstate. Thermodynamics is restricted to the case where microstates cannot be distinguished, which means that only the marginal entropy, proportional to the thermodynamic entropy, can be measured, and this, in a practical sense, always increases.
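The relation "joint entropy = marginal entropy + entropy of correlation" can be illustrated with a toy two-particle system; the distributions below are made up for illustration:

```python
import math

def H(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Joint distributions over (x, y) in {0,1}^2, listed as [P00, P01, P10, P11].
cases = {
    "uncorrelated": [0.25, 0.25, 0.25, 0.25],   # P(x, y) = P(x) P(y)
    "correlated":   [0.5, 0.0, 0.0, 0.5],       # x and y always agree
}

results = {}
for name, joint in cases.items():
    px = [joint[0] + joint[1], joint[2] + joint[3]]   # marginal of x
    py = [joint[0] + joint[2], joint[1] + joint[3]]   # marginal of y
    mutual_info = H(px) + H(py) - H(joint)            # bits of correlation
    results[name] = (H(px) + H(py), H(joint), mutual_info)
    print(name, results[name])
# Correlated case: the marginals need 2 bits in total, the joint state only 1;
# the missing bit is exactly the mutual information.
```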

The arrow of time in various phenomena

All phenomena that behave differently in one time direction can ultimately be linked to the Second Law of Thermodynamics. This includes the fact that ice cubes melt in hot coffee rather than assembling themselves out of the coffee, that a block sliding on a rough surface slows down rather than speeding up, and that we can remember the past rather than the future. This last phenomenon, called the "psychological arrow of time", has deep connections with Maxwell's demon and the physics of information. In fact, its link to the Second Law of Thermodynamics is easy to understand if one views memory as a correlation between brain cells (or computer bits) and the outer world. Since the Second Law of Thermodynamics is equivalent to the growth of such correlations with time, it states that memory is created as we move towards the future (rather than towards the past).

Current research

Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions.

Dynamical systems

Some current research in dynamical systems indicates a possible "explanation" for the arrow of time.[citation needed] There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers a differential equation, where one of the parameters is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time.

Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is (as of 2006) impossible. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible.
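For a concrete instance of the several-pasts point, take the quadratic map z → z² + c used by fractal-drawing programs; the parameter and "present" point below are arbitrary:

```python
import cmath

def step(z, c):                  # one forward iteration, z -> z**2 + c
    return z * z + c

c = -0.75                        # arbitrary illustrative parameter
z_now = 0.3 + 0.2j               # the "present" point

past_a = cmath.sqrt(z_now - c)   # one pre-image...
past_b = -past_a                 # ...and the other
for past in (past_a, past_b):
    assert abs(step(past, c) - z_now) < 1e-12   # both pasts give the same present
print(past_a, past_b)
```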

There are other systems that are chaotic, and are also explicitly time-reversible: among these is the baker's map, which is also exactly solvable. An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead, to study the corresponding Frobenius-Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one that describes the forward-time evolution of the system, and one that describes the backwards-time evolution.
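By contrast, the baker's map is chaotic yet exactly invertible, which a few lines verify (the sample point is arbitrary):

```python
def baker(x, y):
    """One forward step of the baker's map on the unit square."""
    return (2 * x, y / 2) if x < 0.5 else (2 * x - 1, (y + 1) / 2)

def baker_inverse(x, y):
    """The exact inverse: undoes one forward step."""
    return (x / 2, 2 * y) if y < 0.5 else ((x + 1) / 2, 2 * y - 1)

pt = (0.3141, 0.7182)
fwd = baker(*pt)
back = baker_inverse(*fwd)
assert all(abs(a - b) < 1e-15 for a, b in zip(pt, back))
print(pt, "->", fwd, "->", back)
```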

As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R).

Quantum mechanics

Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle[citation needed]. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time.

Another distinct approach is through the study of quantum chaos, in which attempts are made to quantize systems that are classically chaotic, ergodic or mixing.[citation needed] The results obtained are not dissimilar from those that come from the transfer operator method. For example, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box, reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must be, of necessity, discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of zero measure. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case.[citation needed]

Cosmology

Some processes that involve high energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to our day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, which is an irreversible process. It has been conjectured that the collapse of the wave function may be the reason for the Second Law of Thermodynamics. However it is more accepted today that the opposite is correct, namely that the (possibly merely apparent) wave function collapse is a consequence of quantum decoherence, a process that is ultimately an outcome of the Second Law of Thermodynamics.

The universe was in a uniform, high density state at its very early stages, shortly after the big bang. The hot gas in the early universe was near thermodynamic equilibrium (giving rise to the horizon problem) and hence in a state of maximum entropy, given its volume. Expansion of a gas increases its entropy, however, and expansion of the universe has therefore enabled an ongoing increase in entropy. Viewed from later eras, the early universe can thus be considered to be highly ordered. The uniformity of this early near-equilibrium state has been explained by the theory of cosmic inflation.
According to this theory our universe (or, rather, its accessible part, a radius of 46 billion light years around our location) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to be such that these fluctuations are uncorrelated for any practical use. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics.

Our universe is apparently an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had our universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and then begin to shrink. Moreover, a closed universe is finite. It is unclear what would happen to the Second Law of Thermodynamics in such a case. One could imagine at least three different scenarios (in fact, only the third one is plausible, since the first two require a smooth cosmic evolution, contrary to what is observed):
  • A highly controversial view is that in such a case the arrow of time will reverse.[7] The quantum fluctuations—which in the meantime have evolved into galaxies and stars—will be in superposition in such a way that the whole process described above is reversed—i.e., the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a big crunch, which is similar to its beginning in the big bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy must decrease close to the end of the universe, so that the Second Law of Thermodynamics reverses when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease—because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it, is that as distant particles arrive, more and more order is revealed because these particles are highly correlated with particles that arrived earlier.
  • It could be that this is the crucial point where the wavefunction collapse is important: if the collapse is real, then the quantum fluctuations will not be in superposition any longer; rather they had collapsed to a particular state (a particular arrangement of galaxies and stars), thus creating a big crunch, which is very different from the big bang. Such a scenario may be viewed as adding boundary conditions (say, at the distant future) that dictate the wavefunction collapse.[8]
  • The broad consensus among the scientific community today is that smooth initial conditions lead to a highly non-smooth final state, and that this is in fact the source of the thermodynamic arrow of time.[9] Highly non-smooth gravitational systems tend to collapse to black holes, so the wavefunction of the whole universe evolves from a superposition of small fluctuations to a superposition of states with many black holes in each. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the first scenario described above), and consists of mostly black holes rather than free particles.
In the first scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe comes to a halt, and will later be reversed.

In the second and third scenarios, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time. In the second scenario, the quantum arrow of time may be seen as the deep reason for this.

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...