
Saturday, July 15, 2017

Entropy (arrow of time)

From Wikipedia, the free encyclopedia

Entropy is the only quantity in the physical sciences (apart from certain rare interactions in particle physics; see below) that requires a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Hence, from one perspective, entropy measurement is a way of distinguishing the past from the future. However, in thermodynamic systems that are not closed, entropy can decrease with time: many systems, including living systems, reduce local entropy at the expense of an environmental increase, resulting in a net increase in entropy. Examples of such systems and phenomena include the formation of typical crystals, the workings of a refrigerator, and living organisms.

Entropy, like temperature, is an abstract concept, yet, like temperature, everyone has an intuitive sense of the effects of entropy. Watching a movie, it is usually easy to determine whether it is being run forward or in reverse. When run in reverse, broken glasses spontaneously reassemble; smoke goes down a chimney; wood "unburns", cooling the environment; and ice "unmelts", warming the environment. No physical laws are broken in the reverse movie except the second law of thermodynamics, which reflects the time-asymmetry of entropy. An intuitive understanding of the irreversibility of certain physical phenomena (and subsequent creation of entropy) allows one to make this determination.

By contrast, physical processes at the atomic level, such as the motion of individual atoms governed by mechanics, do not pick out an arrow of time. Going forward in time, an atom might move to the left, whereas going backward in time the same atom might move to the right; the behavior of the atom is not qualitatively different in either case. It would, however, be an astronomically improbable event if a macroscopic amount of gas that originally filled a container evenly were to shrink spontaneously to occupy only half the container.

Certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely.[citation needed] According to the CPT theorem, this means they should also be time-irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time, nor has anything to do with our daily experience of time irreversibility.[1]
Unsolved problem in physics:
Arrow of time: Why did the universe have such low entropy in the past, resulting in the distinction between past and future and the second law of thermodynamics?
(more unsolved problems in physics)

Overview

The Second Law of Thermodynamics allows for the entropy to remain the same regardless of the direction of time. If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction only, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion of our world.

The Second Law of Thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 10^23 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely, so unlikely that no macroscopic violation of the Second Law has ever been observed. T-symmetry is the symmetry of physical laws under a time-reversal transformation. Although in restricted contexts one may find this symmetry, the observable universe itself does not show symmetry under time reversal, primarily due to the second law of thermodynamics.
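The scale of this improbability can be sketched numerically. A minimal illustration (assuming, as an idealization, that each molecule independently occupies either half of the container with probability 1/2):

```python
import math

def log10_prob_all_in_half(n_molecules):
    """Log10 of the probability that n independent molecules are all
    found in one chosen half of a container at a given instant."""
    # Each molecule sits in the chosen half with probability 1/2.
    return n_molecules * math.log10(0.5)

# Ten molecules: about 1 chance in 1000, so observable in practice.
print(log10_prob_all_in_half(10))    # roughly -3.0

# A mole of gas (~6e23 molecules): the exponent itself is ~ -1.8e23,
# which is why no macroscopic violation has ever been seen.
print(log10_prob_all_in_half(6e23))
```

The point of working in logarithms is that the probability itself underflows any floating-point representation long before a mole is reached.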

The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy available to perform work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was that of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again grow hotter, but its entropy would also continue to increase due to the continued growth of perturbations and the eventual black hole formation,[2] until the latter stages of the Big Crunch when entropy would be lower than now.[citation needed]

An example of apparent irreversibility

Consider the situation in which a large container is filled with two separated liquids, for example a dye on one side and water on the other. With no barrier between the two liquids, the random jostling of their molecules will result in them becoming more mixed as time passes. However, if the dye and water are mixed then one does not expect them to separate out again when left to themselves. A movie of the mixing would seem realistic when played forwards, but unrealistic when played backwards.

If the large container is observed early on in the mixing process, it might be found only partially mixed. It would be reasonable to conclude that, without outside intervention, the liquid reached this state because it was more ordered in the past, when there was greater separation, and will be more disordered, or mixed, in the future.

Now imagine that the experiment is repeated, this time with only a few molecules, perhaps ten, in a very small container. One can easily imagine that by watching the random jostling of the molecules it might occur, by chance alone, that the molecules became neatly segregated, with all dye molecules on one side and all water molecules on the other. That this can be expected to occur from time to time can be concluded from the fluctuation theorem; thus it is not impossible for the molecules to segregate themselves. However, for large numbers of molecules it is so unlikely that one would have to wait, on average, many times longer than the age of the universe for it to occur. Thus a movie that showed a large number of molecules segregating themselves as described above would appear unrealistic and one would be inclined to say that the movie was being played in reverse. See Boltzmann's Second Law as a law of disorder.
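The small-number case can be checked with a rough Monte Carlo sketch (assuming idealized molecules that independently land in either half of the box at each snapshot, which ignores real dynamics but captures the counting argument):

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def fraction_segregated(n_molecules=10, n_snapshots=100_000):
    """Estimate how often random jostling leaves all n_molecules
    in the same half of the box (either half counts)."""
    hits = 0
    for _ in range(n_snapshots):
        sides = [random.random() < 0.5 for _ in range(n_molecules)]
        if all(sides) or not any(sides):
            hits += 1
    return hits / n_snapshots

# Theory: 2 * (1/2)**10 ~ 0.002, i.e. roughly once every 500 snapshots,
# so with ten molecules spontaneous segregation is genuinely observable.
print(fraction_segregated())
```

Doubling the number of molecules squares the waiting time, which is the sense in which the movie of a large system running in reverse becomes unbelievable.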

Mathematics of the arrow

The mathematics behind the arrow of time, entropy, and basis of the second law of thermodynamics derive from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854):
[Entropy diagram: heat Q flows from a hot body T1, through a working body of fluid, to a cold body T2.]
Here, as common experience demonstrates, when a hot body T1, such as a furnace, is put into physical contact, such as being connected via a body of fluid (working body), with a cold body T2, such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat Q, and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation.

In this diagram, one can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature T2. Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water.

Next, if we make the assignment, as originally done by Clausius:
S = Q/T
Then the entropy change or "equivalence-value" for this transformation is:
ΔS = S_final − S_initial
which equals:
ΔS = Q/T2 − Q/T1
and by factoring out Q, we have the following form, as was derived by Clausius:
ΔS = Q(1/T2 − 1/T1)
Thus, for example, if Q was 50 units, T1 was initially 100 degrees, and T2 was initially 1 degree, then the entropy change for this process would be 49.5. Hence, entropy increased for this process, the process took a certain amount of "time", and one can correlate entropy increase with the passage of time. For this system configuration, the increase is an "absolute rule". This rule is based on the fact that all natural processes are irreversible by virtue of the fact that molecules of a system, for example two molecules in a tank, not only do external work (such as to push a piston), but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the fact that internal inter-molecular friction exists.
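The arithmetic of this example is easy to verify directly from the Clausius formula (using the same unitless "degrees" as the text):

```python
def entropy_change(q, t_hot, t_cold):
    """Clausius entropy change Q*(1/T_cold - 1/T_hot) for a quantity
    of heat q passing from a hot body to a cold body."""
    return q * (1.0 / t_cold - 1.0 / t_hot)

# The example from the text: Q = 50 units, T1 = 100 degrees, T2 = 1 degree.
delta_s = entropy_change(50, 100, 1)
print(delta_s)  # 49.5

# Heat flowing from hot to cold always gives a positive entropy change:
print(entropy_change(10, 300, 280) > 0)  # True
```

Note that the sign of ΔS flips if the roles of the temperatures are exchanged, which is exactly the time-asymmetry the section describes: heat flowing spontaneously from cold to hot would mean decreasing entropy.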

Maxwell's demon

In 1867, James Clerk Maxwell introduced a now-famous thought experiment that highlighted the contrast between the statistical nature of entropy and the deterministic nature of the underlying physical processes. This experiment, known as Maxwell's demon, consists of a hypothetical "demon" that guards a trapdoor between two containers filled with gases at equal temperatures. By allowing fast molecules through the trapdoor in only one direction and only slow molecules in the other direction, the demon raises the temperature of one gas and lowers the temperature of the other, apparently violating the Second Law.

Maxwell's thought experiment was only resolved in the 20th century by Leó Szilárd, Charles H. Bennett, Seth Lloyd and others. The key idea is that the demon itself necessarily possesses a non-negligible amount of entropy that increases even as the gases lose entropy, so that the entropy of the system as a whole increases. This is because the demon has to contain many internal "parts" (essentially: a memory space to store information on the gas molecules) if it is to perform its job reliably, and therefore must be considered a macroscopic system with non-vanishing entropy. An equivalent way of saying this is that the information possessed by the demon on which atoms are considered fast or slow, can be considered a form of entropy known as information entropy.
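One way to make the demon's information cost concrete is Landauer's bound: erasing one bit of the demon's memory dissipates at least k_B·T·ln 2 of heat into the environment. A sketch of that bound (the constant is the standard CODATA value; the 300 K figure is just an illustrative room temperature):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_heat(n_bits, temperature_kelvin):
    """Minimum heat (in joules) dissipated when erasing n_bits of
    memory at the given temperature, per Landauer's bound."""
    return n_bits * K_B * temperature_kelvin * math.log(2)

# Erasing one bit at room temperature (300 K):
print(landauer_heat(1, 300))  # ~2.87e-21 J
```

However tiny this number is per bit, the demon must record information about every molecule it sorts, so the entropy cost of maintaining (and eventually erasing) its memory outgrows the entropy it removes from the gases.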

Correlations

An important difference between the past and the future is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated.[3] For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the Second Law of Thermodynamics.

Take for example (experiment A) a closed box that is, at the beginning, half-filled with ideal gas. As time passes, the gas obviously expands to fill the whole box, so that the final state is a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it does not become only half-full later, except for the very unlikely situation where the gas particles have very special locations and speeds. But this is precisely because we always assume that the initial conditions are such that the particles have random locations and speeds. This is not correct for the final conditions of the system, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular, that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact turn them into (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning.[citation needed]

In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles states are uncorrelated. To do this, one must acknowledge the difference between the measured entropy of a system—which depends only on its macrostate (its volume, temperature etc.)—and its information entropy (also called Kolmogorov complexity),[4] which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lowers the amount of information needed to describe it.[5] Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy is smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations.
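The relation between marginal entropy, joint entropy, and correlations can be illustrated with Shannon entropies of a toy system (a hypothetical pair of perfectly correlated binary "particles", not a physical gas):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two perfectly correlated binary particles: only the joint states
# (0,0) and (1,1) occur, each with probability 1/2.
joint = [0.5, 0.0, 0.0, 0.5]
marginal_a = [0.5, 0.5]   # each particle alone looks fully random
marginal_b = [0.5, 0.5]

sum_of_marginals = shannon_entropy(marginal_a) + shannon_entropy(marginal_b)
joint_h = shannon_entropy(joint)
mutual_information = sum_of_marginals - joint_h

print(sum_of_marginals)     # 2.0 bits: the "measured" entropy ignoring correlations
print(joint_h)              # 1.0 bit: the true information entropy
print(mutual_information)   # 1.0 bit: the amount of correlation
```

The sum of marginal entropies (2 bits) overstates the information entropy (1 bit) exactly by the mutual information, mirroring the text's point that correlations make the information entropy smaller than the measured entropy.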

Now, by Liouville's theorem, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information-theoretic joint entropy) is constant in time. This joint entropy is equal to the marginal entropy (entropy assuming no correlations) plus the entropy of correlation (mutual entropy, or its negative mutual information). If we assume no correlations between the particles initially, then this joint entropy is just the marginal entropy, which is just the initial thermodynamic entropy of the system, divided by Boltzmann's constant. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations form with time. In other words, there is a decreasing mutual entropy (or increasing mutual information), and for a time that is not too long, the correlations (mutual information) between particles only increase with time. Therefore, the thermodynamic entropy, which is proportional to the marginal entropy, must also increase with time[6] (note that "not too long" here is relative to the time needed, in a classical version of the system, to pass through all its possible microstates, a time that can be roughly estimated as τe^S, where τ is the time between particle collisions and S is the system's entropy; in any practical case this time is huge compared to everything else). Note that the correlation between particles is not a fully objective quantity. One cannot measure the mutual entropy, one can only measure its change, assuming one can measure a microstate. Thermodynamics is restricted to the case where microstates cannot be distinguished, which means that only the marginal entropy, proportional to the thermodynamic entropy, can be measured, and, in a practical sense, always increases.
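The recurrence estimate τe^S is easiest to evaluate in logarithms, which avoids floating-point overflow. A sketch with illustrative (not physically calibrated) numbers:

```python
import math

def log10_recurrence_time(tau_seconds, dimensionless_entropy):
    """Log10 of the rough recurrence estimate tau * e**S, where S is
    the entropy in units of Boltzmann's constant. Computed in logs
    because e**S overflows for any macroscopic S."""
    return math.log10(tau_seconds) + dimensionless_entropy * math.log10(math.e)

# Even a toy system with S/k_B = 100 and 1e-10 s between collisions
# takes ~1e33 s to revisit its microstates:
print(log10_recurrence_time(1e-10, 100))   # ~33.4

# For comparison, the age of the universe is ~4.3e17 s:
print(math.log10(4.3e17))                  # ~17.6
```

A mole of gas has S/k_B of order 10^23, so the corresponding exponent is astronomically larger still, which is the sense in which "not too long" covers every practical timescale.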

The arrow of time in various phenomena

All phenomena that behave differently in one time direction can ultimately be linked to the Second Law of Thermodynamics. This includes the fact that ice cubes melt in hot coffee rather than assembling themselves out of the coffee, that a block sliding on a rough surface slows down rather than speeding up, and that we can remember the past rather than the future. This last phenomenon, called the "psychological arrow of time", has deep connections with Maxwell's demon and the physics of information; in fact, it is easy to understand its link to the Second Law of Thermodynamics if one views memory as correlation between brain cells (or computer bits) and the outer world. Since the Second Law of Thermodynamics is equivalent to the growth with time of such correlations, it states that memory is created as we move towards the future (rather than towards the past).

Current research

Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions.

Dynamical systems

Some current research in dynamical systems indicates a possible "explanation" for the arrow of time.[citation needed] There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers a differential equation, where one of the parameters is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time.

Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is (as of 2006) impossible. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible.
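The many-pasts property of such iterated maps is easy to exhibit. A sketch using the quadratic map z → z² + c (the map behind the Julia set; c = 0 here for simplicity):

```python
import cmath

def forward(z, c=0):
    """One forward step of the quadratic map z -> z**2 + c."""
    return z * z + c

def preimages(z, c=0):
    """All 'pasts' of z under the quadratic map: the two square roots
    of z - c. Forward time is unique; backward time is not."""
    root = cmath.sqrt(z - c)
    return (root, -root)

past_a, past_b = preimages(0.25)
print(past_a, past_b)                    # (0.5+0j) and (-0.5+0j)
print(forward(past_a), forward(past_b))  # both map forward to 0.25
```

Because two distinct "pasts" collapse to the same "present", no amount of observation of the present state recovers which history occurred: the irreversibility is built into the map itself, which is the objection raised in the text.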

There are other systems that are chaotic, and are also explicitly time-reversible: among these is the baker's map, which is also exactly solvable. An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead, to study the corresponding Frobenius-Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one that describes the forward-time evolution of the system, and one that describes the backwards-time evolution.
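By contrast, the baker's map can be inverted exactly, which is what "explicitly time-reversible" means here. A minimal sketch of the map and its inverse on the unit square:

```python
def baker(x, y):
    """One forward step of the baker's map on the unit square:
    stretch horizontally by 2, cut, and stack the halves."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

def baker_inverse(x, y):
    """Exact inverse step: unstack and compress. Every point has a
    unique past, so the dynamics is time-reversible."""
    if y < 0.5:
        return x / 2, 2 * y
    return (x + 1) / 2, 2 * y - 1

point = (0.3, 0.8)
stepped = baker(*point)
print(stepped)                  # (0.6, 0.4)
print(baker_inverse(*stepped))  # back to (0.3, 0.8)
```

The map is nevertheless chaotic (nearby points separate exponentially in x), which is exactly why it is a useful test case: the time-asymmetry found in its transfer operator cannot be blamed on the map itself being irreversible.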

As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R).

Quantum mechanics

Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle[citation needed]. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time.

Another distinct approach is through the study of quantum chaos, in which attempts are made to quantize systems that are classically chaotic, ergodic, or mixing.[citation needed] The results obtained are not dissimilar from those that come from the transfer operator method. For example, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box, reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must be, of necessity, discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of zero measure. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case.[citation needed]

Cosmology

Some processes that involve high energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to our day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, which is an irreversible process. It has been conjectured that the collapse of the wave function may be the reason for the Second Law of Thermodynamics. However it is more accepted today that the opposite is correct, namely that the (possibly merely apparent) wave function collapse is a consequence of quantum decoherence, a process that is ultimately an outcome of the Second Law of Thermodynamics.

The universe was in a uniform, high density state at its very early stages, shortly after the big bang. The hot gas in the early universe was near thermodynamic equilibrium (giving rise to the horizon problem) and hence in a state of maximum entropy, given its volume. Expansion of a gas increases its entropy, however, and expansion of the universe has therefore enabled an ongoing increase in entropy. Viewed from later eras, the early universe can thus be considered to be highly ordered. The uniformity of this early near-equilibrium state has been explained by the theory of cosmic inflation.
According to this theory our universe (or, rather, its accessible part, a radius of 46 billion light years around our location) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to be such that these fluctuations are uncorrelated for any practical use. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics.

Our universe is apparently an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had our universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and then begin to shrink. Moreover, a closed universe is finite. It is unclear what would happen to the Second Law of Thermodynamics in such a case. One could imagine at least three different scenarios (in fact, only the third one is plausible, since the first two require a smooth cosmic evolution, contrary to what is observed):
  • A highly controversial view is that in such a case the arrow of time will reverse.[7] The quantum fluctuations—which in the meantime have evolved into galaxies and stars—will be in superposition in such a way that the whole process described above is reversed—i.e., the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a big crunch, which is similar to its beginning in the big bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy must decrease close to the end of the universe, so that the Second Law of Thermodynamics reverses when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease—because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it, is that as distant particles arrive, more and more order is revealed because these particles are highly correlated with particles that arrived earlier.
  • It could be that this is the crucial point where the wavefunction collapse is important: if the collapse is real, then the quantum fluctuations will not be in superposition any longer; rather they had collapsed to a particular state (a particular arrangement of galaxies and stars), thus creating a big crunch, which is very different from the big bang. Such a scenario may be viewed as adding boundary conditions (say, at the distant future) that dictate the wavefunction collapse.[8]
  • The broad consensus among the scientific community today is that smooth initial conditions lead to a highly non-smooth final state, and that this is in fact the source of the thermodynamic arrow of time.[9] Highly non-smooth gravitational systems tend to collapse to black holes, so the wavefunction of the whole universe evolves from a superposition of small fluctuations to a superposition of states with many black holes in each. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the first scenario described above), and consists of mostly black holes rather than free particles.
In the first scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe comes to a halt, and will later be reversed.

In the second and third scenarios, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time. In the second scenario, the quantum arrow of time may be seen as the deep reason for this.

Friday, July 14, 2017

Heat death of the universe

From Wikipedia, the free encyclopedia
The heat death of the universe is a plausible ultimate fate of the universe in which the universe has diminished to a state of no thermodynamic free energy and therefore can no longer sustain processes that increase entropy. Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes may no longer be exploited to perform work. In the language of physics, this is when the universe reaches thermodynamic equilibrium (maximum entropy).

If the topology of the universe is open or flat, or if dark energy is a positive cosmological constant (both of which are supported by current data), the universe will continue expanding forever and a heat death is expected to occur,[1] with the universe cooling to approach equilibrium at a very low temperature after a very long time period.

The hypothesis of heat death stems from the ideas of William Thomson, 1st Baron Kelvin, who in the 1850s took the theory of heat as mechanical energy loss in nature (as embodied in the first two laws of thermodynamics) and extrapolated it to larger processes on a universal scale.

Origins of the idea

The idea of heat death stems from the second law of thermodynamics, of which one version states that entropy tends to increase in an isolated system. From this, the hypothesis infers that if the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed. In other words, according to this hypothesis, in nature there is a tendency to the dissipation (energy transformation) of mechanical energy (motion) into thermal energy; hence, by extrapolation, there exists the view that the mechanical movement of the universe will run down, as work is converted to heat, in time because of the second law.

The conjecture that all bodies in the universe cool off, eventually becoming too cold to support life, seems to have been first put forward by the French astronomer Jean-Sylvain Bailly in 1777 in his writings on the history of astronomy and in the ensuing correspondence with Voltaire. In Bailly's view, all planets have an internal heat and are now at some particular stage of cooling. Jupiter, for instance, is still too hot for life to arise there for thousands of years, while the Moon is already too cold. The final state, in this view, is described as one of "equilibrium" in which all motion ceases.[2]

The idea of heat death as a consequence of the laws of thermodynamics, however, was first proposed in loose terms beginning in 1851 by William Thomson, 1st Baron Kelvin, who theorized further on the mechanical energy loss views of Sadi Carnot (1824), James Joule (1843), and Rudolf Clausius (1850). Thomson’s views were then elaborated on more definitively over the next decade by Hermann von Helmholtz and William Rankine.[citation needed]

History

The idea of heat death of the universe derives from discussion of the application of the first two laws of thermodynamics to universal processes. Specifically, in 1851 William Thomson (Lord Kelvin) outlined the view, as based on recent experiments on the dynamical theory of heat, that "heat is not a substance, but a dynamical form of mechanical effect, we perceive that there must be an equivalence between mechanical work and heat, as between cause and effect."[3]
Lord Kelvin originated the idea of universal heat death in 1852.

In 1852, Thomson published On a Universal Tendency in Nature to the Dissipation of Mechanical Energy in which he outlined the rudiments of the second law of thermodynamics summarized by the view that mechanical motion and the energy used to create that motion will tend to dissipate or run down, naturally.[4] The ideas in this paper, in relation to their application to the age of the sun and the dynamics of the universal operation, attracted the likes of William Rankine and Hermann von Helmholtz. The three of them were said to have exchanged ideas on this subject.[5] In 1862, Thomson published "On the age of the Sun’s heat", an article in which he reiterated his fundamental beliefs in the indestructibility of energy (the first law) and the universal dissipation of energy (the second law), leading to diffusion of heat, cessation of useful motion (work), and exhaustion of potential energy through the material universe while clarifying his view of the consequences for the universe as a whole. In a key paragraph, Thomson wrote:
The result would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws. But it is impossible to conceive a limit to the extent of matter in the universe; and therefore science points rather to an endless progress, through an endless space, of action involving the transformation of potential energy into palpable motion and hence into heat, than to a single finite mechanism, running down like a clock, and stopping for ever.[6]
In the years following Thomson's 1852 and 1862 papers, Helmholtz and Rankine both credited Thomson with the idea, but read further into his papers, publishing views stating that Thomson argued that the universe will end in a "heat death" (Helmholtz), which will be the "end of all physical phenomena" (Rankine).[5][7]

Current status

Proposals about the final state of the universe depend on the assumptions made about its ultimate fate, and these assumptions have varied considerably over the late 20th century and early 21st century. In a hypothesized "open" or "flat" universe that continues expanding indefinitely, a heat death is expected to occur.[1] If the cosmological constant is zero, the universe will approach absolute zero temperature over a very long timescale. However, if the cosmological constant is positive, as appears to be the case in recent observations, the temperature will asymptote to a non-zero, positive value and the universe will approach a state of maximum entropy.[8]
The "heat death" situation could be avoided if there is a method or mechanism to regenerate hydrogen atoms from radiation, dark energy or other sources in order to avoid a gradual running down of the universe due to the conversion of matter into energy and heavier elements in stellar processes.[9][10]

Time frame for heat death

From the Big Bang through the present day, matter and dark matter in the universe are thought to have been concentrated in stars, galaxies, and galaxy clusters, and are presumed to continue to be so well into the future. Therefore, the universe is not in thermodynamic equilibrium, and objects can do physical work.[11], §VID. The decay time for a supermassive black hole of roughly one galaxy-mass (10^11 solar masses) due to Hawking radiation is on the order of 10^100 years,[12] so entropy can be produced until at least that time. After that, the universe enters the so-called Dark Era and is expected to consist chiefly of a dilute gas of photons and leptons.[11], §VIA. With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with extremely low energy levels and extremely long time scales. Speculatively, the universe may enter a second inflationary epoch, or, if the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.[11], §VE. It is also possible that entropy production will cease and the universe will reach heat death.[11], §VID. Another universe could possibly be created by random quantum fluctuations or quantum tunneling in roughly 10^{10^{10^{56}}} years.[13] Over infinite time, a spontaneous entropy decrease would eventually occur via the Poincaré recurrence theorem,[citation needed] thermal fluctuations,[14][15] and the fluctuation theorem.[16][17]
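As a rough sanity check on that figure (a sketch, not taken from the cited source; constants rounded), the standard evaporation-time estimate for a Schwarzschild black hole, t ≈ 5120πG²M³/(ħc⁴), can be evaluated for a galaxy-mass black hole:

```python
import math

# Hawking evaporation time for a Schwarzschild black hole:
#   t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34     # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Evaporation time, in years, for a black hole of the given mass."""
    seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return seconds / YEAR

# A galaxy-mass (~1e11 solar masses) supermassive black hole:
t = evaporation_time_years(1e11 * M_SUN)
print(f"~10^{math.log10(t):.0f} years")  # on the order of 10^100 years
```

Because the time scales as M³, a solar-mass black hole evaporates in roughly 10^67 years, while a 10^11-solar-mass hole takes on the order of 10^100 years, consistent with the figure above.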

Controversies

Max Planck wrote that the phrase 'entropy of the universe' has no meaning because it admits of no accurate definition.[18][19] More recently, Grandy writes: "It is rather presumptuous to speak of the entropy of a universe about which we still understand so little, and we wonder how one might define thermodynamic entropy for a universe and its major constituents that have never been in equilibrium in their entire existence."[20] According to Tisza: "If an isolated system is not in equilibrium, we cannot associate an entropy with it."[21] Buchdahl writes of "the entirely unjustifiable assumption that the universe can be treated as a closed thermodynamic system".[22] According to Gallavotti: "... there is no universally accepted notion of entropy for systems out of equilibrium, even when in a stationary state."[23] Discussing the question of entropy for non-equilibrium states in general, Lieb and Yngvason express their opinion as follows: "Despite the fact that most physicists believe in such a nonequilibrium entropy, it has so far proved impossible to define it in a clearly satisfactory way."[24] In the opinion of Čápek and Sheehan, "no known formulation [of entropy] applies to all possible thermodynamic regimes."[25] In Landsberg's opinion, "The third misconception is that thermodynamics, and in particular, the concept of entropy, can without further enquiry be applied to the whole universe. ... These questions have a certain fascination, but the answers are speculations, and lie beyond the scope of this book."[26]

A recent analysis of entropy states that "The entropy of a general gravitational field is still not known," and that "gravitational entropy is difficult to quantify." The analysis considers several possible assumptions that would be needed for estimates, and suggests that the visible universe has more entropy than previously thought, because it concludes that supermassive black holes are the largest contributor.[27] Another writer goes further: "It has long been known that gravity is important for keeping the universe out of thermal equilibrium. Gravitationally bound systems have negative specific heat—that is, the velocities of their components increase when energy is removed. ... Such a system does not evolve toward a homogeneous equilibrium state. Instead it becomes increasingly structured and heterogeneous as it fragments into subsystems."[28]

Laws of thermodynamics

From Wikipedia, the free encyclopedia

The four laws of thermodynamics define fundamental physical quantities (temperature, energy, and entropy) that characterize thermodynamic systems at thermal equilibrium. The laws describe how these quantities behave under various circumstances, and forbid certain phenomena (such as perpetual motion).

The four laws of thermodynamics are:[1][2][3][4][5]
  • Zeroth law: if two systems are each in thermal equilibrium with a third system, they are in thermal equilibrium with each other.
  • First law: the increase in the internal energy of a closed system equals the energy added to it as heat minus the work done by the system; energy can be neither created nor destroyed.
  • Second law: the entropy of an isolated system never decreases; equivalently, heat does not spontaneously pass from a colder to a hotter body.
  • Third law: the entropy of a system approaches a constant value as its temperature approaches absolute zero.
There have been suggestions of additional laws, but none of them achieves the generality of the four accepted laws, and they are not mentioned in standard textbooks.[1][2][3][4][6][7]

The laws of thermodynamics are important fundamental laws in physics and they are applicable in other natural sciences.

Zeroth law

The zeroth law of thermodynamics may be stated in the following form:
If two systems are both in thermal equilibrium with a third system then they are in thermal equilibrium with each other.[8]
The law is intended to allow the existence of an empirical parameter, the temperature, as a property of a system such that systems in thermal equilibrium with each other have the same temperature. The law as stated here is compatible with the use of a particular physical body, for example a mass of gas, to match temperatures of other bodies, but does not justify regarding temperature as a quantity that can be measured on a scale of real numbers.

Though this version of the law is one of the more commonly stated, it is only one of a diversity of statements that are labeled as "the zeroth law" by competent writers. Some statements go further and supply the important physical fact that temperature is one-dimensional, that one can conceptually arrange bodies in a real-number sequence from colder to hotter.[9][10][11] Perhaps there exists no unique "best possible statement" of the "zeroth law", because there is in the literature a range of formulations of the principles of thermodynamics, each of which calls for its own appropriate version of the law.

Although these concepts of temperature and of thermal equilibrium are fundamental to thermodynamics and were clearly stated in the nineteenth century, the desire to explicitly number the above law was not widely felt until Fowler and Guggenheim did so in the 1930s, long after the first, second, and third laws were already widely understood and recognized. Hence it was numbered the zeroth law. The importance of the law as a foundation to the earlier laws is that it allows the definition of temperature in a non-circular way, without reference to entropy, its conjugate variable. Such a temperature definition is said to be 'empirical'.[12][13][14][15][16][17]

First law

The first law of thermodynamics may be stated in several ways:
The increase in internal energy of a closed system is equal to the total energy added to the system. In particular, if energy entering the system is supplied as heat and energy leaves the system as work, the heat is accounted as positive and the work is accounted as negative.
\Delta U_{system} = Q - W
In the case of a thermodynamic cycle of a closed system, which returns to its original state, the heat Qin supplied to the system in one stage of the cycle, minus the heat Qout removed from it in another stage of the cycle, plus the work added to the system Win equals the work that leaves the system Wout.
\Delta U_{system\,(full\,cycle)}=0
hence, for a full cycle,
Q_{in} - Q_{out} = W_{out} - W_{in} = W_{net}
For the particular case of a thermally isolated system (adiabatically isolated), the change of the internal energy of an adiabatically isolated system can only be the result of the work added to the system, because the adiabatic assumption is: Q = 0.
\Delta U_{system} = U_{final} - U_{initial} = W_{in} - W_{out}
More specifically, the First Law encompasses several principles:
  • The law of conservation of energy. This states that energy can be neither created nor destroyed. However, energy can change forms, and energy can flow from one place to another. A particular consequence of the law of conservation of energy is that the total energy of an isolated system does not change.
  • The concept of internal energy. If a system has a definite temperature, then its total energy has three distinguishable components. If the system is in motion as a whole, it has kinetic energy. If the system as a whole is in an externally imposed force field (e.g. gravity), it has potential energy relative to some reference point in space. Finally, it has internal energy, which is a fundamental quantity of thermodynamics. The establishment of the concept of internal energy distinguishes the first law of thermodynamics from the more general law of conservation of energy.
E_{total} = \mathrm{KE}_{system} + \mathrm{PE}_{system} + U_{system}
The internal energy of a substance can be explained as the sum of the diverse kinetic energies of the erratic microscopic motions of its constituent atoms, and of the potential energy of interactions between them. Those microscopic energy terms are collectively called the substance's internal energy (U), and are accounted for by a macroscopic thermodynamic property. The total of the kinetic energies of the microscopic motions of the constituent atoms increases as the system's temperature increases; this assumes no other changes at the microscopic level of the system, such as chemical reactions or changes in the potential energy of the constituent atoms with respect to each other.
  • Work is a process of transferring energy to or from a system in ways that can be described by macroscopic mechanical forces exerted by factors in the surroundings, outside the system. Examples are an externally driven shaft agitating a stirrer within the system, or an externally imposed electric field that polarizes the material of the system, or a piston that compresses the system. Unless otherwise stated, it is customary to treat work as occurring without its dissipation to the surroundings. Practically speaking, in all natural processes, some of the work is dissipated by internal friction or viscosity. The work done by the system can come from its overall kinetic energy, from its overall potential energy, or from its internal energy.
For example, when a machine (not a part of the system) lifts a system upwards, some energy is transferred from the machine to the system. The system's energy increases as work is done on the system, and in this particular case, the energy increase of the system is manifested as an increase in the system's gravitational potential energy. Work added to the system increases the potential energy of the system:
W = \Delta \mathrm{PE}_{system}
Or in general, the energy added to the system in the form of work can be partitioned to kinetic, potential or internal energy forms:
W = \Delta \mathrm{KE}_{system} + \Delta \mathrm{PE}_{system} + \Delta U_{system}
  • When matter is transferred into a system, the internal energy and potential energy associated with that mass are transferred with it.
\left(u\,\Delta M\right)_{in} = \Delta U_{system}
where u denotes the internal energy per unit mass of the transferred matter, as measured while in the surroundings; and ΔM denotes the amount of transferred mass.
  • The flow of heat is a form of energy transfer.
Heating is a natural process of moving energy to or from a system other than by work or the transfer of matter. Direct passage of heat is only from a hotter to a colder system.
If the system has rigid walls that are impermeable to matter, and consequently energy cannot be transferred as work into or out from the system, and no external long-range force field affects it that could change its internal energy, then the internal energy can only be changed by the transfer of energy as heat:
\Delta U_{system}=Q
where Q denotes the amount of energy transferred into the system as heat.

Combining these principles leads to one traditional statement of the first law of thermodynamics: it is not possible to construct a machine which will perpetually output work without an equal amount of energy input to that machine. Or more briefly, a perpetual motion machine of the first kind is impossible.
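The energy bookkeeping described above, ΔU = Q − W for a closed system and ΔU = 0 over a full cycle, can be sketched in a few lines (the names and numbers are illustrative, not from any standard library):

```python
# First-law bookkeeping for a closed system: dU = Q - W, where Q is the
# heat added to the system and W is the work done *by* the system.

def delta_U(heat_in: float, work_out: float) -> float:
    """Change in internal energy of a closed system."""
    return heat_in - work_out

# One stage: 500 J of heat enters, the system does 200 J of work.
assert delta_U(500.0, 200.0) == 300.0

# A full cycle returns the system to its initial state, so the net change
# in internal energy over the cycle is zero: net heat in equals net work out.
stages = [(500.0, 200.0), (-100.0, 150.0), (-250.0, -200.0)]  # (Q, W) per stage
assert sum(delta_U(q, w) for q, w in stages) == 0.0
```

The sign convention matches the statement above: heat entering is positive, work leaving is positive, and a machine that outputs net work over a cycle must absorb an equal amount of net heat.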

Second law

The second law of thermodynamics indicates the irreversibility of natural processes, and, in many cases, the tendency of natural processes to lead towards spatial homogeneity of matter and energy, and especially of temperature. It can be formulated in a variety of interesting and important ways.
It implies the existence of a quantity called the entropy of a thermodynamic system. In terms of this quantity it implies that
When two initially isolated systems in separate but nearby regions of space, each in thermodynamic equilibrium with itself but not necessarily with each other, are then allowed to interact, they will eventually reach a mutual thermodynamic equilibrium. The sum of the entropies of the initially isolated systems is less than or equal to the total entropy of the final combination. Equality occurs just when the two original systems have all their respective intensive variables (temperature, pressure) equal; then the final system also has the same values.
This statement of the second law is founded on the assumption that, in classical thermodynamics, the entropy of a system is defined only when it has reached internal thermodynamic equilibrium (thermodynamic equilibrium with itself).
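The statement above can be made concrete with the textbook case of two bodies of equal, constant heat capacity brought into thermal contact and isolated from everything else (a sketch under an idealized constant-heat-capacity assumption):

```python
import math

def mixing_entropy(C: float, T1: float, T2: float) -> float:
    """Total entropy change when two bodies of equal heat capacity C,
    initially at temperatures T1 and T2, reach mutual equilibrium."""
    Tf = (T1 + T2) / 2  # final common temperature (equal heat capacities)
    # dS = C dT / T, integrated for each body from its initial T to Tf
    return C * math.log(Tf / T1) + C * math.log(Tf / T2)

dS = mixing_entropy(1.0, 300.0, 400.0)
assert dS > 0                                     # entropy increases for T1 != T2
assert mixing_entropy(1.0, 350.0, 350.0) == 0.0   # equality at equal temperatures
```

For unequal initial temperatures the total entropy strictly increases; it is unchanged only when the bodies start at the same temperature, matching the equality condition in the statement above.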

The second law is applicable to a wide variety of processes, reversible and irreversible. All natural processes are irreversible. Reversible processes are a useful and convenient theoretical fiction, but do not occur in nature.

A prime example of irreversibility is in the transfer of heat by conduction or radiation. It was known long before the discovery of the notion of entropy that when two bodies initially of different temperatures come into thermal connection, then heat always flows from the hotter body to the colder one.

The second law tells also about kinds of irreversibility other than heat transfer, for example those of friction and viscosity, and those of chemical reactions. The notion of entropy is needed to provide that wider scope of the law.

According to the second law of thermodynamics, in a theoretical and fictive reversible heat transfer, an element of heat transferred, δQ, is the product of the temperature (T), shared by the system and by the source or destination of the heat, with the increment (dS) of the system's conjugate variable, its entropy (S):
\delta Q = T\,dS.[1]
Entropy may also be viewed as a physical measure of the lack of physical information about the microscopic details of the motion and configuration of a system, when only the macroscopic states are known. This lack of information is often described as disorder on a microscopic or molecular scale. The law asserts that for two given macroscopically specified states of a system, there is a quantity called the difference of information entropy between them. This information entropy difference defines how much additional microscopic physical information is needed to specify one of the macroscopically specified states, given the macroscopic specification of the other (often a conveniently chosen reference state which may be presupposed to exist rather than explicitly stated). A final condition of a natural process always contains microscopically specifiable effects which are not fully and exactly predictable from the macroscopic specification of the initial condition of the process. This is why entropy increases in natural processes: the increase tells how much extra microscopic information is needed to distinguish the final macroscopically specified state from the initial macroscopically specified state.[18]
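This information view can be illustrated by counting microstates, echoing the gas-in-a-container example: for the macrostate "n of N particles are in the left half of the container", the number of compatible microstates is the binomial coefficient C(N, n), and S = kB ln Ω. A toy sketch with an illustrative particle count:

```python
import math

K_B = 1.381e-23  # Boltzmann constant, J/K

def boltzmann_entropy(N: int, n: int) -> float:
    """Boltzmann entropy of the macrostate 'n of N particles in the left half'."""
    omega = math.comb(N, n)  # number of microstates compatible with the macrostate
    return K_B * math.log(omega)

# The evenly spread macrostate has vastly more microstates (higher entropy)
# than the "all particles in one half" macrostate, which has exactly one
# microstate and therefore entropy k_B * ln(1) = 0.
N = 100
assert boltzmann_entropy(N, N // 2) > boltzmann_entropy(N, N)
```

With real particle counts (N ~ 10^23) the even macrostate outnumbers the one-sided macrostate so overwhelmingly that a spontaneous retreat of the gas into half the container, while not forbidden, is astronomically improbable.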

Third law

The third law of thermodynamics is sometimes stated as follows:
The entropy of a perfect crystal of any pure substance approaches zero as the temperature approaches absolute zero.
At zero temperature the system must be in a state with the minimum thermal energy. This statement holds true if the perfect crystal has only one state with minimum energy. Entropy is related to the number of possible microstates according to:
S = k_{\mathrm{B}} \ln \Omega
where S is the entropy of the system, kB is Boltzmann's constant, and Ω is the number of microstates (e.g. possible configurations of atoms). At absolute zero there is only one possible microstate (Ω = 1, since a perfect crystal of a pure substance has a single minimum-energy configuration), and ln(1) = 0, so the entropy is zero.

A more general form of the third law applies to systems, such as glasses, that may have more than one microscopically distinct minimum-energy state, or that may have a microscopically distinct state "frozen in" which is not strictly a minimum-energy state and not, strictly speaking, a state of thermodynamic equilibrium. At absolute zero temperature:
The entropy of a system approaches a constant value as the temperature approaches zero.
The constant value (not necessarily zero) is called the residual entropy of the system.
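A classic concrete case of residual entropy is ordinary ice: Pauling's 1935 estimate counts roughly (3/2)^N allowed hydrogen configurations for N water molecules, giving a molar residual entropy of R ln(3/2), close to the measured value of about 3.4 J/(mol·K). A quick back-of-envelope check:

```python
import math

# Pauling's estimate of the residual entropy of ice: each of the N molecules
# contributes a factor of 3/2 to the count of allowed proton configurations,
# so the molar residual entropy is S = R * ln(3/2).
R = 8.314  # molar gas constant, J/(mol K)

residual_entropy_ice = R * math.log(1.5)
print(f"{residual_entropy_ice:.2f} J/(mol K)")  # ~3.37, near the measured ~3.4
```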

History

Circa 1797, Count Rumford (born Benjamin Thompson) showed that endless mechanical action can generate indefinitely large amounts of heat from a fixed amount of working substance, thus challenging the caloric theory of heat, which held that there would be a finite amount of caloric heat/energy in a fixed amount of working substance. The first established thermodynamic principle, which eventually became the second law of thermodynamics, was formulated by Sadi Carnot in 1824. By 1860, as formalized in the works of those such as Rudolf Clausius and William Thomson, two established principles of thermodynamics had evolved, the first principle and the second principle, later restated as thermodynamic laws. By 1873, for example, thermodynamicist Josiah Willard Gibbs, in his memoir Graphical Methods in the Thermodynamics of Fluids, clearly stated the first two absolute laws of thermodynamics. Some textbooks throughout the 20th century have numbered the laws differently. In some fields removed from chemistry, the second law was considered to deal with the efficiency of heat engines only, whereas what was called the third law dealt with entropy increases. Directly defining zero points for entropy calculations was not considered to be a law. Gradually, this separation was combined into the second law, and the modern third law was widely adopted.
