Friday, July 14, 2017

Heat death of the universe

From Wikipedia, the free encyclopedia
The heat death of the universe is a plausible ultimate fate of the universe in which the universe has diminished to a state of no thermodynamic free energy and therefore can no longer sustain processes that increase entropy. Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes can no longer be exploited to perform work. In the language of physics, this is when the universe reaches thermodynamic equilibrium (maximum entropy).

If the topology of the universe is open or flat, or if dark energy is a positive cosmological constant (both of which are supported by current data), the universe will continue expanding forever and a heat death is expected to occur,[1] with the universe cooling to approach equilibrium at a very low temperature after a very long time period.

The hypothesis of heat death stems from the ideas of William Thomson, 1st Baron Kelvin, who in the 1850s took the theory of heat as mechanical energy loss in nature (as embodied in the first two laws of thermodynamics) and extrapolated it to larger processes on a universal scale.

Origins of the idea

The idea of heat death stems from the second law of thermodynamics, of which one version states that entropy tends to increase in an isolated system. From this, the hypothesis infers that if the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed. In other words, according to this hypothesis, nature tends to dissipate (transform) mechanical energy (motion) into thermal energy; by extrapolation, the mechanical motion of the universe will therefore run down over time, as work is converted to heat, in consequence of the second law.

The conjecture that all bodies in the universe cool off, eventually becoming too cold to support life, seems to have been first put forward by the French astronomer Jean-Sylvain Bailly in 1777 in his writings on the history of astronomy and in the ensuing correspondence with Voltaire. In Bailly's view, all planets have an internal heat and are now at some particular stage of cooling. Jupiter, for instance, is still too hot for life to arise there for thousands of years, while the Moon is already too cold. The final state, in this view, is described as one of "equilibrium" in which all motion ceases.[2]

The idea of heat death as a consequence of the laws of thermodynamics, however, was first proposed in loose terms beginning in 1851 by William Thomson, 1st Baron Kelvin, who theorized further on the mechanical energy loss views of Sadi Carnot (1824), James Joule (1843), and Rudolf Clausius (1850). Thomson’s views were then elaborated on more definitively over the next decade by Hermann von Helmholtz and William Rankine.[citation needed]

History

The idea of heat death of the universe derives from discussion of the application of the first two laws of thermodynamics to universal processes. Specifically, in 1851 William Thomson (Lord Kelvin) outlined the view, as based on recent experiments on the dynamical theory of heat, that "heat is not a substance, but a dynamical form of mechanical effect, we perceive that there must be an equivalence between mechanical work and heat, as between cause and effect."[3]
Lord Kelvin originated the idea of universal heat death in 1852.

In 1852, Thomson published On a Universal Tendency in Nature to the Dissipation of Mechanical Energy in which he outlined the rudiments of the second law of thermodynamics summarized by the view that mechanical motion and the energy used to create that motion will tend to dissipate or run down, naturally.[4] The ideas in this paper, in relation to their application to the age of the sun and the dynamics of the universal operation, attracted the likes of William Rankine and Hermann von Helmholtz. The three of them were said to have exchanged ideas on this subject.[5] In 1862, Thomson published "On the age of the Sun’s heat", an article in which he reiterated his fundamental beliefs in the indestructibility of energy (the first law) and the universal dissipation of energy (the second law), leading to diffusion of heat, cessation of useful motion (work), and exhaustion of potential energy through the material universe while clarifying his view of the consequences for the universe as a whole. In a key paragraph, Thomson wrote:
The result would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws. But it is impossible to conceive a limit to the extent of matter in the universe; and therefore science points rather to an endless progress, through an endless space, of action involving the transformation of potential energy into palpable motion and hence into heat, than to a single finite mechanism, running down like a clock, and stopping for ever.[6]
In the years following Thomson's 1852 and 1862 papers, Helmholtz and Rankine both credited Thomson with the idea, but read further into his papers, publishing views stating that Thomson argued that the universe will end in a "heat death" (Helmholtz), which will be the "end of all physical phenomena" (Rankine).[5][7]

Current status

Proposals about the final state of the universe depend on the assumptions made about its ultimate fate, and these assumptions have varied considerably over the late 20th century and early 21st century. In a hypothesized "open" or "flat" universe that continues expanding indefinitely, a heat death is expected to occur.[1] If the cosmological constant is zero, the universe will approach absolute zero temperature over a very long timescale. However, if the cosmological constant is positive, as appears to be the case in recent observations, the temperature will asymptote to a non-zero, positive value and the universe will approach a state of maximum entropy.[8]
The "heat death" situation could be avoided if there is a method or mechanism to regenerate hydrogen atoms from radiation, dark energy or other sources in order to avoid a gradual running down of the universe due to the conversion of matter into energy and heavier elements in stellar processes.[9][10]

Time frame for heat death

From the Big Bang through the present day, matter and dark matter in the universe are thought to have been concentrated in stars, galaxies, and galaxy clusters, and are presumed to continue to be so well into the future. Therefore, the universe is not in thermodynamic equilibrium, and objects can do physical work.[11], §VID. The decay time for a supermassive black hole of roughly 1 galaxy-mass (10^11 solar masses) due to Hawking radiation is on the order of 10^100 years,[12] so entropy can be produced until at least that time. After that time, the universe enters the so-called Dark Era and is expected to consist chiefly of a dilute gas of photons and leptons.[11]§VIA With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with extremely low energy levels and extremely long time scales. Speculatively, it is possible that the universe may enter a second inflationary epoch, or, assuming that the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.[11], §VE. It is also possible that entropy production will cease and the universe will reach heat death.[11], §VID. Possibly another universe could be created by random quantum fluctuations or quantum tunneling in roughly 10^{10^{10^{56}}} years.[13] Over an infinite time, there would be a spontaneous entropy decrease via the Poincaré recurrence theorem,[citation needed] thermal fluctuations,[14][15] and the fluctuation theorem.[16][17]
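As a rough sanity check on the figures above, the standard Hawking evaporation-time formula t ≈ 5120πG²M³/(ħc⁴) can be evaluated directly. The sketch below (with approximate physical constants, and ignoring particle-species corrections to the emission rate) reproduces the ~10^100-year order of magnitude for a galaxy-mass black hole.

```python
# Rough order-of-magnitude check of the black-hole decay time quoted above,
# using the standard Hawking evaporation formula t ~ 5120*pi*G^2*M^3/(hbar*c^4).
# A sketch only: it ignores corrections from the number of emitted particle species.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Hawking evaporation time for a Schwarzschild black hole, in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

# A galaxy-mass black hole of ~1e11 solar masses:
t = evaporation_time_years(1e11 * M_SUN)
print(f"~10^{math.log10(t):.0f} years")  # on the order of 10^100 years
```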

Controversies

Max Planck wrote that the phrase 'entropy of the universe' has no meaning because it admits of no accurate definition.[18][19] More recently, Grandy writes: "It is rather presumptuous to speak of the entropy of a universe about which we still understand so little, and we wonder how one might define thermodynamic entropy for a universe and its major constituents that have never been in equilibrium in their entire existence."[20] According to Tisza: "If an isolated system is not in equilibrium, we cannot associate an entropy with it."[21] Buchdahl writes of "the entirely unjustifiable assumption that the universe can be treated as a closed thermodynamic system".[22] According to Gallavotti: "... there is no universally accepted notion of entropy for systems out of equilibrium, even when in a stationary state."[23] Discussing the question of entropy for non-equilibrium states in general, Lieb and Yngvason express their opinion as follows: "Despite the fact that most physicists believe in such a nonequilibrium entropy, it has so far proved impossible to define it in a clearly satisfactory way."[24] In the opinion of Čápek and Sheehan, "no known formulation [of entropy] applies to all possible thermodynamic regimes."[25] In Landsberg's opinion, "The third misconception is that thermodynamics, and in particular, the concept of entropy, can without further enquiry be applied to the whole universe. ... These questions have a certain fascination, but the answers are speculations, and lie beyond the scope of this book."[26]

A recent analysis of entropy states that "The entropy of a general gravitational field is still not known" and that "gravitational entropy is difficult to quantify". The analysis considers several possible assumptions that would be needed for estimates and suggests that the visible universe has more entropy than previously thought, because it concludes that supermassive black holes are the largest contributor.[27] Another writer goes further: "It has long been known that gravity is important for keeping the universe out of thermal equilibrium. Gravitationally bound systems have negative specific heat—that is, the velocities of their components increase when energy is removed. ... Such a system does not evolve toward a homogeneous equilibrium state. Instead it becomes increasingly structured and heterogeneous as it fragments into subsystems."[28]

Laws of thermodynamics

From Wikipedia, the free encyclopedia

The four laws of thermodynamics define fundamental physical quantities (temperature, energy, and entropy) that characterize thermodynamic systems at thermal equilibrium. The laws describe how these quantities behave under various circumstances, and forbid certain phenomena (such as perpetual motion).

The four laws of thermodynamics are:[1][2][3][4][5]
  • Zeroth law: if two systems are each in thermal equilibrium with a third system, they are in thermal equilibrium with each other. This law underpins the definition of temperature.
  • First law: when energy passes into or out of a system (as work, heat, or matter), the system's internal energy changes in accordance with the law of conservation of energy; equivalently, perpetual motion machines of the first kind are impossible.
  • Second law: in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems increases; equivalently, perpetual motion machines of the second kind are impossible.
  • Third law: the entropy of a system approaches a constant value as its temperature approaches absolute zero.
There have been suggestions of additional laws, but none of them achieves the generality of the four accepted laws, and they are not mentioned in standard textbooks.[1][2][3][4][6][7]

The laws of thermodynamics are important fundamental laws in physics and they are applicable in other natural sciences.

Zeroth law

The zeroth law of thermodynamics may be stated in the following form:
If two systems are both in thermal equilibrium with a third system then they are in thermal equilibrium with each other.[8]
The law is intended to allow the existence of an empirical parameter, the temperature, as a property of a system such that systems in thermal equilibrium with each other have the same temperature. The law as stated here is compatible with the use of a particular physical body, for example a mass of gas, to match temperatures of other bodies, but does not justify regarding temperature as a quantity that can be measured on a scale of real numbers.

Though this version of the law is one of the more commonly stated, it is only one of a diversity of statements that are labeled as "the zeroth law" by competent writers. Some statements go further, so as to supply the important physical fact that temperature is one-dimensional and that one can conceptually arrange bodies in a real-number sequence from colder to hotter.[9][10][11] Perhaps there exists no unique "best possible statement" of the "zeroth law", because there is in the literature a range of formulations of the principles of thermodynamics, each of which calls for its respectively appropriate version of the law.

Although these concepts of temperature and of thermal equilibrium are fundamental to thermodynamics and were clearly stated in the nineteenth century, the desire to explicitly number the above law was not widely felt until Fowler and Guggenheim did so in the 1930s, long after the first, second, and third law were already widely understood and recognized. Hence it was numbered the zeroth law. The importance of the law as a foundation to the earlier laws is that it allows the definition of temperature in a non-circular way without reference to entropy, its conjugate variable. Such a temperature definition is said to be 'empirical'.[12][13][14][15][16][17]
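What the zeroth law buys, formally, is that "is in thermal equilibrium with" behaves as an equivalence relation, so systems partition into classes that a single empirical temperature label can index. A toy sketch of that partitioning (the systems and the observed equilibrium pairings are hypothetical):

```python
# Toy illustration of the zeroth law as an equivalence relation: if A ~ C and
# B ~ C, then A ~ B, so systems fall into classes that a single empirical
# "temperature" label can index. Systems and pairings here are hypothetical.
from collections import defaultdict

def equilibrium_classes(systems, observed_pairs):
    """Group systems into mutual-equilibrium classes via union-find."""
    parent = {s: s for s in systems}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path compression
            s = parent[s]
        return s

    for a, b in observed_pairs:  # each pair was observed in thermal equilibrium
        parent[find(a)] = find(b)

    classes = defaultdict(list)
    for s in systems:
        classes[find(s)].append(s)
    return list(classes.values())

# A and B were each equilibrated with C, but never with each other;
# the zeroth law still puts all three at one temperature.
print(equilibrium_classes(["A", "B", "C", "D"], [("A", "C"), ("B", "C")]))
# -> [['A', 'B', 'C'], ['D']]
```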

First law

The first law of thermodynamics may be stated in several ways:
The increase in internal energy of a closed system is equal to the total energy added to the system. In particular, if the energy entering the system is supplied as heat and energy leaves the system as work, the heat is accounted as positive and the work is accounted as negative.
\Delta U_{\text{system}} = Q - W
In the case of a thermodynamic cycle of a closed system, which returns to its original state, the heat Qin supplied to the system in one stage of the cycle, minus the heat Qout removed from it in another stage of the cycle, plus the work added to the system Win equals the work that leaves the system Wout.
\Delta U_{\text{system (full cycle)}} = 0
hence, for a full cycle,
Q = Q_{\text{in}} - Q_{\text{out}} + W_{\text{in}} - W_{\text{out}} = W_{\text{net}}
For the particular case of a thermally isolated system (adiabatically isolated), the change of the internal energy of an adiabatically isolated system can only be the result of the work added to the system, because the adiabatic assumption is: Q = 0.
\Delta U_{\text{system}} = U_{\text{final}} - U_{\text{initial}} = W_{\text{in}} - W_{\text{out}}
More specifically, the First Law encompasses several principles:
  • The law of conservation of energy: energy can be neither created nor destroyed. However, energy can change forms, and energy can flow from one place to another. A particular consequence of the law of conservation of energy is that the total energy of an isolated system does not change.
  • The concept of internal energy: if a system has a definite temperature, then its total energy has three distinguishable components. If the system is in motion as a whole, it has kinetic energy. If the system as a whole is in an externally imposed force field (e.g. gravity), it has potential energy relative to some reference point in space. Finally, it has internal energy, which is a fundamental quantity of thermodynamics. The establishment of the concept of internal energy distinguishes the first law of thermodynamics from the more general law of conservation of energy.
E_{\text{total}} = \mathrm{KE}_{\text{system}} + \mathrm{PE}_{\text{system}} + U_{\text{system}}
The internal energy of a substance can be explained as the sum of the diverse kinetic energies of the erratic microscopic motions of its constituent atoms and of the potential energy of interactions between them. Those microscopic energy terms are collectively called the substance's internal energy (U) and are accounted for as a macroscopic thermodynamic property. The total of the kinetic energies of microscopic motions of the constituent atoms increases as the system's temperature increases, assuming no other changes at the microscopic level, such as chemical reactions or changes in the potential energy of the constituent atoms with respect to each other.
  • Work is a process of transferring energy to or from a system in ways that can be described by macroscopic mechanical forces exerted by factors in the surroundings, outside the system. Examples are an externally driven shaft agitating a stirrer within the system, an externally imposed electric field that polarizes the material of the system, or a piston that compresses the system. Unless otherwise stated, it is customary to treat work as occurring without its dissipation to the surroundings. Practically speaking, in all natural processes, some of the work is dissipated by internal friction or viscosity. The work done by the system can come from its overall kinetic energy, from its overall potential energy, or from its internal energy.
For example, when a machine (not a part of the system) lifts a system upwards, some energy is transferred from the machine to the system. The system's energy increases as work is done on the system; in this particular case, the energy increase of the system is manifested as an increase in the system's gravitational potential energy. Work added to the system increases the potential energy of the system:
W = \Delta \mathrm{PE}_{\text{system}}
Or in general, the energy added to the system in the form of work can be partitioned to kinetic, potential or internal energy forms:
W = \Delta \mathrm{KE}_{\text{system}} + \Delta \mathrm{PE}_{\text{system}} + \Delta U_{\text{system}}
  • When matter is transferred into a system, the internal energy and potential energy associated with that mass are transferred with it.
\left(u\,\Delta M\right)_{\text{in}} = \Delta U_{\text{system}}
where u denotes the internal energy per unit mass of the transferred matter, as measured while in the surroundings; and ΔM denotes the amount of transferred mass.
  • The flow of heat is a form of energy transfer.
Heating is a natural process of moving energy to or from a system other than by work or the transfer of matter. Direct passage of heat is only from a hotter to a colder system.
If the system has rigid walls that are impermeable to matter, and consequently energy cannot be transferred as work into or out from the system, and no external long-range force field affects it that could change its internal energy, then the internal energy can only be changed by the transfer of energy as heat:
\Delta U_{\text{system}} = Q
where Q denotes the amount of energy transferred into the system as heat.

Combining these principles leads to one traditional statement of the first law of thermodynamics: it is not possible to construct a machine which will perpetually output work without an equal amount of energy input to that machine. Or more briefly, a perpetual motion machine of the first kind is impossible.
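As a minimal illustration of the bookkeeping above, the following sketch (illustrative numbers, not from any measured process) applies ΔU = Q − W to a single step and checks the full-cycle identity that net heat absorbed equals net work delivered:

```python
# Minimal first-law bookkeeping for a closed system: dU = Q - W, where Q is
# heat added TO the system and W is work done BY the system. Numbers are
# illustrative, not from any measured process.

def delta_u(q_in_joules: float, w_by_system_joules: float) -> float:
    """Change in internal energy of a closed system."""
    return q_in_joules - w_by_system_joules

# 500 J of heat added while the system does 200 J of expansion work:
print(delta_u(500.0, 200.0))  # 300.0 J increase in internal energy

# Over a full cycle the system returns to its initial state, so dU = 0 and
# the net heat absorbed must equal the net work delivered:
q_net = 500.0 - 200.0  # Q_in - Q_out
w_net = 350.0 - 50.0   # W_out - W_in
assert abs(q_net - w_net) < 1e-9  # consistent full-cycle bookkeeping
```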

Second law

The second law of thermodynamics indicates the irreversibility of natural processes, and, in many cases, the tendency of natural processes to lead towards spatial homogeneity of matter and energy, and especially of temperature. It can be formulated in a variety of interesting and important ways.
It implies the existence of a quantity called the entropy of a thermodynamic system. In terms of this quantity it implies that
When two initially isolated systems in separate but nearby regions of space, each in thermodynamic equilibrium with itself but not necessarily with each other, are then allowed to interact, they will eventually reach a mutual thermodynamic equilibrium. The sum of the entropies of the initially isolated systems is less than or equal to the total entropy of the final combination. Equality occurs just when the two original systems have all their respective intensive variables (temperature, pressure) equal; then the final system also has the same values.
This statement of the second law is founded on the assumption that, in classical thermodynamics, the entropy of a system is defined only when it has reached internal thermodynamic equilibrium (thermodynamic equilibrium with itself).

The second law is applicable to a wide variety of processes, reversible and irreversible. All natural processes are irreversible. Reversible processes are a useful and convenient theoretical fiction, but do not occur in nature.

A prime example of irreversibility is in the transfer of heat by conduction or radiation. It was known long before the discovery of the notion of entropy that when two bodies initially of different temperatures come into thermal connection, then heat always flows from the hotter body to the colder one.

The second law tells also about kinds of irreversibility other than heat transfer, for example those of friction and viscosity, and those of chemical reactions. The notion of entropy is needed to provide that wider scope of the law.

According to the second law of thermodynamics, in a theoretical and fictive reversible heat transfer, an element of heat transferred, δQ, is the product of the temperature (T), both of the system and of the sources or destination of the heat, with the increment (dS) of the system's conjugate variable, its entropy (S)
\delta Q = T\,dS.[1]
Entropy may also be viewed as a physical measure of the lack of physical information about the microscopic details of the motion and configuration of a system, when only the macroscopic states are known. This lack of information is often described as disorder on a microscopic or molecular scale. The law asserts that for two given macroscopically specified states of a system, there is a quantity called the difference of information entropy between them. This information entropy difference defines how much additional microscopic physical information is needed to specify one of the macroscopically specified states, given the macroscopic specification of the other (often a conveniently chosen reference state which may be presupposed to exist rather than explicitly stated). A final condition of a natural process always contains microscopically specifiable effects which are not fully and exactly predictable from the macroscopic specification of the initial condition of the process. This is why entropy increases in natural processes: the increase tells how much extra microscopic information is needed to distinguish the final macroscopically specified state from the initial macroscopically specified state.[18]
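The irreversible conduction example above can be made quantitative with a minimal sketch: two identical bodies of constant heat capacity C equilibrate at their mean temperature, and the total entropy change, C ln(Tf²/(Th Tc)), is never negative. The heat capacity and temperatures below are illustrative values.

```python
# Sketch of second-law bookkeeping for the irreversible conduction example:
# two identical bodies of constant heat capacity C equilibrate at the mean
# temperature, and the total entropy change C*ln(Tf^2/(Th*Tc)) is never
# negative. C and the temperatures are illustrative values.
import math

def entropy_production(t_hot: float, t_cold: float, c: float = 1000.0) -> float:
    """Total entropy change (J/K) when two equal bodies equilibrate."""
    t_final = (t_hot + t_cold) / 2            # energy conservation, equal C
    ds_hot = c * math.log(t_final / t_hot)    # negative: hot body cools
    ds_cold = c * math.log(t_final / t_cold)  # positive: cold body warms
    return ds_hot + ds_cold                   # >= 0; zero only if t_hot == t_cold

print(entropy_production(400.0, 200.0))  # ~ +118 J/K: irreversible
print(entropy_production(300.0, 300.0))  # 0.0: already in equilibrium
```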

Third law

The third law of thermodynamics is sometimes stated as follows:
The entropy of a perfect crystal of any pure substance approaches zero as the temperature approaches absolute zero.
At zero temperature the system must be in a state with the minimum thermal energy. This statement holds true if the perfect crystal has only one state with minimum energy. Entropy is related to the number of possible microstates according to:
S = k_{\mathrm{B}} \ln \Omega
where S is the entropy of the system, kB is Boltzmann's constant, and Ω is the number of microstates (e.g. possible configurations of atoms). At absolute zero there is only one possible microstate (Ω = 1, since all the atoms of a pure substance are identical and hence all orderings are identical, there being only one combination), and ln(1) = 0.

A more general form of the third law applies to systems, such as glasses, that may have more than one microscopically distinct minimum-energy state, or that may have a microscopically distinct state "frozen in" which is not strictly a minimum-energy state and is not, strictly speaking, a state of thermodynamic equilibrium at absolute zero temperature:
The entropy of a system approaches a constant value as the temperature approaches zero.
The constant value (not necessarily zero) is called the residual entropy of the system.
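The Boltzmann relation above turns residual entropy into a counting exercise: if each of N molecules can freeze into g energetically indistinguishable orientations, Ω = g^N and the molar residual entropy is R ln g. A minimal sketch, using the classic textbook case of carbon monoxide ice (g = 2):

```python
# Residual entropy from the Boltzmann relation S = kB*ln(Omega). If each of
# N molecules can freeze into g energetically indistinguishable orientations,
# Omega = g**N and the molar residual entropy is R*ln(g). The classic
# textbook case is carbon monoxide ice with g = 2 (CO vs OC orientations).
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def residual_molar_entropy(g: int) -> float:
    """Residual entropy per mole, J/(mol K), for g frozen-in orientations."""
    # S = kB * ln(g**N_A) = kB * N_A * ln(g) = R * ln(g)
    return K_B * N_A * math.log(g)

print(residual_molar_entropy(1))  # 0.0   -> perfect crystal, third law proper
print(residual_molar_entropy(2))  # ~5.76 -> CO-like orientationally disordered solid
```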

History

Circa 1797, Count Rumford (born Benjamin Thompson) showed that endless mechanical action can generate indefinitely large amounts of heat from a fixed amount of working substance, thus challenging the caloric theory of heat, which held that there would be a finite amount of caloric heat/energy in a fixed amount of working substance. The first established thermodynamic principle, which eventually became the second law of thermodynamics, was formulated by Sadi Carnot in 1824. By 1860, as formalized in the works of Rudolf Clausius and William Thomson, two established principles of thermodynamics had evolved, the first principle and the second principle, later restated as thermodynamic laws. By 1873, for example, the thermodynamicist Josiah Willard Gibbs, in his memoir Graphical Methods in the Thermodynamics of Fluids, clearly stated the first two absolute laws of thermodynamics. Some textbooks throughout the 20th century numbered the laws differently. In some fields removed from chemistry, the second law was considered to deal with the efficiency of heat engines only, whereas what was called the third law dealt with entropy increases. Directly defining zero points for entropy calculations was not considered to be a law. Gradually, this separation was combined into the second law, and the modern third law was widely adopted.

Thursday, July 13, 2017

Enthalpy

From Wikipedia, the free encyclopedia
Enthalpy /ˈɛnθəlpi/ is a measurement of energy in a thermodynamic system. It is the thermodynamic quantity equivalent to the total heat content of a system. It is equal to the internal energy of the system plus the product of pressure and volume.[1]

More technically, it includes the internal energy, which is the energy required to create a system, and the amount of energy required to make room for it by displacing its environment and establishing its volume and pressure.[2]

Enthalpy is defined as a state function that depends only on the prevailing equilibrium state identified by the system's internal energy, pressure, and volume. It is an extensive quantity. The unit of measurement for enthalpy in the International System of Units (SI) is the joule, but other historical, conventional units are still in use, such as the British thermal unit and the calorie.

Enthalpy is the preferred expression of system energy changes in many chemical, biological, and physical measurements at constant pressure, because it simplifies the description of energy transfer. At constant pressure, the enthalpy change equals the energy transferred from the environment through heating or work other than expansion work.

The total enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. ΔH is positive in endothermic reactions and negative in heat-releasing exothermic processes.

For processes under constant pressure, ΔH is equal to the change in the internal energy of the system, plus the pressure-volume work that the system has done on its surroundings.[3] This means that the change in enthalpy under such conditions is the heat absorbed (or released) by the material through a chemical reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure assume standard state: most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature (see standard state), but expressions for enthalpy generally reference the standard heat of formation at 25 °C.

Enthalpy of ideal gases and incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses.

Origins

The word enthalpy stems from the Ancient Greek verb enthalpein (ἐνθάλπειν), which means "to warm in".[4] It combines the Classical Greek prefix ἐν- en-, meaning "to put into", and the verb θάλπειν thalpein, meaning "to heat". The word enthalpy is often incorrectly attributed to Benoît Paul Émile Clapeyron and Rudolf Clausius through the 1850 publication of their Clausius–Clapeyron relation. This misconception was popularized by the 1927 publication of The Mollier Steam Tables and Diagrams. However, neither the concept, the word, nor the symbol for enthalpy existed until well after Clapeyron's death.

The earliest writings to contain the concept of enthalpy did not appear until 1875,[5] when Josiah Willard Gibbs introduced "a heat function for constant pressure". However, Gibbs did not use the word "enthalpy" in his writings.[note 1]

The actual word first appears in the scientific literature in a 1909 publication by J. P. Dalton. According to that publication, Heike Kamerlingh Onnes actually coined the word.[6]
Over the years, scientists used many different symbols to denote enthalpy. In 1922 Alfred W. Porter proposed the symbol "H" as a standard,[7] thus finalizing the terminology still in use today.

Formal definition

The enthalpy of a homogeneous system is defined as[8][9]
H = U + pV,
where
H is the enthalpy of the system,
U is the internal energy of the system,
p is the pressure of the system,
V is the volume of the system.
Enthalpy is an extensive property. This means that, for homogeneous systems, the enthalpy is proportional to the size of the system. It is convenient to introduce the specific enthalpy h = H/m, where m is the mass of the system, or the molar enthalpy Hm = H/n, where n is the number of moles (h and Hm are intensive properties). For inhomogeneous systems the enthalpy is the sum of the enthalpies of the composing subsystems:
H = \sum_k H_k,
where the label k refers to the various subsystems. In the case of continuously varying p, T, or composition, the summation becomes an integral:
H = \int \rho h\,dV,
where ρ is the density.
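As a minimal worked example of the definition H = U + pV, consider a monatomic ideal gas, for which U = (3/2)nRT and pV = nRT, so H = (5/2)nRT; the amounts and temperature below are illustrative:

```python
# Sketch of the definition H = U + p*V for a monatomic ideal gas, where
# U = (3/2)*n*R*T and p*V = n*R*T, so H = (5/2)*n*R*T. The specific and
# molar enthalpies then follow by dividing by mass and moles, respectively.
R = 8.314  # gas constant, J/(mol K)

def enthalpy_monatomic(n_mol: float, t_kelvin: float) -> float:
    """H = U + pV for n moles of a monatomic ideal gas (joules)."""
    u = 1.5 * n_mol * R * t_kelvin  # internal energy
    pv = n_mol * R * t_kelvin       # ideal-gas law: p*V = n*R*T
    return u + pv

n, T = 2.0, 300.0
H = enthalpy_monatomic(n, T)
print(H)      # 12471 J total enthalpy
print(H / n)  # ~6236 J/mol: molar enthalpy H_m = H/n
```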

The enthalpy of homogeneous systems can be viewed as function H(S,p) of the entropy S and the pressure p, and a differential relation for it can be derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:
dU = \delta Q - \delta W.
Here, δQ is a small amount of heat added to the system, and δW a small amount of work performed by the system. In a homogeneous system only reversible processes can take place, so the second law of thermodynamics gives δQ = T dS, with T the absolute temperature of the system. Furthermore, if only pV work is done, δW = p dV. As a result,
dU = T\,dS - p\,dV.
Adding d(pV) to both sides of this expression gives
dU + d(pV) = T\,dS - p\,dV + d(pV),
or
d(U + pV) = T\,dS + V\,dp.
So
dH(S, p) = T\,dS + V\,dp.

Other expressions

The above expression of dH in terms of entropy and pressure may be unfamiliar to some readers. However, there are expressions in terms of more familiar variables such as temperature and pressure:[8]:88[10]
dH = C_p\,dT + V(1 - \alpha T)\,dp.
Here Cp is the heat capacity at constant pressure and α is the coefficient of (cubic) thermal expansion:
\alpha = \frac{1}{V} \left( \frac{\partial V}{\partial T} \right)_p.
With this expression one can, in principle, determine the enthalpy if Cp and V are known as functions of p and T.

Note that for an ideal gas, αT = 1,[note 2] so that
dH = C_p\,dT.
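A short numerical sketch of this simplification: for an ideal gas the pressure term vanishes, and Δh reduces to the integral of cp dT. The cp value for nitrogen below (~1040 J/(kg K), treated as constant) is an approximation, and the integration is written as a sum so a temperature-dependent cp(T) could be swapped in:

```python
# Numerical sketch of dh = cp*dT + v*(1 - alpha*T)*dp. For an ideal gas
# alpha*T = 1, so the pressure term vanishes and dh = cp*dT. Nitrogen near
# room conditions with cp ~ 1040 J/(kg K), treated as constant, is assumed.

CP_N2 = 1040.0  # J/(kg K), approximate constant-pressure specific heat

def delta_h_ideal_gas(t1: float, t2: float, cp: float = CP_N2,
                      steps: int = 1000) -> float:
    """Integrate dh = cp dT from t1 to t2 (trivially cp*(t2 - t1) here, but
    written as a sum so a temperature-dependent cp(T) could be swapped in)."""
    dt = (t2 - t1) / steps
    return sum(cp * dt for _ in range(steps))

print(delta_h_ideal_gas(300.0, 380.0))  # ~83200 J/kg for an 80 K temperature rise
```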
In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for dH then becomes
dH = T\,dS + V\,dp + \sum_i \mu_i\,dN_i,
where μi is the chemical potential per particle for an i-type particle, and Ni is the number of such particles. The last term can also be written as μidni (with dni the number of moles of component i added to the system and, in this case, μi the molar chemical potential) or as μidmi (with dmi the mass of component i added to the system and, in this case, μi the specific chemical potential).

Physical interpretation

The U term can be interpreted as the energy required to create the system, and the pV term as the energy that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure.

In basic physics and statistical mechanics it may be more interesting to study the internal properties of the system and therefore the internal energy is used.[11][12] In basic chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure-volume work represents an energy exchange with the atmosphere that cannot be accessed or controlled, so that ΔH is the expression chosen for the heat of reaction.

For a heat engine, a change in its internal energy is the difference between the heat input and the pressure-volume work done by the working substance, while a change in its enthalpy is the difference between the heat input and the work done by the engine:[13]
dH = \delta Q - \delta W
where the work W done by the engine is:
W = -\oint V\,dp

Relationship to heat

In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems: dU = δQ − δW. We apply it to the special case with a uniform pressure at the surface. In this case the work term can be split into two contributions: the so-called pV work, given by p dV (where here p is the pressure at the surface and dV is the increase of the volume of the system), and all other types of work δW′, such as by a shaft or by electromagnetic interaction. So we write δW = p dV + δW′. In this case the first law reads:
dU = \delta Q - p\,dV - \delta W',
or
dH = \delta Q + V\,dp - \delta W'.
From this relation we see that the increase in enthalpy of a system is equal to the added heat:
dH = \delta Q,
provided that the system is under constant pressure (dp = 0) and that the only work done by the system is expansion work (δW' = 0).[14]

Applications

In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the conditions that obtain during the creation of the thermodynamic system.

Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure p remains constant; this is the pV term. The supplied energy must also provide the change in internal energy, U, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy U + pV. For systems at constant pressure, with no external work done other than the pV work, the change in enthalpy is the heat received by the system.

For a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant.[15]

Heat of reaction

The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:
\Delta H = H_{\mathrm{f}} - H_{\mathrm{i}},
where
ΔH is the "enthalpy change",
Hf is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products),
Hi is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).
For an exothermic reaction at constant pressure, the system's change in enthalpy equals the energy released in the reaction, including the energy retained in the system and lost through expansion against its surroundings. In a similar manner, for an endothermic reaction, the system's change in enthalpy is equal to the energy absorbed in the reaction, including the energy lost by the system and gained from compression from its surroundings. A relatively easy way to determine whether a reaction is exothermic or endothermic is to determine the sign of ΔH. If ΔH is positive, the reaction is endothermic: heat is absorbed by the system because the products of the reaction have a greater enthalpy than the reactants. On the other hand, if ΔH is negative, the reaction is exothermic: the overall decrease in enthalpy is achieved by the generation of heat.[16]
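The sign convention above is easy to exercise with Hess's-law bookkeeping, ΔH_rxn = Σν ΔHf(products) − Σν ΔHf(reactants). The formation enthalpies below are approximate textbook values at 25 °C, used only for illustration:

```python
# Sketch of the enthalpy-change bookkeeping above via Hess's law:
# dH_rxn = sum(nu * dHf, products) - sum(nu * dHf, reactants).
# Standard enthalpies of formation are approximate textbook values at 25 C,
# in kJ/mol; elements in their standard state (O2) are zero by convention.
DHF = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

def reaction_enthalpy(reactants, products):
    """Each argument: dict mapping species -> stoichiometric coefficient."""
    total = lambda side: sum(nu * DHF[sp] for sp, nu in side.items())
    return total(products) - total(reactants)

# CH4 + 2 O2 -> CO2 + 2 H2O(l)
dh = reaction_enthalpy({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(l)": 2})
print(dh)  # ~ -890 kJ/mol: negative, so combustion is exothermic
```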

Specific enthalpy

The specific enthalpy of a uniform system is defined as h = H/m where m is the mass of the system. The SI unit for specific enthalpy is joule per kilogram. It can be expressed in other specific quantities by h = u + pv, where u is the specific internal energy, p is the pressure, and v is specific volume, which is equal to 1/ρ, where ρ is the density.

Enthalpy changes

An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products, and the initial enthalpy of the system, i.e. the reactants. These processes are reversible[why?] and the enthalpy for the reverse process is the negative value of the forward change.

A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.

When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of 'process'. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:
  • A temperature of 25 °C or 298 K,
  • A pressure of one atmosphere (1 atm or 101.325 kPa),
  • A concentration of 1.0 M when the element or compound is present in solution,
  • Elements or compounds in their normal physical states, i.e. standard state.
For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.

Chemical properties:
  • Enthalpy of reaction, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
  • Enthalpy of formation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
  • Enthalpy of combustion, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
  • Enthalpy of hydrogenation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
  • Enthalpy of atomization, defined as the enthalpy change required to atomize one mole of compound completely.
  • Enthalpy of neutralization, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
  • Standard enthalpy of solution, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
  • Standard enthalpy of denaturation (biochemistry), defined as the enthalpy change required to denature one mole of compound.
  • Enthalpy of hydration, defined as the enthalpy change observed when one mole of gaseous ions is completely dissolved in water, forming one mole of aqueous ions.
Physical properties:
  • Enthalpy of fusion, defined as the enthalpy change required to completely change the state of one mole of substance between solid and liquid states.
  • Enthalpy of vaporization, defined as the enthalpy change required to completely change the state of one mole of substance between liquid and gaseous states.
  • Enthalpy of sublimation, defined as the enthalpy change required to completely change the state of one mole of substance between solid and gaseous states.
  • Lattice enthalpy, defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction).
  • Enthalpy of mixing, defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.

Open systems

In thermodynamic open systems, matter may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: The increase in the internal energy of a system is equal to the amount of energy added to the system by matter flowing in and by heating, minus the amount lost by matter flowing out and in the form of work done by the system:
dU = \delta Q + dU_{\text{in}} - dU_{\text{out}} - \delta W,
where Uin is the average internal energy entering the system, and Uout is the average internal energy leaving the system.
During steady, continuous operation, an energy balance applied to an open system equates shaft work performed by the system to heat added plus net enthalpy added.

The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of matter into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of matter out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called pV work), and shaft work, which may be performed on some mechanical device.

These two types of work are expressed in the equation
\delta W = d(p_{\text{out}} V_{\text{out}}) - d(p_{\text{in}} V_{\text{in}}) + \delta W_{\text{shaft}}.
Substitution into the equation above for the control volume (cv) yields:
dU_{\text{cv}} = \delta Q + dU_{\text{in}} + d(p_{\text{in}} V_{\text{in}}) - dU_{\text{out}} - d(p_{\text{out}} V_{\text{out}}) - \delta W_{\text{shaft}}.
The definition of enthalpy, H, permits us to use this thermodynamic potential to account for both internal energy and pV work in fluids for open systems:
dU_{\text{cv}} = \delta Q + dH_{\text{in}} - dH_{\text{out}} - \delta W_{\text{shaft}}.
If we allow also the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.[17] In terms of time derivatives it reads:
\frac{dU}{dt} = \sum_k \dot{Q}_k + \sum_k \dot{H}_k - \sum_k p_k \frac{dV_k}{dt} - P,
with sums over the various places k where heat is supplied, matter flows into the system, and boundaries are moving. The Ḣk terms represent enthalpy flows, which can be written as
\dot{H}_k = h_k \dot{m}_k = H_m \dot{n}_k,
with ṁk the mass flow and ṅk the molar flow at position k, respectively. The term dVk/dt represents the rate of change of the system volume at position k that results in pV power done by the system. The parameter P represents all other forms of power done by the system, such as shaft power, but it can also be, e.g., electric power produced by an electrical power plant.

Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet.[clarification needed] Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average dU/dt may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:
P = \sum_k \left\langle \dot{Q}_k \right\rangle + \sum_k \left\langle \dot{H}_k \right\rangle - \sum_k \left\langle p_k \frac{dV_k}{dt} \right\rangle,
where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.
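As a minimal application of the time-averaged steady-state balance, consider an adiabatic device with one inlet and one outlet and no moving boundary, for which P = Q̇ + ṁ(h_in − h_out). The enthalpy values below are hypothetical steam-like numbers, not taken from the nitrogen diagram:

```python
# Sketch of the time-averaged steady-state balance P = sum(Qdot) + sum(Hdot)
# for a device with one inlet, one outlet, and no moving boundary:
# P = Qdot + mdot*(h_in - h_out). The enthalpy values below are hypothetical.

def shaft_power(mdot: float, h_in: float, h_out: float, qdot: float = 0.0) -> float:
    """Average shaft power (W) delivered by a steady-flow device."""
    return qdot + mdot * (h_in - h_out)

# Adiabatic turbine: 3 kg/s expanding from h = 3230 kJ/kg to h = 2675 kJ/kg.
print(shaft_power(3.0, 3230e3, 2675e3))  # 1.665e6 W, i.e. ~1.7 MW of shaft power
```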

Diagrams

Ts diagram of nitrogen.[18] The red curve at the left is the melting curve. The red dome represents the two-phase region with the low-entropy side the saturated liquid and the high-entropy side the saturated gas. The black curves give the Ts relation along isobars. The pressures are indicated in bar. The blue curves are isenthalps (curves of constant enthalpy). The values are indicated in blue in kJ/kg. The specific points a, b, etc., are treated in the main text.

Nowadays the enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as hT diagrams, which give the specific enthalpy as function of temperature for various pressures, and hp diagrams, which give h as function of p for various T. One of the most common diagrams is the temperature–specific entropy diagram (Ts-diagram). It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.

Some basic applications

The points a through h in the figure play a role in the discussion in this section.
a: T = 300 K, p = 1 bar, s = 6.85 kJ/(kg K), h = 461 kJ/kg;
b: T = 380 K, p = 2 bar, s = 6.85 kJ/(kg K), h = 530 kJ/kg;
c: T = 300 K, p = 200 bar, s = 5.16 kJ/(kg K), h = 430 kJ/kg;
d: T = 270 K, p = 1 bar, s = 6.79 kJ/(kg K), h = 430 kJ/kg;
e: T = 108 K, p = 13 bar, s = 3.55 kJ/(kg K), h = 100 kJ/kg (saturated liquid at 13 bar);
f: T = 77.2 K, p = 1 bar, s = 3.75 kJ/(kg K), h = 100 kJ/kg;
g: T = 77.2 K, p = 1 bar, s = 2.83 kJ/(kg K), h = 28 kJ/kg (saturated liquid at 1 bar);
h: T = 77.2 K, p = 1 bar, s = 5.41 kJ/(kg K), h = 230 kJ/kg (saturated gas at 1 bar);

Throttling

Schematic diagram of a throttling in the steady state. Fluid enters the system (dotted rectangle) at point 1 and leaves it at point 2. The mass flow is ṁ.

One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule-Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.

In the first law for open systems (see above) applied to the system, all terms are zero, except the terms for the enthalpy flow. Hence
0 = \dot{m} h_1 - \dot{m} h_2.
Since the mass flow is constant, the specific enthalpies at the two sides of the flow resistance are the same:
h_1 = h_2,
that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the Ts diagram above. Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in the diagram) lying between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value!

Point e is chosen so that it is on the saturated liquid line with h = 100 kJ/kg. It corresponds roughly with p = 13 bar and T = 108 K. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f (hf) is equal to the enthalpy in g (hg) multiplied by the liquid fraction in f (xf) plus the enthalpy in h (hh) multiplied by the gas fraction in f (1 − xf). So
h_{\mathrm{f}} = x_{\mathrm{f}} h_{\mathrm{g}} + (1 - x_{\mathrm{f}}) h_{\mathrm{h}}.
With numbers: 100 = xf × 28 + (1 − xf) × 230, so xf = 0.64. This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%.
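The liquid-fraction arithmetic above is just the enthalpy lever rule inverted: x_f = (h_h − h_f)/(h_h − h_g). A minimal sketch using the saturated-nitrogen points g and h read from the Ts diagram:

```python
# The liquid fraction above follows from inverting the enthalpy lever rule:
# h_f = x*h_liquid + (1 - x)*h_gas  =>  x = (h_gas - h_f) / (h_gas - h_liquid).
# Values (kJ/kg) are the saturated-nitrogen points g and h from the Ts diagram.

def liquid_fraction(h_throttled: float, h_sat_liquid: float, h_sat_gas: float) -> float:
    """Mass fraction of liquid in the two-phase mixture after throttling."""
    return (h_sat_gas - h_throttled) / (h_sat_gas - h_sat_liquid)

x = liquid_fraction(h_throttled=100.0, h_sat_liquid=28.0, h_sat_gas=230.0)
print(round(x, 2))  # 0.64: 64% of the leaving mass flow is liquid
```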

Compressors

Schematic diagram of a compressor in the steady state. Fluid enters the system (dotted rectangle) at point 1 and leaves it at point 2. The mass flow is ṁ. A power P is applied and a heat flow Q̇ is released to the surroundings at ambient temperature Ta.

A power P is applied, e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the Ts diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature Ta, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is Q̇. Since the system is in the steady state, the first law gives
0 = -\dot{Q} + \dot{m} h_1 - \dot{m} h_2 + P.
The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives
0 = -\frac{\dot{Q}}{T_{\mathrm{a}}} + \dot{m} s_1 - \dot{m} s_2.
Eliminating Q̇ gives, for the minimal power,
\frac{P_{\text{min}}}{\dot{m}} = h_2 - h_1 - T_{\mathrm{a}} (s_2 - s_1).
For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least (hc − ha) − Ta(sc − sa). With the data obtained from the Ts diagram, we find a value of (430 − 461) − 300 × (5.16 − 6.85) = 476 kJ/kg.
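The same minimal-work evaluation as a short sketch, using the diagram values for points a and c:

```python
# The minimal compression work above, P_min/mdot = h2 - h1 - Ta*(s2 - s1),
# evaluated with the Ts-diagram values for points a (1 bar) and c (200 bar).

def min_specific_work(h1: float, h2: float, s1: float, s2: float,
                      t_ambient: float) -> float:
    """Minimum (reversible, isothermally cooled) compressor work, kJ/kg."""
    return (h2 - h1) - t_ambient * (s2 - s1)

# Point a: h = 461 kJ/kg, s = 6.85 kJ/(kg K); point c: h = 430, s = 5.16.
print(round(min_specific_work(461.0, 430.0, 6.85, 5.16, 300.0), 1))  # 476.0 kJ/kg
```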

The relation for the power can be further simplified by writing it as
\frac{P_{\text{min}}}{\dot{m}} = \int_1^2 (dh - T_{\mathrm{a}}\,ds).
With dh = T ds + v dp, this results in the final relation
\frac{P_{\text{min}}}{\dot{m}} = \int_1^2 v\,dp.
