
Monday, December 23, 2024

Thermodynamic diagrams

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Thermodynamic_diagrams

Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material (typically a fluid) and the consequences of manipulating this material. For instance, a temperature–entropy diagram (T–s diagram) may be used to demonstrate the behavior of a fluid as it is changed by a compressor.

Overview

Especially in meteorology they are used to analyze the actual state of the atmosphere derived from the measurements of radiosondes, usually obtained with weather balloons. In such diagrams, temperature and humidity values (represented by the dew point) are displayed with respect to pressure. Thus the diagram gives at a first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the actual base and top height of convective clouds or possible instabilities in the stratification.

By assuming the energy amount due to solar radiation it is possible to predict the 2 m (6.6 ft) temperature, humidity, and wind during the day, the development of the boundary layer of the atmosphere, the occurrence and development of clouds and the conditions for soaring flight during the day.

The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. When air changes pressure and temperature during a process and traces out a closed curve within the diagram, the area enclosed by this curve is proportional to the energy that has been gained or released by the air.
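
To make the area–energy equivalence concrete, here is a minimal numerical sketch (the rectangular cycle and its values are invented for illustration): it evaluates the closed-loop integral of P dV around a cycle in the P–V plane, and the enclosed area equals the net work exchanged per cycle.

```python
# Minimal sketch: the area enclosed by a closed P-V cycle equals the net work per cycle.
# The rectangular cycle below is an arbitrary toy example (pressures in Pa, volumes in m^3).

def net_work(cycle):
    """Net work = closed-loop integral of P dV, evaluated with the trapezoid rule."""
    work = 0.0
    for (p1, v1), (p2, v2) in zip(cycle, cycle[1:] + cycle[:1]):
        work += 0.5 * (p1 + p2) * (v2 - v1)  # trapezoid segment of the P dV integral
    return work

# Rectangular cycle traversed clockwise in the P-V plane: expand at high P, compress at low P.
cycle = [(2.0e5, 0.001), (2.0e5, 0.003), (1.0e5, 0.003), (1.0e5, 0.001)]

print(f"net work per cycle = {net_work(cycle):.1f} J")  # (2e5 - 1e5) * (0.003 - 0.001) = 200 J
```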

Types of thermodynamic diagrams

General purpose diagrams include:

  • the pressure–volume (P–V) diagram
  • the temperature–entropy (T–s) diagram
  • the enthalpy–entropy (h–s or Mollier) diagram
  • the pressure–enthalpy (P–h) diagram

Specific to weather services, there are mainly three different types of thermodynamic diagrams used:

  • the emagram
  • the tephigram
  • the skew-T log-P diagram

All three diagrams are derived from the physical P–alpha diagram which combines pressure (P) and specific volume (alpha) as its basic coordinates. The P–alpha diagram shows a strong deformation of the grid for atmospheric conditions and is therefore not useful in atmospheric sciences. The three diagrams are constructed from the P–alpha diagram by using appropriate coordinate transformations.

Not a thermodynamic diagram in a strict sense, since it does not display the energy–area equivalence, is the Stüve diagram. Because of its simpler construction, however, it is preferred in education.

Another widely used diagram that does not display the energy–area equivalence is the θ-z diagram (theta-height diagram), extensively used in boundary layer meteorology.

Characteristics

Thermodynamic diagrams usually show a net of five different lines:

  • isobars = lines of constant pressure
  • isotherms = lines of constant temperature
  • dry adiabats = lines of constant potential temperature representing the temperature of a rising parcel of dry air
  • saturated adiabats or pseudoadiabats = lines representing the temperature of a rising parcel saturated with water vapor
  • mixing ratio = lines representing the dewpoint of a rising parcel

The lapse rates, the dry adiabatic lapse rate (DALR) and the moist adiabatic lapse rate (MALR), are obtained from the dry and saturated adiabats. With the help of these lines, parameters such as the cloud condensation level, the level of free convection, the onset of cloud formation, etc. can be derived from the soundings.
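
As a rough illustration of how such parameters follow from a sounding, the sketch below lifts a surface parcel at the dry adiabatic lapse rate and estimates the lifting condensation level with the common approximation of about 125 m per degree of dewpoint depression. The surface values, and the simple linear dewpoint lapse, are assumptions made for the example, not values from the article.

```python
# Minimal sketch: estimate the lifting condensation level (LCL) of a surface parcel.
# Assumes the dry adiabatic lapse rate (~9.8 K/km) and a dewpoint lapse of a rising
# parcel of ~1.8 K/km, which together give the familiar ~125 m per degree of
# dewpoint depression. Surface values are invented for the example.

DALR = 9.8e-3            # dry adiabatic lapse rate, K per m
DEWPOINT_LAPSE = 1.8e-3  # approximate dewpoint lapse of a rising parcel, K per m

def lcl_height(t_surface, td_surface):
    """Height (m) at which the dry-adiabatically lifted parcel reaches saturation."""
    return (t_surface - td_surface) / (DALR - DEWPOINT_LAPSE)

def parcel_temperature(t_surface, height_m):
    """Parcel temperature (deg C) after dry-adiabatic ascent to height_m."""
    return t_surface - DALR * height_m

t0, td0 = 25.0, 17.0  # surface temperature and dewpoint, deg C
z_lcl = lcl_height(t0, td0)
print(f"estimated LCL (cloud base): {z_lcl:.0f} m")
print(f"parcel temperature at LCL:  {parcel_temperature(t0, z_lcl):.1f} deg C")
```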

Example

The path or series of states through which a system passes from an initial equilibrium state to a final equilibrium state can be viewed graphically on pressure-volume (P-V), pressure-temperature (P-T), and temperature-entropy (T-s) diagrams.

There are an infinite number of possible paths from an initial point to an end point in a process. In many cases the path matters; however, changes in the thermodynamic properties depend only on the initial and final states and not upon the path.

Figure 1

Consider a gas in a cylinder with a free-floating piston resting on top of a volume of gas V1 at a temperature T1. If the gas is heated so that its temperature rises to T2 while the piston is allowed to rise to volume V2, as in Figure 1, then the pressure is kept the same throughout, because the free-floating piston is allowed to rise, making this an isobaric, or constant-pressure, process. This process path is a straight horizontal line from state one to state two on a P-V diagram.

Figure 2

It is often valuable to calculate the work done in a process. The work done in a process is the area beneath the process path on a P-V diagram (see Figure 2). If the process is isobaric, then the work done on the piston is easily calculated. For example, if the gas expands slowly against the piston, the work done by the gas to raise the piston is the force F times the distance d. But the force is just the pressure P of the gas times the area A of the piston, F = PA. Thus

  • W = Fd
  • W = PAd
  • W = P(V2 − V1)
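
A minimal numerical illustration of this isobaric work formula, with invented values: the volume change follows from the ideal gas law at constant pressure, and the work is W = P(V2 − V1).

```python
# Minimal sketch: work done by an ideal gas expanding isobarically from V1 to V2.
# Numbers are arbitrary example values, not taken from the article.

R = 8.314              # molar gas constant, J/(mol K)
n = 1.0                # amount of gas, mol
P = 1.0e5              # constant pressure, Pa
T1, T2 = 300.0, 400.0  # initial and final temperatures, K

V1 = n * R * T1 / P    # ideal gas law: V = nRT/P
V2 = n * R * T2 / P    # at constant P, V scales with T

W = P * (V2 - V1)      # isobaric work: area under the horizontal process path
print(f"V1 = {V1 * 1e3:.2f} L, V2 = {V2 * 1e3:.2f} L")
print(f"W  = {W:.1f} J  (equivalently n*R*(T2 - T1) = {n * R * (T2 - T1):.1f} J)")
```
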
Figure 3

Now let’s say that the piston was not able to move smoothly within the cylinder due to static friction with the walls of the cylinder. Assuming that the temperature is increased slowly, the process path is no longer a straight isobaric line: the gas first undergoes an isometric process until the force on the piston exceeds the frictional force, and then undergoes an isothermal process back to an equilibrium state. This sequence is repeated until the end state is reached (see Figure 3). The work done on the piston in this case would be different because of the additional work required to overcome the friction. The work done against friction is the difference between the work done along these two process paths.

Many engineers neglect friction at first in order to generate a simplified model. For a more accurate description, the height of the highest point, that is, the maximum pressure needed to surpass the static friction, would be proportional to the friction coefficient, and the slope going back down to the normal pressure would be the same as that of an isothermal process, provided the temperature is increased at a slow enough rate.

Another path in this process is an isometric process. This is a process where volume is held constant, which appears as a vertical line on a P-V diagram (see Figure 3). Since the piston is not moving during this process, no work is done.

State function

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/State_function

In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, thus also describing the type of system. A state variable is typically a state function so the determination of other state variable values at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state.
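
As a small illustration of the ideal gas law acting as a state function (the amount of gas and the state values below are assumptions for the example), fixing the amount of gas and any two of P, V, and T determines the third, no matter how the state was reached.

```python
# Minimal sketch: for an ideal gas, each state variable is a function of the others.
# PV = nRT, so fixing n and any two of (P, V, T) fixes the third.

R = 8.314  # molar gas constant, J/(mol K)

def pressure(n, V, T):
    return n * R * T / V

def volume(n, P, T):
    return n * R * T / P

def temperature(n, P, V):
    return P * V / (n * R)

n = 2.0  # mol (example value)
P = pressure(n, V=0.05, T=350.0)
print(f"P = {P / 1e3:.1f} kPa")
# Round trip: recovering T from (P, V) gives back the same state, whatever path was taken.
print(f"T recovered = {temperature(n, P, 0.05):.1f} K")
```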

Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. In contrast, mechanical work and heat are process quantities or path functions because their values depend on a specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state. Exchanged heat (in certain discrete amounts) can be associated with changes of state function such as enthalpy. The description of the system heat exchange is done by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis.

History

It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body."

Overview

A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system (D). For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system (D = 2). Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. An analogous statement holds for higher-dimensional spaces, as described by the state postulate.

Generally, a state space is defined by an equation of the form f(P, V, T, ...) = 0, where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state.

When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t0 to t1 will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t0 to time t1, calculate

W = ∫ from t0 to t1 of P dV = ∫ from t0 to t1 of P(t) (dV/dt) dt.

In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path:

W + ∫ from t0 to t1 of V dP = ∫ from t0 to t1 of (P dV/dt + V dP/dt) dt = ∫ from t0 to t1 of d(PV)/dt dt = P(t1)V(t1) − P(t0)V(t0).

In the equation, the integrand P dV/dt + V dP/dt can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of P(t)V(t) at the end points of the integration. The product PV is therefore a state function of the system.
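
The following sketch, with toy numbers assumed for the example, makes the point numerically: two different paths between the same end states give different values of the work ∫P dV, while the change in the state function PV is the same on both.

```python
# Minimal sketch: work is a path function, PV is a state function.
# Path A: isobaric expansion at P1, then isochoric pressure drop to P2.
# Path B: isochoric pressure drop to P2, then isobaric expansion at P2.
# Both paths run from state (P1, V1) to state (P2, V2); values are examples.

P1, V1 = 2.0e5, 0.001  # initial state (Pa, m^3)
P2, V2 = 1.0e5, 0.003  # final state

W_path_A = P1 * (V2 - V1)     # work is done only on the isobaric leg at P1
W_path_B = P2 * (V2 - V1)     # work is done only on the isobaric leg at P2

delta_PV = P2 * V2 - P1 * V1  # change of the state function PV, path-independent

print(f"W along path A = {W_path_A:.0f} J")
print(f"W along path B = {W_path_B:.0f} J")
print(f"Delta(PV)      = {delta_PV:.0f} J on either path")
```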

The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t1) − Φ(t0). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = PdV will be used to denote an infinitesimal increment of work.

State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location.

List of state functions

The following are considered to be state functions in thermodynamics:

  • mass
  • internal energy
  • enthalpy
  • entropy
  • Gibbs free energy
  • Helmholtz free energy
  • pressure
  • temperature
  • volume
  • chemical composition (amount of substance)
  • specific volume

Algorithmic information theory

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."

Besides the formalization of a universal measure for irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant) that entropy does, as in classical information theory; randomness is incompressibility; and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of the shortest program that generates it when running on a universal machine.

AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the very study of the information carried by mathematical objects as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single and fixed objects, formalizing the concept of randomness, and finding a meaningful probabilistic inference without prior knowledge of the probability distribution (e.g., whether it is independent and identically distributed, Markovian, or even stationary). In this way, AIT is known to be basically founded upon three main mathematical concepts and the relations between them: algorithmic complexity, algorithmic randomness, and algorithmic probability.

Overview

Algorithmic information theory principally studies complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers.

Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program—in some fixed but otherwise irrelevant universal programming language—that, when run, outputs the original string.

From this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present.

Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood. (The set of random strings depends on the choice of the universal Turing machine used to define Kolmogorov complexity, but any choice gives identical asymptotic results because the Kolmogorov complexity of a string is invariant up to an additive constant depending only on the choice of universal Turing machine. For this reason the set of random infinite sequences is independent of the choice of universal machine.)

Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number that expresses the probability that a self-delimiting universal Turing machine will halt when its input is supplied by flips of a fair coin (sometimes thought of as the probability that a random computer program will eventually halt). Although Ω is easily defined, in any consistent axiomatizable theory one can only compute finitely many digits of Ω, so it is in some sense unknowable, providing an absolute limit on knowledge that is reminiscent of Gödel's incompleteness theorems. Although the digits of Ω cannot be determined, many properties of Ω are known; for example, it is an algorithmically random sequence and thus its binary digits are evenly distributed (in fact it is normal).

History

Algorithmic information theory was founded by Ray Solomonoff, who published the basic ideas on which the field is based as part of his invention of algorithmic probability—a way to overcome serious problems associated with the application of Bayes' rules in statistics. He first described his results at a Conference at Caltech in 1960, and in a report, February 1960, "A Preliminary Report on a General Theory of Inductive Inference." Algorithmic information theory was later developed independently by Andrey Kolmogorov, in 1965 and Gregory Chaitin, around 1966.

There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences. An axiomatic approach to algorithmic information theory based on the Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses other approaches in the algorithmic information theory. It is possible to treat different measures of algorithmic information as particular cases of axiomatically defined measures of algorithmic information. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to algorithmic information theory was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).

Precise definitions

A binary string is said to be random if the Kolmogorov complexity of the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.

An infinite binary sequence is said to be random if, for some constant c, for all n, the Kolmogorov complexity of the initial segment of length n of the sequence is at least n − c. It can be shown that almost every sequence (from the point of view of the standard measure—"fair coin" or Lebesgue measure—on the space of infinite binary sequences) is random. Also, since it can be shown that the Kolmogorov complexity relative to two different universal machines differs by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to finite strings). This definition of randomness is usually called Martin-Löf randomness, after Per Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called 1-randomness to distinguish it from other stronger notions of randomness (2-randomness, 3-randomness, etc.). In addition to Martin-Löf randomness concepts, there are also recursive randomness, Schnorr randomness, and Kurtz randomness etc. Yongge Wang showed that all of these randomness concepts are different.

(Related definitions can be made for alphabets other than the set {0, 1}.)

Specific sequence

Algorithmic information theory (AIT) is the information theory of individual objects, using computer science, and concerns itself with the relationship between computation, information, and randomness.

The information content or complexity of an object can be measured by the length of its shortest description. For instance the string

"0101010101010101010101010101010101010101010101010101010101010101"

has the short description "32 repetitions of '01'", while

"1100100001100001110111101110110011111010010000100101011110010110"

presumably has no simple description other than writing down the string itself.
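
The idea can be made tangible with an ordinary compressor standing in for "shortest description length". This is only a crude proxy, assumed here for illustration: zlib output includes format overhead and is not Kolmogorov complexity, but the regular string above still compresses far better than the irregular one.

```python
# Minimal sketch: compressed length as a rough upper-bound proxy for description length.
# zlib is NOT Kolmogorov complexity, but it still separates the two strings clearly.
import zlib

regular = "01" * 32  # "32 repetitions of '01'"
irregular = "1100100001100001110111101110110011111010010000100101011110010110"

for label, s in [("regular", regular), ("irregular", irregular)]:
    compressed = zlib.compress(s.encode("ascii"), level=9)
    print(f"{label:9s}: raw {len(s)} chars -> compressed {len(compressed)} bytes")
```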

More formally, the algorithmic complexity (AC) of a string x is defined as the length of the shortest program that computes or outputs x, where the program is run on some fixed reference universal computer.

A closely related notion is the probability that a universal computer outputs some string x when fed with a program chosen at random. This algorithmic "Solomonoff" probability (AP) is key in addressing the old philosophical problem of induction in a formal way.

The major drawback of AC and AP is their incomputability. Time-bounded "Levin" complexity penalizes a slow program by adding the logarithm of its running time to its length. This leads to computable variants of AC and AP, and universal "Levin" search (US) solves all inversion problems in optimal time (apart from some unrealistically large multiplicative constant).

AC and AP also allow a formal and rigorous definition of randomness of individual strings that does not depend on physical or philosophical intuitions about non-determinism or likelihood. Roughly, a string is algorithmically "Martin-Löf" random (AR) if it is incompressible in the sense that its algorithmic complexity is equal to its length.

AC, AP, and AR are the core sub-disciplines of AIT, but AIT spawns into many other areas. It serves as the foundation of the Minimum Description Length (MDL) principle, can simplify proofs in computational complexity theory, has been used to define a universal similarity metric between objects, solves the Maxwell daemon problem, and many others.

Entropy (classical thermodynamics)

In classical thermodynamics, entropy (from Greek τρoπή (tropḗ) 'transformation') is a property of a thermodynamic system that expresses the direction or outcome of spontaneous changes in the system. The term was introduced by Rudolf Clausius in the mid-19th century to explain the relationship of the internal energy that is available or unavailable for transformations in form of heat and work. Entropy predicts that certain processes are irreversible or impossible, despite not violating the conservation of energy. The definition of entropy is central to the establishment of the second law of thermodynamics, which states that the entropy of isolated systems cannot decrease with time, as they always tend to arrive at a state of thermodynamic equilibrium, where the entropy is highest. Entropy is therefore also considered to be a measure of disorder in the system.

Ludwig Boltzmann explained the entropy as a measure of the number of possible microscopic configurations Ω of the individual atoms and molecules of the system (microstates) which correspond to the macroscopic state (macrostate) of the system. He showed that the thermodynamic entropy is k ln Ω, where the factor k has since been known as the Boltzmann constant.
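
A small worked example of S = k ln Ω, with a toy system assumed for illustration: for N independent two-state particles with n of them excited, the number of microstates of the macrostate is the binomial coefficient C(N, n), and the entropy follows directly.

```python
# Minimal sketch: Boltzmann entropy S = k * ln(Omega) for a toy two-state system.
# Omega = number of ways to place n excited particles among N sites = C(N, n).
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(N, n):
    """Entropy of the macrostate with n of N two-state particles excited."""
    omega = math.comb(N, n)  # number of microstates compatible with this macrostate
    return k_B * math.log(omega)

N = 100
for n in (0, 10, 50):
    print(f"n = {n:3d}: Omega = {math.comb(N, n):.3e} microstates, "
          f"S = {boltzmann_entropy(N, n):.3e} J/K")
```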

Concept

Figure 1. A thermodynamic model system

Differences in pressure, density, and temperature of a thermodynamic system tend to equalize over time. For example, in a room containing a glass of melting ice, the difference in temperature between the warm room and the cold glass of ice and water is equalized by energy flowing as heat from the room to the cooler ice and water mixture. Over time, the temperature of the glass and its contents and the temperature of the room achieve a balance. The entropy of the room has decreased. However, the entropy of the glass of ice and water has increased more than the entropy of the room has decreased. In an isolated system, such as the room and ice water taken together, the dispersal of energy from warmer to cooler regions always results in a net increase in entropy. Thus, when the room and ice water have reached thermal equilibrium, the entropy change from the initial state is at its maximum. The entropy of the thermodynamic system is a measure of the progress of the equalization.
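
A back-of-the-envelope version of this bookkeeping, with the temperatures and the amount of heat chosen arbitrarily for the example: heat Q leaving the room at temperature T_room lowers its entropy by Q/T_room, while the same Q entering the ice water at the lower temperature T_ice raises its entropy by the larger amount Q/T_ice.

```python
# Minimal sketch: net entropy change when heat flows from a warm room to melting ice.
# Both the room and the ice water are treated as large reservoirs at fixed temperature;
# the numbers are example values, not taken from the article.

T_room = 293.15  # K (about 20 deg C)
T_ice = 273.15   # K (melting ice)
Q = 1000.0       # J of heat transferred from the room to the ice water

dS_room = -Q / T_room        # the room loses entropy
dS_ice = Q / T_ice           # the ice water gains more entropy
dS_total = dS_room + dS_ice  # must be >= 0 for the isolated room-plus-ice system

print(f"dS_room  = {dS_room:+.3f} J/K")
print(f"dS_ice   = {dS_ice:+.3f} J/K")
print(f"dS_total = {dS_total:+.3f} J/K (positive, as the second law requires)")
```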

Many irreversible processes result in an increase of entropy. One of them is mixing of two or more different substances, occasioned by bringing them together by removing a wall that separates them, keeping the temperature and pressure constant. The mixing is accompanied by the entropy of mixing. In the important case of mixing of ideal gases, the combined system does not change its internal energy by work or heat transfer; the entropy increase is then entirely due to the spreading of the different substances into their new common volume.
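
For ideal gases the increase can be written down explicitly as the entropy of mixing, ΔS = −n R Σ x_i ln x_i, where x_i are the mole fractions. The sketch below evaluates it for an equimolar two-gas example; the amounts are assumed for illustration.

```python
# Minimal sketch: entropy of mixing of ideal gases at constant temperature and pressure.
# delta_S = -n_total * R * sum(x_i * ln(x_i)); the amounts below are example values.
import math

R = 8.314  # molar gas constant, J/(mol K)

def entropy_of_mixing(moles):
    """Entropy increase when the listed ideal-gas amounts spread into a common volume."""
    n_total = sum(moles)
    return -n_total * R * sum((n / n_total) * math.log(n / n_total) for n in moles)

# Example: 1 mol of gas A and 1 mol of gas B, initially separated by a partition.
dS = entropy_of_mixing([1.0, 1.0])
print(f"entropy of mixing = {dS:.2f} J/K")  # 2 * R * ln(2), about 11.5 J/K
```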

From a macroscopic perspective, in classical thermodynamics, the entropy is a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. Entropy is a key ingredient of the Second law of thermodynamics, which has important consequences e.g. for the performance of heat engines, refrigerators, and heat pumps.

Definition

According to the Clausius equality, for a closed homogeneous system in which only reversible processes take place,

∮ δQ/T = 0,

with T being the uniform temperature of the closed system and δQ the incremental reversible transfer of heat energy into that system.

That means the line integral ∫ δQ/T is path-independent.

A state function S, called entropy, may be defined which satisfies

dS = δQ/T.

Entropy measurement

The thermodynamic state of a uniform closed system is determined by its temperature T and pressure P. A change in entropy can be written as

dS = (∂S/∂T)_P dT + (∂S/∂P)_T dP.

The first contribution depends on the heat capacity at constant pressure CP through

(∂S/∂T)_P = CP/T.

This is the result of the definition of the heat capacity by δQ = CP dT and T dS = δQ. The second term may be rewritten with one of the Maxwell relations

(∂S/∂P)_T = −(∂V/∂T)_P

and the definition of the volumetric thermal-expansion coefficient

αV = (1/V)(∂V/∂T)_P,

so that

dS = (CP/T) dT − αV V dP.

With this expression the entropy S at arbitrary P and T can be related to the entropy S0 at some reference state at P0 and T0 according to

S(P, T) = S0(P0, T0) + ∫ (CP/T) dT (from T0 to T, at constant pressure P0) − ∫ αV V dP (from P0 to P, at constant temperature T).

In classical thermodynamics, the entropy of the reference state can be put equal to zero at any convenient temperature and pressure. For example, for pure substances, one can take the entropy of the solid at the melting point at 1 bar equal to zero. From a more fundamental point of view, the third law of thermodynamics suggests that there is a preference to take S = 0 at T = 0 (absolute zero) for perfectly ordered materials such as crystals.

S(P, T) is determined by following a specific path in the P-T diagram: in the first integral one integrates over T at constant pressure P0, so that dP = 0, and in the second integral one integrates over P at constant temperature T, so that dT = 0. As the entropy is a function of state, the result is independent of the path.

The above relation shows that the determination of the entropy requires knowledge of the heat capacity and the equation of state (which is the relation between P, V, and T of the substance involved). Normally these are complicated functions and numerical integration is needed. In simple cases it is possible to get analytical expressions for the entropy. In the case of an ideal gas, the heat capacity is constant and the ideal gas law PV = nRT gives that αV V = V/T = nR/P, with n the number of moles and R the molar ideal-gas constant. So, the molar entropy of an ideal gas is given by

S(P, T) = S(P0, T0) + CP ln(T/T0) − R ln(P/P0).

In this expression CP now is the molar heat capacity.
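
A quick check of this expression in code; the gas and the two states are assumed for the example, with CP = 7R/2, the textbook value for a diatomic ideal gas such as nitrogen near room temperature.

```python
# Minimal sketch: molar entropy change of an ideal gas between two (P, T) states,
# using S(P, T) - S(P0, T0) = CP * ln(T/T0) - R * ln(P/P0). State values are examples.
import math

R = 8.314     # molar gas constant, J/(mol K)
CP = 3.5 * R  # molar heat capacity of a diatomic ideal gas, treated as constant

def delta_molar_entropy(T0, P0, T, P):
    """Molar entropy difference between states (T0, P0) and (T, P)."""
    return CP * math.log(T / T0) - R * math.log(P / P0)

# Heat a nitrogen-like gas from 300 K at 1 bar to 600 K at 2 bar:
dS = delta_molar_entropy(T0=300.0, P0=1.0e5, T=600.0, P=2.0e5)
print(f"delta S = {dS:.2f} J/(mol K)")  # 3.5*R*ln(2) - R*ln(2) = 2.5*R*ln(2), about 14.4
```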

The entropy of inhomogeneous systems is the sum of the entropies of the various subsystems. The laws of thermodynamics hold rigorously for inhomogeneous systems even though they may be far from internal equilibrium. The only condition is that the thermodynamic parameters of the composing subsystems are (reasonably) well-defined.

Temperature-entropy diagrams

Fig.2 Temperature–entropy diagram of nitrogen. The red curve at the left is the melting curve. The red dome represents the two-phase region with the low-entropy side the saturated liquid and the high-entropy side the saturated gas. The black curves give the TS relation along isobars. The pressures are indicated in bar. The blue curves are isenthalps (curves of constant enthalpy). The values are indicated in blue in kJ/kg.

Entropy values of important substances may be obtained from reference works or with commercial software in tabular form or as diagrams. One of the most common diagrams is the temperature-entropy diagram (TS-diagram). For example, Fig.2 shows the TS-diagram of nitrogen, depicting the melting curve and saturated liquid and vapor values with isobars and isenthalps.

Entropy change in irreversible transformations

We now consider inhomogeneous systems in which internal transformations (processes) can take place. If we calculate the entropy S1 before and S2 after such an internal process, the Second Law of Thermodynamics demands that S2 ≥ S1, where the equality sign holds if the process is reversible. The difference Si = S2 − S1 is the entropy production due to the irreversible process. The Second law demands that the entropy of an isolated system cannot decrease.

Suppose a system is thermally and mechanically isolated from the environment (isolated system). For example, consider an insulating rigid box divided by a movable partition into two volumes, each filled with gas. If the pressure of one gas is higher, it will expand by moving the partition, thus performing work on the other gas. Also, if the gases are at different temperatures, heat can flow from one gas to the other provided the partition allows heat conduction. Our above result indicates that the entropy of the system as a whole will increase during these processes. There exists a maximum amount of entropy the system may possess under the circumstances. This entropy corresponds to a state of stable equilibrium, since a transformation to any other equilibrium state would cause the entropy to decrease, which is forbidden. Once the system reaches this maximum-entropy state, no part of the system can perform work on any other part. It is in this sense that entropy is a measure of the energy in a system that cannot be used to do work.

An irreversible process degrades the performance of a thermodynamic system, designed to do work or produce cooling, and results in entropy production. The entropy generation during a reversible process is zero. Thus entropy production is a measure of the irreversibility and may be used to compare engineering processes and machines.

Thermal machines

Figure 3: Heat engine diagram. The system, discussed in the text, is indicated by the dotted rectangle. It contains the two reservoirs and the heat engine. The arrows define the positive directions of the flows of heat and work.

Clausius' identification of S as a significant quantity was motivated by the study of reversible and irreversible thermodynamic transformations. A heat engine is a thermodynamic system that can undergo a sequence of transformations which ultimately return it to its original state. Such a sequence is called a cyclic process, or simply a cycle. During some transformations, the engine may exchange energy with its environment. The net result of a cycle is

  1. mechanical work done by the system (which can be positive or negative, the latter meaning that work is done on the engine),
  2. heat transferred from one part of the environment to another. In the steady state, by the conservation of energy, the net energy lost by the environment is equal to the work done by the engine.

If every transformation in the cycle is reversible, the cycle is reversible, and it can be run in reverse, so that the heat transfers occur in the opposite directions and the amount of work done switches sign.

Heat engines

Consider a heat engine working between two temperatures TH and Ta. With Ta we have ambient temperature in mind, but, in principle it may also be some other low temperature. The heat engine is in thermal contact with two heat reservoirs which are supposed to have a very large heat capacity so that their temperatures do not change significantly if heat QH is removed from the hot reservoir and Qa is added to the lower reservoir. Under normal operation TH > Ta and QH, Qa, and W are all positive.

As our thermodynamical system we take a big system which includes the engine and the two reservoirs. It is indicated in Fig.3 by the dotted rectangle. It is inhomogeneous, closed (no exchange of matter with its surroundings), and adiabatic (no exchange of heat with its surroundings). It is not isolated, since per cycle a certain amount of work W is produced by the system, given by the first law of thermodynamics

W = QH − Qa.

We used the fact that the engine itself is periodic, so its internal energy has not changed after one cycle. The same is true for its entropy, so the entropy increase S2 − S1 of our system after one cycle is given by the reduction of entropy of the hot source and the increase of the cold sink. The entropy increase of the total system S2 − S1 is equal to the entropy production Si due to irreversible processes in the engine, so

S2 − S1 = Si = Qa/Ta − QH/TH.

The Second law demands that Si ≥ 0. Eliminating Qa from the two relations gives

W = (1 − Ta/TH) QH − Ta Si.

The first term is the maximum possible work for a heat engine, given by a reversible engine, such as one operating along a Carnot cycle. Finally

W = Wmax − Ta Si, with Wmax = (1 − Ta/TH) QH.

This equation tells us that the production of work is reduced by the generation of entropy. The term Ta Si gives the lost work, or dissipated energy, by the machine.

Correspondingly, the amount of heat discarded to the cold sink is increased by the entropy generation:

Qa = (Ta/TH) QH + Ta Si.

These important relations can also be obtained without the inclusion of the heat reservoirs. See the article on entropy production.
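
These relations are easy to evaluate numerically. The sketch below, with assumed temperatures, heat input, and entropy production, computes the reversible-limit work, the actual work, and the lost work Ta Si.

```python
# Minimal sketch: work output of a heat engine with internal entropy production Si.
# W = (1 - Ta/TH) * QH - Ta * Si and Qa = QH - W. All numbers are example values.

TH = 600.0   # hot reservoir temperature, K
Ta = 300.0   # ambient (cold reservoir) temperature, K
QH = 1000.0  # heat taken from the hot reservoir per cycle, J
Si = 0.5     # entropy production per cycle, J/K (Si = 0 would be a reversible engine)

W_max = (1.0 - Ta / TH) * QH  # Carnot (reversible) limit
W = W_max - Ta * Si           # actual work, reduced by the lost work Ta * Si
Qa = QH - W                   # heat discarded to the cold sink

print(f"reversible work W_max = {W_max:.1f} J")
print(f"actual work W         = {W:.1f} J (lost work = {Ta * Si:.1f} J)")
print(f"heat to cold sink Qa  = {Qa:.1f} J")
```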

Refrigerators

The same principle can be applied to a refrigerator working between a low temperature TL and ambient temperature Ta. The schematic drawing is exactly the same as Fig.3 with TH replaced by TL, QH by QL, and the sign of W reversed. In this case the entropy production is

Si = Qa/Ta − QL/TL  (with Qa = QL + W the heat released to the ambient),

and the work needed to extract heat QL from the cold source is

W = (Ta/TL − 1) QL + Ta Si.

The first term is the minimum required work, which corresponds to a reversible refrigerator, so we have

W ≥ (Ta/TL − 1) QL,

i.e., the refrigerator compressor has to perform extra work to compensate for the dissipated energy due to irreversible processes which lead to entropy production.
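
The analogous numbers for a refrigerator, again with values assumed for the example, follow from W = (Ta/TL − 1) QL + Ta Si; the coefficient of performance QL/W shows how entropy production degrades the machine.

```python
# Minimal sketch: work input of a refrigerator with internal entropy production Si.
# W = (Ta/TL - 1) * QL + Ta * Si. All numbers are example values.

Ta = 300.0   # ambient temperature, K
TL = 250.0   # cold-space temperature, K
QL = 1000.0  # heat extracted from the cold space per cycle, J
Si = 0.2     # entropy production per cycle, J/K

W_min = (Ta / TL - 1.0) * QL  # reversible (minimum) work input
W = W_min + Ta * Si           # actual work input

print(f"minimum work W_min = {W_min:.1f} J (COP_max = {QL / W_min:.2f})")
print(f"actual work W      = {W:.1f} J (COP = {QL / W:.2f})")
```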
