
Tuesday, May 30, 2023

Specific heat capacity

From Wikipedia, the free encyclopedia

In thermodynamics, the specific heat capacity (symbol c) of a substance is the heat capacity of a sample of the substance divided by the mass of the sample, also sometimes referred to as massic heat capacity. Informally, it is the amount of heat that must be added to one unit of mass of the substance in order to cause an increase of one unit in temperature. The SI unit of specific heat capacity is joule per kelvin per kilogram, J⋅kg−1⋅K−1. For example, the heat required to raise the temperature of 1 kg of water by 1 K is 4184 joules, so the specific heat capacity of water is 4184 J⋅kg−1⋅K−1.
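As a quick arithmetic illustration of this definition, here is a minimal Python sketch (the function name and the sample values other than water's 4184 J⋅kg−1⋅K−1 are purely illustrative):

    # Heat needed to warm a sample: Q = c * m * dT
    # c: specific heat capacity (J/(kg*K)), m: mass (kg), dT: temperature rise (K)

    def heat_required(c, m, dT):
        """Return the heat (J) needed to raise mass m (kg) by dT (K)."""
        return c * m * dT

    # Water: c = 4184 J/(kg*K). Raising 1 kg by 1 K takes 4184 J,
    # so raising 2 kg by 3 K takes six times as much.
    print(heat_required(4184, 1, 1))  # 4184
    print(heat_required(4184, 2, 3))  # 25104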

Specific heat capacity often varies with temperature, and is different for each state of matter. Liquid water has one of the highest specific heat capacities among common substances, about 4184 J⋅kg−1⋅K−1 at 20 °C; but that of ice, just below 0 °C, is only 2093 J⋅kg−1⋅K−1. The specific heat capacities of iron, granite, and hydrogen gas are about 449 J⋅kg−1⋅K−1, 790 J⋅kg−1⋅K−1, and 14300 J⋅kg−1⋅K−1, respectively. While the substance is undergoing a phase transition, such as melting or boiling, its specific heat capacity is technically undefined, because the heat goes into changing its state rather than raising its temperature.

The specific heat capacity of a substance, especially a gas, may be significantly higher when it is allowed to expand as it is heated (specific heat capacity at constant pressure) than when it is heated in a closed vessel that prevents expansion (specific heat capacity at constant volume). These two values are usually denoted by c_p and c_v, respectively; their quotient γ = c_p/c_v is the heat capacity ratio.

The term specific heat may also refer to the ratio between the specific heat capacities of a substance at a given temperature and of a reference substance at a reference temperature, such as water at 15 °C; much in the fashion of specific gravity. Specific heat capacity is also related to other intensive measures of heat capacity with other denominators. If the amount of substance is measured as a number of moles, one gets the molar heat capacity instead, whose SI unit is joule per kelvin per mole, J⋅mol−1⋅K−1. If the amount is taken to be the volume of the sample (as is sometimes done in engineering), one gets the volumetric heat capacity, whose SI unit is joule per kelvin per cubic meter, J⋅m−3⋅K−1.

One of the first scientists to use the concept was Joseph Black, an 18th-century medical doctor and professor of medicine at Glasgow University. He measured the specific heat capacities of many substances, using the term capacity for heat.

Definition

The specific heat capacity of a substance, usually denoted by c or s, is the heat capacity C of a sample of the substance, divided by the mass m of the sample:

c = C/m = (1/m) · (dQ/dT)

where dQ represents the amount of heat needed to uniformly raise the temperature of the sample by a small increment dT.

Like the heat capacity of an object, the specific heat capacity of a substance may vary, sometimes substantially, depending on the starting temperature of the sample and the pressure applied to it. Therefore, it should be considered a function c(p, T) of those two variables.

These parameters are usually specified when giving the specific heat capacity of a substance. For example, "Water (liquid): c_p = 4187 J⋅kg−1⋅K−1 (15 °C)". When not specified, published values of the specific heat capacity generally are valid for some standard conditions for temperature and pressure.

However, the dependency of c on starting temperature and pressure can often be ignored in practical contexts, e.g. when working in narrow ranges of those variables. In those contexts one usually omits the qualifier (p, T), and approximates the specific heat capacity by a constant c suitable for those ranges.

Specific heat capacity is an intensive property of a substance, an intrinsic characteristic that does not depend on the size or shape of the amount in consideration. (The qualifier "specific" in front of an extensive property often indicates an intensive property derived from it.)

Variations

The injection of heat energy into a substance, besides raising its temperature, usually causes an increase in its volume and/or its pressure, depending on how the sample is confined. The choice made about the latter affects the measured specific heat capacity, even for the same starting pressure p and starting temperature T. Two particular choices are widely used:

  • If the pressure is kept constant (for instance, at the ambient atmospheric pressure), and the sample is allowed to expand, the expansion generates work as the force from the pressure displaces the enclosure or the surrounding fluid. That work must come from the heat energy provided. The specific heat capacity thus obtained is said to be measured at constant pressure (or isobaric), and is often denoted c_p.
  • On the other hand, if the expansion is prevented – for example by a sufficiently rigid enclosure, or by increasing the external pressure to counteract the internal one – no work is generated, and the heat energy that would have gone into it must instead contribute to the internal energy of the sample, including raising its temperature by an extra amount. The specific heat capacity obtained this way is said to be measured at constant volume (or isochoric) and denoted c_v.

The value of c_v is usually less than the value of c_p. This difference is particularly notable in gases, where values at constant pressure are typically 30% to 66.7% greater than those at constant volume. Hence the heat capacity ratio of gases is typically between 1.3 and 1.67.

Applicability

The specific heat capacity can be defined and measured for gases, liquids, and solids of fairly general composition and molecular structure. These include gas mixtures, solutions and alloys, or heterogeneous materials such as milk, sand, granite, and concrete, if considered at a sufficiently large scale.

The specific heat capacity can be defined also for materials that change state or composition as the temperature and pressure change, as long as the changes are reversible and gradual. Thus, for example, the concepts are definable for a gas or liquid that dissociates as the temperature increases, as long as the products of the dissociation promptly and completely recombine when it drops.

The specific heat capacity is not meaningful if the substance undergoes irreversible chemical changes, or if there is a phase change, such as melting or boiling, at a sharp temperature within the range of temperatures spanned by the measurement.

Measurement

The specific heat capacity of a substance is typically determined according to the definition; namely, by measuring the heat capacity of a sample of the substance, usually with a calorimeter, and dividing by the sample's mass. Several techniques can be applied for estimating the heat capacity of a substance, such as fast differential scanning calorimetry.

[Figure: Graph of temperature of phases of water heated from −100 °C to 200 °C. The dashed-line example shows that melting and heating 1 kg of ice at −50 °C to water at 40 °C needs 600 kJ.]
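The 600 kJ figure in the caption can be checked against the specific heat values quoted above, together with the latent heat of fusion of ice, roughly 334 kJ/kg (an assumed textbook value not stated in this article):

    # Heat to take 1 kg of ice at -50 C to liquid water at 40 C.
    c_ice = 2093        # J/(kg*K), from this article (ice just below 0 C)
    c_water = 4184      # J/(kg*K), from this article (liquid water)
    L_fusion = 334_000  # J/kg, latent heat of fusion of ice (assumed typical value)
    m = 1.0             # kg

    q = m * (c_ice * 50        # warm ice from -50 C to 0 C   -> 104,650 J
             + L_fusion        # melt the ice at 0 C          -> 334,000 J
             + c_water * 40)   # warm water from 0 C to 40 C  -> 167,360 J
    print(round(q / 1000))     # ~606 kJ, matching the caption's ~600 kJ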

The specific heat capacities of gases can be measured at constant volume, by enclosing the sample in a rigid container. On the other hand, measuring the specific heat capacity at constant volume can be prohibitively difficult for liquids and solids, since one often would need impractical pressures in order to prevent the expansion that would be caused by even small increases in temperature. Instead, the common practice is to measure the specific heat capacity at constant pressure (allowing the material to expand or contract as it wishes), determine separately the coefficient of thermal expansion and the compressibility of the material, and compute the specific heat capacity at constant volume from these data according to the laws of thermodynamics.

Units

International system

The SI unit for specific heat capacity is joule per kelvin per kilogram, J/(kg⋅K), i.e. J⋅K−1⋅kg−1. Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, this is the same as joule per degree Celsius per kilogram: J/(kg⋅°C). Sometimes the gram is used instead of the kilogram for the unit of mass: 1 J⋅g−1⋅K−1 = 1000 J⋅kg−1⋅K−1.

The specific heat capacity of a substance (per unit of mass) has dimension L2⋅Θ−1⋅T−2, or (L/T)2/Θ. Therefore, the SI unit J⋅kg−1⋅K−1 is equivalent to metre squared per second squared per kelvin (m2⋅K−1⋅s−2).

Imperial engineering units

Professionals in construction, civil engineering, chemical engineering, and other technical disciplines, especially in the United States, may use English Engineering units including the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or Rankine (°R = 5/9 K, about 0.555556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.056 J), as the unit of heat.

In those contexts, the unit of specific heat capacity is BTU/(lb⋅°R), where 1 BTU/(lb⋅°R) ≈ 4186.8 J/(kg⋅K). The BTU was originally defined so that the average specific heat capacity of water would be 1 BTU/(lb⋅°F). Note the similarity of this value to the calorie-based one, 4184 J/(kg⋅°C): the two differ by only about 0.07%, as they essentially measure the same energy, using water as a base reference, scaled to their systems' respective pounds and °F, or kilograms and °C.
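The conversion factor follows directly from the definitions of the BTU, the pound, and the Rankine degree quoted above; a quick sanity check in Python:

    # 1 BTU/(lb*degR) expressed in J/(kg*K)
    BTU = 1055.056   # J (International Table BTU, approximate)
    lb = 0.45359237  # kg (exact)
    degR = 5 / 9     # K (exact)

    print(BTU / (lb * degR))  # ~4186.8 J/(kg*K)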

Calories

In chemistry, heat amounts were often measured in calories. Confusingly, two units with that name, denoted "cal" or "Cal", have been commonly used to measure amounts of heat:

  • the "small calorie" (or "gram-calorie", "cal") is 4.184 J, exactly. It was originally defined so that the specific heat capacity of liquid water would be 1 cal/°C⋅g.
  • The "grand calorie" (also "kilocalorie", "kilogram-calorie", or "food calorie"; "kcal" or "Cal") is 1000 small calories, that is, 4184 J, exactly. It was defined so that the specific heat capacity of water would be 1 Cal/°C⋅kg.

While these units are still used in some contexts (such as kilogram calorie in nutrition), their use is now deprecated in technical and scientific fields. When heat is measured in these units, the unit of specific heat capacity is usually

1 cal/(°C⋅g) ("small calorie") = 1 Cal/(°C⋅kg) = 1 kcal/(°C⋅kg) ("large calorie") = 4184 J/(kg⋅K) = 4.184 kJ/(kg⋅K).

Note that while cal is 1⁄1000 of a Cal or kcal, it is also per gram instead of per kilogram: ergo, in either unit, the specific heat capacity of water is approximately 1.

Physical basis

The temperature of a sample of a substance reflects the average kinetic energy of its constituent particles (atoms or molecules) relative to its center of mass. However, not all of the energy provided to a sample of a substance goes into raising its temperature, as described by the equipartition theorem.

Monatomic gases

Quantum mechanics predicts that, at room temperature and ordinary pressures, an isolated atom in a gas cannot store any significant amount of energy except in the form of kinetic energy. Thus, the heat capacity per mole is the same for all monatomic gases (such as the noble gases). More precisely, c_v,m = (3/2)R ≈ 12.5 J⋅K−1⋅mol−1 and c_p,m = (5/2)R ≈ 20.8 J⋅K−1⋅mol−1, where R is the ideal gas constant (which is the product of the Boltzmann conversion constant from the kelvin microscopic energy unit to the macroscopic energy unit joule, and the Avogadro number).

Therefore, the specific heat capacity (per unit of mass, not per mole) of a monatomic gas will be inversely proportional to its (adimensional) atomic weight A. That is, approximately,

c_v ≈ (12470 J⋅K−1⋅kg−1)/A          c_p ≈ (20785 J⋅K−1⋅kg−1)/A

For the noble gases, from helium to xenon, these computed values are

Gas                   He      Ne      Ar      Kr      Xe
A                     4.00    20.17   39.95   83.80   131.29
c_v (J⋅K−1⋅kg−1)      3118    618.3   312.2   148.8   94.99
c_p (J⋅K−1⋅kg−1)      5197    1031    520.3   248.0   158.3
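These table entries can be reproduced from c_v = (3/2)R/M and c_p = (5/2)R/M with M in kg/mol; a minimal Python check using the atomic weights from the table:

    # Specific heat capacities of monatomic ideal gases: cV = 1.5*R/M, cP = 2.5*R/M
    R = 8.31446  # J/(K*mol), molar gas constant

    for gas, A in [("He", 4.00), ("Ne", 20.17), ("Ar", 39.95),
                   ("Kr", 83.80), ("Xe", 131.29)]:
        M = A / 1000  # kg/mol
        print(f"{gas}: cV = {1.5 * R / M:.0f}, cP = {2.5 * R / M:.0f} J/(K*kg)")
    # He: cV = 3118, cP = 5197 ... (matches the table)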

Polyatomic gases

On the other hand, a polyatomic gas molecule (consisting of two or more atoms bound together) can store heat energy in other forms besides its kinetic energy. These forms include rotation of the molecule, and vibration of the atoms relative to its center of mass.

These extra degrees of freedom or "modes" contribute to the specific heat capacity of the substance. Namely, when heat energy is injected into a gas with polyatomic molecules, only part of it will go into increasing their kinetic energy, and hence the temperature; the rest will go into those other degrees of freedom. In order to achieve the same increase in temperature, more heat energy will have to be provided to a mole of that substance than to a mole of a monatomic gas. Therefore, the specific heat capacity of a polyatomic gas depends not only on its molecular mass, but also on the number of degrees of freedom that the molecules have.

Quantum mechanics further says that each rotational or vibrational mode can only take or lose energy in certain discrete amounts (quanta). Depending on the temperature, the average heat energy per molecule may be too small compared to the quanta needed to activate some of those degrees of freedom. Those modes are said to be "frozen out". In that case, the specific heat capacity of the substance is going to increase with temperature, sometimes in a step-like fashion, as more modes become unfrozen and start absorbing part of the input heat energy.

For example, the molar heat capacity of nitrogen N2 at constant volume is 20.6 J⋅K−1⋅mol−1 (at 15 °C, 1 atm), which is 2.49R. That is the value expected from theory if each molecule had 5 degrees of freedom. These turn out to be three degrees of the molecule's velocity vector, plus two degrees from its rotation about an axis through the center of mass and perpendicular to the line of the two atoms. Because of those two extra degrees of freedom, the specific heat capacity of N2 (736 J⋅K−1⋅kg−1) is greater than that of a hypothetical monatomic gas with the same molecular mass 28 (445 J⋅K−1⋅kg−1), by a factor of 5/3.

This value for the specific heat capacity of nitrogen is practically constant from below −150 °C to about 300 °C. In that temperature range, the two additional degrees of freedom that correspond to vibrations of the atoms, stretching and compressing the bond, are still "frozen out". At about that temperature, those modes begin to "un-freeze", and as a result c_v,m starts to increase rapidly at first, then more slowly as it tends to another constant value. It is 35.5 J⋅K−1⋅mol−1 at 1500 °C, 36.9 at 2500 °C, and 37.5 at 3500 °C. The last value corresponds almost exactly to the predicted value for 7 degrees of freedom per molecule.
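The 5/3 factor quoted above can be reproduced from the equipartition counting; note this sketch uses the idealized (f/2)R values, so it gives about 742 J⋅K−1⋅kg−1 rather than the measured 736:

    # Equipartition estimate: cV,molar = (f/2) * R, with f degrees of freedom.
    R = 8.31446  # J/(K*mol)
    M = 0.028    # kg/mol, molecular mass of N2 (and of the hypothetical monatomic gas)

    cv_monatomic = (3 / 2) * R / M  # 3 translational degrees -> ~445 J/(K*kg)
    cv_diatomic = (5 / 2) * R / M   # + 2 rotational degrees  -> ~742 J/(K*kg)
    print(cv_monatomic, cv_diatomic, cv_diatomic / cv_monatomic)  # ratio = 5/3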

Derivations of heat capacity

Relation between specific heat capacities

Starting from the fundamental thermodynamic relation one can show

c_p − c_v = (α² T)/(ρ β_T)

where

  • α is the coefficient of thermal expansion,
  • β_T is the isothermal compressibility, and
  • ρ is the density.

A derivation is discussed in the article Relations between specific heats.

For an ideal gas, if ρ is expressed as molar density in the above equation, this equation reduces simply to Mayer's relation,

C_p,m − C_v,m = R

where C_p,m and C_v,m are intensive property heat capacities expressed on a per-mole basis at constant pressure and constant volume, respectively.
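As a numerical illustration of Mayer's relation, using approximate textbook values for dry air (assumed here, not taken from this article):

    # Mayer's relation for an ideal gas: Cp,m - Cv,m = R
    R = 8.31446    # J/(K*mol)
    cp_air = 29.1  # J/(K*mol), approximate molar cp of dry air near room temperature
    cv_air = 20.8  # J/(K*mol), approximate molar cv of dry air near room temperature
    print(round(cp_air - cv_air, 2))  # ~8.3, close to R, as Mayer's relation predicts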

Specific heat capacity

The specific heat capacity of a material on a per-mass basis is

c = ∂C/∂m

which in the absence of phase transitions is equivalent to

c = C/m = C/(ρV)

where

  • C is the heat capacity of a body made of the material in question,
  • m is the mass of the body,
  • V is the volume of the body, and
  • ρ = m/V is the density of the material.

For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, dp = 0) or isochoric (constant volume, dV = 0) processes. The corresponding specific heat capacities are expressed as

c_p = (∂C/∂m)_p
c_v = (∂C/∂m)_V

A related parameter to c is C/V, the volumetric heat capacity. In engineering practice, c_v for solids or liquids often signifies a volumetric heat capacity, rather than a constant-volume one. In such cases, the mass-specific heat capacity is often explicitly written with the subscript m, as c_m. Of course, from the above relationships, for solids one writes

c_m = C/m = c_v/ρ

For pure homogeneous chemical compounds with an established molecular or molar mass, or when a molar quantity is established, heat capacity as an intensive property can be expressed on a per-mole basis instead of a per-mass basis by the following equations analogous to the per-mass equations:

c_p,m = (∂C/∂n)_p = molar heat capacity at constant pressure
c_v,m = (∂C/∂n)_V = molar heat capacity at constant volume

where n = number of moles in the body or thermodynamic system. One may refer to such a per mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per-mass basis.
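Converting between the per-mass and per-mole forms is simply multiplication by the molar mass; for example, for water (molar mass assumed to be about 0.018 kg/mol):

    # Molar heat capacity from specific heat capacity: c_molar = c_specific * M
    c_water = 4184     # J/(K*kg), specific heat capacity of liquid water
    M_water = 0.01802  # kg/mol, molar mass of water (assumed standard value)
    print(c_water * M_water)  # ~75.4 J/(K*mol), the molar heat capacity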

Polytropic heat capacity

The polytropic heat capacity is calculated for processes in which all the thermodynamic properties (pressure, volume, temperature) change:

c_i,m = (∂C/∂n) = molar heat capacity at polytropic process

The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index is between 1 and the adiabatic exponent (γ or κ).

Dimensionless heat capacity

The dimensionless heat capacity of a material is

C* = C/(nR) = C/(N k_B)

where

  • C is the heat capacity of a body made of the material in question (J/K),
  • n is the amount of substance in the body (mol),
  • R is the gas constant (J/(K⋅mol)),
  • N is the number of molecules in the body (dimensionless), and
  • k_B is the Boltzmann constant (J/(K⋅molecule)).

Again, SI units are shown for example.

Read more about the quantities of dimension one at BIPM.

In the ideal gas article, dimensionless heat capacity is expressed as ĉ and is related there directly to half the number of degrees of freedom per particle. This holds true for quadratic degrees of freedom, a consequence of the equipartition theorem.

More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle S* = S/(N k_B), measured in nats:

C* = dS*/d(ln T)

Alternatively, using base-2 logarithms, C* relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits.

Heat capacity at absolute zero

From the definition of entropy

T dS = δQ

the absolute entropy can be calculated by integrating from zero kelvin to the final temperature T_f:

S(T_f) = ∫₀^T_f (δQ/T) = ∫₀^T_f C(T) (dT/T)

The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, thus violating the third law of thermodynamics. One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached.
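This can be seen numerically: with a Debye-like low-temperature law C(T) = aT³ (an assumed model form, with an arbitrary coefficient a), the integrand C(T)/T = aT² remains finite as T → 0 and the entropy integral converges, whereas a constant C > 0 down to T = 0 would diverge logarithmically:

    # S(Tf) = integral from 0 to Tf of C(T)/T dT, with Debye-like C(T) = a*T**3.
    # The integrand a*T**2 -> 0 as T -> 0, so the integral is finite (= a*Tf**3 / 3).
    a, Tf, n = 1e-4, 10.0, 100_000
    dT = Tf / n
    S = sum(a * ((i + 0.5) * dT) ** 2 * dT for i in range(n))  # midpoint rule
    print(S, a * Tf**3 / 3)  # numerical ~ analytic: 0.0333...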

Solid phase

The theoretical maximum heat capacity for larger and larger multi-atomic gases at higher temperatures also approaches the Dulong–Petit limit of 3R, so long as this is calculated per mole of atoms, not molecules. The reason is that gases with very large molecules in theory have almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity contribution that comes from potential energy that cannot be stored between separate molecules in a gas.

The Dulong–Petit limit results from the equipartition theorem, and as such is only valid in the classical limit of a microstate continuum, which is a high-temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature, quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3R per mole of atoms in the solid, although heat capacities calculated per mole of molecules in molecular solids may be more than 3R. For example, the heat capacity of water ice at the melting point is about 4.6R per mole of molecules, but only 1.5R per mole of atoms. The lower-than-3R number "per atom" (as is the case with diamond and beryllium) results from the "freezing out" of possible vibration modes for light atoms at suitably low temperatures, just as in many low-mass-atom gases at room temperature. Because of high crystal binding energies, these effects are seen in solids more often than liquids: for example, the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3R per mole of atoms of the Dulong–Petit theoretical maximum.

For a more modern and precise analysis of the heat capacities of solids, especially at low temperatures, it is useful to use the idea of phonons. See Debye model.

Theoretical estimation

The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law; R is the gas constant). Low-temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.

Water (liquid): c_p = 4185.5 J⋅K−1⋅kg−1 (15 °C, 101.325 kPa)
Water (liquid): C_v,m = 74.539 J⋅K−1⋅mol−1 (25 °C)

For liquids and gases, it is important to know the pressure to which given heat capacity data refer. Most published data are given for standard pressure. However, different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (≈750.062 Torr).
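For a solid of relatively heavy atoms, the Dulong–Petit value 3R per mole of atoms translates directly into the per-kilogram figure quoted for iron in the introduction (the molar mass of iron is an assumed standard value):

    # Dulong-Petit estimate for a heavy-atom solid: c = 3R / M per unit mass
    R = 8.31446        # J/(K*mol), molar gas constant
    M_iron = 0.055845  # kg/mol, molar mass of iron (assumed standard value)
    print(3 * R / M_iron)  # ~447 J/(K*kg); the measured value quoted above is ~449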

Thermodynamic derivation

In theory, the specific heat capacity of a substance can also be derived from its abstract thermodynamic modeling by an equation of state and an internal energy function.

State of matter in a homogeneous sample

To apply the theory, one considers the sample of the substance (solid, liquid, or gas) for which the specific heat capacity can be defined; in particular, that it has homogeneous composition and fixed mass M. Assume that the evolution of the system is always slow enough for the internal pressure P and temperature T to be considered uniform throughout. The pressure P would be equal to the pressure applied to it by the enclosure or some surrounding fluid, such as air.

The state of the material can then be specified by three parameters: its temperature T, the pressure P, and its specific volume ν = V/M, where V is the volume of the sample. (This quantity is the reciprocal 1/ρ of the material's density ρ = M/V.) Like T and P, the specific volume ν is an intensive property of the material and its state, that does not depend on the amount of substance in the sample.

Those variables are not independent. The allowed states are defined by an equation of state relating those three variables: F(T, P, ν) = 0. The function F depends on the material under consideration. The specific internal energy stored internally in the sample, per unit of mass, will then be another function U(T, P, ν) of these state variables, that is also specific of the material. The total internal energy in the sample then will be M·U(T, P, ν).

For some simple materials, like an ideal gas, one can derive from basic theory the equation of state and even the specific internal energy. In general, these functions must be determined experimentally for each substance.

Conservation of energy

The absolute value of this quantity is undefined, and (for the purposes of thermodynamics) the state of "zero internal energy" can be chosen arbitrarily. However, by the law of conservation of energy, any infinitesimal increase M·dU in the total internal energy must be matched by the net flow of heat energy dQ into the sample, plus any net mechanical energy provided to it by the enclosure or surrounding medium. The latter is −P dV, where dV is the change in the sample's volume in that infinitesimal step. Therefore

dQ − P dV = M dU

hence

dQ/M = dU + P dν

If the volume of the sample (hence the specific volume of the material) is kept constant during the injection of the heat amount dQ, then the term P dν is zero (no mechanical work is done). Then, dividing by dT,

(1/M)(dQ/dT) = dU/dT

where dT is the change in temperature that resulted from the heat input. The left-hand side is the specific heat capacity at constant volume c_v of the material.

For the heat capacity at constant pressure, it is useful to define the specific enthalpy of the system as the sum h(T, P, ν) = U(T, P, ν) + P ν. An infinitesimal change in the specific enthalpy will then be

dh = dU + ν dP + P dν

therefore

dh − ν dP = dQ/M

If the pressure is kept constant, the second term on the left-hand side is zero, and

(1/M)(dQ/dT) = dh/dT

The left-hand side is the specific heat capacity at constant pressure c_p of the material.

Connection to equation of state

In general, the infinitesimal quantities dT, dP, dν, dU are constrained by the equation of state and the specific internal energy function. Namely,

dT (∂F/∂T) + dP (∂F/∂P) + dν (∂F/∂ν) = 0
dU = dT (∂U/∂T) + dP (∂U/∂P) + dν (∂U/∂ν)

Here (∂F/∂T) denotes the (partial) derivative of the state equation F with respect to its argument T, keeping the other two arguments fixed, evaluated at the state in question. The other partial derivatives are defined in the same way. These two equations on the four infinitesimal increments normally constrain them to a two-dimensional linear subspace of possible infinitesimal state changes, that depends on the material and on the state. The constant-volume and constant-pressure changes are only two particular directions in this space.

This analysis also holds no matter how the energy increment dQ is injected into the sample, namely by heat conduction, irradiation, electromagnetic induction, radioactive decay, etc.

Relation between heat capacities

For any specific volume ν, denote by p_ν(T) the function that describes how the pressure varies with the temperature T, as allowed by the equation of state, when the specific volume of the material is forcefully kept constant at ν. Analogously, for any pressure P, let ν_P(T) be the function that describes how the specific volume varies with the temperature, when the pressure is kept constant at P. Namely, those functions are such that

F(T, p_ν(T), ν) = 0 and F(T, P, ν_P(T)) = 0

for any values of T, P, ν. In other words, the graphs of p_ν(T) and ν_P(T) are slices of the surface defined by the state equation, cut by planes of constant ν and constant P, respectively.

Then, from the fundamental thermodynamic relation it follows that

c_p(T, P, ν) − c_v(T, P, ν) = T · (dp_ν/dT) · (dν_P/dT)

This equation can be rewritten as

c_p − c_v = ν T α²/β_T

where

  • α is the coefficient of thermal expansion, and
  • β_T is the isothermal compressibility,

both depending on the state (T, P, ν).

The heat capacity ratio, or adiabatic index, is the ratio of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.

Calculation from first principles

The path integral Monte Carlo method, discussed under Theoretical estimation above, is a numerical approach for determining the values of heat capacity based on quantum dynamical principles, and the Dulong–Petit, Einstein, and Debye approximations described there apply here as well. However, attention should be paid to the consistency of such ab initio considerations when used along with an equation of state for the considered material.

Ideal gas

For an ideal gas, evaluating the partial derivatives above according to the equation of state, where R is the gas constant, for an ideal gas

P V = n R T

C_p − C_v = T (∂P/∂T)_V (∂V/∂T)_P

Substituting

(∂P/∂T)_V = nR/V        (∂V/∂T)_P = nR/P

this equation reduces simply to Mayer's relation:

C_p,m − C_v,m = R

The difference in heat capacities as defined by the above Mayer relation is only exact for an ideal gas and would be different for any real gas.
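The ideal-gas evaluation above can be verified symbolically, for instance with sympy (a sketch using molar volume v and treating R as the molar gas constant):

    import sympy as sp

    # Ideal gas: P*v = R*T (v = molar volume). Check that
    # c_p - c_v = T * (dP/dT at constant v) * (dv/dT at constant P) reduces to R.
    T, P, v, R = sp.symbols('T P v R', positive=True)

    dP_dT_const_v = sp.diff(R * T / v, T)  # P(T, v) = R*T/v  ->  R/v
    dv_dT_const_P = sp.diff(R * T / P, T)  # v(T, P) = R*T/P  ->  R/P
    diff_cp_cv = T * dP_dT_const_v * dv_dT_const_P     # T * R/v * R/P
    print(sp.simplify(diff_cp_cv.subs(v, R * T / P)))  # -> R (Mayer's relation)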

Wireless ad hoc network

From Wikipedia, the free encyclopedia

A wireless ad hoc network (WANET) or mobile ad hoc network (MANET) is a decentralized type of wireless network. The network is ad hoc because it does not rely on a pre-existing infrastructure, such as routers or wireless access points. Instead, each node participates in routing by forwarding data for other nodes. The determination of which nodes forward data is made dynamically on the basis of network connectivity and the routing algorithm in use.

Such wireless networks lack the complexities of infrastructure setup and administration, enabling devices to create and join networks "on the fly".

Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. Each must forward traffic unrelated to its own use, and therefore be a router. The primary challenge in building a MANET is equipping each device to continuously maintain the information required to properly route traffic. This becomes harder as the scale of the MANET increases, due to (1) the desire to route packets to/through every other node, (2) the percentage of overhead traffic needed to maintain real-time routing status, (3) each node having its own goodput to route, independent of and unaware of others' needs, and (4) all nodes sharing limited communication bandwidth, such as a slice of radio spectrum.

Such networks may operate by themselves or may be connected to the larger Internet. They may contain one or multiple different transceivers between nodes. This results in a highly dynamic, autonomous topology. MANETs usually have a routable networking environment on top of a link-layer ad hoc network.

History

Packet radio

[Image: Stanford Research Institute's Packet Radio Van, site of the first three-way internetworked transmission.]

[Image: Initial large-scale trials of the Near-term digital radio, February 1998.]

The earliest wireless data network was called PRNET, the packet radio network, and was sponsored by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. Bolt, Beranek and Newman Inc. (BBN) and SRI International designed, built, and experimented with these earliest systems. Experimenters included Robert Kahn, Jerry Burchfiel, and Ray Tomlinson. Similar experiments took place in the amateur radio community with the AX.25 protocol. These early packet radio systems predated the Internet, and indeed were part of the motivation of the original Internet Protocol suite. Later DARPA experiments included the Survivable Radio Network (SURAN) project, which took place in the 1980s. A successor to these systems was fielded in the mid-1990s for the US Army, and later other nations, as the Near-term digital radio.

A third wave of academic and research activity started in the mid-1990s with the advent of inexpensive 802.11 radio cards for personal computers. Current wireless ad hoc networks are designed primarily for military utility. Problems with packet radios were (1) bulky elements, (2) slow data rates, and (3) an inability to maintain links under high mobility. The project did not proceed much further until the early 1990s, when wireless ad hoc networks were born.

Early work on MANET

The growth of laptops and 802.11/Wi-Fi wireless networking has made MANETs a popular research topic since the mid-1990s. Many academic papers evaluate protocols and their abilities, assuming varying degrees of mobility within a bounded space, usually with all nodes within a few hops of each other. Different protocols are then evaluated based on measures such as the packet drop rate, the overhead introduced by the routing protocol, end-to-end packet delays, network throughput, ability to scale, etc.

In the early 1990s, Charles Perkins from Sun Microsystems and Chai Keong Toh from Cambridge University separately started to work on a different Internet, that of a wireless ad hoc network. Perkins was working on the dynamic addressing issues. Toh worked on a new routing protocol, which was known as ABR – associativity-based routing. Perkins eventually proposed DSDV – Destination-Sequenced Distance-Vector routing, which was based on distributed distance-vector routing. Toh's proposal was an on-demand based routing, i.e. routes are discovered on-the-fly in real-time as and when needed. ABR was submitted to the IETF as RFCs. ABR was implemented successfully into the Linux OS on Lucent WaveLAN 802.11a enabled laptops, and a practical ad hoc mobile network was thereby proven to be possible in 1999. Another routing protocol known as AODV was subsequently introduced and later proven and implemented in 2005. David Johnson and Dave Maltz proposed DSR – Dynamic Source Routing, which was published as an experimental RFC in 2007.

Applications

The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes can't be relied on and may improve the scalability of networks compared to wireless managed networks, though theoretical and practical limits to the overall capacity of such networks have been identified. Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like natural disasters or military conflicts. The presence of dynamic and adaptive routing protocols enables ad hoc networks to be formed quickly.

Mobile ad hoc networks (MANETs)

A mobile ad hoc network (MANET) is a continuously self-configuring, self-organizing, infrastructure-less network of mobile devices connected without wires. Such networks are sometimes known as "on-the-fly" networks or "spontaneous networks".

Vehicular ad hoc networks (VANETs)

VANETs are used for communication between vehicles and roadside equipment. Intelligent vehicular ad hoc networks (InVANETs) use artificial intelligence to help vehicles behave intelligently during vehicle-to-vehicle collisions and accidents. Vehicles use radio waves to communicate with each other, creating communication networks instantly on-the-fly while they move along roads. VANETs need to be secured with lightweight protocols.

Smartphone ad hoc networks (SPANs)

A SPAN leverages existing hardware (primarily Wi-Fi and Bluetooth) and software (protocols) in commercially available smartphones to create peer-to-peer networks without relying on cellular carrier networks, wireless access points, or traditional network infrastructure. SPANs differ from traditional hub and spoke networks, such as Wi-Fi Direct, in that they support multi-hop relays and there is no notion of a group leader so peers can join and leave at will without destroying the network. Apple's iPhone with iOS version 7.0 and higher is capable of multi-peer ad hoc mesh networking.

Wireless mesh networks

Mesh networks take their name from the topology of the resultant network. In a fully connected mesh, each node is connected to every other node, forming a "mesh". A partial mesh, by contrast, has a topology in which some nodes are not connected to others, although this term is seldom used. Wireless ad hoc networks can take the form of mesh networks or others. A wireless ad hoc network does not have a fixed topology, and its connectivity among nodes is totally dependent on the behavior of the devices, their mobility patterns, distance from each other, etc. Hence, wireless mesh networks are a particular type of wireless ad hoc network, with special emphasis on the resultant network topology. While some wireless mesh networks (particularly those within a home) have relatively infrequent mobility and thus infrequent link breaks, other more mobile mesh networks require frequent routing adjustments to account for lost links.

Army tactical MANETs

Military or tactical MANETs are used by military units with an emphasis on rapid deployment, infrastructureless all-wireless networks (no fixed radio towers), robustness (link breaks are no problem), data rate, real-time requirements, fast re-routing during mobility, data security, radio range, instant operation, and integration with existing systems. Common radio waveforms include the US Army's JTRS SRW, Silvus Technologies' MN-MIMO Waveform (Mobile Networked MIMO), Persistent Systems' Wave Relay, and the Domo Tactical Communications (DTC) MeshUltra Tactical Waveform. Ad hoc mobile communications fit this need well, especially given their infrastructureless nature, fast deployment, and instant operation.

Air Force UAV ad hoc networks

Flying ad hoc networks (FANETs) are composed of unmanned aerial vehicles, allowing great mobility and providing connectivity to remote areas.

An unmanned aerial vehicle (UAV) is an aircraft with no pilot on board. UAVs can be remotely controlled (i.e., flown by a pilot at a ground control station) or can fly autonomously based on pre-programmed flight plans. Civilian uses of UAVs include modeling 3D terrain, package delivery (logistics), etc.

UAVs have also been used by the US Air Force for data collection and situation sensing, without risking the pilot in a hostile foreign environment. With wireless ad hoc network technology embedded into UAVs, multiple UAVs can communicate with each other and work as a team, collaborating to complete a task and mission. If a UAV is destroyed by an enemy, its data can be quickly offloaded wirelessly to other neighboring UAVs. The UAV ad hoc communication network is also sometimes referred to as a UAV instant sky network. More generally, aerial MANETs in UAVs are now (as of 2021) successfully implemented and operational in mini tactical reconnaissance ISR UAVs such as the BRAMOR C4EYE from Slovenia.

Navy ad hoc networks

Navy ships traditionally use satellite communications and other maritime radios to communicate with each other or with ground stations back on land. However, such communications are restricted by delays and limited bandwidth. Wireless ad hoc networks enable ship-area networks to be formed while at sea, enabling high-speed wireless communications among ships, enhancing their sharing of imaging and multimedia data, and allowing better coordination in battlefield operations. Some defense companies (such as Rockwell Collins, Silvus Technologies, and Rohde & Schwarz) have produced products that enhance ship-to-ship and ship-to-shore communications.

Sensor networks

Sensors are useful devices that collect information related to a specific parameter, such as noise, temperature, humidity, pressure, etc. Sensors are increasingly connected wirelessly to allow large-scale collection of sensor data. With a large sample of sensor data, analytics processing can be used to make sense of these data. The connectivity of wireless sensor networks relies on the principles behind wireless ad hoc networks, since sensors can now be deployed without any fixed radio towers, and they can form networks on-the-fly. "Smart Dust" was one of the early projects, done at UC Berkeley, where tiny radios were used to interconnect smart dust. More recently, mobile wireless sensor networks (MWSNs) have also become an area of academic interest.

Robotics

Efforts have been made to co-ordinate and control a group of robots to undertake collaborative work to complete a task. Centralized control is often based on a "star" approach, where robots take turns to talk to the controller station. However, with wireless ad hoc networks, robots can form a communication network on-the-fly, i.e., robots can now "talk" to each other and collaborate in a distributed fashion. With a network of robots, the robots can communicate among themselves, share local information, and distributively decide how to resolve a task in the most effective and efficient way.

Disaster response

Another civilian use of wireless ad hoc networks is public safety. At times of disasters (floods, storms, earthquakes, fires, etc.), a quick and instant wireless communication network is necessary. Especially at times of earthquakes, when radio towers have collapsed or been destroyed, wireless ad hoc networks can be formed independently. Firefighters and rescue workers can use ad hoc networks to communicate and rescue those injured. Commercial radios with such capability are available on the market.

Hospital ad hoc network

Wireless ad hoc networks allow sensors, videos, instruments, and other devices to be deployed and interconnected wirelessly for clinic and hospital patient monitoring, doctor and nurse alert notification, and also making sense of such data quickly at fusion points, so that lives can be saved.

Data monitoring and mining

MANETs can be used for facilitating the collection of sensor data for data mining for a variety of applications such as air pollution monitoring, and different types of architectures can be used for such applications. A key characteristic of such applications is that nearby sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy, due to the spatial correlation between sensor observations, inspires techniques for in-network data aggregation and mining. By measuring the spatial correlation between data sampled by different sensors, a wide class of specialized algorithms can be developed for more efficient spatial data mining as well as more efficient routing strategies. Also, researchers have developed performance models for MANETs to apply queueing theory.

Challenges

Several books and works have revealed the technical and research challenges facing wireless ad hoc networks or MANETs. The advantages for users, the technical difficulties in implementation, and the side effects on radio spectrum pollution can be briefly summarized below:

Advantages for users

The obvious appeal of MANETs is that the network is decentralised and nodes/devices are mobile, that is to say there is no fixed infrastructure, which provides the possibility for numerous applications in different areas such as environmental monitoring, disaster relief, and military communications. Since the early 2000s, interest in MANETs has greatly increased, which, in part, is due to the fact that mobility can improve network capacity, as shown by Grossglauser and Tse, along with the introduction of new technologies.

One main advantage of a decentralised network is that it is typically more robust than a centralised network, due to the multi-hop fashion in which information is relayed. For example, in the cellular network setting, a drop in coverage occurs if a base station stops working; however, the chance of a single point of failure in a MANET is reduced significantly, since the data can take multiple paths. Since the MANET architecture evolves with time, it has the potential to resolve issues such as isolation/disconnection from the network. Further advantages of MANETs over networks with a fixed topology include flexibility (an ad hoc network can be created anywhere with mobile devices), scalability (you can easily add more nodes to the network), and lower administration costs (no need to build an infrastructure first).

Implementation difficulties

With a time-evolving network, it is clear we should expect variations in network performance due to the lack of fixed architecture (no fixed connections). Furthermore, since network topology determines interference and thus connectivity, the mobility pattern of devices within the network will impact network performance, possibly resulting in data having to be resent many times (increased delay); moreover, the allocation of network resources such as power remains unclear. Finally, finding a model that accurately represents human mobility whilst remaining mathematically tractable remains an open problem, due to the large range of factors that influence it. Some typical models used include the random walk, random waypoint, and Lévy flight models.

Side effects

Radios and Modulation

Wireless ad hoc networks can operate over different types of radios. All radios use modulation to move information over a certain bandwidth of radio frequencies. Given the need to move large amounts of information quickly over long distances, a MANET radio channel ideally has large bandwidth (e.g. amount of radio spectrum), lower frequencies, and higher power. Given the desire to communicate with many other nodes ideally simultaneously, many channels are needed. Given radio spectrum is shared and regulated, there is less bandwidth available at lower frequencies. Processing many radio channels requires many resources. Given the need for mobility, small size and lower power consumption are very important. Picking a MANET radio and modulation has many trade-offs; many start with the specific frequency and bandwidth they are allowed to use.

Radios can be UHF (300–3000 MHz), SHF (3–30 GHz), or EHF (30–300 GHz). Wi-Fi ad hoc uses unlicensed ISM 2.4 GHz radios. It can also be used on 5.8 GHz radios.

The higher the frequency, such as 300 GHz, the more predominant the absorption of the signal will be. Army tactical radios usually employ a variety of UHF and SHF radios, including VHF, to provide a variety of communication modes. In the 800, 900, 1200, and 1800 MHz range, cellular radios are predominant. Some cellular radios use ad hoc communications to extend cellular range to areas and devices not reachable by the cellular base station.

Next-generation Wi-Fi, known as 802.11ax, provides low delay, high capacity (up to 10 Gbit/s), and a low packet loss rate, offering 12 streams – 8 streams at 5 GHz and 4 streams at 2.4 GHz. IEEE 802.11ax uses 8x8 MU-MIMO, OFDMA, and 80 MHz channels. Hence, 802.11ax has the ability to form high-capacity Wi-Fi ad hoc networks.

At 60 GHz, there is another form of Wi-Fi known as WiGig – wireless gigabit. This has the ability to offer up to 7 Gbit/s throughput. Currently, WiGig is targeted to work with 5G cellular networks.

Circa 2020, the general consensus found the "best" modulation for moving information over higher-frequency waves to be orthogonal frequency-division multiplexing, as used in 4G LTE, 5G, and Wi-Fi.

Protocol stack

The challenges affecting MANETs span various layers of the OSI protocol stack. The media access control (MAC) layer has to be improved to resolve collisions and hidden-terminal problems. The network-layer routing protocol has to be improved to resolve dynamically changing network topologies and broken routes. The transport-layer protocol has to be improved to handle lost or broken connections. The session-layer protocol has to deal with discovery of servers and services.

A major limitation with mobile nodes is that they have high mobility, causing links to be frequently broken and reestablished. Moreover, the bandwidth of a wireless channel is also limited, and nodes operate on limited battery power, which will eventually be exhausted. These factors make the design of a mobile ad hoc network challenging.

Cross-layer design deviates from the traditional network design approach, in which each layer of the stack is made to operate independently. Modifying transmission power helps a node to dynamically vary its propagation range at the physical layer, because propagation distance increases with transmission power. This information is passed from the physical layer to the network layer so that it can take optimal decisions in routing protocols. A major advantage of this approach is that it allows access to information between the physical layer and the top layers (MAC and network layer).

Some elements of the software stack were developed to allow code updates in situ, i.e., with the nodes embedded in their physical environment and without needing to bring the nodes back into the lab facility. Such software updating relied on an epidemic mode of dissemination of information and had to be done both efficiently (few network transmissions) and quickly.

Routing

Routing in wireless ad hoc networks or MANETs generally falls into three categories, namely: proactive routing, reactive routing, and hybrid routing.

Proactive routing

This type of protocol maintains fresh lists of destinations and their routes by periodically distributing routing tables throughout the network. The main disadvantages of such algorithms are:

  • The considerable amount of data required for maintenance.
  • Slow reaction to restructuring and failures.

Example: Optimized Link State Routing Protocol (OLSR)

Distance vector routing

As in a fixed network, nodes maintain routing tables. Distance-vector protocols are based on calculating the direction and distance to any link in a network. "Direction" usually means the next-hop address and the exit interface. "Distance" is a measure of the cost to reach a certain node. The least-cost route between any two nodes is the route with minimum distance. Each node maintains a vector (table) of minimum distance to every node. The cost of reaching a destination is calculated using various route metrics. RIP uses the hop count to the destination, whereas IGRP takes into account other information such as node delay and available bandwidth.
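A minimal sketch of this table-update process, in the style of Bellman–Ford relaxation over neighbor advertisements (the node names and link costs are hypothetical):

    # One style of distance-vector computation: repeatedly relax each link,
    # offering cost(link) + the neighbor's best-known distance to every destination.
    INF = float('inf')
    links = {('A', 'B'): 1, ('B', 'C'): 2, ('A', 'C'): 5}  # hypothetical link costs
    nodes = {'A', 'B', 'C'}

    # dist[u][d] = u's current estimate of its cost to reach d
    dist = {u: {d: (0 if u == d else INF) for d in nodes} for u in nodes}
    cost = {}
    for (u, w), c in links.items():
        cost[(u, w)] = cost[(w, u)] = c  # links are bidirectional

    changed = True
    while changed:  # iterate rounds until no table changes (convergence)
        changed = False
        for (u, w), c in cost.items():
            for d in nodes:
                if c + dist[w][d] < dist[u][d]:
                    dist[u][d] = c + dist[w][d]
                    changed = True

    print(dist['A']['C'])  # 3: A reaches C via B (1 + 2), cheaper than the direct 5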

Reactive routing

This type of protocol finds a route based on user and traffic demand by flooding the network with Route Request or Discovery packets. The main disadvantages of such algorithms are:

  • High latency time in route finding.
  • Excessive flooding can lead to network clogging.

However, clustering can be used to limit flooding. The latency incurred during route discovery is not significant compared to periodic route update exchanges by all nodes in the network.

Example: Ad hoc On-Demand Distance Vector Routing (AODV)

Flooding

Flooding is a simple routing algorithm in which every incoming packet is sent through every outgoing link except the one it arrived on. Flooding is used in bridging and in systems such as Usenet and peer-to-peer file sharing, and as part of some routing protocols, including OSPF, DVMRP, and those used in wireless ad hoc networks.
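A toy illustration of flooding with duplicate suppression, where each node remembers packets it has already seen (the topology and packet ID are hypothetical):

    # Flooding: each node re-sends an incoming packet on every link except the
    # one it arrived on, and drops packets it has already seen.
    from collections import deque

    neighbors = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'],
                 'C': ['A', 'B'], 'D': ['B']}  # hypothetical topology

    def flood(source, packet_id):
        seen = {source}
        queue = deque((nbr, source) for nbr in neighbors[source])
        while queue:
            node, came_from = queue.popleft()
            if node in seen:          # duplicate: drop instead of re-forwarding
                continue
            seen.add(node)
            for nbr in neighbors[node]:
                if nbr != came_from:  # every outgoing link except the incoming one
                    queue.append((nbr, node))
        return seen

    print(sorted(flood('A', packet_id=1)))  # ['A', 'B', 'C', 'D'] - all nodes reached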

Hybrid routing

This type of protocol combines the advantages of proactive and reactive routing. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding. The choice of one or the other method requires predetermination for typical cases. The main disadvantages of such algorithms are:

  1. Advantage depends on number of other nodes activated.
  2. Reaction to traffic demand depends on gradient of traffic volume.

Example: Zone Routing Protocol (ZRP)

Position-based routing

Position-based routing methods use information on the exact locations of the nodes. This information is obtained for example via a GPS receiver. Based on the exact location the best path between source and destination nodes can be determined.

Example: "Location-Aided Routing in mobile ad hoc networks" (LAR)

Technical requirements for implementation

An ad hoc network is made up of multiple "nodes" connected by "links."

Links are influenced by the node's resources (e.g., transmitter power, computing power and memory) and behavioral properties (e.g., reliability), as well as link properties (e.g. length-of-link and signal loss, interference and noise). Since links can be connected or disconnected at any time, a functioning network must be able to cope with this dynamic restructuring, preferably in a way that is timely, efficient, reliable, robust, and scalable.

The network must allow any two nodes to communicate by relaying the information via other nodes. A "path" is a series of links that connects two nodes. Various routing methods use one or two paths between any two nodes; flooding methods use all or most of the available paths.

Medium-access control

In most wireless ad hoc networks, the nodes compete for access to a shared wireless medium, often resulting in collisions (interference). Collisions can be handled using centralized scheduling or distributed contention access protocols. Using cooperative wireless communications improves immunity to interference by having the destination node combine self-interference and other-node interference to improve decoding of the desired signals.

Simulation

One key problem in wireless ad hoc networks is foreseeing the variety of possible situations that can occur. As a result, modeling and simulation (M&S) using extensive parameter sweeping and what-if analysis becomes an extremely important paradigm for use in ad hoc networks. One solution is the use of simulation tools like OPNET, NetSim, or ns2. A comparative study of various simulators for VANETs reveals that factors such as constrained road topology, multi-path fading and roadside obstacles, traffic flow models, trip models, varying vehicular speed and mobility, traffic lights, traffic congestion, drivers' behavior, etc., have to be taken into consideration in the simulation process to reflect realistic conditions.

Emulation testbed

In 2009, the U.S. Army Research Laboratory (ARL) and Naval Research Laboratory (NRL) developed a Mobile Ad-Hoc Network emulation testbed, where algorithms and applications were subjected to representative wireless network conditions. The testbed was based on a version of the "MANE" (Mobile Ad hoc Network Emulator) software originally developed by NRL.

Mathematical models

The traditional model is the random geometric graph. Early work included simulating ad hoc mobile networks on sparse and densely connected topologies. Nodes are first scattered randomly in a constrained physical space. Each node then has a predefined fixed cell size (radio range). A node is said to be connected to another node if this neighbor is within its radio range. Nodes are then moved (migrated away) based on a random model, using random walk or Brownian motion. Different mobility patterns and numbers of nodes yield different route lengths and hence different numbers of multi-hops.

[Image: A randomly constructed geometric graph drawn inside a square.]

These are graphs consisting of a set of nodes placed according to a point process in some usually bounded subset of the n-dimensional plane, mutually coupled according to a Boolean probability mass function of their spatial separation (see e.g. unit disk graphs). The connections between nodes may have different weights to model the difference in channel attenuations. One can then study network observables (such as connectivity, centrality, or the degree distribution) from a graph-theoretic perspective. One can further study network protocols and algorithms to improve network throughput and fairness.
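A minimal sketch in the spirit of this model: scatter nodes uniformly in a unit square, connect pairs within a fixed radio range, and test connectivity (all parameter values are illustrative):

    # Random geometric graph: scatter nodes in a unit square and connect any
    # pair closer than the radio range r, then check connectivity.
    import math, random

    def random_geometric_graph(n=50, r=0.25, seed=1):
        rng = random.Random(seed)
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        adj = {i: [] for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(pts[i], pts[j]) <= r:  # within radio range
                    adj[i].append(j)
                    adj[j].append(i)
        return adj

    def is_connected(adj):
        seen, stack = {0}, [0]  # depth-first search from node 0
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == len(adj)

    print(is_connected(random_geometric_graph()))  # True or False per realization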

Security

Most wireless ad hoc networks do not implement any network access control, leaving these networks vulnerable to resource consumption attacks where a malicious node injects packets into the network with the goal of depleting the resources of the nodes relaying the packets.

To thwart or prevent such attacks, it is necessary to employ authentication mechanisms that ensure that only authorized nodes can inject traffic into the network. Even with authentication, these networks are vulnerable to packet-dropping or delaying attacks, whereby an intermediate node drops the packet or delays it, rather than promptly sending it to the next hop.

In a multicast and dynamic environment, establishing temporary 1:1 secure "sessions" using PKI with every other node is not feasible (as is done with HTTPS, most VPNs, etc. at the transport layer). Instead, a common solution is to use pre-shared keys for symmetric, authenticated encryption at the link layer, for example MACsec using AES-256-GCM. With this method, every properly formatted packet received is authenticated and then passed along for decryption, or dropped. It also means the key(s) in each node must be changed more often and simultaneously (e.g. to avoid reusing an IV).
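A sketch of the pre-shared-key pattern described above, using AES-256-GCM from the Python cryptography package (the framing and key handling are simplified for illustration; this is not MACsec itself):

    # Symmetric authenticated encryption with a pre-shared key: every packet is
    # authenticated (and decrypted) or dropped. Requires: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    psk = AESGCM.generate_key(bit_length=256)  # pre-shared, installed on every node

    def seal(key, payload, header):
        nonce = os.urandom(12)  # never reuse a nonce under the same key
        return nonce + AESGCM(key).encrypt(nonce, payload, header)

    def open_or_drop(key, packet, header):
        try:
            return AESGCM(key).decrypt(packet[:12], packet[12:], header)
        except InvalidTag:
            return None          # unauthenticated packet: drop it

    pkt = seal(psk, b"hello neighbor", b"link-hdr")
    print(open_or_drop(psk, pkt, b"link-hdr"))    # b'hello neighbor'
    print(open_or_drop(psk, pkt, b"forged-hdr"))  # None (dropped)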

Trust management

Trust establishment and management in MANETs face challenges due to resource constraints and the complex interdependency of networks. Managing trust in a MANET needs to consider the interactions between the composite cognitive, social, information and communication networks, and take into account the resource constraints (e.g., computing power, energy, bandwidth, time), and dynamics (e.g., topology changes, node mobility, node failure, propagation channel conditions).

Researchers of trust management in MANET suggested that such complex interactions require a composite trust metric that captures aspects of communications and social networks, and corresponding trust measurement, trust distribution, and trust management schemes.

Continuous monitoring of every node within a MANET is necessary for trust and reliability but difficult because (1) it is, by definition, discontinuous, (2) it requires input from the node itself, and (3) it requires input from its "nearby" peers.
