Sunday, July 6, 2025

Zeroth law of thermodynamics

The zeroth law of thermodynamics is one of the four principal laws of thermodynamics. It provides an independent definition of temperature without reference to entropy, which is defined in the second law. The law was established by Ralph H. Fowler in the 1930s, long after the first, second, and third laws had been widely recognized.

The zeroth law states that if two thermodynamic systems are both in thermal equilibrium with a third system, then the two systems are in thermal equilibrium with each other.

Two systems are said to be in thermal equilibrium if they are linked by a wall permeable only to heat, and they do not change over time.

Another formulation by James Clerk Maxwell is "All heat is of the same kind". Another statement of the law is "All diathermal walls are equivalent".

The zeroth law is important for the mathematical formulation of thermodynamics. It makes the relation of thermal equilibrium between systems an equivalence relation, which can represent equality of some quantity associated with each system. A quantity that takes the same value for two systems whenever they can be placed in thermal equilibrium with each other is a scale of temperature. The zeroth law is needed for the definition of such scales, and justifies the use of practical thermometers.

Equivalence relation

A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems. In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not. This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these are not implied by the standard statement of the zeroth law.

If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows:

If a body C, be in thermal equilibrium with two other bodies, A and B, then A and B are in thermal equilibrium with one another.

This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. Thus, again implicitly assuming reflexivity, the zeroth law is often expressed as a right-Euclidean statement:

If two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.

One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: If A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus, the two systems are in thermal equilibrium with each other, or they are in mutual equilibrium. Another consequence of equivalence is that thermal equilibrium is described as a transitive relation:

If A is in thermal equilibrium with B and if B is in thermal equilibrium with C, then A is in thermal equilibrium with C.

A reflexive, transitive relation does not guarantee an equivalence relationship. For the above statement to be true, both reflexivity and symmetry must be implicitly assumed.
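
Spelled out (a standard relation-algebra argument, summarized here rather than taken from the text above), the two properties follow from reflexivity plus the Euclidean property as stated in the zeroth law:

    % Euclidean property (E), as in the statement above:
    %   x \sim z \ \text{and}\ y \sim z \;\Rightarrow\; x \sim y
    \text{Symmetry:}\quad a \sim b,\; b \sim b \;\Rightarrow\; b \sim a
        \quad \text{(apply (E) with } x = b,\ y = a,\ z = b\text{; } b \sim b \text{ holds by reflexivity)}
    \text{Transitivity:}\quad a \sim b,\; c \sim b \;\Rightarrow\; a \sim c
        \quad \text{(apply (E) with } z = b\text{; } c \sim b \text{ follows from } b \sim c \text{ by symmetry)}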

It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid tagging system for the equivalence classes of a set of equilibrated thermodynamic systems, then the systems are in thermal equilibrium if a thermometer gives the same reading for each of them. If the systems are then thermally connected, no subsequent change in the state of either one can occur. If the readings are different, then thermally connecting the two systems causes a change in the states of both. The zeroth law provides no information regarding the final reading.

Foundation of temperature

Nowadays, there are two nearly separate concepts of temperature, the thermodynamic concept, and that of the kinetic theory of gases and other materials.

The zeroth law belongs to the thermodynamic concept, but this is no longer the primary international definition of temperature. The current primary international definition of temperature is in terms of the kinetic energy of freely moving microscopic particles such as molecules, related to temperature through the Boltzmann constant. The present article is about the thermodynamic concept, not about the kinetic theory concept.

The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set (such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary, temperature is just such a labeling process which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yield any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature.

In the space of thermodynamic parameters, zones of constant temperature form surfaces, and these provide a natural ordering of nearby surfaces. One may therefore construct a global temperature function that provides a continuous ordering of states. The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters; thus, for an ideal gas described with three thermodynamic parameters P, V and N, it is a two-dimensional surface.

For example, if two systems of ideal gases are in joint thermodynamic equilibrium across an immovable diathermal wall, then P1V1/N1 = P2V2/N2 where Pi is the pressure in the ith system, Vi is the volume, and Ni is the amount (in moles, or simply the number of atoms) of gas.

The relation PV/N = constant defines surfaces of equal thermodynamic temperature, and one may label them by defining T so that PV/N = RT, where R is some constant. These systems can then be used as thermometers to calibrate other systems. Such systems are known as "ideal gas thermometers".
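
As a minimal numerical sketch (my own illustration; the gas values are invented and SI units are assumed), the labeling works like this:

    # Minimal sketch: using the ideal-gas relation PV/N = RT as a thermometer.
    # Assumes SI units and the molar gas constant R; values are illustrative only.

    R = 8.314  # J/(mol K), molar gas constant

    def ideal_gas_temperature(p_pascal: float, v_m3: float, n_mol: float) -> float:
        """Empirical temperature label T defined via PV/N = RT."""
        return p_pascal * v_m3 / (n_mol * R)

    # Two gas systems separated by an immovable diathermal wall reach equilibrium
    # when P1*V1/N1 == P2*V2/N2, i.e. when they carry the same temperature "tag".
    t1 = ideal_gas_temperature(101325.0, 0.0244, 1.0)  # ~1 mol at ~1 atm, 24.4 L
    t2 = ideal_gas_temperature(202650.0, 0.0122, 1.0)  # same PV/N, halved volume
    print(t1, t2)  # both ~297 K: the systems would be in thermal equilibrium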

In a sense, focused on the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind". But in another sense, heat is transferred in different ranks, as expressed by Arnold Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat." This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence.

Dependence on the existence of walls permeable only to heat

In Constantin Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium".

It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes

It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities.

It is the opinion of Elliott H. Lieb and Jakob Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers. Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Max Planck. On the other hand, Planck (1926) clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes.

History

Writing long before the term "zeroth law" was coined, in 1871 Maxwell discussed at some length ideas which he summarized by the words "All heat is of the same kind". Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping. This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent". This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems.

According to Sommerfeld, Ralph H. Fowler coined the term zeroth law of thermodynamics while discussing the 1935 text by Meghnad Saha and B.N. Srivastava.

They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves are in temperature equilibrium with each other". Then they italicize a self-standing paragraph, as if to state their basic postulate:

Any of the physical properties of A which change with the application of heat may be observed and utilised for the measurement of temperature.

They do not themselves here use the phrase "zeroth law of thermodynamics". There are very many statements of these same physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label zeroth law of thermodynamics.

Fowler & Guggenheim (1936/1965) wrote of the zeroth law as follows:

... we introduce the postulate: If two assemblies are each in thermal equilibrium with a third assembly, they are in thermal equilibrium with each other.

They then proposed that

... it may be shown to follow that the condition for thermal equilibrium between several assemblies is the equality of a certain single-valued function of the thermodynamic states of the assemblies, which may be called the temperature t, any one of the assemblies being used as a "thermometer" reading the temperature t on a suitable scale. This postulate of the "Existence of temperature" could with advantage be known as the zeroth law of thermodynamics.

The first sentence of this present article is a version of this statement. It is not explicitly evident in the existence statement of Fowler and Edward A. Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems.

Inclusive fitness

Inclusive fitness is a conceptual framework in evolutionary biology first defined by W. D. Hamilton in 1964. It is primarily used to aid the understanding of how social traits are expected to evolve in structured populations. It involves partitioning an individual's expected fitness returns into two distinct components: direct fitness returns, the component of a focal individual's fitness that is independent of who it interacts with socially; and indirect fitness returns, the component that is dependent on who it interacts with socially. The direct component of an individual's fitness is often called its personal fitness, while an individual's direct and indirect fitness components taken together are often called its inclusive fitness.

Under an inclusive fitness framework direct fitness returns are realised through the offspring a focal individual produces independent of who it interacts with, while indirect fitness returns are realised by adding up all the effects our focal individual has on the (number of) offspring produced by those it interacts with weighted by the relatedness of our focal individual to those it interacts with. This can be visualised in a sexually reproducing system (assuming identity by descent) by saying that an individual's own child, who carries one half of that individual's genes, represents one offspring equivalent. A sibling's child, who will carry one-quarter of the individual's genes, will then represent 1/2 offspring equivalent (and so on - see coefficient of relationship for further examples).
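
As a toy sketch (my own illustration; the kin categories and coefficients are the standard ones quoted above, and the helper name is invented), this weighting can be written out as:

    # Toy sketch (assumed coefficients of relationship under identity by descent):
    # an actor's indirect fitness is the relatedness-weighted sum of the extra
    # offspring it causes its social partners to produce.

    RELATEDNESS = {"own_child": 0.5, "full_sibling": 0.5, "niece_or_nephew": 0.25}

    def offspring_equivalents(effects: dict[str, int]) -> float:
        """Weight each extra offspring caused in a partner's brood by relatedness,
        expressed relative to an own child (r = 0.5) as one offspring equivalent."""
        total = sum(RELATEDNESS[kin] * count for kin, count in effects.items())
        return total / RELATEDNESS["own_child"]

    # Helping a sibling raise two extra children counts the same as one own child:
    print(offspring_equivalents({"niece_or_nephew": 2}))  # 1.0 offspring equivalent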

Neighbour-modulated fitness is the conceptual inverse of inclusive fitness. Where inclusive fitness calculates an individual’s indirect fitness component by summing the fitness that focal individual receives through modifying the productivities of those it interacts with (its neighbours), neighbour-modulated fitness instead calculates it by summing the effects an individual’s neighbours have on that focal individual’s productivity. When taken over an entire population, these two frameworks give functionally equivalent results. Hamilton’s rule is a particularly important result in the fields of evolutionary ecology and behavioral ecology that follows naturally from the partitioning of fitness into direct and indirect components, as given by inclusive and neighbour-modulated fitness. It enables us to see how the average trait value of a population is expected to evolve under the assumption of small mutational steps.

Kin selection is a well known case whereby inclusive fitness effects can influence the evolution of social behaviours. Kin selection relies on positive relatedness (driven by identity by descent) to enable individuals who positively influence the fitness of those they interact with, at a cost to their own personal fitness, to outcompete individuals employing more selfish strategies. It is thought to be one of the primary mechanisms underlying the evolution of altruistic behaviour, alongside the less prevalent reciprocity (see also reciprocal altruism), and to be of particular importance in enabling the evolution of eusociality among other forms of group living. Inclusive fitness has also been used to explain the existence of spiteful behaviour, where individuals negatively influence the fitness of those they interact with at a cost to their own personal fitness.

Inclusive fitness and neighbour-modulated fitness are both frameworks that leverage the individual as the unit of selection. It is from this that the gene-centered view of evolution emerged: a perspective that has facilitated much of the work done into the evolution of conflict (examples include parent-offspring conflict, interlocus sexual conflict, and intragenomic conflict).

Overview

The British evolutionary biologist W. D. Hamilton showed mathematically that, because other members of a population may share one's genes, a gene can also increase its evolutionary success by indirectly promoting the reproduction and survival of other individuals who also carry that gene. This is variously called "kin theory", "kin selection theory" or "inclusive fitness theory". The most obvious category of such individuals is close genetic relatives, and where these are concerned, the application of inclusive fitness theory is often more straightforwardly treated via the narrower kin selection theory. Hamilton's theory, alongside reciprocal altruism, is considered one of the two primary mechanisms for the evolution of social behaviors in natural species and a major contribution to the field of sociobiology, which holds that some behaviors can be dictated by genes, and therefore can be passed to future generations and may be selected for as the organism evolves.

Belding's ground squirrel provides an example; it gives an alarm call to warn its local group of the presence of a predator. By emitting the alarm, it gives its own location away, putting itself in more danger. In the process, however, the squirrel may protect its relatives within the local group (along with the rest of the group). Therefore, if the effect of the trait influencing the alarm call typically protects the other squirrels in the immediate area, it will lead to the passing on of more copies of the alarm call trait in the next generation than the squirrel could leave by reproducing on its own. In such a case natural selection will increase the trait that influences giving the alarm call, provided that a sufficient fraction of the shared genes include the gene(s) predisposing to the alarm call.

Synalpheus regalis, a eusocial shrimp, is an organism whose social traits meet the inclusive fitness criterion. The larger defenders protect the young juveniles in the colony from outsiders. By ensuring the young's survival, the genes will continue to be passed on to future generations.

Inclusive fitness is more generalized than strict kin selection, which requires that the shared genes are identical by descent. Inclusive fitness is not limited to cases where "kin" ('close genetic relatives') are involved.

Hamilton's rule

Hamilton's rule is most easily derived in the framework of neighbour-modulated fitness, where the fitness of a focal individual is considered to be modulated by the actions of its neighbours. This is the inverse of inclusive fitness, where we consider how a focal individual modulates the fitness of its neighbours. However, taken over the entire population, these two approaches are equivalent to each other so long as fitness remains linear in trait value. A simple derivation of Hamilton's rule can be gained via the Price equation as follows. If an infinite population is assumed, such that any non-selective effects can be ignored, the Price equation can be written as:

    \bar{w}\,\Delta\bar{z} = \mathrm{Cov}(w_i, z_i)

Where z represents trait value and w represents fitness, either taken for an individual i or averaged over the entire population (denoted by overbars). If fitness is linear in trait value, the fitness for an individual can be written as:

    w_i = \alpha - C z_i + B \bar{z}'_i

Where \alpha is the component of an individual's fitness which is independent of trait value, C parameterizes the effect of individual i's phenotype on its own fitness (written negative, by convention, to represent a fitness cost), \bar{z}'_i is the average trait value of individual i's neighbours, and B parameterizes the effect of individual i's neighbours on its fitness (written positive, by convention, to represent a fitness benefit). Substituting into the Price equation then gives:

    \bar{w}\,\Delta\bar{z} = \mathrm{Cov}(\alpha - C z_i + B \bar{z}'_i,\; z_i)

Since \alpha by definition does not covary with z_i, this rearranges to:

    \bar{w}\,\Delta\bar{z} = B\,\mathrm{Cov}(\bar{z}'_i, z_i) - C\,\mathrm{Var}(z_i)

The term \bar{w}\,\mathrm{Var}(z_i) must, by definition, be greater than 0. This is because variances can never be negative, and negative mean fitness is undefined (if mean fitness is 0 the population has crashed; similarly, 0 variance would imply a monomorphic population; in both cases a change in mean trait value is impossible). It can then be said that mean trait value will increase (\Delta\bar{z} > 0) when:

    B\,\mathrm{Cov}(\bar{z}'_i, z_i) - C\,\mathrm{Var}(z_i) > 0

or

    B\,\frac{\mathrm{Cov}(\bar{z}'_i, z_i)}{\mathrm{Var}(z_i)} - C > 0

Giving Hamilton's rule, rB - C > 0, where relatedness (r) is a regression coefficient of the form r = \mathrm{Cov}(\bar{z}'_i, z_i)/\mathrm{Var}(z_i), i.e. the slope of the regression of neighbour trait value on individual trait value. Relatedness here can vary between a value of 1 (only interacting with individuals of the same trait value) and -1 (only interacting with individuals of a [most] different trait value), and will be 0 when all individuals in the population interact with equal likelihood.
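
A small numerical check of this derivation (my own illustration; the population size, coefficients, and neighbour structure are invented) confirms that the sign of the change in mean trait value matches the sign of rB - C:

    # Numerical check of the Price-equation derivation above: the sign of the
    # change in mean trait value should match the sign of r*B - C, with
    # r = Cov(z'_i, z_i) / Var(z_i). All values below are assumed, for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    alpha, B, C = 2.0, 0.5, 0.2           # baseline fitness, benefit, cost

    z = rng.uniform(0.0, 1.0, n)          # individual trait values
    noise = rng.uniform(0.0, 1.0, n)
    z_nbr = 0.6 * z + 0.4 * noise         # neighbours' mean trait, correlated with z

    w = alpha - C * z + B * z_nbr         # fitness, linear in trait value

    # Price equation (no transmission bias): w_bar * dz_bar = Cov(w, z)
    dz_bar = np.cov(w, z)[0, 1] / w.mean()

    r = np.cov(z_nbr, z)[0, 1] / z.var(ddof=1)  # relatedness as a regression slope
    print(f"predicted sign of change: {np.sign(r * B - C):+.0f}")
    print(f"observed  dz_bar        : {dz_bar:+.6f}")  # same sign as r*B - C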

Fitness in practice, however, does not tend to be linear in trait value; linearity would imply that an increase to an infinitely large trait value is just as valuable to fitness as a similar increase to a very small trait value. Consequently, to apply Hamilton's rule to biological systems, the conditions under which fitness can be approximated as linear in trait value must first be found. There are two main methods used to approximate fitness as being linear in trait value: performing a partial regression with respect to both the focal individual's trait value and its neighbours' average trait value, or taking a first-order Taylor series approximation of fitness with respect to trait value. Performing a partial regression requires minimal assumptions, but only provides a statistical relationship as opposed to a mechanistic one, and cannot be extrapolated beyond the dataset that it was generated from. Linearizing via a Taylor series approximation, however, provides a powerful mechanistic relationship (see also causal model), but requires the assumption that evolution proceeds in sufficiently small mutational steps that the difference in trait value between an individual and its neighbours is close to 0 (in accordance with Fisher's geometric model), although in practice this approximation can often still retain predictive power under larger mutational steps.

As a first order approximation (linear in trait value), Hamilton's rule can only inform about how the mean trait value in a population is expected to change (directional selection). It contains no information about how the variance in trait value is expected to change (disruptive selection). As such it cannot be considered sufficient to determine evolutionary stability, even when Hamilton's rule predicts no change in trait value. This is because disruptive selection terms, and subsequent conditions for evolutionary branching, must instead be obtained from second order approximations (quadratic in trait value) of fitness.

Gardner et al. (2007) suggest that Hamilton's rule can be applied to multi-locus models, but that it should be done at the point of interpreting theory, rather than the starting point of enquiry. They suggest that one should "use standard population genetics, game theory, or other methodologies to derive a condition for when the social trait of interest is favoured by selection and then use Hamilton's rule as an aid for conceptualizing this result". It is now becoming increasingly popular to use adaptive dynamics approaches to gain selection conditions which are directly interpretable with respect to Hamilton's rule.

Altruism

The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behaviour to be helpful and protective of relatives and their offspring, this behaviour also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. In formal terms, if such a complex of genes arises, Hamilton's rule (rb > c) specifies the selective criteria (in terms of cost, benefit and relatedness) for such a trait to increase in frequency in the population. Hamilton noted that inclusive fitness theory does not by itself predict that a species will necessarily evolve such altruistic behaviours, since an opportunity or context for interaction between individuals is a more primary and necessary requirement in order for any social interaction to occur in the first place. As Hamilton put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start." In other words, while inclusive fitness theory specifies a set of necessary criteria for the evolution of altruistic traits, it does not specify a sufficient condition for their evolution in any given species. More primary necessary criteria include the existence of gene complexes for altruistic traits in the gene pool, as mentioned above, and especially that "a suitable social object is available", as Hamilton noted. The American evolutionary biologist Paul W. Sherman gives a fuller discussion of Hamilton's latter point:

To understand any species' pattern of nepotism, two questions about individuals' behavior must be considered: (1) what is reproductively ideal?, and (2) what is socially possible? With his formulation of "inclusive fitness," Hamilton suggested a mathematical way of answering (1). Here I suggest that the answer to (2) depends on demography, particularly its spatial component, dispersal, and its temporal component, mortality. Only when ecological circumstances affecting demography consistently make it socially possible will nepotism be elaborated according to what is reproductively ideal. For example, if dispersing is advantageous and if it usually separates relatives permanently, as in many birds, on the rare occasions when nestmates or other kin live in proximity, they will not preferentially cooperate. Similarly, nepotism will not be elaborated among relatives that have infrequently coexisted in a population's or a species' evolutionary history. If an animal's life history characteristics usually preclude the existence of certain relatives, that is if kin are usually unavailable, the rare coexistence of such kin will not occasion preferential treatment. For example, if reproductives generally die soon after zygotes are formed, as in many temperate zone insects, the unusual individual that survives to interact with its offspring is not expected to behave parentally.

The occurrence of sibling cannibalism in several species underlines the point that inclusive fitness theory should not be understood to simply predict that genetically related individuals will inevitably recognize and engage in positive social behaviours towards genetic relatives. Only in species that have the appropriate traits in their gene pool, and in which individuals typically interacted with genetic relatives in the natural conditions of their evolutionary history, will social behaviour potentially be elaborated, and consideration of the evolutionarily typical demographic composition of grouping contexts of that species is thus a first step in understanding how selection pressures upon inclusive fitness have shaped the forms of its social behaviour. Richard Dawkins gives a simplified illustration:

If families [genetic relatives] happen to go around in groups, this fact provides a useful rule of thumb for kin selection: 'care for any individual you often see'.

Evidence from a variety of species including primates and other social mammals suggests that contextual cues (such as familiarity) are often significant proximate mechanisms mediating the expression of altruistic behaviour, regardless of whether the participants are always in fact genetic relatives or not. This is nevertheless evolutionarily stable since selection pressure acts on typical conditions, not on the rare occasions where actual genetic relatedness differs from that normally encountered. Inclusive fitness theory thus does not imply that organisms evolve to direct altruism towards genetic relatives. Many popular treatments do however promote this interpretation, as illustrated in a review:

[M]any misunderstandings persist. In many cases, they result from conflating "coefficient of relatedness" and "proportion of shared genes," which is a short step from the intuitively appealing—but incorrect—interpretation that "animals tend to be altruistic toward those with whom they share a lot of genes." These misunderstandings don't just crop up occasionally; they are repeated in many writings, including undergraduate psychology textbooks—most of them in the field of social psychology, within sections describing evolutionary approaches to altruism. (Park 2007, p860)

Such misunderstandings of inclusive fitness' implications for the study of altruism, even amongst professional biologists utilizing the theory, are widespread, prompting prominent theorists to regularly attempt to highlight and clarify the mistakes. An example of attempted clarification is West et al. (2010):

In his original papers on inclusive fitness theory, Hamilton pointed out that a sufficiently high relatedness to favour altruistic behaviours could accrue in two ways—kin discrimination or limited dispersal. There is a huge theoretical literature on the possible role of limited dispersal, as well as experimental evolution tests of these models. However, despite this, it is still sometimes claimed that kin selection requires kin discrimination. Furthermore, a large number of authors appear to have implicitly or explicitly assumed that kin discrimination is the only mechanism by which altruistic behaviours can be directed towards relatives... [T]here is a huge industry of papers reinventing limited dispersal as an explanation for cooperation. The mistakes in these areas seem to stem from the incorrect assumption that kin selection or indirect fitness benefits require kin discrimination (misconception 5), despite the fact that Hamilton pointed out the potential role of limited dispersal in his earliest papers on inclusive fitness theory.

Green-beard effect

As well as interactions in reliable contexts of genetic relatedness, altruists may also have some way to recognize altruistic behaviour in unrelated individuals and be inclined to support them. As Dawkins points out in The Selfish Gene and The Extended Phenotype, this must be distinguished from the green-beard effect.

The green-beard effect is the action of a gene (or several closely linked genes) that:

  1. Produces a phenotype.
  2. Allows recognition of that phenotype in others.
  3. Causes the individual to preferentially treat other individuals with the same gene.

The green-beard effect was originally a thought experiment by Hamilton in his publications on inclusive fitness in 1964, although it had not yet been observed. As of today, it has been observed in a few species. Its rarity is probably due to its susceptibility to 'cheating', whereby individuals can gain the trait that confers the advantage without the altruistic behaviour. This would normally occur via the crossing over of chromosomes, which happens frequently, often rendering the green-beard effect a transient state. However, Wang et al. have shown that in one of the species where the effect is common (fire ants), recombination cannot occur due to a large genetic inversion, essentially forming a supergene. This, along with homozygote inviability at the green-beard loci, allows for the extended maintenance of the green-beard effect.

Equally, cheaters may not be able to invade the green-beard population if the mechanism for preferential treatment and the phenotype are intrinsically linked. In budding yeast (Saccharomyces cerevisiae), the dominant allele FLO1 is responsible for flocculation (self-adherence between cells) which helps protect them against harmful substances such as ethanol. While 'cheater' yeast cells occasionally find their way into the biofilm-like substance that is formed from FLO1 expressing yeast, they cannot invade as the FLO1 expressing yeast will not bind to them in return, and thus the phenotype is intrinsically linked to the preference.

Parent–offspring conflict and optimization

Early writings on inclusive fitness theory (including Hamilton 1964) used K in place of B/C. Thus Hamilton's rule was expressed as the statement that

    K > \frac{1}{r}

is the necessary and sufficient condition for selection for altruism.

Where B is the gain to the beneficiary, C is the cost to the actor and r is the number of its own offspring equivalents the actor expects in one of the offspring of the beneficiary. r is either called the coefficient of relatedness or coefficient of relationship, depending on how it is computed. The method of computing has changed over time, as has the terminology. It is not clear whether or not changes in the terminology followed changes in computation.

Robert Trivers (1974) defined "parent-offspring conflict" as any case where

    1 < K < 2

i.e., K is between 1 and 2. The benefit is greater than the cost but is less than twice the cost. In this case, the parent would wish the offspring to behave as if r were 1 between siblings, although it is actually presumed to be 1/2 or closely approximated by 1/2. In other words, a parent would wish its offspring to give up ten offspring in order to raise 11 nieces and nephews. The offspring, when not manipulated by the parent, would require at least 21 nieces and nephews to justify the sacrifice of 10 of its own offspring.
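
As a quick check of these numbers, here is a small sketch (my own; the helper names are invented and the coefficients of relationship are the standard ones: parent to grandchild 1/4, own child 1/2, sibling's child 1/4):

    # Sketch of the 10-offspring example above: each party weights grandchildren
    # and nieces/nephews by its own relatedness to them (assumed standard values).

    def parent_prefers(own_lost: int, nieces_gained: int) -> bool:
        # The parent is equally related (r = 1/4) to all of its grandchildren,
        # so it simply counts heads: 11 nieces/nephews beat 10 lost offspring.
        return 0.25 * nieces_gained > 0.25 * own_lost

    def offspring_prefers(own_lost: int, nieces_gained: int) -> bool:
        # The offspring values its own young at r = 1/2 and siblings' young at 1/4.
        return 0.25 * nieces_gained > 0.5 * own_lost

    print(parent_prefers(10, 11))     # True: parent favours the sacrifice
    print(offspring_prefers(10, 11))  # False: offspring resists
    print(offspring_prefers(10, 21))  # True: at least 21 nieces/nephews needed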

The parent is trying to maximize its number of grandchildren, while the offspring is trying to maximize the number of its own offspring equivalents (via offspring and nieces and nephews) it produces. If the parent cannot manipulate the offspring and therefore loses in the conflict, the grandparents with the fewest grandchildren seem to be selected for. In other words, if the parent has no influence on the offspring's behaviour, grandparents with fewer grandchildren increase in frequency in the population.

By extension, parents with the fewest offspring will also increase in frequency. This seems to go against Ronald Fisher's "Fundamental Theorem of Natural Selection", which states that the change in fitness over the course of a generation equals the variance in fitness at the beginning of the generation. Variance is defined as the square of a quantity (the standard deviation) and as a square must always be positive or zero. That would imply that fitness could never decrease as time passes. This goes along with the intuitive idea that lower fitness cannot be selected for. During parent-offspring conflict, the number of stranger equivalents reared per offspring equivalent reared is going down. Consideration of this phenomenon caused Orlove (1979) and Grafen (2006) to say that nothing is being maximized.

According to Trivers, if Sigmund Freud had tried to explain intra-family conflict after Hamilton instead of before him, he would have attributed the motivation for the conflict and for the castration complex to resource allocation issues rather than to sexual jealousy.

Incidentally, when K = 1 or K = 2, the average number of offspring per parent stays constant as time goes by. When K < 1 or K > 2, the average number of offspring per parent increases as time goes by.

The term "gene" can refer to a locus (location) on an organism's DNA—a section that codes for a particular trait. Alternative versions of the code at that location are called "alleles." If there are two alleles at a locus, one of which codes for altruism and the other for selfishness, an individual who has one of each is said to be a heterozygote at that locus. If the heterozygote uses half of its resources raising its own offspring and the other half helping its siblings raise theirs, that condition is called codominance. If there is codominance the "2" in the above argument is exactly 2. If by contrast, the altruism allele is more dominant, then the 2 in the above would be replaced by a number smaller than 2. If the selfishness allele is the more dominant, something greater than 2 would replace the 2.

Opposing view

A 2010 paper by Martin Nowak, Corina Tarnita, and E. O. Wilson suggested that standard natural selection theory is superior to inclusive fitness theory, stating that the interactions between cost and benefit cannot be explained only in terms of relatedness. This, Nowak said, makes Hamilton's rule at worst superfluous and at best ad hoc. Gardner in turn was critical of the paper, describing it as "a really terrible article", and along with other co-authors has written a reply, submitted to Nature. The disagreement stems from a long history of confusion over what Hamilton's rule represents. Hamilton's rule gives the direction of mean phenotypic change (directional selection) so long as fitness is linear in phenotype, and the utility of Hamilton's rule is simply a reflection of when it is suitable to consider fitness as being linear in phenotype. The primary (and strictest) case is when evolution proceeds in very small mutational steps. Under such circumstances Hamilton's rule then emerges as the result of taking a first order Taylor series approximation of fitness with regards to phenotype. This assumption of small mutational steps (otherwise known as δ-weak selection) is often made on the basis of Fisher's geometric model and underpins much of modern evolutionary theory.

In work prior to Nowak et al. (2010), various authors derived different versions of a formula for r, all designed to preserve Hamilton's rule. Orlove noted that if a formula for r is defined so as to ensure that Hamilton's rule is preserved, then the approach is by definition ad hoc. However, he published an unrelated derivation of the same formula for r, a derivation designed to preserve two statements about the rate of selection, which on its own was similarly ad hoc. Orlove argued that the existence of two unrelated derivations of the formula for r reduces or eliminates the ad hoc nature of the formula, and of inclusive fitness theory as well. The derivations were demonstrated to be unrelated by corresponding parts of the two identical formulae for r being derived from the genotypes of different individuals. The parts that were derived from the genotypes of different individuals were the terms to the right of the minus sign in the covariances in the two versions of the formula for r. By contrast, the terms to the left of the minus sign in both derivations come from the same source. In populations containing only two trait values, it has since been shown that r is in fact Sewall Wright's coefficient of relationship.

Engles (1982) suggested that the c/b ratio be considered as a continuum of this behavioural trait rather than as discontinuous in nature. From this approach, fitness transactions can be better observed, because more affects an individual's fitness than simple losses and gains.

Saturday, July 5, 2025

Ocean temperature

https://en.wikipedia.org/wiki/Ocean_temperature
[Figure: Ocean temperature versus depth (depth on the vertical axis), showing several thermoclines (thermal layers) that vary with season and latitude; the temperature at zero depth is the sea surface temperature.]

The ocean temperature plays a crucial role in the global climate system, ocean currents and marine habitats. It varies depending on depth, geographical location and season. Not only does temperature vary in seawater; so does salinity. Warm surface water is generally saltier than the cooler deep or polar waters. In polar regions, the upper layers of ocean water are cold and fresh. Deep ocean water is cold, salty water found deep below the surface of Earth's oceans. This water has a uniform temperature of around 0–3 °C. The ocean temperature also depends on the amount of solar radiation falling on its surface. In the tropics, with the Sun nearly overhead, the temperature of the surface layers can rise to over 30 °C (86 °F). Near the poles the temperature in equilibrium with the sea ice is about −2 °C (28 °F).

There is a continuous large-scale circulation of water in the oceans. One part of it is the thermohaline circulation (THC). It is driven by global density gradients created by surface heat and freshwater fluxes. Warm surface currents cool as they move away from the tropics; the water becomes denser and sinks. Changes in temperature and density move the cold water back towards the equator as a deep sea current, and it eventually wells up again towards the surface.

Ocean temperature as a term applies to the temperature in the ocean at any depth. It can also apply specifically to ocean temperatures that are not near the surface. In this case it is synonymous with deep ocean temperature.

It is clear that the oceans are warming as a result of climate change and this rate of warming is increasing. The upper ocean (above 700 m) is warming fastest, but the warming trend extends throughout the ocean. In 2022, the global ocean was the hottest ever recorded by humans.

Definition and types

Sea surface temperature

[Figure: Sea surface temperature since 1979 in the extrapolar region (between 60 degrees south and 60 degrees north latitude).]

Sea surface temperature (or ocean surface temperature) is the temperature of ocean water close to the surface. The exact meaning of surface varies in the literature and in practice. It is usually between 1 millimetre (0.04 in) and 20 metres (70 ft) below the sea surface. Sea surface temperatures greatly modify air masses in the Earth's atmosphere within a short distance of the shore. The thermohaline circulation has a major impact on average sea surface temperature throughout most of the world's oceans.

Deep ocean temperature

Experts refer to the temperature further below the surface as ocean temperature or deep ocean temperature. Ocean temperatures more than 20 metres below the surface vary by region and time. They contribute to variations in ocean heat content and ocean stratification. The increase of both ocean surface temperature and deeper ocean temperature is an important effect of climate change on oceans.

Deep ocean water is the name for cold, salty water found deep below the surface of Earth's oceans. Deep ocean water makes up about 90% of the volume of the oceans. Deep ocean water has a very uniform temperature of around 0–3 °C. Its salinity is about 3.5%, or 35 ppt (parts per thousand).

Relevance

Ocean temperature and dissolved oxygen concentrations have a big influence on many aspects of the ocean. These two key parameters affect the ocean's primary productivity, the oceanic carbon cycle, nutrient cycles, and marine ecosystems. They work in conjunction with salinity and density to control a range of processes. These include mixing versus stratification, ocean currents and the thermohaline circulation.

Ocean heat content

Experts calculate ocean heat content by using ocean temperatures at different depths.

[Figure: Ocean heat content (OHC), which has been increasing for decades as the ocean absorbs most of the excess heat resulting from greenhouse gas emissions from human activities; OHC is shown calculated to water depths of 700 and 2000 meters.]

Ocean heat content (OHC) or ocean heat uptake (OHU) is the energy absorbed and stored by oceans. It is an important indicator of global warming. Ocean heat content is calculated by measuring ocean temperature at many different locations and depths, and integrating the areal density of a change in enthalpic energy over an ocean basin or entire ocean.
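
To make the integration step concrete, the following sketch (my own; the density, specific heat, and warming profile are assumed illustrative values, not measured data) computes a heat-content change per unit area from a temperature-anomaly profile:

    # Minimal sketch of an ocean-heat-content change: integrate the temperature
    # anomaly over depth and scale by density and specific heat capacity.
    # rho and c_p are assumed typical seawater values; the profile is made up.
    import numpy as np

    RHO = 1025.0  # kg/m^3, reference seawater density (assumed)
    C_P = 3990.0  # J/(kg K), specific heat of seawater (assumed)

    depth = np.linspace(0.0, 700.0, 71)     # m, 10 m spacing down to 700 m
    delta_t = 0.8 * np.exp(-depth / 200.0)  # K, illustrative warming profile

    # Heat content change per unit area (J/m^2), trapezoidal integration over depth
    ohc_per_area = RHO * C_P * np.trapz(delta_t, depth)
    print(f"{ohc_per_area:.3e} J/m^2")  # multiply by basin area for total joules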

Between 1971 and 2018, a steady upward trend in ocean heat content accounted for over 90% of Earth's excess energy from global warming. Scientists estimate a 1961–2022 warming trend of 0.43 ± 0.08 W/m², accelerating at about 0.15 ± 0.04 W/m² per decade. By 2020, about one third of the added energy had propagated to depths below 700 meters. The five highest ocean heat observations to a depth of 2000 meters all occurred in the period 2020–2024. The main driver of this increase has been human-caused greenhouse gas emissions.

Measurements

There are various ways to measure ocean temperature. Below the sea surface, it is important to refer to the specific depth of measurement as well as the general temperature, because temperature varies a great deal with depth. This is especially the case during the day, when low wind speed and a lot of sunshine may lead to the formation of a warm layer at the ocean surface and large changes in temperature with depth. Experts call these strong daytime vertical temperature gradients a diurnal thermocline.

The basic technique involves lowering a device to measure temperature and other parameters electronically. This device is called a CTD, which stands for conductivity, temperature, and depth. It continuously sends the data up to the ship via a conducting cable. The device is usually mounted on a frame that includes water sampling bottles. Since the 2010s, autonomous vehicles such as gliders or mini-submersibles have been increasingly available. They carry the same CTD sensors, but operate independently of a research ship.

Scientists can deploy CTD systems from research ships, on moorings, on gliders, and even on seals. With research ships they receive data through the conducting cable. For the other methods they use telemetry.

There are other ways of measuring sea surface temperature. In this near-surface layer, measurements are possible using thermometers or satellites with spectroscopy. Weather satellites have been available to determine this parameter since 1967. Scientists created the first global composites in 1970.

The Advanced Very High Resolution Radiometer (AVHRR) is widely used to measure sea surface temperature from space.

There are various devices to measure ocean temperatures at different depths. These include the Nansen bottle, bathythermograph, CTD, or ocean acoustic tomography. Moored and drifting buoys also measure sea surface temperatures. Examples are those deployed by the Global Drifter Program and the National Data Buoy Center. The World Ocean Database Project is the largest database for temperature profiles from all of the world’s oceans.

A small test fleet of deep Argo floats aims to extend the measurement capability down to about 6000 meters. It will accurately sample temperature for a majority of the ocean volume once it is in full use.

The most frequently used measurement instruments on ships and buoys are thermistors and mercury thermometers. Scientists often use mercury thermometers to measure the temperature of surface waters. They can put them in buckets dropped over the side of a ship. To measure deeper temperatures they put them on Nansen bottles.

Monitoring through Argo program

Argo is an international programme for researching the ocean. It uses profiling floats to observe temperature, salinity and currents. Recently it has observed bio-optical properties in the Earth's oceans. It has been operating since the early 2000s. The real-time data it provides support climate and oceanographic research. A special research interest is to quantify the ocean heat content (OHC). The Argo fleet consists of almost 4000 drifting "Argo floats" (as profiling floats used by the Argo program are often called) deployed worldwide. Each float weighs 20–30 kg. In most cases probes drift at a depth of 1000 metres. Experts call this the parking depth. Every 10 days, by changing their buoyancy, they dive to a depth of 2000 metres and then move to the sea-surface. As they move they measure conductivity and temperature profiles as well as pressure. Scientists calculate salinity and density from these measurements. Seawater density is important in determining large-scale motions in the ocean.
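
As a rough illustration of that last step (my own sketch using a linearized equation of state with assumed textbook coefficients; operational Argo processing uses the full TEOS-10 formulas), density can be estimated from temperature and salinity like this:

    # Rough sketch: a linearized equation of state for seawater density.
    # The coefficients below are assumed textbook approximations, for
    # illustration only; real processing uses the full TEOS-10 equation of state.

    RHO0 = 1027.0        # kg/m^3, reference density at T0, S0
    T0, S0 = 10.0, 35.0  # reference temperature (deg C) and salinity (g/kg)
    ALPHA = 1.7e-4       # 1/K, thermal expansion coefficient (assumed)
    BETA = 7.6e-4        # kg/g, haline contraction coefficient (assumed)

    def density(temp_c: float, salinity: float) -> float:
        """Linearized seawater density: colder and saltier water is denser."""
        return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

    # A cold, salty deep sample is denser than a warm surface sample:
    print(density(2.0, 34.9))   # ~1028 kg/m^3
    print(density(20.0, 35.5))  # ~1026 kg/m^3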

Ocean warming

[Figure: Temperature changes from 1960 to 2019 across each ocean, starting at the Southern Ocean around Antarctica.]

It is clear that the ocean is warming as a result of climate change, and this rate of warming is increasing. In 2022 the global ocean was the warmest ever recorded by humans. This is determined by the ocean heat content, which in 2022 exceeded the previous 2021 maximum. The steady rise in ocean temperatures is an unavoidable result of the Earth's energy imbalance, which is primarily caused by rising levels of greenhouse gases. Between pre-industrial times and the 2011–2020 decade, the ocean's surface has heated between 0.68 and 1.01 °C.

The majority of ocean heat gain occurs in the Southern Ocean. For example, between the 1950s and the 1980s, the temperature of the Antarctic Southern Ocean rose by 0.17 °C (0.31 °F), nearly twice the rate of the global ocean.

The warming rate varies with depth. The upper ocean (above 700 m) is warming the fastest. At an ocean depth of a thousand metres the warming occurs at a rate of nearly 0.4 °C per century (data from 1981 to 2019). In deeper zones of the ocean (globally speaking), at 2000 metres depth, the warming has been around 0.1 °C per century. The warming pattern is different for the Antarctic Ocean (at 55°S), where the highest warming (0.3 °C per century) has been observed at a depth of 4500 m.

Overall, scientists project that all regions of the oceans will warm by 2050, but models disagree about the SST changes expected in the subpolar North Atlantic, the equatorial Pacific, and the Southern Ocean. The future global mean SST increase for the period 1995–2014 to 2081–2100 is 0.86 °C under the most modest greenhouse gas emissions scenarios, and up to 2.89 °C under the most severe emissions scenarios.

A study published in 2025 in Environmental Research Letters reported that global mean sea surface temperature increases had more than quadrupled, from 0.06 K per decade during 1985–89 to 0.27 K per decade for 2019–23. The researchers projected that the increase inferred over the past 40 years would likely be exceeded within the next 20 years.

Causes

The cause of recent observed changes is the warming of the Earth due to human-caused emissions of greenhouse gases such as carbon dioxide and methane. Growing concentrations of greenhouse gases increase Earth's energy imbalance, further warming surface temperatures. The ocean takes up most of the added heat in the climate system, raising ocean temperatures.

Main physical effects

Increased stratification and lower oxygen levels

Higher air temperatures warm the ocean surface, and this leads to greater ocean stratification. Reduced mixing of the ocean layers stabilises warm water near the surface. At the same time it reduces cold, deep water circulation. The reduced vertical mixing lessens the ability of the ocean to absorb heat. This directs a larger fraction of future warming toward the atmosphere and land. Energy available for tropical cyclones and other storms is likely to increase. Nutrients for fish in the upper ocean layers are set to decrease. This is also likely to reduce the capacity of the oceans to store carbon.

Warmer water cannot contain as much oxygen as cold water. Increased thermal stratification may reduce the supply of oxygen from the surface waters to deeper waters. This would further decrease the water's oxygen content. This process is called ocean deoxygenation. The ocean has already lost oxygen throughout the water column. Oxygen minimum zones are expanding worldwide.

Changing ocean currents

Varying temperatures associated with sunlight and air temperatures at different latitudes cause ocean currents. Prevailing winds and the different densities of saline and fresh water are another cause of currents. Air tends to be warmed and thus rise near the equator, then cool and thus sink slightly further poleward. Near the poles, cool air sinks, but is warmed and rises as it then travels along the surface equatorward. The sinking and upwelling that occur in lower latitudes, and the driving force of the winds on surface water, mean the ocean currents circulate water throughout the entire sea. Global warming on top of these processes causes changes to currents, especially in the regions where deep water is formed.

In the geologic past

Scientists believe the sea temperature was much higher in the Precambrian period. Such temperature reconstructions derive from oxygen and silicon isotopes in rock samples. These reconstructions suggest the ocean had a temperature of 55–85 °C 2,000 to 3,500 million years ago. It then cooled to milder temperatures of between 10 and 40 °C by 1,000 million years ago. Reconstructed proteins from Precambrian organisms also provide evidence that the ancient world was much warmer than today.

The Cambrian Explosion approximately 538.8 million years ago was a key event in the evolution of life on Earth. This event took place at a time when scientists believe sea surface temperatures reached about 60 °C. Such high temperatures are above the upper thermal limit of 38 °C for modern marine invertebrates and would seem to preclude a major biological revolution.

During the later Cretaceous period, from 100 to 66 million years ago, average global temperatures reached their highest level in the last 200 million years or so. This was probably the result of the configuration of the continents during this period, which allowed for improved circulation in the oceans and discouraged the formation of large-scale ice sheets.

Data from an oxygen isotope database indicate that there have been seven global warming events during the geologic past. These include the Late Cambrian, Early Triassic, Late Cretaceous, and the Paleocene-Eocene transition. The surface of the sea was about 5–30 °C warmer than today during these warming periods.
