
Tuesday, July 8, 2025

Absolute zero

From Wikipedia, the free encyclopedia
Zero kelvin (−273.15 °C) is defined as absolute zero.

Absolute zero is the lowest possible temperature, a state at which a system's internal energy, and in ideal cases entropy, reach their minimum values. Absolute zero is defined as 0 K on the Kelvin scale, equivalent to −273.15 °C on the Celsius scale and −459.67 °F on the Fahrenheit scale. The Kelvin and Rankine temperature scales set their zero points at absolute zero by design. This limit can be estimated by extrapolating the ideal gas law to the temperature at which the volume or pressure of a classical gas becomes zero.

At absolute zero, there is no thermal motion. However, due to quantum effects, the particles still exhibit minimal motion mandated by the Heisenberg uncertainty principle and, for a system of fermions, the Pauli exclusion principle. Even if absolute zero could be achieved, this residual quantum motion would persist.

Although absolute zero can be approached, it cannot be reached. Some isentropic processes, such as adiabatic expansion, can lower the system's temperature without relying on a colder medium. Nevertheless, the third law of thermodynamics implies that no physical process can reach absolute zero in a finite number of steps. As a system nears this limit, further reductions in temperature become increasingly difficult, regardless of the cooling method used. In the 21st century, scientists have achieved temperatures below 100 picokelvin (pK). At low temperatures, matter displays exotic quantum phenomena such as superconductivity, superfluidity, and Bose–Einstein condensation.

Ideal gas laws

Pressure–temperature plots for three different gas samples, measured at constant volume, all extrapolate to zero at the same point, the absolute zero.

For an ideal gas, the pressure at constant volume decreases linearly with temperature, and the volume at constant pressure also decreases linearly with temperature. When these relationships are expressed using the Celsius scale, both pressure and volume extrapolate to zero at approximately −273.15 °C. This implies the existence of a lower bound on temperature, beyond which the gas would have negative pressure or volume—an unphysical result.

To resolve this, the concept of absolute temperature is introduced, with 0 kelvins defined as the point at which pressure or volume would vanish in an ideal gas. This temperature corresponds to −273.15 °C, and is referred to as absolute zero. The ideal gas law is therefore formulated in terms of absolute temperature to remain consistent with observed gas behavior and physical limits.
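As a concrete illustration of this extrapolation, the sketch below (Python; the data are synthetic, generated from the ideal gas law itself rather than taken from any experiment) fits a straight line to constant-volume pressure readings and solves for the temperature at which the pressure would vanish:

```python
# Minimal sketch: extrapolate constant-volume pressure data to P = 0.
# The "measurements" are synthetic ideal-gas values, for illustration only.
import numpy as np

t_celsius = np.array([0.0, 25.0, 50.0, 75.0, 100.0])       # thermometer readings
p_kpa = 101.325 * (t_celsius + 273.15) / 273.15            # ideal-gas pressures

slope, intercept = np.polyfit(t_celsius, p_kpa, 1)         # fit P = slope*t + intercept
t_abs_zero = -intercept / slope                            # temperature where P -> 0
print(f"extrapolated absolute zero: {t_abs_zero:.2f} °C")  # ≈ -273.15 °C
```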

Absolute temperature scales

Absolute temperature is conventionally measured on the Kelvin scale (using Celsius-scaled increments) and, more rarely, on the Rankine scale (using Fahrenheit-scaled increments). Absolute temperature measurement is uniquely determined up to a multiplicative constant that specifies the size of the degree, so the ratio of two absolute temperatures, T2/T1, is the same on all scales.
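A minimal sketch of this scale-independence of ratios, using the standard conversions (the function names are ours):

```python
# Ratios of absolute temperatures agree across absolute scales (Kelvin,
# Rankine) but not on Celsius, whose zero point is arbitrary.
def kelvin_to_rankine(t_k): return t_k * 9.0 / 5.0
def kelvin_to_celsius(t_k): return t_k - 273.15

t1_k, t2_k = 150.0, 300.0
print(t2_k / t1_k)                                          # 2.0 (Kelvin)
print(kelvin_to_rankine(t2_k) / kelvin_to_rankine(t1_k))    # 2.0 (Rankine)
print(kelvin_to_celsius(t2_k) / kelvin_to_celsius(t1_k))    # not 2.0 (Celsius)
```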

Absolute temperature also emerges naturally in statistical mechanics. In the Maxwell–Boltzmann, Fermi–Dirac, and Bose–Einstein distributions, absolute temperature appears in the exponential factor that determines how particles populate energy states. Specifically, the relative number of particles at a given energy E depends exponentially on E/kT, where k is the Boltzmann constant and T is the absolute temperature.
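For example, the following sketch evaluates the Boltzmann factor exp(−E/kT) for an assumed energy gap at several temperatures (the gap is an illustrative value, not tied to any particular system):

```python
# Relative occupation of a state at energy E scales as exp(-E / kT).
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
E = 1.0e-20          # energy gap in joules (illustrative)

for T in (100.0, 300.0, 1000.0):
    ratio = math.exp(-E / (k_B * T))
    print(f"T = {T:6.0f} K  ->  relative population {ratio:.3e}")
```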

Unattainability of absolute zero

Left: absolute zero could be reached in a finite number of steps if S(0, X1) ≠ S(0, X2). Right: an infinite number of steps is needed, since S(0, X1) = S(0, X2). Here, X is some controllable parameter of the system, such as its volume or pressure.

The third law of thermodynamics concerns the behavior of entropy as temperature approaches absolute zero. It states that the entropy of a system approaches a constant minimum at 0 K. For a perfect crystal, this minimum is taken to be zero, since the system would be in a state of perfect order with only one microstate available. In some systems there may be more than one microstate at minimum energy, leaving some residual entropy at 0 K.

Several other formulations of the third law exist. The Nernst heat theorem holds that the change in entropy for any constant-temperature process tends to zero as the temperature approaches zero. A key consequence is that absolute zero cannot be reached: removing heat becomes increasingly inefficient as entropy changes vanish. This unattainability principle means no physical process can cool a system to absolute zero in a finite number of steps or in finite time.

Thermal properties at low temperatures

Using the Debye model, the specific heat and entropy of a pure crystal are proportional to T^3, while the enthalpy and chemical potential are proportional to T^4 (Guggenheim, p. 111). These quantities drop toward their T = 0 limiting values and approach them with zero slope. For the specific heats at least, the limiting value itself is definitely zero, as borne out by experiments to below 10 K. Even the less detailed Einstein model shows this curious drop in specific heats. In fact, all specific heats vanish at absolute zero, not just those of crystals, and likewise the coefficient of thermal expansion. Maxwell's relations show that various other quantities also vanish. These phenomena were unanticipated.
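A short sketch of the T^3 behaviour, using the standard low-temperature Debye limit C_V = (12π⁴/5) N k_B (T/θ_D)³ with a literature-typical Debye temperature for copper; treat the numbers as illustrative:

```python
# Low-temperature Debye limit of the lattice heat capacity, per mole.
import math

k_B = 1.380649e-23    # J/K
N_A = 6.02214076e23   # atoms per mole
theta_D = 343.0       # Debye temperature of copper, K (typical literature value)

def debye_cv_molar(T):
    """Limiting molar heat capacity for T << theta_D, in J/(mol K)."""
    return (12 * math.pi**4 / 5) * N_A * k_B * (T / theta_D) ** 3

for T in (1.0, 5.0, 10.0):
    print(f"T = {T:4.1f} K  ->  C_V ≈ {debye_cv_molar(T):.3e} J/(mol K)")
```

Doubling the temperature in this regime multiplies the heat capacity by eight, which is why specific heats plunge so quickly as T approaches zero.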

One model that estimates the properties of an electron gas at absolute zero in metals is the Fermi gas. The electrons, being fermions, must occupy different quantum states, which forces them into very high typical velocities even at absolute zero. The maximum energy that electrons can have at absolute zero is called the Fermi energy. The Fermi temperature is defined as this maximum energy divided by the Boltzmann constant, and is on the order of 80,000 K for typical electron densities found in metals. For temperatures significantly below the Fermi temperature, the electrons behave in almost the same way as at absolute zero. This explains the failure of the classical equipartition theorem for metals that eluded classical physicists in the late 19th century.
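The order of magnitude quoted above can be checked with the free-electron formula E_F = (ħ²/2mₑ)(3π²n)^(2/3); the electron density below is roughly that of copper (one conduction electron per atom) and is an assumed, illustrative value:

```python
# Free-electron Fermi energy and Fermi temperature from electron density.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
k_B = 1.380649e-23       # Boltzmann constant, J/K
n = 8.5e28               # conduction electrons per m^3 (≈ copper, illustrative)

E_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2.0 / 3.0)
T_F = E_F / k_B
print(f"E_F ≈ {E_F / 1.602176634e-19:.2f} eV, T_F ≈ {T_F:,.0f} K")  # ~7 eV, ~8e4 K
```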

Gibbs free energy

Since the relation between changes in the Gibbs free energy (G), the enthalpy (H), and the entropy is

ΔG = ΔH − TΔS,

it follows that, as T decreases, ΔG and ΔH approach each other (so long as ΔS is bounded). Experimentally, it is found that all spontaneous processes (including chemical reactions) result in a decrease in G as they proceed toward equilibrium. If ΔS and/or T are small, the condition ΔG < 0 may imply that ΔH < 0, which would indicate an exothermic reaction. However, this is not required; endothermic reactions can proceed spontaneously if the TΔS term is large enough.

Moreover, the slopes of ΔG and ΔH as functions of T converge and are equal to zero at T = 0. This ensures that ΔG and ΔH are nearly the same over a considerable range of temperatures, and justifies the approximate empirical principle of Thomsen and Berthelot, which states that the equilibrium state to which a system proceeds is the one that evolves the greatest amount of heat, i.e., an actual process is the most exothermic one (Callen, pp. 186–187).
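A minimal numerical sketch of ΔG = ΔH − TΔS with assumed, temperature-independent ΔH and ΔS: at low T, ΔG tracks ΔH (the Thomsen–Berthelot regime), while at high T a large TΔS term can make even an endothermic process spontaneous. All values are illustrative.

```python
# Spontaneity from Gibbs free energy, with fixed (illustrative) dH and dS.
dH = 40_000.0   # J/mol, endothermic (illustrative)
dS = 120.0      # J/(mol K) (illustrative)

for T in (1.0, 100.0, 298.15, 500.0):
    dG = dH - T * dS
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:7.2f} K  ->  dG = {dG:10.1f} J/mol  ({verdict})")
```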

Zero-point energy

Probability densities and energies (indicated by an offset) of the four lowest energy eigenstates of a quantum harmonic oscillator. ZPE denotes the zero-point energy.

Even at absolute zero, a quantum system retains a minimum amount of energy due to the Heisenberg uncertainty principle, which prevents particles from having both perfectly defined position and momentum. This residual energy is known as zero-point energy. In the case of the quantum harmonic oscillator, a standard model for vibrations in atoms and molecules, the uncertainty in a particle's momentum implies it must retain some kinetic energy, while the uncertainty in its position contributes to potential energy. As a result, such a system has a nonzero energy at absolute zero.
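As a worked example, the ground-state (zero-point) energy of a harmonic oscillator is E₀ = ħω/2; the sketch below evaluates it for a vibrational frequency typical of a light molecule (the frequency is an assumed round number):

```python
# Zero-point energy E_0 = (1/2) * hbar * omega of a quantum harmonic oscillator.
import math

hbar = 1.054571817e-34   # J s
f = 1.0e14               # vibrational frequency, Hz (illustrative round number)
omega = 2 * math.pi * f  # angular frequency, rad/s

E0 = 0.5 * hbar * omega
print(f"E_0 ≈ {E0:.3e} J ≈ {E0 / 1.602176634e-19:.3f} eV")  # ~0.2 eV
```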

Zero-point energy helps explain certain physical phenomena. For example, liquid helium does not solidify at normal pressure, even at temperatures near absolute zero. The large zero-point motion of helium atoms, caused by their low mass and weak interatomic forces, prevents them from settling into a solid structure. Only under high pressure does helium solidify, as the atoms are forced closer together and the interatomic forces grow stronger.

History

Robert Boyle pioneered the idea of an absolute zero.

One of the first to discuss the possibility of an absolute minimal temperature was Robert Boyle. His 1665 New Experiments and Observations touching Cold articulated the dispute known as the primum frigidum. The concept was well known among naturalists of the time. Some contended an absolute minimum temperature occurred within earth (as one of the four classical elements), others within water, others air, and some more recently within nitre. But all of them seemed to agree that, "There is some body or other that is of its own nature supremely cold and by participation of which all other bodies obtain that quality."

Limit to the "degree of cold"

The question of whether there is a limit to the degree of coldness possible, and, if so, where the zero must be placed, was first addressed by the French physicist Guillaume Amontons in 1703, in connection with his improvements in the air thermometer. His instrument indicated temperatures by the height at which a certain mass of air sustained a column of mercury—the pressure, or "spring", of the air varying with temperature. Amontons therefore argued that the zero of his thermometer would be that temperature at which the spring of the air was reduced to nothing. He used a scale that marked the boiling point of water at +73 and the melting point of ice at +51 1⁄2, so that the zero was equivalent to about −240 on the Celsius scale. Amontons held that absolute zero cannot be reached, so he never attempted to compute it explicitly. The value of −240 °C, or "431 divisions [in Fahrenheit's thermometer] below the cold of freezing water", was published by George Martine in 1740.
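The "about −240" figure follows from mapping Amontons's two fixed points linearly onto the Celsius scale; a back-of-envelope sketch:

```python
# Linear map from Amontons's scale to Celsius using his two fixed points:
# boiling water at +73 and melting ice at +51.5 on his scale.
boil, ice = 73.0, 51.5

def amontons_to_celsius(reading):
    return (reading - ice) * 100.0 / (boil - ice)

print(amontons_to_celsius(0.0))   # ≈ -239.5, i.e. about -240 °C
```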

This close approximation to the modern value of −273.15 °C for the zero of the air thermometer was further improved upon in 1779 by Johann Heinrich Lambert, who observed that −270 °C (−454.00 °F; 3.15 K) might be regarded as absolute cold.

Values of this order for the absolute zero were not, however, universally accepted in this period. Pierre-Simon Laplace and Antoine Lavoisier, in their 1780 treatise on heat, arrived at values ranging from 1,500 to 3,000 degrees below the freezing point of water, and thought that in any case it must be at least 600 degrees below. John Dalton in his Chemical Philosophy gave ten calculations of this value, and finally adopted −3,000 °C as the natural zero of temperature.

Charles's law

From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted linearly (Charles's law) by about 1/273 of their volume at 0 °C for each degree Celsius of temperature change, between 0 °C and 100 °C. This suggested that the volume of a gas cooled to about −273 °C would reach zero.

Lord Kelvin's work

After James Prescott Joule had determined the mechanical equivalent of heat, Lord Kelvin approached the question from an entirely different point of view, and in 1848 devised a scale of absolute temperature that was independent of the properties of any particular substance and was based on Carnot's theory of the Motive Power of Heat and data published by Henri Victor Regnault. It followed from the principles on which this scale was constructed that its zero was placed at −273 °C, at almost precisely the same point as the zero of the air thermometer, where the air volume would reach "nothing". This value was not immediately accepted; values ranging from −271.1 °C (−455.98 °F) to −274.5 °C (−462.10 °F), derived from laboratory measurements and observations of astronomical refraction, remained in use in the early 20th century.

The race to absolute zero

Commemorative plaque in Leiden

With a better theoretical understanding of absolute zero, scientists were eager to reach this temperature in the lab. By 1845, Michael Faraday had managed to liquefy most gases then known to exist, and set a new low-temperature record by reaching −130 °C (−202 °F; 143 K). Faraday believed that certain gases, such as oxygen, nitrogen, and hydrogen, were permanent gases that could not be liquefied. Decades later, in 1873, the Dutch theoretical physicist Johannes Diderik van der Waals demonstrated that these gases could be liquefied, but only under conditions of very high pressure and very low temperature. In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air at −195 °C (−319.0 °F; 78.1 K). This was followed in 1883 by the production of liquid oxygen at −218 °C (−360.4 °F; 55.1 K) by the Polish professors Zygmunt Wróblewski and Karol Olszewski.

Scottish chemist and physicist James Dewar and Dutch physicist Heike Kamerlingh Onnes took on the challenge of liquefying the remaining gases, hydrogen and helium. In 1898, after 20 years of effort, Dewar was the first to liquefy hydrogen, reaching a new low-temperature record of −252 °C (−421.6 °F; 21.1 K). However, Kamerlingh Onnes, his rival, was the first to liquefy helium, in 1908, using several precooling stages and the Hampson–Linde cycle. He lowered the temperature to the boiling point of helium, −269 °C (−452.20 °F; 4.15 K). By reducing the pressure of the liquid helium, he achieved an even lower temperature, near 1.5 K. These were the coldest temperatures achieved on Earth at the time, and the achievement earned him the Nobel Prize in 1913. Kamerlingh Onnes would continue to study the properties of materials at temperatures near absolute zero, describing superconductivity and superfluidity for the first time.

Negative temperatures

Temperatures below zero on the Celsius or Fahrenheit scales are simply colder than the zero points of those scales. In contrast, certain isolated systems can achieve negative thermodynamic temperatures (in kelvins), which are not colder than absolute zero but paradoxically hotter than any positive temperature. If a negative-temperature system and a positive-temperature system come into contact, heat flows from the negative- to the positive-temperature system.

Negative temperatures can only occur in systems that have an upper limit to the energy they can contain. In these cases, adding energy can decrease entropy, reversing the usual relationship between energy and temperature. This leads to a negative thermodynamic temperature. However, such conditions only arise in specialized, quasi-equilibrium systems such as collections of spins in a magnetic field. In contrast, ordinary systems with translational or vibrational motion have no upper energy limit, so their temperatures are always positive.
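A minimal sketch for a two-level spin system: inverting the Boltzmann relation between level populations gives T = ΔE / (k_B ln(N_lower/N_upper)), so a population inversion (more particles in the upper level) yields T < 0. The level splitting below is an assumed illustrative value.

```python
# Spin temperature of a two-level system from its level populations.
import math

k_B = 1.380649e-23   # J/K
dE = 1.0e-23         # energy splitting, J (illustrative)

def spin_temperature(n_lower, n_upper):
    return dE / (k_B * math.log(n_lower / n_upper))

print(spin_temperature(0.6, 0.4))   # positive T: normal population ordering
print(spin_temperature(0.4, 0.6))   # negative T: population inversion
```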

Very low temperatures

The rapid expansion of gases leaving the Boomerang Nebula, a bi-polar, filamentary, likely proto-planetary nebula in Centaurus, has a temperature of 1 K, the lowest observed outside of a laboratory.
Velocity-distribution data of a gas of rubidium atoms at a temperature within a few billionths of a degree above absolute zero. Left: just before the appearance of a Bose–Einstein condensate. Center: just after the appearance of the condensate. Right: after further evaporation, leaving a sample of nearly pure condensate.

The average temperature of the universe today is approximately 2.73 K (−270.42 °C; −454.76 °F), based on measurements of cosmic microwave background radiation. Standard models of the future expansion of the universe predict that the average temperature of the universe is decreasing over time. This temperature is calculated as the mean density of energy in space; it should not be confused with the mean electron temperature (total energy divided by particle count) which has increased over time.

Absolute zero cannot be achieved, although it is possible to reach temperatures close to it through the use of evaporative cooling, cryocoolers, dilution refrigerators, and nuclear adiabatic demagnetization. The use of laser cooling has produced temperatures of less than a billionth of a kelvin. At very low temperatures in the vicinity of absolute zero, matter exhibits many unusual properties, including superconductivity, superfluidity, and Bose–Einstein condensation. To study such phenomena, scientists have worked to obtain even lower temperatures.

  • In November 2000, nuclear spin temperatures below 100 picokelvin were reported for an experiment at the Helsinki University of Technology's Low Temperature Lab in Espoo, Finland. However, this was the temperature of one particular degree of freedom—a quantum property called nuclear spin—not the overall average thermodynamic temperature for all possible degrees of freedom.
  • In February 2003, the Boomerang Nebula was observed to have been releasing gases at a speed of 500,000 km/h (310,000 mph) for the last 1,500 years. This has cooled it down to approximately 1 K, as deduced by astronomical observation, which is the lowest natural temperature ever recorded.
  • In November 2003, 90377 Sedna was discovered; it is one of the coldest known objects in the Solar System, with an average surface temperature of −240 °C (33 K; −400 °F), due to its extremely distant orbit of 903 astronomical units.
  • In May 2005, the European Space Agency proposed research in space to achieve femtokelvin temperatures.
  • In May 2006, the Institute of Quantum Optics at the University of Hannover gave details of technologies and benefits of femtokelvin research in space.
  • In January 2013, physicist Ulrich Schneider of the University of Munich in Germany reported having achieved temperatures formally below absolute zero ("negative temperature") in gases. The gas is artificially forced out of equilibrium into a high-potential-energy state, which is, however, cold. When it then emits radiation it approaches equilibrium, and can continue emitting despite reaching formal absolute zero; thus, the temperature is formally negative.
  • In September 2014, scientists in the CUORE collaboration at the Laboratori Nazionali del Gran Sasso in Italy cooled a copper vessel with a volume of one cubic meter to 0.006 K (−273.144 °C; −459.659 °F) for 15 days, setting a record for the lowest temperature in the known universe over such a large contiguous volume.
  • In June 2015, experimental physicists at MIT cooled sodium-potassium (NaK) molecules in a gas to a temperature of 500 nanokelvin; such molecules are expected to exhibit an exotic state of matter if cooled somewhat further.
  • In 2017, the Cold Atom Laboratory (CAL), an experimental instrument, was developed for launch to the International Space Station (ISS) in 2018. The instrument has created extremely cold conditions in the microgravity environment of the ISS, leading to the formation of Bose–Einstein condensates. In this space-based laboratory, temperatures as low as 1 picokelvin are projected to be achievable, which could further the exploration of unknown quantum mechanical phenomena and test some of the most fundamental laws of physics.
  • The current world record for effective temperatures was set in 2021 at 38 picokelvin through matter-wave lensing of rubidium Bose–Einstein condensates.

Psychoanalytic theory

From Wikipedia, the free encyclopedia

Psychoanalytic theory is the theory of the innate structure of the human psyche and of the dynamics of personality development, underlying the practice of psychoanalysis, a method of research and of treating mental disorders (psychopathology). First laid out by Sigmund Freud in the late 19th century (see The Interpretation of Dreams), the theory and practice of psychoanalysis were developed by him until his death in 1939. Since then it has been further refined and divided into various sub-areas, but independently of this, Freud's structural distinction of the psyche into three functionally interlocking agencies has been largely retained.

Psychoanalysis and its theoretical core came to full prominence in the last third of the twentieth century, as part of the flow of critical discourse regarding psychological treatments in the 1970s. Freud himself had ceased his physiological research of neural brain organisation in 1906 (cf. History), shifting his focus to psychology and the treatment of mental health issues using free association and the phenomenon of transference. Psychoanalysis is based on the distinction between unconscious and conscious processes, and emphasizes the recognition of childhood events that influence the mental functioning of adults. Freud's consideration of human evolutionary history (genetics) and then of individual psychological development in cultural contexts gave psychoanalytic theory its characteristics.

Definition

Psychoanalytic and psychoanalytical are both used in English. The latter is the older term, and at first simply meant 'relating to the analysis of the human psyche.' But with the emergence of psychoanalysis as a distinct clinical practice, both terms came to describe it. Although both are still used, the more common adjective today is psychoanalytic.

Psychoanalysis is defined in the Oxford English Dictionary as

A therapeutic method, originated by Sigmund Freud, for treating mental disorders by investigating the interaction of conscious and unconscious elements in the patient's mind and bringing repressed fears and conflicts into the conscious mind, using techniques such as dream interpretation and free association. Also: a system of psychological theory associated with this method.

The beginnings

Freud began his studies on psychoanalysis in collaboration with Dr. Josef Breuer, most notably in relation to the case study of Anna O. Anna O. was subject to a number of psychosomatic disturbances, such as an inability to drink, out of fear. Breuer and Freud found that hypnosis was a great help in discovering more about Anna O. and her treatment. Freud frequently referred to the study of Anna O. in his lectures on the origin and development of psychoanalysis.

Observations in the Anna O. case led Freud to theorize that the problems faced by hysterical patients could be associated with painful childhood experiences that could not be recalled. The influence of these lost memories shaped the feelings, thoughts, and behaviors of patients. These studies contributed to the development of the psychoanalytic theory.

The unconscious

In psychoanalytic theory, the unconscious mind consists of ideas and drives that have been subject to the mechanism of repression: anxiety-producing impulses in childhood are barred from consciousness, but do not cease to exist, and exert a constant pressure in the direction of consciousness. However, the content of the unconscious is only knowable to consciousness through its representation in a disguised or distorted form, by way of dreams and neurotic symptoms, as well as in slips of the tongue and jokes. The psychoanalyst seeks to interpret these conscious manifestations in order to understand the nature of the repressed.

In psychoanalytic terms, the unconscious does not include all that is not conscious, but rather that which is actively repressed from conscious thought. Freud viewed the unconscious as a repository for socially unacceptable ideas, anxiety-producing wishes or desires, traumatic memories, and painful emotions put out of consciousness by the mechanism of repression. Such unconscious mental processes can only be recognized through analysis of their effects in consciousness. Unconscious thoughts are not directly accessible to ordinary introspection, but they are capable of partially evading the censorship mechanism of repression in a disguised form, manifesting, for example, as dream elements or neurotic symptoms. Dreams and symptoms are supposed to be capable of being "interpreted" during psychoanalysis, with the help of methods such as free association, dream analysis, and analysis of verbal slips.

Personality structure

In Freud's model the psyche consists of three different elements: the id, the ego, and the superego. The id is the aspect of personality that is driven by internal and basic drives and needs, such as hunger, thirst, and the drive for sex, or libido. The id acts in accordance with the pleasure principle. Due to its instinctual quality, the id is impulsive and unaware of the implications of actions. The superego is driven by the morality principle. It enforces the morality of social thought and action on an intrapsychic level, judging wrong and right and using guilt to discourage socially unacceptable behavior. The ego is driven by the reality principle. The ego seeks to balance the conflicting aims of the id and superego by trying to satisfy the id's drives in ways that are compatible with reality. The ego is how we view ourselves: it is what we refer to as 'I' (Freud's word is the German ich, which simply means 'I').

Defense mechanisms

The ego balances the demands of the id, the superego, and reality to maintain a healthy state of consciousness, one with only minimal intrapsychic conflict. It thus reacts to protect the individual from stressors and from anxiety by distorting internal or external reality to a lesser or greater extent. This prevents threatening unconscious thoughts and material from entering consciousness. The ten defense mechanisms initially enumerated by Anna Freud are: repression, regression, reaction formation, isolation of affect, undoing, projection, introjection, turning against the self, reversal into the opposite, and sublimation. In the same work, however, she details other maneuvers, such as identification with the aggressor and intellectualization, that would later come to be considered defense mechanisms in their own right. Furthermore, this list has been greatly expanded upon by other psychoanalysts, with some authors claiming to enumerate in excess of one hundred defense mechanisms.

Psychology theories

Psychosexual development

Freud's account of the development of the personality (psyche) is a stage theory: it holds that progress occurs through stages as the libido is directed to different body parts. The stages, listed in order of progression, are Oral, Anal, Phallic (Oedipus complex), Latency, and Genital. The Genital stage is achieved if people meet all their needs throughout the other stages with enough available sexual energy. Individuals who do not meet their needs in a given stage become fixated, or "stuck", in that stage.

Neo-analytic theory

Freud's theory and work on psychosexual development led to the Neo-Analytic/Neo-Freudians, who also believed in the importance of the unconscious, dream interpretation, defense mechanisms, and the integral influence of childhood experiences, but had objections to the theory as well. They do not support the idea that personality development stops at age 6; instead, they believe development spreads across the lifespan. They extended Freud's work to encompass more influence from the environment and the importance of conscious thought alongside the unconscious. The most important theorists are Erik Erikson (psychosocial development), Anna Freud, Carl Jung, Alfred Adler, and Karen Horney, along with the school of object relations. Erikson's psychosocial development theory is based on eight stages of development: trust vs. mistrust, autonomy vs. shame, initiative vs. guilt, industry vs. inferiority, identity vs. confusion, intimacy vs. isolation, generativity vs. stagnation, and integrity vs. despair. These are important to psychoanalytic theory because they describe the different stages that people go through in life. Each stage has a major impact on life outcomes, since people confront a conflict at each stage, and whichever route they take has certain consequences.

Criticisms

Some claim that the theory lacks empirical data and is too focused on pathology. Other criticisms are that the theory lacks consideration of culture and its influence on personality.

Psychoanalytic theory comes from Freud and is focused on childhood. This can be an issue, since many believe that studies of children can be inconclusive. One major concern is whether an observed personality trait will persist throughout life or whether the child will shed it later.

Application to the arts and humanities

Psychoanalytic theory is a major influence in Continental philosophy and in aesthetics in particular. Freud is sometimes considered a philosopher. The psychoanalyst Jacques Lacan and the philosophers Michel Foucault and Jacques Derrida have written extensively on how psychoanalysis informs philosophical analysis. Other philosophers, such as Alain Badiou and Rafael Holmberg, have argued that the meaning of psychoanalysis for philosophy was not immediately clear, but that psychoanalysis and philosophy have come to reciprocally define each other.

When analyzing literary texts, psychoanalytic theory is sometimes used (often specifically with regard to the motives of the author and the characters) to reveal purportedly concealed meanings or to better understand the author's intentions.

Non-equilibrium thermodynamics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Non-equilibrium_thermodynamics 

Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be described in terms of macroscopic quantities (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions.

Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Many systems and processes can, however, be considered to be in equilibrium locally, thus allowing description by currently known equilibrium thermodynamics. Nevertheless, some natural systems and processes remain beyond the scope of equilibrium thermodynamic methods due to the existence of non-variational dynamics, where the concept of free energy is lost.

The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction that are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty in defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium. However, it can be done locally, and the macroscopic entropy will then be given by the integral of the locally defined entropy density. It has been found that many systems far outside global equilibrium still obey the concept of local equilibrium.

Scope

Difference between equilibrium and non-equilibrium thermodynamics

A profound difference separates equilibrium from non-equilibrium thermodynamics. Equilibrium thermodynamics ignores the time-courses of physical processes. In contrast, non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail.

Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics. This conceptual issue is overcome under the assumption of local equilibrium, which entails that the relationships that hold between macroscopic state variables at equilibrium hold locally, also outside equilibrium. Throughout the past decades, the assumption of local equilibrium has been tested, and found to hold, under increasingly extreme conditions, such as in the shock front of violent explosions, on reacting surfaces, and under extreme thermal gradients.

Thus, non-equilibrium thermodynamics provides a consistent framework for modelling not only the initial and final states of a system, but also the evolution of the system in time. Together with the concept of entropy production, this provides a powerful tool in process optimisation, and provides a theoretical foundation for exergy analysis.

Non-equilibrium state variables

The suitable relationship that defines non-equilibrium thermodynamic state variables is as follows. When the system is in local equilibrium, non-equilibrium state variables are such that they can be measured locally with sufficient accuracy by the same techniques as are used to measure thermodynamic state variables, or by corresponding time and space derivatives, including fluxes of matter and energy. In general, non-equilibrium thermodynamic systems are spatially and temporally non-uniform, but their non-uniformity still has a sufficient degree of smoothness to support the existence of suitable time and space derivatives of non-equilibrium state variables.

Because of the spatial non-uniformity, non-equilibrium state variables that correspond to extensive thermodynamic state variables have to be defined as spatial densities of the corresponding extensive equilibrium state variables. When the system is in local equilibrium, intensive non-equilibrium state variables, for example temperature and pressure, correspond closely with equilibrium state variables. It is necessary that measuring probes be small enough, and rapidly enough responding, to capture relevant non-uniformity. Further, the non-equilibrium state variables are required to be mathematically functionally related to one another in ways that suitably resemble corresponding relations between equilibrium thermodynamic state variables. In reality, these requirements, although strict, have been shown to be fulfilled even under extreme conditions, such as during phase transitions, at reacting interfaces, and in plasma droplets surrounded by ambient air. There are, however, situations where there are appreciable non-linear effects even at the local scale.

Overview

Some concepts of particular importance for non-equilibrium thermodynamics include the time rate of dissipation of energy (Rayleigh 1873; Onsager 1931), the time rate of entropy production (Onsager 1931), thermodynamic fields, dissipative structures, and non-linear dynamical structures.

One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables.

One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'. There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics, and generalized thermodynamics, but they are hardly touched on in the present article.

Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions

According to Wildt (see also Essex), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can, for practical purposes, be ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are not within the range of laboratory quantities; then thermal radiation cannot be ignored.

Local equilibrium thermodynamics

The terms 'classical irreversible thermodynamics' and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure. Even within the thought-frame of classical irreversible thermodynamics, care is needed in choosing the independent variables for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory', and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables.

Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption (see also Keizer (1987)). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed spatial variation from infinitesimal volume element to adjacent infinitesimal volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density. This approach assumes spatial and temporal continuity and even differentiability of locally defined intensive variables such as temperature and internal energy density. While these demands may appear severely constrictive, it has been found that the assumptions of local equilibrium hold for a wide variety of systems, including reacting interfaces, on the surfaces of catalysts, in confined systems such as zeolites, under temperature gradients as large as 10^12 K/m, and even in shock fronts moving at up to six times the speed of sound.

In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Lars Onsager in the twentieth. These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume, and no spatial variation.

Local equilibrium thermodynamics with materials with "memory"

A further extension of local equilibrium thermodynamics is to allow that materials may have "memory", so that their constitutive equations depend not only on present values but also on past values of local equilibrium variables. Thus time comes into the picture more deeply than for time-dependent local equilibrium thermodynamics with memoryless materials, but fluxes are not independent variables of state.

Extended irreversible thermodynamics

Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy, and eventually higher-order fluxes. The formalism is well suited for describing high-frequency processes and small-length-scale materials.

Basic concepts

There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures, or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics: here a strong temperature difference is maintained between two molecular degrees of freedom (in a molecular laser, vibrational and rotational motion), which requires two component 'temperatures' in one small region of space, precluding local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves is a non-stationary non-equilibrium process. Driven complex fluids, turbulent systems, and glasses are other examples of non-equilibrium systems.

The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant, the temperature of the system is measurable and meaningful. The system's properties are then most conveniently described using the thermodynamic potential Helmholtz free energy (A = U - TS), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy (G = U + PV - TS), where the system's properties are determined both by the temperature and by the pressure.

Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. While free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as the second law of thermodynamics does for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered: the extended Massieu potential. By definition, the entropy (S) is a function of the collection of extensive quantities \(E_i\). Each extensive quantity has a conjugate intensive variable \(I_i\) (a restricted definition of intensive variable is used here), so that:

\[ I_i = \frac{\partial S}{\partial E_i} \]

We then define the extended Massieu function as follows:

\[ k_{\mathrm B} M = S - \sum_i I_i E_i \]

where \(k_{\mathrm B}\) is the Boltzmann constant, whence

\[ k_{\mathrm B}\, dM = \sum_i E_i \, dI_i \]

The independent variables are the intensities.

Intensities are global values, valid for the system as a whole. When boundaries impose different local conditions on the system (e.g. temperature differences), there are intensive variables representing the average value and others representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system.

It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) into a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not.
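As a consistency check (a standard special case, not spelled out above), when the internal energy E is the only fluctuating extensive quantity, its conjugate intensity is I = ∂S/∂E = 1/T and the extended Massieu function reduces to a rescaled Helmholtz free energy:

```latex
% Special case: E is the only fluctuating extensive quantity, I = 1/T.
\[
  k_{\mathrm B} M \;=\; S - \frac{E}{T}
  \;=\; -\,\frac{E - TS}{T}
  \;=\; -\,\frac{A}{T},
\]
% so the extended Massieu function is, up to sign and a factor of T, the
% Helmholtz free energy A = E - TS introduced earlier, recovering the
% equilibrium description in the appropriate limit.
```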

Stationary states, fluctuations, and stability

In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state includes the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints defining the process.

If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323). The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system.

If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy.

Local thermodynamic equilibrium

The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium.

Ponderable matter

Local thermodynamic equilibrium of matter (see also Keizer (1987)) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamic equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells', slowly enough to leave the 'cells' in their respective individual local thermodynamic equilibria with respect to intensive variables.

One can think here of two 'relaxation times' separated by orders of magnitude. The longer relaxation time is of the order of magnitude of the times taken for the macroscopic dynamical structure of the system to change. The shorter is of the order of magnitude of the times taken for a single 'cell' to reach local thermodynamic equilibrium. If these two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning, and other approaches have to be proposed; see for instance extended irreversible thermodynamics. For example, in the atmosphere, the speed of sound is much greater than the wind speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at altitudes below about 60 km, where sound propagates, but not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate.

Milne's definition in terms of radiative equilibrium

Edward A. Milne, thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff's law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons.

Entropy in evolving systems

W. T. Grandy Jr. points out that entropy, though it may be defined for a non-equilibrium system, is, when strictly considered, only a macroscopic quantity that refers to the whole system; it is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can metaphorically think as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking.

This point of view shares many points in common with the concept and the use of entropy in continuum thermomechanics, which evolved completely independently of statistical mechanics and maximum-entropy principles.

Entropy in non-equilibrium

To describe the deviation of a thermodynamic system from equilibrium, in addition to the constitutive variables used to fix the equilibrium state, as described above, a set of variables called internal variables has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearance can be written as a relaxation equation for each internal variable:

\[ \frac{d\xi_i}{dt} = -\frac{\xi_i}{\tau_i} \]

where \(\tau_i\) is the relaxation time of the corresponding variable. It is convenient to take the initial values equal to zero. The above equation is valid for small deviations from equilibrium; the dynamics of internal variables in the general case is considered by Pokrovskii.
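The relaxation equation integrates in closed form to ξ(t) = ξ(0) e^(−t/τ); a minimal sketch with an assumed relaxation time:

```python
# Exponential decay of an internal variable toward its equilibrium value (zero).
import math

tau = 2.0    # relaxation time (illustrative units)
xi0 = 1.0    # initial departure from equilibrium (illustrative)

for t in (0.0, 1.0, 2.0, 5.0, 10.0):
    xi = xi0 * math.exp(-t / tau)   # solution of d(xi)/dt = -xi/tau
    print(f"t = {t:5.1f}  ->  xi = {xi:.4f}")
```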

Entropy of the system in non-equilibrium is a function of the total set of variables:

\[ S = S(T, x_1, \ldots, x_n; \xi_1, \ldots, \xi_m) \]

where the \(x_j\) are the ordinary thermodynamic state variables and the \(\xi_i\) are the internal variables.

The essential contribution to the thermodynamics of non-equilibrium systems was made by the Nobel Prize winner Ilya Prigogine, when he and his collaborators investigated systems of chemically reacting substances. The stationary states of such systems exist due to exchange of both particles and energy with the environment. In section 8 of the third chapter of his book, Prigogine specified three contributions to the variation of entropy of the considered system at given volume and constant temperature \(T\). The increment of entropy can be calculated according to the formula

\[ T\, dS = \Delta Q - \sum_{j} \Xi_{j}\, d\xi_j + \sum_{\alpha} \eta_{\alpha}\, dN_\alpha \tag{1} \]

The first term on the right-hand side presents a stream of thermal energy into the system; the last term, a part of a stream of energy coming into the system with the stream of particles of substances \(dN_\alpha\), which can be positive or negative; here \(\eta_\alpha\) is determined by the chemical potential \(\mu_\alpha\) of substance \(\alpha\). The middle term in (1) depicts energy dissipation (entropy production) due to the relaxation of the internal variables \(\xi_j\), with \(\Xi_j\) the corresponding thermodynamic forces. In the case of chemically reacting substances, which was investigated by Prigogine, the internal variables appear to be measures of incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is out of equilibrium. The theory can be generalised to consider any deviation from the equilibrium state as an internal variable, so that the set of internal variables in equation (1) consists of the quantities defining not only degrees of completeness of all chemical reactions occurring in the system, but also the structure of the system, gradients of temperature, differences of concentrations of substances, and so on.

Flows and forces

The fundamental relation of classical equilibrium thermodynamics

\[ dS = \frac{1}{T}\, dU + \frac{p}{T}\, dV - \sum_i \frac{\mu_i}{T}\, dN_i \]

expresses the change in entropy of a system as a function of the intensive quantities temperature \(T\), pressure \(p\), and chemical potential \(\mu_i\), and of the differentials of the extensive quantities energy \(U\), volume \(V\), and particle number \(N_i\).

Following Onsager (1931, I), let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive macroscopic quantities \(U\), \(V\), and \(N_i\) and of the intensive macroscopic quantities \(T\), \(p\), and \(\mu_i\).

For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities.

Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces'. They 'drive' flux densities, perhaps misleadingly often called 'fluxes', which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations.

Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities (\(J_i\)) may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities.

In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system's locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below.

One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of non-stationary local quantities; these integrals are macroscopic fluxes and production rates. In general the dynamics of these integrals are not adequately described by linear equations, though in special cases they can be so described.

Onsager reciprocal relations

Following Section III of Rayleigh (1873), Onsager (1931, I) showed that in the regime where both the flows (\(J_i\)) are small and the thermodynamic forces (\(X_i\)) vary slowly, the rate of creation of entropy \(\sigma\) is linearly related to the flows:

\[ \sigma = \sum_i J_i X_i \]

and the flows are related to the forces, parametrized by a matrix of coefficients conventionally denoted \(L\):

\[ J_i = \sum_j L_{ij} X_j \]

from which it follows that:

\[ \sigma = \sum_{i,j} L_{ij} X_i X_j \]

The second law of thermodynamics requires that the matrix \(L\) be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix \(L\) is symmetric. This fact is called the Onsager reciprocal relations.
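A minimal numerical sketch of these relations, with an assumed symmetric, positive-definite coefficient matrix (all numbers illustrative):

```python
# Linear flux-force relations J = L X with a symmetric, positive-definite L,
# so the entropy production sigma = X^T L X is non-negative.
import numpy as np

L = np.array([[2.0, 0.5],
              [0.5, 1.0]])    # symmetric: L[0, 1] == L[1, 0] (Onsager reciprocity)
X = np.array([0.3, -0.7])     # thermodynamic forces (illustrative)

J = L @ X                     # fluxes driven by the forces
sigma = X @ L @ X             # rate of entropy production
print(J, sigma)               # sigma > 0 here, since L is positive definite
```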

The generalization of the above equations for the rate of creation of entropy was given by Pokrovskii.

Speculated extremal principles for non-equilibrium processes

Until recently, prospects for useful extremal principles in this area have seemed clouded. Nicolis (1999) concludes that one model of atmospheric dynamics has an attractor that is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Grandy (2008), in an extensive discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy, is very cautious in Chapter 12: he finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes, for predicting the course of a process, an extremum of the rate of dissipation of energy may be more useful than an extremum of the rate of entropy production; this quantity appeared in Onsager's 1931 origination of the subject. Other writers, including Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vásquez (2008), and Šilhavý (1997), have also felt that prospects for general global extremal principles are clouded. There is good experimental evidence that heat convection does not obey extremal principles for the time rate of entropy production, and theoretical analysis shows that chemical reactions do not obey extremal principles for the second differential of the time rate of entropy production. The development of a general extremal principle seems infeasible in the current state of knowledge.

Applications

Non-equilibrium thermodynamics has been successfully applied to describe biological processes such as protein folding/unfolding and transport through membranes. It is also used to describe the dynamics of nanoparticles, which can be out of equilibrium in systems where catalysis and electrochemical conversion are involved. Ideas from non-equilibrium thermodynamics and the informatic theory of entropy have also been adapted to describe general economic systems.
