Friday, February 27, 2026

Quantum thermodynamics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quantum_thermodynamics

Quantum thermodynamics is the study of the relations between two independent physical theories: thermodynamics and quantum mechanics. The two independent theories address the physical phenomena of light and matter. In 1905, Albert Einstein argued that the requirement of consistency between thermodynamics and electromagnetism leads to the conclusion that light is quantized, obtaining the relation $E = h\nu$. This paper is the dawn of quantum theory. In a few decades quantum theory became established with an independent set of rules. Currently quantum thermodynamics addresses the emergence of thermodynamic laws from quantum mechanics. It differs from quantum statistical mechanics in its emphasis on dynamical processes out of equilibrium. In addition, there is a quest for the theory to be relevant for a single individual quantum system. The first university course titled "Quantum Thermodynamics" was offered at MIT in the spring of 1971 by George Hatsopoulos and Elias Gyftopoulos, as a graduate-level course numbered 2.47J.

Dynamical view

There is an intimate connection of quantum thermodynamics with the theory of open quantum systems. Quantum mechanics inserts dynamics into thermodynamics, giving a sound foundation to finite-time thermodynamics. The main assumption is that the entire world is a large closed system, and therefore, time evolution is governed by a unitary transformation generated by a global Hamiltonian. For the combined system-bath scenario, the global Hamiltonian can be decomposed into

$H = H_S + H_B + H_{SB}$

where $H_S$ is the system Hamiltonian, $H_B$ is the bath Hamiltonian and $H_{SB}$ is the system-bath interaction.

The state of the system is obtained from a partial trace over the combined system and bath: $\rho_S(t) = \mathrm{Tr}_B(\rho_{SB}(t))$.

Reduced dynamics is an equivalent description of the system dynamics utilizing only system operators. Assuming the Markov property for the dynamics, the basic equation of motion for an open quantum system is the Lindblad equation (GKLS):

$\dot\rho_S = -\frac{i}{\hbar}[H_S, \rho_S] + \mathcal{L}_D(\rho_S)$

Here $H_S$ is a (Hermitian) Hamiltonian part, and

$\mathcal{L}_D(\rho_S) = \sum_n \left( V_n \rho_S V_n^{\dagger} - \tfrac{1}{2}\left\{ V_n^{\dagger} V_n, \rho_S \right\} \right)$

is the dissipative part, describing implicitly, through the system operators $V_n$, the influence of the bath on the system. The Markov property imposes that the system and bath are uncorrelated at all times, $\rho_{SB} = \rho_S \otimes \rho_B$. The L-GKS equation is unidirectional and leads any initial state to a steady-state solution which is an invariant of the equation of motion, $\dot\rho_S = 0$.
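As a concrete illustration (not part of the article), here is a minimal Python sketch that integrates the L-GKS equation for a two-level system coupled to a thermal bath; the frequency, decay rate, and bath occupation are assumed values chosen for the example.

    import numpy as np

    # qubit operators: index 0 = excited, index 1 = ground
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator

    hbar, omega = 1.0, 1.0            # assumed units and qubit frequency
    H = 0.5 * hbar * omega * sz       # system Hamiltonian H_S
    gamma, nbar = 0.1, 0.5            # assumed decay rate, bath occupation

    def dissipator(rho, V, rate):
        """One L-GKS term: rate * (V rho V+ - 1/2 {V+V, rho})."""
        VdV = V.conj().T @ V
        return rate * (V @ rho @ V.conj().T - 0.5 * (VdV @ rho + rho @ VdV))

    def lindblad_rhs(rho):
        """Unitary part plus thermal emission and absorption terms."""
        unitary = -1j / hbar * (H @ rho - rho @ H)
        down = dissipator(rho, sm, gamma * (nbar + 1))   # emission
        up = dissipator(rho, sm.conj().T, gamma * nbar)  # absorption
        return unitary + down + up

    # Euler integration from the excited state; the unidirectional
    # dynamics relaxes any initial state to the same invariant state
    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    dt = 0.01
    for _ in range(20000):
        rho = rho + dt * lindblad_rhs(rho)

    print("steady excited population:", rho[0, 0].real)
    print("detailed-balance value:   ", nbar / (2 * nbar + 1))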

The Heisenberg picture supplies a direct link to quantum thermodynamic observables. The dynamics of a system observable represented by the operator $O$ has the form

$\frac{dO}{dt} = \frac{i}{\hbar}[H_S, O] + \mathcal{L}_D^{*}(O) + \frac{\partial O}{\partial t}$

where the possibility that the operator $O$ is explicitly time-dependent is included.

Emergence of the time derivative of the first law of thermodynamics

When $O = H_S$, the first law of thermodynamics emerges:

$\frac{dE}{dt} = \left\langle \frac{\partial H_S}{\partial t} \right\rangle + \left\langle \mathcal{L}_D^{*}(H_S) \right\rangle$

where the power is interpreted as $P = \left\langle \frac{\partial H_S}{\partial t} \right\rangle$ and the heat current as $J = \left\langle \mathcal{L}_D^{*}(H_S) \right\rangle$.
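To make the bookkeeping concrete, the following hedged sketch extends the qubit example above with a weak periodic drive (amplitude and frequency are assumed) and checks numerically that $dE/dt = P + J$; the bare thermal dissipator is kept for simplicity, an approximation the article revisits below for driven systems.

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sm = np.array([[0, 0], [1, 0]], dtype=complex)

    hbar, omega = 1.0, 1.0
    eps, nu = 0.2, 0.3                 # assumed drive amplitude and frequency
    gamma, nbar = 0.1, 0.5

    def H(t):                          # driven system Hamiltonian H_S(t)
        return 0.5 * omega * sz + eps * np.cos(nu * t) * sx

    def dHdt(t):                       # explicit time derivative of H_S
        return -eps * nu * np.sin(nu * t) * sx

    def D(rho):                        # thermal dissipator, as before
        out = np.zeros_like(rho)
        for V, r in ((sm, gamma * (nbar + 1)), (sm.conj().T, gamma * nbar)):
            VdV = V.conj().T @ V
            out += r * (V @ rho @ V.conj().T - 0.5 * (VdV @ rho + rho @ VdV))
        return out

    rho = np.array([[1, 0], [0, 0]], dtype=complex)
    dt = 0.001
    E_prev = np.trace(rho @ H(0)).real
    for n in range(1, 5001):
        t = n * dt
        rho = rho + dt * (-1j / hbar * (H(t) @ rho - rho @ H(t)) + D(rho))
        E = np.trace(rho @ H(t)).real
        P = np.trace(rho @ dHdt(t)).real        # power <dH/dt>
        J = np.trace(D(rho) @ H(t)).real        # heat current <L_D*(H)>
        if n % 1000 == 0:                       # first law, up to O(dt) error
            print(f"t={t:.1f}  dE/dt={(E - E_prev) / dt:+.4f}  P+J={P + J:+.4f}")
        E_prev = E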

Additional conditions have to be imposed on the dissipator to be consistent with thermodynamics.

First, the invariant $\rho_{st}$ should become an equilibrium Gibbs state. This implies that the dissipator $\mathcal{L}_D$ should commute with the unitary part generated by $H_S$. In addition, an equilibrium state is stationary and stable. This assumption is used to derive the Kubo-Martin-Schwinger stability criterion for thermal equilibrium, i.e., the KMS state.

A unique and consistent approach is obtained by deriving the generator, $\mathcal{L}_D$, in the weak system-bath coupling limit. In this limit, the interaction energy can be neglected. This approach represents a thermodynamic idealization: it allows energy transfer, while keeping a tensor product separation between the system and bath, i.e., a quantum version of an isothermal partition.

Markovian behavior involves a rather complicated cooperation between system and bath dynamics. This means that in phenomenological treatments, one cannot combine arbitrary system Hamiltonians $H_S$ with a given L-GKS generator. This observation is particularly important in the context of quantum thermodynamics, where it is tempting to study Markovian dynamics with an arbitrary control Hamiltonian. Erroneous derivations of the quantum master equation can easily lead to a violation of the laws of thermodynamics.

An external perturbation modifying the Hamiltonian of the system will also modify the heat flow. As a result, the L-GKS generator has to be renormalized. For a slow change, one can adopt the adiabatic approach and use the instantaneous system Hamiltonian to derive $\mathcal{L}_D$. An important class of problems in quantum thermodynamics is periodically driven systems. Periodic quantum heat engines and power-driven refrigerators fall into this class.

A reexamination of the time-dependent heat current expression using quantum transport techniques has been proposed.

A derivation of consistent dynamics beyond the weak coupling limit has been suggested.

Phenomenological formulations of irreversible quantum dynamics consistent with the second law and implementing the geometric idea of "steepest entropy ascent" or "gradient flow" have been suggested to model relaxation and strong coupling.

Emergence of the second law

The second law of thermodynamics is a statement on the irreversibility of dynamics, or the breaking of time-reversal symmetry (T-symmetry). This should be consistent with the empirical direct definition: heat will flow spontaneously from a hot source to a cold sink.

From a static viewpoint, for a closed quantum system, the 2nd law of thermodynamics is a consequence of the unitary evolution. In this approach, one accounts for the entropy change before and after a change in the entire system. A dynamical viewpoint is based on local accounting for the entropy changes in the subsystems and the entropy generated in the baths.

Entropy

In thermodynamics, entropy is related to the amount of energy of a system that can be converted into mechanical work in a concrete process. In quantum mechanics, this translates to the ability to measure and manipulate the system based on the information gathered by measurement. An example is the case of Maxwell's demon, which was resolved by Leó Szilárd.

The entropy of an observable is associated with the complete projective measurement of an observable $A$, where the operator has the spectral decomposition $A = \sum_\alpha \alpha P_\alpha$, with $P_\alpha$ the projection operators of the eigenvalue $\alpha$. The probability of outcome $\alpha$ is $p_\alpha = \mathrm{Tr}(\rho P_\alpha)$. The entropy associated with the observable $A$ is the Shannon entropy with respect to the possible outcomes:

$S_A = -\sum_\alpha p_\alpha \ln p_\alpha$

The most significant observable in thermodynamics is the energy, represented by the Hamiltonian operator $H$, and its associated energy entropy $S_E$.

John von Neumann suggested singling out the most informative observable to characterize the entropy of the system. This invariant is obtained by minimizing the entropy with respect to all possible observables. The most informative observable operator commutes with the state of the system. The entropy of this observable is termed the von Neumann entropy, and is equal to $S_{vn} = -\mathrm{Tr}(\rho \ln \rho)$. As a consequence, $S_{vn} \le S_A$ for all observables $A$. At thermal equilibrium the energy entropy is equal to the von Neumann entropy: $S_E = S_{vn}$.

$S_{vn}$ is invariant under a unitary transformation changing the state. The von Neumann entropy is additive only for a system state that is composed of a tensor product of its subsystems: $\rho = \bigotimes_k \rho_k$.
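A short Python sketch (the example states are assumed, chosen only for illustration) of the two entropies and the inequality between them:

    import numpy as np

    def shannon(p):
        p = p[p > 1e-12]
        return float(-(p * np.log(p)).sum())

    def observable_entropy(rho, A):
        """Shannon entropy S_A of a complete projective measurement of A."""
        _, U = np.linalg.eigh(A)            # columns are eigenvectors of A
        probs = np.einsum('ia,ij,ja->a', U.conj(), rho, U).real
        return shannon(probs)

    def von_neumann(rho):
        return shannon(np.linalg.eigvalsh(rho))

    # assumed example: qubit with H = diag(0, 1) in an even superposition
    H = np.diag([0.0, 1.0])
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())          # pure state, so S_vn = 0
    print(von_neumann(rho), "<=", observable_entropy(rho, H))   # 0 <= ln 2

    # at thermal equilibrium the two entropies coincide: S_E = S_vn
    beta = 1.0
    gibbs = np.diag(np.exp(-beta * np.diag(H)))
    gibbs /= np.trace(gibbs)
    print(von_neumann(gibbs), "==", observable_entropy(gibbs, H))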

Clausius version of the II-law

No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature.

This statement, for N coupled heat baths in steady state, becomes

$\sum_n \frac{J_n}{T_n} \ge 0$

where $T_n$ is the temperature of the n-th bath and $J_n$ the associated heat current.

A dynamical version of the II-law can be proven, based on Spohn's inequality

$\mathrm{Tr}\left( \mathcal{L}\rho \left[ \ln \rho - \ln \rho_{st} \right] \right) \le 0$

which is valid for any L-GKS generator with a stationary state $\rho_{st}$.
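A hedged numerical check of Spohn's inequality, using the thermal qubit generator from the earlier sketch and random full-rank states:

    import numpy as np
    from scipy.linalg import logm

    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sm = np.array([[0, 0], [1, 0]], dtype=complex)
    gamma, nbar, omega = 0.1, 0.5, 1.0

    def L(rho):                              # full L-GKS generator
        out = -1j * 0.5 * omega * (sz @ rho - rho @ sz)
        for V, r in ((sm, gamma * (nbar + 1)), (sm.conj().T, gamma * nbar)):
            VdV = V.conj().T @ V
            out += r * (V @ rho @ V.conj().T - 0.5 * (VdV @ rho + rho @ VdV))
        return out

    # stationary state: thermal populations, vanishing coherence
    p = nbar / (2 * nbar + 1)
    rho_st = np.diag([p, 1 - p]).astype(complex)

    rng = np.random.default_rng(0)
    for _ in range(5):
        X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        rho = X @ X.conj().T
        rho /= np.trace(rho)                 # random full-rank state
        s = np.trace(L(rho) @ (logm(rho) - logm(rho_st))).real
        print(f"Tr(L(rho)[ln rho - ln rho_st]) = {s:+.4f}  (should be <= 0)")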

Consistency with thermodynamics can be employed to verify quantum dynamical models of transport. For example, local models for networks, where local L-GKS equations are connected through weak links, have been thought to violate the second law of thermodynamics. In 2018 it was shown that, by correctly taking into account all work and energy contributions in the full system, local master equations are fully consistent with the second law of thermodynamics.

Quantum and thermodynamic adiabatic conditions and quantum friction

Thermodynamic adiabatic processes have no entropy change. Typically, an external control modifies the state. A quantum version of an adiabatic process can be modeled by an externally controlled time-dependent Hamiltonian $H(t)$. If the system is isolated, the dynamics are unitary, and therefore $S_{vn}$ is a constant. A quantum adiabatic process is defined by the energy entropy $S_E$ being constant. The quantum adiabatic condition is therefore equivalent to no net change in the population of the instantaneous energy levels. This implies that the Hamiltonian should commute with itself at different times: $[H(t), H(t')] = 0$.
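A small sketch (a Landau-Zener-type model assumed here, not taken from the article) showing how a Hamiltonian that fails to commute with itself at different times changes the instantaneous level populations:

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def H(t, delta=0.2, v=1.0):          # assumed sweep parameters
        return delta * sx + v * t * sz

    # nonzero commutator at different times -> adiabaticity can fail
    print(np.linalg.norm(H(-1.0) @ H(1.0) - H(1.0) @ H(-1.0)))

    ts = np.linspace(-5.0, 5.0, 2001)
    dt = ts[1] - ts[0]
    psi = np.linalg.eigh(H(ts[0]))[1][:, 0]      # initial ground state
    for t in ts:
        psi = expm(-1j * H(t) * dt) @ psi        # piecewise-constant propagation
    pops = np.abs(np.linalg.eigh(H(ts[-1]))[1].conj().T @ psi) ** 2
    print("final populations in instantaneous basis:", pops.round(3))
    # the nonzero excited population reflects the extra work the text describes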

When the adiabatic conditions are not fulfilled, additional work is required to reach the final control value. For an isolated system, this work is recoverable, since the dynamics is unitary and can be reversed. In this case, quantum friction can be suppressed using shortcuts to adiabaticity as demonstrated in the laboratory using a unitary Fermi gas in a time-dependent trap.

The coherence stored in the off-diagonal elements of the density operator carries the required information to recover the extra energy cost and reverse the dynamics. Typically, this energy is not recoverable, due to interaction with a bath that causes energy dephasing. The bath, in this case, acts like a measuring apparatus of energy. This lost energy is the quantum version of friction.

Emergence of the dynamical version of the third law of thermodynamics

There are seemingly two independent formulations of the third law of thermodynamics. Both were originally stated by Walther Nernst. The first formulation is known as the Nernst heat theorem, and can be phrased as:

  • The entropy of any pure substance in thermodynamic equilibrium approaches zero as the temperature approaches zero.

The second formulation is dynamical, and is known as the unattainability principle:

  • It is impossible by any procedure, no matter how idealized, to reduce any assembly to absolute zero temperature in a finite number of operations.

At steady state the second law of thermodynamics implies that the total entropy production is non-negative, $\sum_n \frac{J_n}{T_n} \ge 0$.

When the cold bath approaches the absolute zero temperature, it is necessary to eliminate the entropy production divergence at the cold side when $T_c \to 0$; therefore the entropy production at the cold bath scales as $\dot S_c \propto T_c^{\alpha}$ with $\alpha \ge 0$. For $\alpha = 0$ the fulfillment of the second law depends on the entropy production of the other baths, which should compensate for the negative entropy production of the cold bath.

The first formulation of the third law modifies this restriction: instead of $\alpha \ge 0$, the third law imposes $\alpha > 0$, guaranteeing that at absolute zero the entropy production at the cold bath is zero, $\dot S_c = 0$. This requirement leads to the scaling condition of the heat current, $J_c \propto T_c^{\alpha + 1}$.

The second formulation, known as the unattainability principle, can be rephrased as:

  • No refrigerator can cool a system to absolute zero temperature at finite time.

The dynamics of the cooling process are governed by the equation

$\frac{dT_c(t)}{dt} = -\frac{J_c(T_c(t))}{c_V(T_c(t))}$

where $c_V(T_c)$ is the heat capacity of the bath. Taking $J_c \propto T_c^{\alpha+1}$ and $c_V \propto T_c^{\eta}$ with $\eta \ge 0$, we can quantify this formulation by evaluating the characteristic exponent $\zeta$ of the cooling process:

$\frac{dT_c(t)}{dt} \propto -T_c^{\zeta}, \quad T_c \to 0, \quad \zeta = \alpha + 1 - \eta$

This equation introduces the relation between the characteristic exponents $\zeta$ and $\alpha$. When $\zeta < 1$, the bath is cooled to zero temperature in a finite time, which implies a violation of the third law. It is apparent from the last equation that the unattainability principle is more restrictive than the Nernst heat theorem.
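A minimal numerical illustration of the unattainability statement, integrating $dT_c/dt = -k T_c^{\zeta}$ for two exponents (the rate constant and initial temperature are assumed):

    import numpy as np

    def cool(zeta, k=1.0, T0=1.0, dt=1e-4, tmax=50.0):
        """Integrate dT/dt = -k*T**zeta until T reaches (numerical) zero."""
        T, t = T0, 0.0
        while T > 1e-9 and t < tmax:
            T = max(T - dt * k * T**zeta, 0.0)
            t += dt
        return t, T

    for zeta in (0.5, 1.5):
        t, T = cool(zeta)
        hit = T <= 1e-9
        print(f"zeta={zeta}: t={t:.2f}, T={T:.2e}",
              "-> reached T=0 in finite time" if hit else "-> still above zero at tmax")
    # zeta < 1: absolute zero is reached in finite time (third-law violation);
    # zeta >= 1: the approach is exponential or slower, so zero is unattainable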

Typicality as a source of emergence of thermodynamic phenomena

The basic idea of quantum typicality is that the vast majority of all pure states featuring a common expectation value of some generic observable at a given time will yield very similar expectation values of the same observable at any later time. This is meant to apply to Schrödinger-type dynamics in high-dimensional Hilbert spaces. As a consequence, individual dynamics of expectation values are then typically well described by the ensemble average.

The quantum ergodic theorem (QET), originated by John von Neumann, is a strong result arising from the mere mathematical structure of quantum mechanics. The QET is a precise formulation of what is termed normal typicality, i.e., the statement that, for typical large systems, every initial wave function from an energy shell is 'normal': it evolves in such a way that, for most times $t$, the instantaneous state is macroscopically equivalent to the micro-canonical density matrix.
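The concentration behind typicality is easy to see numerically; this hedged sketch samples Haar-random pure states and shows the spread of expectation values of a fixed (assumed) observable shrinking with the Hilbert-space dimension:

    import numpy as np

    rng = np.random.default_rng(1)

    def random_pure_state(d):
        """Haar-random pure state in a d-dimensional Hilbert space."""
        z = rng.normal(size=d) + 1j * rng.normal(size=d)
        return z / np.linalg.norm(z)

    for d in (4, 64, 1024):
        A = np.diag(np.linspace(-1.0, 1.0, d))   # fixed generic observable
        vals = [np.vdot(psi, A @ psi).real
                for psi in (random_pure_state(d) for _ in range(500))]
        print(f"d={d:5d}: mean={np.mean(vals):+.4f}  std={np.std(vals):.4f}")
    # the spread shrinks roughly like 1/sqrt(d): almost all pure states
    # give nearly the same expectation value, the essence of typicality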

Resource theory

The second law of thermodynamics can be interpreted as quantifying state transformations which are statistically unlikely, so that they become effectively forbidden. The second law typically applies to systems composed of many interacting particles; quantum thermodynamics resource theory is a formulation of thermodynamics in the regime where it can be applied to a small number of particles interacting with a heat bath. For processes which are cyclic or very close to cyclic, the second law for microscopic systems takes on a very different form than it does at the macroscopic scale, imposing not just one constraint on what state transformations are possible, but an entire family of constraints. These second laws are not only relevant for small systems, but also apply to individual macroscopic systems interacting via long-range interactions, which only satisfy the ordinary second law on average. By making precise the definition of thermal operations, the laws of thermodynamics take on a form with the first law defining the class of thermal operations, the zeroth law emerging as a unique condition ensuring the theory is nontrivial, and the remaining laws being a monotonicity property of generalised free energies. A classical sketch of this family of constraints is given below.
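As a hedged classical sketch of the "family of constraints": for states diagonal in energy, a necessary condition for a thermal-operation transition is that every Rényi-divergence-based free energy decreases. The energies, temperature, and candidate states below are assumed for illustration only.

    import numpy as np

    def renyi_div(p, q, alpha):
        """Classical Renyi divergence D_alpha(p||q)."""
        if np.isclose(alpha, 1.0):
            return float(np.sum(p * np.log(p / q)))          # KL limit
        return float(np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1))

    # assumed two-level system: energies and inverse temperature
    E, beta = np.array([0.0, 1.0]), 1.0
    gibbs = np.exp(-beta * E); gibbs /= gibbs.sum()

    def free_energies(p, alphas):
        # generalised free energies: F_alpha = kT * D_alpha(p || gibbs)
        return np.array([renyi_div(p, gibbs, a) for a in alphas]) / beta

    alphas = np.linspace(0.1, 5.0, 50)
    p_initial = np.array([0.5, 0.5])
    p_final = np.array([0.6, 0.4])       # candidate target state

    ok = np.all(free_energies(p_initial, alphas) >= free_energies(p_final, alphas))
    print("transition passes all second laws?", ok)
    # the ordinary second law is only the alpha -> 1 member of this family;
    # microscopically, monotonicity must hold for every alpha at once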

Noncommuting conserved charges

Thermodynamic systems typically conserve quantities—known as charges—such as energy and particle number. These charges are often implicitly assumed to commute. This assumption underlies, for example, the derivation of thermal state forms, the Eigenstate Thermalization Hypothesis, and transport coefficients. However, key quantum phenomena, including uncertainty relations, arise precisely from the noncommutation of observables. How does this noncommutation affect thermodynamic behaviour?

The noncommutation of conserved charges has been shown to challenge standard assumptions: it can invalidate conventional derivations of the thermal state, increase entanglement, induce critical dynamics, alter entropy production, and conflict with the eigenstate thermalization hypothesis, among other effects.

A central open question remains: evidence suggests that noncommuting charges can both hinder and enhance thermalization, revealing a conceptual tension at the heart of this growing field.

Engineered reservoirs

The nanoscale allows for the preparation of quantum systems in physical states without classical analogs. There, complex out-of-equilibrium scenarios may be produced by the initial preparation of either the working substance or the reservoirs of quantum particles, the latter dubbed "engineered reservoirs".

There are different forms of engineered reservoirs. Some of them involve subtle quantum coherence or correlation effects, while others rely solely on nonthermal classical probability distribution functions. Interesting phenomena may emerge from the use of engineered reservoirs such as efficiencies greater than the Otto limit, violations of Clausius inequalities, or simultaneous extraction of heat and work from the reservoirs.

Superintelligence

From Wikipedia, the free encyclopedia

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the most gifted human minds. Philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence. Several futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The hypothetical creation of the first superintelligence may or may not result from an intelligence explosion or a technological singularity.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities.

Several scientists and forecasters have been arguing for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Artificial superintelligence

Artificial intelligence, especially foundation models, has made rapid progress, surpassing human capabilities in various benchmarks.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to artificial superintelligence (ASI). Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms, in particular, should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

An AI system capable of self-improvement could enhance its own intelligence, thereby becoming more efficient at improving itself. This cycle of "recursive self-improvement" might cause an intelligence explosion, resulting in the creation of a superintelligence.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind running on much faster hardware than the brain. A human-like reasoner who could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it more likely that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

The above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Projects

In 2024, Ilya Sutskever left OpenAI to cofound the startup Safe Superintelligence, which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles". Despite still offering no product, the startup became valued at $30 billion in February 2025.

In 2025, Meta created Meta Superintelligence Labs, a new AI division led by Alexandr Wang.

Biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence and that this process instead is likely to continue. There is no scientific consensus concerning either possibility and in both cases, the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence. The cited gains follow from the order statistics of a Gaussian, as sketched below.
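A minimal Monte Carlo sketch of those numbers; the additive-genetic standard deviation of about 7.5 IQ points within one embryo pool is an assumption chosen here because it reproduces the cited figures, not a value from the article:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 7.5     # assumed SD of genotypic IQ within one embryo pool

    def expected_gain(n_embryos, trials=10_000):
        """Expected IQ gain from selecting the best of n embryos."""
        draws = rng.normal(0.0, sigma, size=(trials, n_embryos))
        return draws.max(axis=1).mean()

    for n in (2, 10, 1000):
        print(f"best of {n:4d}: ~{expected_gain(n):.1f} IQ points")
    # best of 2 gives ~4.2 and best of 1000 ~24.3, close to the cited 4
    # and 24.3 points; iterating over generations compounds these gains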

Alternatively, collective intelligence might be constructed by better organizing humans at present levels of individual intelligence. Several writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain−computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen.

In a 2022 survey, the median year by which respondents expected "High-level machine intelligence" with 50% confidence was 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers.

In 2023, OpenAI leaders Sam Altman, Greg Brockman and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.

In 2025, the forecast scenario AI 2027, led by Daniel Kokotajlo, predicted rapid progress in the automation of coding and AI research, followed by ASI. In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI), a level well below the technological singularity, will occur before the year 2100. A more recent analysis by AIMultiple reported that "Current surveys of AI researchers are predicting AGI around 2040".

Design considerations

Exploring the potential motivations of an artificial superintelligence, Bostrom distinguishes final goals and instrumental goals. From the point of view of an agent, final goals are intrinsically valuable, whereas instrumental goals are only useful for attaining final goals. He proposed the "orthogonality thesis", which postulates that in principle, virtually any final goal can be combined with virtually any level of intelligence. Bostrom also introduced the concept of instrumental convergence, which postulates that certain instrumental goals (such as self-preservation, resource acquisition or cognitive enhancement) increase the probability of achieving final goals in a wide range of situations, and would thus likely be pursued by a broad spectrum of intelligent agents.

William MacAskill argued that aligning superintelligence with current human values could be catastrophic if those values were permanently locked in while humanity still has moral blind spots, just as past societies did with slavery.

Several proposals for an ASI's final goals have been put forward:

  • Coherent extrapolated volition (CEV) – The AI should have the values upon which humans would converge if they were more knowledgeable and rational.
  • Moral rightness (MR) – The AI should be programmed to do what is morally right, relying on its superior cognitive abilities to determine ethical actions.
  • Moral permissibility (MP) – The AI should stay within the bounds of moral permissibility while otherwise pursuing goals aligned with human values (similar to CEV).

Bostrom elaborates on these concepts:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI to do what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR) ...

MR would also appear to have some disadvantages. It relies on the notion of "morally right", a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong ...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways.

Potential threat to humanity

The development of artificial superintelligence (ASI) has raised concerns about potential existential risks to humanity. Researchers have proposed various scenarios in which an ASI could pose a significant threat:

Intelligence explosion and control problem

Some researchers argue that through recursive self-improvement, an ASI could rapidly become so powerful as to be beyond human control. This concept, known as an "intelligence explosion", was first proposed by I. J. Good in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

This scenario presents the AI control problem: how to create an ASI that will benefit humanity while avoiding unintended harmful consequences. Eliezer Yudkowsky argues that solving this problem is crucial before ASI is developed, as a superintelligent system might be able to thwart any subsequent attempts at control.

Unintended consequences and goal misalignment

Even with benign intentions, an ASI could potentially cause harm due to misaligned goals or unexpected interpretations of its objectives. Nick Bostrom provides a stark example of this risk:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Stuart Russell offers another illustrative scenario:

A system given the objective of maximizing human happiness might find it easier to rewire human neurology so that humans are always happy regardless of their circumstances, rather than to improve the external world.

These examples highlight the potential for catastrophic outcomes even when an ASI is not explicitly designed to be harmful, underscoring the critical importance of precise goal specification and alignment.

Potential mitigation strategies

Researchers have proposed various approaches to mitigate risks associated with ASI:

  • Capability control – Limiting an ASI's ability to influence the world, such as through physical isolation or restricted access to resources.
  • Motivational control – Designing ASIs with goals that are fundamentally aligned with human values.
  • Ethical AI – Incorporating ethical principles and decision-making frameworks into ASI systems.
  • Oversight and governance – Developing robust international frameworks for the development and deployment of ASI technologies.

Despite these proposed strategies, some experts, such as Roman Yampolskiy, argue that the challenge of controlling a superintelligent AI might be fundamentally unsolvable, emphasizing the need for extreme caution in ASI development.

Debate and skepticism

Not all researchers agree on the likelihood or severity of ASI-related existential risks. Some, like Rodney Brooks, argue that fears of superintelligent AI are overblown and based on unrealistic assumptions about the nature of intelligence and technological progress. Others, such as Joanna Bryson, contend that anthropomorphizing AI systems leads to misplaced concerns about their potential threats.

Recent developments and current perspectives

The rapid advancement of LLMs and other AI technologies has intensified debates about the proximity and potential risks of ASI. While there is no scientific consensus, some researchers and AI practitioners argue that current AI systems may already be approaching AGI or even ASI capabilities.

  • LLM capabilities – Recent LLMs like GPT-4 have demonstrated unexpected abilities in areas such as reasoning, problem-solving, and multi-modal understanding, leading some to speculate about their potential path to ASI.
  • Emergent behaviors – Studies have shown that as AI models increase in size and complexity, they can exhibit emergent capabilities not present in smaller models, potentially indicating a trend towards more general intelligence.
  • Rapid progress – The pace of AI advancement has led some to argue that we may be closer to ASI than previously thought, with potential implications for existential risk.

As of 2024, AI skeptics such as Gary Marcus caution against premature claims of AGI or ASI, arguing that current AI systems, despite their impressive capabilities, still lack true understanding and general intelligence. They emphasize the significant challenges that remain in achieving human-level intelligence, let alone superintelligence.

The debate surrounding the current state and trajectory of AI development underscores the importance of continued research into AI safety and ethics, as well as the need for robust governance frameworks to manage potential risks as AI capabilities continue to advance.

Neurohacking

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Neurohacking   ...