Monday, February 16, 2026

First law of thermodynamics

From Wikipedia, the free encyclopedia

The first law of thermodynamics is a formulation of the law of conservation of energy in the context of thermodynamic processes. For a thermodynamic process affecting a thermodynamic system without transfer of matter, the law distinguishes two principal forms of energy transfer, heat and thermodynamic work. The law also defines the internal energy of a system, an extensive property for taking account of the balance of heat transfer, thermodynamic work, and matter transfer, into and out of the system. Energy cannot be created or destroyed, but it can be transformed from one form to another. In an externally isolated system, with internal changes, the sum of all forms of energy is constant.

An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system to sustain the work of the system continuously.

Definition

For thermodynamic processes of energy transfer without transfer of matter, the first law of thermodynamics is often expressed by the algebraic sum of contributions to the internal energy from all work, W, done on or by the system, and the quantity of heat, Q, supplied to the system. With the sign convention of Rudolf Clausius, that heat supplied to the system is positive but work done by the system is subtracted, a change in the internal energy, ΔU, is written

ΔU = Q − W.

Modern formulations, such as by Max Planck and by IUPAC, often replace the subtraction with addition, ΔU = Q + W, and consider all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of the use of the system, for example as an engine.

When a system expands in an isobaric process, the thermodynamic work, W, done by the system on the surroundings is the product, P ΔV, of system pressure, P, and system volume change, ΔV, whereas −W = −P ΔV is said to be the thermodynamic work done on the system by the surroundings. The change in internal energy of the system is:

ΔU = Q − P ΔV,

where Q denotes the quantity of heat supplied to the system from its surroundings.
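As a minimal numerical sketch of the isobaric relation above, in the Clausius sign convention; the function name and all numbers are illustrative assumptions, not standard quantities:

```python
# Sketch of the first law for an isobaric process, Clausius convention
# (heat supplied is positive; work done BY the system is subtracted).
# Illustrative values only, not from any specific experiment.

def internal_energy_change(heat_in, pressure, volume_change):
    """Return dU = Q - P*dV for an isobaric process (SI units)."""
    work_by_system = pressure * volume_change  # W = P * dV
    return heat_in - work_by_system

# Example: 500 J of heat supplied while the system expands by 1 L
# (1e-3 m^3) against a constant pressure of 101325 Pa (about 1 atm),
# so the gas does about 101.3 J of work on the surroundings.
dU = internal_energy_change(heat_in=500.0, pressure=101325.0,
                            volume_change=1.0e-3)
print(dU)  # dU in joules: 500 J in, minus roughly 101.3 J of expansion work
```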

Work and heat express physical processes of supply or removal of energy, while the internal energy, U, is a mathematical abstraction that keeps account of the changes of energy that befall the system. The term Q is the quantity of energy added or removed as heat in the thermodynamic sense, not referring to a form of energy within the system. Likewise, W denotes the quantity of energy gained or lost through thermodynamic work. Internal energy is a property of the system, while work and heat describe the process, not the system. Thus, a given internal energy change, ΔU, can be achieved by different combinations of heat and work. Heat and work are said to be path dependent, while the change in internal energy depends only on the initial and final states of the system, not on the path between them. Thermodynamic work is measured by change in the system and, because of friction, is not necessarily the same as work measured by forces and distances in the surroundings, though ideally such agreement can sometimes be arranged; this distinction is noted in the term 'isochoric work', done at constant system volume (ΔV = 0), which is not a form of thermodynamic work.
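The path dependence of heat and work, as against the path independence of ΔU, can be made concrete with a small numerical sketch. The monatomic ideal gas, the two chosen paths, and all numbers below are illustrative assumptions, not part of the original account:

```python
import math

# Two different paths from A = (300 K, 0.02 m^3) to B = (300 K, 0.04 m^3)
# for one mole of a monatomic ideal gas (an illustrative assumption).
# Heat Q and work W differ between paths, but dU = Q - W does not.
R = 8.314                    # J/(mol K), gas constant
n, T, V1, V2 = 1.0, 300.0, 0.02, 0.04
cv, cp = 1.5 * R, 2.5 * R    # monatomic ideal gas heat capacities

# Path 1: reversible isothermal expansion.
W1 = n * R * T * math.log(V2 / V1)   # work done by the gas
Q1 = W1                              # isothermal ideal gas: dU = 0, so Q = W

# Path 2: isobaric expansion at the initial pressure (T climbs to 600 K),
# then isochoric cooling back to 300 K.
P_A = n * R * T / V1
W2 = P_A * (V2 - V1)                     # all work done in the isobaric leg
Q2 = n * cp * 300.0 + n * cv * (-300.0)  # heat in, then heat out

dU1, dU2 = Q1 - W1, Q2 - W2
# Different Q and W on the two paths, but the same (zero) change of U:
print(W1 != W2, Q1 != Q2, abs(dU1 - dU2) < 1e-6)
```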

For thermodynamic processes of energy transfer with transfer of matter, the extensive character of internal energy can be stated: for the otherwise isolated combination of two thermodynamic systems with internal energies U₁ and U₂ into a single system with internal energy U,

U = U₁ + U₂.

History

In the first half of the eighteenth century, French philosopher and mathematician Émilie du Châtelet made notable contributions to the emerging theoretical framework of energy, for example by emphasising Gottfried Wilhelm Leibniz's concept of vis viva, mv² (mass times speed squared), as distinct from Isaac Newton's momentum, mv.

Empirical developments of the early ideas, in the century following, wrestled with contravening concepts such as the caloric theory of heat.

In the few years of his life (1796–1832) after the 1824 publication of his book Reflections on the Motive Power of Fire, Sadi Carnot came to understand that the caloric theory of heat was restricted to mere calorimetry, and that heat and "motive power" are interconvertible. This is known only from his posthumously published notes. He wrote:

Heat is simply motive power, or rather motion which has changed its form. It is a movement among the particles of bodies. Wherever there is destruction of motive power, there is at the same time production of heat in quantity exactly proportional to the quantity of motive power destroyed. Reciprocally, wherever there is destruction of heat, there is production of motive power.

At that time, the concept of mechanical work had not been formulated. Carnot was aware that heat could be produced by friction and by percussion, as forms of dissipation of "motive power". As late as 1847, Lord Kelvin believed in the caloric theory of heat, being unaware of Carnot's notes.

In 1840, Germain Hess stated a conservation law (Hess's law) for the heat of reaction during chemical transformations. This law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work.

In 1842, Julius Robert von Mayer made a statement that was rendered by Clifford Truesdell (1980) as "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law, for it does not express the concept of the thermodynamic state variable, the internal energy. Also in 1842, Mayer measured a temperature rise caused by friction in a body of paper pulp. This was near the time of the 1842–1845 work of James Prescott Joule, measuring the mechanical equivalent of heat. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water.

The first full statements of the law came in 1850 from Rudolf Clausius, and from William Rankine. Some scholars consider Rankine's statement less distinct than that of Clausius.

Original statements: the "thermodynamic approach"

The original 19th-century statements of the first law appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, defined by calorimetry. It was presupposed as logically prior to the theoretical development of thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach.

The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes, and to the existence of a function of state of the system, the internal energy. He expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows:

In a thermodynamic process involving a closed system (no transfer of matter), the increment in the internal energy is equal to the difference between the heat accumulated by the system and the thermodynamic work done by it.

Reflecting the experimental work of Mayer and of Joule, Clausius wrote:

In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced.

Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system.

The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, that were revealed by Bohr's energy relation hν = En₁ − En₂. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).

Conceptual revision: the "mechanical approach"

In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat." This definition may be regarded as expressing a conceptual revision, as follows. This reinterpretation was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the "mechanical approach".

Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer.

The "mechanical" approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic, and is unaccompanied by matter transfer. Initially, it "cleverly" (according to Martin Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without such independence; they are defined as non-adiabatic.

This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Hermann von Helmholtz, but also in the work of many others.

Conceptually revised statement, according to the mechanical approach

The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work.

The revised statement is then

For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes.

This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines.

Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks. Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.

Description

Cyclic processes

The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.

A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for a single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system.

In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units.

The constant of proportionality is universal and independent of the system and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat.
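The mechanical equivalent of heat lends itself to a back-of-envelope check. The sketch below uses the modern value of roughly 4.186 J per calorie (close to what Joule converged on); the weight, water mass, and scenario are illustrative assumptions:

```python
# Back-of-envelope sketch of the mechanical equivalent of heat.
# Modern value: about 4.186 J per calorie (illustrative precision).
J_PER_CAL = 4.186
g = 9.81  # m/s^2, standard gravity

# Heat needed to warm 1 kg of water by 1 degree C: 1000 cal.
heat_cal = 1000.0
heat_joules = heat_cal * J_PER_CAL   # about 4186 J

# Height a 1 kg weight must descend to supply that much work: m*g*h.
descent = heat_joules / (1.0 * g)
print(round(descent))  # roughly 427 m of descent per kg per degree
```

The striking size of that figure is one reason the equivalence was hard to establish calorimetrically: mechanical work buys comparatively little heating.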

Various statements of the law for closed systems

The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.

For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'.

There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.

An example of a physical statement is that of Planck (1897/1903):

It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing.

This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium.

An example of a mathematical statement is that of Crawford (1963):

For a given system we let ΔE kin = large-scale mechanical energy, ΔE pot = large-scale potential energy, and ΔE tot = total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition

E tot = E kin + E pot + U.

For any finite process, whether reversible or irreversible,

ΔE tot = ΔE kin + ΔE pot + ΔU.

The first law in a form that involves the principle of conservation of energy more generally is

ΔE tot = Q + W.

Here Q and W are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible.

This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems. Internal energy U is evaluated for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures, relative to a reference state.
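Crawford's bookkeeping, with the IUPAC convention that energy added is positive, can be sketched for a simple scenario. The falling body, the 50/30/20 J split, and the function name are illustrative assumptions:

```python
# Sketch of Crawford-style total-energy bookkeeping (IUPAC signs:
# energy added to the system is positive).  Illustrative numbers only.

def total_energy_change(dE_kin, dE_pot, dU):
    """dE_tot = dE_kin + dE_pot + dU, by definition."""
    return dE_kin + dE_pot + dU

# A falling body loses 50 J of potential energy, gains 30 J of kinetic
# energy, and its internal energy rises 20 J (say, through air friction
# heating the body itself), with no external heat or work: Q = 0, W = 0.
dE_tot = total_energy_change(dE_kin=30.0, dE_pot=-50.0, dU=20.0)
print(dE_tot)  # 0.0: dE_tot = Q + W = 0, total energy is conserved
```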

The history of statements of the law for closed systems has two main periods, before and after the work of George H. Bryan (1907), of Carathéodory (1909), and the approval of Carathéodory's work given by Born (1921). The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date.

Carathéodory's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors.

Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated.

The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures, and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat.

According to A. Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude.

Sometimes the concept of internal energy is not made explicit in the statement.

Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.

A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature. A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.

A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy. Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous". These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).

Evidence for the first law of thermodynamics for closed systems

The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes.

The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).

Adiabatic processes

In an adiabatic process, there is transfer of energy as work but not as heat. For any adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures.

For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the distance descended by the mass. Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring,...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical,... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system.

Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank.
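The Joule-style bookkeeping just described can be sketched numerically: the adiabatic work delivered by a descending weight fixes the temperature rise of the water, whatever device does the stirring. The weight, drop, and water mass below are illustrative assumptions:

```python
# Sketch of paddle-wheel bookkeeping: adiabatic work done on thermally
# isolated water (weight m descending height h) fixes the temperature
# rise, independently of the kind of device.  Illustrative numbers.
g = 9.81          # m/s^2
c_water = 4186.0  # J/(kg K), specific heat of liquid water

def temp_rise(weight_kg, drop_m, water_kg):
    work = weight_kg * g * drop_m       # adiabatic work done on the system
    return work / (water_kg * c_water)  # dT = W / (M * c)

# A 10 kg weight descending 2 m stirs 0.5 kg of thermally isolated water:
print(temp_rise(10.0, 2.0, 0.5))  # a rise of roughly 0.09 K
```

The smallness of the rise (under a tenth of a kelvin here) again illustrates why Joule's measurements demanded such care.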

A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence." Another expression of this view is "no systematic precise experiments to verify this generalization directly have ever been attempted".

This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below.

That important state variable was first recognized and denoted by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it U; and in 1851 by Kelvin, who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function "energy". In 1882 it was named as the internal energy by Helmholtz. If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed. A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states.

In an adiabatic process, adiabatic work takes the system either from a reference state O with internal energy U(O) to an arbitrary one A with internal energy U(A), or from the state A to the state O:

U(A) = U(O) − W(O→A)   or   U(O) = U(A) − W(A→O),

where W denotes the adiabatic work done by the system. Except under the special, and strictly speaking, fictional, condition of reversibility, only one of the processes O→A or A→O is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article.

The fact of such irreversibility may be dealt with in two main ways, according to different points of view:

  • Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory, is to rely on the previously established concept of quasi-static processes, as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, that transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings. This can be taken to justify the formula (1): W(A→B) = ∫ P dV, the quasi-static adiabatic work done by the system, computed as the integral of system pressure over the system volume change.
  • Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula (1) above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions.

The formula (1) above allows that to go by processes of quasi-static adiabatic work from the state A to the state B we can take a path that goes through the reference state O, since the quasi-static adiabatic work is independent of the path.

This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement:

For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called internal energy, U.
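The path-through-the-reference-state argument above can be sketched numerically. For quasi-static adiabats of a monatomic ideal gas (an illustrative assumption), the work done by the gas equals the drop in internal energy, n·cv·(T_initial − T_final), so contributions from A→O and O→B telescope to the direct A→B value:

```python
# Sketch of path independence of quasi-static adiabatic work, using a
# monatomic ideal gas (illustrative assumption): with no heat transfer,
# the work done by the gas equals the drop in internal energy.
R = 8.314
n, cv = 1.0, 1.5 * R   # one mole, monatomic heat capacity

def adiabatic_work(T_from, T_to):
    """Work done by the gas in a quasi-static adiabatic process."""
    return n * cv * (T_from - T_to)

T_A, T_O, T_B = 400.0, 300.0, 350.0   # arbitrary illustrative states
direct = adiabatic_work(T_A, T_B)
via_O = adiabatic_work(T_A, T_O) + adiabatic_work(T_O, T_B)
print(abs(direct - via_O) < 1e-9)  # True: routing through O changes nothing
```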

Adynamic processes

A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by transferring an externally determined amount of heat into it, for instance from a resistive electrical heater inside the calorimeter through which a precisely known electric current is passed at a precisely known voltage for a precisely measured period of time. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as (surroundings-based) work. According to one textbook, "The most common device for measuring ΔU is an adiabatic bomb calorimeter." According to another textbook, "Calorimetry is widely used in present day laboratories." According to one opinion, "Most thermodynamic data come from calorimetry".
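The electrical calibration described above rests on simple Joule-heating arithmetic: the heater delivers a precisely known energy Q = V·I·t, against which the calorimeter reading is compared. The values below are illustrative assumptions:

```python
# Sketch of electrical calibration of a calorimeter: a resistive heater
# delivers a known quantity of energy Q = V * I * t.  Illustrative values.

def heater_energy(volts, amps, seconds):
    """Energy dissipated by a resistive heater, in joules."""
    return volts * amps * seconds

Q = heater_energy(volts=12.0, amps=0.5, seconds=600.0)
print(Q)  # 12 V * 0.5 A * 600 s = 3600.0 J delivered to the calorimeter
```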

When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy:

ΔU = Q.

General case for reversible processes

Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be reversible in the strict thermodynamic sense. For a particular reversible process in general, the work done reversibly on the system, W, and the heat transferred reversibly to the system, Q, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously.

Putting the two complementary aspects together, the first law for a particular reversible process can be written

ΔU = Q + W.

This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems.
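
As a minimal numeric sketch of this combined statement, the two sign conventions described at the start of the article can be checked against each other; the 500 J and 200 J figures are invented for illustration.

```python
# First-law bookkeeping for a closed system, in the two sign conventions
# mentioned earlier in the article (numbers are illustrative).

def delta_u_iupac(q_to_system, w_on_system):
    """Planck/IUPAC convention: all transfers TO the system are positive."""
    return q_to_system + w_on_system

def delta_u_clausius(q_to_system, w_by_system):
    """Clausius convention: work done BY the system is subtracted."""
    return q_to_system - w_by_system

# A gas absorbs 500 J of heat and does 200 J of work on its surroundings.
# Both conventions must give the same change of internal energy.
du1 = delta_u_iupac(q_to_system=500.0, w_on_system=-200.0)
du2 = delta_u_clausius(q_to_system=500.0, w_by_system=200.0)
print(du1, du2)  # 300.0 300.0
```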

In particular, if no work is done on a thermally isolated closed system we have

ΔU = 0.

This is one aspect of the law of conservation of energy and can be stated:

The internal energy of an isolated system remains constant.

General case for irreversible processes

If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient, practically frictionless, and with nearly balanced forces, then the process is irreversible. Then the heat and work transfers may be difficult to calculate with high accuracy, although the simple equations for reversible processes still hold to a good approximation in the absence of composition changes. Importantly, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, W, and the heat transferred irreversibly to the system, Q, which belong to the same particular process defined by its particular irreversible path through the space of thermodynamic states.

This means that the internal energy is a function of state and that the internal energy change between two states is a function only of the two states.

Overview of the weight of evidence for the law

The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established, that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.

State functional formulation for infinitesimal processes

When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by δ, rather than exact differentials denoted by d, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process.
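
The path dependence of the inexact differentials δQ and δW, against the path independence of dU, can be illustrated numerically. The sketch below assumes a monatomic ideal gas (U = (3/2)nRT, an assumption introduced here for illustration) and compares two different reversible paths between the same pair of end states.

```python
import math

# Numerical illustration: work (inexact differential) depends on the path,
# while the internal-energy change (exact differential) does not.
R = 8.314  # J/(mol K), molar gas constant
n = 1.0    # mol
V1, V2 = 1.0e-3, 2.0e-3   # m^3
T1, T2 = 300.0, 400.0     # K

def w_isothermal(T, Va, Vb):
    """Reversible work done ON the gas during isothermal change: -nRT ln(Vb/Va)."""
    return -n * R * T * math.log(Vb / Va)

def du_ideal(Ta, Tb):
    """Internal-energy change of a monatomic ideal gas: (3/2) n R (Tb - Ta)."""
    return 1.5 * n * R * (Tb - Ta)

# Path A: isothermal expansion at T1, then isochoric heating (no work).
w_a = w_isothermal(T1, V1, V2)
# Path B: isochoric heating first, then isothermal expansion at T2.
w_b = w_isothermal(T2, V1, V2)

du = du_ideal(T1, T2)          # same for both paths: U is a state function
q_a, q_b = du - w_a, du - w_b  # heat differs between paths to compensate

print(w_a != w_b)  # True: the work integral depends on the path taken
```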

The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy U may then be expressed as a function of the system's defining state variables S, entropy, and V, volume: U = U (S, V). In these terms, T, the system's temperature, and P, its pressure, are partial derivatives of U with respect to S and V. These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system's internal thermodynamic equilibrium.

The first law requires that:

dU = δQ + δW.

Then, for the fictive case of a reversible process, dU can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system and between system and surroundings. Then, mechanical work is given by δW = −P dV and the quantity of heat added can be expressed as δQ = T dS. For these conditions,

dU = T dS − P dV.

While this has been shown here for reversible changes, it is valid more generally in the absence of chemical reactions or phase transitions, as U can be considered as a thermodynamic state function of the defining state variables S and V:

dU = T dS − P dV.        (2)

Equation (2) is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are S and V, with respect to which T and P are partial derivatives of U. It is only in the reversible case or for a quasistatic process without composition change that the work done and heat transferred are given by P dV and T dS.

In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes:

dU = T dS − P dV + Σi μi dNi

where dNi is the (small) increase in number of type-i particles in the reaction, and μi is known as the chemical potential of the type-i particles in the system. If dNi is expressed in mol then μi is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to:

dU = T dS − Σi Xi dxi + Σj μj dNj

Here the Xi are the generalized forces corresponding to the external variables xi. The parameters Xi are independent of the size of the system and are called intensive parameters and the xi are proportional to the size and called extensive parameters.

For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems.

A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system.

It is useful to view the T dS term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement.

Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.
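
A toy sketch of this evaporation–condensation balance: the sign of the chemical-potential difference sets the direction of net transfer, and equal potentials give zero net rate. The linear rate law and the numbers are assumptions for illustration only, not a physical model.

```python
# Toy model: net transfer of molecules is driven by the chemical-potential
# difference and stops at equilibrium. The linear response coefficient k is
# an assumption, not a physical law.

def net_evaporation_rate(mu_liquid, mu_vapor, k=1.0):
    """Net molecules leaving the liquid per unit time (arbitrary units)."""
    return k * (mu_liquid - mu_vapor)

print(net_evaporation_rate(-237.0, -240.0) > 0)   # True: liquid -> vapor
print(net_evaporation_rate(-240.0, -237.0) < 0)   # True: vapor -> liquid
print(net_evaporation_rate(-238.5, -238.5) == 0)  # True: equilibrium, no net flow
```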

The two thermodynamic parameters that form a generalized force-displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure-volume, and temperature-entropy.

Fluid dynamics

In fluid dynamics, the first law of thermodynamics is written in the local form

DEt/Dt = ∇·(σ·v) − ∇·q,

where D/Dt is the material derivative, Et the total energy of the fluid element, σ the stress tensor, v the flow velocity, and q the heat-flux vector.

Spatially inhomogeneous systems

Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if denotes the total energy of that component system, one may write

E = Ekin + Epot + U

where Ekin and Epot denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and U denotes its internal energy.

Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.

A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction between the subsystems. Thus, in an obvious notation, one may write

E = Ekin,1 + Epot,1 + U1 + Ekin,2 + Epot,2 + U2 + Epot,12

The quantity Epot,12 in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.

The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy. The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.

First law of thermodynamics for open systems

For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view. For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed.

There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.

Internal energy for an open system

Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics". In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies. The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.

In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that

ΔUs + ΔUo = 0,

where ΔUs and ΔUo denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, that fits well with the conceptually revised and rigorous statement of the law stated above.

For the thermodynamic operation of adding two systems with internal energies U1 and U2, to produce a new system with internal energy U, one may write U = U1 + U2; the reference states for U, U1 and U2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.

There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.

Also of course

ΔNs + ΔNo = 0,

where ΔNs and ΔNo denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.

Process of transfer of matter between an open system and its surroundings

A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.

An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature.

A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.

Open system with multiple contacts

An open system can be in contact equilibrium with several other systems at once.

This includes cases in which there is contact equilibrium between the system, and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.

With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:

ΔU0 = Q − W − Σi ΔUi        (3)

where ΔU0 denotes the change of internal energy of the system, and ΔUi denotes the change of internal energy of the ith of the m surrounding subsystems that are in open contact with the system, due to transfer between the system and that ith surrounding subsystem, and Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and W denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.
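
The balance just stated is plain bookkeeping. The following sketch, with invented numbers, checks that the system's internal-energy change plus the energy gained by the matter-connected surrounding subsystems accounts exactly for the heat supplied minus the work delivered.

```python
# Bookkeeping sketch for the open-system balance described above: the system's
# internal-energy change equals heat in, minus work out, minus the internal
# energy gained by the m matter-connected surrounding subsystems.
# All figures are illustrative assumptions.

def delta_u_system(q_in, w_out, delta_u_surroundings):
    """First-law balance: dU0 = Q - W - sum(dUi)."""
    return q_in - w_out - sum(delta_u_surroundings)

q = 1000.0             # heat from the reservoir to the system, J
w = 300.0              # work done by the system, J
du_i = [150.0, -50.0]  # internal-energy changes of the open surrounding subsystems, J
du0 = delta_u_system(q, w, du_i)
print(du0)  # 600.0

# Energy is conserved across system plus matter-connected subsystems:
assert abs((du0 + sum(du_i)) - (q - w)) < 1e-9
```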

Combination of first and second laws

If the system is described by the energetic fundamental equation, U0 = U0(S, V, Nj), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula

dU0 = T dS − P dV + Σj μj dNj        (4)

where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, Nj, and μj, are defined as above.

For a general natural process, there is no immediate term-wise correspondence between equations (3) and (4), because they describe the process in different conceptual frames.

Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely.

For the special fictive case of quasi-static transfers, there is a simple correspondence. For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write

δQ = T dS − Σj T sj dNj

where dNj is the added amount of species j and sj is the corresponding molar entropy.

For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield

ΔU0 = Q − W + Σj hj ΔNj        (5)

where hj is the molar enthalpy of species j.
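
The passage from equation (4) to equation (5) can be written out as follows. This is a reconstruction assuming the standard molar identity μj = hj − T sj, together with the quasi-static expressions δQ = T dS − Σj T sj dNj and δW = −P dV:

```latex
% Substituting the quasi-static heat expression into the fundamental
% relation (4), and using the molar identity \mu_j = h_j - T s_j:
\begin{aligned}
\mathrm{d}U_0 &= T\,\mathrm{d}S - P\,\mathrm{d}V
                + \sum_{j=1}^{n}\mu_j\,\mathrm{d}N_j\\
 &= \underbrace{\Big(T\,\mathrm{d}S
     - \sum_{j=1}^{n} T s_j\,\mathrm{d}N_j\Big)}_{\delta Q}
   - P\,\mathrm{d}V
   + \sum_{j=1}^{n}\big(\mu_j + T s_j\big)\,\mathrm{d}N_j\\
 &= \delta Q - P\,\mathrm{d}V + \sum_{j=1}^{n} h_j\,\mathrm{d}N_j .
\end{aligned}
```

Integrated over the process, this gives ΔU0 = Q − W + Σj hj ΔNj, with W the work done by the system.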

Non-equilibrium transfers

The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.

The first law of thermodynamics for any process on the specification of equation (3) can be defined as

ΔU = ΔQ − p ΔV + Σj hj ΔNj        (6)

where ΔU denotes the change of internal energy of the system, ΔQ denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, p ΔV denotes the work done by the system, and hj is the molar enthalpy of species j coming into the system from the surroundings in contact with it.

Formula (6) is valid in the general case, both for quasi-static and for irreversible processes. The situation of the quasi-static process was considered in the previous section, which in our terms defines

ΔQ = T ΔS − Σj T sj ΔNj.

To describe deviation of the thermodynamic system from equilibrium, in addition to the fundamental variables that are used to fix the equilibrium state, as described above, a set of variables called internal variables has been introduced, which allows the law to be formulated for the general case as well.

Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above, in which there is no actual transfer of matter and the process can be treated as if it were for a closed system, it follows that, in strictly defined thermodynamic terms, transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers; for example, transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.

Usually, transfer between a system and its surroundings applies to transfer of a state variable and obeys a balance law: the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, Prigogine carefully explains at some length that his definition does not obey a balance law. He describes this as paradoxical.

The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density need not be constant per unit mass of material, and because internal energy is not locally conserved, owing to conversion of kinetic energy of bulk flow into internal energy by viscosity.

Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics". Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow ρuv and a conduction flow. This conduction flow is by definition the heat flow W. Therefore: j[U] = ρuv + W where u denotes the [internal] energy per unit mass. [These authors actually use the symbols E and e to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol U to refer to total energy, including kinetic energy of bulk flow.]" This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases. 
This is not the ad hoc definition of "reduced heat flux" of Rolf Haase.

In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.

Sunday, February 15, 2026

Debt crisis

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Debt_crisis

A debt crisis is a situation in which a government (national, state/provincial, county, or municipal) loses the ability to pay back its debt. When the expenditures of a government exceed its tax revenues for a prolonged period, the government may enter into a debt crisis. Governments typically finance their expenditures primarily by raising money through taxation. When tax revenues are insufficient, the government can make up the difference by issuing debt.

[Chart: Public debt as a percent of GDP, evolution for USA, Japan and the main EU economies.]
[Map: Public debt as a percent of GDP by IMF (2024), shaded in bands from 0–25% to >100%.]

The term debt crisis can also refer more generally to a proliferation of massive public debt relative to tax revenues, especially in reference to Latin American countries during the 1980s, the United States and the European Union since the mid-2000s, and the Chinese debt crisis of 2015.

The development charity CAFOD states that in current (2024) conditions, more than 50 countries are in debt crisis.

Debt wall

Hitting the debt wall is a dire financial situation that can occur when a nation that depends on foreign debt or investment to subsidize its budget finds that its commercial deficits are no longer financed by foreign capital inflows. The lack of foreign capital inflows reduces the demand for the local currency. The increased supply of currency coupled with decreased demand then causes a significant devaluation of the currency. This hurts the industrial base of the country, since it can no longer afford to buy the imported supplies needed for production. Further, any obligations in foreign currency are now significantly more expensive to service, both for the government and for businesses.
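
The last point is simple exchange-rate arithmetic. In the hypothetical sketch below (all figures invented), a devaluation of the local currency from 10 to 25 units per dollar multiplies the local-currency cost of servicing an unchanged dollar debt by 2.5.

```python
# Illustrative arithmetic: a debt denominated in a foreign currency becomes
# more expensive to service, in local-currency terms, after a devaluation.
# All figures are hypothetical.

def local_cost(foreign_debt_service, fx_rate_local_per_foreign):
    """Local-currency cost of a foreign-currency payment."""
    return foreign_debt_service * fx_rate_local_per_foreign

annual_service_usd = 2.0e9  # USD owed per year (assumed)
rate_before = 10.0          # local units per USD before the crisis
rate_after = 25.0           # after the local currency loses 60% of its value

before = local_cost(annual_service_usd, rate_before)  # 2.0e10 local units
after = local_cost(annual_service_usd, rate_after)    # 5.0e10 local units
print(after / before)  # 2.5: same foreign debt, 2.5x the local-currency cost
```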

By region

Eurozone

European debt crisis

The European debt crisis is a crisis affecting several eurozone countries since the end of 2009. Member states affected by this crisis were unable to repay their government debt or to bail out indebted financial institutions without the assistance of third parties (namely the International Monetary Fund, the European Commission, and the European Central Bank). The causes of the crisis included high-risk lending and borrowing practices, burst real estate bubbles, and hefty deficit spending. As a result, investors have reduced their exposure to European investment products, and the value of the euro has decreased.

Sovereign credit default swaps for EU countries in 2010-2013
Sovereign CDS showing a temporary loss of confidence in creditworthiness of certain EU countries. The left axis is in basis points; a level of 1,000 means it costs $1 million to protect $10 million of debt for five years.

The 2008 financial crisis began with a crisis in the subprime mortgage market in the United States, and developed into a full-blown international banking crisis with the collapse of the investment bank Lehman Brothers on 15 September 2008. The crisis was followed by a global economic downturn, the Great Recession. The European debt crisis, a crisis in the banking system of the European countries using the euro, followed later.

Stress in the sovereign debt markets of the PIIGS (Portugal, Ireland, Italy, Greece, Spain) created unprecedented funding pressure that spread to the national banks of the eurozone countries and the European Central Bank (ECB) in 2010. The PIIGS announced strong fiscal reforms and austerity measures, but toward the end of the year the euro once again suffered from stress.

Causes

The eurozone crisis resulted from the structural problem of the eurozone and a combination of complex factors, including the globalisation of finance, easy credit conditions during the 2002–2008 period that encouraged high-risk lending and borrowing practices, the 2008 financial crisis, international trade imbalances, real estate bubbles that have since burst; the Great Recession of 2008–2012, fiscal policy choices related to government revenues and expenses, and approaches used by states to bail out troubled banking industries and private bondholders, assuming private debt burdens or socializing losses.

In 1992, members of the European Union signed the Maastricht Treaty, under which they pledged to limit their deficit spending and debt levels. However, in the early 2000s, some EU member states were failing to stay within the confines of the Maastricht criteria and turned to securitising future government revenues to reduce their debts and/or deficits, sidestepping best practice and ignoring international standards. This allowed the sovereigns to mask their deficit and debt levels through a combination of techniques, including inconsistent accounting, off-balance-sheet transactions, and the use of complex currency and credit derivatives structures.

From late 2009 on, after Greece's newly elected PASOK government stopped masking its true indebtedness and budget deficit, fears of sovereign defaults in certain European states developed in the public, and the government debt of several states was downgraded. The crisis subsequently spread to Ireland and Portugal, while raising concerns about Italy, Spain, the European banking system, and more fundamental imbalances within the eurozone.

Greek debt crisis

Timeline

2009 December - One of the world's three leading rating agencies downgrades Greece's credit rating amid fears the government could default on its ballooning debt. PM Papandreou announces a programme of tough public spending cuts.

2010 January–March - Two more rounds of tough austerity measures are announced by the government, which faces mass protests and strikes.

2010 April–May - It was estimated that up to 70% of Greek government bonds were held by foreign investors, primarily banks. After publication of GDP data showing an intermittent period of recession starting in 2007, credit rating agencies downgraded Greek bonds to junk status in late April 2010. On 1 May 2010, the Greek government announced a series of austerity measures.

100,000 people protest against the austerity measures in front of parliament building in Athens (29 May 2011).

2011 July – November - The debt crisis deepens. All three main credit rating agencies cut Greece's rating to a level associated with substantial risk of default. In November 2011, faced with a storm of criticism over his referendum plan, PM Papandreou withdraws it and then announces his resignation.

Protests in Athens on 25 May 2011

2012 February - December - The second bailout programme was ratified in February 2012. A total of €240 billion was to be transferred in regular tranches through December 2014. The recession worsened and the government continued to dither over bailout program implementation. In December 2012 the Troika provided Greece with more debt relief, while the IMF extended an extra €8.2bn of loans to be transferred from January 2015 to March 2016.

2014 - In 2014 the outlook for the Greek economy was optimistic. The government predicted a structural surplus in 2014, opening access to the private lending market to the extent that its entire financing gap for 2014 was covered via private bond sales.

2015 June – July - The Greek parliament approved the referendum with no interim bailout agreement. Many Greeks continued to withdraw cash from their accounts fearing that capital controls would soon be invoked. On 13 July, after 17 hours of negotiations, Eurozone leaders reached a provisional agreement on a third bailout programme, substantially the same as their June proposal. Many financial analysts, including the largest private holder of Greek debt, private equity firm manager, Paul Kazarian, found issue with its findings, citing it as a distortion of net debt position.

2017 - The Greek finance ministry reported that the government's debt load was €226.36 billion after increasing by €2.65 billion in the previous quarter. In June 2017, news reports indicated that the "crushing debt burden" had not been alleviated and that Greece was at risk of defaulting on some payments.

2018 - Greece was declared to have successfully exited the bailouts on 20 August 2018.

Greek debt restructuring

The Greek debt restructuring of 2012 is notable in the history of sovereign defaults. It resulted in substantial debt relief and was implemented with relatively limited financial disruption. The process relied on a mix of new legal mechanisms, significant cash incentives, and involvement from the official sector in coordinating creditor participation. At the same time, the timing and structure of the restructuring had important implications. Some aspects of the design may have reduced the overall gains for Greece, set certain precedents, and increased potential risks for taxpayers, particularly due to the favorable treatment of holdout creditors. These factors may influence the complexity of future debt restructurings in Europe.

Effects

The most characteristic feature of the Greek social landscape in the current crisis is the steep rise in joblessness. The unemployment rate had fluctuated around the 10% mark in the first half of the previous decade, then began to fall until May 2008, when unemployment reached its lowest level in over a decade (325,000 workers, or 6.6% of the labour force). While job losses involved an unusually high number of workers, the loss of earnings for those still in employment was also significant: average real gross earnings for employees have lost more ground since the onset of the crisis than they gained in the nine years before it.

In February 2012, it was reported that 20,000 Greeks had been made homeless during the preceding year, and that 20% of shops in the historic city centre of Athens were empty.

Latin America

The U.S. foreign policy known as the Roosevelt Corollary asserted that the United States would intervene on behalf of European countries to avoid those countries intervening militarily to press their interests, including repayment of debts. This policy was used to justify interventions in the early 1900s in Venezuela, Cuba, Nicaragua, Haiti, and the Dominican Republic (1916–1924).

Argentine debt crisis

Background

Argentina has a history of chronic economic, monetary and political problems. In 1989, Carlos Menem became president. After some fumbling, he adopted a free-market approach that reduced the burden of government by privatizing, deregulating, cutting some tax rates, and reforming the state. The centerpiece of Menem's policies was the Convertibility Law, which took effect on 1 April 1991. Argentina's reforms were faster and deeper than those of any country of the time outside the former communist bloc. Real GDP grew more than 10 percent a year in 1991 and 1992, before slowing to a more normal rate of slightly below 6 percent in 1993 and 1994.

The 1998–2002 Argentine great depression was an economic depression in Argentina, which began in the third quarter of 1998 and lasted until the second quarter of 2002. It almost immediately followed the 1974–1990 Great Depression after a brief period of rapid economic growth.

Effects

Depositors protest the freezing of their accounts, mostly in dollars. They were converted to pesos at less than half their new value.

Several thousand homeless and jobless Argentines found work as cartoneros, cardboard collectors. An estimate in 2003 had 30,000 to 40,000 people scavenging the streets for cardboard to sell to recycling plants. Such desperate measures were common because of the unemployment rate, nearly 25%.

Argentine agricultural products were rejected in some international markets for fear that they might have been damaged by the chaos. The US Department of Agriculture put restrictions on Argentine food and drug exports.

Debt restructuring history

President Néstor Kirchner and Economy Minister Roberto Lavagna, who presented the first debt restructuring offer in 2005

2005 - Following these developments, Venezuela became one of the largest single investors in Argentine bonds, buying a total of more than $5 billion in restructured Argentine bonds from 2005 to 2007. Between 2001 and 2006, Venezuela was the largest single buyer of Argentina's debt. In 2005 and 2006, Banco Occidental de Descuento and Fondo Común, owned by Venezuelan bankers Victor Vargas Irausquin and Victor Gill Ramirez respectively, bought most of Argentina's outstanding bonds and resold them on the market. The banks bought $100 million worth of Argentine bonds and resold them for a profit of approximately $17 million. Critics of Vargas have said that he made a $1 billion "backroom deal" involving swaps of Argentine bonds as a sign of his friendship with Chávez. The Financial Times interviewed financial analysts in the United States who said that the banks profited from the resale of the bonds; the Venezuelan government did not.

Bondholders who had accepted the 2005 swap (three out of four did so) saw the value of their bonds rise 90% by 2012, and these continued to rise strongly during 2013.

2010 - On 15 April 2010, the debt exchange was re-opened to bondholders who had rejected the 2005 swap; 67% of these accepted the swap, leaving 7% as holdouts. Holdouts continued to put pressure on the government by attempting to seize Argentine assets abroad, and by suing to attach future Argentine payments on restructured debt so as to receive better treatment than cooperating creditors.

The government reached an agreement in 2005 by which 76% of the defaulted bonds were exchanged for other bonds at a nominal value of 25 to 35% of the original and at longer terms. A second debt restructuring in 2010 brought the percentage of bonds out of default to 93%, but some creditors have still not been paid. Foreign currency denominated debt thus fell as a percentage of GDP from 150% in 2003 to 8.3% in 2013.

United States

On 19 January 2023, the United States again reached its debt ceiling. In February 2024, total federal government debt reached $34.4 trillion, after growing by approximately $1 trillion in each of two separate 100-day periods since the previous June.

Sub-Saharan Africa

Sub-Saharan Africa has a long history of external debt, beginning in the 1980s when the public finances of many countries sharply declined following several external shocks. This led to a “lost decade” of low economic growth, increased poverty, food insecurity and socio-political instability. However, the implementation of debt relief under the Heavily Indebted Poor Countries (HIPC) initiative and the supplementary Multilateral Debt Relief Initiative (MDRI) wiped out most of Sub-Saharan Africa’s external debts. These debt relief initiatives substantially reduced nominal public debt to sustainable levels, bringing it from a GDP-weighted average of 104 percent before their implementation to nearly 30 percent during the period from 2006 to 2011.

According to World Bank data, the Sub-Saharan African governments' foreign debt tripled between 2009 and 2022. According to IMF (2024), 7 African countries are in debt distress (Republic of the Congo, Ghana, Malawi, Sudan, São Tomé & Príncipe, Zambia and Zimbabwe), and 13 more are at risk of becoming debt distressed. Unlike previous debt crises, the current one is characterised by a shift from multilateral to commercial and bilateral creditors, notably China, and the proliferation of Eurobonds, aggravating debt conditions.

Pressured by heavy debt burdens, there is a risk that African governments divert funds from essential sectors such as education, health care and agriculture, causing a vicious cycle of stalled development, food insecurity and an elevated risk of socio-political instability.

Protein engineering

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Protein_engineering

Protein engineering is the process of developing useful or valuable proteins through the design and production of unnatural polypeptides, often by altering amino acid sequences found in nature. It is a young discipline, with much research taking place into the understanding of protein folding and recognition for protein design principles. It has been used to improve the function of many enzymes for industrial catalysis. It is also a product and services market, with an estimated value of $168 billion by 2017.

There are two general strategies for protein engineering: rational protein design and directed evolution. These methods are not mutually exclusive; researchers often apply both. In the future, more detailed knowledge of protein structure and function, and advances in high-throughput screening, may greatly expand the capabilities of protein engineering. Eventually, even unnatural amino acids may be included via newer methods, such as expanded genetic codes, that allow novel amino acids to be encoded genetically. The applications, in numerous fields including medicine and industrial bioprocessing, are vast.

Approaches

Rational design

In rational protein design, a scientist uses detailed knowledge of the structure and function of a protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis methods are well developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable and, even when available, it can be very difficult to predict the effects of various mutations, since structural information most often provides only a static picture of a protein structure. However, programs such as Folding@home and Foldit have utilized crowdsourcing techniques to gain insight into the folding motifs of proteins.

Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones.

Multiple sequence alignment

Without structural information about a protein, sequence analysis is often useful in elucidating information about it. These techniques involve alignment of target protein sequences with other related protein sequences. This alignment can show which amino acids are conserved between species and are important for the function of the protein. These analyses can help to identify hot-spot amino acids that can serve as target sites for mutations. Multiple sequence alignment utilizes databases such as PREFAB, SABMARK, OXBENCH, IRMBASE, and BALIBASE to cross-reference target protein sequences with known sequences. Multiple sequence alignment techniques are listed below.

Clustal W

This method begins by performing a pairwise alignment of sequences using the k-tuple or Needleman–Wunsch methods. These methods calculate a matrix depicting the pairwise similarity among the sequence pairs. Similarity scores are then transformed into distance scores, which are used to produce a guide tree using the neighbor-joining method. This guide tree is then employed to yield a multiple sequence alignment.
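As an illustration of the pairwise step, here is a minimal Needleman–Wunsch global aligner; the match, mismatch and gap scores are arbitrary illustrative choices, not those used by any particular Clustal release:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Globally align sequences a and b; return (aligned_a, aligned_b, score)."""
    n, m = len(a), len(b)
    # Dynamic-programming score matrix, gap penalties along the edges.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    # Traceback from the bottom-right corner.
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            out_a.append(a[i - 1])
            out_b.append(b[j - 1])
            i, j = i - 1, j - 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            out_a.append(a[i - 1])
            out_b.append('-')
            i -= 1
        else:
            out_a.append('-')
            out_b.append(b[j - 1])
            j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b)), F[n][m]

aligned_a, aligned_b, score = needleman_wunsch("HEAGAWGHEE", "PAWHEAE")
```

In a progressive aligner, the resulting pairwise scores seed the distance matrix from which the guide tree is built.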

Clustal Omega

This method is capable of aligning up to 190,000 sequences by utilizing the k-tuple method. Sequences are next clustered using the mBed and k-means methods, and a guide tree is constructed using the UPGMA method, which is used by the HHalign package. This guide tree is used to generate multiple sequence alignments.

MAFFT

This method utilizes a fast Fourier transform (FFT) that converts amino acid sequences into sequences composed of volume and polarity values for each residue. These new sequences are used to find homologous regions.

K-Align

This method utilizes the Wu-Manber approximate string matching algorithm to generate multiple sequence alignments.

Multiple sequence comparison by log expectation (MUSCLE)

This method utilizes k-mer and Kimura distances to generate multiple sequence alignments.
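Both distances are short formulas: a common form of the k-mer distance is one minus the fraction of shared k-mers, and the Kimura correction transforms fractional identity from an alignment. The sketch below follows those published forms; the choice of k is illustrative:

```python
import math
from collections import Counter

def kmer_distance(a, b, k=3):
    """Alignment-free distance: 1 minus the fraction of k-mers shared
    between the two sequences (relative to the shorter sequence)."""
    ka = Counter(a[i:i + k] for i in range(len(a) - k + 1))
    kb = Counter(b[i:i + k] for i in range(len(b) - k + 1))
    shared = sum((ka & kb).values())          # multiset intersection
    smaller = min(sum(ka.values()), sum(kb.values()))
    return 1.0 - shared / smaller if smaller else 1.0

def kimura_distance(fractional_identity):
    """Kimura correction applied to fractional identity from an alignment;
    reasonable for identities above roughly 25%."""
    D = 1.0 - fractional_identity
    return -math.log(1.0 - D - D * D / 5.0)
```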

T-Coffee

This method utilizes a tree-based consistency objective function for alignment evaluation. This method has been shown to be 5–10% more accurate than Clustal W.

Coevolutionary analysis

Coevolutionary analysis is also known as correlated mutation, covariation, or co-substitution analysis. This type of rational design involves reciprocal evolutionary changes at evolutionarily interacting loci. Generally, the method begins with the generation of a curated multiple sequence alignment for the target sequence. This alignment is then subjected to manual refinement, involving removal of highly gapped sequences as well as sequences with low sequence identity. This step increases the quality of the alignment. Next, the manually processed alignment is utilized for further coevolutionary measurements using distinct correlated mutation algorithms. These algorithms result in a coevolution scoring matrix. This matrix is filtered by applying various significance tests to extract significant coevolution values and wipe out background noise. Coevolutionary measurements are further evaluated to assess their performance and stringency. Finally, the results from this coevolutionary analysis are validated experimentally.
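A common baseline for such a correlated-mutation score is the mutual information between two alignment columns, which would fill one entry of the coevolution scoring matrix. The sketch below computes it for a tiny invented alignment; real pipelines add corrections for phylogenetic bias and sampling noise:

```python
import math
from collections import Counter

def mutual_information(alignment, i, j):
    """Mutual information between columns i and j of a multiple sequence
    alignment -- a baseline correlated-mutation score."""
    col_i = [seq[i] for seq in alignment]
    col_j = [seq[j] for seq in alignment]
    n = len(alignment)
    freq_i = Counter(col_i)                # single-column frequencies
    freq_j = Counter(col_j)
    freq_ij = Counter(zip(col_i, col_j))   # joint pair frequencies
    mi = 0.0
    for (a, b), count in freq_ij.items():
        p_ab = count / n
        mi += p_ab * math.log(p_ab / ((freq_i[a] / n) * (freq_j[b] / n)))
    return mi

# Tiny invented alignment: columns 0 and 1 covary perfectly,
# while columns 0 and 2 are independent.
aln = ["AKLV", "AKIV", "GRLV", "GRIV"]
```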

Structural prediction

De novo generation of proteins benefits from knowledge of existing protein structures, which assists with the prediction of new structures. Methods for protein structure prediction fall under one of four classes: ab initio, fragment-based methods, homology modeling, and protein threading.

Ab initio

These methods involve free modeling without using any structural information about the template. Ab initio methods aim to predict the native structures of proteins corresponding to the global minimum of their free energy. Some examples of ab initio methods are AMBER, GROMOS, GROMACS, CHARMM, OPLS, and ENCEPP12. General steps for ab initio methods begin with the geometric representation of the protein of interest. Next, a potential energy function model for the protein is developed, using either molecular mechanics potentials or protein-structure-derived potential functions. Following the development of a potential model, energy search techniques, including molecular dynamics simulations, Monte Carlo simulations and genetic algorithms, are applied to the protein.
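As a cartoon of the energy-search step, the sketch below runs a Metropolis Monte Carlo search on an invented one-dimensional double-well "energy landscape"; real ab initio packages search molecular-mechanics potentials over full protein conformations:

```python
import math
import random

def energy(x):
    # Invented double well: local minimum near x = +0.93,
    # global minimum near x = -1.06 with energy about -1.51.
    return x**4 - 2 * x**2 + 0.5 * x

def metropolis(steps=20000, temperature=0.3, step_size=0.2, seed=0):
    """Return the lowest-energy point found by a Metropolis random walk."""
    rng = random.Random(seed)
    x = 0.0
    best_x, best_e = x, energy(x)
    for _ in range(steps):
        trial = x + rng.gauss(0, step_size)
        dE = energy(trial) - energy(x)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if dE <= 0 or rng.random() < math.exp(-dE / temperature):
            x = trial
            if energy(x) < best_e:
                best_x, best_e = x, energy(x)
    return best_x, best_e
```

The temperature controls the trade-off between escaping local minima and settling into the global one, which is why simulated-annealing variants cool it over time.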

Fragment based

These methods use database information regarding structures to match homologous structures to the created protein sequences. These homologous structures are assembled to give compact structures using scoring and optimization procedures, with the goal of achieving the lowest potential energy score. Webservers for fragment information are I-TASSER, ROSETTA, ROSETTA@home, FRAGFOLD, CABS fold, PROFESY, CREF, QUARK, UNDERTAKER, HMM, and ANGLOR.

Homology modeling

These methods are based upon the homology of proteins. These methods are also known as comparative modeling. The first step in homology modeling is generally the identification of template sequences of known structure which are homologous to the query sequence. Next the query sequence is aligned to the template sequence. Following the alignment, the structurally conserved regions are modeled using the template structure. This is followed by the modeling of side chains and loops that are distinct from the template. Finally the modeled structure undergoes refinement and assessment of quality. Servers that are available for homology modeling data are listed here: SWISS MODEL, MODELLER, ReformAlign, PyMOD, TIP-STRUCTFAST, COMPASS, 3d-PSSM, SAMT02, SAMT99, HHPRED, FAGUE, 3D-JIGSAW, META-PP, ROSETTA, and I-TASSER.

Protein threading

Protein threading can be used when a reliable homologue for the query sequence cannot be found. This method begins by obtaining a query sequence and a library of template structures. Next, the query sequence is threaded over the known template structures. The candidate models are then scored using scoring functions based upon the potential energy models of both query and template sequence, and the match with the lowest potential energy is selected. Methods and servers for retrieving threading data and performing calculations are listed here: GenTHREADER, pGenTHREADER, pDomTHREADER, ORFEUS, PROSPECT, BioShell-Threading, FFASO3, RaptorX, HHPred, LOOPP server, Sparks-X, SEGMER, THREADER2, ESYPRED3D, LIBRA, TOPITS, RAPTOR, COTH, MUSTER.
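The scoring idea can be reduced to a toy: collapse each template position into an environment class and ask how well each query residue fits the environment it would occupy. Everything here is invented for illustration (environment strings, the hydrophobic set, the +1/-1 scores), and the toy maximizes a fitness score where real threading methods minimize a potential energy:

```python
# Residues treated as hydrophobic in this toy model (an assumption).
HYDROPHOBIC = set("AVLIMFWYC")

def threading_score(query, template_env):
    """Score a query threaded onto a template of equal length.
    'B' = buried position (favors hydrophobic residues),
    'E' = exposed position (favors polar residues)."""
    score = 0
    for res, env in zip(query, template_env):
        if env == 'B':
            score += 1 if res in HYDROPHOBIC else -1
        else:
            score += 1 if res not in HYDROPHOBIC else -1
    return score

def best_template(query, templates):
    # Pick the template whose environment string the query fits best.
    return max(templates, key=lambda name: threading_score(query, templates[name]))

# Hypothetical template library: two four-residue environment strings.
templates = {"templateA": "BBEE", "templateB": "EEBB"}
```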

For more information on rational design see site-directed mutagenesis.

Multivalent binding

Multivalent binding can be used to increase binding specificity and affinity through avidity effects. Having multiple binding domains in a single biomolecule or complex increases the likelihood of further interactions occurring via individual binding events. The avidity, or effective affinity, can be much higher than the sum of the individual affinities, providing a cost- and time-effective tool for targeted binding.

Multivalent proteins

Multivalent proteins are relatively easy to produce by post-translational modifications or by multiplying the protein-coding DNA sequence. The main advantage of multivalent and multispecific proteins is that they can increase the effective affinity of a known protein for a target. In the case of an inhomogeneous target, using a combination of proteins that results in multispecific binding can increase specificity, which has high applicability in protein therapeutics.

The most common examples of multivalent binding are antibodies, and there is extensive research on bispecific antibodies. Applications of bispecific antibodies cover a broad spectrum that includes diagnosis, imaging, prophylaxis, and therapy.

Directed evolution

In directed evolution, random mutagenesis, e.g. by error-prone PCR or sequence saturation mutagenesis, is applied to a protein, and a selection regime is used to select variants having desired traits. Further rounds of mutation and selection are then applied. This method mimics natural evolution and, in general, produces superior results to rational design. An added process, termed DNA shuffling, mixes and matches pieces of successful variants to produce better results. Such processes mimic the recombination that occurs naturally during sexual reproduction. Advantages of directed evolution are that it requires no prior structural knowledge of a protein, nor is it necessary to be able to predict what effect a given mutation will have. Indeed, the results of directed evolution experiments are often surprising in that desired changes are often caused by mutations that were not expected to have such an effect. The drawback is that directed evolution requires high-throughput screening, which is not feasible for all proteins. Large amounts of recombinant DNA must be mutated and the products screened for desired traits. The large number of variants often requires expensive robotic equipment to automate the process. Further, not all desired activities can be screened for easily.

Natural Darwinian evolution can be effectively imitated in the lab toward tailoring protein properties for diverse applications, including catalysis. Many experimental technologies exist to produce large and diverse protein libraries and for screening or selecting folded, functional variants. Folded proteins arise surprisingly frequently in random sequence space, an occurrence exploitable in evolving selective binders and catalysts. While more conservative than direct selection from deep sequence space, redesign of existing proteins by random mutagenesis and selection/screening is a particularly robust method for optimizing or altering extant properties. It also represents an excellent starting point for achieving more ambitious engineering goals. Allying experimental evolution with modern computational methods is likely the broadest, most fruitful strategy for generating functional macromolecules unknown to nature.

The main challenges of designing high-quality mutant libraries have seen significant progress in the recent past, in the form of better descriptions of the effects of mutational loads on protein traits. Computational approaches have also made large advances in reducing the innumerably large sequence space to more manageable, screenable sizes, thus creating smart libraries of mutants. Library size has also been reduced to more screenable sizes by the identification of key beneficial residues using algorithms for systematic recombination. Finally, a significant step toward efficient reengineering of enzymes has been made with the development of more accurate statistical models and algorithms quantifying and predicting coupled mutational effects on protein functions.

Generally, directed evolution may be summarized as an iterative two-step process involving the generation of protein mutant libraries and high-throughput screening to select for variants with improved traits. This technique does not require prior knowledge of the protein structure-function relationship. Directed evolution utilizes random or focused mutagenesis to generate libraries of mutant proteins. Random mutations can be introduced using either error prone PCR or site saturation mutagenesis. Mutants may also be generated using recombination of multiple homologous genes. Nature has evolved a limited number of beneficial sequences; directed evolution makes it possible to identify undiscovered protein sequences with novel functions. This ability is contingent on the protein's ability to tolerate amino acid residue substitutions without compromising folding or stability.
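The iterative two-step loop can be sketched in miniature. Everything here is illustrative: the fitness function (similarity to an arbitrary invented target sequence) stands in for a real screening assay, and the mutation rate, library size and round count are made up:

```python
import random

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
TARGET = "MKVLAHT"            # invented "optimal" sequence (assay stand-in)

def fitness(seq):
    # Placeholder screen: number of positions matching the target.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rng, rate=0.15):
    # Random mutagenesis: each position substituted with probability `rate`.
    return ''.join(rng.choice(AA) if rng.random() < rate else c for c in seq)

def directed_evolution(parent, rounds=30, library_size=200, keep=5, seed=1):
    """Iterate: build a mutant library, screen it, keep the best variants."""
    rng = random.Random(seed)
    pool = [parent]
    for _ in range(rounds):
        library = [mutate(p, rng)
                   for p in pool
                   for _ in range(library_size // len(pool))]
        # Elitist selection: carry the current pool forward so fitness
        # never decreases between rounds.
        pool = sorted(library + pool, key=fitness, reverse=True)[:keep]
    return pool[0]

best = directed_evolution("AAAAAAA")
```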

Directed evolution methods can be broadly categorized into two strategies, asexual and sexual methods.

Asexual methods

Asexual methods do not generate any cross links between parental genes. Single genes are used to create mutant libraries using various mutagenic techniques. These asexual methods can produce either random or focused mutagenesis.

Random mutagenesis

Random mutagenic methods produce mutations at random throughout the gene of interest. Random mutagenesis can introduce the following types of mutations: transitions, transversions, insertions, deletions, inversions, missense, and nonsense mutations. Examples of methods for producing random mutagenesis are below.

Error prone PCR

Error prone PCR utilizes the fact that Taq DNA polymerase lacks 3' to 5' exonuclease activity, resulting in an error rate of 0.001–0.002% per nucleotide per replication. This method begins with choosing the gene, or the area within a gene, one wishes to mutate. Next, the extent of error required is calculated based upon the type and extent of activity one wishes to generate; this extent of error determines the error prone PCR strategy to be employed. Following PCR, the genes are cloned into a plasmid and introduced into competent cell systems. These cells are then screened for desired traits. Plasmids are then isolated from colonies which show improved traits and used as templates for the next round of mutagenesis. Error prone PCR shows biases for certain mutations relative to others, such as a bias for transitions over transversions.

Rates of error in PCR can be increased in the following ways:

  1. Increase concentration of magnesium chloride, which stabilizes non complementary base pairing.
  2. Add manganese chloride to reduce base pair specificity.
  3. Increased and unbalanced addition of dNTPs.
  4. Addition of base analogs like dITP, 8 oxo-dGTP, and dPTP.
  5. Increase concentration of Taq polymerase.
  6. Increase extension time.
  7. Increase cycle time.
  8. Use less accurate Taq polymerase.

Also see polymerase chain reaction for more information.
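To get a feel for how a per-cycle error rate translates into accumulated mutations, here is a toy simulation; the rates and cycle counts are illustrative, and real error-prone PCR amplifies the template exponentially rather than mutating one copy in place:

```python
import random

BASES = "ACGT"

def error_prone_pcr(template, cycles=30, rate=2e-5, seed=0):
    """Mutate each position independently with probability `rate` per cycle."""
    rng = random.Random(seed)
    seq = list(template)
    for _ in range(cycles):
        for i, base in enumerate(seq):
            if rng.random() < rate:
                seq[i] = rng.choice([b for b in BASES if b != base])
    return ''.join(seq)

def expected_mutations(length, cycles=30, rate=2e-5):
    # Expected number of mutated positions: length * (1 - (1 - rate)^cycles).
    return length * (1 - (1 - rate) ** cycles)

# At Taq-like fidelity (about 2e-5 per base per cycle), a 1 kb gene over
# 30 cycles accumulates well under one expected mutation -- which is why
# the conditions listed above are used to deliberately raise the rate.
```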

Rolling circle error-prone PCR

This PCR method is based upon rolling circle amplification, which is modeled on the method that bacteria use to amplify circular DNA. This method results in linear DNA duplexes containing tandem repeats of circular DNA, called concatemers, which can be transformed into bacterial strains. Mutations are introduced by first cloning the target sequence into an appropriate plasmid. Next, the amplification process begins using random hexamer primers and Φ29 DNA polymerase under error prone rolling circle amplification conditions. Additional conditions to produce error prone rolling circle amplification are 1.5 pM of template DNA, 1.5 mM MnCl2 and a 24 hour reaction time. MnCl2 is added to the reaction mixture to promote random point mutations in the DNA strands. Mutation rates can be increased by increasing the concentration of MnCl2, or by decreasing the concentration of template DNA. Error prone rolling circle amplification is advantageous relative to error prone PCR because of its use of universal random hexamer primers rather than specific primers, and because the reaction products do not need to be treated with ligases or endonucleases. The reaction is also isothermal.

Chemical mutagenesis

Chemical mutagenesis involves the use of chemical agents to introduce mutations into genetic sequences. Examples of chemical mutagens follow.

Sodium bisulfite is effective at mutating G/C-rich genomic sequences, because it catalyses the deamination of unmethylated cytosine to uracil.

Ethyl methane sulfonate alkylates guanine residues. This alteration causes errors during DNA replication.

Nitrous acid causes transitions by deamination of adenine and cytosine.

The dual approach to random chemical mutagenesis is an iterative two-step process. First, it involves the in vivo chemical mutagenesis of the gene of interest via EMS. Next, the treated gene is isolated and cloned into an untreated expression vector in order to prevent mutations in the plasmid backbone. This technique preserves the plasmid's genetic properties.

Targeting glycosylases to embedded arrays for mutagenesis (TaGTEAM)

This method has been used to create targeted in vivo mutagenesis in yeast. It involves the fusion of a 3-methyladenine DNA glycosylase to a tetR DNA-binding domain, and has been shown to increase mutation rates by over 800-fold in regions of the genome containing tetO sites.

Mutagenesis by random insertion and deletion

This method alters the length of the sequence via simultaneous deletion and insertion of stretches of bases of arbitrary length. It has been shown to produce proteins with new functionalities through the introduction of new restriction sites, specific codons, and four-base codons for non-natural amino acids.

Transposon based random mutagenesis

Many methods for transposon-based random mutagenesis have recently been reported. These methods include, but are not limited to, the following: PERMUTE (random circular permutation), random protein truncation, random nucleotide triplet substitution, random domain/tag/multiple amino acid insertion, codon scanning mutagenesis, and multicodon scanning mutagenesis. All of these techniques require the design of mini-Mu transposons; Thermo Scientific manufactures kits for the design of these transposons.

Random mutagenesis methods altering the target DNA length

These methods involve altering gene length via insertion and deletion mutations. An example is the tandem repeat insertion (TRINS) method. This technique results in the generation of tandem repeats of random fragments of the target gene via rolling circle amplification and concurrent incorporation of these repeats into the target gene.

Mutator strains

Mutator strains are bacterial cell lines which are deficient in one or more DNA repair mechanisms. An example of a mutator strain is E. coli XL1-Red. This substrain of E. coli is deficient in the MutS, MutD, and MutT DNA repair pathways. Mutator strains are useful for introducing many types of mutation; however, these strains show progressive sickness in culture because of the accumulation of mutations in the strain's own genome.

Focused mutagenesis

Focused mutagenic methods produce mutations at predetermined amino acid residues. These techniques require an understanding of the sequence-function relationship for the protein of interest. Understanding this relationship allows for the identification of residues which are important for stability, stereoselectivity, and catalytic efficiency. Examples of methods that produce focused mutagenesis are given below.

Site saturation mutagenesis

Site saturation mutagenesis is a PCR-based method used to target amino acids with significant roles in protein function. The two most common techniques for performing it are whole plasmid single PCR and overlap extension PCR.
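Saturation at a site is commonly achieved with degenerate codons such as NNK (N = A/C/G/T, K = G/T), whose 32 codons encode all 20 amino acids while excluding two of the three stop codons. The specific NNK choice is a common convention rather than something stated in the text above; a short sketch enumerating the set against the standard genetic code:

```python
from itertools import product

# standard genetic code, packed as the classic 64-character string
# (codons ordered TTT, TTC, TTA, TTG, TCT, ...; '*' marks stop codons)
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    a + b + c: AA[16 * i + 4 * j + k]
    for i, a in enumerate(BASES)
    for j, b in enumerate(BASES)
    for k, c in enumerate(BASES)
}

# NNK degenerate codon set: any base at positions 1-2, G or T at position 3
nnk_codons = ["".join(c) for c in product("ACGT", "ACGT", "GT")]
covered = {CODON_TABLE[c] for c in nnk_codons}

print(len(nnk_codons))          # 32 codons
print(sorted(covered - {"*"}))  # all 20 amino acids
```

Only the TAG stop codon survives the K constraint (TAA and TGA end in A), which is why NNK libraries are smaller and cleaner than fully random NNN libraries.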

Whole plasmid single PCR is also referred to as site-directed mutagenesis (SDM). SDM products are subjected to DpnI endonuclease digestion, which cleaves only the parental strand, because the parental strand contains a GmATC sequence methylated at the N6 position of adenine. SDM does not work well for large plasmids of over ten kilobases, and it is only capable of replacing two nucleotides at a time.

Overlap extension PCR requires the use of two pairs of primers, with one primer in each pair containing a mutation. A first round of PCR using these primer pairs produces two DNA duplexes. A second round of PCR is then performed in which these duplexes are denatured and annealed with the primer pairs again to produce heteroduplexes, in which each strand carries a mutation. Any gaps in these newly formed heteroduplexes are filled with DNA polymerases, and the products are further amplified.

Sequence saturation mutagenesis (SeSaM)

Sequence saturation mutagenesis randomizes the target sequence at every nucleotide position. The method begins with the generation of variable-length DNA fragments tailed with universal bases, using terminal transferases at the 3' termini. Next, these fragments are extended to full length using a single-stranded template. The universal bases are then replaced with random standard bases, causing mutations. There are several modified versions of this method, such as SeSaM-Tv-II, SeSaM-Tv+, and SeSaM-III.

Single primer reactions in parallel (SPRINP)

This site saturation mutagenesis method involves two separate PCR reactions: the first uses only forward primers, while the second uses only reverse primers. This avoids primer dimer formation.

Mega primed and ligase free focused mutagenesis

This site saturation mutagenic technique begins with one mutagenic oligonucleotide and one universal flanking primer. These two reactants are used for an initial PCR cycle. Products from this first PCR cycle are used as mega primers for the next PCR.

Ω-PCR

This site saturation mutagenic method is based on overlap extension PCR. It is used to introduce mutations at any site in a circular plasmid.

PFunkel-OmniChange-OSCARR

This method utilizes user-defined site-directed mutagenesis at single or multiple sites simultaneously. OSCARR is an acronym for one-pot simple methodology for cassette randomization and recombination; this randomization and recombination randomizes desired fragments of a protein. OmniChange is a sequence-independent, multisite saturation mutagenesis which can saturate up to five independent codons on a gene.

Trimer-dimer mutagenesis

This method removes redundant codons and stop codons.

Cassette mutagenesis

This is a PCR-based method. Cassette mutagenesis begins with the synthesis of a DNA cassette containing the gene of interest, flanked on either side by restriction sites. The endonuclease which cleaves these restriction sites also cleaves sites in the target plasmid. The DNA cassette and the target plasmid are both treated with endonucleases to cleave these restriction sites and create sticky ends. The products of this cleavage are then ligated together, resulting in the insertion of the gene into the target plasmid. An alternative form, combinatorial cassette mutagenesis, is used to identify the functions of individual amino acid residues in the protein of interest. Recursive ensemble mutagenesis then utilizes information from previous rounds of combinatorial cassette mutagenesis. Codon cassette mutagenesis allows a single codon to be inserted or replaced at a particular site in double-stranded DNA.

Sexual methods

Sexual methods of directed evolution involve in vitro recombination that mimics natural in vivo recombination. Generally these techniques require high sequence homology between parental sequences. They are often used to recombine two different parental genes, creating crossovers between these genes.

In vitro homologous recombination

Homologous recombination can be categorized as either in vivo or in vitro. In vitro homologous recombination mimics natural in vivo recombination. These in vitro recombination methods require high sequence homology between parental sequences. These techniques exploit the natural diversity in parental genes by recombining them to yield chimeric genes. The resulting chimeras show a blend of parental characteristics.

DNA shuffling

This in vitro technique was one of the first in the era of recombination. It begins with the digestion of homologous parental genes into small fragments by DNase I. These small fragments are purified from undigested parental genes and then reassembled using primerless PCR, in which homologous fragments from different parental genes prime for each other, resulting in chimeric DNA. The chimeric DNA of parental length is then amplified using end-terminal primers in a regular PCR.
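The outcome of shuffling can be caricatured in silico by drawing crossover points directly between two aligned parents. This is a hedged sketch of the end result only: the DNase I fragmentation and self-priming reassembly steps are not modeled, and the uniform-random crossover placement is a simplifying assumption.

```python
import random

def shuffle_parents(parent_a, parent_b, n_crossovers, rng=None):
    """Build one chimeric sequence by switching between two equal-length,
    homologous parents at randomly chosen crossover points. A stand-in
    for the fragmentation/reassembly of real DNA shuffling, which biases
    crossovers toward regions of high local homology.
    """
    rng = rng or random.Random()
    if len(parent_a) != len(parent_b):
        raise ValueError("parents must be aligned to equal length")
    points = sorted(rng.sample(range(1, len(parent_a)), n_crossovers))
    parents = (parent_a, parent_b)
    chimera, src, start = [], 0, 0
    for p in points + [len(parent_a)]:
        chimera.append(parents[src][start:p])  # copy a segment, then switch
        src, start = 1 - src, p
    return "".join(chimera)

chimera = shuffle_parents("A" * 30, "B" * 30, n_crossovers=3,
                          rng=random.Random(7))
```

Using two artificial "parents" of all-A and all-B makes the crossover structure of the chimera directly visible in the output string.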

Random priming in vitro recombination (RPR)

This in vitro homologous recombination method begins with the synthesis of many short gene fragments exhibiting point mutations, using random sequence primers. These fragments are reassembled to full-length parental genes using primerless PCR. The reassembled sequences are then amplified using PCR and subjected to further selection processes. This method is advantageous relative to DNA shuffling because no DNase I is used, so there is no bias for recombination next to a pyrimidine nucleotide. It also benefits from its use of synthetic random primers, which are uniform in length and free of biases. Finally, the method is independent of the length of the DNA template sequence and requires only a small amount of parental DNA.

Truncated metagenomic gene-specific PCR

This method generates chimeric genes directly from metagenomic samples. It begins with isolation of the desired gene by functional screening from metagenomic DNA sample. Next, specific primers are designed and used to amplify the homologous genes from different environmental samples. Finally, chimeric libraries are generated to retrieve the desired functional clones by shuffling these amplified homologous genes.

Staggered extension process (StEP)

This in vitro method is based on template switching to generate chimeric genes. This PCR-based method begins with an initial denaturation of the template, followed by annealing of primers and a short extension time. All subsequent cycles generate annealing between the short fragments generated in previous cycles and different parts of the template. These short fragments and the templates anneal together based on sequence complementarity. This process of fragments annealing to template DNA is known as template switching. The annealed fragments then serve as primers for further extension. The method is carried out until the parental-length chimeric gene sequence is obtained. Execution of this method requires only flanking primers to begin, and there is no need for the DNase I enzyme.

Random chimeragenesis on transient templates (RACHITT)

This method has been shown to generate chimeric gene libraries with an average of 14 crossovers per chimeric gene. It begins by aligning fragments from a parental top strand onto the bottom strand of a uracil-containing template from a homologous gene. 5' and 3' overhang flaps are cleaved and gaps are filled by the exonuclease and endonuclease activities of Pfu and Taq DNA polymerases. The uracil-containing template is then removed from the heteroduplex by treatment with a uracil DNA glycosylase, followed by further amplification using PCR. This method is advantageous because it generates chimeras with relatively high crossover frequency; however, it is somewhat limited by its complexity and the need to generate single-stranded DNA and uracil-containing single-stranded template DNA.

Synthetic shuffling

Shuffling of synthetic degenerate oligonucleotides adds flexibility to shuffling methods, since oligonucleotides containing optimal codons and beneficial mutations can be included.

In vivo homologous recombination

Cloning performed in yeast involves PCR-dependent reassembly of fragmented expression vectors. These reassembled vectors are then introduced into, and cloned in, yeast. Using yeast to clone the vector avoids the toxicity and counter-selection that would be introduced by ligation and propagation in E. coli.

Mutagenic organized recombination process by homologous in vivo grouping (MORPHING)

This method introduces mutations into specific regions of genes while leaving other parts intact by utilizing the high frequency of homologous recombination in yeast.

Phage-assisted continuous evolution (PACE)

This method utilizes a bacteriophage with a modified life cycle to transfer evolving genes from host to host. The phage's life cycle is designed in such a way that the transfer is correlated with the activity of interest from the enzyme. This method is advantageous because it requires minimal human intervention for the continuous evolution of the gene.

In vitro non-homologous recombination methods

These methods are based upon the fact that proteins can exhibit similar structural identity while lacking sequence homology.

Exon shuffling

Exon shuffling is the combination of exons from different proteins by recombination events occurring at introns. Orthologous exon shuffling involves combining exons from orthologous genes from different species. Orthologous domain shuffling involves shuffling of entire protein domains from orthologous genes from different species. Paralogous exon shuffling involves shuffling of exons from different genes from the same species. Paralogous domain shuffling involves shuffling of entire protein domains from paralogous proteins from the same species. Functional homolog shuffling involves shuffling of non-homologous domains which are functionally related. All of these processes begin with amplification of the desired exons from different genes using chimeric synthetic oligonucleotides. These amplification products are then reassembled into full-length genes using primerless PCR. During these PCR cycles the fragments act as templates and primers. This results in chimeric full-length genes, which are then subjected to screening.

Incremental truncation for the creation of hybrid enzymes (ITCHY)

Fragments of parental genes are created using controlled digestion by exonuclease III. These fragments are blunted using an endonuclease and are ligated to produce hybrid genes. THIO-ITCHY is a modified ITCHY technique which utilizes nucleotide triphosphate analogs such as α-phosphothioate dNTPs. Incorporation of these nucleotides blocks digestion by exonuclease III; this inhibition of digestion is called spiking. Spiking can be accomplished by first truncating genes with exonuclease to create fragments with short single-stranded overhangs. These fragments then serve as templates for amplification by DNA polymerase in the presence of small amounts of phosphothioate dNTPs, and the resulting fragments are ligated together to form full-length genes. Alternatively, the intact parental genes can be amplified by PCR in the presence of normal dNTPs and phosphothioate dNTPs. These full-length amplification products are then subjected to digestion by an exonuclease; digestion continues until the exonuclease encounters an α-phosphothioate dNTP, resulting in fragments of different lengths. These fragments are then ligated together to generate chimeric genes.

SCRATCHY

This method generates libraries of hybrid genes exhibiting multiple crossovers by combining DNA shuffling and ITCHY. It begins with the construction of two independent ITCHY libraries, the first with gene A at the N-terminus and the other with gene B at the N-terminus. Hybrid gene fragments are isolated using either restriction enzyme digestion or PCR with terminal primers, followed by agarose gel electrophoresis. The isolated fragments are then mixed together and further digested using DNase I. Digested fragments are reassembled by primerless PCR with template switching.

Recombined extension on truncated templates (RETT)

This method generates libraries of hybrid genes by template switching of uni-directionally growing polynucleotides in the presence of single stranded DNA fragments as templates for chimeras. This method begins with the preparation of single stranded DNA fragments by reverse transcription from target mRNA. Gene specific primers are then annealed to the single stranded DNA. These genes are then extended during a PCR cycle. This cycle is followed by template switching and annealing of the short fragments obtained from the earlier primer extension to other single stranded DNA fragments. This process is repeated until full length single stranded DNA is obtained.

Sequence homology-independent protein recombination (SHIPREC)

This method generates recombination between genes with little to no sequence homology. The chimeras are fused via a linker sequence containing several restriction sites. This construct is then digested using DNase I. Fragments are made blunt-ended using S1 nuclease and are joined into a circular sequence by ligation. The circular construct is then linearized using restriction enzymes whose restriction sites are present in the linker region. This results in a library of chimeric genes in which the contributions of the genes to the 5' and 3' ends are reversed compared to the starting construct.

Sequence independent site directed chimeragenesis (SISDC)

This method results in a library of genes with multiple crossovers from several parental genes. It does not require sequence identity among the parental genes, but it does require one or two conserved amino acids at every crossover position. The method begins with alignment of parental sequences and identification of consensus regions which serve as crossover sites. This is followed by the incorporation of specific tags containing restriction sites, and then removal of the tags by digestion with Bac1, resulting in genes with cohesive ends. These gene fragments are mixed and ligated in an appropriate order to form chimeric libraries.

Degenerate homo-duplex recombination (DHR)

This method begins with the alignment of homologous genes, followed by identification of regions of polymorphism. Next, the top strand of the gene is divided into small degenerate oligonucleotides, and the bottom strand is digested into oligonucleotides to serve as scaffolds. These fragments are combined in solution, where top strand oligonucleotides assemble onto bottom strand oligonucleotides. Gaps between these fragments are filled with polymerase and ligated.

Random multi-recombinant PCR (RM-PCR)

This method involves the shuffling of plural DNA fragments without homology, in a single PCR. This results in the reconstruction of complete proteins by assembly of modules encoding different structural units.

User friendly DNA recombination (USERec)

This method begins with the amplification of the gene fragments to be recombined, using uracil-containing dNTPs. This amplification solution also contains primers, PfuTurbo, and Cx Hotstart DNA polymerase. Amplified products are next incubated with the USER enzyme, which catalyzes the removal of uracil residues from DNA, creating single base pair gaps. The USER enzyme-treated fragments are mixed and ligated using T4 DNA ligase and subjected to DpnI digestion to remove the template DNA. The resulting single-stranded fragments are subjected to amplification using PCR and are transformed into E. coli.

Golden Gate shuffling (GGS) recombination

This method allows at least 9 different fragments to be recombined in an acceptor vector by using a type IIS restriction enzyme, which cuts outside of its recognition sites. It begins with subcloning of the fragments into separate vectors to create BsaI flanking sequences on both sides. These vectors are then cleaved using the type IIS restriction enzyme BsaI, which generates four-nucleotide single-strand overhangs. Fragments with complementary overhangs are hybridized and ligated using T4 DNA ligase. Finally, these constructs are transformed into E. coli cells, which are screened for expression levels.

Phosphorothioate-based DNA recombination method (PRTec)

This method can be used to recombine structural elements or entire protein domains. It is based on phosphorothioate chemistry, which allows the specific cleavage of phosphorothiodiester bonds. The first step in the process is amplification of the fragments that need to be recombined, along with the vector backbone. This amplification is accomplished using primers with phosphorothiolated nucleotides at their 5' ends. Amplified PCR products are cleaved in an ethanol-iodine solution at high temperatures. Next, these fragments are hybridized at room temperature and transformed into E. coli, which repairs any nicks.

Integron

This system is based upon a natural site specific recombination system in E. coli. This system is called the integron system, and produces natural gene shuffling. This method was used to construct and optimize a functional tryptophan biosynthetic operon in trp-deficient E. coli by delivering individual recombination cassettes or trpA-E genes along with regulatory elements with the integron system.

Y-Ligation based shuffling (YLBS)

This method generates single-stranded DNA strands which encompass a single block sequence at either the 5' or 3' end, complementary sequences in a stem-loop region, and a D branch region serving as a primer binding site for PCR. Equivalent amounts of 5' and 3' half strands are mixed and form a hybrid due to the complementarity in the stem region. Hybrids with a free phosphorylated 5' end on the 3' half strand are then ligated to free 3' ends of 5' half strands using T4 DNA ligase in the presence of 0.1 mM ATP. Ligated products are then amplified by two types of PCR to generate pre-5' half and pre-3' half PCR products. These PCR products are converted to single strands via avidin-biotin binding to the 5' end of primers containing stem sequences that were biotin labeled. Next, biotinylated 5' half strands and non-biotinylated 3' half strands are used as 5' and 3' half strands for the next Y-ligation cycle.

Semi-rational design

Semi-rational design uses information about a protein's sequence, structure, and function, in tandem with predictive algorithms. Together these are used to identify the target amino acid residues most likely to influence protein function. Mutations of these key amino acid residues create libraries of mutant proteins that are more likely to have enhanced properties.
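As a crude sketch of the idea, one simple computational filter ranks alignment columns by variability, on the assumption (a deliberate simplification of the predictive algorithms actually used) that variable positions tolerate substitution while strictly conserved ones are left alone. The alignment below is hypothetical.

```python
def variability(alignment):
    """Count distinct residues in each column of an aligned set of
    protein sequences. A deliberately crude stand-in for the
    sequence/structure-based predictors used in semi-rational design.
    """
    return [len(set(column)) for column in zip(*alignment)]

# hypothetical three-sequence alignment of homologous proteins
alignment = ["MKTAYIA", "MKSAYVA", "MKTAFIA"]
scores = variability(alignment)
# positions with more than one observed residue are candidate targets
targets = [i for i, s in enumerate(scores) if s > 1]
print(targets)  # [2, 4, 5]
```

Real tools combine such conservation signals with structural information (burial, proximity to the active site) rather than relying on sequence variability alone.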

Advances in semi-rational enzyme engineering and de novo enzyme design provide researchers with powerful and effective new strategies to manipulate biocatalysts. Integration of sequence- and structure-based approaches in library design has proven to be a great guide for enzyme redesign. Generally, current computational de novo and redesign methods do not match evolved variants in catalytic performance. Although experimental optimization may be achieved using directed evolution, further improvements in the accuracy of structure prediction and greater catalytic ability will be achieved with improvements in design algorithms. Further functional enhancements may be included in future simulations by integrating protein dynamics.

Biochemical and biophysical studies, along with fine-tuning of predictive frameworks will be useful to experimentally evaluate the functional significance of individual design features. Better understanding of these functional contributions will then give feedback for the improvement of future designs.

Directed evolution will likely not be replaced as the method of choice for protein engineering, although computational protein design has fundamentally changed the way protein engineering can manipulate biomacromolecules. Smaller, more focused, and functionally rich libraries may be generated by methods which incorporate predictive frameworks for hypothesis-driven protein engineering. New design strategies and technical advances have begun a departure from traditional protocols such as directed evolution, and represent the most effective strategy for identifying top-performing candidates in focused libraries. Whole-gene library synthesis is replacing shuffling and mutagenesis protocols for library preparation, and highly specific low-throughput screening assays are increasingly applied in place of monumental screening and selection efforts over millions of candidates. Together, these developments are poised to take protein engineering beyond directed evolution and towards practical, more efficient strategies for tailoring biocatalysts.

Screening and selection techniques

Once a protein has undergone directed evolution, rational design, or semi-rational design, the libraries of mutant proteins must be screened to determine which mutants show enhanced properties. Phage display methods are one option for screening proteins. This method involves the fusion of genes encoding the variant polypeptides with phage coat protein genes. Protein variants expressed on phage surfaces are selected by binding with immobilized targets in vitro. Phages carrying selected protein variants are then amplified in bacteria, followed by the identification of positive clones by enzyme-linked immunosorbent assay. These selected phages are then subjected to DNA sequencing.

Cell surface display systems can also be utilized to screen mutant polypeptide libraries. The library mutant genes are incorporated into expression vectors which are then transformed into appropriate host cells. These host cells are subjected to further high throughput screening methods to identify the cells with desired phenotypes.

Cell-free display systems have been developed to exploit in vitro protein translation, or cell-free translation. These methods include mRNA display, ribosome display, covalent and non-covalent DNA display, and in vitro compartmentalization.

Enzyme engineering

Enzyme engineering is the application of modifying an enzyme's structure (and, thus, its function) or modifying the catalytic activity of isolated enzymes to produce new metabolites, to allow new (catalyzed) pathways for reactions to occur, or to convert from certain compounds into others (biotransformation). These products are useful as chemicals, pharmaceuticals, fuel, food, or agricultural additives.

An enzyme reactor consists of a vessel containing a reaction medium used to perform a desired conversion by enzymatic means. Enzymes used in this process are free in the solution.

Examples of engineered proteins

Computing methods have been used to design a protein with a novel fold, such as Top7, and sensors for unnatural molecules. The engineering of fusion proteins has yielded rilonacept, a pharmaceutical that has secured Food and Drug Administration (FDA) approval for treating cryopyrin-associated periodic syndrome.

Another computing method, IPRO, successfully engineered the switching of cofactor specificity of Candida boidinii xylose reductase. Iterative Protein Redesign and Optimization (IPRO) redesigns proteins to increase or give specificity to native or novel substrates and cofactors. This is done by repeatedly randomly perturbing the structure of the proteins around specified design positions, identifying the lowest energy combination of rotamers, and determining whether the new design has a lower binding energy than prior ones. The iterative nature of this process allows IPRO to make additive mutations to a protein sequence that collectively improve the specificity toward desired substrates and/or cofactors.
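The iterative accept-if-better logic described above can be sketched with a toy objective. Everything here is hypothetical illustration: real IPRO perturbs backbone coordinates and repacks side-chain rotamers against a physical energy function, whereas this sketch only mirrors the iterative-improvement loop on a string with a made-up mismatch "energy".

```python
import random

def iterative_redesign(energy, seq, alphabet, steps, rng=None):
    """Accept-if-better loop in the spirit of IPRO: perturb one design
    position at random and keep the change only if it lowers the
    energy. Toy sketch -- no rotamers, backbones, or force fields.
    """
    rng = rng or random.Random()
    best, best_e = list(seq), energy(seq)
    for _ in range(steps):
        trial = best[:]
        trial[rng.randrange(len(trial))] = rng.choice(alphabet)
        e = energy(trial)
        if e < best_e:  # keep only strictly improving mutations
            best, best_e = trial, e
    return "".join(best), best_e

# hypothetical objective: mismatches against an arbitrary target sequence
target = "ACDEFG"
energy = lambda s: sum(a != b for a, b in zip(s, target))
designed, final_e = iterative_redesign(energy, "GGGGGG", "ACDEFG", 2000)
```

Because only strictly improving mutations are kept, the energy is non-increasing over iterations, which is the additive-improvement property the paragraph above attributes to IPRO.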

Computation-aided design has also been used to engineer complex properties of a highly ordered nano-protein assembly. A protein cage, E. coli bacterioferritin (EcBfr), which naturally shows structural instability and incomplete self-assembly behavior by populating two oligomerization states, is the model protein in this study. Through computational analysis and comparison to its homologs, it has been found that this protein has a smaller-than-average dimeric interface on its two-fold symmetry axis, due mainly to the existence of an interfacial water pocket centered on two water-bridged asparagine residues. To investigate the possibility of engineering EcBfr for modified structural stability, a semi-empirical computational method is used to virtually explore the energy differences of the 480 possible mutants at the dimeric interface relative to the wild-type EcBfr. This computational study also converges on the water-bridged asparagines. Replacing these two asparagines with hydrophobic amino acids results in proteins that fold into alpha-helical monomers and assemble into cages, as evidenced by circular dichroism and transmission electron microscopy. Both thermal and chemical denaturation confirm that all redesigned proteins, in agreement with the calculations, possess increased stability. One of the three mutations shifts the population in favor of the higher-order oligomerization state in solution, as shown by both size exclusion chromatography and native gel electrophoresis.

An in silico method, PoreDesigner, was developed to redesign the bacterial channel protein OmpF to reduce its 1 nm pore size to any desired sub-nm dimension. Transport experiments on the narrowest designed pores revealed complete salt rejection when assembled in biomimetic block-polymer matrices.
