
Thursday, February 15, 2018

Climate engineering

From Wikipedia, the free encyclopedia

Climate engineering, commonly referred to as geoengineering, also known as climate intervention,[1] is the deliberate and large-scale intervention in the Earth's climate system with the aim of limiting adverse global warming.[2][3][4] Climate engineering is an umbrella term for measures that mainly fall into two categories: greenhouse gas removal and solar radiation management. Greenhouse gas removal approaches, of which carbon dioxide removal is the most prominent subcategory, address the cause of global warming by removing greenhouse gases from the atmosphere. Solar radiation management attempts to offset the effects of greenhouse gases by causing the Earth to absorb less solar radiation.

Climate engineering approaches are sometimes viewed as additional potential options for limiting climate change or its impacts, alongside mitigation and adaptation.[5][6] There is substantial agreement among scientists that climate engineering cannot substitute for climate change mitigation. Some approaches might be used as accompanying measures to sharp cuts in greenhouse gas emissions.[7] Given that all types of measures for addressing climate change have economic, political, or physical limitations,[8][9] some climate engineering approaches might eventually be used as part of an ensemble of measures, which can be referred to as climate restoration.[10] Research on costs, benefits, and various types of risks of most climate engineering approaches is at an early stage and their understanding needs to improve to judge their adequacy and feasibility.[2]

Almost all research into solar radiation management has to date consisted of computer modelling or laboratory tests, and an attempt to move to outdoor experimentation has proven controversial.[11] Some carbon dioxide removal practices, such as afforestation,[12] ecosystem restoration, and bio-energy with carbon capture and storage projects, are underway to a limited extent. Whether they can be scaled up far enough to affect the global climate is, however, debated. Ocean iron fertilization has been investigated in small-scale research trials, and these experiments have proven controversial.[13] The World Wildlife Fund has criticized these activities.[14]

Most experts and major reports advise against relying on climate engineering techniques as a simple solution to global warming, in part due to the large uncertainties over effectiveness and side effects. However, most experts also argue that the risks of such interventions must be seen in the context of the risks of dangerous global warming.[15][16] Interventions at large scale run a greater risk of disrupting natural systems, creating a dilemma: approaches that could prove highly (cost-)effective in addressing extreme climate risk might themselves cause substantial risk.[15] Some have suggested that the concept of engineering the climate presents a so-called "moral hazard" because it could reduce political and public pressure for emissions reduction, which could exacerbate overall climate risks; others assert that the threat of climate engineering could spur emissions cuts.[17][18][19] Some are in favour of a moratorium on out-of-doors testing and deployment of solar radiation management (SRM).[20][21]

General

With respect to climate, geoengineering is defined by the Royal Society as "... the deliberate large-scale intervention in the Earth’s climate system, in order to moderate global warming."[22] Several organizations have investigated climate engineering with a view to evaluating its potential, including the US Congress,[23] the National Academy of Sciences,[24] the Royal Society,[25] and the UK Parliament.[26] The Asilomar International Conference on Climate Intervention Technologies was convened to identify and develop risk reduction guidelines for climate intervention experimentation.[27]

Some environmental organisations (such as Friends of the Earth[28] and Greenpeace[29]) have been reluctant to endorse solar radiation management, but are often more supportive of some carbon dioxide removal projects, such as afforestation and peatland restoration. Some authors have argued that any public support for climate engineering may weaken the fragile political consensus to reduce greenhouse gas emissions.[30]

History

The 1965 landmark report "Restoring the Quality of Our Environment", by U.S. President Lyndon B. Johnson's Science Advisory Committee, warned of the harmful effects of fossil fuel emissions. The report also mentioned "deliberately bringing about countervailing climatic changes," including by "raising the albedo, or reflectivity, of the Earth."[31] Teller et al. (1997) suggested researching and deploying reflective particles to reduce incoming solar radiation and thus cancel the effects of fossil fuel burning.[32]

Proposed strategies

Several climate engineering strategies have been proposed. IPCC documents detail several notable proposals.[33] These fall into two main categories: solar radiation management and carbon dioxide removal.

Solar radiation management

Solar radiation management (SRM)[4][34] techniques would seek to reduce the amount of sunlight absorbed (ultraviolet, near-infrared and visible). This would be achieved by deflecting sunlight away from the Earth, or by increasing the reflectivity (albedo) of the atmosphere or the Earth's surface. These methods would not reduce greenhouse gas concentrations in the atmosphere, and thus would not address problems such as the ocean acidification caused by CO2. In general, solar radiation management projects presently appear able to take effect rapidly and to have very low direct implementation costs relative to greenhouse gas emissions cuts and carbon dioxide removal. Furthermore, many proposed SRM methods would be reversible in their direct climatic effects. While greenhouse gas remediation offers a more comprehensive possible solution to global warming, it does not give instantaneous results; for that, solar radiation management would be required.[dubious ] A range of solar radiation management methods has been proposed.[4]
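To make the distinction concrete, here is a minimal zero-dimensional energy-balance sketch in Python. It is an illustration only: the solar constant, albedo and the 3.7 W/m2 doubled-CO2 forcing are commonly quoted approximate values, not figures from this article. It shows how, in the global mean, a small increase in planetary albedo could offset a greenhouse forcing while leaving CO2 concentrations, and hence ocean acidification, unchanged.

    # Toy zero-dimensional energy balance (illustrative constants, not from this article).
    SOLAR_CONSTANT = 1361.0   # W/m^2, approximate total solar irradiance
    SIGMA = 5.67e-8           # W/m^2/K^4, Stefan-Boltzmann constant
    ALBEDO = 0.30             # approximate present-day planetary albedo

    def effective_temperature(albedo, extra_forcing=0.0):
        """Equilibrium effective radiating temperature for a given albedo and added forcing."""
        absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0 + extra_forcing
        return (absorbed / SIGMA) ** 0.25

    ghg_forcing = 3.7  # W/m^2, roughly the forcing from a doubling of CO2
    # Albedo increase that cancels that forcing in this toy model:
    delta_albedo = 4.0 * ghg_forcing / SOLAR_CONSTANT                 # about 0.011
    print(effective_temperature(ALBEDO))                              # ~255 K baseline
    print(effective_temperature(ALBEDO, ghg_forcing))                 # about 1 K warmer
    print(effective_temperature(ALBEDO + delta_albedo, ghg_forcing))  # back to ~255 K

In this toy model a planetary albedo increase of roughly 0.011 cancels the assumed forcing, but the CO2 term is untouched; real SRM schemes would also act unevenly in space and time.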

Carbon dioxide removal

An oceanic phytoplankton bloom in the South Atlantic Ocean, off the coast of Argentina. The aim of ocean iron fertilization, in theory, is to increase such blooms by adding iron, which would then draw carbon from the atmosphere and fix it on the seabed.
Significant reduction in Arctic Ocean ice volume between 1979 and 2007.

Carbon dioxide removal (sometimes known as negative emissions technologies or greenhouse gas removal) projects seek to remove carbon dioxide from the atmosphere. Proposed methods include those that directly remove such gases from the atmosphere, as well as indirect methods that seek to promote natural processes that draw down and sequester CO2 (e.g. tree planting). Many projects overlap with carbon capture and storage projects, and may not be considered to be climate engineering by all commentators. Several techniques fall into this category.
Many of the IPCC model projections that keep global mean temperature rise below 2 °C are based on scenarios assuming the deployment of negative emissions technologies.[37]

Justification

Tipping points and positive feedback

Climate change during the last 65 million years. The Paleocene–Eocene Thermal Maximum is labelled PETM.

It is argued that climate change may cross tipping points,[38] where elements of the climate system may 'tip' from one stable state to another stable state, much like a glass tipping over. When the new state is reached, further warming may be caused by positive feedback effects.[39] An example of a proposed causal chain leading to more warming is the decline of Arctic sea ice, potentially triggering subsequent release of ocean methane.[40] Evidence suggests a gradual and prolonged release of greenhouse gases from thawing permafrost.[41]

The precise identity of such "tipping points" is not clear, with scientists taking differing views on whether specific systems are capable of "tipping" and the point at which this "tipping" will occur.[42] An example of a previous tipping point is that which preceded the rapid warming leading up to the Paleocene–Eocene Thermal Maximum. Once a tipping point is crossed, cuts in anthropogenic greenhouse gas emissions will not be able to reverse the change. Conservation of resources and reduction of greenhouse emissions, used in conjunction with climate engineering, are therefore considered a viable option by some commentators.[43][44][45]

Buying time

Climate engineering offers the hope of temporarily reversing some aspects of global warming and allowing the natural climate to be substantially preserved whilst greenhouse gas emissions are brought under control and removed from the atmosphere by natural or artificial processes.[46]

Costs

Estimates of the direct costs of climate engineering implementation vary widely. In general, carbon dioxide removal methods are more expensive than solar radiation management ones. In its 2009 report Geoengineering the Climate, the Royal Society judged afforestation and stratospheric aerosol injection to be the methods with the "highest affordability" (lowest costs). More recently, research into the costs of solar radiation management has been published.[47] This suggests that "well designed systems" might be available for costs on the order of a few hundred million to tens of billions of dollars per year.[48] These figures are much lower than the costs of achieving comprehensive reductions in CO2 emissions. Such costs would be within the budget of most nations, and even some wealthy individuals.[49]

Ethics and responsibility

Climate engineering would represent a large-scale, intentional effort to modify the climate. It would differ from activities such as burning fossil fuels, as they change the climate inadvertently. Intentional climate change is often viewed differently from a moral standpoint.[50] It raises questions of whether humans have the right to change the climate deliberately, and under what conditions. For example, there may be an ethical distinction between climate engineering to minimize global warming and doing so to optimize the climate. Furthermore, ethical arguments often confront larger considerations of worldview, including individual and social religious commitments. This may imply that discussions of climate engineering should reflect on how religious commitments might influence the discourse.[51] For many people, religious beliefs are pivotal in defining the role of human beings in the wider world. Some religious communities might claim that humans have no responsibility in managing the climate, instead seeing such world systems as the exclusive domain of a Creator. In contrast, other religious communities might see the human role as one of "stewardship" or benevolent management of the world.[52] The question of ethics also relates to issues of policy decision-making. For example, the selection of a globally agreed target temperature is a significant problem in any climate engineering governance regime, as different countries or interest groups may seek different global temperatures.[53]

Politics

It has been argued that regardless of the economic, scientific and technical aspects, the difficulty of achieving concerted political action on global warming requires other approaches.[54] Those arguing political expediency say the difficulty of achieving meaningful emissions cuts[55] and the effective failure of the Kyoto Protocol demonstrate the practical difficulties of achieving carbon dioxide emissions reduction by the agreement of the international community.[56] However, others point to support for climate engineering proposals among think tanks with a history of global warming skepticism and opposition to emissions reductions as evidence that the prospect of climate engineering is itself already politicized and being promoted as part of an argument against the need for (and viability of) emissions reductions; that, rather than climate engineering being a solution to the difficulties of emissions reductions, the prospect of climate engineering is being used as part of an argument to stall emissions reductions in the first place.[57]

Risks and criticisms

Change in sea surface pH caused by anthropogenic CO2 between the 1700s and the 1990s. This ocean acidification will still be a major problem unless atmospheric CO2 is reduced.

Various criticisms have been made of climate engineering,[58] particularly solar radiation management (SRM) methods.[59] Decision making suffers from intransitivity of policy choice.[60] Some commentators appear fundamentally opposed. Groups such as ETC Group[21] and individuals such as Raymond Pierrehumbert have called for a moratorium on climate engineering techniques.[20][61]

Ineffectiveness

The effectiveness of the techniques proposed may fall short of predictions. In ocean iron fertilization, for example, the amount of carbon dioxide removed from the atmosphere may be much lower than predicted, as carbon taken up by plankton may be released back into the atmosphere from dead plankton, rather than being carried to the bottom of the sea and sequestered.[62] Model results from a 2016 study suggest that blooming algae could even accelerate Arctic warming.[63]

Moral hazard or risk compensation

The existence of such techniques may reduce the political and social impetus to reduce carbon emissions.[64] This has generally been called a potential moral hazard, although risk compensation may be a more accurate term. This concern causes many environmental groups and campaigners to be reluctant to advocate or discuss climate engineering for fear of reducing the imperative to cut greenhouse gas emissions.[65] However, several public opinion surveys and focus groups have found evidence of either assertions of a desire to increase emission cuts in the face of climate engineering, or of no effect.[66][67][68][69][70][71][72] Other modelling work suggests that the threat of climate engineering may in fact increase the likelihood of emissions reduction.[73][74][75][76]

Governance

Climate engineering opens up various political and economic issues. The governance issues characterizing carbon dioxide removal compared to solar radiation management tend to be distinct. Carbon dioxide removal techniques are typically slow to act, expensive, and entail risks that are relatively familiar, such as the risk of carbon dioxide leakage from underground storage formations. In contrast, solar radiation management methods are fast-acting, comparatively cheap, and involve novel and more significant risks such as regional climate disruptions. As a result of these differing characteristics, the key governance problem for carbon dioxide removal (as with emissions reductions) is making sure actors do enough of it (the so-called "free rider problem"), whereas the key governance issue for solar radiation management is making sure actors do not do too much (the "free driver" problem).[77]

Domestic and international governance vary by the proposed climate engineering method. There is presently a lack of a universally agreed framework for the regulation of either climate engineering activity or research. The London Convention addresses some aspects of the law in relation to biomass ocean storage and ocean fertilization. Scientists at the Oxford Martin School at Oxford University have proposed a set of voluntary principles, which may guide climate engineering research. The short version of the 'Oxford Principles'[78] is:
  • Principle 1: Geoengineering to be regulated as a public good
  • Principle 2: Public participation in geoengineering decision-making
  • Principle 3: Disclosure of geoengineering research and open publication of results
  • Principle 4: Independent assessment of impacts
  • Principle 5: Governance before deployment
These principles have been endorsed by the House of Commons of the United Kingdom Science and Technology Select Committee on “The Regulation of Geoengineering”,[79] and have been referred to by authors discussing the issue of governance.[80]

The Asilomar conference was replicated to deal with the issue of climate engineering governance,[80] and was covered in a TV documentary broadcast in Canada.

Implementation issues

There is general consensus[who?] that no climate engineering technique is currently sufficiently safe or effective to greatly reduce climate change risks, for the reasons listed above. However, some may be able to contribute to reducing climate risks within relatively short times.

All proposed solar radiation management techniques would require implementation on a relatively large scale in order to affect the Earth's climate. The least costly proposals are budgeted at tens of billions of US dollars annually.[81] Space sunshades would cost far more. It may be hard to agree on who should bear the substantial costs of some climate engineering techniques. However, the more effective solar radiation management proposals currently appear to have direct implementation costs low enough that it would be in the interest of several individual countries to implement them unilaterally.

In contrast, carbon dioxide removal, like greenhouse gas emissions reductions, has impacts proportional to its scale. These techniques would not be "implemented" in the same sense as solar radiation management ones. The problem structure of carbon dioxide removal resembles that of emissions cuts, in that both are somewhat expensive public goods whose provision presents a collective action problem.

Before they are ready to be used, most techniques would require technical development processes that are not yet in place. As a result, many promising proposed climate engineering techniques do not yet have the engineering development or experimental evidence needed to determine their feasibility or efficacy.

Public perception

In a 2017 focus group study conducted by the Cooperative Institute for Research in Environmental Sciences (CIRES) in the United States, Japan, New Zealand and Sweden, participants were asked about carbon sequestration options and reflection proposals such as space mirrors or the brightening of clouds. Their majority responses could be summed up as follows:
  • What happens if the technologies backfire with unintended consequences?
  • Are these solutions treating the symptoms of climate change rather than the cause?
  • Shouldn’t we just change our lifestyle and consumption patterns to fight climate change, making climate engineering a last resort?
  • Isn’t there a greater need to address political solutions to reduce our emissions?
Moderators then floated the idea of a future "climate emergency" such as rapid environmental change. The participants felt that mitigation of and adaptation to climate change were strongly preferred options in such a situation, and climate engineering was seen as a last resort.[82]

Evaluation of climate engineering

Most of what is known about the suggested techniques is based on laboratory experiments, observations of natural phenomena, and on computer modelling techniques. Some proposed climate engineering methods employ methods that have analogues in natural phenomena such as stratospheric sulfur aerosols and cloud condensation nuclei. As such, studies about the efficacy of these methods can draw on information already available from other research, such as that following the 1991 eruption of Mount Pinatubo. However, comparative evaluation of the relative merits of each technology is complicated, especially given modelling uncertainties and the early stage of engineering development of many proposed climate engineering methods.[83]

Reports into climate engineering have also been published in the United Kingdom by the Institution of Mechanical Engineers[9] and the Royal Society.[10] The IMechE report examined a small subset of proposed methods (air capture, urban albedo and algal-based CO2 capture techniques), and its main conclusions were that climate engineering should be researched and trialled at the small scale alongside a wider decarbonisation of the economy.[9]

The Royal Society review examined a wide range of proposed climate engineering methods and evaluated them in terms of effectiveness, affordability, timeliness and safety (assigning qualitative estimates in each assessment). The report divided proposed methods into "carbon dioxide removal" (CDR) and "solar radiation management" (SRM) approaches that respectively address longwave and shortwave radiation. The key recommendations of the report were that "Parties to the UNFCCC should make increased efforts towards mitigating and adapting to climate change, and in particular to agreeing to global emissions reductions", and that "[nothing] now known about climate engineering options gives any reason to diminish these efforts".[10] Nonetheless, the report also recommended that "research and development of climate engineering options should be undertaken to investigate whether low risk methods can be made available if it becomes necessary to reduce the rate of warming this century".[10]

In a 2009 review study, Lenton and Vaughan evaluated a range of proposed climate engineering techniques from those that sequester CO2 from the atmosphere and decrease longwave radiation trapping, to those that decrease the Earth's receipt of shortwave radiation.[8] In order to permit a comparison of disparate techniques, they used a common evaluation for each technique based on its effect on net radiative forcing. As such, the review examined the scientific plausibility of proposed methods rather than the practical considerations such as engineering feasibility or economic cost. Lenton and Vaughan found that "[air] capture and storage shows the greatest potential, combined with afforestation, reforestation and bio-char production", and noted that "other suggestions that have received considerable media attention, in particular "ocean pipes" appear to be ineffective".[8] They concluded that "[climate] geoengineering is best considered as a potential complement to the mitigation of CO2 emissions, rather than as an alternative to it".[8]
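As a rough illustration of such a common radiative-forcing metric (not Lenton and Vaughan's own calculation), the Python sketch below uses the widely cited simplified expression ΔF ≈ 5.35 ln(C/C0) W/m2 for CO2 to compare a hypothetical carbon dioxide drawdown against a solar radiation management scheme assumed to deliver a fixed forcing; all numbers are illustrative assumptions.

    import math

    def co2_forcing_change(c_new_ppm, c_ref_ppm):
        """Approximate change in radiative forcing (W/m^2) from a change in CO2 concentration."""
        return 5.35 * math.log(c_new_ppm / c_ref_ppm)

    # Hypothetical CDR effort drawing atmospheric CO2 down from 410 ppm to 380 ppm...
    cdr_forcing = co2_forcing_change(380, 410)   # about -0.4 W/m^2
    # ...compared with an SRM scheme assumed to supply -1.0 W/m^2 directly.
    srm_forcing = -1.0
    print(cdr_forcing, srm_forcing)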

In October 2011, a Bipartisan Policy Center panel issued a report urging immediate researching and testing in case "the climate system reaches a 'tipping point' and swift remedial action is required".[84]

National Academy of Sciences

The National Academy of Sciences conducted a 21-month project to study the potential impacts, benefits, and costs of two different types of climate engineering: carbon dioxide removal and albedo modification (solar radiation management). The differences between these two classes of climate engineering "led the committee to evaluate the two types of approaches separately in companion reports, a distinction it hopes carries over to future scientific and policy discussions."[85]

According to the two-volume study released in February 2015:
Climate intervention is no substitute for reductions in carbon dioxide emissions and adaptation efforts aimed at reducing the negative consequences of climate change. However, as our planet enters a period of changing climate never before experienced in recorded human history, interest is growing in the potential for deliberate intervention in the climate system to counter climate change. ...Carbon dioxide removal strategies address a key driver of climate change, but research is needed to fully assess if any of these technologies could be appropriate for large-scale deployment. Albedo modification strategies could rapidly cool the planet’s surface but pose environmental and other risks that are not well understood and therefore should not be deployed at climate-altering scales; more research is needed to determine if albedo modification approaches could be viable in the future.[86]
The project was sponsored by the National Academy of Sciences, U.S. Intelligence Community, National Oceanic and Atmospheric Administration, NASA, and U.S. Department of Energy.[85][87]

Intergovernmental Panel on Climate Change

The Intergovernmental Panel on Climate Change (IPCC) assessed the scientific literature on climate engineering (referred to as "geoengineering" in its reports), in which it considered carbon dioxide removal and solar radiation management separately. Its Fifth Assessment Report states:[88]
Models consistently suggest that SRM would generally reduce climate differences compared to a world with elevated GHG concentrations and no SRM; however, there would also be residual regional differences in climate (e.g., temperature and rainfall) when compared to a climate without elevated GHGs....
Models suggest that if SRM methods were realizable they would be effective in countering increasing temperatures, and would be less, but still, effective in countering some other climate changes. SRM would not counter all effects of climate change, and all proposed geoengineering methods also carry risks and side effects. Additional consequences cannot yet be anticipated as the level of scientific understanding about both SRM and CDR is low. There are also many (political, ethical, and practical) issues involving geoengineering that are beyond the scope of this report.

Thermodynamic free energy

From Wikipedia, the free encyclopedia

The thermodynamic free energy is the amount of work that a thermodynamic system can perform. The concept is useful in the thermodynamics of chemical or thermal processes in engineering and science. The free energy is the internal energy of a system minus the amount of energy that cannot be used to perform work. This unusable energy is given by the entropy of a system multiplied by the temperature of the system.

Like the internal energy, the free energy is a thermodynamic state function. Energy is a generalization of free energy: energy measures a system's ability to do work in general, whereas free energy measures the work that can actually be extracted under specified thermodynamic conditions.

Overview

Free energy is that portion of any first-law energy that is available to perform thermodynamic work; i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work.[1] Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy that can perform work within finite amounts of time. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transformations of the internal energy. For processes involving a system at constant pressure p and temperature T, the Gibbs free energy is the most useful because, in addition to subsuming any entropy change due merely to heat, it does the same for the p dV work needed to "make space for additional molecules" produced by various processes. (Hence its utility to solution-phase chemists, including biochemists.) The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore p dV work.)

The historically earlier Helmholtz free energy is defined as A = U − TS, where U is the internal energy, T is the absolute temperature, and S is the entropy. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A from Arbeit, the German word for work. Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system, and it can increase at most by the amount of work done on a system.

The Gibbs free energy is given by G = H − TS, where H is the enthalpy. (H = U + pV, where p is the pressure and V is the volume.)
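A small numerical sketch of these two definitions, with arbitrary placeholder values chosen purely to show the bookkeeping, might look like this in Python:

    def helmholtz(U, T, S):
        """Helmholtz free energy A = U - T*S (joules)."""
        return U - T * S

    def gibbs(U, T, S, p, V):
        """Gibbs free energy G = H - T*S with H = U + p*V (joules)."""
        H = U + p * V          # enthalpy
        return H - T * S

    U = 5.0e3      # J, internal energy (arbitrary)
    T = 298.15     # K
    S = 10.0       # J/K (arbitrary)
    p = 101325.0   # Pa
    V = 0.010      # m^3 (arbitrary)

    A = helmholtz(U, T, S)
    G = gibbs(U, T, S, p, V)
    print(A, G, G - A)   # G - A equals p*V, since G = A + pV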

Historically, these energy terms have been used inconsistently. In physics, free energy most often refers to the Helmholtz free energy, denoted by A, while in chemistry, free energy most often refers to the Gibbs free energy. Since both fields use both functions, a compromise has been suggested, using A to denote the Helmholtz function and G for the Gibbs function. While A is preferred by IUPAC, G is sometimes still in use, and the correct free energy function is often implicit in manuscripts and presentations.

Meaning of "free"

The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few metres forward, that person exerts mechanical energy, also known as work, on the box over a distance of a few metres. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved (work = force × distance). Because the person changed the stationary position of the box, that person exerted energy on the box. The work exerted is also called "useful energy", because all of that energy went into moving the box.

Because energy is neither created nor destroyed, but conserved (first law of thermodynamics), it is constantly being converted from one form into another. In the case of the person pushing the box, energy in the form of internal (or potential) energy obtained through metabolism was converted into work in order to push the box. This energy conversion, however, is not complete: some internal energy went into pushing the box, whereas some was lost in the form of heat (thermal energy).

The difference between the internal energy U and the energy lost while performing work, usually in the form of heat, which can be written as the product of the absolute temperature T and the entropy S (entropy being a measure of disorder in a system, or more specifically of the thermal energy not available to perform work), is what is called the "useful energy" of the body, that is, the work the body can perform on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of the work (useful energy) a system can perform. Mathematically, free energy is expressed as:
free energy = U − TS

This expression means that free energy (the energy of a system available to perform work) is the difference between the total internal energy of a system and the energy that is unavailable to perform work, which is given by the entropy of the system multiplied by its absolute temperature.

In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as “free heat”, “combined heat”, “radiant heat”, specific heat, heat capacity, “absolute heat”, “latent caloric”, “free” or “perceptible” caloric (calorique sensible), among others.

In 1780, for example, Laplace and Lavoisier stated: “In general, one can change the first hypothesis into the second by changing the words ‘free heat, combined heat, and heat released’ into ‘vis viva, loss of vis viva, and increase of vis viva.’” In this manner, the total mass of caloric in a body, called absolute heat, was regarded as a mixture of two components; the free or perceptible caloric could affect a thermometer, whereas the other component, the latent caloric, could not.[2] The use of the words “latent heat” implied a similarity to latent heat in the more usual sense; it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the absolute heat remained constant but the observed rise in temperature implied that some latent caloric had become “free” or perceptible.

During the early 19th century, the concept of perceptible or free caloric began to be referred to as "free heat" or heat set free. In 1824, for example, the French physicist Sadi Carnot, in his famous "Reflections on the Motive Power of Fire", speaks of quantities of heat 'absorbed or set free' in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase 'free energy' for the expression E − TS, in which the change in F (or G) determines the amount of energy 'free' for work under the given conditions.[3]:235

Thus, in traditional use, the term “free” was attached to Gibbs free energy, i.e., for systems at constant pressure and temperature, or to Helmholtz free energy, i.e., for systems at constant volume and temperature, to mean ‘available in the form of useful work.’[4] With reference to the Gibbs free energy, we add the qualification that it is the energy free for non-volume work.[5]:77–79

An increasing number of books and journal articles do not include the attachment “free”, referring to G as simply Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective ‘free’ was supposedly banished.[6][7][8] This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive ‘free’.[citation needed]

Application

Just as with the general concept of energy, free energy has multiple definitions, depending on the conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume V, pressure p, etc.). Scientists have come up with several ways to define free energy by holding certain of these parameters constant. When temperature and volume are kept constant, the relevant quantity is the Helmholtz free energy A. The mathematical expression of the Helmholtz free energy is:

A = U − TS

This definition of free energy is useful in physics for explaining the behavior of isolated systems kept at constant volume. In chemistry, on the other hand, most chemical reactions are carried out at constant pressure. Under this condition, the heat of the reaction q is equal to the enthalpy change ΔH of the reaction, that is, to all of the thermal energy associated with the reaction. For example, if a researcher carries out a reaction at constant atmospheric pressure (for instance in an open, constant-pressure calorimeter), the heat of the reaction is a direct measure of the enthalpy change of the reaction (q = ΔH); when the volume change is negligible, this is also approximately the change in internal energy (ΔH ≈ ΔU). Thus, under constant pressure and temperature, the free energy relevant to a reaction is known as the Gibbs free energy G.

G = H − TS
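As a worked illustration of this definition, consider the formation of liquid water, H2(g) + 1/2 O2(g) -> H2O(l), using approximate standard textbook values at 298.15 K (these values are not taken from this article):

    T = 298.15            # K
    delta_H = -285.8e3    # J/mol, approximate standard enthalpy change of the reaction
    delta_S = -163.3      # J/(mol*K), approximate standard entropy change of the reaction

    delta_G = delta_H - T * delta_S
    print(delta_G / 1e3)  # about -237 kJ/mol

The negative ΔG indicates that, at constant temperature and pressure, the reaction can perform useful work and proceeds spontaneously, even though the entropy of the system decreases.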

The experimental usefulness of these functions is restricted to conditions where certain variables (T, and V or external p) are held constant, although they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors.

In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.

Name                  | Symbol | Formula         | Natural variables
Helmholtz free energy | F      | F = U − TS      | T, V, {Ni}
Gibbs free energy     | G      | G = U + pV − TS | T, p, {Ni}

Ni is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. The differentials for reversible processes are (assuming only pV work):
dF = −p dV − S dT + Σi μi dNi
dG = V dp − S dT + Σi μi dNi
where μi is the chemical potential for the ith component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally, and which approximately characterize living creatures.
(dG)T,p = Σi μi dNi
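As a hedged illustration of how these differentials are used, the following sympy sketch takes the textbook ideal-gas form of the Gibbs energy (an assumption made purely for this example, not something given in this article) and checks that dG = V dp − S dT + Σi μi dNi reproduces the Maxwell relation (∂V/∂T)p = −(∂S/∂p)T mentioned earlier:

    import sympy as sp

    T, p, n, R, p0 = sp.symbols('T p n R p0', positive=True)
    mu0 = sp.Function('mu0')                       # standard-state chemical potential mu0(T)
    G = n * (mu0(T) + R * T * sp.log(p / p0))      # ideal-gas Gibbs energy (assumed form)

    V = sp.diff(G, p)        # V = (dG/dp)_T  ->  n*R*T/p
    S = -sp.diff(G, T)       # S = -(dG/dT)_p

    maxwell_lhs = sp.diff(V, T)      # (dV/dT)_p
    maxwell_rhs = -sp.diff(S, p)     # -(dS/dp)_T
    print(sp.simplify(maxwell_lhs - maxwell_rhs))   # 0: the relation holds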
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings.
An example is surface free energy, the increase in free energy per unit increase in surface area.

The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles.

History

The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in previous years to describe the force that caused chemical reactions. The term affinity, as used in chemical relation, dates back to at least the time of Albertus Magnus in 1250.[citation needed]

From the 1998 textbook Modern Thermodynamics[9] by Nobel Laureate and chemistry professor Ilya Prigogine we find: "As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition."

During the entire 18th century, the dominant view with regard to heat and light was that put forth by Isaac Newton, called the Newtonian hypothesis, which states that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity.

In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat.

In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction. They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 Swiss chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in one step or in a number of stages. This is known as Hess' law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy.

Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, dW ∝ dQ. This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics.

By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction in a coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push a piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i.e., the water molecules in the cylinder, do on each other as they pass or transform from one step or state of the engine cycle to the next, e.g., from (P1,V1) to (P2,V2). Clausius originally called this the "transformation content" of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e.g., to push the piston. Clausius defined this transformation heat as dQ = T dS.
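A small numerical example of Clausius's point, with assumed values not taken from the text: for heat Q supplied at temperature T_hot with surroundings at T_cold, the reversible (Carnot) limit on the work is Q(1 − T_cold/T_hot), and the remainder, T_cold times the entropy transferred, cannot be used to push the piston.

    Q = 1.0e6       # J of heat from the furnace (assumed)
    T_hot = 800.0   # K, temperature at which heat is supplied (assumed)
    T_cold = 300.0  # K, temperature of the surroundings (assumed)

    W_max = Q * (1.0 - T_cold / T_hot)   # maximum (reversible) work obtainable
    dS = Q / T_hot                       # entropy taken from the hot source
    unavailable = T_cold * dS            # heat that must be rejected to the surroundings
    print(W_max, unavailable, W_max + unavailable)   # 625000.0 375000.0 1000000.0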

In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words, to summarize his results in 1873, Gibbs states:

If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T, this equation may be written:
δ(ε − Tη + pν) = 0
when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum.

In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.

Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomsen's hypothesis that chemical affinity is measured by the heat of reaction, based on the principle of maximum work, that affinity is not the heat given out in the formation of a compound but rather the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at constant T and P, or Helmholtz free energy F at constant T and V), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions.

Up until this point, the general view had been such that: “all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish”. Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term “affinity” by the term “free energy” in much of the English-speaking world.
