The Big Bounce is a hypothesized cosmological model for the origin of the known universe. It was originally suggested as a phase of the cyclic model or oscillatory universe interpretation of the Big Bang,
where the first cosmological event was the result of the collapse of a
previous universe. It receded from serious consideration in the early
1980s after inflation theory emerged as a solution to the horizon problem, which had arisen from advances in observations revealing the large-scale structure of the universe. In the early 2000s, some theorists came to regard inflation as problematic and unfalsifiable, in that its various parameters can be adjusted to fit any observations, leaving the properties of the observable universe a matter of chance. Alternative pictures including a Big Bounce may
provide a predictive and falsifiable possible solution to the horizon problem, and are under active investigation as of 2017.
Expansion and contraction
The concept of the Big Bounce envisions the Big Bang as the beginning of a period of expansion that followed a period of contraction. In this view, one could talk of a Big Crunch followed by a Big Bang, or more simply, a Big Bounce.
This suggests that we could be living at any point in an infinite
sequence of universes, or conversely the current universe could be the
very first iteration. However, if the condition of the interval phase "between bounces", considered the 'hypothesis of the primeval atom', is taken as fully contingent, such enumeration may be meaningless, because that condition could represent a singularity in time at each instance if such perpetual return were absolute and undifferentiated.
The main idea behind the quantum theory of a Big Bounce is that, as density approaches infinity, the behavior of the quantum foam changes. All the so-called fundamental physical constants, including the speed of light in a vacuum, need not remain constant during a Big Crunch, especially over intervals shorter than one unit of Planck time (roughly 10⁻⁴³ seconds), within which measurement may not be possible, spanning or bracketing the point of inflection.
History
Big bounce models were endorsed on largely aesthetic grounds by cosmologists including Willem de Sitter, Carl Friedrich von Weizsäcker, George McVittie and George Gamow (who stressed that "from the physical point of view we must forget entirely about the precollapse period").
By the early 1980s, the advancing precision and scope of observational cosmology had revealed that the large-scale structure of the universe is flat, homogeneous and isotropic, a finding later accepted as the cosmological principle, taken to apply at scales beyond roughly 300 million light-years. It was recognized that it was necessary to find an explanation
for how distant regions of the universe could have essentially
identical properties without ever having been in light-like
communication. A solution was proposed to be a period of exponential
expansion of space in the early universe, as a basis for what became
known as inflation theory. Following the brief inflationary period, the universe continues to expand, but at a less rapid rate.
Various formulations of inflation theory and their detailed
implications became the subject of intense theoretical study. In the
absence of a compelling alternative, inflation became the leading
solution to the horizon problem. In the early 2000s, inflation was found
by some theorists to be problematic and unfalsifiable in that its
various parameters could be adjusted to fit any observations, a
situation known as a fine-tuning problem. Furthermore, inflation was
found to be inevitably eternal,
creating an infinity of different universes with typically different
properties, so that the properties of the observable universe are a
matter of chance.
An alternative concept including a Big Bounce was conceived as a
predictive and falsifiable possible solution to the horizon problem, and is under active investigation as of 2017.
The phrase "Big Bounce" appeared in the scientific literature in
1987, when it was first used in the title of a pair of articles (in
German) in Stern und Weltraum by Wolfgang Priester and Hans-Joachim Blome. It reappeared in 1988 in Iosif Rozental's Big Bang, Big Bounce,
a revised English-language translation of a Russian-language book (by a
different title), and in a 1991 article (in English) by Priester and
Blome in Astronomy and Astrophysics. (The phrase apparently originated as the title of a 1969 novel by Elmore Leonard, published shortly after public awareness of the Big Bang model increased with the 1965 discovery of the cosmic microwave background by Penzias and Wilson.)
The idea of the existence of a big bounce in the very early universe has found diverse support in works based on loop quantum gravity. In loop quantum cosmology, a branch of loop quantum gravity, a big bounce was first found in February 2006, for isotropic and homogeneous models, by Abhay Ashtekar, Tomasz Pawlowski and Parampreet Singh at Pennsylvania State University.
This result has been generalized to various other models by different
groups, and includes the cases of spatial curvature, a cosmological constant, anisotropies, and Fock-quantized inhomogeneities.
Martin Bojowald, an assistant professor of physics at Pennsylvania State University, published a study in July 2007 detailing work somewhat related to loop quantum gravity
that claimed to mathematically solve the time before the Big Bang,
which would give new weight to the oscillatory universe and Big Bounce
theories.
One of the main problems with the Big Bang theory is that at the moment of the Big Bang, there is a singularity
of zero volume and infinite energy. This is normally interpreted as the
end of physics as we know it; in this case, of the theory of general relativity. This is why one expects quantum effects to become important and avoid the singularity.
However, research in loop quantum cosmology purported to show that a previously existing universe collapsed, not to the point of singularity, but to a point before that where the quantum effects of gravity
become so strongly repulsive that the universe rebounds back out,
forming a new branch. Throughout this collapse and bounce, the evolution
is unitary.
Bojowald also claims that some properties of the universe that
collapsed to form ours can also be determined. Some properties of the
prior universe are not determinable however due to some kind of
uncertainty principle. This result has been disputed by different groups
which show that due to restrictions on fluctuations stemming from the
uncertainty principle, there are strong constraints on the change in
relative fluctuations across the bounce.
In 2003, Peter Lynds put forward a new cosmological model in
which time is cyclic. In his theory our Universe will eventually stop
expanding and then contract. Before becoming a singularity, as one would
expect from Hawking's black hole theory, the universe would bounce.
Lynds claims that a singularity would violate the second law of thermodynamics
and this stops the universe from being bounded by singularities. The
Big Crunch would be avoided with a new Big Bang. Lynds suggests the
exact history of the universe would be repeated in each cycle in an eternal recurrence. Some critics argue that while the universe may be cyclic, the histories would all be variants.
Lynds' theory has been dismissed by mainstream physicists for the lack
of a mathematical model behind its philosophical considerations.
In 2010, Roger Penrose advanced a theory based on general relativity which he calls "conformal cyclic cosmology".
The theory explains that the universe will expand until all matter
decays and is ultimately turned to light. Since nothing in the universe
would have any time or distance scale associated with it, it becomes
identical with the Big Bang, in turn resulting in a type of Big Crunch
which becomes the next big bang, thus perpetuating the next cycle.
In 2011, Nikodem Popławski showed that a nonsingular Big Bounce appears naturally in the Einstein-Cartan-Sciama-Kibble theory of gravity.
This theory extends general relativity by removing a constraint of the
symmetry of the affine connection and regarding its antisymmetric part,
the torsion tensor,
as a dynamical variable. The minimal coupling between torsion and Dirac
spinors generates a spin-spin interaction which is significant in
fermionic matter at extremely high densities. Such an interaction averts
the unphysical Big Bang singularity, replacing it with a cusp-like
bounce at a finite minimum scale factor, before which the universe was
contracting. This scenario also explains why the present Universe at
largest scales appears spatially flat, homogeneous and isotropic,
providing a physical alternative to cosmic inflation.
In 2012, a new theory of a nonsingular Big Bounce was constructed within the framework of standard Einstein gravity.
This theory combines the benefits of matter bounce and Ekpyrotic cosmology.
In particular, the well-known BKL instability, in which the homogeneous and isotropic background cosmological solution is unstable to the growth of anisotropic stress, is resolved in this theory. Moreover, curvature perturbations seeded during matter contraction are able to form a nearly scale-invariant primordial power spectrum, thus providing a consistent mechanism to explain the cosmic microwave background (CMB) observations.
A few sources argue that distant supermassive black holes whose large size is hard to explain so soon after the Big Bang, such as ULAS J1342+0928, may be evidence for a Big Bounce, with these supermassive black holes being formed before the Big Bounce.
The anthropic principle is the principle that there is a
restrictive lower bound on how statistically probable our observations
of the universe are, given that we could only exist in the particular
type of universe capable of developing and sustaining sentient life. Proponents of the anthropic principle argue that it explains why this universe has the age and the fundamental physical constants
necessary to accommodate conscious life, since if either had been
different, we would not have been around to make observations. Anthropic
reasoning is often used to deal with the notion that the universe seems
to be fine tuned.
There are many different formulations of the anthropic principle. Philosopher Nick Bostrom
counts them at thirty, but the underlying principles can be divided
into "weak" and "strong" forms, depending on the types of cosmological
claims they entail. The weak anthropic principle (WAP), such as the one defined by Brandon Carter, states that the universe's ostensible fine tuning is the result of selection bias (specifically survivorship bias). Most often such arguments draw upon some notion of the multiverse for there to be a statistical population
of universes to select from. However, a single vast universe is
sufficient for most forms of the WAP that do not specifically deal with
fine tuning. The strong anthropic principle (SAP), as proposed by John D. Barrow and Frank Tipler, states that the universe is in some sense compelled to eventually have conscious and sapient life emerge within it.
Definition and basis
The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe take on values that are consistent with conditions for life as we know it rather than a set of values that would not be consistent with life on Earth. The anthropic principle states that this is a necessity,
because if life were impossible, no living entity would be there to
observe it, and thus would not be known. That is, it must be possible to
observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility.
The term anthropic in "anthropic principle" has been argued to be a misnomer. While the name singles out our kind of carbon-based life, none of the finely tuned phenomena require human life or any kind of carbon chauvinism.
Any form of life or any form of heavy atom, stone, star or galaxy would
do; nothing specifically human or anthropic is involved.
The anthropic principle has given rise to some confusion and
controversy, partly because the phrase has been applied to several
distinct ideas. All versions of the principle have been accused of
discouraging the search for a deeper physical understanding of the
universe. The anthropic principle is often criticized for lacking falsifiability, and its critics therefore argue that it is a non-scientific concept, even though the weak anthropic principle, "conditions that are observed in the universe must allow the observer to exist", is "easy" to support in mathematics and philosophy, i.e. it is a tautology or truism.
However, building a substantive argument based on a tautological
foundation is problematic. Stronger variants of the anthropic principle
are not tautologies and thus make claims that some consider controversial and that are contingent upon empirical verification.
In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old.
If the universe were one tenth as old as its present age, there would
not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis, and small rocky planets would not yet exist. If the universe were 10 times
older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs,
and stable planetary systems would have already come to an end. Thus,
Dicke explained the coincidence between large dimensionless numbers
constructed from the constants of physics and the age of the universe, a
coincidence that inspired Dirac's varying-G theory.
Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements suggest that the observed density of baryonic matter, together with theoretical predictions of the amount of dark matter, accounts for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics").
However, if the cosmological constant were only several orders of
magnitude larger than its observed value, the universe would suffer
catastrophic inflation, which would preclude the formation of stars, and hence life.
The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction
also would convert all hydrogen to helium. Water, as well as
sufficiently long-lived stable stars, both essential for the emergence
of life as we know it, would not exist.
More generally, small changes in the relative strengths of the four
fundamental interactions can greatly affect the universe's age,
structure, and capacity for life.
Origin
The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Krakówsymposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlay the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang).
Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime
locations in the universe, and a more controversial "strong" form that
addressed the values of the fundamental constants of physics.
The argument can be used to explain
why the conditions happen to be just right for the existence of
(intelligent) life on the Earth at the present time. For if they were
not just right, then we should not have found ourselves to be here now,
but somewhere else, at some other appropriate time. This principle was
used very effectively by Brandon Carter and Robert Dicke
to resolve an issue that had puzzled physicists for a good many years.
The issue concerned various striking numerical relations that are
observed to hold between the physical constants (the gravitational constant, the mass of the proton, the age of the universe,
etc.). A puzzling aspect of this was that some of the relations hold
only at the present epoch in the Earth's history, so we appear,
coincidentally, to be living at a very special time (give or take a few
million years!). This was later explained, by Carter and Dicke, by the
fact that this epoch coincided with the lifetime of what are called main-sequence
stars, such as the Sun. At any other epoch, the argument ran, there
would be no intelligent life around to measure the physical constants in
question—so the coincidence had to hold, simply because there would be intelligent life around only at the particular time that the coincidence did hold!
One reason this is plausible is that there are many other places and
times in which we can imagine finding ourselves. But when applying the
strong principle, we only have one universe, with one set of fundamental
parameters, so what exactly is the point being made? Carter offers two
possibilities: First, we can use our own existence to make "predictions"
about the parameters. But second, "as a last resort", we can convert
these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse
("world ensemble" was Carter's term), in which the parameters (and
perhaps the laws of physics) vary across universes. The strong principle
then becomes an example of a selection effect,
exactly analogous to the weak principle. Postulating a multiverse is
certainly a radical step, but taking it could provide at least a partial
answer to a question seemingly out of the reach of normal science: "Why
do the fundamental laws of physics take the particular form we observe and not another?"
Since Carter's 1973 paper, the term anthropic principle
has been extended to cover a number of ideas that differ in important
ways from his. Particular confusion was caused in 1986 by the book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler,
which distinguished between a "weak" and "strong"
anthropic principle in a way very different from Carter's, as discussed
in the next section.
Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace
anticipated the anthropic principle as long ago as 1904: "Such a vast
and complex universe as that which we know exists around us, may have
been absolutely required [...] in order to produce a world that should
be precisely adapted in every detail for the orderly development of life
culminating in man." In 1957, Robert Dicke
wrote: "The age of the Universe 'now' is not random but conditioned by
biological factors [...] [changes in the values of the fundamental
constants of physics] would preclude the existence of man to consider
the problem."
Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang,
Boltzmann's thermodynamic concepts painted a picture of a universe that
had inexplicably low entropy. Boltzmann suggested several explanations,
one of which relied on fluctuations that could produce pockets of low
entropy, or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann it is unremarkable that
humanity happens to inhabit a Boltzmann universe, as that is the only
place where intelligent life could be.
Variants
Weak anthropic principle (WAP) (Carter): "[W]e must be prepared to take account of the fact that our location in the universe is necessarily
privileged to the extent of being compatible with our existence as
observers." Note that for Carter, "location" refers to our location in
time as well as space.
Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism.
In their 1986 book, The Anthropic Cosmological Principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows:
Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so." Unlike
Carter they restrict the principle to carbon-based life, rather than
just "observers". A more important difference is that they apply the WAP
to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP.
Strong anthropic principle (SAP) (Barrow and Tipler): "The
Universe must have those properties which allow life to develop within
it at some stage in its history." This
looks very similar to Carter's SAP, but unlike the case with Carter's
SAP, the "must" is an imperative, as shown by the following three
possible elaborations of the SAP, each proposed by Barrow and Tipler:
"There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'."
This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life as we know it emerges and evolves.
"Observers are necessary to bring the Universe into being."
"An ensemble of other different universes is necessary for the existence of our Universe."
By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation.
The philosophers John Leslie and Nick Bostrom
reject the Barrow and Tipler SAP as a fundamental misreading of Carter.
For Bostrom, Carter's anthropic principle just warns us to make
allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes:
Many 'anthropic principles' are
simply confused. Some, especially those drawing inspiration from Brandon
Carter's seminal papers, are sound, but... they are too weak to do any
real scientific work. In particular, I argue that existing methodology
does not permit any observational consequences to be derived from
contemporary cosmological theories, though these theories quite plainly
can be and are being tested empirically by astronomers. What is needed
to bridge this methodological gap is a more adequate formulation of how
observation selection effects are to be taken into account.
— Anthropic Bias, Introduction
Strong self-sampling assumption (SSSA) (Bostrom):
"Each observer-moment should reason as if it were randomly selected
from the class of all observer-moments in its reference class."
Analysing an observer's experience into a sequence of "observer-moments"
helps avoid certain paradoxes; but the main ambiguity is the selection
of the appropriate "reference class": for Carter's WAP this might
correspond to all real or potential observer-moments in our universe;
for the SAP, to all in the multiverse. Bostrom's mathematical
development shows that choosing either too broad or too narrow a
reference class leads to counter-intuitive results, but he is not able
to prescribe an ideal choice.
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability
of finding yourself in a universe compatible with your existence is
always 1. It does not allow for any additional nontrivial predictions
such as "gravity won't change tomorrow". To gain more predictive power,
additional assumptions on the prior distribution of alternative universes are necessary.
Playwright and novelist Michael Frayn describes a form of the Strong Anthropic Principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe":
It's this simple paradox. The
Universe is very old and very large. Humankind, by comparison, is only a
tiny disturbance in one small corner of it – and a very recent one. Yet
the Universe is only very large and very old because we are here to say
it is... And yet, of course, we all know perfectly well that it is what
it is whether we are here or not.
Character of anthropic reasoning
Carter
chose to focus on a tautological aspect of his ideas, which has
resulted in much confusion. In fact, anthropic reasoning interests
scientists because of something that is only implicit in the above
formal definitions, namely that we should give serious consideration to
there being other universes with different values of the "fundamental
parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang.
Carter and others have argued that life as we know it would not be
possible in most such universes. In other words, the universe we are in
is fine tuned
to permit life. Collins & Hawking (1973) characterized Carter's
then-unpublished big idea as the postulate that "there is not one
universe but a whole infinite ensemble of universes with all possible
initial conditions".
If this is granted, the anthropic principle provides a plausible
explanation for the fine tuning of our universe: the "typical" universe
is not fine-tuned, but given enough universes, a small fraction will be
capable of supporting intelligent life. Ours must be one of these, and
so the observed fine tuning should be no cause for wonder.
Although philosophers have discussed related concepts for
centuries, in the early 1970s the only genuine physical theory yielding a
multiverse of sorts was the many-worlds interpretation of quantum mechanics.
This would allow variation in initial conditions, but not in the truly
fundamental constants. Since that time a number of mechanisms for
producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking
in the early universe, which allows parameters previously thought of as
"fundamental constants" to vary over very large distances, thus eroding
the distinction between Carter's weak and strong principles. At the
beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions.
The anthropic idea that fundamental parameters are selected from a
multitude of different possibilities (each actual in some universe or
other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein
said: "What really interests me is whether God had any choice in the
creation of the world." In 2002, some proponents of the leading
candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind
stated: "...it seems plausible that the landscape is unimaginably large
and diverse. Whether we like it or not, this is the kind of behavior
that gives credence to the Anthropic Principle."
The modern form of a design argument is put forth by intelligent design. Proponents of intelligent design often cite the fine-tuning
observations that (in part) preceded the formulation of the anthropic
principle by Carter as a proof of an intelligent designer. Opponents of
intelligent design are not limited to those who hypothesize that other
universes exist; they may also argue, anti-anthropically, that the
universe is less fine-tuned than often claimed, or that accepting fine
tuning as a brute fact is less astonishing than the idea of an
intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys argue that the Anthropic Principle as conventionally stated actually undermines intelligent design.
Paul Davies's book The Goldilocks Enigma
(2006) reviews the current state of the fine tuning debate in detail,
and concludes by enumerating the following responses to that debate:
The absurd universe: Our universe just happens to be the way it is.
The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. Some Theory of Everything will explain why the various features of the Universe must have exactly the values that we see.
The multiverse: Multiple universes exist, having all possible
combinations of characteristics, and we inevitably find ourselves within
a universe that allows us to exist.
Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence.
The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind.
The self-explaining universe: A closed explanatory or causal loop:
"perhaps only universes with a capacity for consciousness can exist".
This is Wheeler's Participatory Anthropic Principle (PAP).
Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005).
Clearly each of these hypotheses resolves some aspects of the
puzzle, while leaving others unanswered. Followers of Carter would admit
only option 3 as an anthropic explanation, whereas 3 through 6 are
covered by different versions of Barrow and Tipler's SAP (which would
also include 7 if it is considered a variant of 4, as in Tipler 1994).
The anthropic principle, at least as Carter conceived it, can be
applied on scales much smaller than the whole universe. For example,
Carter (1983)
inverted the usual line of reasoning and pointed out that when
interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links.
Observational evidence
No
possible observational evidence bears on Carter's WAP, as it is merely
advice to the scientist and asserts nothing debatable. The obvious test
of Barrow's SAP, which says that the universe is "required" to support
life, is to find evidence of life in universes other than ours. Any
other universe is, by most definitions, unobservable (otherwise it would
be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist.
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following:
Physical theory will evolve so as to strengthen the hypothesis that early phase transitions
occur probabilistically rather than deterministically, in which case
there will be no deep physical reason for the values of fundamental
constants;
Hogan
has emphasised that it would be very strange if all fundamental
constants were strictly determined, since this would leave us with no
ready explanation for apparent fine tuning. In fact we might have to
resort to something akin to Barrow and Tipler's SAP: there would be no
option for such a universe not to support life.
Probabilistic predictions of parameter values can be made given:
a particular multiverse with a "measure", i.e. a well defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX that X is in the range X0 < X < X0 + dX), and
an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe).
The probability of observing value X is then proportional to N(X) P(X).
A generic feature of an analysis of this nature is that the expected
values of the fundamental physical constants should not be "over-tuned",
i.e. if there is some perfectly tuned predicted value (e.g. zero), the
observed value need be no closer to that predicted value than what is
required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.
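As a toy numerical illustration of the weighting just described (not a calculation from any cited work), the sketch below assumes a hypothetical prior "density of universes" P(X) and a hypothetical observer count N(X) for a single parameter X, then computes the normalized probability of observing each value; both functional forms are invented purely to show the mechanics, including how the most probable observed value can sit away from a "perfectly tuned" value of zero.

```python
import numpy as np

# Toy anthropic weighting: probability of observing parameter value X is taken
# proportional to N(X) * P(X). The prior P(X) and observer count N(X) below are
# hypothetical stand-ins, not derived from any physical theory.

X = np.linspace(0.0, 10.0, 1001)               # grid of parameter values (arbitrary units)
prior = np.exp(-0.1 * X)                        # assumed prior P(X): mildly favours small X
observers = np.exp(-((X - 3.0) / 1.5) ** 2)     # assumed N(X): observers only near X ~ 3

weight = observers * prior                      # unnormalized probability of observation
posterior = weight / np.trapz(weight, X)        # normalize so it integrates to 1

x_peak = X[np.argmax(posterior)]
print(f"Most probable observed value: X ~ {x_peak:.2f}")
print(f"Probability of observing X < 1 (near the 'perfectly tuned' value 0): "
      f"{np.trapz(posterior[X < 1], X[X < 1]):.3f}")
```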
One thing that would not count as evidence for the Anthropic Principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle (for possible counterevidence to this principle, see Copernican principle), unless there was some reason to think that that position was a necessary condition for our existence as observers.
Applications of the principle
The nucleosynthesis of carbon-12
Fred Hoyle
may have invoked anthropic reasoning to predict an astrophysical
phenomenon. He is said to have reasoned, from the prevalence on Earth of
life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction.
However, in 2010 Helge Kragh
argued that Hoyle did not use anthropic reasoning in making his
prediction, since he made his prediction in 1953 and anthropic reasoning
did not come into prominence until 1980. He called this an "anthropic
myth," saying that Hoyle and others made an after-the-fact connection
between carbon and life decades after the discovery of the resonance.
An investigation of the historical
circumstances of the prediction and its subsequent experimental
confirmation shows that Hoyle and his contemporaries did not associate
the level in the carbon nucleus with life at all.
Don Page criticized the entire theory of cosmic inflation as follows. He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin, must include the assumption that at the initial singularity, the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle.
While Davies accepted the premise that the initial state of the visible
universe (which filled a microscopic amount of space before inflating)
had to possess a very low entropy value—due to random quantum
fluctuations—to account for the observed thermodynamic arrow of time, he
deemed this fact an advantage for the theory. That the tiny patch of
space from which our observable universe grew had to be extremely
orderly, to allow the post-inflation universe to have an arrow of time,
makes it unnecessary to adopt any "ad hoc" hypotheses about the initial
entropy state, hypotheses other Big Bang theories require.
String theory
predicts a large number of possible universes, called the "backgrounds"
or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind
has argued that the existence of a large number of vacua puts anthropic
reasoning on firm ground: only universes whose properties are such as
to allow observers to exist are observed, while a possibly much larger
set of universes lacking such properties go unnoticed.
Steven Weinberg believes the Anthropic Principle may be appropriated by cosmologists committed to nontheism,
and refers to that Principle as a "turning point" in modern science
because applying it to the string landscape "may explain how the
constants of nature that we observe can take values suitable for life
without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Lubos Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees
argue that only some aspects of a physical theory need be observable
and/or testable for the theory to be accepted, and that many
well-accepted theories are far from completely testable at present.
There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T
differ from 1. The argument is often of an anthropic character and
possibly the first of its kind, albeit before the complete concept came
into vogue.
In 1920, Paul Ehrenfest showed that if there is only one time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl showed that Maxwell's theory of electromagnetism works only with three dimensions of space and one of time. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.
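The orbital-stability part of this argument can be sketched with the standard effective-potential analysis. The snippet below is an illustrative check, not a reproduction of Ehrenfest's own calculation: it assumes an attractive potential of the form V(r) = −k/r^(N−2) for N spatial dimensions and arbitrary illustrative constants k, m and orbit radius r0, and tests whether a circular orbit is stable (second derivative of the effective potential positive), which holds only for N < 4.

```python
# Illustrative sketch of circular-orbit stability in N spatial dimensions.
# Assumption: point-mass attraction V(r) = -k / r^(N-2), so the effective radial
# potential for angular momentum L is V_eff(r) = L^2 / (2 m r^2) + V(r).
# A circular orbit at r0 is stable only if V_eff''(r0) > 0.

def v_eff_second_derivative(N, k=1.0, m=1.0, r0=1.0):
    """Curvature of V_eff at the circular-orbit radius r0 for N spatial dimensions."""
    # Circular-orbit condition V_eff'(r0) = 0 fixes L^2 = m * k * (N - 2) * r0**(4 - N).
    L2 = m * k * (N - 2) * r0 ** (4 - N)
    # V_eff''(r0) = 3 L^2 / (m r0^4) - k (N - 2)(N - 1) / r0^N
    return 3 * L2 / (m * r0 ** 4) - k * (N - 2) * (N - 1) / r0 ** N

for N in range(3, 7):
    curvature = v_eff_second_derivative(N)
    verdict = "stable" if curvature > 0 else "unstable/marginal"
    print(f"N = {N}: V_eff'' = {curvature:+.3f} -> {verdict} circular orbit")
# The curvature is positive only for N = 3, zero for N = 4, and negative for N = 5, 6,
# consistent with stable planetary orbits existing only in three spatial dimensions.
```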
Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if T > 1, Tegmark maintains that protons and electrons
would be unstable and could decay into particles having greater mass
than themselves. (This is not a problem if the particles have a
sufficiently low temperature.)
Lastly, if N < 3, gravitation of any kind becomes
problematic, and the universe is probably too simple to contain
observers. For example, when N < 3, nerves cannot cross without intersecting.
Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which happens to describe the world around us.
In 2019, James Scargill argued that complex life may be possible
with two spatial dimensions. According to Scargill, a purely scalar
theory of gravity may enable a local gravitational force, and 2D
networks may be sufficient for complex neural networks.
Metaphysical interpretations
Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ-centered (compare Omega Point), expressing a creatio evolutiva instead of the older notion of creatio continua.
From a strictly secular, humanist perspective, it likewise allows human beings to be put back in the center, an anthropogenic shift in cosmology. Karl W. Giberson has been laconic in stating that
What emerges is the suggestion that cosmology may at last be in possession of some raw material for a postmodern creation myth.
William Sims Bainbridge disagreed with de Chardin's optimism about a
future Omega Point at the end of history, arguing that logically we are
trapped at the Omicron Point, in the middle of the Greek alphabet rather
than advancing to the end, because the universe does not need to have
any characteristics that would support our further technical progress,
if the Anthropic principle merely requires it to be suitable for our
evolution to this point.
Barrow and Tipler's The Anthropic Cosmological Principle begins with an extensive review of many topics in the history of ideas
the authors deem relevant to the anthropic principle, because the
authors believe that principle has important antecedents in the notions
of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological
reasoning; the former asserts that order must have a consequent
purpose; the latter asserts more modestly that order must have a planned
cause. They attribute this important but nearly always overlooked
distinction to an obscure 1883 book by L. E. Hicks.
Seeing little sense in a principle requiring intelligent life to
emerge while remaining indifferent to the possibility of its eventual
extinction, Barrow and Tipler propose the final anthropic principle
(FAP): Intelligent information-processing must come into existence in
the universe, and, once it comes into existence, it will never die out.
Barrow and Tipler submit that the FAP is both a valid physical
statement and "closely connected with moral values". FAP places strong
constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas.
In his review of Barrow and Tipler, Martin Gardner
ridiculed the FAP by quoting the last two sentences of their book as
defining a Completely Ridiculous Anthropic Principle (CRAP):
At the instant the Omega Point is reached, life will have gained control of all
matter and forces not only in a single universe, but in all universes
whose existence is logically possible; life will have spread into all
spatial regions in all universes which could logically exist, and will
have stored an infinite amount of information, including all bits of knowledge that it is logically possible to know. And this is the end.
Reception and controversies
Carter
has frequently regretted his own choice of the word "anthropic",
because it conveys the misleading impression that the principle involves
humans specifically, rather than intelligent observers in general. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects.
A common criticism of Carter's SAP is that it is an easy deus ex machina
that discourages searches for physical explanations. To quote Penrose
again: "[I]t tends to be invoked by theorists whenever they do not have a
good enough theory to explain the observed facts."
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form
and not because a substantive claim is made and supported by
observation of reality. As such, they are criticized as an elaborate way
of saying, "If things were different, they would be different," which
is a valid statement, but does not make a claim of some factual
alternative over another.
Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue
that it does make falsifiable predictions. A modified version of this
criticism is that we understand so little about the emergence of life,
especially intelligent life, that it is effectively impossible to
calculate the number of observers in each universe. Also, the prior
distribution of universes as a function of the fundamental constants is
easily modified to get any desired result.
Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer,
and others claim that the stronger versions of the anthropic principle
seem to reverse known causes and effects. Gould compared the claim that
the universe is fine-tuned for the benefit of our kind of life to saying
that sausages were made long and narrow so that they could fit into
modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection
to adapt to the physical and geophysical environment in which life
exists. Life appears to have adapted to the universe, and not vice
versa.
Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism", see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions,
provided some adjustments are made in how those interactions work.
However, if some of the fine-tuned details of our universe were
violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc.
Lee Smolin
has offered a theory designed to improve on the lack of imagination
that anthropic principles have been accused of. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe.
The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín
contend that "in its weak version, the anthropic principle is a mere
tautology, which does not allow us to explain anything or to predict
anything that we did not already know. In its strong version, it is a
gratuitous speculation".
A further criticism by Mosterín concerns the flawed "anthropic"
inference from the assumption of an infinity of worlds to the existence
of one like ours:
The suggestion that an infinity of
objects characterized by certain numbers or properties implies the
existence among them of objects with any combination of those numbers or
characteristics [...] is mistaken. An infinity does not imply at all
that any arrangement is present or repeated. [...] The assumption that
all possible worlds are realized in an infinite universe is equivalent
to the assertion that any infinite set of numbers contains all numbers
(or at least all Gödel numbers of the [defining] sequences), which is
obviously false.
The thermodynamic free energy is a concept useful in the thermodynamics of chemical or thermal processes in engineering and science. The change in the free energy is the maximum amount of work that a thermodynamic system
can perform in a process at constant temperature, and its sign
indicates whether a process is thermodynamically favorable or forbidden.
Since free energy usually contains potential energy,
it is not absolute but depends on the choice of a zero point.
Therefore, only relative free energy values, or changes in free energy,
are physically meaningful.
Free energy is that portion of any first-law energy that is available to perform thermodynamic work at constant temperature, i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work. Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transforms of the internal energy.
The Gibbs free energy is given by G = H − TS, where H is the enthalpy, T is the absolute temperature, and S is the entropy. H = U + pV, where U is the internal energy, p is the pressure, and V is the volume. G is the most useful for processes involving a system at constant pressure p and temperature T, because, in addition to subsuming any entropy change due merely to heat, a change in G also excludes the p dV
work needed to "make space for additional molecules" produced by
various processes. Gibbs free energy change therefore equals work not
associated with system expansion or compression, at constant temperature
and pressure. (Hence its utility to solution-phase chemists, including biochemists.)
The historically earlier Helmholtz free energy is defined as A = U − TS. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A from Arbeit, the German word for work. Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system at constant temperature, and it can increase at most by the amount of work done on a system isothermally. The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore p dV work.)
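A minimal sketch, using arbitrary illustrative values for U, p, V, T and S (none taken from the text), of how the two definitions above are evaluated and how G and A differ by the pV term:

```python
# Evaluating the Gibbs and Helmholtz free energies from their definitions.
# All numerical inputs are illustrative, not data from the article.

def helmholtz(U, T, S):
    """A = U - T*S"""
    return U - T * S

def gibbs(U, p, V, T, S):
    """G = H - T*S, with H = U + p*V"""
    H = U + p * V
    return H - T * S

U = 150.0e3   # internal energy, J (illustrative)
p = 101325.0  # pressure, Pa
V = 0.025     # volume, m^3
T = 298.15    # temperature, K
S = 200.0     # entropy, J/K (illustrative)

A = helmholtz(U, T, S)
G = gibbs(U, p, V, T, S)
print(f"A = {A/1000:.2f} kJ, G = {G/1000:.2f} kJ, G - A = p*V = {(G - A)/1000:.2f} kJ")
```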
Historically, the term 'free energy' has been used for either quantity. In physics, free energy most often refers to the Helmholtz free energy, denoted by A (or F), while in chemistry, free energy
most often refers to the Gibbs free energy. The values of the two free
energies are usually quite similar and the intended free energy function
is often implicit in manuscripts and presentations.
Meaning of "free"
The
basic definition of "energy" is a measure of a body's (in
thermodynamics, the system's) ability to cause change. For example, when
a person pushes a heavy box a few meters forward, that person exerts mechanical energy, also known as work, on the box over that distance. The mathematical definition of this form of energy
is the product of the force exerted on the object and the distance by
which the box moved (Work = Force × Distance).
Because the person changed the stationary position of the box, that
person exerted energy on that box. The work exerted can also be called
"useful energy", because energy was converted from one form into the
intended purpose, i.e. mechanical utilisation. For the case of the
person pushing the box, the energy in the form of internal (or
potential) energy obtained through metabolism was converted into work in
order to push the box. This energy conversion, however, was not
straightforward: while some internal energy went into pushing the box,
some was diverted away (lost) in the form of heat (transferred thermal
energy). For a reversible process, heat is the product of the absolute
temperature and the change in entropy of a body (entropy is a measure of disorder in a system). The difference between the change in internal energy, which is ΔU, and the energy lost in the form of heat is what is called the "useful energy" of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of work (useful energy) a system can perform at constant temperature. Mathematically, free energy is expressed as:
A = U − TS
This expression has commonly been interpreted to mean that work is extracted from the internal energy U while TS represents energy not available to perform work. However, this is incorrect. For instance, in an isothermal expansion of an ideal gas, the internal energy change is ΔU = 0 and the expansion work w = −TΔS is derived exclusively from the TS term supposedly not available to perform work. But it is noteworthy that the derivative form of the free energy, dA = −S dT − p dV (for the Helmholtz free energy), does indeed indicate that a spontaneous change in a non-reactive system's free energy (NOT the internal energy) comprises the available energy to do work (compression in this case), −p dV, and the unavailable energy, −S dT. A similar expression can be written for the Gibbs free energy change.
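The isothermal-expansion point can be checked with a short calculation. The sketch below uses example numbers (one mole of ideal gas, arbitrary temperature and volumes) to confirm that ΔU = 0 while the work comes entirely from the TΔS term, and that ΔA = ΔU − TΔS equals the reversible work, consistent with the relation ΔA = wrev discussed under "Work and free energy change" below.

```python
import math

# Reversible isothermal expansion of n moles of ideal gas from V1 to V2.
# For an ideal gas at constant T: dU = 0, q_rev = T*dS, and the work done ON the gas
# is w = -q_rev. All inputs are example numbers chosen for illustration.

R = 8.314                 # gas constant, J/(mol K)
n = 1.0                   # mol
T = 300.0                 # K
V1, V2 = 0.010, 0.020     # m^3, the volume doubles

dU = 0.0                                   # ideal-gas internal energy depends only on T
dS = n * R * math.log(V2 / V1)             # entropy change for isothermal expansion
w_rev = -n * R * T * math.log(V2 / V1)     # reversible work done on the gas (negative: gas does work)
dA = dU - T * dS                           # Helmholtz free-energy change at constant T

print(f"T*dS  = {T*dS:8.1f} J   (heat absorbed reversibly)")
print(f"w_rev = {w_rev:8.1f} J   (work done on the gas)")
print(f"dA    = {dA:8.1f} J   -> equals w_rev, even though dU = 0")
```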
In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat
was beginning to be divided into different classification
categories, such as “free heat”, “combined heat”, “radiant heat”, specific heat, heat capacity, “absolute heat”, “latent caloric”, “free” or “perceptible” caloric (calorique sensible), among others.
In 1780, for example, Laplace and Lavoisier
stated: “In general, one can change the first hypothesis into the
second by changing the words ‘free heat, combined heat, and heat
released’ into ‘vis viva, loss of vis viva, and increase of vis viva.’” In this manner, the total mass of caloric in a body, called absolute heat,
was regarded as a mixture of two components; the free or perceptible
caloric could affect a thermometer, whereas the other component, the
latent caloric, could not.
The use of the words “latent heat” implied a similarity to latent heat
in the more usual sense; it was regarded as chemically bound to the
molecules of the body. In the adiabatic compression
of a gas, the absolute heat remained constant but the observed rise in
temperature implied that some latent caloric had become “free” or
perceptible.
During the early 19th century, the concept of perceptible or free
caloric began to be referred to as “free heat” or heat set free. In
1824, for example, the French physicist Sadi Carnot,
in his famous “Reflections on the Motive Power of Fire”, speaks of
quantities of heat ‘absorbed or set free’ in different transformations.
In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase ‘free energy’ for the expression E − TS, in which the change in A (or G) determines the amount of energy ‘free’ for work under the given conditions, specifically constant temperature.
Thus, in traditional use, the term “free” was attached to Gibbs
free energy for systems at constant pressure and temperature, or to
Helmholtz free energy for systems at constant temperature, to mean
‘available in the form of useful work.’
With reference to the Gibbs free energy, we need to add the
qualification that it is the energy free for non-volume work and
compositional changes.
An increasing number of books and journal articles do not include the attachment “free”, referring to G as simply Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC
meeting to set unified terminologies for the international scientific
community, in which the adjective ‘free’ was supposedly banished.
This standard, however, has not yet been universally adopted, and many
published articles and books still include the descriptive ‘free’.
Application
Just
like the general concept of energy, free energy has a few definitions
suitable for different conditions. In physics, chemistry, and biology,
these conditions are thermodynamic parameters (temperature T, volume V, pressure p,
etc.). Scientists have come up with several ways to define free energy.
The mathematical expression of Helmholtz free energy is:
A = U − TS
This definition of free energy is useful for gas-phase reactions or
in physics when modeling the behavior of closed systems kept at a
constant volume. For example, if a researcher wanted to perform a
combustion reaction in a bomb calorimeter, the volume is kept constant
throughout the course of the reaction. Therefore, the heat of the reaction
measured at constant volume equals the internal-energy change of the system, q_V = ΔU, from which the Helmholtz free-energy change ΔA = ΔU − TΔS follows once the entropy change is known.
In solution chemistry, on the other hand, most chemical reactions are
kept at constant pressure. Under this condition, the heat q_p of the reaction is equal to the enthalpy change ΔH of the system. Under constant pressure and temperature, the free energy in a reaction is known as the Gibbs free energy, G = H − TS.
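As a concrete illustration of the two definitions just given, the sketch below writes the Helmholtz and Gibbs free energies as simple Python functions of the corresponding state variables; the numerical values are assumed, purely for illustration, and are not taken from any measurement:

    # Illustrative sketch (assumed values): the two common free-energy definitions.

    def helmholtz(U, T, S):
        """Helmholtz free energy A = U - T*S (natural variables T, V)."""
        return U - T * S

    def gibbs(H, T, S):
        """Gibbs free energy G = H - T*S, with H = U + p*V (natural variables T, p)."""
        return H - T * S

    # Assumed, purely illustrative values (J, J, J/K, K):
    U, H, S, T = 150_000.0, 152_500.0, 200.0, 298.15
    print(helmholtz(U, T, S))   # relevant at constant T and V (e.g., bomb calorimeter)
    print(gibbs(H, T, S))       # relevant at constant T and p (e.g., solution chemistry)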
These functions have a minimum in chemical equilibrium, as long as certain variables (T, and V or p) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors.
N_i is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. The differentials for processes at uniform pressure and temperature are (assuming only pV work):
dA = −S dT − p dV + Σ_i μ_i dN_i
dG = −S dT + V dp + Σ_i μ_i dN_i
where μ_i is the chemical potential for the ith component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally, and which approximately characterize living creatures. Under these conditions, it simplifies to
dG = Σ_i μ_i dN_i
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings.
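The constant-temperature, constant-pressure simplification can be illustrated with a short sketch; the entropy, volume, chemical potentials and composition changes below are assumed values, chosen only to show which terms survive when dT = dp = 0:

    # Illustrative sketch (assumed values): dG = -S dT + V dp + sum_i mu_i dN_i
    # for a small change of state, and its constant-T,p simplification.

    def dG(S, V, mu, dT, dp, dN):
        """Differential Gibbs energy change for small dT, dp and composition changes dN."""
        return -S * dT + V * dp + sum(m * dn for m, dn in zip(mu, dN))

    S  = 200.0             # entropy, J/K (assumed)
    V  = 0.001             # volume, m^3 (assumed)
    mu = [-237e3, -394e3]  # chemical potentials of two components, J/mol (assumed)
    dN = [1e-3, -1e-3]     # small composition change, mol (assumed)

    # At constant temperature and pressure (dT = dp = 0) only the mu_i dN_i term survives:
    print(dG(S, V, mu, dT=0.0, dp=0.0, dN=dN))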
An example is surface free energy, the increase in free energy per unit increase in surface area.
The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles.
Work and free energy change
For a reversible isothermal process, ΔS = q_rev/T and therefore the definition of A results in
ΔA = ΔU − TΔS = ΔU − q_rev = w_rev
(at constant temperature)
This tells us that the change in free energy equals the reversible or
maximum work for a process performed at constant temperature. Under
other conditions, free-energy change is not equal to work; for instance,
for a reversible adiabatic expansion of an ideal gas, ΔA = w_rev − SΔT. Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, Δ_cyc A = 0,
while the engine produces nonzero work. It is important to note that
for heat engines and other thermal systems, the free energies do not
offer convenient characterizations; internal energy and enthalpy are the
preferred potentials for characterizing thermal systems.
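The statement that the free-energy decrease bounds the obtainable work can be checked numerically. The sketch below (all values assumed for illustration) compares the reversible isothermal expansion work of an ideal gas, which equals ΔA in magnitude, with the smaller work delivered by an irreversible expansion against a constant external pressure:

    # Illustrative sketch (assumed values): reversible vs. irreversible isothermal
    # expansion of an ideal gas at constant T.
    import math

    R, T, n = 8.314, 298.15, 1.0
    V1, V2 = 1.0e-3, 2.0e-3          # m^3, assumed
    p_ext = n * R * T / V2           # constant external pressure equal to the final pressure

    w_rev   = -n * R * T * math.log(V2 / V1)   # reversible work done on the gas; equals delta_A
    w_irrev = -p_ext * (V2 - V1)               # irreversible work done on the gas

    # |w_irrev| < |w_rev|: the decrease in A bounds the work obtainable at constant T.
    print(w_rev, w_irrev)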
Free energy change and spontaneous processes
According to the second law of thermodynamics, for any process that occurs in a closed system, the inequality of Clausius, ΔS > q/T_surr, applies. For a process at constant temperature and pressure without non-PV work, this inequality transforms into ΔG < 0. Similarly, for a process at constant temperature and volume, ΔA < 0.
Thus, a negative value of the change in free energy is a necessary
condition for a process to be spontaneous; this is the most useful form
of the second law of thermodynamics in chemistry. In chemical
equilibrium at constant T and p without electrical work, dG = 0.
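A minimal sketch of the spontaneity criterion at constant T and p (reaction values assumed, purely illustrative): evaluating ΔG = ΔH − TΔS at a few temperatures shows the sign change that separates spontaneous from non-spontaneous conditions.

    # Illustrative sketch (assumed values): sign of delta_G = delta_H - T*delta_S
    # as a spontaneity criterion at constant T and p.

    delta_H = -50_000.0   # J/mol, exothermic (assumed)
    delta_S = -100.0      # J/(mol*K), entropy decreases (assumed)

    for T in (200.0, 400.0, 600.0):
        delta_G = delta_H - T * delta_S
        print(T, delta_G, "spontaneous" if delta_G < 0 else "non-spontaneous")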
History
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in previous years to describe the force that caused chemical reactions. The term affinity, as used in chemical relation, dates back to at least the time of Albertus Magnus.
From the 1998 textbook Modern Thermodynamics by Nobel Laureate and chemistry professor Ilya Prigogine
we find: "As motion was explained by the Newtonian concept of force,
chemists wanted a similar concept of ‘driving force’ for chemical
change. Why do chemical reactions occur, and why do they stop at
certain points? Chemists called the ‘force’ that caused chemical
reactions affinity, but it lacked a clear definition."
During the entire 18th century, the dominant view with regard to heat and light was that put forth by Isaac Newton, called the Newtonian hypothesis,
which states that light and heat are forms of matter attracted or
repelled by other forms of matter, with forces analogous to gravitation
or to chemical affinity.
In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work,
in which all chemical changes occurring without intervention of outside
energy tend toward the production of bodies or of a system of bodies
which liberate heat.
In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry
by showing that the heat given out in a reaction is equal to the heat
absorbed in the reverse reaction. They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 Swiss chemist Germain Hess
formulated the principle that the evolution of heat in a reaction is
the same whether the process is accomplished in a single step or in a
number of stages. This is known as Hess' law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy.
Based on these and other ideas, Berthelot and Thomsen, as well as
others, considered the heat given out in the formation of a compound as
a measure of the affinity, or the work done by the chemical forces.
This view, however, was not entirely correct. In 1847, the English
physicist James Joule
showed that he could raise the temperature of water by turning a paddle
wheel in it, thus showing that heat and mechanical work were equivalent
or proportional to each other, i.e., approximately, dW ∝ dQ. This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics.
By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction
in a coal furnace to boil water into steam, and then use the
high-pressure energy of the steam to
push a piston. Thus, we might naively reason that one can entirely
convert the initial combustion heat of the chemical reaction into the
work of pushing the piston. Clausius showed, however, that we must take
into account the work that the molecules of the working body, i.e., the
water molecules in the cylinder, do on each other as they pass or
transform from one step or state of the engine cycle to the next, e.g., from (P_1, V_1) to (P_2, V_2). Clausius originally called this the “transformation content” of the body, and then later changed the name to entropy.
Thus, the heat used to transform the working body of molecules from
one state to the next cannot be used to do external work, e.g., to push
the piston. Clausius defined this transformation heat as ΔS = Q/T.
In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces,
in which he introduced the preliminary outline of the principles of his
new equation able to predict or estimate the tendencies of various
natural processes to ensue when bodies or systems are brought into
contact. By studying the interactions of homogeneous substances in
contact, i.e., bodies, being in composition part solid, part liquid, and
part vapor, and by using a three-dimensional volume-entropy-internal energy
graph, Gibbs was able to determine three states of equilibrium, i.e.,
"necessarily stable", "neutral", and "unstable", and whether or not
changes will ensue. In 1876, Gibbs built on this framework by
introducing the concept of chemical potential
so to take into account chemical reactions and states of bodies that
are chemically different from each other. In his own words, to
summarize his results in 1873, Gibbs states:
If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T, this equation may be written:
δ(ε − Tη + pν) = 0
when δ refers to the variation produced by any variations in the state
of the parts of the body, and (when different parts of the body are in
different states) in the proportion in which the body is divided between
the different states. The condition of stable equilibrium is that the
value of the expression in the parenthesis shall be a minimum.
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.
Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz
stated, in opposition to Berthelot and Thomsen’s hypothesis that chemical
affinity is a measure of the heat of a chemical reaction, as
based on the principle of maximum work, that affinity is not the heat
given out in the formation of a compound but rather it is the largest
quantity of work which can be gained when the reaction is carried out in
a reversible manner, e.g., electrical work in a reversible cell. The
maximum work is thus regarded as the diminution of the free, or
available, energy of the system (Gibbs free energy G at constant T and P, or Helmholtz free energy A at constant T and V), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or A is the amount of energy “free” for work under the given conditions.
Up until this point, the general view had been such that: “all
chemical reactions drive the system to a state of equilibrium in which
the affinities of the reactions vanish”. Over the next 60 years, the
term affinity came to be replaced with the term free energy. According
to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term “affinity” by the term “free energy” in much of the English-speaking world.