Friday, June 11, 2021

Flatness problem

From Wikipedia, the free encyclopedia
 
The local geometry of the universe is determined by whether the relative density Ω is less than, equal to or greater than 1. From top to bottom: a spherical universe with greater than critical density (Ω>1, k>0); a hyperbolic, underdense universe (Ω<1, k<0); and a flat universe with exactly the critical density (Ω=1, k=0). The spacetime of the universe is, unlike the diagrams, four-dimensional.

The flatness problem (also known as the oldness problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time.

In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since any departure of the total density from the critical value would increase rapidly over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10⁶² or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value.

The problem was first mentioned by Robert Dicke in 1969. The most commonly accepted solution among cosmologists is cosmic inflation, the idea that the universe went through a brief period of extremely rapid expansion in the first fraction of a second after the Big Bang; along with the monopole problem and the horizon problem, the flatness problem is one of the three primary motivations for inflationary theory.

Energy density and the Friedmann equation

According to Einstein's field equations of general relativity, the structure of spacetime is affected by the presence of matter and energy. On small scales space appears flat – as does the surface of the Earth if one looks at a small area. On large scales however, space is bent by the gravitational effect of matter. Since relativity indicates that matter and energy are equivalent, this effect is also produced by the presence of energy (such as light and other electromagnetic radiation) in addition to matter. The amount of bending (or curvature) of the universe depends on the density of matter/energy present.

This relationship can be expressed by the first Friedmann equation. In a universe without a cosmological constant, this is:

H² = (8πG/3)ρ − kc²/a²

Here H is the Hubble parameter, a measure of the rate at which the universe is expanding; ρ is the total density of mass and energy in the universe; a is the scale factor (essentially the 'size' of the universe); and k is the curvature parameter, a measure of how curved spacetime is. (H is related to the scale factor by H = ȧ/a, where ȧ is its rate of change with time.) A positive, zero or negative value of k corresponds to a respectively closed, flat or open universe. The constants G and c are Newton's gravitational constant and the speed of light, respectively.

Cosmologists often simplify this equation by defining a critical density, ρ_c. For a given value of H, this is defined as the density required for a flat universe, i.e. k = 0. Thus the above equation implies

ρ_c = 3H²/(8πG).

Since the constant G is known and the expansion rate H can be measured by observing the speed at which distant galaxies are receding from us, ρ_c can be determined. Its value is currently around 10⁻²⁶ kg m⁻³. The ratio of the actual density ρ to this critical value is called Ω, and its difference from 1 determines the geometry of the universe: Ω > 1 corresponds to a greater than critical density, ρ > ρ_c, and hence a closed universe. Ω < 1 gives a low density open universe, and Ω equal to exactly 1 gives a flat universe.
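The value of ρ_c can be checked with a short numerical sketch. The Hubble parameter used below (about 70 km/s/Mpc) is an assumed round figure, not a value quoted in this article; the result is consistent with the order of magnitude given above.

```python
import math

# Critical density rho_c = 3 H^2 / (8 pi G), using an assumed round
# value of H0 ~ 70 km/s/Mpc (measured values vary by survey).
G = 6.674e-11          # Newton's constant, m^3 kg^-1 s^-2
Mpc = 3.086e22         # metres per megaparsec
H0 = 70e3 / Mpc        # Hubble parameter in s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.1e} kg/m^3")   # ~ 9.2e-27 kg/m^3
```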

The Friedmann equation,

H² = (8πG/3)ρ − kc²/a²,

can be re-arranged into

ρa² − (3/(8πG))ȧ² = 3kc²/(8πG),

which after factoring out ρa², and using Ω = ρ/ρ_c together with ρ_c = 3H²/(8πG), leads to

(1 − Ω⁻¹)ρa² = 3kc²/(8πG).

The right hand side of the last expression above contains constants only and therefore the left hand side must remain constant throughout the evolution of the universe.

As the universe expands the scale factor a increases, but the density ρ decreases as matter (or energy) becomes spread out. For the standard model of the universe, which contains mainly matter and radiation for most of its history, ρ decreases more quickly than a² increases, and so the factor ρa² will decrease. Since the time of the Planck era, shortly after the Big Bang, this term has decreased by a factor of around 10⁶⁰, and so |1 − Ω⁻¹| must have increased by a similar amount to retain the constant value of their product.
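A toy calculation makes the dilution explicit. Assuming the standard scalings (matter density falling as a⁻³ and radiation as a⁻⁴), the combination ρa² shrinks as the universe expands, so the departure from flatness must grow to keep the conserved product constant:

```python
# Sketch: matter dilutes as a^-3 and radiation as a^-4, so in both
# cases rho * a^2 decreases as the scale factor a grows, forcing the
# departure |1 - 1/Omega| to grow to keep their product constant.
def rho_a2(a, dilution_exponent):
    rho = a ** (-dilution_exponent)   # density relative to a = 1
    return rho * a ** 2

for name, n in (("matter", 3), ("radiation", 4)):
    values = [rho_a2(a, n) for a in (1.0, 10.0, 100.0)]
    print(name, values)   # matter falls as 1/a, radiation as 1/a^2
```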

Current value of Ω

The relative density Ω against cosmic time t (neither axis to scale). Each curve represents a possible universe: note how rapidly any departure of Ω from 1 grows. The blue curve is a universe similar to our own, which at the present time (right of the graph) has a small |Ω − 1| and therefore must have begun with Ω very close to 1 indeed. The red curve is a hypothetical different universe in which the initial value of Ω differed slightly too much from 1: by the present day it has diverged so far that it would not be able to support galaxies, stars or planets.

Measurement

The value of Ω at the present time is denoted Ω0. This value can be deduced by measuring the curvature of spacetime (since Ω = 1, i.e. ρ = ρ_c, is defined as the density for which the curvature k = 0). The curvature can be inferred from a number of observations.

One such observation is that of anisotropies (that is, variations with direction - see below) in the Cosmic Microwave Background (CMB) radiation. The CMB is electromagnetic radiation which fills the universe, left over from an early stage in its history when it was filled with photons and a hot, dense plasma. This plasma cooled as the universe expanded, and when it cooled enough to form stable atoms it no longer absorbed the photons. The photons present at that stage have been propagating ever since, growing fainter and less energetic as they spread through the ever-expanding universe.

The temperature of this radiation is almost the same at all points on the sky, but there is a slight variation (around one part in 100,000) between the temperature received from different directions. The angular scale of these fluctuations - the typical angle between a hot patch and a cold patch on the sky - depends on the curvature of the universe which in turn depends on its density as described above. Thus, measurements of this angular scale allow an estimation of Ω0.

Another probe of Ω0 is the frequency of Type-Ia supernovae at different distances from Earth. These supernovae, the explosions of degenerate white dwarf stars, are a type of standard candle; this means that the processes governing their intrinsic brightness are well understood so that a measure of apparent brightness when seen from Earth can be used to derive accurate distance measures for them (the apparent brightness decreasing in proportion to the square of the distance - see luminosity distance). Comparing this distance to the redshift of the supernovae gives a measure of the rate at which the universe has been expanding at different points in history. Since the expansion rate evolves differently over time in cosmologies with different total densities, Ω0 can be inferred from the supernovae data.

Data from the Wilkinson Microwave Anisotropy Probe (measuring CMB anisotropies) combined with that from the Sloan Digital Sky Survey and observations of type-Ia supernovae constrain Ω0 to be 1 within 1%. In other words, the term |Ω − 1| is currently less than 0.01, and therefore must have been less than 10⁻⁶² at the Planck era.
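The arithmetic behind this bound is direct: since (1 − Ω⁻¹)ρa² is conserved and ρa² has fallen by roughly 10⁶⁰ since the Planck era, today's bound on |Ω − 1| shrinks by that same factor when traced backwards. A minimal check, taking the ~10⁶⁰ figure as an approximate value:

```python
# Today's bound |Omega - 1| < 0.01, divided by the ~1e60 factor by
# which rho * a^2 has decreased since the Planck era, gives the
# Planck-era bound of about 1e-62.
departure_today = 0.01
shrink_factor = 1e60     # approximate decrease of rho * a^2
departure_planck = departure_today / shrink_factor
print(f"|Omega - 1| at the Planck era < {departure_planck:.0e}")  # 1e-62
```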

Implication

This tiny value is the crux of the flatness problem. If the initial density of the universe could take any value, it would seem extremely surprising to find it so 'finely tuned' to the critical value ρ_c. Indeed, a very small departure of Ω from 1 in the early universe would have been magnified during billions of years of expansion to create a current density very far from critical. In the case of an overdensity (Ω > 1) this would lead to a universe so dense it would cease expanding and collapse into a Big Crunch (an opposite to the Big Bang in which all matter and energy falls back into an extremely dense state) in a few years or less; in the case of an underdensity (Ω < 1) it would expand so quickly and become so sparse it would soon seem essentially empty, and gravity would not be strong enough by comparison to cause matter to collapse and form galaxies. In either case the universe would contain no complex structures such as galaxies, stars, planets or any form of life.

This problem with the Big Bang model was first pointed out by Robert Dicke in 1969, and it motivated a search for some reason the density should take such a specific value.

Solutions to the problem

Some cosmologists agreed with Dicke that the flatness problem was a serious one, in need of a fundamental reason for the closeness of the density to criticality. But there was also a school of thought which denied that there was a problem to solve, arguing instead that since the universe must have some density it may as well have one close to the critical value as far from it, and that speculating on a reason for any particular value was "beyond the domain of science". Enough cosmologists saw the problem as a real one, however, for various solutions to be proposed.

Anthropic principle

One solution to the problem is to invoke the anthropic principle, which states that humans should take into account the conditions necessary for them to exist when speculating about causes of the universe's properties. If two types of universe seem equally likely but only one is suitable for the evolution of intelligent life, the anthropic principle suggests that finding ourselves in that universe is no surprise: if the other universe had existed instead, there would be no observers to notice the fact.

The principle can be applied to solve the flatness problem in two somewhat different ways. The first (an application of the 'strong anthropic principle') was suggested by C. B. Collins and Stephen Hawking, who in 1973 considered the existence of an infinite number of universes such that every possible combination of initial properties was held by some universe. In such a situation, they argued, only those universes with exactly the correct density for forming galaxies and stars would give rise to intelligent observers such as humans: therefore, the fact that we observe Ω to be so close to 1 would be "simply a reflection of our own existence."

An alternative approach, which makes use of the 'weak anthropic principle', is to suppose that the universe is infinite in size, but with the density varying in different places (i.e. an inhomogeneous universe). Thus some regions will be over-dense (Ω > 1) and some under-dense (Ω < 1). These regions may be extremely far apart - perhaps so far that light has not had time to travel from one to another during the age of the universe (that is, they lie outside one another's cosmological horizons). Therefore, each region would behave essentially as a separate universe: if we happened to live in a large patch of almost-critical density we would have no way of knowing of the existence of far-off under- or over-dense patches since no light or other signal has reached us from them. An appeal to the anthropic principle can then be made, arguing that intelligent life would only arise in those patches with Ω very close to 1, and that therefore our living in such a patch is unsurprising.

This latter argument makes use of a version of the anthropic principle which is 'weaker' in the sense that it requires no speculation on multiple universes, or on the probabilities of various different universes existing instead of the current one. It requires only a single universe which is infinite - or merely large enough that many disconnected patches can form - and that the density varies in different regions (which is certainly the case on smaller scales, giving rise to galactic clusters and voids).

However, the anthropic principle has been criticised by many scientists. For example, in 1979 Bernard Carr and Martin Rees argued that the principle “is entirely post hoc: it has not yet been used to predict any feature of the Universe.” Others have taken objection to its philosophical basis, with Ernan McMullin writing in 1994 that "the weak Anthropic principle is trivial ... and the strong Anthropic principle is indefensible." Since many physicists and philosophers of science do not consider the principle to be compatible with the scientific method, another explanation for the flatness problem was needed.

Inflation

The standard solution to the flatness problem invokes cosmic inflation, a process whereby the universe expands exponentially quickly (i.e. the scale factor grows as e^(λt) with time t, for some constant λ) during a short period in its early history. The theory of inflation was first proposed in 1979, and published in 1981, by Alan Guth. His two main motivations for doing so were the flatness problem and the horizon problem, another fine-tuning problem of physical cosmology.

The proposed cause of inflation is a field which permeates space and drives the expansion. The field contains a certain energy density, but unlike the density of the matter or radiation present in the late universe, which decrease over time, the density of the inflationary field remains roughly constant as space expands. Therefore, the term ρa² increases extremely rapidly as the scale factor a grows exponentially. Recalling the Friedmann equation

(1 − Ω⁻¹)ρa² = 3kc²/(8πG),

and the fact that the right-hand side of this expression is constant, the term |1 − Ω⁻¹| must therefore decrease with time.

Thus if Ω initially takes any arbitrary value, a period of inflation can force |Ω − 1| down towards 0 and leave it extremely small - around 10⁻⁶² as required above, for example. Subsequent evolution of the universe will cause the value to grow, bringing it to the currently observed value of around 0.01. Thus the sensitive dependence on the initial value of Ω has been removed: a large and therefore 'unsurprising' starting value of |Ω − 1| need not become amplified and lead to a very curved universe with no opportunity to form galaxies and other structures.
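The size of the suppression can be sketched numerically. The figure of ~60 e-folds below is an assumed typical value from inflationary model-building, not a number taken from this article:

```python
import math

# During inflation rho stays roughly constant while a grows as
# exp(lambda * t), so rho * a^2 grows as exp(2 * lambda * t) and the
# conserved product forces |1 - 1/Omega| down by the same factor.
N_efolds = 60                         # assumed number of e-folds
suppression = math.exp(-2 * N_efolds)
print(f"|1 - 1/Omega| suppressed by a factor of ~ {suppression:.0e}")
```

Any pre-inflation departure of order 1 would thus be driven far below the Planck-era bound discussed above.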

This success in solving the flatness problem is considered one of the major motivations for inflationary theory.

Post inflation

Although inflationary theory is regarded as having had much success, and the evidence for it is compelling, it is not universally accepted: cosmologists recognize that there are still gaps in the theory and are open to the possibility that future observations will disprove it. In particular, in the absence of any firm evidence for what the field driving inflation should be, many different versions of the theory have been proposed. Many of these contain parameters or initial conditions which themselves require fine-tuning in much the way that the early density does without inflation.

For these reasons work is still being done on alternative solutions to the flatness problem. These have included non-standard interpretations of the effect of dark energy and gravity, particle production in an oscillating universe, and use of a Bayesian statistical approach to argue that the problem is non-existent. The latter argument, suggested for example by Evrard and Coles, maintains that the idea that Ω being close to 1 is 'unlikely' rests on assumptions about the likely distribution of the parameter which are not necessarily justified. Despite this ongoing work, inflation remains by far the dominant explanation for the flatness problem. The question arises, however, whether it is still the dominant explanation because it is the best explanation, or because the community is unaware of progress on this problem. In particular, in addition to the idea that Ω is not a suitable parameter in this context, other arguments against the flatness problem have been presented: if the universe collapses in the future, then the flatness problem "exists", but only for a relatively short time, so a typical observer would not expect to measure Ω appreciably different from 1; and in the case of a universe which expands forever with a positive cosmological constant, fine-tuning is needed not to achieve a (nearly) flat universe, but rather to avoid it.

Einstein–Cartan theory

The flatness problem is naturally solved by the Einstein–Cartan–Sciama–Kibble theory of gravity, without an exotic form of matter required in inflationary theory. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. It has no free parameters. Including torsion gives the correct conservation law for the total (orbital plus intrinsic) angular momentum of matter in the presence of gravity. The minimal coupling between torsion and Dirac spinors obeying the nonlinear Dirac equation generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical big bang singularity, replacing it with a bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the big bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.

Olbers' paradox

In this animation depicting an infinite, homogeneous and static universe, successively more distant stars are revealed in each frame, filling the gaps between closer stars in the field of view until eventually the entire image is as bright as a single star. Olbers' paradox argues that as the night sky is dark, at least one of these three assumptions about the nature of the universe must be false.

In astrophysics and physical cosmology, Olbers' paradox, also known as the "dark night sky paradox" and named after the German astronomer Heinrich Wilhelm Olbers (1758–1840), is the argument that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. In the hypothetical case that the universe is static, homogeneous at a large scale, and populated by an infinite number of stars, any line of sight from Earth must end at the surface of a star and hence the night sky should be completely illuminated and very bright. This contradicts the observed darkness and non-uniformity of the night sky.

The darkness of the night sky is one of the pieces of evidence for a dynamic universe, such as the Big Bang model. That model explains the observed non-uniformity of brightness by invoking spacetime's expansion, which lengthens the light originating from the Big Bang to microwave levels via a process known as redshift; this microwave radiation background has wavelengths much longer than those of visible light, and so appears dark to the naked eye. Other explanations for the paradox have been offered, but none have wide acceptance in cosmology.

History

The first one to address the problem of an infinite number of stars and the resulting heat in the Cosmos was Cosmas Indicopleustes, a Greek monk from Alexandria, who states in his Topographia Christiana: "The crystal-made sky sustains the heat of the Sun, the moon, and the infinite number of stars; otherwise, it would have been full of fire, and it could melt or set on fire."

Edward Robert Harrison's Darkness at Night: A Riddle of the Universe (1987) gives an account of the dark night sky paradox, seen as a problem in the history of science. According to Harrison, the first to conceive of anything like the paradox was Thomas Digges, who was also the first to expound the Copernican system in English and also postulated an infinite universe with infinitely many stars. Kepler also posed the problem in 1610, and the paradox took its mature form in the 19th century work of Halley and Cheseaux. The paradox is commonly attributed to the German amateur astronomer Heinrich Wilhelm Olbers, who described it in 1823, but Harrison shows convincingly that Olbers was far from the first to pose the problem, nor was his thinking about it particularly valuable. Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin, in a little known 1901 paper, and that Edgar Allan Poe's essay Eureka (1848) curiously anticipated some qualitative aspects of Kelvin's argument:

Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.

The paradox

The paradox is that a static, infinitely old universe with an infinite number of stars distributed in an infinitely large space would be bright rather than dark.

A view of a square section of four concentric shells

To show this, we divide the universe into a series of concentric shells, 1 light year thick. A certain number of stars will be in the shell 1,000,000,000 to 1,000,000,001 light years away. If the universe is homogeneous at a large scale, then there would be four times as many stars in a second shell, which is between 2,000,000,000 and 2,000,000,001 light years away. However, the second shell is twice as far away, so each star in it would appear one quarter as bright as the stars in the first shell. Thus the total light received from the second shell is the same as the total light received from the first shell.

Thus each shell of a given thickness will produce the same net amount of light regardless of how far away it is. That is, the light of each shell adds to the total amount. Thus the more shells, the more light; and with infinitely many shells, there would be a bright night sky.
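The shell argument above can be written out numerically. The densities and luminosities below are arbitrary placeholder units chosen only to expose the cancellation:

```python
PI = 3.141592653589793

# Each thin shell at radius r holds ~ density * 4*pi*r^2 * thickness
# stars, while each star's apparent brightness falls as 1/(4*pi*r^2):
# the r^2 factors cancel, so every shell contributes the same flux.
def shell_flux(r, thickness=1.0, density=1.0, luminosity=1.0):
    n_stars = density * 4 * PI * r**2 * thickness
    flux_per_star = luminosity / (4 * PI * r**2)
    return n_stars * flux_per_star

print([shell_flux(r) for r in (1e9, 2e9, 3e9)])  # each shell ~ the same flux
```

Summing over infinitely many shells then gives an unbounded total, which is the paradox.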

While dark clouds could obstruct the light, these clouds would heat up, until they were as hot as the stars, and then radiate the same amount of light.

Kepler saw this as an argument for a finite observable universe, or at least for a finite number of stars. In general relativity theory, it is still possible for the paradox to hold in a finite universe: though the sky would not be infinitely bright, every point in the sky would still be like the surface of a star.

Explanation

The poet Edgar Allan Poe suggested that the finite size of the observable universe resolves the apparent paradox. More specifically, because the universe is finitely old and the speed of light is finite, only finitely many stars can be observed from Earth (although the whole universe can be infinite in space). The density of stars within this finite volume is sufficiently low that any line of sight from Earth is unlikely to reach a star.

However, the Big Bang theory seems to introduce a new problem: it states that the sky was much brighter in the past, especially at the end of the recombination era, when it first became transparent. All points of the local sky at that era were comparable in brightness to the surface of the Sun, due to the high temperature of the universe in that era; and most light rays will originate not from a star but from the relic radiation of the Big Bang.

This problem is addressed by the fact that the Big Bang theory also involves the expansion of space, which can cause the energy of emitted light to be reduced via redshift. More specifically, the extremely energetic radiation from the Big Bang has been redshifted to microwave wavelengths (1100 times the length of its original wavelength) as a result of the cosmic expansion, and thus forms the cosmic microwave background radiation. This explains the relatively low light densities and energy levels present in most of our sky today despite the assumed bright nature of the Big Bang. The redshift also affects light from distant stars and quasars, but this diminution is minor, since the most distant galaxies and quasars have redshifts of only around 5 to 8.6.
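As a quick consistency check on the figures above (using round values of ~3000 K at recombination and a stretch factor of ~1100, both approximate):

```python
# Blackbody temperature scales inversely with wavelength, so a ~1100x
# stretch cools the radiation from roughly 3000 K at recombination to
# the few-kelvin microwave background seen today.
stretch = 1100.0            # approximate wavelength stretch factor
T_recombination = 3000.0    # K, approximate temperature at recombination
T_today = T_recombination / stretch
print(f"present-day background temperature ~ {T_today:.1f} K")   # ~ 2.7 K
```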

Other factors

Steady state

The redshift hypothesised in the Big Bang model would by itself explain the darkness of the night sky even if the universe were infinitely old. In the Steady state theory the universe is infinitely old and uniform in time as well as space. There is no Big Bang in this model, but there are stars and quasars at arbitrarily great distances. The expansion of the universe causes the light from these distant stars and quasars to redshift, so that the total light flux from the sky remains finite. Thus the observed radiation density (the sky brightness of extragalactic background light) can be independent of finiteness of the universe. Mathematically, the total electromagnetic energy density (radiation energy density) in thermodynamic equilibrium from Planck's law is

u = (4σ/c) T⁴

e.g. for temperature 2.7 K it is 40 fJ/m³ (4.5×10⁻³¹ kg/m³), and for visible temperature 6000 K we get 1 J/m³ (1.1×10⁻¹⁷ kg/m³). But the total radiation emitted by a star (or other cosmic object) is at most equal to the total nuclear binding energy of isotopes in the star. For the density of the observable universe of about 4.6×10⁻²⁸ kg/m³ and given the known abundance of the chemical elements, the corresponding maximal radiation energy density is 9.2×10⁻³¹ kg/m³, i.e. a temperature of 3.2 K (matching the value observed for the optical radiation temperature by Arthur Eddington). This is close to the summed energy density of the cosmic microwave background (CMB) and the cosmic neutrino background. The Big Bang hypothesis predicts that the CMB should have the same energy density as the binding energy density of the primordial helium, which is much greater than the binding energy density of the non-primordial elements; so it gives almost the same result. However, the steady-state model does not predict the angular distribution of the microwave background temperature accurately (as the standard ΛCDM paradigm does). Nevertheless, modified gravitation theories (without metric expansion of the universe) cannot be ruled out as of 2017 by CMB and BAO observations.
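The quoted figures follow from the Stefan–Boltzmann form of the total blackbody energy density, u = (4σ/c)T⁴. A short sketch reproducing them:

```python
# Total blackbody radiation energy density u = (4 * sigma / c) * T^4,
# and its mass equivalent u / c^2, for the two temperatures quoted.
sigma = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8            # speed of light, m/s
a_rad = 4 * sigma / c  # radiation constant, J m^-3 K^-4

for T in (2.7, 6000.0):
    u = a_rad * T ** 4
    print(f"T = {T} K: u = {u:.1e} J/m^3 ({u / c**2:.1e} kg/m^3)")
# 2.7 K  -> ~ 4.0e-14 J/m^3 (~ 4.5e-31 kg/m^3)
# 6000 K -> ~ 9.8e-01 J/m^3 (~ 1.1e-17 kg/m^3)
```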

Finite age of stars

Stars have a finite age and a finite power, thereby implying that each star has a finite impact on the sky's light field density. Edgar Allan Poe suggested that this idea could provide a resolution to Olbers' paradox; a related theory was also proposed by Jean-Philippe de Chéseaux. However, stars are continually being born as well as dying. As long as the density of stars throughout the universe remains constant, regardless of whether the universe itself has a finite or infinite age, there would be infinitely many other stars in the same angular direction, with an infinite total impact. So the finite age of the stars does not explain the paradox.

Brightness

Suppose that the universe were not expanding, and always had the same stellar density; then the temperature of the universe would continually increase as the stars put out more radiation. Eventually, it would reach 3000 K (corresponding to a typical photon energy of 0.3 eV and so a frequency of 7.5×10¹³ Hz), and the photons would begin to be absorbed by the hydrogen plasma filling most of the universe, rendering outer space opaque. This maximal radiation density corresponds to about 1.2×10¹⁷ eV/m³ = 2.1×10⁻¹⁹ kg/m³, which is much greater than the observed value of 4.7×10⁻³¹ kg/m³. So the sky is about five hundred billion times darker than it would be if the universe were neither expanding nor too young to have reached equilibrium yet. However, recent observations increasing the lower bound on the number of galaxies suggest UV absorption by hydrogen and reemission in near-IR (not visible) wavelengths also plays a role.
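The "five hundred billion" figure is simply the ratio of the two densities quoted above:

```python
# Ratio of the equilibrium radiation density to the observed one.
equilibrium_density = 2.1e-19   # kg/m^3, at ~3000 K equilibrium
observed_density = 4.7e-31      # kg/m^3, observed value
ratio = equilibrium_density / observed_density
print(f"sky is ~ {ratio:.1e} times darker than equilibrium")   # ~ 4.5e+11
```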

Fractal star distribution

A different resolution, which does not rely on the Big Bang theory, was first proposed by Carl Charlier in 1908 and later rediscovered by Benoît Mandelbrot in 1974. They both postulated that if the stars in the universe were distributed in a hierarchical fractal cosmology (e.g., similar to Cantor dust)—the average density of any region diminishes as the region considered increases—it would not be necessary to rely on the Big Bang theory to explain Olbers' paradox. This model would not rule out a Big Bang, but would allow for a dark sky even if the Big Bang had not occurred.

Mathematically, the light received from stars as a function of star distance in a hypothetical fractal cosmos is

light = ∫_{r0}^{∞} L(r) N(r) dr

where:

  • r0 = the distance of the nearest star, r0 > 0;
  • r = the variable measuring distance from the Earth;
  • L(r) = average luminosity per star at distance r;
  • N(r) = number of stars at distance r.

The function L(r)N(r), the luminosity received from a given distance, determines whether the light received is finite or infinite. For any L(r)N(r) proportional to r^a, the total light received is infinite for a ≥ −1 but finite for a < −1. So if L(r) is proportional to r⁻², then for the total light to be finite, N(r) must be proportional to r^b, where b < 1. For b = 1, the number of stars at a given radius is proportional to that radius. When integrated over the radius, this implies that for b = 1, the total number of stars is proportional to r². This would correspond to a fractal dimension of 2. Thus the fractal dimension of the universe would need to be less than 2 for this explanation to work.
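The convergence criterion can be checked numerically; the grid resolution and outer cutoff below are arbitrary choices for illustration.

```python
import math

# Numerically integrate r^a from r0 outward (midpoint rule on a
# logarithmic grid): the result settles for a < -1 but keeps growing
# with the cutoff r_max for a >= -1.
def integral_r_pow(a, r0=1.0, r_max=1e6, steps=200_000):
    h = (math.log(r_max) - math.log(r0)) / steps
    total = 0.0
    for i in range(steps):
        r = math.exp(math.log(r0) + (i + 0.5) * h)
        total += r ** a * r * h      # dr = r d(log r)
    return total

print(integral_r_pow(-1.5))   # close to the exact value 2 (convergent)
print(integral_r_pow(-0.5))   # ~ 2 * sqrt(r_max): grows without bound
```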

This explanation is not widely accepted among cosmologists, since the evidence suggests that the fractal dimension of the universe is at least 2. Moreover, the majority of cosmologists accept the cosmological principle, which assumes that matter at the scale of billions of light years is distributed isotropically. By contrast, fractal cosmology requires an anisotropic matter distribution at the largest scales; the cosmic microwave background radiation, however, shows only a cosine (dipole) anisotropy.

Heat death of the universe

The heat death of the universe (also known as the Big Chill or Big Freeze) is a theory on the ultimate fate of the universe, which suggests the universe would evolve to a state of no thermodynamic free energy and would therefore be unable to sustain processes that increase entropy. Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes may no longer be exploited to perform work. In the language of physics, this is when the universe reaches thermodynamic equilibrium (maximum entropy).

If the topology of the universe is open or flat, or if dark energy is a positive cosmological constant (both of which are consistent with current data), the universe will continue expanding forever, and a heat death is expected to occur, with the universe cooling to approach equilibrium at a very low temperature after a very long time period.

The hypothesis of heat death stems from the ideas of Lord Kelvin, who in the 1850s took the theory of heat as mechanical energy loss in nature (as embodied in the first two laws of thermodynamics) and extrapolated it to larger processes on a universal scale.

Origins of the idea

The idea of heat death stems from the second law of thermodynamics, of which one version states that entropy tends to increase in an isolated system. From this, the hypothesis implies that if the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed. In other words, according to this hypothesis, there is a tendency in nature to the dissipation (energy transformation) of mechanical energy (motion) into thermal energy; hence, by extrapolation, there exists the view that, in time, the mechanical movement of the universe will run down as work is converted to heat because of the second law.

The conjecture that all bodies in the universe cool off, eventually becoming too cold to support life, seems to have been first put forward by the French astronomer Jean Sylvain Bailly in 1777 in his writings on the history of astronomy and in the ensuing correspondence with Voltaire. In Bailly's view, all planets have an internal heat and are now at some particular stage of cooling. Jupiter, for instance, is still too hot for life to arise there for thousands of years, while the Moon is already too cold. The final state, in this view, is described as one of "equilibrium" in which all motion ceases.

The idea of heat death as a consequence of the laws of thermodynamics, however, was first proposed in loose terms beginning in 1851 by Lord Kelvin (William Thomson), who theorized further on the mechanical energy loss views of Sadi Carnot (1824), James Joule (1843) and Rudolf Clausius (1850). Thomson's views were then elaborated over the next decade by Hermann von Helmholtz and William Rankine.

History

The idea of the heat death of the universe derives from discussion of the application of the first two laws of thermodynamics to universal processes. Specifically, in 1851, Lord Kelvin outlined the view, based on recent experiments on the dynamical theory of heat: "heat is not a substance, but a dynamical form of mechanical effect, we perceive that there must be an equivalence between mechanical work and heat, as between cause and effect."

Lord Kelvin originated the idea of universal heat death in 1852.

In 1852, Thomson published On a Universal Tendency in Nature to the Dissipation of Mechanical Energy, in which he outlined the rudiments of the second law of thermodynamics summarized by the view that mechanical motion and the energy used to create that motion will naturally tend to dissipate or run down. The ideas in this paper, in relation to their application to the age of the Sun and the dynamics of the universal operation, attracted the likes of William Rankine and Hermann von Helmholtz. The three of them were said to have exchanged ideas on this subject. In 1862, Thomson published "On the age of the Sun's heat", an article in which he reiterated his fundamental beliefs in the indestructibility of energy (the first law) and the universal dissipation of energy (the second law), leading to diffusion of heat, cessation of useful motion (work), and exhaustion of potential energy through the material universe, while clarifying his view of the consequences for the universe as a whole. Thomson wrote:

The result would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws. But it is impossible to conceive a limit to the extent of matter in the universe; and therefore science points rather to an endless progress, through an endless space, of action involving the transformation of potential energy into palpable motion and hence into heat, than to a single finite mechanism, running down like a clock, and stopping for ever.

In the years following Thomson's 1852 and 1862 papers, Helmholtz and Rankine both credited Thomson with the idea, but read further into his papers, publishing views stating that Thomson argued that the universe will end in a "heat death" (Helmholtz), which will be the "end of all physical phenomena" (Rankine).

Current status

Proposals about the final state of the universe depend on the assumptions made about its ultimate fate, and these assumptions have varied considerably over the late 20th century and early 21st century. In a hypothesized "open" or "flat" universe that continues expanding indefinitely, either a heat death or a Big Rip is expected to eventually occur. If the cosmological constant is zero, the universe will approach absolute zero temperature over a very long timescale. However, if the cosmological constant is positive, as recent observations suggest (work recognized by the 2011 Nobel Prize in Physics), the temperature will asymptote to a non-zero positive value, and the universe will approach a state of maximum entropy in which no further work is possible.
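The non-zero floor temperature in a universe with a positive cosmological constant is the Gibbons-Hawking temperature of the de Sitter horizon, T = ħH_Λ/(2πk_B), where H_Λ = H₀√Ω_Λ is the asymptotic expansion rate. A rough estimate (the Hubble constant and Ω_Λ values below are illustrative assumptions, roughly Planck-2018-like, not taken from this article):

```python
import math

HBAR = 1.055e-34   # reduced Planck constant, J s
K_B = 1.381e-23    # Boltzmann constant, J/K
MPC = 3.086e22     # metres per megaparsec

def de_sitter_temperature(h0_km_s_mpc: float, omega_lambda: float) -> float:
    """Gibbons-Hawking temperature T = hbar * H_Lambda / (2 * pi * k_B),
    where H_Lambda = H0 * sqrt(Omega_Lambda) is the asymptotic
    expansion rate of a Lambda-dominated universe."""
    h0_si = h0_km_s_mpc * 1e3 / MPC           # convert H0 to s^-1
    h_lambda = h0_si * math.sqrt(omega_lambda)
    return HBAR * h_lambda / (2 * math.pi * K_B)

# Assumed parameter values: H0 ~ 67.7 km/s/Mpc, Omega_Lambda ~ 0.69
t_floor = de_sitter_temperature(67.7, 0.69)
print(f"{t_floor:.1e} K")   # on the order of 1e-30 K
```

The result, around 10⁻³⁰ K, is tiny but non-zero, which is why a cosmological-constant universe asymptotes to a finite temperature rather than absolute zero.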

If a Big Rip does not happen long before then, and if protons, electrons, and the neutrons bound in atomic nuclei are stable and never decay, the full "heat death" situation could be avoided if there is some mechanism to regenerate hydrogen atoms from radiation, dark matter, dark energy, zero-point energy, sphalerons, virtual particles, or other sources, or to retrieve matter and energy from black holes (for example by causing them to explode and release the mass they contain), which could lead to the formation of new stars and planets. If so, it is at least possible that star formation and heat transfer could continue, avoiding a gradual running down of the universe due to the conversion of matter into energy and heavier elements in stellar processes, and the absorption of matter by black holes and their subsequent evaporation as Hawking radiation.

A study published in November 2020 found that the universe is actually getting hotter. The study probed the thermal history of the universe over the last 10 billion years and found that "the mean temperature of gas across the universe has increased more than 10 times over that time period and reached about 2 million degrees Kelvin today—approximately 4 million degrees Fahrenheit." Yi-Kuan Chiang, the study's lead author and a research fellow at The Ohio State University Center for Cosmology and AstroParticle Physics, stated: "Our new measurement provides a direct confirmation of the seminal work by Jim Peebles—the 2019 Nobel Laureate in Physics—who laid out the theory of how the large-scale structure forms in the universe."

Timeframe for heat death

From the Big Bang through the present day, matter and dark matter in the universe are thought to have been concentrated in stars, galaxies, and galaxy clusters, and are presumed to remain so well into the future. Therefore, the universe is not in thermodynamic equilibrium, and objects can do physical work. The decay time for a supermassive black hole of roughly one galaxy mass (10^11 solar masses) due to Hawking radiation is on the order of 10^100 years, so entropy can be produced until at least that time. Some large black holes in the universe are predicted to continue to grow up to perhaps 10^14 solar masses during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10^106 years. After that time, the universe enters the so-called Dark Era and is expected to consist chiefly of a dilute gas of photons and leptons. With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with extremely low energy levels and extremely long timescales. Speculatively, it is possible that the universe may enter a second inflationary epoch, or, assuming that the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state. It is also possible that entropy production will cease and the universe will reach heat death. Another universe could possibly be created by random quantum fluctuations or quantum tunneling in roughly 10^10^10^56 years. Over vast periods of time, a spontaneous entropy decrease would eventually occur via the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem. Such a scenario, however, has been described as "highly speculative, probably wrong, [and] completely untestable". Sean M. Carroll, originally an advocate of this idea, no longer supports it.
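The 10^100-year figure for a galaxy-mass black hole can be checked against the standard Hawking evaporation formula, t = 5120πG²M³/(ħc⁴). The short sketch below is illustrative, using rounded constants; it reproduces the order of magnitude quoted above:

```python
import math

# Physical constants (SI units, rounded)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34     # reduced Planck constant, J s
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year

def hawking_evaporation_time_years(mass_kg: float) -> float:
    """Evaporation time of a Schwarzschild black hole via Hawking
    radiation: t = 5120 * pi * G^2 * M^3 / (hbar * c^4).
    Note the steep M^3 scaling with the black hole's mass."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

# A galaxy-mass black hole (~1e11 solar masses):
t_galaxy = hawking_evaporation_time_years(1e11 * M_SUN)
print(f"~10^{math.log10(t_galaxy):.0f} years")  # on the order of 10^100 years
```

Because of the M³ scaling, the largest black holes evaporate last, which is why entropy production persists for such extraordinarily long timescales before the Dark Era begins.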

Opposing views

Max Planck wrote that the phrase "entropy of the universe" has no meaning because it admits of no accurate definition. More recently, Walter Grandy writes: "It is rather presumptuous to speak of the entropy of a universe about which we still understand so little, and we wonder how one might define thermodynamic entropy for a universe and its major constituents that have never been in equilibrium in their entire existence." According to Tisza: "If an isolated system is not in equilibrium, we cannot associate an entropy with it." Buchdahl writes of "the entirely unjustifiable assumption that the universe can be treated as a closed thermodynamic system". According to Gallavotti: "... there is no universally accepted notion of entropy for systems out of equilibrium, even when in a stationary state." Discussing the question of entropy for non-equilibrium states in general, Lieb and Yngvason express their opinion as follows: "Despite the fact that most physicists believe in such a nonequilibrium entropy, it has so far proved impossible to define it in a clearly satisfactory way." In Landsberg's opinion: "The third misconception is that thermodynamics, and in particular, the concept of entropy, can without further enquiry be applied to the whole universe. ... These questions have a certain fascination, but the answers are speculations, and lie beyond the scope of this book."

A 2010 analysis of entropy states, "The entropy of a general gravitational field is still not known", and "gravitational entropy is difficult to quantify". The analysis considers several possible assumptions that would be needed for estimates and suggests that the observable universe has more entropy than previously thought, because it concludes that supermassive black holes are the largest contributor. Lee Smolin goes further: "It has long been known that gravity is important for keeping the universe out of thermal equilibrium. Gravitationally bound systems have negative specific heat—that is, the velocities of their components increase when energy is removed. ... Such a system does not evolve toward a homogeneous equilibrium state. Instead it becomes increasingly structured and heterogeneous as it fragments into subsystems." This point of view is also supported by the recent experimental discovery of a stable non-equilibrium steady state in a relatively simple closed system. An isolated system that fragments into subsystems need not come to thermodynamic equilibrium, and may instead remain in a non-equilibrium steady state: entropy is transmitted from one subsystem to another, but its net production is zero, which does not contradict the second law of thermodynamics.
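Smolin's remark about negative specific heat follows from the virial theorem: for a self-gravitating bound system in equilibrium, 2K + U = 0, so the total energy E = K + U = -K. A minimal sketch (illustrative numbers only, not taken from the article) shows that removing energy from such a system increases its kinetic energy, i.e. heats it up:

```python
def kinetic_energy_virial(total_energy: float) -> float:
    """For a virialized self-gravitating system, 2K + U = 0, so the
    total energy E = K + U = -K, giving K = -E (with E < 0 for a
    bound system)."""
    return -total_energy

# Removing energy (making E more negative) *increases* K,
# the hallmark of a negative heat capacity:
e_before, e_after = -1.0e40, -1.2e40   # joules, illustrative values
k_before = kinetic_energy_virial(e_before)
k_after = kinetic_energy_virial(e_after)
assert k_after > k_before   # the system's components speed up
```

A system that gets hotter as it loses energy cannot settle into a homogeneous equilibrium with its surroundings, which is why gravity drives structure formation rather than thermalization.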

Liquefied petroleum gas

From Wikipedia, the free encyclopedia ...