
Sunday, March 2, 2025

Quantum gravity

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quantum_gravity

A depiction of the cGh cube
 
Depicted as a Venn diagram

Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored, such as in the vicinity of black holes or similar compact astrophysical objects, as well as in the early stages of the universe moments after the Big Bang.

Three of the four fundamental forces of nature are described within the framework of quantum mechanics and quantum field theory: the electromagnetic interaction, the strong force, and the weak force; this leaves gravity as the only interaction that has not been fully accommodated. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which incorporates his theory of special relativity and deeply modifies the understanding of concepts like time and space. Although general relativity is highly regarded for its elegance and accuracy, it has limitations: the gravitational singularities inside black holes, the ad hoc postulation of dark matter, as well as dark energy and its relation to the cosmological constant are among the current unsolved mysteries regarding gravity, all of which signal the breakdown of the general theory of relativity at different scales and highlight the need for a gravitational theory that extends into the quantum realm. At distances close to the Planck length, like those near the center of a black hole, quantum fluctuations of spacetime are expected to play an important role. Finally, the discrepancy between the predicted value of the vacuum energy and the observed value (which, depending on the assumptions, can amount to 60 or 120 orders of magnitude) highlights the necessity for a quantum theory of gravity.

The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem of quantum gravity, the most popular being M-theory and loop quantum gravity. All of these approaches aim to describe the quantum behavior of the gravitational field, which does not necessarily include unifying all fundamental interactions into a single mathematical framework. However, many approaches to quantum gravity, such as string theory, try to develop a framework that describes all fundamental forces. Such a theory is often referred to as a theory of everything. Some of the approaches, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Other lesser-known but no less important theories include causal dynamical triangulation, noncommutative geometry, and twistor theory.

One of the difficulties of formulating a quantum gravity theory is that direct observation of quantum gravitational effects is thought to only appear at length scales near the Planck scale, around 10⁻³⁵ meters, a scale far smaller, and hence only accessible with far higher energies, than those currently available in high energy particle accelerators. Therefore, physicists lack experimental data which could distinguish between the competing theories which have been proposed.
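
The Planck scale quoted above follows from combining Newton's constant, the reduced Planck constant, and the speed of light. As a quick orientation (not part of the encyclopedia text), the short Python sketch below reproduces the ~10⁻³⁵ m figure from hand-typed, approximate constant values:

    import math

    # Approximate physical constants in SI units (illustrative values).
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34   # reduced Planck constant, J s
    c = 2.998e8        # speed of light, m/s

    # Planck length: the scale at which quantum gravitational effects are expected to matter.
    l_planck = math.sqrt(hbar * G / c**3)
    print(f"Planck length ~ {l_planck:.2e} m")  # prints ~1.62e-35 m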

Thought experiment approaches have been suggested as a testing tool for quantum gravity theories. In the field of quantum gravity there are several open questions – e.g., it is not known how the spin of elementary particles sources gravity – and thought experiments could provide a pathway to explore possible resolutions to these questions, even in the absence of lab experiments or physical observations.

In the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades. This field of study is called phenomenological quantum gravity.

Overview

Unsolved problem in physics:
How can the theory of quantum mechanics be merged with the theory of general relativity / gravitational force and remain correct at microscopic length scales? What verifiable predictions does any theory of quantum gravity make?
Diagram showing the place of quantum gravity in the hierarchy of physics theories

Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make on how the universe works. General relativity models gravity as curvature of spacetime: in the slogan of John Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve." On the other hand, quantum field theory is typically formulated in the flat spacetime used in special relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. If one attempts to treat gravity as simply another quantum field, the resulting theory is not renormalizable. Even in the simpler case where the curvature of spacetime is fixed a priori, developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable.

It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes, and the origin of the universe.

One major obstacle is that for quantum field theory in curved spacetime with a fixed metric, bosonic/fermionic operator fields supercommute for spacelike separated points. (This is a way of imposing a principle of locality.) However, in quantum gravity, the metric is dynamical, so that whether two points are spacelike separated depends on the state. In fact, they can be in a quantum superposition of being spacelike and not spacelike separated.

Quantum mechanics and general relativity

Graviton

The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the graviton. It would act as a force carrier, similar to the photon of the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires gravitons to follow the quantum mechanical description of interacting theoretical spin-2 massless particles. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle. While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly.

Nonrenormalizability of gravity

General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory.

However, gravity is perturbatively nonrenormalizable. For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale.

On the other hand, in quantizing gravity there are, in perturbation theory, infinitely many independent parameters (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinite experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then every one of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all.
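
A standard heuristic for this behavior, sketched here rather than taken from the article, is dimensional: in natural units Newton's constant is an inverse mass squared, so the effective expansion parameter grows with energy,

    G_N ~ 1 / M_Pl² ,    (dimensionless expansion parameter at energy E) ~ G_N E² = (E / M_Pl)² ,

and each additional loop order brings in higher powers of E/M_Pl, requiring counterterms with new, independent coefficients.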

It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult, pursued in the asymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries.

Quantum gravity as an effective field theory

In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally.

By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses. Another example is the calculation of the corrections to the Bekenstein-Hawking entropy formula.
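
One commonly quoted form of that first-order correction, obtained in the effective-field-theory treatment (the exact numerical coefficient of the quantum term depends on conventions and was refined over time, so treat it as indicative rather than definitive), is

    V(r) = - (G m_1 m_2 / r) [ 1 + 3 G (m_1 + m_2) / (r c²) + (41 / 10π) G ħ / (r² c³) ] ,

where the second term is the classical post-Newtonian correction and the third is the genuinely quantum one, far too small to measure at accessible distances.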

Spacetime background dependence

A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is a complex idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in spacetime.

On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory.

String theory

Interaction in the subatomic world: world lines of point-like particles in the Standard Model or a world sheet swept out by closed strings in string theory

String theory can be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to space-time in a dynamic way. Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory that may exhibit a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence) which is a weak form of background dependence.

Background independent theories

Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory.

Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks.

Semi-classical quantum gravity

Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation.

Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles).

Problem of time

A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time. In contrast, general relativity treats time as a dynamical variable which relates directly with matter and moreover requires the Hamiltonian constraint to vanish. Because this variability of time has been observed macroscopically, it removes any possibility of employing a fixed notion of time, similar to the conception of time in quantum theory, at the macroscopic level.

Candidate theories

There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments become available.

String theory

Projection of a Calabi–Yau manifold, one of the ways of compactifying the extra dimensions posited by string theory

The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time.

In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. As presently understood, however, string theory admits a very large number (10⁵⁰⁰ by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge.

Loop quantum gravity

Simple spin network of the type used in loop quantum gravity

Loop quantum gravity seriously considers general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space.

The main result of loop quantum gravity is the derivation of a granular structure of space at the Planck length. This is derived from the following considerations: In the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum. Thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Thus area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space. It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory.
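
The discreteness is usually illustrated with the spectrum of the area operator. A frequently cited form (stated here as commonly quoted, with γ the Barbero–Immirzi parameter and the sum running over the spin-network links j_i puncturing the surface) is

    A = 8π γ ℓ_P² Σ_i sqrt( j_i (j_i + 1) ) ,    ℓ_P² = G ħ / c³ ,

so the smallest nonzero areas are of the order of the Planck area.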

The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks. Spin networks were initially introduced by Roger Penrose in abstract form, and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime.

The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields. In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps.

The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity. The analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined within the theory. In the covariant, or spinfoam formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks.

Other theories

There are a number of other approaches to quantum gravity. The theories differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified. Examples include the causal dynamical triangulation, noncommutative geometry, and twistor theory approaches mentioned above.

Experimental tests

As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s. However, since the 2000s, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory. Since theoretical development has been slow, the field of phenomenological quantum gravity, which studies the possibility of experimental tests, has obtained increased attention.

The most widely pursued possibilities for quantum gravity phenomenology include gravitationally mediated entanglement, violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations in the space-time foam. The latter scenario has been searched for in light from gamma-ray bursts and both astrophysical and atmospheric neutrinos, placing limits on phenomenological quantum gravity parameters.

ESA's INTEGRAL satellite measured the polarization of photons of different wavelengths and was able to constrain any granularity of space to scales below 10⁻⁴⁸ m, some 13 orders of magnitude below the Planck scale.

The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due to interstellar dust interference.

Holographic principle

From Wikipedia, the free encyclopedia

The holographic principle is a property of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region – such as a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string theoretic interpretation by Leonard Susskind, who combined his ideas with previous ones of 't Hooft and Charles Thorn. Susskind said, "The three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface." As pointed out by Raphael Bousso, Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges in what would now be called a holographic way. The prime example of holography is the AdS/CFT correspondence.

The holographic principle was inspired by the Bekenstein bound of black hole thermodynamics, which conjectures that the maximum entropy in any region scales with the radius squared, rather than cubed as might be expected. In the case of a black hole, the insight was that the information content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory. However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law (radius squared), hence in principle larger than those of a black hole. These are the so-called "Wheeler's bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood.

High-level summary

The physical universe is widely seen to be composed of "matter" and "energy". In his 2003 article published in Scientific American magazine, Jacob Bekenstein speculatively summarized a current trend started by John Archibald Wheeler, which suggests scientists may "regard the physical world as made of information, with energy and matter as incidentals". Bekenstein asks "Could we, as William Blake memorably penned, 'see a world in a grain of sand', or is that idea no more than 'poetic license'?", referring to the holographic principle.

Unexpected connection

Bekenstein's topical overview "A Tale of Two Entropies" describes potentially profound implications of Wheeler's trend, in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude Shannon introduced today's most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, relies on Shannon entropy.

In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the "disorder" in a physical system of matter and energy. In 1877, Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic "chunk" of matter could be in, while still "looking" like the same macroscopic "chunk". As an example, for the air in a room, its thermodynamic entropy would be proportional to the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room, and all the ways they could be moving.
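
Boltzmann's description is captured by a single formula; with W the number of microstates compatible with the macroscopic state and k_B Boltzmann's constant,

    S = k_B ln W .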

Energy, matter, and information equivalence

Shannon's efforts to find a way to quantify the information contained in, for example, a telegraph message, led him unexpectedly to a formula with the same form as Boltzmann's. In an article in the August 2003 issue of Scientific American titled "Information in the Holographic Universe", Bekenstein summarizes that "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement" of matter and energy. The only salient difference between the thermodynamic entropy of physics and Shannon's entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information.
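
The parallel Bekenstein describes is easy to display. For a source emitting symbols with probabilities p_i, Shannon's entropy is

    H = - Σ_i p_i log₂ p_i   (in bits),

and for W equally likely microstates Boltzmann's S = k_B ln W can be rewritten as S = (k_B ln 2) · log₂ W, so one bit of Shannon information corresponds to k_B ln 2 of thermodynamic entropy, which is the unit conversion referred to above.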

The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary.

The AdS/CFT correspondence

Conjectured relationship of AdS/CFT

The anti-de Sitter/conformal field theory correspondence, sometimes called Maldacena duality or gauge/gravity duality, is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) which are used in theories of quantum gravity, formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories (CFT) which are quantum field theories, including theories similar to the Yang–Mills theories that describe elementary particles.

The duality represents a major advance in understanding of string theory and quantum gravity. This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle.

It also provides a powerful toolkit for studying strongly coupled quantum field theories. Much of the usefulness of the duality results from a strong-weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory.
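
In the canonical example of the correspondence (type IIB string theory on AdS₅ × S⁵ dual to N = 4 super-Yang–Mills theory), the strong-weak character is usually expressed through the relation, quoted here up to conventional factors, between the gauge theory's 't Hooft coupling and the AdS curvature radius ℓ:

    λ ≡ g_YM² N ~ ℓ⁴ / α'² ,

so a strongly coupled gauge theory (large λ) corresponds to a weakly curved geometry where classical gravity is a good approximation.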

The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics.

Black hole entropy

An object with relatively high entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein's equations, they were thought not to have any entropy.

But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics. If one throws a hot gas with entropy into a black hole, once it crosses the event horizon, the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down. One way of salvaging the second law is if black holes are in fact random objects with an entropy that increases by an amount greater than the entropy of the consumed gas.

Given a fixed volume, a black hole whose event horizon encompasses that volume should be the object with the highest amount of entropy. Otherwise, imagine something with a larger entropy, then by throwing more mass into that something, we obtain a black hole with less entropy, violating the second law.

The illustration demonstrates the line of reasoning that connects entropic gravity, the holographic principle, and the distribution of entropy, and the derivation of Einstein's general relativity equations from these considerations. The Einstein equations take the form of the first law of thermodynamics when the Bekenstein and Hawking relations are applied.

In a sphere of radius R, the entropy in a relativistic gas increases as the energy increases. The only known limit is gravitational; when there is too much energy, the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is directly proportional to the area of the event horizon. Gravitational time dilation causes time, from the perspective of a remote observer, to stop at the event horizon. Due to the natural limit on maximum speed of motion, this prevents falling objects from crossing the event horizon no matter how close they get to it. Since any change in quantum state requires time to flow, all objects and their quantum information state stay imprinted on the event horizon. Bekenstein concluded that from the perspective of any remote observer, the black hole entropy is directly proportional to the area of the event horizon.

Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase.

At first, Hawking did not take the analogy too seriously. He argued that the black hole must have zero temperature, since black holes do not radiate and therefore cannot be in thermal equilibrium with any black body of positive temperature. Then he discovered that black holes do radiate. When heat is added to a thermal system, the change in entropy is the increase in mass–energy divided by temperature:

    dS = δM c² / T

(Here the term δM c² is substituted for the thermal energy added to the system, generally by non-integrable random processes, in contrast to dS, which is a function of a few "state variables" only, i.e. in conventional thermodynamics only of the Kelvin temperature T and a few additional state variables, such as the pressure.)

If black holes have a finite entropy, they should also have a finite temperature. In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but they would also have to emit them in the right amount to maintain detailed balance.

Time-independent solutions to field equations do not emit radiation, because a time-independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do, and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking's calculation fixed the constant of proportionality at 1/4; the entropy of a black hole is one quarter its horizon area in Planck units.
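
With the constants restored, the resulting entropy, and the Hawking temperature of a Schwarzschild black hole of mass M, are usually written as

    S_BH = k_B c³ A / (4 G ħ) = k_B A / (4 ℓ_P²) ,    T_H = ħ c³ / (8π G M k_B) ,

where A is the horizon area and ℓ_P is the Planck length.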

The entropy is proportional to the logarithm of the number of microstates, the enumerated ways a system can be configured microscopically while leaving the macroscopic description unchanged. Black hole entropy is deeply puzzling – it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not the volume in the interior.

Later, Raphael Bousso came up with a covariant version of the bound based upon null sheets.

Black hole information paradox

Hawking's calculation suggested that the radiation which black holes emit is not related in any way to the matter that they absorb. The outgoing light rays start exactly at the edge of the black hole and spend a long time near the horizon, while the infalling matter only reaches the horizon much later. The infalling and outgoing mass/energy interact only when they cross. It is implausible that the outgoing state would be completely determined by some tiny residual scattering.

Hawking interpreted this to mean that when black holes absorb some photons in a pure state described by a wave function, they re-emit new photons in a thermal mixed state described by a density matrix. This would mean that quantum mechanics would have to be modified because, in quantum mechanics, states which are superpositions with probability amplitudes never become states which are probabilistic mixtures of different possibilities.

Troubled by this paradox, Gerard 't Hooft analyzed the emission of Hawking radiation in more detail. He noted that when Hawking radiation escapes, there is a way in which incoming particles can modify the outgoing particles. Their gravitational field would deform the horizon of the black hole, and the deformed horizon could produce different outgoing particles than the undeformed horizon. When a particle falls into a black hole, it is boosted relative to an outside observer, and its gravitational field assumes a universal form. 't Hooft showed that this field makes a logarithmic tent-pole shaped bump on the horizon of a black hole, and like a shadow, the bump is an alternative description of the particle's location and mass. For a four-dimensional spherical uncharged black hole, the deformation of the horizon is similar to the type of deformation which describes the emission and absorption of particles on a string-theory world sheet. Since the deformations on the surface are the only imprint of the incoming particle, and since these deformations would have to completely determine the outgoing particles, 't Hooft believed that the correct description of the black hole would be by some form of string theory.

This idea was made more precise by Leonard Susskind, who had also been developing holography, largely independently. Susskind argued that the oscillation of the horizon of a black hole is a complete description of both the infalling and outgoing matter, because the world-sheet theory of string theory was just such a holographic description. While short strings have zero entropy, he could identify long highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes.

This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way assuming the string-theoretical description is complete, unambiguous and non-redundant. The space-time in quantum gravity would emerge as an effective description of the theory of oscillations of a lower-dimensional black-hole horizon, and suggest that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory.

In 1995, Susskind, along with collaborators Tom Banks, Willy Fischler, and Stephen Shenker, presented a formulation of the new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory. The matrix theory they proposed was first suggested as a description of two branes in eleven-dimensional supergravity by Bernard de Wit, Jens Hoppe, and Hermann Nicolai. The later authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory. In 1997, Juan Maldacena gave the first holographic descriptions of a higher-dimensional object, the 3+1-dimensional type IIB membrane, which resolved a long-standing problem of finding a string description which describes a gauge theory. These developments simultaneously explained how string theory is related to some forms of supersymmetric quantum field theories.

Limit on information density

The Bekenstein-Hawking entropy of a black hole is proportional to the surface area of the black hole as expressed in Planck units.

Information content is defined as the logarithm of the reciprocal of the probability that a system is in a specific microstate, and the information entropy of a system is the expected value of the system's information content. This definition of entropy is equivalent to the standard Gibbs entropy used in classical physics. Applying this definition to a physical system leads to the conclusion that, for a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume. In particular, a given volume has an upper limit of information it can contain, at which it will collapse into a black hole.
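
The bound in question is usually quoted in the following form for a system of total energy E that fits inside a sphere of radius R:

    S ≤ 2π k_B R E / (ħ c) ,

a black hole filling the region saturates this bound at the Bekenstein–Hawking value.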

This suggests that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles. As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, the degrees of freedom of the original particle would be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level.

The most rigorous realization of the holographic principle is the AdS/CFT correspondence by Juan Maldacena. However, J. David Brown and Marc Henneaux had rigorously proven in 1986 that the asymptotic symmetry of 2+1-dimensional gravity gives rise to a Virasoro algebra, whose corresponding quantum theory is a 2-dimensional conformal field theory.
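
The Brown–Henneaux result can be stated compactly: for 2+1-dimensional gravity with a negative cosmological constant and AdS radius ℓ, the asymptotic symmetries form two copies of the Virasoro algebra with central charge

    c = 3ℓ / (2G) ,

which is precisely the structure of a two-dimensional conformal field theory.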

Experimental tests

This plot shows the sensitivity of various experiments to fluctuations in space and time. The horizontal axis is the log of apparatus size (or duration times the speed of light), in meters; the vertical axis is the log of the rms fluctuation amplitude in the same units. The lower left corner represents the Planck length or time. In these units, the size of the observable universe is about 26. Various physical systems and experiments are plotted. The "holographic noise" line represents the rms transverse holographic fluctuation amplitude on a given scale.

The Fermilab physicist Craig Hogan claims that the holographic principle would imply quantum fluctuations in spatial position that would lead to apparent background noise or "holographic noise" measurable at gravitational wave detectors, in particular GEO 600. However these claims have not been widely accepted, or cited, among quantum gravity researchers and appear to be in direct conflict with string theory calculations.

Analyses in 2011 of measurements of gamma ray burst GRB 041219A in 2004 by the INTEGRAL space observatory, launched in 2002 by the European Space Agency, show that Craig Hogan's noise is absent down to a scale of 10⁻⁴⁸ meters, as opposed to the scale of 10⁻³⁵ meters predicted by Hogan, and the scale of 10⁻¹⁶ meters found in measurements of the GEO 600 instrument. Research continued at Fermilab under Hogan as of 2013.

Jacob Bekenstein claimed to have found a way to test the holographic principle with a tabletop photon experiment.

Economics of nuclear power plants

EDF has said its third-generation Flamanville 3 project (seen here in 2010) will be delayed until 2018, due to "both structural and economic reasons," and the project's total cost had climbed to EUR 11 billion by 2012. In 2019, the start-up was once again pushed back, making it unlikely it could be started before the end of 2022. In July 2020, the French Court of Audit estimated the cost will reach €19.1 billion, more than 5 times the original cost estimate. The initial low cost forecasts for these megaprojects exhibited "optimism bias".

Nuclear power construction costs have varied significantly across the world and over time. Large and rapid increases in costs occurred during the 1970s, especially in the United States. Recent cost trends in countries such as Japan and Korea have been very different, including periods of stability and decline in construction costs.

New nuclear power plants typically have high capital expenditure for building plants. Fuel, operational, and maintenance costs are relatively small components of the total cost. The long service life and high capacity factor of nuclear power plants allow sufficient funds for ultimate plant decommissioning and waste storage and management to be accumulated, with little impact on the price per unit of electricity generated. Additionally, measures to mitigate climate change, such as a carbon tax or carbon emissions trading, favor the economics of nuclear power over fossil fuel power. Nuclear power is cost competitive with renewable generation when the capital cost is between $2000 and $3000/kW.

Overview

Olkiluoto 3 under construction in 2009. It is the first EPR design, but problems with workmanship and supervision have created costly delays which led to an inquiry by the Finnish nuclear regulator STUK. In December 2012, Areva estimated that the full cost of building the reactor will be about €8.5 billion, or almost three times the original delivery price of €3 billion.

The economics of nuclear power are debated. Some opponents of nuclear power cite cost as the main challenge for the technology. Ian Lowe has also challenged the economics of nuclear power. Nuclear supporters point to the historical success of nuclear power across the world, and they call for new reactors in their own countries, including proposed new but largely uncommercialized designs, as a source of new power. The Intergovernmental Panel on Climate Change (IPCC) while endorsing nuclear technology as a low carbon, mature energy source (addressing greenhouse gas emissions), notes that nuclear's share of global generation has been in decline for over 30 years, listing barriers such as operational risks, uranium mining risks, financial and regulatory risks, unresolved waste management issues, nuclear weapon proliferation concerns, and adverse public opinion.

Solar power has very low capacity factors compared to nuclear, and solar power can only achieve so much market penetration before (expensive) energy storage and transmission become necessary. This is because nuclear power "requires less maintenance and is designed to operate for longer stretches before refueling", while solar output depends on an intermittent supply of sunlight, so that at larger scales it requires a backup power source or storage.

In the United States, nuclear power faces competition from the low natural gas prices in North America. Former Exelon CEO John Rowe said in 2012 that new nuclear plants in the United States "don’t make any sense right now" and won't be economic as long as the natural gas surplus persists.

The price of new plants in China is lower than in the Western world.

In 2016, the Governor of New York, Andrew Cuomo, directed the New York Public Service Commission to consider ratepayer-financed subsidies similar to those for renewable sources to keep nuclear power stations (which accounted for one third of the state's generation, and half of its emissions-free generation) profitable in the competition against natural gas plants, which have replaced nuclear plants when they closed in other states.

A study in 2019 by the economic think tank DIW Berlin found that nuclear power has not been profitable anywhere in the world: it has never been financially viable, most plants have been built while heavily subsidised by governments, often motivated by military purposes, and nuclear power is not a good approach to tackling climate change. After reviewing trends in nuclear power plant construction since 1951, the study found that the average 1,000 MW nuclear power plant would incur an average economic loss of 4.8 billion euros ($7.7 billion AUD). This has been refuted by another study.

Investments

Very large upfront costs and long project cycles make nuclear energy a very risky investment: fluctuations in the global economy, energy prices, or regulations can, for example, reduce the demand for energy or make alternatives cheaper. However, in and of themselves, nuclear projects are not inherently vastly riskier than other large infrastructure investments. After the 2009 recession, worldwide demand for electricity fell, and regulations became more permissive of unclean but cheap energy. In Eastern Europe, a number of long-established projects are struggling to find financing, notably Belene in Bulgaria and the additional reactors at Cernavoda in Romania, and some potential backers have pulled out. Where cheap gas is available and its future supply relatively secure, this also poses a major problem for clean energy projects.

Current bids for new nuclear power plants in China were estimated at between $2800/kW and $3500/kW, as China planned to accelerate its new build program after a pause following the Fukushima disaster. However, more recent reports indicated that China will fall short of its targets. While nuclear power in China has been cheaper than solar and wind power, these are getting cheaper while nuclear power costs are growing. Moreover, third generation plants are expected to be considerably more expensive than earlier plants. Therefore, comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear plants.

Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date, all operating nuclear power plants have been developed by state-owned or regulated utility monopolies, where many of the risks associated with political change and regulatory ratcheting were borne by consumers rather than suppliers. Many countries have now liberalized the electricity market, where these risks, and the risk of cheap competition from subsidised energy sources emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the risk of investing in new nuclear power plants.

Generation III+ reactors are claimed to have a significantly longer design lifetime than their predecessors while using gradual improvements on existing designs that have been used for decades. This might offset higher construction costs to a degree, by giving a longer depreciation lifetime.

Construction costs

"The usual rule of thumb for nuclear power is that about two thirds of the generation cost is accounted for by fixed costs, the main ones being the cost of paying interest on the loans and repaying the capital..."

Capital cost, the building and financing of nuclear power plants, represents a large percentage of the cost of nuclear electricity. In 2014, the US Energy Information Administration estimated that for new nuclear plants going online in 2019, capital costs will make up 74% of the levelized cost of electricity; higher than the capital percentages for fossil-fuel power plants (63% for coal, 22% for natural gas), and lower than the capital percentages for some other nonfossil-fuel sources (80% for wind, 88% for solar PV).

Areva, the French nuclear plant operator, states that 70% of the cost of a kWh of nuclear electricity is accounted for by the fixed costs from the construction process. Some analysts argue (for example Steve Thomas, quoted in the book The Doomsday Machine by Martin Cohen and Andrew McKillop) that what is often not appreciated in debates about the economics of nuclear power is that the cost of equity, that is companies using their own money to pay for new plants, is generally higher than the cost of debt. Another advantage of borrowing may be that "once large loans have been arranged at low interest rates – perhaps with government support – the money can then be lent out at higher rates of return".

"One of the big problems with nuclear power is the enormous upfront cost. These reactors are extremely expensive to build. While the returns may be very great, they're also very slow. It can sometimes take decades to recoup initial costs. Since many investors have a short attention span, they don't like to wait that long for their investment to pay off."

Because of the large capital costs for the initial nuclear power plants built as part of a sustained build program and the relatively long construction period before revenue is returned, servicing the capital costs of the first few nuclear power plants can be the most important factor determining the economic competitiveness of nuclear energy. The investment can contribute about 70% to 80% of the costs of electricity. Timothy Stone, businessman and nuclear expert, stated in 2017, "It has long been recognized that the only two numbers which matter in [new] nuclear power are the capital cost and the cost of capital." The discount rate chosen to cost a nuclear power plant's capital over its lifetime is arguably the most sensitive parameter to overall costs. Because of the long life of new nuclear power plants, most of the value of a new nuclear power plant is created for the benefit of future generations.

The recent liberalization of the electricity market in many countries has made the economics of nuclear power generation less enticing, and no new nuclear power plants have been built in a liberalized electricity market. Previously, a monopolistic provider could guarantee output requirements decades into the future. Private generating companies now have to accept shorter output contracts and the risks of future lower-cost competition, so they desire a shorter return on investment period. This favours generation plant types with lower capital costs or high subsidies, even if associated fuel costs are higher. A further difficulty is that due to the large sunk costs but unpredictable future income from the liberalized electricity market, private capital is unlikely to be available on favourable terms, which is particularly significant for nuclear as it is capital-intensive. Industry consensus is that a 5% discount rate is appropriate for plants operating in a regulated utility environment where revenues are guaranteed by captive markets, and 10% discount rate is appropriate for a competitive deregulated or merchant plant environment. However, the independent MIT study (2003) which used a more sophisticated finance model distinguishing equity and debt capital had a higher 11.5% average discount rate.

A 2016 study argued that while costs did increase for reactors built in the past, this does not necessarily mean there is an inherent trend of cost escalation with nuclear power, as prior studies tended to examine a relatively small share of reactors built and a full analysis shows that cost trends for reactors varied substantially by country and era.

Another important factor in estimating an NPP's lifetime cost derives from its capacity factor. According to Anthonie Cilliers, a scholar and nuclear engineer, "Because of the large capital investment, and the low variable cost of operations, nuclear plants are most cost effective when they can run all the time to provide a return on the investment. Hence, plant operators now consistently achieve 92 percent capacity factor (average power produced of maximum capacity). The higher the capacity factor, the lower the cost per unit of electricity."
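
The interplay of capital cost, discount rate, and capacity factor described in this section can be made concrete with a back-of-the-envelope levelized-cost estimate. The Python sketch below uses assumed, illustrative inputs (overnight cost, discount rates, lifetime, operating cost adder); it is meant to show the sensitivity, not to reproduce any particular study:

    # Back-of-the-envelope levelized cost of electricity for a capital-intensive plant.
    # All input numbers are illustrative assumptions, not figures from the article.

    def lcoe_per_kwh(capex_per_kw, discount_rate, lifetime_years,
                     capacity_factor, om_fuel_per_kwh):
        # Capital recovery factor: annual payment per dollar of capital at the given rate.
        r, n = discount_rate, lifetime_years
        crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
        annual_capital_cost = capex_per_kw * crf        # $ per kW per year
        annual_kwh_per_kw = 8760 * capacity_factor      # kWh generated per kW per year
        return annual_capital_cost / annual_kwh_per_kw + om_fuel_per_kwh

    # Same plant evaluated at a regulated-utility rate vs a merchant-plant rate.
    for rate in (0.05, 0.10):
        cost = lcoe_per_kwh(capex_per_kw=5000, discount_rate=rate,
                            lifetime_years=60, capacity_factor=0.92,
                            om_fuel_per_kwh=0.02)
        print(f"discount rate {rate:.0%}: ~${cost:.3f}/kWh")

With these assumed inputs the cost per kWh rises from roughly $0.05 at a 5% discount rate to roughly $0.08 at 10%, which is why the discount rate is described above as the most sensitive parameter.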

Delays and overruns

Construction delays can add significantly to the cost of a plant. Since a power plant does not earn income during construction, and interest must be paid on debt from the time it is incurred, longer construction times translate directly into higher finance charges.
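
The effect of delay can be illustrated with a toy interest-during-construction calculation. Assume, purely for illustration, that the capital is drawn in equal annual tranches and accrues compound interest until start-up:

    # Toy interest-during-construction (IDC) estimate: capital drawn in equal annual
    # tranches, each compounding until the plant starts earning revenue.
    # Inputs are illustrative assumptions, not data from the article.

    def interest_during_construction(total_capex, years, rate):
        tranche = total_capex / years
        # A tranche drawn at the start of year i compounds for (years - i) further years.
        financed = sum(tranche * (1 + rate) ** (years - i) for i in range(years))
        return financed - total_capex

    for build_years in (5, 10):
        idc = interest_during_construction(total_capex=8e9, years=build_years, rate=0.07)
        print(f"{build_years}-year build: extra finance cost ~${idc / 1e9:.1f} billion")

Under these assumptions a slip from a five-year to a ten-year build roughly doubles the finance charges, from about $1.8 billion to about $3.8 billion.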

Modern nuclear power plants are planned for construction in five years or less (42 months for Canada Deuterium Uranium (CANDU) ACR-1000, 60 months from order to operation for an AP1000, 48 months from first concrete to operation for a European Pressurized Reactor (EPR) and 45 months for an ESBWR) as opposed to over a decade for some previous plants.

In Japan and France, construction costs and delays are significantly diminished because of streamlined government licensing and certification procedures. In France, one model of reactor was type-certified, using a safety engineering process similar to the process used to certify aircraft models for safety. That is, rather than licensing individual reactors, the regulatory agency certified a particular design and its construction process to produce safe reactors. U.S. law permits type-licensing of reactors, a process which is being used on the AP1000 and the ESBWR.

In Canada, cost overruns at the Darlington Nuclear Generating Station, largely due to delays and policy changes, are often cited by opponents of new reactors. Construction started in 1981 at an estimated cost of $7.4 billion (1993-adjusted CAD) and finished in 1993 at a cost of $14.5 billion. Seventy percent of the price increase was due to interest charges incurred because of delays imposed to postpone units 3 and 4, 46% inflation over a 4-year period, and other changes in financial policy.

In the United Kingdom and the United States, cost overruns on nuclear plants contributed to the bankruptcies of several utility companies. In the United States these losses helped usher in energy deregulation in the mid-1990s that saw rising electricity rates and power blackouts in California. When the UK began privatizing utilities, its nuclear reactors "were so unprofitable they could not be sold." Eventually in 1996, the government gave them away. But the company that took them over, British Energy, had to be bailed out in 2004 to the extent of 3.4 billion pounds.

Operational costs

Fuel

Fuel costs account for about 28% of a nuclear plant's operating expenses. As of 2013, half the cost of reactor fuel was taken up by enrichment and fabrication, so that the cost of the uranium concentrate raw material was 14 percent of operating costs. Doubling the price of uranium would add about 10% to the cost of electricity produced in existing nuclear plants, and about half that much to the cost of electricity in future power plants. The cost of raw uranium contributes about $0.0015/kWh to the cost of nuclear electricity, while in breeder reactors the uranium cost falls to $0.000015/kWh.

Nuclear plants require fissile fuel. Generally, the fuel used is uranium, although other materials may be used (see MOX fuel). In 2005, prices on the world market for uranium averaged US$20/lb (US$44.09/kg). On 19 April 2007, prices reached US$113/lb (US$249.12/kg). On 2 July 2008, the price had dropped to $59/lb.

As of 2008, mining activity was growing rapidly, especially from smaller companies, but putting a uranium deposit into production takes 10 years or more. The world's present measured resources of uranium, economically recoverable at a price of US$130/kg according to the industry groups Organisation for Economic Co-operation and Development (OECD), Nuclear Energy Agency (NEA) and International Atomic Energy Agency (IAEA), are enough to last for "at least a century" at current consumption rates.

According to the World Nuclear Association, "the world's present measured resources of uranium (5.7 Mt) in the cost category less than three times present spot prices and used only in conventional reactors, are enough to last for about 90 years. This represents a higher level of assured resources than is normal for most minerals. Further exploration and higher prices will certainly, on the basis of present geological knowledge, yield further resources as present ones are used up." The amount of uranium present in all currently known conventional reserves alone (excluding the huge quantities of currently-uneconomical uranium present in "unconventional" reserves such as phosphate/phosphorite deposits, seawater, and other sources) is enough to last over 200 years at current consumption rates.

Waste disposal

All nuclear plants produce radioactive waste. To pay for the cost of storing, transporting and disposing of these wastes in a permanent location, a surcharge of a tenth of a cent per kilowatt-hour is added to electricity bills in the United States. In Canada, roughly one percent of electricity bills in provinces using nuclear power is diverted to fund nuclear waste disposal.

The disposal of low-level waste reportedly costs around £2,000/m³ in the UK. High-level waste costs somewhere between £67,000/m³ and £201,000/m³. Waste volumes are typically split roughly 80%/20% between low-level and high-level waste, and one reactor produces roughly 12 m³ of high-level waste annually.
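
Combining the figures above gives a rough annual disposal bill per reactor for high-level waste alone; the sketch below simply multiplies the quoted per-cubic-metre cost range by the quoted 12 m³ per year.

    # Annual high-level waste disposal cost per reactor, using the UK figures quoted above.
    cost_per_m3_low = 67_000      # £/m³ (lower end quoted)
    cost_per_m3_high = 201_000    # £/m³ (upper end quoted)
    annual_hlw_volume = 12        # m³ of high-level waste per reactor per year (quoted)

    print(f"Annual cost per reactor: £{cost_per_m3_low * annual_hlw_volume:,} to £{cost_per_m3_high * annual_hlw_volume:,}")
    # ≈ £804,000 to £2,412,000 per reactor per year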

Decommissioning

At the end of a nuclear plant's lifetime, the plant must be decommissioned. This entails either dismantling, safe storage or entombment. In the United States, the Nuclear Regulatory Commission (NRC) requires plants to finish the process within 60 years of closing. Since it costs around $500 million or more to shut down and decommission a plant, the NRC requires plant owners to set aside money when the plant is still operating to pay for the future shutdown costs.

Decommissioning a reactor that has undergone a meltdown is inevitably more difficult and expensive. Three Mile Island was decommissioned 14 years after its incident for $837 million. The cost of the Fukushima disaster cleanup is not yet known but has been estimated at around $100 billion.

Proliferation and terrorism

A 2011 report for the Union of Concerned Scientists stated that "the costs of preventing nuclear proliferation and terrorism should be recognized as negative externalities of civilian nuclear power, thoroughly evaluated, and integrated into economic assessments—just as global warming emissions are increasingly identified as a cost in the economics of coal-fired electricity".

"Construction of the ELWR was completed in 2013 and is optimized for civilian electricity production, but it has "dual-use" potential and can be modified to produce material for nuclear weapons."

Safety

2,000 candles in memory of the Chernobyl disaster of 1986, at a commemoration 25 years after the nuclear accident, which also remembered the Fukushima nuclear disaster of 2011.

Nancy Folbre, an economist at the University of Massachusetts, has questioned the economic viability of nuclear power following the 2011 Japanese nuclear accidents:

The proven dangers of nuclear power amplify the economic risks of expanding reliance on it. Indeed, the stronger regulation and improved safety features for nuclear reactors called for in the wake of the Japanese disaster will almost certainly require costly provisions that may price it out of the market.

The cascade of problems at Fukushima, from one reactor to another, and from reactors to fuel storage pools, will affect the design, layout and ultimately the cost of future nuclear plants.

Insurance

Insurance available to the operators of nuclear power plants varies by nation. The costs of a worst-case nuclear accident are so large that it would be difficult for the private insurance industry to carry the size of the risk, and the premium cost of full insurance would make nuclear energy uneconomic.

Nuclear power has largely worked under an insurance framework that limits or structures accident liabilities in accordance with the Paris Convention on nuclear third-party liability, the Brussels Supplementary Convention, the Vienna Convention on Civil Liability for Nuclear Damage, and, in the United States, the Price-Anderson Act. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity.

In Canada, the Canadian Nuclear Liability Act requires nuclear power plant operators to obtain $650 million (CAD) of liability insurance coverage per installation (regardless of the number of individual reactors present) starting in 2017 (up from the prior $75 million requirement established in 1976), increasing to $750 million in 2018, to $850 million in 2019, and finally to $1 billion in 2020. Claims beyond the insured amount would be assessed by a government appointed but independent tribunal, and paid by the federal government.

In the UK, the Nuclear Installations Act 1965 governs liability for nuclear damage for which a UK nuclear licensee is responsible. The limit for the operator is £140 million.

In the United States, the Price-Anderson Act has governed the insurance of the nuclear power industry since 1957. Owners of nuclear power plants are required to pay a premium each year for the maximum obtainable amount of private insurance ($450 million) for each licensed reactor unit. This primary or "first tier" insurance is supplemented by a second tier. In the event a nuclear accident incurs damages in excess of $450 million, each licensee would be assessed a prorated share of the excess up to $121,255,000. With 104 reactors currently licensed to operate, this secondary tier of funds contains about $12.61 billion. This results in a maximum combined primary+secondary coverage amount of up to $13.06 billion for a hypothetical single-reactor incident. If 15 percent of these funds are expended, prioritization of the remaining amount would be left to a federal district court. If the second tier is depleted, Congress is committed to determine whether additional disaster relief is required. In July 2005, Congress extended the Price-Anderson Act to newer facilities.
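
The two-tier totals quoted above follow directly from the per-reactor figures; a minimal sketch of that arithmetic (104 licensed reactors, $450 million of primary insurance per unit, and a $121,255,000 retrospective assessment per unit):

    # Price-Anderson two-tier coverage arithmetic using the figures above.
    licensed_reactors = 104
    primary_insurance = 450e6                 # $ maximum private insurance per unit
    retrospective_assessment = 121_255_000    # $ prorated share per unit

    secondary_tier = licensed_reactors * retrospective_assessment
    combined_coverage = primary_insurance + secondary_tier

    print(f"Secondary tier: ${secondary_tier / 1e9:.2f} billion")        # ≈ $12.61 billion
    print(f"Combined coverage: ${combined_coverage / 1e9:.2f} billion")  # ≈ $13.06 billion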

Cost per kWh

The cost per unit of electricity produced (kilowatt-hour, kWh, or megawatt-hour, MWh = 1,000 kWh) will vary according to country, depending on costs in the area, the regulatory regime and consequent financial and other risks, and the availability and cost of finance. Construction costs per kilowatt of generating capacity will also depend on geographic factors such as the availability of cooling water, earthquake likelihood, and availability of suitable power grid connections. So it is not possible to accurately estimate costs on a global basis.

Levelized cost of energy estimates

In Levelized Cost of Energy (LCOE) estimates and comparisons, a very significant factor is the assumed discount rate, which reflects an investor's preference for short-term returns over long-term value. Because it is an economic rather than a physical parameter, the choice of discount rate alone can double or triple the estimated cost of energy. For low-carbon sources of energy such as nuclear power, experts argue that the discount rate should be set low (1–3%), since the value of low-carbon energy to future generations lies in avoiding very high future external costs of climate change. Many LCOE comparisons, however, use high discount rates (around 10%), which mostly reflect the preference of commercial investors for short-term profit and do not account for the decarbonization contribution. For example, the IPCC AR3 WG3 calculation based on a 10% discount rate produced an LCOE estimate of $97/MWh for nuclear power, while merely assuming a 1.4% discount rate drops the estimate to $42/MWh; the same issue has been raised for other low-carbon energy sources with high initial capital costs.
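
To make the discount-rate sensitivity concrete, the sketch below computes a simplified LCOE for a capital-heavy plant under the two discount rates mentioned. The overnight capital cost, operating cost, plant size, capacity factor, and lifetime are illustrative assumptions, not figures from the studies cited; the point is only that the same physical plant yields an LCOE roughly twice as high at a 10% discount rate as at 1.4%.

    # Simplified LCOE sketch showing discount-rate sensitivity.
    # All plant parameters below are illustrative assumptions.

    def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
        """Levelized cost = discounted lifetime costs / discounted lifetime output."""
        discounted_costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                                       for t in range(1, lifetime_years + 1))
        discounted_output = sum(annual_mwh / (1 + discount_rate) ** t
                                for t in range(1, lifetime_years + 1))
        return discounted_costs / discounted_output

    capex = 6_000e6                    # $ overnight capital cost (assumed)
    annual_opex = 120e6                # $ operations, maintenance and fuel per year (assumed)
    annual_mwh = 1_000 * 8760 * 0.9    # 1,000 MW plant at a 90% capacity factor (assumed)
    lifetime_years = 40                # years (assumed)

    for rate in (0.014, 0.10):
        estimate = lcoe(capex, annual_opex, annual_mwh, lifetime_years, rate)
        print(f"Discount rate {rate:.1%}: LCOE ≈ ${estimate:.0f}/MWh")
    # With these assumptions the low rate gives roughly $40/MWh and the high
    # rate roughly $93/MWh, the same order as the figures quoted above.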

Other cross-market LCOE estimates have been criticized for basing their calculations on undisclosed portfolios of cherry-picked projects that were significantly delayed for various reasons, while excluding projects that were built on time and within budget. For example, Bloomberg New Energy Finance (BNEF), based on an undisclosed portfolio of projects, estimated nuclear power LCOE at €190–375/MWh, which is up to 900% higher than the published LCOE of €30/MWh for the actual, existing Olkiluoto nuclear power plant, even after accounting for construction delays in the OL3 unit (although this published figure is an average LCOE across new and old reactors). Based on the published methodology details, it has been pointed out that BNEF assumed a cost of capital 230% higher than the actual one (1.56%), fixed operating costs 300% higher than actual, and a nameplate power (1,400 MW) lower than the actual 1,600 MW, all of which contributed to a significant overestimate of the price.

In 2019 the US EIA revised the levelized cost of electricity from new advanced nuclear power plants going online in 2023 to be $0.0775/kWh before government subsidies, using a regulated industry 4.3% cost of capital (WACC - pre-tax 6.6%) over a 30-year cost recovery period. Financial firm Lazard also updated its levelized cost of electricity report costing new nuclear at between $0.118/kWh and $0.192/kWh using a commercial 7.7% cost of capital (WACC - pre-tax 12% cost for the higher-risk 40% equity finance and 8% cost for the 60% loan finance) over a 40-year lifetime.
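
The cost-of-capital figures drive most of the gap between the EIA and Lazard estimates. Below is a minimal sketch of the standard WACC arithmetic for the Lazard financing split quoted above (40% equity at a 12% pre-tax cost, 60% debt at 8%); the 40% corporate tax rate used to move from the pre-tax weighted average to the quoted ~7.7% after-tax figure is an assumption.

    # WACC arithmetic for the Lazard financing split quoted above.
    equity_share, cost_of_equity = 0.40, 0.12   # quoted
    debt_share, cost_of_debt = 0.60, 0.08       # quoted
    tax_rate = 0.40                             # assumed, for illustration

    pre_tax_wacc = equity_share * cost_of_equity + debt_share * cost_of_debt
    after_tax_wacc = equity_share * cost_of_equity + debt_share * cost_of_debt * (1 - tax_rate)

    print(f"Pre-tax weighted average: {pre_tax_wacc:.1%}")    # 9.6%
    print(f"After-tax WACC:           {after_tax_wacc:.1%}")  # ≈ 7.7%, matching the quoted figure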

Comparisons with other power sources

Levelized cost of energy based on different studies. Electricity from renewables became cheaper while electricity from new nuclear plants became more expensive.

Generally, a nuclear power plant is significantly more expensive to build than an equivalent coal-fueled or gas-fueled plant. If natural gas is plentiful and cheap, the operating costs of conventional power plants are lower. Most forms of electricity generation impose some negative externality costs on third parties that are not directly paid by the producer, such as pollution that harms the health of those living near and downwind of the power plant, and generation costs often do not reflect these external costs.

A comparison of the "real" cost of various energy sources is complicated by a number of uncertainties:

  • Potential cost increases or decreases due to climate change caused by greenhouse gas emissions are hard to estimate: carbon taxes may be enacted, or carbon capture and storage may become mandatory.
  • The cost of environmental damage caused by any energy source through land use (whether for mining fuels or for power generation), air and water pollution, solid waste production, manufacturing-related damage (such as from mining and processing ores or rare-earth elements), etc.
  • The cost and political feasibility of disposing of the waste from reprocessed spent nuclear fuel are still not fully resolved. In the United States, the ultimate disposal costs of spent nuclear fuel are assumed by the U.S. government after producers pay a fixed surcharge.
  • Due to the dominant role of initial construction costs and the multi-year construction time, the interest rate on the capital required (as well as the timeline over which the plant is completed) has a major impact on the total cost of building a new nuclear plant.

Lazard's report on the estimated levelized cost of energy by source (10th edition) estimated unsubsidized prices of $97–$136/MWh for nuclear, $50–$60/MWh for solar PV, $32–$62/MWh for onshore wind, and $82–$155/MWh for offshore wind.

However, the most important subsidies to the nuclear industry do not involve cash payments. Rather, they shift construction costs and operating risks from investors to taxpayers and ratepayers, burdening them with an array of risks ranging from cost overruns and defaults to accidents and nuclear waste management. This approach has remained remarkably consistent throughout the nuclear industry's history, and it distorts market choices that would otherwise favor less risky energy investments.

Benjamin K. Sovacool said in 2011 that, "When the full nuclear fuel cycle is considered — not only reactors but also uranium mines and mills, enrichment facilities, spent fuel repositories, and decommissioning sites — nuclear power proves to be one of the costliest sources of energy".

Brookings Institution published The Net Benefits of Low and No-Carbon Electricity Technologies in 2014, which states, after performing an energy and emissions cost analysis, that "The net benefits of new nuclear, hydro, and natural gas combined cycle plants far outweigh the net benefits of new wind or solar plants", with nuclear power determined to be the most cost-effective low-carbon power technology. Moreover, Paul Joskow of MIT maintains that the "levelized cost of electricity" (LCOE) metric is a poor means of comparing electricity sources, as it hides extra costs, such as the need to frequently operate backup power stations, incurred due to the use of intermittent power sources such as wind energy, while the value of baseload power sources is understated.

Kristin Shrader-Frechette analysed 30 papers on the economics of nuclear power for possible conflicts of interest. She found that of the 30, 18 had been funded either by the nuclear industry or by pro-nuclear governments and were pro-nuclear, 11 were funded by universities or non-profit non-governmental organisations and were anti-nuclear, and the remaining one had unknown sponsors and took a pro-nuclear stance. The pro-nuclear studies were accused of using cost-trimming methods such as ignoring government subsidies and using industry projections over empirical evidence wherever possible. The situation was compared to medical research, where 98% of industry-sponsored studies return positive results.

Other economic issues

Nuclear power plants tend to be competitive in areas where other fuel resources are not readily available — France, most notably, has almost no native supplies of fossil fuels. France's nuclear power experience has also been one of paradoxically increasing rather than decreasing costs over time.

Making a massive investment of capital in a project with long-term recovery can affect a company's credit rating.

A Council on Foreign Relations report on nuclear energy argues that a rapid expansion of nuclear power may create shortages in building materials such as reactor-quality concrete and steel, skilled workers and engineers, and safety controls by skilled inspectors. This would drive up current prices.

Old nuclear plants generally had a somewhat limited ability to significantly vary their output in order to match changing demand (a practice called load following). However, many BWRs, some PWRs (mainly in France), and certain CANDU reactors (primarily those at Bruce Nuclear Generating Station) have various levels of load-following capabilities (sometimes substantial), which allow them to fill more than just baseline generation needs. Several newer reactor designs also offer some form of enhanced load-following capability. For example, the Areva EPR can slew its electrical output power between 990 and 1,650 MW at 82.5 MW per minute.
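
For a sense of scale, the quoted EPR figures imply the full flexible range can be traversed in about eight minutes; the one-line calculation below just divides the power range by the ramp rate.

    # Time for an EPR to ramp across its quoted flexible output range.
    power_range_mw = 1650 - 990       # MW (quoted range)
    ramp_rate_mw_per_min = 82.5       # MW per minute (quoted)
    print(f"Full-range ramp time: {power_range_mw / ramp_rate_mw_per_min:.0f} minutes")  # 8 minutes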

The number of companies that manufacture certain parts for nuclear reactors is limited, particularly the large forgings used for reactor vessels and steam systems. In 2010, only four companies (Japan Steel Works, China First Heavy Industries, Russia's OMZ Izhora and Korea's Doosan Heavy Industries) manufactured pressure vessels for reactors of 1,100 MWe or larger. It has been suggested that this poses a bottleneck that could hamper expansion of nuclear power internationally; however, some Western reactor designs, such as CANDU-derived reactors that rely on individual pressurized fuel channels, require no large steel pressure vessel. The large forgings for steam generators, although still very heavy, can be produced by a far larger number of suppliers.

For a country with both a nuclear power industry and a nuclear arms industry, synergies between the two can favor a nuclear power plant with an otherwise uncertain economic case. For example, in the United Kingdom, researchers have informed MPs that the government was using the Hinkley Point C project to cross-subsidise the UK military's nuclear-related activity by maintaining nuclear skills. In support of that, University of Sussex researchers Andy Stirling and Phil Johnstone stated that the costs of the Trident nuclear submarine programme would be prohibitive without "an effective subsidy from electricity consumers to military nuclear infrastructure".

The hope for economies of scale was one of the reasons for the development of "standard reactor designs" such as the German Konvoi (only three such plants were ever actually built, and they differ substantially from one another due to German federalism) and its successor, the Franco-German EPR.

The nuclear power industry in Western nations has a history of construction delays, cost overruns, plant cancellations, and nuclear safety issues despite significant government subsidies and support.

Following the Fukushima nuclear disaster in 2011, costs are likely to go up for currently operating and new nuclear power plants, due to increased requirements for on-site spent fuel management and elevated design basis threats. After Fukushima, the International Energy Agency halved its estimate of additional nuclear generating capacity built by 2035.

A 2017 analysis by Bloomberg showed that over half of U.S. nuclear plants were running at a loss, particularly those at single-unit sites.

As of 2020, some companies and organizations have sought to develop proposals and projects aimed at reducing the traditional costs of nuclear power plant construction, often using small modular reactor designs rather than conventional reactors. For example, TerraPower, a company based in Bellevue, Washington and co-founded by Bill Gates, aims to build a sodium fast reactor for $1 billion, with a proposed site in Kemmerer, Wyoming. Also in 2020, the Energy Impact Center, a Washington, D.C.–based research institute founded by Bret Kugelmass, introduced the OPEN100 project, a platform that provides open-source blueprints for a nuclear plant with a pressurized water reactor. The OPEN100 model could be used to build a plant for $300 million in two years. Oklo, a Silicon Valley–based startup, aims to build micro modular reactors that run on radioactive waste produced by conventional nuclear power plants. Like OPEN100, Oklo aims to reduce costs partly by standardizing the construction of its plants. Other entities developing similar plans include X-energy, NuScale Power, General Atomics, Elysium Industries, and others.
