Sunday, August 31, 2014

Loop quantum gravity

From Wikipedia, the free encyclopedia
 
Loop quantum gravity (LQG) is a theory that attempts to describe the quantum properties of gravity. It is also a theory of quantum space and quantum time, because, according to general relativity, the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space where space is granular. The granularity is a direct consequence of the quantization. It has the same nature as the granularity of photons in the quantum theory of electromagnetism, or the discrete energy levels of atoms. But here, it is space itself that is discrete.

More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 10^{-35} meters. According to the theory, there is no meaning to distance at scales smaller than the Planck scale. Therefore, LQG predicts that not just matter, but also space itself has an atomic structure.

Today LQG is a vast area of research, developing in several directions, which involves about 50 research groups worldwide.[1][unreliable source?] They all share the basic physical assumptions and the mathematical description of quantum space. The full development of the theory is being pursued in two directions: the more traditional canonical loop quantum gravity, and the newer covariant loop quantum gravity, more commonly called spin foam theory.

Research into the physical consequences of the theory is proceeding in several directions. Among these, the most well-developed is the application of LQG to cosmology, called loop quantum cosmology (LQC). LQC applies LQG ideas to the study of the early universe and the physics of the Big Bang. Its most spectacular consequence is that the evolution of the universe can be continued beyond the Big Bang. The Big Bang thus appears to be replaced by a sort of cosmic Big Bounce.

History

In 1986, Abhay Ashtekar reformulated Einstein's general relativity in a language closer to that of the rest of fundamental physics. Shortly after, Ted Jacobson and Lee Smolin realized that the formal equation of quantum gravity, called the Wheeler–DeWitt equation, admitted solutions labelled by loops, when rewritten in the new Ashtekar variables, and Carlo Rovelli and Lee Smolin defined a nonperturbative and background-independent quantum theory of gravity in terms of these loop solutions. Jorge Pullin and Jerzy Lewandowski understood that the intersections of the loops are essential for the consistency of the theory, and the theory should be formulated in terms of intersecting loops, or graphs.
In 1994, Rovelli and Smolin showed that the quantum operators of the theory associated to area and volume have a discrete spectrum. That is, geometry is quantized. This result defines an explicit basis of states of quantum geometry, which turned out to be labelled by Roger Penrose's spin networks, which are graphs labelled by spins.

The canonical version of the dynamics was put on firm ground by Thomas Thiemann, who defined an anomaly-free Hamiltonian operator, showing the existence of a mathematically consistent background-independent theory. The covariant or spin foam version of the dynamics was developed over several decades and crystallized in 2008, from the joint work of research groups in France, Canada, the UK, Poland, and Germany, leading to the definition of a family of transition amplitudes which, in the classical limit, can be shown to be related to a family of truncations of general relativity.[2] The finiteness of these amplitudes was proven in 2011.[3][4] It requires the existence of a positive cosmological constant, and this is consistent with the observed acceleration in the expansion of the Universe.

General covariance and background independence

In theoretical physics, general covariance is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations. The essential idea is that coordinates are only artifices used in describing nature, and hence should play no role in the formulation of fundamental physical laws. A more significant requirement is the principle of general relativity, which states that the laws of physics take the same form in all reference systems. This is a generalization of the principle of special relativity, which states that the laws of physics take the same form in all inertial frames.
In mathematics, a diffeomorphism is an isomorphism in the category of smooth manifolds. It is an invertible function that maps one differentiable manifold to another, such that both the function and its inverse are smooth. These are the defining symmetry transformations of General Relativity since the theory is formulated only in terms of a differentiable manifold.

In general relativity, general covariance is intimately related to "diffeomorphism invariance". This symmetry is one of the defining features of the theory. However, it is a common misunderstanding that "diffeomorphism invariance" refers to the invariance of the physical predictions of a theory under arbitrary coordinate transformations; this is untrue, and in fact every physical theory is invariant under coordinate transformations in this way. Diffeomorphisms, as mathematicians define them, correspond to something much more radical; intuitively they can be envisaged as simultaneously dragging all the physical fields (including the gravitational field) over the bare differentiable manifold while staying in the same coordinate system. Diffeomorphisms are the true symmetry transformations of general relativity, and come about from the assertion that the formulation of the theory is based on a bare differentiable manifold, but not on any prior geometry - the theory is background-independent (this is a profound shift, as all physical theories before general relativity had as part of their formulation a prior geometry). What is preserved under such transformations are the coincidences between the values the gravitational field takes at such and such a "place" and the values the matter fields take there; from these relationships one can form a notion of matter being located with respect to the gravitational field, or vice versa. This is what Einstein discovered: physical entities are located with respect to one another only, and not with respect to the spacetime manifold - as Carlo Rovelli puts it: "No more fields on spacetime: just fields on fields."[5] This is the true meaning of the saying "The stage disappears and becomes one of the actors"; space-time as a "container" over which physics takes place has no objective physical meaning, and instead the gravitational interaction is represented as just one of the fields forming the world. This is known as the relationalist interpretation of space-time. The realization by Einstein that general relativity should be interpreted this way is the origin of his remark "Beyond my wildest expectations".

In LQG this aspect of general relativity is taken seriously and this symmetry is preserved by requiring that the physical states remain invariant under the generators of diffeomorphisms. The interpretation of this condition is well understood for purely spatial diffeomorphisms. However, the understanding of diffeomorphisms involving time (the Hamiltonian constraint) is more subtle because it is related to dynamics and the so-called "problem of time" in general relativity.[6] A generally accepted calculational framework to account for this constraint has yet to be found.[7][8] A plausible candidate for the quantum Hamiltonian constraint is the operator introduced by Thiemann.[9]

LQG is formally background independent. The equations of LQG are not embedded in, or presuppose, space and time (except for its invariant topology). Instead, they are expected to give rise to space and time at distances which are large compared to the Planck length. The issue of background independence in LQG still has some unresolved subtleties. For example, some derivations require a fixed choice of the topology, while any consistent quantum theory of gravity should include topology change as a dynamical process.

Constraints and their Poisson Bracket Algebra

The constraints of classical canonical general relativity

In the Hamiltonian formulation of ordinary classical mechanics the Poisson bracket is an important concept. A "canonical coordinate system" consists of canonical position and momentum variables that satisfy canonical Poisson-bracket relations,

\{ q_i , p_j \} = \delta_{ij}

where the Poisson bracket is given by
\{f,g\} = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial q_{i}} \frac{\partial g}{\partial p_{i}} - \frac{\partial f}{\partial p_{i}} \frac{\partial g}{\partial q_{i}}\right)
for arbitrary phase space functions f (q_i , p_j) and g (q_i , p_j). With the use of Poisson brackets, Hamilton's equations can be rewritten as,

\dot{q}_i = \{ q_i , H \},
\dot{p}_i = \{ p_i , H \}.

These equations describe a "flow" or orbit in phase space generated by the Hamiltonian H. Given any phase space function F (q,p), we have

{d \over dt} F (q_i,p_i) = \{ F , H \}.
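These relations are easy to verify symbolically. The following is a minimal sketch in Python using sympy (the harmonic oscillator Hamiltonian is purely an illustrative choice, nothing specific to general relativity):

# Minimal sketch: Poisson brackets and Hamilton's equations, one degree of freedom.
import sympy as sp

q, p, m, w = sp.symbols('q p m w', real=True, positive=True)

def poisson(f, g):
    # {f, g} = (df/dq)(dg/dp) - (df/dp)(dg/dq)
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

H = p**2 / (2*m) + m * w**2 * q**2 / 2     # illustrative Hamiltonian

print(poisson(q, p))   # 1         : the canonical relation {q, p} = 1
print(poisson(q, H))   # p/m       : q-dot = {q, H}
print(poisson(p, H))   # -m w**2 q : p-dot = {p, H}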

Let us consider constrained systems, of which general relativity is an example. In a similar way, the Poisson bracket between a constraint and the phase space variables generates a flow along an orbit in (the unconstrained) phase space. There are three types of constraints in Ashtekar's reformulation of classical general relativity:

SU(2) Gauss gauge constraints

The Gauss constraints

G_j (x) = 0.

This represents an infinite number of constraints, one for each value of x. These come about from re-expressing general relativity as an \mathrm{SU}(2) Yang–Mills type gauge theory (Yang–Mills is a generalization of Maxwell's theory in which the gauge field transforms as a vector under gauge transformations; that is, the gauge field is of the form A_a^i (x), where i is an internal index. See Ashtekar variables). This infinite number of Gauss gauge constraints can be smeared with test fields with internal indices, \lambda^j (x),

G (\lambda) = \int d^3x G_j (x) \lambda^j (x),

which we demand vanish for any such function. These smeared constraints, defined with respect to a suitable space of smearing functions, give an equivalent description to the original constraints.

In fact Ashtekar's formulation may be thought of as ordinary \mathrm{SU}(2) Yang–Mills theory together with the following special constraints, resulting from diffeomorphism invariance, and a Hamiltonian that vanishes. The dynamics of such a theory are thus very different from that of ordinary Yang–Mills theory.

Spatial diffeomorphism constraints

The spatial diffeomorphism constraints

C_a (x) = 0

can be smeared by the so-called shift functions \vec{N} (x) to give an equivalent set of smeared spatial diffeomorphism constraints,

C (\vec{N}) = \int d^3 x C_a (x) N^a (x).

These generate spatial diffeomorphisms along orbits defined by the shift function N^a (x).

Hamiltonian constraints

The Hamiltonian constraints

H (x) = 0

can be smeared by the so-called lapse functions N (x) to give an equivalent set of smeared Hamiltonian constraints,

H (N) = \int d^3 x H (x) N (x).

These generate time diffeomorphisms along orbits defined by the lapse function N (x).
In the Ashtekar formulation, the gauge field A_a^i (x) is the configuration variable (the configuration variable being analogous to q in ordinary mechanics), and its conjugate momentum is the (densitized) triad (electric field) \tilde{E}^a_i (x). The constraints are certain functions of these phase space variables.

We consider the action of the constraints on arbitrary phase space functions. An important notion here is the Lie derivative, \mathcal{L}_V, which is basically a derivative operation that infinitesimally "shifts" functions along some orbit with tangent vector V.

The Poisson bracket algebra

Of particular importance is the Poisson bracket algebra formed between the (smeared) constraints themselves, as it completely determines the theory. In terms of the above smeared constraints, the constraint algebra amongst the Gauss' laws reads,

\{ G (\lambda) , G (\mu) \} = G ([\lambda , \mu])

where [\lambda , \mu]^k = \lambda_i \mu_j \epsilon^{ijk}. And so we see that the Poisson bracket of two Gauss' laws is equivalent to a single Gauss' law evaluated on the commutator of the smearings. The Poisson bracket amongst the spatial diffeomorphism constraints reads

\{ C (\vec{N})  , C (\vec{M}) \} = C (\mathcal{L}_\vec{N} \vec{M})

and we see that its effect is to "shift the smearing". The reason for this is that the smearing functions are not functions of the canonical variables, and so the spatial diffeomorphism does not generate diffeomorphisms on them. They do, however, generate diffeomorphisms on everything else. This is equivalent to leaving everything else fixed while shifting the smearing. The action of the spatial diffeomorphism on the Gauss law is

\{ C (\vec{N})  , G (\lambda) \} = G (\mathcal{L}_\vec{N} \lambda),

again, it shifts the test field \lambda. The Gauss law has vanishing Poisson bracket with the Hamiltonian constraint. The spatial diffeomorphism constraint with a Hamiltonian gives a Hamiltonian with its smearing shifted,

\{ C (\vec{N})  , H (M) \} = H (\mathcal{L}_\vec{N} M).

Finally, the Poisson bracket of two Hamiltonians is a spatial diffeomorphism,

\{ H (N)  , H (M) \} = C (K)

where K is some phase space function. That is, it is a sum over infinitesimal spatial diffeomorphism constraints where the coefficients of proportionality are not constants but have non-trivial phase space dependence.

A (Poisson bracket) Lie algebra, with constraints C_I, is of the form

\{ C_I  , C_J \} = f_{IJ}^K C_K

where f_{IJ}^K are constants (the so-called structure constants). The above Poisson bracket algebra for General relativity does not form a true Lie algebra as we have structure functions rather than structure constants for the Poisson bracket between two Hamiltonians. This leads to difficulties.

Dirac observables

The constraints define a constraint surface in the original phase space. The gauge motions of the constraints apply to all phase space but have the feature that they leave the constraint surface where it is, and thus the orbit of a point in the hypersurface under gauge transformations will be an orbit entirely within it. Dirac observables are defined as phase space functions, O, that Poisson commute with all the constraints when the constraint equations are imposed,

\{ G_j , O \}_{G_j=C_a=H = 0} = \{ C_a , O \}_{G_j=C_a=H = 0} = \{ H , O \}_{G_j=C_a=H = 0} = 0,

that is, they are quantities defined on the constraint surface that are invariant under the gauge transformations of the theory.

Then, solving only the constraint G_j = 0 and determining the Dirac observables with respect to it leads us back to the ADM phase space with constraints H, C_a. The dynamics of general relativity is generated by the constraints; it can be shown that six Einstein equations describing time evolution (really a gauge transformation) can be obtained by calculating the Poisson brackets of the three-metric and its conjugate momentum with a linear combination of the spatial diffeomorphism and Hamiltonian constraints. The vanishing of the constraints, giving the physical phase space, yields the four other Einstein equations.[10]

Quantization of the constraints - the equations of Quantum General Relativity

Pre-history and Ashtekar's new variables

Many of the technical problems in canonical quantum gravity revolve around the constraints.
Canonical general relativity was originally formulated in terms of metric variables, but there seemed to be insurmountable mathematical difficulties in promoting the constraints to quantum operators because of their highly non-linear dependence on the canonical variables. The equations were much simplified with the introduction of Ashtekar's new variables. Ashtekar variables describe canonical general relativity in terms of a new pair of canonical variables closer to those of gauge theories. The first step consists of using densitized triads \tilde{E}_i^a (a triad E_i^a is simply three orthogonal vector fields labeled by i = 1,2,3, and the densitized triad is defined by \tilde{E}_i^a = \sqrt{\operatorname{det}(q)} E_i^a) to encode information about the spatial metric,

\operatorname{det}(q) q^{ab} = \tilde{E}_i^a \tilde{E}_j^b \delta^{ij}.
(where \delta^{ij} is the flat space metric, and the above equation expresses that q^{ab}, when written in terms of the basis E_i^a, is locally flat). (Formulating general relativity with triads instead of metrics was not new.) The densitized triads are not unique, and in fact one can perform a local rotation in space with respect to the internal indices i. The canonically conjugate variable is related to the extrinsic curvature by K_a^i = K_{ab} \tilde{E}^{ai} / \sqrt{\operatorname{det}(q)}. But problems similar to those of the metric formulation arise when one tries to quantize the theory. Ashtekar's new insight was to introduce a new configuration variable,

A_a^i = \Gamma_a^i - i K_a^i

that behaves as a complex \operatorname{SU}(2) connection where \Gamma_a^i is related to the so-called spin connection via \Gamma_a^i = \Gamma_{ajk} \epsilon^{jki}. Here A_a^i is called the chiral spin connection. It defines a covariant derivative \mathcal{D}_a. It turns out that \tilde{E}^a_i is the conjugate momentum of A_a^i, and together these form Ashtekar's new variables.

The expressions for the constraints in Ashtekar variables - the Gauss law, the spatial diffeomorphism constraint and the (densitized) Hamiltonian constraint - then read:

G^i = \mathcal{D}_a \tilde{E}_i^a = 0
C_a = \tilde{E}_i^b F^i_{ab} - A_a^i (\mathcal{D}_b \tilde{E}_i^b) = V_a - A_a^i G^i = 0,
\tilde{H} = \epsilon_{ijk} \tilde{E}_i^a \tilde{E}_j^b F^i_{ab} = 0

respectively, where F^i_{ab} is the field strength tensor of the connection A_a^i, and where V_a is referred to as the vector constraint. The above-mentioned local rotational invariance in space is the origin of the \operatorname{SU}(2) gauge invariance here expressed by the Gauss law. Note that these constraints are polynomial in the fundamental variables, unlike the constraints in the metric formulation. This dramatic simplification seemed to open up the way to quantizing the constraints. (See the article Self-dual Palatini action for a derivation of Ashtekar's formalism.)

With Ashtekar's new variables, given the configuration variable A^i_a, it is natural to consider wavefunctions \Psi (A^i_a). This is the connection representation. It is analogous to ordinary quantum mechanics with configuration variable q and wavefunctions \psi (q). The configuration variable gets promoted to a quantum operator via:

\hat{A}_a^i \Psi (A) = A_a^i \Psi (A),

(analogous to \hat{q} \psi (q) = q \psi (q)) and the triads are (functional) derivatives,

\hat{\tilde{E}}_i^a \Psi (A) = - i {\delta \Psi (A) \over \delta A_a^i}.

(analogous to \hat{p} \psi (q) = -i \hbar d \psi (q) / dq). In passing over to the quantum theory the constraints become operators on a kinematic Hilbert space (the unconstrained \operatorname{SU}(2) Yang–Mills Hilbert space). Note that different orderings of the A's and \tilde{E}'s when replacing the \tilde{E}'s with derivatives give rise to different operators - the choice made is called the factor ordering and should be chosen via physical reasoning. Formally they read

\hat{G}_j \vert\psi \rangle = 0
\hat{C}_a \vert\psi \rangle = 0
\hat{\tilde{H}} \vert\psi \rangle = 0.
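The analogy with ordinary quantum mechanics, including the factor ordering remark above, can be made concrete in a toy symbolic computation (a sympy sketch; the wavefunction chosen is an arbitrary illustration):

# Toy version of the connection representation analogy: q acts by
# multiplication, p acts as -i d/dq (with hbar = 1).
import sympy as sp

q, k = sp.symbols('q k', real=True)
psi = sp.exp(sp.I * k * q)                 # an illustrative wavefunction

q_hat = lambda f: q * f                    # configuration operator
p_hat = lambda f: -sp.I * sp.diff(f, q)    # momentum operator

print(sp.simplify(p_hat(psi) - k * psi))   # 0: psi is a momentum eigenstate

# Factor ordering matters: q_hat p_hat and p_hat q_hat differ by i
f = sp.Function('f')(q)
print(sp.simplify(q_hat(p_hat(f)) - p_hat(q_hat(f))))   # I*f(q)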

There are still problems in properly defining all these equations and solving them. For example, the Hamiltonian constraint Ashtekar worked with was the densitized version, instead of the original Hamiltonian; that is, he worked with \tilde{H} = \sqrt{\operatorname{det}(q)} H. There were serious difficulties in promoting this quantity to a quantum operator. Moreover, although Ashtekar variables had the virtue of simplifying the Hamiltonian, they are complex. When one quantizes the theory, it is difficult to ensure that one recovers real general relativity as opposed to complex general relativity.

Quantum constraints as the equations of quantum general relativity

We now move on to demonstrate an important aspect of the quantum constraints. We consider Gauss' law only. First we state the classical result that the Poisson bracket of the smeared Gauss' law

G(\lambda) = \int d^3x \lambda^j (D_a E^a)^j
with the connections is

\{ G(\lambda) , A_a^i \} = \partial_a \lambda^i + g \epsilon^{ijk} A_a^j \lambda^k = (D_a \lambda)^i.

The quantum Gauss' law reads

\hat{G}_j \Psi (A) = - i D_a {\delta \Psi [A] \over \delta A_a^j} = 0.

If one smears the quantum Gauss' law and studies its action on the quantum state, one finds that the action of the constraint on the quantum state is equivalent to shifting the argument of \Psi by an infinitesimal (in the sense of the parameter \lambda being small) gauge transformation,

\Big [ 1 + \int d^3x \lambda^j (x) \hat{G}_j \Big]  \Psi (A) = \Psi [A + D \lambda] = \Psi [A],

and the last identity comes from the fact that the constraint annihilates the state. So the constraint, as a quantum operator, is imposing the same symmetry that its vanishing imposed classically: it is telling us that the functions \Psi [A] have to be gauge invariant functions of the connection. The same idea is true for the other constraints.

Therefore the two-step process in the classical theory of solving the constraints C_I = 0 (equivalent to solving the admissibility conditions for the initial data) and looking for the gauge orbits (solving the `evolution' equations) is replaced by a one-step process in the quantum theory, namely looking for solutions \Psi of the quantum equations \hat{C}_I \Psi = 0. This is because it obviously solves the constraint at the quantum level and it simultaneously looks for states that are gauge invariant because \hat{C}_I is the quantum generator of gauge transformations (gauge invariant functions are constant along the gauge orbits and thus characterize them).[11] Recall that, at the classical level, solving the admissibility conditions and evolution equations was equivalent to solving all of Einstein's field equations; this underlines the central role of the quantum constraint equations in canonical quantum gravity.

Introduction of the loop representation

It was in particular the inability to have good control over the space of solutions to the Gauss' law and spatial diffeomorphism constraints that led Rovelli and Smolin to consider a new representation - the loop representation in gauge theories and quantum gravity.[12]
We need the notion of a holonomy. A holonomy is a measure of how much the initial and final values of a spinor or vector differ after parallel transport around a closed loop; it is denoted

h_\gamma [A].

Knowledge of the holonomies is equivalent to knowledge of the connection, up to gauge equivalence. Holonomies can also be associated with an edge; under a gauge transformation these transform as

(h'_e)_{\alpha \beta} = U_{\alpha \gamma}^{-1} (x) (h_e)_{\gamma \sigma} U_{\sigma \beta} (y).

For a closed loop, x = y, and if we take the trace of this, that is, putting \alpha = \beta and summing, we obtain

(h'_e)_{\alpha \alpha} = U_{\alpha \gamma}^{-1} (x) (h_e)_{\gamma \sigma} U_{\sigma \alpha} (x) = [U_{\sigma \alpha} (x) U_{\alpha \gamma}^{-1} (x)] (h_e)_{\gamma \sigma} = \delta_{\sigma \gamma} (h_e)_{\gamma \sigma} = (h_e)_{\gamma \gamma}

or

\operatorname{Tr} h'_\gamma = \operatorname{Tr} h_\gamma.

The trace of a holonomy around a closed loop is written

W_\gamma [A]

and is called a Wilson loop. Thus Wilson loops are gauge invariant. The explicit form of the holonomy is

h_\gamma [A] = \mathcal{P} \exp \Big\{-\int_{\gamma_0}^{\gamma_1} ds \dot{\gamma}^a A_a^i (\gamma (s)) T_i \Big\}

where \gamma is the curve along which the holonomy is evaluated, s is a parameter along the curve, \mathcal{P} denotes path ordering (meaning factors for smaller values of s appear to the left), and T_i are matrices that satisfy the \operatorname{SU}(2) algebra

[T^i ,T^j] = 2i \epsilon^{ijk} T^k.

The Pauli matrices satisfy the above relation. It turns out that there are infinitely many more examples of sets of matrices that satisfy these relations, where each set comprises (N+1) \times (N+1) matrices with N = 1,2,3,\dots, and where none of these can be thought of as `decomposing' into two or more examples of lower dimension. They are called different irreducible representations of the \operatorname{SU}(2) algebra. The most fundamental representation is given by the Pauli matrices. The holonomy is labelled by a half integer N/2 according to the irreducible representation used.
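A small numerical sketch may make the algebra and the path ordering concrete. The Python fragment below is illustrative only (numpy and scipy are assumed to be available; the connection components are an arbitrary choice, and an explicit factor of -i is inserted so the toy holonomy built from Hermitian generators comes out unitary):

# Check [T^i, T^j] = 2i eps^{ijk} T^k for the Pauli matrices, then build a
# holonomy as a path-ordered product of small-step matrix exponentials.
import numpy as np
from scipy.linalg import expm

T = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        lhs = T[i] @ T[j] - T[j] @ T[i]
        rhs = 2j * sum(eps[i, j, k] * T[k] for k in range(3))
        assert np.allclose(lhs, rhs)          # the SU(2) algebra holds

def holonomy(A_of_s, s_vals):
    # Path ordering: factors with smaller s accumulate on the left.
    h = np.eye(2, dtype=complex)
    ds = s_vals[1] - s_vals[0]
    for s in s_vals:
        h = h @ expm(-1j * ds * sum(A_of_s(s)[i] * T[i] for i in range(3)))
    return h

A = lambda s: np.array([np.cos(s), np.sin(s), 0.5])  # toy connection components
h = holonomy(A, np.linspace(0.0, 2 * np.pi, 2000))
print(np.allclose(h.conj().T @ h, np.eye(2)))        # True: h is unitary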

The use of Wilson loops explicitly solves the Gauss gauge constraint. To handle the spatial diffeomorphism constraint we need to go over to the loop representation. As Wilson loops form a basis we can formally expand any Gauss gauge invariant function as,

\Psi [A] = \sum_\gamma \Psi [\gamma] W_\gamma [A] .

This is called the loop transform. We can see the analogy with going to the momentum representation in quantum mechanics (see Position and momentum space). There one has a basis of states \exp (ikx) labelled by a number k and one expands

\psi (x) = \int dk \psi (k) \exp (ikx),

and works with the coefficients of the expansion \psi (k).

The inverse loop transform is defined by

\Psi [\gamma] = \int [dA] \Psi [A] W_\gamma [A].

This defines the loop representation. Given an operator \hat{O} in the connection representation,

\Phi [A] = \hat{O} \Psi [A] \qquad Eq \; 1,

one should define the corresponding operator \hat{O}' on \Psi [\gamma] in the loop representation via,

\Phi [\gamma] = \hat{O}' \Psi [\gamma] \qquad Eq \; 2,

where \Phi [\gamma] is defined by the usual inverse loop transform,

\Phi [\gamma] = \int [dA] \Phi [A] W_\gamma [A] \qquad Eq \; 3.

A transformation formula giving the action of the operator \hat{O}' on \Psi [\gamma] in terms of the action of the operator \hat{O} on \Psi [A] is then obtained by equating the R.H.S. of Eq \; 2 with the R.H.S. of Eq \; 3 with Eq \; 1 substituted into Eq \; 3, namely

\hat{O}' \Psi [\gamma] = \int [dA] W_\gamma [A] \hat{O} \Psi [A],
 or

\hat{O}' \Psi [\gamma] = \int [dA] (\hat{O}^\dagger W_\gamma [A]) \Psi [A],

where by \hat{O}^\dagger we mean the operator \hat{O} but with the reverse factor ordering (remember from simple quantum mechanics where the product of operators is reversed under conjugation). We evaluate the action of this operator on the Wilson loop as a calculation in the connection representation, and rearrange the result as a manipulation purely in terms of loops (one should remember that when considering the action on the Wilson loop one should choose the operator one wishes to transform with the opposite factor ordering to the one chosen for its action on wavefunctions \Psi [A]). This gives the physical meaning of the operator \hat{O}'. For example, if \hat{O}^\dagger corresponded to a spatial diffeomorphism, then this can be thought of as keeping the connection field A of W_\gamma [A] where it is while performing a spatial diffeomorphism on \gamma instead. Therefore the meaning of \hat{O}' is a spatial diffeomorphism on \gamma, the argument of \Psi [\gamma].

In the loop representation we can then solve the spatial diffeomorphism constraint by considering functions of loops \Psi [\gamma] that are invariant under spatial diffeomorphisms of the loop \gamma. That is, we construct what mathematicians call knot invariants. This opened up an unexpected connection between knot theory and quantum gravity.

What about the Hamiltonian constraint? Let us go back to the connection representation. Any collection of non-intersecting Wilson loops satisfies Ashtekar's quantum Hamiltonian constraint. This can be seen from the following. With a particular ordering of terms and replacing \tilde{E}^a_i by a derivative, the action of the quantum Hamiltonian constraint on a Wilson loop is

\hat{\tilde{H}}^\dagger W_\gamma [A] = - \epsilon_{ijk} \hat{F}^k_{ab} {\delta \over \delta A_a^i} \; {\delta \over \delta A_b^j} W_\gamma [A].
When a derivative is taken it brings down the tangent vector, \dot{\gamma}^a, of the loop, \gamma. So we have something like

\hat{F}^i_{ab} \dot{\gamma}^a \dot{\gamma}^b.
However, as F^i_{ab} is anti-symmetric in the indices a and b, this vanishes (this assumes that \gamma is not discontinuous anywhere and so the tangent vector is unique).
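Written out, the symmetry argument is one line: relabelling the dummy indices a and b,

F^i_{ab} \dot{\gamma}^a \dot{\gamma}^b = - F^i_{ba} \dot{\gamma}^a \dot{\gamma}^b = - F^i_{ab} \dot{\gamma}^b \dot{\gamma}^a = - F^i_{ab} \dot{\gamma}^a \dot{\gamma}^b,

so the contraction equals its own negative and must vanish. Now let us go back to the loop representation.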

We consider wavefunctions \Psi [\gamma] that vanish if the loop has discontinuities and that are knot invariants. Such functions solve the Gauss law, the spatial diffeomorphism constraint and (formally) the Hamiltonian constraint. Thus we have identified an infinite set of exact (if only formal) solutions to all the equations of quantum general relativity![12] This generated a lot of interest in the approach and eventually led to LQG.

Geometric operators, the need for intersecting Wilson loops and spin network states

The easiest geometric quantity is the area. Let us choose coordinates so that the surface \Sigma is characterized by x^3 = 0. The area of a small parallelogram of the surface \Sigma is the product of the lengths of its sides times \sin \theta, where \theta is the angle between the sides. Say one edge is given by the vector \vec{u} and the other by \vec{v}; then,

A = \| \vec{u} \| \| \vec{v} \| \sin \theta = \sqrt{\| \vec{u} \|^2 \| \vec{v} \|^2 (1 - \cos^2 \theta)} \quad = \sqrt{\| \vec{u} \|^2 \| \vec{v} \|^2 - (\vec{u} \cdot \vec{v})^2}
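As a quick numerical sanity check of this identity (a toy illustration; numpy is assumed), the right-hand side agrees with the cross product area:

# |u||v| sin(theta) computed two ways for arbitrary vectors u, v.
import numpy as np

u = np.array([1.0, 2.0, 0.5])
v = np.array([-0.3, 1.0, 2.0])

A_formula = np.sqrt(np.dot(u, u) * np.dot(v, v) - np.dot(u, v)**2)
A_cross = np.linalg.norm(np.cross(u, v))   # |u x v| = |u||v| sin(theta)

print(np.isclose(A_formula, A_cross))      # True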

From this we get the area of the surface \Sigma to be given by

A_\Sigma = \int_\Sigma dx^1 dx^2 \sqrt{\operatorname{det}(q^{(2)})}

where \operatorname{det}(q^{(2)}) = q_{11} q_{22} - q_{12}^2 is the determinant of the metric induced on \Sigma. This can be rewritten as

\operatorname{det}(q^{(2)}) = {\epsilon^{3ab} \epsilon^{3cd} q_{ac} q_{bd} \over 2}.

The standard formula for an inverse matrix is

q^{ab} = {\epsilon^{acd} \epsilon^{bef} q_{ce} q_{df} \over 2\operatorname{det}(q)}

Note the similarity between this and the expression for \operatorname{det}(q^{(2)}). But in Ashtekar variables we have \tilde{E}^a_i \tilde{E}^{bi} = \operatorname{det}(q) q^{ab}. Therefore

A_\Sigma = \int_\Sigma dx^1 dx^2 \sqrt{\tilde{E}^3_i \tilde{E}^{3i}}.
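The epsilon identities used in this derivation are easy to verify numerically (a sketch; numpy is assumed, and the random symmetric metric q below is purely illustrative):

# Check the inverse-metric and induced-determinant epsilon identities.
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

rng = np.random.default_rng(1)
m = rng.normal(size=(3, 3))
q = m @ m.T + 3 * np.eye(3)                      # a symmetric, positive metric

# q^{ab} = eps^{acd} eps^{bef} q_{ce} q_{df} / (2 det q)
q_inv = np.einsum('acd,bef,ce,df->ab', eps, eps, q, q) / (2 * np.linalg.det(q))
print(np.allclose(q_inv, np.linalg.inv(q)))      # True

# det(q^(2)) = eps^{3ab} eps^{3cd} q_{ac} q_{bd} / 2
det_q2 = np.einsum('ab,cd,ac,bd->', eps[2], eps[2], q, q) / 2
print(np.isclose(det_q2, np.linalg.det(q[:2, :2])))   # True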

According to the rules of canonical quantization we should promote the triads \tilde{E}^3_i to quantum operators,

\hat{\tilde{E}}^3_i \sim {\delta \over \delta A_3^i}.

It turns out that the area A_\Sigma can be promoted to a well defined quantum operator despite the fact that we are dealing with a product of two functional derivatives and, worse, a square root to contend with as well.[13] Putting N = 2J, we talk of being in the J-th representation. We note that \sum_i T^i T^i = J (J+1) 1. This quantity is important in the final formula for the area spectrum. We simply state the result below,

\hat{A}_\Sigma W_\gamma [A] = 8 \pi \ell_{\text{Planck}}^2 \beta \sum_I \sqrt{j_I (j_I + 1)} W_\gamma [A]

where the sum is over all edges I of the Wilson loop that pierce the surface \Sigma.
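For orientation, the spectrum is easy to tabulate. A Python sketch (the numerical value of the Planck length and the choice \beta = 1 are assumptions of the illustration only):

# Area eigenvalues A = 8 pi l_P^2 beta sum_I sqrt(j_I (j_I + 1)).
import math

l_P2 = (1.616e-35)**2          # Planck length squared in m^2 (approximate)
beta = 1.0                     # illustrative value only

def area(js):
    return 8 * math.pi * l_P2 * beta * sum(math.sqrt(j * (j + 1)) for j in js)

print(area([0.5]))             # one spin-1/2 puncture: the smallest area quantum
print(area([0.5, 0.5, 1.0]))   # three punctures crossing the surface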

The formula for the volume of a region R is given by

V = \int_R d^3 x \sqrt{\operatorname{det}(q)} = \int_R d^3 x \sqrt{{1 \over 3!} \epsilon_{abc} \epsilon^{ijk} \tilde{E}^a_i \tilde{E}^b_j \tilde{E}^c_k}.

The quantization of the volume proceeds the same way as with the area. Each time a derivative is taken it brings down a tangent vector \dot{\gamma}^a, so when the volume operator acts on non-intersecting Wilson loops the result vanishes. Quantum states with non-zero volume must therefore involve intersections. Given the anti-symmetric summation in the formula for the volume, we would need intersections with at least three non-coplanar lines. Actually, it turns out that one needs at least four-valent vertices for the volume operator to be non-vanishing.

We now consider Wilson loops with intersections. We assume the real representation, where the gauge group is \operatorname{SU}(2). Wilson loops are an overcomplete basis as there are identities relating different Wilson loops. These come about from the fact that Wilson loops are based on matrices (the holonomy) and these matrices satisfy identities, the so-called Mandelstam identities. Given any two \operatorname{SU}(2) matrices A and B it is easy to check that,

\operatorname{Tr}(A) \operatorname{Tr}(B) = \operatorname{Tr}(AB) + \operatorname{Tr}(AB^{-1}).

This implies that given two loops \gamma and \eta that intersect, we will have,

W_\gamma [A] W_\eta [A] = W_{\gamma \circ \eta} [A] + W_{\gamma \circ \eta^{-1}} [A]

where by \eta^{-1} we mean the loop \eta traversed in the opposite direction and \gamma \circ \eta means the loop obtained by going around the loop \gamma and then along \eta. See figure below. Spin networks are certain linear combinations of intersecting Wilson loops designed to address the over completeness introduced by the Mandelstam identities.

Graphical representation of the Mandelstam identity relating different Wilson loops.
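The Mandelstam identity itself is a short numerical check (a sketch; numpy is assumed, with random SU(2) matrices generated for the test):

# Check Tr(A) Tr(B) = Tr(AB) + Tr(A B^{-1}) for random SU(2) matrices.
import numpy as np

def random_su2(rng):
    # exp(i n.sigma) with a random real 3-vector n is a generic SU(2) element.
    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]
    n = rng.normal(size=3)
    H = sum(n[i] * sigma[i] for i in range(3))
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(1j * vals)) @ vecs.conj().T

rng = np.random.default_rng(0)
A, B = random_su2(rng), random_su2(rng)
lhs = np.trace(A) * np.trace(B)
rhs = np.trace(A @ B) + np.trace(A @ np.linalg.inv(B))
print(np.isclose(lhs, rhs))    # True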

As mentioned above, the holonomy tells one how to propagate test spin-half particles. A spin network state assigns an amplitude to a set of spin-half particles tracing out a path in space, merging and splitting. These are described by spin networks \gamma: the edges are labelled by spins together with `intertwiners' at the vertices, which give a prescription for how to sum over different ways the spins are rerouted. The sum over reroutings is chosen so as to make the form of the intertwiner invariant under Gauss gauge transformations.

Real variables, modern analysis and LQG

Let us go into more detail about the technical difficulties associated with using Ashtekar's variables: with Ashtekar's variables one uses a complex connection, and so the relevant gauge group is actually \operatorname{SL}(2, \mathbb{C}) and not \operatorname{SU}(2). As \operatorname{SL}(2, \mathbb{C}) is non-compact, it creates serious problems for the rigorous construction of the necessary mathematical machinery. The group \operatorname{SU}(2), on the other hand, is compact and the relevant constructions needed have been developed.

As mentioned above, because Ashtekar's variables are complex, the result is complex general relativity. To recover the real theory one has to impose what are known as the reality conditions. These require that the densitized triad be real and that the real part of the Ashtekar connection equal the compatible spin connection (the compatibility condition being \nabla_a e_b^I = 0) determined by the densitized triad. The expression for the compatible connection \Gamma_a^i is rather complicated, and as such a non-polynomial formula enters through the back door.

Before we state the next difficulty we should give a definition; a tensor density of weight W transforms like an ordinary tensor, except that in addition the Wth power of the Jacobian,

J = \Big| {\partial x^a \over \partial x^{'b}} \Big|

appears as a factor, i.e.

{T'}^{a \dots}_{b \dots} = J^W {\partial x^{'a} \over \partial x^c} \dots {\partial x^d \over \partial x^{'b}} T^{c \dots}_{d \dots}.

It turns out that it is impossible, on general grounds, to construct a UV-finite, diffeomorphism non-violating operator corresponding to \sqrt{\operatorname{det}(q)} H. The reason is that the rescaled Hamiltonian constraint is a scalar density of weight two, while it can be shown that only scalar densities of weight one have a chance of resulting in a well defined operator. Thus, one is forced to work with the original unrescaled, density-weight-one Hamiltonian constraint. However, this is non-polynomial and the whole virtue of the complex variables is then questioned. In fact, all the solutions constructed for Ashtekar's Hamiltonian constraint only vanished at finite regularization; however, this violates spatial diffeomorphism invariance.

Without the implementation and solution of the Hamiltonian constraint no progress can be made and no reliable predictions are possible!

To overcome the first problem one works with the configuration variable

A_a^i = \Gamma_a^i + \beta K_a^i

where \beta is real (as pointed out by Barbero, who introduced real variables some time after Ashtekar's variables[14][15]). The Gauss law and the spatial diffeomorphism constraints are the same. In real Ashtekar variables the Hamiltonian is

H = {\epsilon_{ijk} F_{ab}^k \tilde{E}_i^a \tilde{E}_j^b \over \sqrt{\operatorname{det}(q)}} + 2 {\beta^2 + 1 \over \beta^2} {(\tilde{E}_i^a \tilde{E}_j^b - \tilde{E}_j^a \tilde{E}_i^b) \over \sqrt{\operatorname{det}(q)}} (A_a^i - \Gamma_a^i) (A_b^j - \Gamma_b^j) = H_E + H'.

The complicated relationship between \Gamma_a^i and the densitized triads causes serious problems upon quantization. It is with the choice \beta = \pm i that the second, more complicated, term is made to vanish. However, as mentioned above, \Gamma_a^i reappears in the reality conditions. Also, we still have the problem of the 1 / \sqrt{\operatorname{det}(q)} factor.

Thiemann was able to make it work for real \beta. First he could simplify the troublesome 1 / \sqrt{\operatorname{det}(q)} by using the identity

\{ A_c^k , V \} = {\epsilon_{abc} \epsilon^{ijk} \tilde{E}_i^a \tilde{E}_j^b \over \sqrt{\operatorname{det}(q)}}

where V is the volume. The A_c^k and V can be promoted to well defined operators in the loop representation and the Poisson bracket is replaced by a commutator upon quantization; this takes care of the first term. It turns out that a similar trick can be used to treat the second term. One introduces the quantity

K = \int d^3 x K_a^i \tilde{E}_i^a

and notes that

K_a^i = \{ A_a^i , K \}.

We are then able to write

A_a^i - \Gamma_a^i = \beta K_a^i = \beta \{ A_a^i , K \}.

The reason the quantity K is easier to work with at the time of quantization is that it can be written as

K = - \{ V , \int d^3 x H_E \}

where we have used that the integrated densitized trace of the extrinsic curvature, K, is the "time derivative of the volume".

In the long history of canonical quantum gravity, formulating the Hamiltonian constraint as a quantum operator (Wheeler–DeWitt equation) in a mathematically rigorous manner has been a formidable problem. It was in the loop representation that a mathematically well defined Hamiltonian constraint was finally formulated in 1996.[9] We leave more details of its construction to the article Hamiltonian constraint of LQG. This, together with the quantum versions of the Gauss law and spatial diffeomorphism constraints written in the loop representation, are the central equations of LQG (modern canonical quantum general relativity).

Finding the states that are annihilated by these constraints (the physical states), and finding the corresponding physical inner product, and observables is the main goal of the technical side of LQG.

A very important aspect of the Hamiltonian operator is that it only acts at vertices (a consequence of this is that Thiemann's Hamiltonian operator, like Ashtekar's operator, annihilates non-intersecting loops, except that now it is not just formal but has rigorous mathematical meaning). More precisely, its action is non-zero on vertices of valence three and greater, and results in a linear combination of new spin networks where the original graph has been modified by the addition of lines at each vertex together with a change in the labels of the adjacent links of the vertex.

Solving the quantum constraints

To make physical predictions one must solve, at least approximately, all the quantum constraint equations, and determine the physical inner product.
Before we move on to the constraints of LQG, let us consider certain simpler cases. We start with a kinematic Hilbert space \mathcal{H}_{\text{Kin}}, which is equipped with an inner product - the kinematic inner product \langle\phi, \psi\rangle_{\text{Kin}}.
i) Say we have constraints \hat{C}_I whose zero eigenvalues lie in their discrete spectrum. Solutions of the first constraint, \hat{C}_1, correspond to a subspace of the kinematic Hilbert space, \mathcal{H}_1 \subset \mathcal{H}_{\text{Kin}}. There will be a projection operator P_1 mapping \mathcal{H}_{\text{Kin}} onto \mathcal{H}_1. The kinematic inner product structure is easily employed to provide the inner product structure after solving this first constraint; the new inner product \langle\phi , \psi\rangle_1 is simply

\langle\phi , \psi\rangle_1 = \langle P_1 \phi , P_1 \psi\rangle_{\text{Kin}}.

The solutions are based on the same inner product and are states normalizable with respect to it.
ii) If zero is not contained in the point spectrum of all the \hat{C}_I, then there is no non-trivial solution \Psi \in \mathcal{H}_{\text{Kin}} to the system of quantum constraint equations \hat{C}_I \Psi = 0 for all I.
For example, the zero eigenvalue of the operator

\hat{C} = \Big( i {d \over dx} - k \Big)

on L_2 (\mathbb{R} , dx) lies in the continuous spectrum \mathbb{R}, but the formal "eigenstate" \exp (-ikx) is not normalizable in the kinematic inner product,

\int_{- \infty}^\infty dx \psi^* (x) \psi (x) = \int_{- \infty}^\infty dx e^{ikx} e^{-ikx} = \int_{- \infty}^\infty dx = \infty

and so does not belong to the kinematic Hilbert space \mathcal{H}_{\text{Kin}}. In these cases we take a dense subset \mathcal{S} of \mathcal{H}_{\text{Kin}} (intuitively this means any point in \mathcal{H}_{\text{Kin}} is either in \mathcal{S} or arbitrarily close to a point in \mathcal{S}) with very good convergence properties, and consider its dual space \mathcal{S}' (intuitively its elements map elements of \mathcal{S} onto finite complex numbers in a linear manner); then \mathcal{S} \subset \mathcal{H}_{\text{Kin}} \subset \mathcal{S}' (as \mathcal{S}' contains distributional functions). The constraint operator is then implemented on this larger dual space, which contains distributional functions, via the adjoint action, and one looks for solutions there. This comes at the price that the solutions must be given a new Hilbert space inner product with respect to which they are normalizable (see the article on rigged Hilbert spaces). In this case we have a generalized projection operator on the new space of states. We cannot use the above formula for the new inner product as it diverges; instead the new inner product is given by the simple modification of the above,

\langle\phi, \psi\rangle_1 = \langle P\phi, \psi\rangle_{\text{Kin}}.

The generalized projector P is known as a rigging map.
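For the toy constraint above these formal statements are easy to verify symbolically (a sympy sketch):

# exp(-ikx) is annihilated by C = (i d/dx - k) but has divergent kinematic
# norm, so it lives in the dual space S', not in H_Kin.
import sympy as sp

x, k = sp.symbols('x k', real=True)
psi = sp.exp(-sp.I * k * x)

print(sp.simplify(sp.I * sp.diff(psi, x) - k * psi))   # 0: solves the constraint

norm2 = sp.integrate(sp.simplify(sp.conjugate(psi) * psi), (x, -sp.oo, sp.oo))
print(norm2)                                           # oo: not normalizable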

Let us move on to LQG; additional complications will arise from the fact that the constraint algebra is not a Lie algebra, due to the bracket between two Hamiltonian constraints.

The Gauss law is solved by the use of spin network states. They provide a basis for the kinematic Hilbert space \mathcal{H}_{\text{Kin}}. The spatial diffeomorphism constraint has been solved. The induced inner product on \mathcal{H}_{\text{Diff}} (we do not pursue the details) has a very simple description in terms of spin network states; given two spin networks s and s', with associated spin network states \psi_s and \psi_{s'}, the inner product is 1 if s and s' are related to each other by a spatial diffeomorphism and zero otherwise.

The Hamiltonian constraint maps diffeomorphism invariant states onto non-diffeomorphism-invariant states, and so does not preserve the diffeomorphism Hilbert space \mathcal{H}_{\text{Diff}} (this is an unavoidable consequence of the operator algebra). This means that one cannot just solve the diffeomorphism constraint and then the Hamiltonian constraint. This problem can be circumvented by the introduction of the Master constraint; with its trivial operator algebra, one is then able in principle to construct the physical inner product from \mathcal{H}_{\text{Diff}}.

Spin foams

In loop quantum gravity (LQG), a spin network represents a "quantum state" of the gravitational field on a 3-dimensional hypersurface. The set of all possible spin networks (or, more accurately, "s-knots" - that is, equivalence classes of spin networks under diffeomorphisms) is countable; it constitutes a basis of the LQG Hilbert space.
In physics, a spin foam is a topological structure made out of two-dimensional faces that represents one of the configurations that must be summed over to obtain a Feynman path integral (functional integral) description of quantum gravity. It is closely related to loop quantum gravity.

Spin foam derived from the Hamiltonian constraint operator

The Hamiltonian constraint generates `time' evolution. Solving the Hamiltonian constraint should tell us how quantum states evolve in `time' from an initial spin network state to a final spin network state. One approach to solving the Hamiltonian constraint starts with what is called the Dirac delta function. This is a rather singular function of the real line, denoted \delta (x), that is zero everywhere except at x = 0, but whose integral is one. It can be represented as a Fourier integral,

\delta (x) = {1 \over 2 \pi} \int e^{ikx} dk.
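Numerically, truncating this integral at a finite cutoff |k| < K produces the nascent delta function \sin (Kx) / \pi x, whose total integral tends to one as K grows; a small illustration (numpy is assumed, and the cutoff values are arbitrary):

# The truncated Fourier integral sin(Kx)/(pi x) integrates to ~1 for large K.
import numpy as np

def nascent_delta(x, K):
    return np.sin(K * x) / (np.pi * x)

x = np.linspace(1e-6, 10.0, 100000)
dx = x[1] - x[0]
for K in (10.0, 100.0):
    # integrate over x > 0 and double, by symmetry; result approaches 1
    print(2 * np.sum(nascent_delta(x, K)) * dx)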

One can employ the idea of the delta function to impose the condition that the Hamiltonian constraint should vanish. It is obvious that

\prod_{x \in \Sigma} \delta (\hat{H} (x))

is non-zero only when \hat{H}(x) = 0 for all x in \Sigma. Using this we can `project' out solutions to the Hamiltonian constraint, and the physical inner product is then formally given by

\biggl\langle \prod_{x \in \Sigma} \delta (\hat{H} (x)) s_{\text{int}}, s_{\text{fin}} \biggr\rangle_{\text{Diff}}

where s_{\text{int}} is the initial spin network and s_{\text{fin}} is the final spin network. With analogy to the Fourier integral given above, this (generalized) projector can formally be written as

\int [d N] e^{i \int d^3 x N (x) \hat{H} (x)}.

The exponential can be expanded

\biggl\langle \int [d N] (1 + i \int d^3 x N (x) \hat{H} (x) + {i^2 \over 2!} [\int d^3 x N (x) \hat{H} (x)] [\int d^3 x' N (x') \hat{H} (x')] + \dots) s_{\text{int}}, s_{\text{fin}} \biggr\rangle_{\text{Diff}}

and each time a Hamiltonian operator acts it does so by adding a new edge at a vertex. The summation over different sequences of actions of \hat{H} can be visualized as a summation over different histories of `interaction vertices' in the `time' evolution sending the initial spin network to the final spin network. This then naturally gives rise to the two-complex (a combinatorial set of faces that join along edges, which in turn join on vertices) underlying the spin foam description; we evolve forward an initial spin network, sweeping out a surface, and the action of the Hamiltonian constraint operator is to produce a new planar surface starting at the vertex. We are able to use the action of the Hamiltonian constraint on the vertex of a spin network state to associate an amplitude to each "interaction" (in analogy to Feynman diagrams). See figure below. This opens up a way of trying to directly link canonical LQG to a path integral description. Now, just as spin networks describe quantum space, each configuration contributing to these path integrals, or sums over histories, describes `quantum space-time'. Because of their resemblance to soap foams and the way they are labeled, John Baez gave these `quantum space-times' the name `spin foams'.

 
The action of the Hamiltonian constraint translated to the path integral or so-called spin foam description. A single node splits into three nodes, creating a spin foam vertex. N (x_n) is the value of N at the vertex and H_{nop} are the matrix elements of the Hamiltonian constraint \hat{H}.

There are, however, severe difficulties with this particular approach; for example, the Hamiltonian operator is not self-adjoint, and so the exponential cannot be well defined in general. The most serious problem is that the \hat{H} (x)'s are not mutually commuting; it can then be shown that the formal quantity \int [d N] e^{i \int d^3 x N (x) \hat{H} (x)} cannot even define a (generalized) projector. The Master constraint (see below) does not suffer from these problems, and as such offers a way of connecting the canonical theory to the path integral formulation.

Spin foams from BF theory

It turns out there are alternative routes to formulating the path integral; however, their connection to the Hamiltonian formalism is less clear. One way is to start with the so-called BF theory. This is a simpler theory than general relativity; it has no local degrees of freedom and as such depends only on topological aspects of the fields. BF theory is what is known as a topological field theory.
Surprisingly, it turns out that general relativity can be obtained from BF theory by imposing a constraint.[16] BF theory involves a field B_{ab}^{IJ}, and if one chooses the field B to be the (anti-symmetric) product of two tetrads

B_{ab}^{IJ} = {1 \over 2} (E^I_a E^J_b - E^I_b E^J_a)

(tetrads are like triads but in four spacetime dimensions), one recovers general relativity. The condition that the B field be given by the product of two tetrads is called the simplicity constraint. The spin foam dynamics of the topological field theory is well understood. Given the spin foam `interaction' amplitudes for this simple theory, one then tries to implement the simplicity conditions to obtain a path integral for general relativity. The non-trivial task of constructing a spin foam model is then reduced to the question of how this simplicity constraint should be imposed in the quantum theory. The first attempt at this was the famous Barrett–Crane model.[17] However, this model was shown to be problematic; for example, there did not seem to be enough degrees of freedom to ensure the correct classical limit.[18] It has been argued that the simplicity constraint was imposed too strongly at the quantum level and should only be imposed in the sense of expectation values, just as with the Lorenz gauge condition \partial_\mu \hat{A}^\mu in the Gupta–Bleuler formalism of quantum electrodynamics. New models have now been put forward, sometimes motivated by imposing the simplicity conditions in a weaker sense.

Another difficulty here is that spin foams are defined on a discretization of spacetime. While this presents no problem for a topological field theory, as it has no local degrees of freedom, it presents problems for GR. This is known as the problem of triangulation dependence.

Modern formulation of spin foams

Just as imposing the classical simplicity constraint recovers general relativity from BF theory, one expects an appropriate quantum simplicity constraint will recover quantum gravity from quantum BF theory.

Much progress has been made with regard to this issue by Engle, Pereira, and Rovelli[19] and Freidel and Krasnov[20] in defining spin foam interaction amplitudes with much better behaviour.

An attempt to make contact between EPRL-FK spin foam and the canonical formulation of LQG has been made.[21]

Spin foam derived from the Master constraint operator

See below.

Spin foams from consistent discretisations

The semi-classical limit

What is the semiclassical limit?

The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters.[22] The classical limit is used with physical theories that predict non-classical behavior.
In physics, the correspondence principle states that the behavior of systems described by the theory of quantum mechanics (or by the old quantum theory) reproduces classical physics in the limit of large quantum numbers. In other words, it says that for large orbits and for large energies, quantum calculations must agree with classical calculations.[23]

The principle was formulated by Niels Bohr in 1920,[24] though he had previously made use of it as early as 1913 in developing his model of the atom.[25]

There are two basic requirements in establishing the semi-classical limit of any quantum theory:

i) reproduction of the Poisson brackets (of the diffeomorphism constraints in the case of general relativity). This is extremely important because, as noted above, the Poisson bracket algebra formed between the (smeared) constraints themselves completely determines the classical theory. This is analogous to establishing Ehrenfest's theorem;

ii) the specification of a complete set of classical observables whose corresponding operators (see complete set of commuting observables for the quantum mechanical definition of a complete set of observables) when acted on by appropriate semi-classical states reproduce the same classical variables with small quantum corrections (a subtle point is that states that are semi-classical for one class of observables may not be semi-classical for a different class of observables[26]).

This may be easily done, for example, in ordinary quantum mechanics for a particle but in general relativity this becomes a highly non-trivial problem as we will see below.

Why might LQG not have general relativity as its semiclassical limit?

Any candidate theory of quantum gravity must be able to reproduce Einstein's theory of general relativity as a classical limit of a quantum theory. This is not guaranteed because of a feature of quantum field theories, which is that they have different sectors; these are analogous to the different phases that come about in the thermodynamical limit of statistical systems. Just as different phases are physically different, so are different sectors of a quantum field theory. It may turn out that LQG belongs to an unphysical sector - one in which one does not recover general relativity in the semi-classical limit (in fact there might not be any physical sector at all).

Theorems establishing the uniqueness of the loop representation as defined by Ashtekar et al. (i.e. a certain concrete realization of a Hilbert space and associated operators reproducing the correct loop algebra - the realization that everybody was using) have been given by two groups (Lewandowski, Okolow, Sahlmann and Thiemann)[27] and (Christian Fleischhack).[28] Before this result was established it was not known whether there could be other examples of Hilbert spaces with operators invoking the same loop algebra, other realizations, not equivalent to the one that had been used so far. These uniqueness theorems imply no others exist and so if LQG does not have the correct semiclassical limit then this would mean the end of the loop representation of quantum gravity altogether.

Difficulties checking the semiclassical limit of LQG

There are difficulties in trying to establish that LQG gives Einstein's theory of general relativity in the semi-classical limit. There are a number of particular difficulties in establishing the semi-classical limit:
  1. There is no operator corresponding to infinitesimal spatial diffeomorphisms (it is not surprising that the theory has no generator of infinitesimal spatial `translations', as it predicts spatial geometry has a discrete nature; compare to the situation in condensed matter). Instead it must be approximated by finite spatial diffeomorphisms, and so the Poisson bracket structure of the classical theory is not exactly reproduced. This problem can be circumvented with the introduction of the so-called Master constraint (see below)[29]
  2. There is the problem of reconciling the discrete combinatorial nature of the quantum states with the continuous nature of the fields of the classical theory.
  3. There are serious difficulties arising from the structure of the Poisson brackets involving the spatial diffeomorphism and Hamiltonian constraints. In particular, the algebra of (smeared) Hamiltonian constraints does not close: it is proportional to a sum over infinitesimal spatial diffeomorphisms (which, as we have just noted, does not exist in the quantum theory) where the coefficients of proportionality are not constants but have non-trivial phase space dependence - as such it does not form a Lie algebra. However, the situation is much improved by the introduction of the Master constraint.[29]
  4. The semi-classical machinery developed so far is only appropriate to non-graph-changing operators; however, Thiemann's Hamiltonian constraint is a graph-changing operator - the new graph it generates has degrees of freedom upon which the coherent state does not depend, and so their quantum fluctuations are not suppressed. There is also the restriction, so far, that these coherent states are only defined at the kinematic level, and one now has to lift them to the level of \mathcal{H}_{Diff} and \mathcal{H}_{Phys}. It can be shown that Thiemann's Hamiltonian constraint is required to be graph-changing in order to resolve problem 3 in some sense. The Master constraint algebra, however, is trivial, and so the requirement that it be graph-changing can be lifted; indeed non-graph-changing Master constraint operators have been defined.
  5. Formulating observables for classical general relativity is a formidable problem by itself because of its non-linear nature and space-time diffeomorphism invariance. In fact a systematic approximation scheme to calculate observables has only been recently developed.[30][31]
Difficulties in trying to examine the semiclassical limit of the theory should not be confused with the theory having the wrong semiclassical limit.

Progress in demonstrating LQG has the correct semiclassical limit

Concerning issue 2 above, one can consider so-called weave states. Ordinary measurements of geometric quantities are macroscopic, and Planckian discreteness is smoothed out. The fabric of a T-shirt is analogous: at a distance it is a smooth curved two-dimensional surface, but on closer inspection we see that it is actually composed of thousands of one-dimensional linked threads. The image of space given by LQG is similar: consider a very large spin network formed by a very large number of nodes and links, each of Planck scale. Probed at a macroscopic scale, it appears as a three-dimensional continuous metric geometry.

Semiclassical machinery appropriate to graph-changing operators (problem 4 above) is, at the moment, still out of reach.

To make contact with familiar low energy physics it is mandatory to develop approximation schemes both for the physical inner product and for Dirac observables.

The spin foam models that have been intensively studied can be viewed as avenues toward approximation schemes for the physical inner product.

Markopoulou et al. adopted the idea of noiseless subsystems in an attempt to solve the problem of the low energy limit in background-independent quantum gravity theories.[32][33][34] The idea has even led to the intriguing possibility of the matter of the Standard Model being identified with emergent degrees of freedom from some versions of LQG (see the section below: LQG and related research programs).

Improved dynamics and the Master constraint

The Master constraint

Thiemann's Master constraint should not be confused with the master equation of random processes. The Master Constraint Programme for Loop Quantum Gravity (LQG) was proposed as a classically equivalent way to impose the infinite number of Hamiltonian constraint equations

H (x) = 0

(x being a continuous index) in terms of a single Master constraint,

M = \int d^3x {[H (x)]^2 \over \sqrt{\operatorname{det}(q(x))}},

which involves the square of the constraints in question. Note that the H (x) are infinitely many constraints, whereas the Master constraint is a single one. It is clear that if M vanishes then so do the infinitely many H (x)'s. Conversely, if all the H (x)'s vanish then so does M; therefore they are equivalent. The Master constraint M involves an appropriate averaging over all space and so is invariant under spatial diffeomorphisms (it is invariant under spatial "shifts" as it is a summation over all such spatial "shifts" of a quantity that transforms as a scalar). Hence its Poisson bracket with the (smeared) spatial diffeomorphism constraint, C (\vec{N}), is simple:

\{ M  , C (\vec{N}) \} = 0.

(it is SU(2) invariant as well). Also, since any quantity Poisson commutes with itself and the Master constraint is a single constraint, it satisfies

\{ M  , M \} = 0.

We also have the usual algebra between spatial diffeomorphisms. This represents a dramatic simplification of the Poisson bracket structure, and raises new hope in understanding the dynamics and establishing the semiclassical limit.[35]
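
The logic of trading infinitely many local constraints for one global squared constraint is easy to illustrate numerically. The following toy sketch (a hypothetical discretization of our own devising, written in Python, with the density factor set to 1) samples a constraint field H(x) on a grid and checks that the single number M vanishes exactly when every sampled constraint does:

import numpy as np

def master_constraint(H_values, dx=0.1):
    # Toy discretization of M = \int dx H(x)^2 (density weight set to 1):
    # one non-negative number built from many local constraints.
    return np.sum(H_values**2) * dx

x = np.linspace(0.0, 1.0, 11)
H_zero = np.zeros_like(x)                 # every local constraint satisfied
H_mixed = np.where(x < 0.5, 0.0, 1e-3)    # some local constraints violated

print(master_constraint(H_zero))          # 0.0: M vanishes
print(master_constraint(H_mixed))         # > 0: M detects the violation

Because each term enters squared, and hence non-negatively, no cancellation between different points is possible, which is exactly why M = 0 is equivalent to H (x) = 0 everywhere.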

An initial objection to the use of the Master constraint was that, on first sight, it did not seem to encode information about the observables: because the Master constraint is quadratic in the constraints, computing its Poisson bracket with any quantity gives a result proportional to the constraints, which therefore always vanishes when the constraints are imposed and as such does not select out particular phase space functions. However, it was realized that the condition

\{ \{ M  , O \} , O \}_{M = 0} = 0

is equivalent to O being a Dirac observable. So the Master constraint does capture information about the observables. Because of its significance this is known as the Master equation.[35]
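
To see why this condition works, expand the double bracket schematically, suppressing the integration and the density factor (a heuristic one-line check under these simplifications, not the full proof): with M \sim H^2,

\{ M , O \} \sim 2 H \{ H , O \}, \qquad \{ \{ M , O \} , O \} \sim 2 \{ H , O \}^2 + 2 H \{ \{ H , O \} , O \}.

On the constraint surface M = 0 (that is, H (x) = 0 everywhere) the terms proportional to H drop out, leaving the non-negative quantity \{ H , O \}^2; demanding that it vanish forces \{ H , O \} = 0 there, which is precisely the statement that O is a Dirac observable.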

That the Master constraint Poisson algebra is an honest Lie algebra opens up the possibility of using a certain method, known as group averaging, in order to construct solutions of the infinite number of Hamiltonian constraints, a physical inner product thereon, and Dirac observables, via what is known as refined algebraic quantization (RAQ).[36]

Testing the Master constraint

The constraints in their primitive form are rather singular; this was the reason for integrating them over test functions to obtain smeared constraints. However, it would appear that the equation for the Master constraint, given above, is even more singular, involving the product of two primitive constraints (although integrated over space). Squaring the constraint is dangerous, as it could lead to worsened ultraviolet behaviour of the corresponding operator, and hence the Master constraint programme must be approached with due care.

The Master constraint programme has been satisfactorily tested in a number of model systems with non-trivial constraint algebras, as well as free and interacting field theories.[37][38][39][40][41] The Master constraint for LQG was established as a genuine positive self-adjoint operator, and the physical Hilbert space of LQG was shown to be non-empty,[42] an obvious consistency test that LQG must pass to be a viable theory of quantum general relativity.

Applications of the Master constraint

The Master constraint has been employed in attempts to approximate the physical inner product and define more rigorous path integrals.[43][44][45][46]

The Consistent Discretizations approach to LQG[47][48] is an application of the Master constraint programme to construct the physical Hilbert space of the canonical theory.

Spin foam from the Master constraint

The Master constraint is easily generalized to incorporate the other constraints; it is then referred to as the extended Master constraint, denoted M_E. The extended Master constraint, which imposes both the Hamiltonian constraint and the spatial diffeomorphism constraint as a single operator, is

M_E = \int_\Sigma d^3x {H (x)^2 + q^{ab} V_a (x) V_b (x) \over \sqrt{\operatorname{det}(q(x))}}.

Setting this single constraint to zero is equivalent to H (x) = 0 and V_a (x) = 0 for all x in \Sigma. This constraint implements the spatial diffeomorphism and Hamiltonian constraints at the same time on the kinematic Hilbert space. The physical inner product is then defined as



\langle\phi, \psi\rangle_{\text{Phys}} = \lim_{T \rightarrow \infty} \biggl\langle\phi, \int_{-T}^T dt e^{i t \hat{M}_E} \psi\biggr\rangle

(since, up to an overall constant, \delta (\hat{M}_E) = \lim_{T \rightarrow \infty} \int_{-T}^T dt e^{i t \hat{M}_E}).

A spin foam representation of this expression is obtained by splitting the t-parameter into discrete steps and writing

e^{i t \hat{M}_E} = \lim_{n \rightarrow \infty} [e^{i t \hat{M}_E / n}]^n = \lim_{n \rightarrow \infty} [1 + i t \hat{M}_E / n]^n.

The spin foam description then follows from the application of [1 + i t \hat{M}_E / n] on a spin network, resulting in a linear combination of new spin networks whose graphs and labels have been modified. Obviously an approximation is made by truncating the value of n to some finite integer. An advantage of the extended Master constraint is that we are working at the kinematic level, which so far is the only level at which we have access to semiclassical coherent states. Moreover, one can find non-graph-changing versions of this Master constraint operator, which are the only type of operators appropriate for these coherent states.
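
The group-averaging formula above can be illustrated in a finite-dimensional toy model (an illustration of the operator identity only, not of LQG itself). In the Python sketch below, with assumed toy data, a Hermitian matrix stands in for \hat{M}_E, and the average (1/2T) \int_{-T}^T dt e^{i t M} converges, as T grows, to the orthogonal projector onto the kernel of M, the analogue of the physical states:

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the extended Master constraint: Hermitian, with a known
# two-dimensional kernel (the 'physical' subspace).
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
eigs = np.array([0.0, 0.0, 1.3, -0.7])
M = Q @ np.diag(eigs) @ Q.T

def group_average(M, T):
    # (1/2T) * integral_{-T}^{T} exp(i t M) dt, computed in the eigenbasis:
    # each eigenvalue lam contributes a factor sin(T lam)/(T lam).
    lam, V = np.linalg.eigh(M)
    w = np.ones_like(lam)
    big = np.abs(T * lam) > 1e-12
    w[big] = np.sin(T * lam[big]) / (T * lam[big])
    return V @ np.diag(w) @ V.T

P_kernel = Q[:, :2] @ Q[:, :2].T          # exact projector onto the kernel
for T in (1e1, 1e3, 1e5):
    print(T, np.linalg.norm(group_average(M, T) - P_kernel))  # error ~ 1/T

The non-zero eigenvalues are suppressed like 1/T while the zero eigenvalues survive with weight 1, which is the finite-dimensional shadow of projecting onto solutions of \hat{M}_E = 0.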

Algebraic quantum gravity

The Master constraint programme has evolved into a fully combinatorial treatment of gravity known as Algebraic Quantum Gravity (AQG).[49] While AQG is inspired by LQG, it differs drastically from it because in AQG there is fundamentally no topology or differential structure: it is background independent in a more generalized sense and could possibly have something to say about topology change. In this new formulation of quantum gravity the existing semiclassical machinery, which is only viable for non-graph-changing operators, can be employed, and progress has been made in establishing that it has the correct semiclassical limit and in providing contact with familiar low energy physics.[50][51] See Thiemann's book for details.

Physical applications of LQG

Black hole entropy

The Immirzi parameter (also known as the Barbero-Immirzi parameter) is a numerical coefficient appearing in loop quantum gravity. It may take real or imaginary values.
[Image: An artist's depiction of two black holes merging, a process in which the laws of thermodynamics are upheld.]

Black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. The no hair conjecture of general relativity states that a black hole is characterized only by its mass, its charge, and its angular momentum; hence, it has no entropy. It appears, then, that one can violate the second law of thermodynamics by dropping an object with nonzero entropy into a black hole.[52] Work by Stephen Hawking and Jacob Bekenstein showed that one can preserve the second law of thermodynamics by assigning to each black hole a black-hole entropy
S_{\text{BH}} = \frac{k_{\text{B}}A}{4\ell_{\text{P}}^2},
where A is the area of the hole's event horizon, k_{\text{B}} is the Boltzmann constant, and \ell_{\text{P}} = \sqrt{G\hbar/c^{3}} is the Planck length.[53] The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle.[52]
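
The Bekenstein–Hawking formula is easy to evaluate numerically. The following quick sketch in Python (with rounded SI values for the constants, and a solar-mass Schwarzschild black hole chosen purely as an example) computes the horizon area and the corresponding entropy:

import math

G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23  # SI units
M_sun = 1.989e30                       # kg

l_P2 = G * hbar / c**3                 # Planck length squared, ~2.6e-70 m^2
r_s = 2 * G * M_sun / c**2             # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2               # horizon area, ~1.1e8 m^2

S = k_B * A / (4 * l_P2)               # Bekenstein-Hawking entropy
print(S, S / k_B)                      # ~1.5e54 J/K, i.e. S/k_B ~ 1e77

The dimensionless result S/k_B of order 10^77 for a single solar-mass black hole conveys how enormous this entropy is: roughly one unit of k_B per four Planck areas of horizon.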

An oversight in the application of the no-hair theorem is the assumption that the relevant degrees of freedom accounting for the entropy of the black hole must be classical in nature; what if they were purely quantum mechanical instead and had non-zero entropy? Actually, this is what is realized in the LQG derivation of black hole entropy, and it can be seen as a consequence of the theory's background independence: the classical black hole spacetime comes about from the semiclassical limit of the quantum state of the gravitational field, but there are many quantum states that have the same semiclassical limit. Specifically, in LQG[54] it is possible to associate a quantum geometrical interpretation to the microstates: these are the quantum geometries of the horizon which are consistent with the area, A, of the black hole and the topology of the horizon (i.e. spherical). LQG offers a geometric explanation of the finiteness of the entropy and of its proportionality to the area of the horizon.[55][56] These calculations have been generalized to rotating black holes.[57]

[Image: Representation of quantum geometries of the horizon. Polymer excitations in the bulk puncture the horizon, endowing it with quantized area. Intrinsically the horizon is flat except at punctures, where it acquires a quantized deficit angle or quantized amount of curvature. These deficit angles add up to 4 \pi.]

It is possible to derive, from the covariant formulation of the full quantum theory (spinfoam), the correct relation between energy and area (the first law), the Unruh temperature, and the distribution that yields the Hawking entropy.[58] The calculation makes use of the notion of a dynamical horizon and is done for non-extremal black holes.

A recent success of the theory in this direction is the computation of the entropy of all non-singular black holes directly from the theory, independent of the Immirzi parameter.[59] The result is the expected formula S = A/4, where S is the entropy and A the area of the black hole, derived by Bekenstein and Hawking on heuristic grounds. This is the only known derivation of this formula from a fundamental theory for the case of generic non-singular black holes. Older attempts at this calculation had difficulties: although loop quantum gravity predicted that the entropy of a black hole is proportional to the area of the event horizon, the result depended on a crucial free parameter in the theory, the above-mentioned Immirzi parameter. Since there is no known computation of the Immirzi parameter, it had to be fixed by demanding agreement with Bekenstein and Hawking's calculation of the black hole entropy.

Loop quantum cosmology

The popular and technical literature makes extensive reference to the LQG-related topic of loop quantum cosmology. LQC was mainly developed by Martin Bojowald, and it was popularized in Scientific American for predicting a Big Bounce prior to the Big Bang. Loop quantum cosmology (LQC) is a symmetry-reduced model of classical general relativity, quantized using methods that mimic those of loop quantum gravity (LQG), that predicts a "quantum bridge" between contracting and expanding cosmological branches.
Achievements of LQC have been the resolution of the Big Bang singularity, the prediction of a Big Bounce, and a natural mechanism for cosmological inflation.

LQC models share features of LQG, and so LQC provides a useful toy model. However, the results obtained are subject to the usual restriction that a truncated classical theory, then quantized, might not display the true behaviour of the full theory, due to artificial suppression of degrees of freedom that might have large quantum fluctuations in the full theory. It has been argued that singularity avoidance in LQC comes about by mechanisms only available in these restrictive models, and that singularity avoidance in the full theory can still be obtained, but by a more subtle feature of LQG.[60][61]

Loop Quantum Gravity phenomenology

Quantum gravity effects are notoriously difficult to measure because the Planck length is so incredibly small. However, physicists have recently started to consider the possibility of measuring quantum gravity effects, mostly through astrophysical observations and gravitational wave detectors.

Background independent scattering amplitudes

Loop quantum gravity is formulated in a background-independent language. No spacetime is assumed a priori; rather, it is built up by the states of the theory themselves. However, scattering amplitudes are derived from n-point functions (correlation functions), and these, formulated in conventional quantum field theory, are functions of points of a background space-time. The relation between the background-independent formalism and the conventional formalism of quantum field theory on a given spacetime is far from obvious, and it is far from obvious how to recover low-energy quantities from the full background-independent theory. One would like to derive the n-point functions of the theory from the background-independent formalism, in order to compare them with the standard perturbative expansion of quantum general relativity and therefore check that loop quantum gravity yields the correct low-energy limit.

A strategy for addressing this problem has been suggested;[62] the idea is to study the boundary amplitude, namely a path integral over a finite space-time region, seen as a function of the boundary value of the field.[63] In conventional quantum field theory, this boundary amplitude is well-defined[64][65] and codes the physical information of the theory; it does so in quantum gravity as well, but in a fully background-independent manner.[66] A generally covariant definition of n-point functions can then be based on the idea that the distance between physical points (the arguments of the n-point function) is determined by the state of the gravitational field on the boundary of the spacetime region considered.

Progress has been made in calculating background independent scattering amplitudes this way with the use of spin foams. This is a way to extract physical information from the theory. Claims to have reproduced the correct behaviour for graviton scattering amplitudes and to have recovered classical gravity have been made. "We have calculated Newton's law starting from a world with no space and no time." - Carlo Rovelli.

Gravitons, string theory, supersymmetry, extra dimensions in LQG

Some quantum theories of gravity posit a spin-2 quantum field that is quantized, giving rise to gravitons. In string theory one generally starts with quantized excitations on top of a classically fixed background. This theory is thus described as background dependent. Particles like photons as well as changes in the spacetime geometry (gravitons) are both described as excitations on the string worldsheet. While string theory is "background dependent", the choice of background, like a gauge fixing, does not affect the physical predictions. This is not the case, however, for quantum field theories, which give different predictions for different backgrounds. In contrast, loop quantum gravity, like general relativity, is manifestly background independent, eliminating the (in some sense) "redundant" background required in string theory. Loop quantum gravity, like string theory, also aims to overcome the nonrenormalizable divergences of quantum field theories.
LQG never introduces a background and excitations living on this background, so LQG does not use gravitons as building blocks. Instead one expects that one may recover a kind of semiclassical limit or weak field limit where something like "gravitons" will show up again. In contrast, gravitons play a key role in string theory where they are among the first (massless) level of excitations of a superstring.

LQG differs from string theory in that it is formulated in 3 and 4 dimensions and without supersymmetry or Kaluza–Klein extra dimensions, while the latter requires both to be true. There is no experimental evidence to date that confirms string theory's predictions of supersymmetry and Kaluza–Klein extra dimensions. In a 2003 paper, "A dialog on quantum gravity",[67] Carlo Rovelli regards the fact that LQG is formulated in 4 dimensions and without supersymmetry as a strength of the theory, as it represents the most parsimonious explanation consistent with current experimental results, over its rival string/M-theory. Proponents of string theory will often point to the fact that, among other things, it demonstrably reproduces the established theories of general relativity and quantum field theory in the appropriate limits, which loop quantum gravity has struggled to do. In that sense string theory's connection to established physics may be considered more reliable and less speculative, at the mathematical level. Peter Woit in Not Even Wrong and Lee Smolin in The Trouble with Physics regard string/M-theory to be in conflict with currently known experimental results.

Since LQG has been formulated in 4 dimensions (with and without supersymmetry), and M-theory requires supersymmetry and 11 dimensions, a direct comparison between the two has not been possible. It is possible to extend the mainstream LQG formalism to higher-dimensional supergravity, that is, general relativity with supersymmetry and Kaluza–Klein extra dimensions, should experimental evidence establish their existence. It would therefore be desirable to have higher-dimensional supergravity loop quantizations at one's disposal in order to compare these approaches; in fact a series of recent papers have been published attempting just this.[68][69][70][71][72][73][74][75] Most recently, Thiemann et al. have made progress toward calculating black hole entropy for supergravity in higher dimensions. It will be interesting to compare these results to the corresponding superstring calculations.[76][77]

As of April 2013, the LHC has failed to find evidence of supersymmetry or Kaluza–Klein extra dimensions, which has encouraged LQG researchers. Shaposhnikov, in his paper "Is there a new physics between electroweak and Planck scales?", has proposed the neutrino minimal standard model,[78] which claims that the most parsimonious theory is the standard model extended with neutrinos, plus gravity; that extra dimensions, GUT physics, supersymmetry, and string/M-theory physics are unrealized in nature; and that any theory of quantum gravity must be four dimensional, like loop quantum gravity.

LQG and related research programs

Several research groups have attempted to combine LQG with other research programs: Johannes Aastrup, Jesper M. Grimstrup et al. combine noncommutative geometry with loop quantum gravity;[79] Laurent Freidel, Simone Speziale, et al., spinors and twistor theory with loop quantum gravity;[80] and Lee Smolin et al., Verlinde entropic gravity with loop gravity.[81] Stephon Alexander, Antonino Marciano and Lee Smolin have attempted to explain the origins of weak force chirality in terms of Ashtekar's variables, which describe gravity as chiral,[82] and to combine LQG with Yang–Mills fields[83] in four dimensions. Sundance Bilson-Thompson, Hackett et al.[84][85] have attempted to introduce the Standard Model via LQG's degrees of freedom as an emergent property (by employing the idea of noiseless subsystems, a useful notion introduced in a more general situation for constrained systems by Fotini Markopoulou-Kalamara et al.[86]). LQG has also drawn philosophical comparisons with causal dynamical triangulation[87] and asymptotically safe gravity,[88] and the spinfoam with group field theory and the AdS/CFT correspondence.[89] Smolin and Wen have suggested combining LQG with string-net liquids and tensors, and Smolin and Fotini Markopoulou-Kalamara with Quantum Graphity. There is also the consistent discretizations approach. In addition to what has already been mentioned above, Pullin and Gambini provide a framework to connect the path integral and canonical approaches to quantum gravity; they may help reconcile the spin foam and canonical loop representation approaches. Recent research by Chris Duston and Matilde Marcolli introduces topology change via topspin networks.[90]

Problems and comparisons with alternative approaches

Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail.
Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)?[7] Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very small or very large scales or in other extreme circumstances that flow from a quantum gravity theory?

The theory of LQG is one possible solution to the problem of quantum gravity, as is string theory. There are substantial differences however. For example, string theory also addresses unification, the understanding of all known forces and particles as manifestations of a single entity, by postulating extra dimensions and so-far unobserved additional particles and symmetries. Contrary to this, LQG is based only on quantum theory and general relativity and its scope is limited to understanding the quantum aspects of the gravitational interaction. On the other hand, the consequences of LQG are radical, because they fundamentally change the nature of space and time and provide a tentative but detailed physical and mathematical picture of quantum spacetime.

Presently, no semiclassical limit recovering general relativity has been shown to exist. This means it remains unproven that LQG's description of spacetime at the Planck scale has the right continuum limit (described by general relativity with possible quantum corrections). Specifically, the dynamics of the theory is encoded in the Hamiltonian constraint, but there is no candidate Hamiltonian.[91] Other technical problems include finding off-shell closure of the constraint algebra and the physical inner product vector space, coupling to the matter fields of quantum field theory, and the fate of the renormalization of the graviton in perturbation theory, which leads to ultraviolet divergences beyond 2 loops (see one-loop Feynman diagram).[91]

While there has been a recent proposal relating to observation of naked singularities,[92] and doubly special relativity as a part of a program called loop quantum cosmology, there is no experimental observation for which loop quantum gravity makes a prediction not made by the Standard Model or general relativity (a problem that plagues all current theories of quantum gravity). Because of the above-mentioned lack of a semiclassical limit, LQG has not yet even reproduced the predictions made by general relativity.

An alternative criticism is that general relativity may be an effective field theory, and therefore quantization ignores the fundamental degrees of freedom.

Holographic principle

From Wikipedia, the free encyclopedia

The holographic principle is a property of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a boundary to the region—preferably a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string-theory interpretation by Leonard Susskind[1] who combined his ideas with previous ones of 't Hooft and Charles Thorn.[1][2] As pointed out by Raphael Bousso,[3] Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way.

In a larger sense, the theory suggests that the entire universe can be seen as a two-dimensional information structure "painted" on the cosmological horizon, such that the three dimensions we observe are an effective description only at macroscopic scales and at low energies. Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a finite area and grows with time.[4][5]

The holographic principle was inspired by black hole thermodynamics, which conjectures that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory.[6] However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law, hence in principle larger than those of a black hole. These are the so-called "Wheeler's bags of gold". The existence of such solutions is in conflict with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood.[7]

Black hole entropy

An object with entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein's equations, they were thought not to have any entropy either.
But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics. If one throws a hot gas with entropy into a black hole, once it crosses the event horizon, the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down. One way of salvaging the second law is if black holes are in fact random objects, with an enormous entropy whose increase is greater than the entropy carried by the gas.

Bekenstein assumed that black holes are maximum entropy objects—that they have more entropy than anything else in the same volume. In a sphere of radius R, the entropy in a relativistic gas increases as the energy increases. The only known limit is gravitational; when there is too much energy the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is directly proportional to the area of the event horizon.[8]

Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase, but at first, he did not take the analogy too seriously.

Hawking knew that if the horizon area were an actual entropy, black holes would have to radiate. When heat is added to a thermal system, the change in entropy is the increase in mass-energy divided by temperature:

{\rm d}S = \frac{{\rm d}M}{T}.
If black holes have a finite entropy, they should also have a finite temperature. In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but they would also have to emit them in the right amount to maintain detailed balance.

Time independent solutions to field equations don't emit radiation, because a time independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do, and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking's calculation fixed the constant of proportionality at 1/4; the entropy of a black hole is one quarter its horizon area in Planck units.[9]
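
The two statements, the first law dS = dM/T and the area formula with coefficient 1/4, can be checked against each other numerically. The Python sketch below (restoring SI units, so dE = c^2 dM, and using the standard Hawking temperature T = \hbar c^3 / 8 \pi G M k_B; constants rounded as in the earlier snippet) integrates the first law from zero mass up to a solar mass and compares the result with k_B A / 4 \ell_P^2:

import math

G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23  # SI units
M_sun = 1.989e30                            # kg

def T_hawking(M):
    # Hawking temperature of a Schwarzschild black hole of mass M.
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(T_hawking(M_sun))                     # ~6e-8 K

# Integrate the first law dS = c^2 dM / T(M) by the midpoint rule.
n = 100000
dM = M_sun / n
S = sum(c**2 * dM / T_hawking((i + 0.5) * dM) for i in range(n))

l_P2 = G * hbar / c**3                      # Planck length squared
A = 4 * math.pi * (2 * G * M_sun / c**2)**2 # horizon area
print(S, k_B * A / (4 * l_P2))              # the two entropies agree

Both routes give S = 4 \pi G k_B M^2 / \hbar c, confirming that the temperature Hawking found is exactly the one that makes the horizon area an entropy with the coefficient 1/4.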

The entropy is proportional to the logarithm of the number of microstates, the ways a system can be configured microscopically while leaving the macroscopic description unchanged. Black hole entropy is deeply puzzling: it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not the volume in the interior.[10]

Later, Raphael Bousso came up with a covariant version of the bound based upon null sheets.

Black hole information paradox

Hawking's calculation suggested that the radiation which black holes emit is not related in any way to the matter that they absorb. The outgoing light rays start exactly at the edge of the black hole and spend a long time near the horizon, while the infalling matter only reaches the horizon much later. The infalling and outgoing mass/energy only interact when they cross. It is implausible that the outgoing state would be completely determined by some tiny residual scattering.
Hawking interpreted this to mean that when black holes absorb some photons in a pure state described by a wave function, they re-emit new photons in a thermal mixed state described by a density matrix. This would mean that quantum mechanics would have to be modified, because in quantum mechanics, states which are superpositions with probability amplitudes never become states which are probabilistic mixtures of different possibilities.[note 1]

Troubled by this paradox, Gerard 't Hooft analyzed the emission of Hawking radiation in more detail. He noted that when Hawking radiation escapes, there is a way in which incoming particles can modify the outgoing particles. Their gravitational field would deform the horizon of the black hole, and the deformed horizon could produce different outgoing particles than the undeformed horizon. When a particle falls into a black hole, it is boosted relative to an outside observer, and its gravitational field assumes a universal form. 't Hooft showed that this field makes a logarithmic tent-pole shaped bump on the horizon of a black hole, and like a shadow, the bump is an alternate description of the particle's location and mass. For a four-dimensional spherical uncharged black hole, the deformation of the horizon is similar to the type of deformation which describes the emission and absorption of particles on a string-theory world sheet. Since the deformations on the surface are the only imprint of the incoming particle, and since these deformations would have to completely determine the outgoing particles, 't Hooft believed that the correct description of the black hole would be by some form of string theory.

This idea was made more precise by Leonard Susskind, who had also been developing holography, largely independently. Susskind argued that the oscillation of the horizon of a black hole is a complete description[note 2] of both the infalling and outgoing matter, because the world-sheet theory of string theory was just such a holographic description. While short strings have zero entropy, he could identify long highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes.

This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way, assuming the string-theoretical description is complete, unambiguous and non-redundant.[12] The space-time in quantum gravity would emerge as an effective description of the theory of oscillations of a lower-dimensional black-hole horizon, which suggests that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory.

In 1995, Susskind, along with collaborators Tom Banks, Willy Fischler, and Stephen Shenker, presented a formulation of the new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory. The Matrix theory they proposed was first suggested as a description of two branes in 11-dimensional supergravity by Bernard de Wit, Jens Hoppe, and Hermann Nicolai. The latter authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory. In 1997, Juan Maldacena gave the first holographic descriptions of a higher-dimensional object, the 3+1-dimensional type IIB membrane, which resolved a long-standing problem of finding a string description which describes a gauge theory. These developments simultaneously explained how string theory is related to some forms of supersymmetric quantum field theories.

Limit on information density

Entropy, if considered as information, is measured in bits. The total quantity of bits is related to the total degrees of freedom of matter/energy.

For a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume, suggesting that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles. As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, then the degrees of freedom of the original particle must be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level, and that the fundamental particle is a bit (1 or 0) of information.
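
The Bekenstein bound referred to here can be written S \le 2 \pi k_B R E / \hbar c for a system of total energy E fitting inside a sphere of radius R. A short numerical sketch in Python (same rounded SI constants as in the earlier snippets) checks the statement made earlier that a black hole saturates the bound:

import math

G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23  # SI units
M = 1.989e30                           # kg, one solar mass as an example

R = 2 * G * M / c**2                   # Schwarzschild radius as bounding radius
E = M * c**2                           # total mass-energy

S_bound = 2 * math.pi * k_B * R * E / (hbar * c)  # Bekenstein bound
S_BH = 4 * math.pi * G * k_B * M**2 / (hbar * c)  # Bekenstein-Hawking entropy
print(S_bound, S_BH)                   # equal up to rounding: bound saturated

Substituting R = 2GM/c^2 and E = Mc^2 into the bound reproduces the Bekenstein–Hawking entropy exactly, which is the sense in which a black hole is the maximum-entropy object for its size.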

The most rigorous realization of the holographic principle is the AdS/CFT correspondence by Juan Maldacena. However, J. D. Brown and Marc Henneaux had already rigorously proved in 1986 that the asymptotic symmetry of 2+1 dimensional gravity gives rise to a Virasoro algebra, whose corresponding quantum theory is a 2-dimensional conformal field theory.[13]

High-level summary

The physical universe is widely seen to be composed of "matter" and "energy". In his 2003 article published in Scientific American magazine, Jacob Bekenstein summarized a current trend started by John Archibald Wheeler, which suggests scientists may "regard the physical world as made of information, with energy and matter as incidentals." Bekenstein asks "Could we, as William Blake memorably penned, 'see a world in a grain of sand,' or is that idea no more than 'poetic license,'"[14] referring to the holographic principle.

Unexpected connection

Bekenstein's topical overview "A Tale of Two Entropies" describes potentially profound implications of Wheeler's trend, in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of the American applied mathematician Claude E. Shannon introduced today's most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, relies on Shannon entropy.

In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the "disorder" in a physical system of matter and energy. In 1877 the Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic "chunk" of matter could be in while still looking like the same macroscopic "chunk". As an example, for the air in a room, its thermodynamic entropy would be proportional to the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room, and all the ways they could be moving.

Energy, matter, and information equivalence

Shannon's efforts to find a way to quantify the information contained in, for example, an e-mail message led him unexpectedly to a formula with the same form as Boltzmann's. In an article in the August 2003 issue of Scientific American titled "Information in the Holographic Universe", Bekenstein summarizes that "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement..." of matter and energy. The only salient difference between the thermodynamic entropy of physics and the Shannon entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information, and so the difference is merely a matter of convention.
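
That "matter of convention" is a single multiplicative constant: one bit equals k_B \ln 2 of thermodynamic entropy. A toy sketch in Python (with a made-up probability distribution, purely for illustration) makes the conversion explicit:

import math

k_B = 1.381e-23                        # J/K

def shannon_bits(probs):
    # Shannon entropy H = -sum p log2 p, measured in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.25]              # toy distribution over microstates
H = shannon_bits(probs)                # 1.5 bits
S = k_B * math.log(2) * H              # the same quantity in J/K

print(H, S)                            # 1.5 bits ~ 1.44e-23 J/K

For equiprobable microstates the Shannon formula reduces to the logarithm of their count, which is exactly Boltzmann's expression up to the factor k_B.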

The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary.[10]

Recent work

Nature[15] presents two papers[16][17] authored by Yoshifumi Hyakutake that bring computational evidence that Maldacena's conjecture is true. One paper computes the internal energy of a black hole, the position of its event horizon, its entropy and other properties based on the predictions of string theory and the effects of virtual particles. The other calculates the internal energy of the corresponding lower-dimensional cosmos with no gravity. The two simulations match. These papers have received positive appreciation from Maldacena himself and from Leonard Susskind, one of the founders of string theory. The papers do not suggest that the universe we actually live in is a hologram, and they are not an actual proof of Maldacena's conjecture for all cases, but a demonstration that the conjecture works for a particular theoretical case: the situation they examine is a hypothetical universe, not a universe necessarily like ours. The new work is a mathematical test that verifies the AdS/CFT correspondence for a particular situation.[18]

Experimental tests

The Fermilab physicist Craig Hogan claims that the holographic principle would imply quantum fluctuations in spatial position[19] that would lead to apparent background noise or "holographic noise" measurable at gravitational wave detectors, in particular GEO 600.[20] However, these claims have not been widely accepted, or cited, among quantum gravity researchers, and they appear to be in direct conflict with string theory calculations.[21]

Analyses in 2011 of measurements of the gamma ray burst GRB 041219A (recorded in 2004 by the INTEGRAL space observatory, launched in 2002 by the European Space Agency) show that Craig Hogan's noise is absent down to a scale of 10−48 meters, as opposed to the scale of 10−35 meters predicted by Hogan and the scale of 10−16 meters found in measurements of the GEO 600 instrument.[22] Research continues at Fermilab under Hogan as of 2013.[23]

Jacob Bekenstein also claims to have found a way to test the holographic principle with a tabletop photon experiment.[24]

Lie point symmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_point_symmetry     ...