
Monday, February 2, 2015

Loop quantum gravity


From Wikipedia, the free encyclopedia

Loop quantum gravity (LQG) is a theory that attempts to describe the quantum properties of the universe and gravity. It is also a theory of quantum space and quantum time because, according to general relativity, the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space in which space is granular. The granularity is a direct consequence of the quantization. It has the same nature as the granularity of photons in the quantum theory of electromagnetism, or the discrete energy levels of atoms. Here, it is space itself that is discrete. In other words, there is a minimum possible distance that can be traversed.

More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 10^{-35} meters. According to the theory, there is no meaning to distance at scales smaller than the Planck scale. Therefore, LQG predicts that not just matter, but also space itself has an atomic structure.

Today LQG is a vast area of research, developing in several directions, which involves about 30 research groups worldwide.[1] They all share the basic physical assumptions and the mathematical description of quantum space. The full development of the theory is being pursued in two directions: the more traditional canonical loop quantum gravity, and the newer covariant loop quantum gravity, more commonly called spin foam theory.

Research into the physical consequences of the theory is proceeding in several directions. Among these, the most well-developed is the application of LQG to cosmology, called loop quantum cosmology (LQC). LQC applies LQG ideas to the study of the early universe and the physics of the Big Bang. Its most spectacular consequence is that the evolution of the universe can be continued beyond the Big Bang. The Big Bang appears thus to be replaced by a sort of cosmic Big Bounce.

History

In 1986, Abhay Ashtekar reformulated Einstein's general relativity in a language closer to that of the rest of fundamental physics. Shortly after, Ted Jacobson and Lee Smolin realized that the formal equation of quantum gravity, called the Wheeler–DeWitt equation, admitted solutions labelled by loops, when rewritten in the new Ashtekar variables, and Carlo Rovelli and Lee Smolin defined a nonperturbative and background-independent quantum theory of gravity in terms of these loop solutions. Jorge Pullin and Jerzy Lewandowski understood that the intersections of the loops are essential for the consistency of the theory, and the theory should be formulated in terms of intersecting loops, or graphs.
In 1994, Rovelli and Smolin showed that the quantum operators of the theory associated to area and volume have a discrete spectrum. That is, geometry is quantized. This result defines an explicit basis of states of quantum geometry, which turned out to be labelled by Roger Penrose's spin networks, which are graphs labelled by spins.

The canonical version of the dynamics was put on firm ground by Thomas Thiemann, who defined an anomaly-free Hamiltonian operator, showing the existence of a mathematically consistent background-independent theory. The covariant or spin foam version of the dynamics developed over several decades and crystallized in 2008; the joint work of research groups in France, Canada, the UK, Poland, and Germany led to the definition of a family of transition amplitudes which, in the classical limit, can be shown to be related to a family of truncations of general relativity.[2] The finiteness of these amplitudes was proven in 2011.[3][4] It requires the existence of a positive cosmological constant, which is consistent with the observed acceleration of the expansion of the Universe.

General covariance and background independence

In theoretical physics, general covariance is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations. The essential idea is that coordinates are only artifices used in describing nature, and hence should play no role in the formulation of fundamental physical laws. A more significant requirement is the principle of general relativity, which states that the laws of physics take the same form in all reference systems. This is a generalization of the principle of special relativity, which states that the laws of physics take the same form in all inertial frames.

In mathematics, a diffeomorphism is an isomorphism in the category of smooth manifolds. It is an invertible function that maps one differentiable manifold to another, such that both the function and its inverse are smooth. These are the defining symmetry transformations of general relativity, since the theory is formulated only in terms of a differentiable manifold.

In general relativity, general covariance is intimately related to "diffeomorphism invariance". This symmetry is one of the defining features of the theory. However, it is a common misunderstanding that "diffeomorphism invariance" refers to the invariance of the physical predictions of a theory under arbitrary coordinate transformations; this is untrue, and in fact every physical theory is invariant under coordinate transformations in this way. Diffeomorphisms, as mathematicians define them, correspond to something much more radical; intuitively they can be envisaged as simultaneously dragging all the physical fields (including the gravitational field) over the bare differentiable manifold while staying in the same coordinate system.

Diffeomorphisms are the true symmetry transformations of general relativity, and come about from the assertion that the formulation of the theory is based on a bare differentiable manifold, but not on any prior geometry: the theory is background-independent. This is a profound shift, as all physical theories before general relativity had a prior geometry as part of their formulation. What is preserved under such transformations are the coincidences between the values the gravitational field takes at such and such a "place" and the values the matter fields take there. From these relationships one can form a notion of matter being located with respect to the gravitational field, or vice versa. This is what Einstein discovered: physical entities are located with respect to one another only, and not with respect to the spacetime manifold. As Carlo Rovelli puts it: "No more fields on spacetime: just fields on fields."[5] This is the true meaning of the saying "The stage disappears and becomes one of the actors"; spacetime as a "container" over which physics takes place has no objective physical meaning, and instead the gravitational interaction is represented as just one of the fields forming the world.
This is known as the relationalist interpretation of space-time. The realization by Einstein that general relativity should be interpreted this way is the origin of his remark "Beyond my wildest expectations".

In LQG this aspect of general relativity is taken seriously and this symmetry is preserved by requiring that the physical states remain invariant under the generators of diffeomorphisms. The interpretation of this condition is well understood for purely spatial diffeomorphisms. However, the understanding of diffeomorphisms involving time (the Hamiltonian constraint) is more subtle because it is related to dynamics and the so-called "problem of time" in general relativity.[6] A generally accepted calculational framework to account for this constraint has yet to be found.[7][8] A plausible candidate for the quantum hamiltonian constraint is the operator introduced by Thiemann.[9]

LQG is formally background independent. The equations of LQG are not embedded in, or dependent on, space and time (except for its invariant topology). Instead, they are expected to give rise to space and time at distances which are large compared to the Planck length. The issue of background independence in LQG still has some unresolved subtleties. For example, some derivations require a fixed choice of the topology, while any consistent quantum theory of gravity should include topology change as a dynamical process.

Constraints and their Poisson Bracket Algebra

The constraints of classical canonical general relativity

In the Hamiltonian formulation of ordinary classical mechanics the Poisson bracket is an important concept. A "canonical coordinate system" consists of canonical position and momentum variables that satisfy canonical Poisson-bracket relations,
\{ q_i , p_j \} = \delta_{ij}

where the Poisson bracket is given by
\{f,g\} = \sum_{i=1}^{N} \left( 
\frac{\partial f}{\partial q_{i}} \frac{\partial g}{\partial p_{i}} - \frac{\partial f}{\partial p_{i}} \frac{\partial g}{\partial q_{i}}\right).
for arbitrary phase space functions f (q_i , p_i) and g (q_i , p_i). Using Poisson brackets, Hamilton's equations can be rewritten as
\dot{q}_i = \{ q_i , H \},
\dot{p}_i = \{ p_i , H \}.

These equations describe a "flow" or orbit in phase space generated by the Hamiltonian H. Given any phase space function F (q_i, p_i), we have
{d \over dt} F (q_i,p_i) = \{ F , H \}.
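As a concrete illustration (a sketch added here, not from the original text), the canonical bracket and Hamilton's equations can be checked symbolically for one degree of freedom; the harmonic-oscillator Hamiltonian is an assumed example:

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson(f, g):
    # {f, g} = df/dq dg/dp - df/dp dg/dq, one degree of freedom
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# Canonical relation {q, p} = 1
assert poisson(q, p) == 1

# Assumed illustrative Hamiltonian: harmonic oscillator H = (p^2 + q^2)/2
H = (p**2 + q**2) / 2
qdot = poisson(q, H)   # flow of q generated by H
pdot = poisson(p, H)   # flow of p generated by H
print(qdot, pdot)
```

The bracket reproduces the familiar equations of motion, qdot = p and pdot = -q, exhibiting H as the generator of the time flow.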

Let us consider constrained systems, of which general relativity is an example. In a similar way, the Poisson bracket between a constraint and the phase space variables generates a flow along an orbit in the (unconstrained) phase space. There are three types of constraints in Ashtekar's reformulation of classical general relativity:

SU(2) Gauss gauge constraints

The Gauss constraints
G_j (x) = 0.

This represents an infinite number of constraints, one for each value of x. They come about from re-expressing general relativity as an \mathrm{SU}(2) Yang–Mills-type gauge theory (Yang–Mills is a generalization of Maxwell's theory in which the gauge field transforms as a vector under gauge transformations; that is, the gauge field is of the form A_a^i (x), where i is an internal index. See Ashtekar variables). This infinite number of Gauss gauge constraints can be smeared with test fields carrying internal indices, \lambda^j (x),
G (\lambda) = \int d^3x G_j (x) \lambda^j (x).

which we demand vanishes for any such function. These smeared constraints, defined with respect to a suitable space of smearing functions, give a description equivalent to the original constraints.
In fact Ashtekar's formulation may be thought of as ordinary \mathrm{SU}(2) Yang–Mills theory together with the following special constraints, resulting from diffeomorphism invariance, and a Hamiltonian that vanishes. The dynamics of such a theory are thus very different from that of ordinary Yang–Mills theory.

Spatial diffeomorphisms constraints

C_a (x) = 0
can be smeared by the so-called shift functions \vec{N} (x) to give an equivalent set of smeared spatial diffeomorphism constraints,
C (\vec{N}) = \int d^3 x C_a (x) N^a (x).
These generate spatial diffeomorphisms along orbits defined by the shift function N^a (x).

Hamiltonian constraints

The Hamiltonian
H (x) = 0
can be smeared by the so-called lapse functions N (x) to give an equivalent set of smeared Hamiltonian constraints,
H (N) = \int d^3 x H (x) N (x).
These generate time diffeomorphisms along orbits defined by the lapse function N (x).
In Ashtekar's formulation the gauge field A_a^i (x) is the configuration variable (analogous to q in ordinary mechanics) and its conjugate momentum is the (densitized) triad (electric field) \tilde{E}^a_i (x). The constraints are certain functions of these phase space variables.
We consider the action of the constraints on arbitrary phase space functions. An important notion here is the Lie derivative, \mathcal{L}_V, which is basically a derivative operation that infinitesimally "shifts" functions along some orbit with tangent vector V.

The Poisson bracket algebra

Of particular importance is the Poisson bracket algebra formed between the (smeared) constraints themselves, as it completely determines the theory. In terms of the above smeared constraints, the algebra amongst the Gauss' laws reads
\{ G (\lambda) , G (\mu) \} = G ([\lambda , \mu])
where [\lambda , \mu]^k = \lambda_i \mu_j \epsilon^{ijk}. So the Poisson bracket of two Gauss' laws is equivalent to a single Gauss' law evaluated on the commutator of the smearings. The Poisson bracket between two spatial diffeomorphism constraints reads
\{ C (\vec{N})  , C (\vec{M}) \} = C (\mathcal{L}_\vec{N} \vec{M})
and we see that its effect is to "shift the smearing". The reason is that the smearing functions are not functions of the canonical variables, and so the spatial diffeomorphism constraint does not generate diffeomorphisms on them; it does, however, generate diffeomorphisms on everything else. This is equivalent to leaving everything else fixed while shifting the smearing. The action of the spatial diffeomorphism constraint on the Gauss law is
\{ C (\vec{N})  , G (\lambda) \} = G (\mathcal{L}_\vec{N} \lambda),
again, it shifts the test field \lambda. The Gauss law has vanishing Poisson bracket with the Hamiltonian constraint. The spatial diffeomorphism constraint with a Hamiltonian gives a Hamiltonian with its smearing shifted,
\{ C (\vec{N})  , H (M) \} = H (\mathcal{L}_\vec{N} M).
Finally, the Poisson bracket of two Hamiltonian constraints is a spatial diffeomorphism,
\{ H (N)  , H (M) \} = C (K)
where K is some phase space function. That is, it is a sum over infinitesimal spatial diffeomorphisms constraints where the coefficients of proportionality are not constants but have non-trivial phase space dependence.

A (Poisson bracket) Lie algebra, with constraints C_I, is of the form
\{ C_I  , C_J \} = f_{IJ}^K C_K
where f_{IJ}^K are constants (the so-called structure constants). The above Poisson bracket algebra for general relativity does not form a true Lie algebra, since we have structure functions rather than structure constants for the Poisson bracket between two Hamiltonian constraints. This leads to difficulties.
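For the Gauss part of the algebra, the bracket of smearings [\lambda , \mu]^k = \lambda_i \mu_j \epsilon^{ijk} is just the \mathrm{su}(2) commutator, i.e. the cross product of the internal vectors. A numerical sketch of this fact (added here as an illustration, using the Pauli matrices as generators and constant test fields for simplicity):

```python
import numpy as np

# Pauli matrices as su(2) generators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

rng = np.random.default_rng(1)
lam, mu = rng.normal(size=3), rng.normal(size=3)

# Matrix-valued smearings lambda^i T_i and mu^i T_i
L = np.einsum('i,iab->ab', lam, sigma)
M = np.einsum('i,iab->ab', mu, sigma)

# [lambda, mu]^k = lambda_i mu_j eps^{ijk} is the cross product
B = np.einsum('i,iab->ab', np.cross(lam, mu), sigma)

# The commutator of the smeared generators reproduces the bracket
# of the smearings (up to the 2i in the normalization [s_i, s_j] = 2i eps_ijk s_k)
assert np.allclose(L @ M - M @ L, 2j * B)
print("ok")
```

This is the finite-dimensional shadow of the statement that two Gauss' laws close on a Gauss' law evaluated on the commutator of the smearings.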

Dirac observables

The constraints define a constraint surface in the original phase space. The gauge motions generated by the constraints act on all of phase space but leave the constraint surface invariant, so the orbit of a point on the hypersurface under gauge transformations lies entirely within it. Dirac observables are defined as phase space functions, O, that Poisson-commute with all the constraints when the constraint equations are imposed,
\{ G_j , O \}_{G_j=C_a=H = 0} = \{ C_a , O \}_{G_j=C_a=H = 0} = \{ H , O \}_{G_j=C_a=H = 0} = 0,
that is, they are quantities defined on the constraint surface that are invariant under the gauge transformations of the theory.

Then, solving only the constraint G_j = 0 and determining the Dirac observables with respect to it leads back to the ADM phase space with constraints H and C_a. The dynamics of general relativity is generated by the constraints; it can be shown that the six Einstein equations describing time evolution (really a gauge transformation) can be obtained by calculating the Poisson brackets of the three-metric and its conjugate momentum with a linear combination of the spatial diffeomorphism and Hamiltonian constraints. The vanishing of the constraints, which defines the physical phase space, gives the other four Einstein equations.[10]

Quantization of the constraints: the equations of quantum general relativity

Pre-history and Ashtekar's new variables

Many of the technical problems in canonical quantum gravity revolve around the constraints. Canonical general relativity was originally formulated in terms of metric variables, but there seemed to be insurmountable mathematical difficulties in promoting the constraints to quantum operators because of their highly non-linear dependence on the canonical variables. The equations were much simplified with the introduction of Ashtekar's new variables. Ashtekar variables describe canonical general relativity in terms of a new pair of canonical variables closer to those of gauge theories. The first step consists of using densitized triads \tilde{E}_i^a (a triad E_i^a is simply three orthogonal vector fields labeled by i = 1,2,3, and the densitized triad is defined by \tilde{E}_i^a = \sqrt{\operatorname{det}(q)} E_i^a) to encode information about the spatial metric,
\operatorname{det}(q) q^{ab} = \tilde{E}_i^a \tilde{E}_j^b \delta^{ij}
(where \delta^{ij} is the flat space metric; the above equation expresses that q^{ab}, when written in terms of the basis E_i^a, is locally flat). (Formulating general relativity with triads instead of metrics was not new.) The densitized triads are not unique: one can perform a local rotation in space with respect to the internal indices i. The canonically conjugate variable is related to the extrinsic curvature by K_a^i = K_{ab} \tilde{E}^{ai} / \sqrt{\operatorname{det}(q)}. But problems similar to those of the metric formulation arise when one tries to quantize the theory. Ashtekar's new insight was to introduce a new configuration variable,
A_a^i = \Gamma_a^i - i K_a^i
that behaves as a complex \operatorname{SU}(2) connection where \Gamma_a^i is related to the so-called spin connection via \Gamma_a^i = \Gamma_{ajk} \epsilon^{jki}. Here A_a^i is called the chiral spin connection. It defines a covariant derivative \mathcal{D}_a. It turns out that \tilde{E}^a_i is the conjugate momentum of A_a^i, and together these form Ashtekar's new variables.

The constraints in Ashtekar variables (the Gauss law, the spatial diffeomorphism constraint, and the densitized Hamiltonian constraint) then read:
G^i = \mathcal{D}_a \tilde{E}_i^a = 0
C_a = \tilde{E}_i^b F^i_{ab} - A_a^i (\mathcal{D}_b \tilde{E}_i^b) = V_a - A_a^i G^i = 0,
\tilde{H} = \epsilon_{ijk} \tilde{E}_i^a \tilde{E}_j^b F^i_{ab} = 0
respectively, where F^i_{ab} is the field strength tensor of the connection A_a^i and where V_a is referred to as the vector constraint. The above-mentioned local rotational invariance with respect to the internal indices is the origin of the \operatorname{SU}(2) gauge invariance, expressed here by the Gauss law. Note that these constraints are polynomial in the fundamental variables, unlike the constraints in the metric formulation.
This dramatic simplification seemed to open up the way to quantizing the constraints. (See the article Self-dual Palatini action for a derivation of Ashtekar's formalism.)
With Ashtekar's new variables, given the configuration variable A^i_a, it is natural to consider wavefunctions \Psi (A^i_a). This is the connection representation. It is analogous to ordinary quantum mechanics with configuration variable q and wavefunctions \psi (q). The configuration variable gets promoted to a quantum operator via:
\hat{A}_a^i \Psi (A) = A_a^i \Psi (A),
(analogous to \hat{q} \psi (q) = q \psi (q)) and the triads are (functional) derivatives,
\hat{\tilde{E_i^a}} \Psi (A) = - i {\delta \Psi (A) \over \delta A_a^i}.
(analogous to \hat{p} \psi (q) = -i \hbar \, d \psi (q) / dq). In passing over to the quantum theory the constraints become operators on a kinematic Hilbert space (the unconstrained \operatorname{SU}(2) Yang–Mills Hilbert space). Note that different orderings of the A's and \tilde{E}'s when replacing the \tilde{E}'s with derivatives give rise to different operators; the choice made is called the factor ordering and should be made on physical grounds. Formally the quantum constraints read
\hat{G}_j \vert\psi \rangle = 0
\hat{C}_a \vert\psi \rangle = 0
\hat{\tilde{H}} \vert\psi \rangle = 0.
There are still problems in properly defining all these equations and solving them. For example, the Hamiltonian constraint Ashtekar worked with was the densitized version \tilde{H} = \sqrt{\operatorname{det}(q)} H rather than the original Hamiltonian, and there were serious difficulties in promoting this quantity to a quantum operator. Moreover, although Ashtekar variables had the virtue of simplifying the Hamiltonian, they are complex: when one quantizes the theory, it is difficult to ensure that one recovers real general relativity as opposed to complex general relativity.

Quantum constraints as the equations of quantum general relativity

We now demonstrate an important aspect of the quantum constraints, considering Gauss' law only. First we state the classical result that the Poisson bracket of the smeared Gauss' law G(\lambda) = \int d^3x \, \lambda^j (D_a E^a)^j with the connection is
\{ G(\lambda) , A_a^i \} = \partial_a \lambda^i + g \epsilon^{ijk} A_a^j \lambda^k = (D_a \lambda)^i.
The quantum Gauss' law reads
\hat{G}_j \Psi (A) = - i D_a {\delta \Psi [A] \over \delta A_a^j} = 0.
If one smears the quantum Gauss' law and studies its action on a quantum state, one finds that the action of the constraint is equivalent to shifting the argument of \Psi by an infinitesimal (in the sense of the parameter \lambda being small) gauge transformation,
\Big [ 1 + \int d^3x \lambda^j (x) \hat{G}_j \Big]  \Psi (A) = \Psi [A + D \lambda] = \Psi [A],
and the last identity comes from the fact that the constraint annihilates the state. So the constraint, as a quantum operator, is imposing the same symmetry that its vanishing imposed classically: it is telling us that the functions \Psi [A] have to be gauge invariant functions of the connection. The same idea is true for the other constraints.

Therefore the two-step process in the classical theory of solving the constraints C_I = 0 (equivalent to solving the admissibility conditions for the initial data) and looking for the gauge orbits (solving the `evolution' equations) is replaced by a one-step process in the quantum theory: looking for solutions \Psi of the quantum equations \hat{C}_I \Psi = 0. This works because the condition obviously solves the constraint at the quantum level while simultaneously selecting states that are gauge invariant, \hat{C}_I being the quantum generator of gauge transformations (gauge invariant functions are constant along the gauge orbits and thus characterize them).[11] Recall that, at the classical level, solving the admissibility conditions and evolution equations was equivalent to solving all of Einstein's field equations; this underlines the central role of the quantum constraint equations in canonical quantum gravity.

Introduction of the loop representation

It was in particular the inability to have good control over the space of solutions to the Gauss' law and spatial diffeomorphism constraints that led Rovelli and Smolin to consider a new representation: the loop representation in gauge theories and quantum gravity.[12]
We need the notion of a holonomy. A holonomy is a measure of how much the initial and final values of a spinor or vector differ after parallel transport around a closed loop; it is denoted
h_\gamma [A].
Knowledge of the holonomies is equivalent to knowledge of the connection, up to gauge equivalence. Holonomies can also be associated with an edge; under a Gauss gauge transformation these transform as
(h'_e)_{\alpha \beta} = U_{\alpha \gamma}^{-1} (x) (h_e)_{\gamma \sigma} U_{\sigma \beta} (y).
For a closed loop, x = y, and if we take the trace, that is, set \alpha = \beta and sum, we obtain
(h'_e)_{\alpha \alpha} = U_{\alpha \gamma}^{-1} (x) (h_e)_{\gamma \sigma} U_{\sigma \alpha} (x) = [U_{\sigma \alpha} (x) U_{\alpha \gamma}^{-1} (x)] (h_e)_{\gamma \sigma} = \delta_{\sigma \gamma} (h_e)_{\gamma \sigma} = (h_e)_{\gamma \gamma}
or
\operatorname{Tr} h'_\gamma = \operatorname{Tr} h_\gamma.
The trace of a holonomy around a closed loop is written
W_\gamma [A]
and is called a Wilson loop. Thus Wilson loops are gauge invariant. The explicit form of the holonomy is
h_\gamma [A] = \mathcal{P} \exp \Big\{-\int_{\gamma_0}^{\gamma_1} ds \dot{\gamma}^a A_a^i (\gamma (s)) T_i \Big\}
where \gamma is the curve along which the holonomy is evaluated, s is a parameter along the curve, \mathcal{P} denotes path ordering (meaning factors for smaller values of s appear to the left), and T_i are matrices that satisfy the \operatorname{SU}(2) algebra
[T^i ,T^j] = 2i \epsilon^{ijk} T^k.
The Pauli matrices satisfy the above relation. It turns out that there are infinitely many more sets of matrices that satisfy these relations, where each set comprises (N+1) \times (N+1) matrices with N = 1,2,3,\dots, and where none of these can be thought of as `decomposing' into two or more examples of lower dimension. They are called different irreducible representations of the \operatorname{SU}(2) algebra, the most fundamental being the Pauli matrices. The holonomy is labelled by a half integer N/2 according to the irreducible representation used.
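The commutation relation above is easy to verify numerically. The following sketch (an illustrative check added here, not part of the original text) confirms that the Pauli matrices obey [T^i, T^j] = 2i \epsilon^{ijk} T^k, and that twice the standard spin-1 matrices give a 3 \times 3 set (N = 2) obeying the same relation:

```python
import numpy as np

# Pauli matrices: the fundamental (2x2, spin-1/2) representation
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]

# Standard spin-1 matrices, rescaled by 2 to match the normalization above
r = 1 / np.sqrt(2)
Sx = r * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = r * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.diag([1, 0, -1]).astype(complex)
spin1 = [2 * Sx, 2 * Sy, 2 * Sz]

# Totally antisymmetric symbol eps^{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

def satisfies_su2(T):
    """Check [T^i, T^j] = 2i eps^{ijk} T^k for a triple of matrices."""
    for i in range(3):
        for j in range(3):
            comm = T[i] @ T[j] - T[j] @ T[i]
            rhs = sum(2j * eps[i, j, k] * T[k] for k in range(3))
            if not np.allclose(comm, rhs):
                return False
    return True

print(satisfies_su2(pauli), satisfies_su2(spin1))
```

Both checks pass, illustrating two irreducible representations (N = 1 and N = 2) of the same algebra.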
The use of Wilson loops explicitly solves the Gauss gauge constraint. To handle the spatial diffeomorphism constraint we need to go over to the loop representation. As Wilson loops form a basis we can formally expand any Gauss gauge invariant function as,
\Psi [A] = \sum_\gamma \Psi [\gamma] W_\gamma [A] .
This is called the loop transform. We can see the analogy with going to the momentum representation in quantum mechanics (see Position and momentum space). There one has a basis of states \exp (ikx) labelled by a number k, and one expands
\psi [x] = \int dk \, \psi (k) \exp (ikx),
and works with the coefficients \psi (k) of the expansion.
The inverse loop transform is defined by
\Psi [\gamma] = \int [dA] \Psi [A] W_\gamma [A].
This defines the loop representation. Given an operator \hat{O} in the connection representation,
\Phi [A] = \hat{O} \Psi [A] \qquad Eq \; 1,
one should define the corresponding operator \hat{O}' on \Psi [\gamma] in the loop representation via,
\Phi [\gamma] = \hat{O}' \Psi [\gamma] \qquad Eq \; 2,
where \Phi [\gamma] is defined by the usual inverse loop transform,
\Phi [\gamma] = \int [dA] \Phi [A] W_\gamma [A] \qquad Eq \; 3.
A transformation formula giving the action of the operator \hat{O}' on \Psi [\gamma] in terms of the action of the operator \hat{O} on \Psi [A] is then obtained by equating the R.H.S. of Eq \; 2 with the R.H.S. of Eq \; 3 with Eq \; 1 substituted into Eq \; 3, namely
\hat{O}' \Psi [\gamma] = \int [dA] W_\gamma [A] \hat{O} \Psi [A],
or
\hat{O}' \Psi [\gamma] = \int [dA] (\hat{O}^\dagger W_\gamma [A]) \Psi [A],
where by \hat{O}^\dagger we mean the operator \hat{O} but with the reverse factor ordering (recall from elementary quantum mechanics that the product of operators is reversed under conjugation). We evaluate the action of this operator on the Wilson loop as a calculation in the connection representation, and then rearrange the result as a manipulation purely in terms of loops (when considering the action on the Wilson loop, one should choose the operator one wishes to transform with the opposite factor ordering to the one chosen for its action on wavefunctions \Psi [A]). This gives the physical meaning of the operator \hat{O}'. For example, if \hat{O}^\dagger corresponded to a spatial diffeomorphism, it can be thought of as keeping the connection field A of W_\gamma [A] where it is while performing the spatial diffeomorphism on \gamma instead. Therefore the meaning of \hat{O}' is a spatial diffeomorphism on \gamma, the argument of \Psi [\gamma].

In the loop representation we can then solve the spatial diffeomorphism constraint by considering functions of loops \Psi [\gamma] that are invariant under spatial diffeomorphisms of the loop \gamma. That is, we construct what mathematicians call knot invariants. This opened up an unexpected connection between knot theory and quantum gravity.

What about the Hamiltonian constraint? Let us go back to the connection representation. Any collection of non-intersecting Wilson loops satisfies Ashtekar's quantum Hamiltonian constraint. This can be seen as follows. With a particular ordering of terms, and replacing \tilde{E}^a_i by a derivative, the action of the quantum Hamiltonian constraint on a Wilson loop is
\hat{\tilde{H}}^\dagger W_\gamma [A] = - \epsilon_{ijk} \hat{F}^k_{ab} {\delta \over \delta A_a^i} \; {\delta \over \delta A_b^j} W_\gamma [A].
When a derivative is taken it brings down the tangent vector, \dot{\gamma}^a, of the loop, \gamma. So we have something like
\hat{F}^i_{ab} \dot{\gamma}^a \dot{\gamma}^b.
However, as F^i_{ab} is anti-symmetric in the indices a and b, this vanishes (assuming that \gamma is nowhere discontinuous, so that the tangent vector is unique). Now let us return to the loop representation.

We consider wavefunctions \Psi [\gamma] that vanish if the loop has discontinuities and that are knot invariants. Such functions solve the Gauss law, the spatial diffeomorphism constraint and (formally) the Hamiltonian constraint. Thus we have identified an infinite set of exact (if only formal) solutions to all the equations of quantum general relativity![12] This generated a lot of interest in the approach and eventually led to LQG.

Geometric operators, the need for intersecting Wilson loops and spin network states

The easiest geometric quantity is the area. Let us choose coordinates so that the surface \Sigma is characterized by x^3 = 0. The area of a small parallelogram on the surface \Sigma is the product of the lengths of its sides times \sin \theta, where \theta is the angle between the sides. Say one edge is given by the vector \vec{u} and the other by \vec{v}; then
A = \| \vec{u} \| \| \vec{v} \| \sin \theta = \sqrt{\| \vec{u} \|^2 \| \vec{v} \|^2 (1 - \cos^2 \theta)} \quad = \sqrt{\| \vec{u} \|^2 \| \vec{v} \|^2 - (\vec{u} \cdot \vec{v})^2}
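This parallelogram formula is simply the norm of the cross product \vec{u} \times \vec{v}; a quick numerical sanity check (added here as an illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=3), rng.normal(size=3)

# Area from sqrt(|u|^2 |v|^2 - (u.v)^2)
area = np.sqrt(np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2)

# ... which coincides with the norm of the cross product |u x v|
assert np.isclose(area, np.linalg.norm(np.cross(u, v)))
print(area)
```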
From this we get the area of the surface \Sigma to be given by
A_\Sigma = \int_\Sigma dx^1 dx^2 \sqrt{\operatorname{det}(q^{(2)})}
where \operatorname{det}(q^{(2)}) = q_{11} q_{22} - q_{12}^2 is the determinant of the metric induced on \Sigma. This can be rewritten as
\operatorname{det}(q^{(2)}) = {\epsilon^{3ab} \epsilon^{3cd} q_{ac} q_{bd} \over 2}.
The standard formula for the inverse of a 3 \times 3 matrix is
q^{ab} = {\epsilon^{acd} \epsilon^{bef} q_{ce} q_{df} \over 2\operatorname{det}(q)}.
Note the similarity between this and the expression for \operatorname{det}(q^{(2)}). But in Ashtekar variables we have \tilde{E}^a_i \tilde{E}^{bi} = \operatorname{det}(q) q^{ab}. Therefore
A_\Sigma = \int_\Sigma dx^1 dx^2 \sqrt{\tilde{E}^3_i \tilde{E}^{3i}}.
According to the rules of canonical quantization we should promote the triads \tilde{E}^3_i to quantum operators,
\hat{\tilde{E}}^3_i \sim {\delta \over \delta A_3^i}.
It turns out that the area A_\Sigma can be promoted to a well defined quantum operator, despite the fact that we are dealing with a product of two functional derivatives and, worse, a square root as well.[13] Putting N = 2J, we talk of being in the J-th representation, and we note that \sum_i T^i T^i \propto J (J+1) 1, the Casimir of the representation. This quantity appears in the final formula for the area spectrum. We simply state the result below,
\hat{A}_\Sigma W_\gamma [A] = 8 \pi \ell_{\text{Planck}}^2 \beta \sum_I \sqrt{j_I (j_I + 1)} W_\gamma [A]
where the sum is over all edges I of the Wilson loop that pierce the surface \Sigma.
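The eigenvalues enter only through \sqrt{j_I (j_I + 1)}, so the spectrum is easy to tabulate. The sketch below is an added illustration (the function name area_eigenvalue is ours, and the Immirzi parameter \beta and the Planck length are set to 1 for simplicity):

```python
import math

def area_eigenvalue(spins, beta=1.0, l_planck=1.0):
    """Area eigenvalue 8*pi*l_P^2*beta * sum_I sqrt(j_I (j_I + 1))
    for the spins j_I of the edges piercing the surface."""
    return 8 * math.pi * l_planck**2 * beta * sum(
        math.sqrt(j * (j + 1)) for j in spins)

# Smallest quantum of area: a single edge with j = 1/2 (units of l_P^2, beta = 1)
print(area_eigenvalue([0.5]))
for j in (0.5, 1.0, 1.5, 2.0):
    print(j, area_eigenvalue([j]))
```

The key qualitative point is visible directly: the eigenvalues form a discrete set with a smallest non-zero gap, which is what is meant by the statement that area is quantized.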
The formula for the volume of a region R is given by
V = \int_R d^3 x \sqrt{\operatorname{det}(q)} = \int_R d^3 x \sqrt{{1 \over 3!} \epsilon_{abc} \epsilon^{ijk} \tilde{E}^a_i \tilde{E}^b_j \tilde{E}^c_k}.
The quantization of the volume proceeds in the same way as for the area. Each time a derivative is taken it brings down the tangent vector \dot{\gamma}^a, so when the volume operator acts on non-intersecting Wilson loops the result vanishes. Quantum states with non-zero volume must therefore involve intersections. Given the anti-symmetric summation in the formula for the volume, we need intersections of at least three non-coplanar lines. In fact, it turns out that one needs at least four-valent vertices for the volume operator to be non-vanishing.

We now consider Wilson loops with intersections. We assume the real representation, where the gauge group is \operatorname{SU}(2). Wilson loops are an overcomplete basis as there are identities relating different Wilson loops. These come about from the fact that Wilson loops are based on matrices (the holonomy) and these matrices satisfy identities. Given any two \operatorname{SU}(2) matrices \mathbb{A} and \mathbb{B} it is easy to check that,
\operatorname{Tr}(\mathbb{A}) \operatorname{Tr}(\mathbb{B}) = \operatorname{Tr}(\mathbb{A}\mathbb{B}) + \operatorname{Tr}(\mathbb{A}\mathbb{B}^{-1}).
This implies that given two loops \gamma and \eta that intersect, we will have,
W_\gamma [A] W_\eta [A] = W_{\gamma \circ \eta} [A] + W_{\gamma \circ \eta^{-1}} [A]
where by \eta^{-1} we mean the loop \eta traversed in the opposite direction and \gamma \circ \eta means the loop obtained by going around \gamma and then along \eta. See figure below. Since the matrices are unitary, W_\gamma [A] = W_{\gamma^{-1}} [A]. Also, by the cyclic property of the matrix trace (i.e. \operatorname{Tr} (\mathbb{A} \mathbb{B}) = \operatorname{Tr} (\mathbb{B} \mathbb{A})), W_{\gamma \circ \eta} [A] = W_{\eta \circ \gamma} [A]. These identities can be combined with each other into further identities of increasing complexity, adding more loops. They are the so-called Mandelstam identities. Spin networks are certain linear combinations of intersecting Wilson loops designed to address the overcompleteness introduced by the Mandelstam identities (for trivalent intersections they eliminate the overcompleteness entirely), and they actually constitute a basis for all gauge-invariant functions.
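The SU(2) trace identity behind the Mandelstam identities follows from the Cayley–Hamilton theorem (\mathbb{B} + \mathbb{B}^{-1} = \operatorname{Tr}(\mathbb{B}) 1 whenever \det \mathbb{B} = 1) and is easy to verify numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_su2():
    """Random SU(2) matrix [[a, b], [-b*, a*]] with |a|^2 + |b|^2 = 1."""
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

A, B = random_su2(), random_su2()

# Tr(A) Tr(B) = Tr(AB) + Tr(A B^{-1})
lhs = np.trace(A) * np.trace(B)
rhs = np.trace(A @ B) + np.trace(A @ np.linalg.inv(B))
```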

Graphical representation of the simplest non-trivial Mandelstam identity relating different Wilson loops.

As mentioned above, the holonomy tells you how to propagate test spin-half particles. A spin network state assigns an amplitude to a set of spin-half particles tracing out a path in space, merging and splitting. These are described by spin networks \gamma: the edges are labelled by spins together with `intertwiners' at the vertices, which are prescriptions for how to sum over the different ways the spins are rerouted. The sums over reroutings are chosen so as to make the form of the intertwiner invariant under Gauss gauge transformations.

Real variables, modern analysis and LQG

Let us go into more detail about the technical difficulties associated with using Ashtekar's variables. With Ashtekar's variables one uses a complex connection, and so the relevant gauge group is actually \operatorname{SL}(2, \mathbb{C}) and not \operatorname{SU}(2). As \operatorname{SL}(2, \mathbb{C}) is non-compact, it creates serious problems for the rigorous construction of the necessary mathematical machinery. The group \operatorname{SU}(2), on the other hand, is compact and the relevant constructions needed have been developed.

As mentioned above, because Ashtekar's variables are complex, the resulting general relativity is complex. To recover the real theory one has to impose what are known as the reality conditions. These require that the densitized triad be real and that the real part of the Ashtekar connection equal the compatible spin connection (the compatibility condition being \nabla_a e_b^I = 0) determined by the densitized triad. The expression for the compatible connection \Gamma_a^i is rather complicated, and so a non-polynomial formula enters through the back door.

Before we state the next difficulty we should give a definition: a tensor density of weight W transforms like an ordinary tensor, except that in addition the Wth power of the Jacobian,
J = \Big| {\partial x^a \over \partial x^{'b}} \Big|
appears as a factor, i.e.
{T'}^{a \dots}_{b \dots} = J^W {\partial x^{'a} \over \partial x^c} \dots {\partial x^d \over \partial x^{'b}} T^{c \dots}_{d \dots}.
It turns out that it is impossible, on general grounds, to construct a UV-finite, diffeomorphism-non-violating operator corresponding to \sqrt{\operatorname{det}(q)} H. The reason is that the rescaled Hamiltonian constraint is a scalar density of weight two, while it can be shown that only scalar densities of weight one have a chance of resulting in a well defined operator. Thus, one is forced to work with the original unrescaled, density weight-one Hamiltonian constraint. However, this is non-polynomial, and the whole virtue of the complex variables is called into question. In fact, all the solutions constructed for Ashtekar's Hamiltonian constraint only vanished for a finite regularization; however, this violates spatial diffeomorphism invariance.

Without the implementation and solution of the Hamiltonian constraint no progress can be made and no reliable predictions are possible!

To overcome the first problem one works with the configuration variable
A_a^i = \Gamma_a^i + \beta K_a^i
where \beta is real (as pointed out by Barbero, who introduced real variables some time after Ashtekar's variables[14][15]). The Gauss law and the spatial diffeomorphism constraints are the same. In real Ashtekar variables the Hamiltonian is
H = {\epsilon_{ijk} F_{ab}^k \tilde{E}_i^a \tilde{E}_j^b \over \sqrt{\operatorname{det}(q)}} - 2 {\beta^2 + 1 \over \beta^2} {(\tilde{E}_i^a \tilde{E}_j^b - \tilde{E}_j^a \tilde{E}_i^b) \over \sqrt{\operatorname{det}(q)}} (A_a^i - \Gamma_a^i) (A_b^j - \Gamma_b^j) = H_E + H'.
The complicated relationship between \Gamma_a^i and the densitized triads causes serious problems upon quantization. It is with the choice \beta = \pm i that the second, more complicated term is made to vanish. However, as mentioned above, \Gamma_a^i reappears in the reality conditions. We also still have the problem of the 1 / \sqrt{\operatorname{det}(q)} factor.
Thiemann was able to make it work for real \beta. First he could simplify the troublesome 1 / \sqrt{\operatorname{det}(q)} by using the identity
\{ A_c^k , V \} = {\epsilon_{abc} \epsilon^{ijk} \tilde{E}_i^a \tilde{E}_j^b \over \sqrt{\operatorname{det}(q)}}
where V is the volume. The A_c^k and V can be promoted to well defined operators in the loop representation and the Poisson bracket is replaced by a commutator upon quantization; this takes care of the first term. It turns out that a similar trick can be used to treat the second term. One introduces the quantity
K = \int d^3 x K_a^i \tilde{E}_i^a
and notes that
K_a^i = \{ A_a^i , K \}.
We are then able to write
A_a^i - \Gamma_a^i = \beta K_a^i = \beta \{ A_a^i , K \}.
The reason the quantity K is easier to work with at the time of quantization is that it can be written as
K = - \{ V , \int d^3 x H_E \}
where we have used that the integrated densitized trace of the extrinsic curvature, K, is the "time derivative of the volume".
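Schematically (suppressing the smearing, the regularization over a triangulation, and numerical factors), the point of Thiemann's trick is that upon quantization the Poisson brackets above become commutators with well defined operators; the following is a sketch of the structure, not the full regularized construction:

```latex
H_E \sim \epsilon^{abc} \operatorname{Tr}\big( F_{ab} \{ A_c , V \} \big)
\;\longrightarrow\;
\hat{H}_E \sim \epsilon^{abc} \operatorname{Tr}\big( \hat{F}_{ab}\, [ \hat{A}_c , \hat{V} ] \big) / (i\hbar),
\qquad
\beta \{ A_a^i , K \} \;\longrightarrow\; \beta\, [ \hat{A}_a^i , \hat{K} ] / (i\hbar).
```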

In the long history of canonical quantum gravity, formulating the Hamiltonian constraint as a quantum operator (the Wheeler–DeWitt equation) in a mathematically rigorous manner has been a formidable problem. It was in the loop representation that a mathematically well defined Hamiltonian constraint was finally formulated, in 1996.[9] We leave more details of its construction to the article Hamiltonian constraint of LQG. This, together with the quantum versions of the Gauss law and spatial diffeomorphism constraints written in the loop representation, are the central equations of LQG (modern canonical quantum general relativity).

Finding the states that are annihilated by these constraints (the physical states), and finding the corresponding physical inner product and observables, is the main goal of the technical side of LQG.

A very important aspect of the Hamiltonian operator is that it only acts at vertices (a consequence of this is that Thiemann's Hamiltonian operator, like Ashtekar's operator, annihilates non-intersecting loops, except that now this is not just formal but has rigorous mathematical meaning). More precisely, its action is non-zero only on vertices of valence three and greater, and it results in a linear combination of new spin networks in which the original graph has been modified by the addition of lines at each vertex together with a change in the labels of the adjacent links of the vertex.

Solving the quantum constraints

One must solve, at least approximately, all the quantum constraint equations, and solve for the physical inner product, in order to make physical predictions.
Before we move on to the constraints of LQG, let us consider certain cases. We start with a kinematic Hilbert space \mathcal{H}_{\text{Kin}}, which is equipped with an inner product—the kinematic inner product \langle\phi, \psi\rangle_{\text{Kin}}.
i) Say we have constraints \hat{C}_I whose zero eigenvalues lie in their discrete spectrum. Solutions of the first constraint, \hat{C}_1, correspond to a subspace of the kinematic Hilbert space, \mathcal{H}_1 \subset \mathcal{H}_{\text{Kin}}. There will be a projection operator P_1 mapping \mathcal{H}_{\text{Kin}} onto \mathcal{H}_1. The kinematic inner product structure is easily employed to provide the inner product structure after solving this first constraint; the new inner product \langle\phi , \psi\rangle_1 is simply
\langle\phi , \psi\rangle_1 = \langle P_1 \phi , P_1 \psi\rangle_{\text{Kin}}.
The solutions are based on the same inner product and are normalizable with respect to it.
ii) Suppose instead that zero is not contained in the point spectrum of all the \hat{C}_I; there is then no non-trivial solution \Psi \in \mathcal{H}_{\text{Kin}} to the system of quantum constraint equations \hat{C}_I \Psi = 0 for all I.

For example the zero eigenvalue of the operator
\hat{C} = \Big( i {d \over dx} - k \Big)
on L_2 (\mathbb{R} , dx) lies in the continuous spectrum \mathbb{R}, but the formal "eigenstate" \exp (-ikx) is not normalizable in the kinematic inner product,
\int_{- \infty}^\infty dx \psi^* (x) \psi (x) = \int_{- \infty}^\infty dx e^{ikx} e^{-ikx} = \int_{- \infty}^\infty dx = \infty
and so it does not belong to the kinematic Hilbert space \mathcal{H}_{\text{Kin}}. In these cases we take a dense subset \mathcal{S} of \mathcal{H}_{\text{Kin}} (intuitively this means any point in \mathcal{H}_{\text{Kin}} is either in \mathcal{S} or arbitrarily close to a point in \mathcal{S}) with very good convergence properties, and consider its dual space \mathcal{S}' (intuitively these map elements of \mathcal{S} onto finite complex numbers in a linear manner); then \mathcal{S} \subset \mathcal{H}_{\text{Kin}} \subset \mathcal{S}' (as \mathcal{S}' contains distributional functions). The constraint operator is then implemented on this larger dual space, which contains distributional functions, via the dual (adjoint) action, and one looks for solutions there. This comes at the price that the solutions must be given a new Hilbert space inner product with respect to which they are normalizable (see the article on rigged Hilbert spaces).
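The two claims in the example above — that \exp (-ikx) is formally annihilated by the constraint, yet has constant unit density and so a divergent norm integral — can be checked symbolically; a minimal sketch:

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
psi = sp.exp(-sp.I * k * x)

# Apply C = i d/dx - k: the formal eigenstate is annihilated
C_psi = sp.simplify(sp.I * sp.diff(psi, x) - k * psi)

# |psi(x)|^2 = 1 everywhere, so the norm integral over R diverges
density = sp.simplify(psi * sp.conjugate(psi))
```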
In this case we have a generalized projection operator on the new space of states. We cannot use the above formula for the new inner product as it diverges; instead the new inner product is given by a simple modification of the above,
\langle\phi, \psi\rangle_1 = \langle P\phi, \psi\rangle_{\text{Kin}}.
The generalized projector P is known as a rigging map.

Let us now move on to LQG; additional complications arise from the fact that the constraint algebra is not a Lie algebra, due to the bracket between two Hamiltonian constraints.

The Gauss law is solved by the use of spin network states. They provide a basis for the kinematic Hilbert space \mathcal{H}_{\text{Kin}}. The spatial diffeomorphism constraint has been solved. The induced inner product on \mathcal{H}_{\text{Diff}} (we do not pursue the details) has a very simple description in terms of spin network states: given two spin networks s and s', with associated spin network states \psi_s and \psi_{s'}, the inner product is 1 if s and s' are related to each other by a spatial diffeomorphism and zero otherwise.

The Hamiltonian constraint maps diffeomorphism-invariant states onto non-diffeomorphism-invariant states and so does not preserve the diffeomorphism Hilbert space \mathcal{H}_{\text{Diff}}. This is an unavoidable consequence of the operator algebra, in particular the commutator:
[ \hat{C} (\vec{N})  , \hat{H} (M) ] \propto \hat{H} (\mathcal{L}_\vec{N} M)
as can be seen by applying this to \psi_s \in \mathcal{H}_{Diff},
( \hat{C} (\vec{N}) \hat{H} (M) - \hat{H} (M) \hat{C} (\vec{N}) ) \psi_s \propto \hat{H} (\mathcal{L}_\vec{N} M) \psi_s
and using \hat{C} (\vec{N}) \psi_s = 0 to obtain
\hat{C} (\vec{N}) [\hat{H} (M) \psi_s] \propto \hat{H} (\mathcal{L}_\vec{N} M) \psi_s \not= 0
and so \hat{H} (M) \psi_s is not in \mathcal{H}_{Diff}.

This means that one cannot simply solve the spatial diffeomorphism constraint and then the Hamiltonian constraint. This problem can be circumvented by the introduction of the Master constraint: with its trivial operator algebra, one is then able in principle to construct the physical inner product from \mathcal{H}_{\text{Diff}}.

Spin foams

In loop quantum gravity (LQG), a spin network represents a "quantum state" of the gravitational field on a 3-dimensional hypersurface. The set of all possible spin networks (or, more accurately, "s-knots" - that is, equivalence classes of spin networks under diffeomorphisms) is countable; it constitutes a basis of the LQG Hilbert space.
In physics, a spin foam is a topological structure made out of two-dimensional faces that represents one of the configurations that must be summed to obtain a Feynman's path integral (functional integration) description of quantum gravity. It is closely related to loop quantum gravity.

Spin foam derived from the Hamiltonian constraint operator

The Hamiltonian constraint generates `time' evolution. Solving the Hamiltonian constraint should tell us how quantum states evolve in `time' from an initial spin network state to a final spin network state. One approach to solving the Hamiltonian constraint starts with what is called the Dirac delta function. This is a rather singular function of the real line, denoted \delta (x), that is zero everywhere except at x = 0 but whose integral is finite and nonzero. It can be represented as a Fourier integral,
\delta (x) = {1 \over 2 \pi} \int e^{ikx} dk.
One can employ the idea of the delta function to impose the condition that the Hamiltonian constraint should vanish. It is obvious that
\prod_{x \in \Sigma} \delta (\hat{H} (x))
is non-zero only when \hat{H}(x) = 0 for all x in \Sigma. Using this we can `project' out solutions to the Hamiltonian constraint. With analogy to the Fourier integral given above, this (generalized) projector can formally be written as
\int [d N] e^{i \int d^3 x N (x) \hat{H} (x)}.
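In a finite-dimensional toy model (with an invented integer spectrum, purely for illustration) this kind of group averaging does produce a projector onto the kernel of the constraint:

```python
import numpy as np

# Toy self-adjoint 'constraint' with integer eigenvalues {0, 1, 2},
# taken diagonal so that exp(iNH) acts eigenvalue-by-eigenvalue
eigs = np.array([0.0, 1.0, 2.0])

# (1/2pi) * integral over N in [0, 2pi) of exp(i N H), on a uniform grid
Ns = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
P = np.exp(1j * np.outer(Ns, eigs)).mean(axis=0)

# P approximates the projector onto the zero eigenspace: diag(1, 0, 0)
```

For the genuinely continuous, non-commuting constraints of gravity this construction is only formal, as discussed below.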
Interestingly, this formal projector is spatially diffeomorphism-invariant. As such it can be applied at the spatially diffeomorphism-invariant level. Using it, the physical inner product is formally given by
\biggl\langle \int [d N] e^{i \int d^3 x N (x) \hat{H} (x)} s_{\text{int}}, s_{\text{fin}} \biggr\rangle_{\text{Diff}}
where s_{\text{int}} is the initial spin network and s_{\text{fin}} is the final spin network.
The exponential can be expanded
\biggl\langle \int [d N] (1 + i \int d^3 x N (x) \hat{H} (x) + {i^2 \over 2!} [\int d^3 x N (x) \hat{H} (x)] [\int d^3 x' N (x') \hat{H} (x')] + \dots) s_{\text{int}}, s_{\text{fin}} \biggr\rangle_{\text{Diff}}
and each time a Hamiltonian operator acts it does so by adding a new edge at a vertex. The summation over different sequences of actions of \hat{H} can be visualized as a summation over different histories of `interaction vertices' in the `time' evolution sending the initial spin network to the final spin network. This then naturally gives rise to the two-complex (a combinatorial set of faces that join along edges, which in turn join on vertices) underlying the spin foam description: we evolve forward an initial spin network, sweeping out a surface; the action of the Hamiltonian constraint operator is to produce a new planar surface starting at the vertex. We are able to use the action of the Hamiltonian constraint on the vertex of a spin network state to associate an amplitude to each "interaction" (in analogy to Feynman diagrams). See figure below. This opens up a way of trying to directly link canonical LQG to a path integral description. Now, just as spin networks describe quantum space, each configuration contributing to these path integrals, or sums over histories, describes `quantum space-time'. Because of their resemblance to soap foams and the way they are labeled, John Baez gave these `quantum space-times' the name `spin foams'.

The action of the Hamiltonian constraint translated to the path integral or so-called spin foam description. A single node splits into three nodes, creating a spin foam vertex. N (x_n) is the value of N at the vertex and H_{nop} are the matrix elements of the Hamiltonian constraint \hat{H}.

There are, however, severe difficulties with this particular approach: for example, the Hamiltonian operator is not self-adjoint, in fact it is not even a normal operator (i.e. the operator does not commute with its adjoint), and so the spectral theorem cannot be used to define the exponential in general. The most serious problem is that the \hat{H} (x)'s are not mutually commuting; it can then be shown that the formal quantity \int [d N] e^{i \int d^3 x N (x) \hat{H} (x)} cannot even define a (generalized) projector. The Master constraint (see below) does not suffer from these problems and as such offers a way of connecting the canonical theory to the path integral formulation.

Spin foams from BF theory

It turns out there are alternative routes to formulating the path integral; however, their connection to the Hamiltonian formalism is less clear. One way is to start with the so-called BF theory. This is a simpler theory than general relativity: it has no local degrees of freedom and as such depends only on topological aspects of the fields. BF theory is what is known as a topological field theory.
Surprisingly, it turns out that general relativity can be obtained from BF theory by imposing a constraint.[16] BF theory involves a field B_{ab}^{IJ}, and if one chooses the field B to be the (anti-symmetric) product of two tetrads
B_{ab}^{IJ} = {1 \over 2} (E^I_a E^J_b - E^I_b E^J_a)
(tetrads are like triads but in four spacetime dimensions), one recovers general relativity. The condition that the B field be given by the product of two tetrads is called the simplicity constraint. The spin foam dynamics of the topological field theory is well understood. Given the spin foam `interaction' amplitudes for this simple theory, one then tries to implement the simplicity conditions to obtain a path integral for general relativity. The non-trivial task of constructing a spin foam model is then reduced to the question of how this simplicity constraint should be imposed in the quantum theory. The first attempt at this was the famous Barrett–Crane model.[17] However this model was shown to be problematic, for example there did not seem to be enough degrees of freedom to ensure the correct classical limit.[18] It has been argued that the simplicity constraint was imposed too strongly at the quantum level and should only be imposed in the sense of expectation values just as with the Lorenz gauge condition \partial_\mu \hat{A}^\mu in the Gupta–Bleuler formalism of quantum electrodynamics. New models have now been put forward, sometimes motivated by imposing the simplicity conditions in a weaker sense.

Another difficulty here is that spin foams are defined on a discretization of spacetime. While this presents no problems for a topological field theory, as it has no local degrees of freedom, it presents problems for GR. This is known as the problem of triangulation dependence.

Modern formulation of spin foams

Just as imposing the classical simplicity constraint recovers general relativity from BF theory, one expects an appropriate quantum simplicity constraint will recover quantum gravity from quantum BF theory.

Much progress has been made with regard to this issue by Engle, Pereira, and Rovelli[19] and Freidel and Krasnov[20] in defining spin foam interaction amplitudes with much better behaviour.

An attempt to make contact between EPRL-FK spin foam and the canonical formulation of LQG has been made.[21]

Spin foam derived from the Master constraint operator

See below.

Spin foams from consistent discretisations

The semi-classical limit

What is the semiclassical limit?

The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters.[22] The classical limit is used with physical theories that predict non-classical behavior.
In physics, the correspondence principle states that the behavior of systems described by the theory of quantum mechanics (or by the old quantum theory) reproduces classical physics in the limit of large quantum numbers. In other words, it says that for large orbits and for large energies, quantum calculations must agree with classical calculations.[23]

The principle was formulated by Niels Bohr in 1920,[24] though he had previously made use of it as early as 1913 in developing his model of the atom.[25]

There are two basic requirements in establishing the semi-classical limit of any quantum theory:
i) reproduction of the Poisson brackets (of the diffeomorphism constraints in the case of general relativity). This is extremely important because, as noted above, the Poisson bracket algebra formed between the (smeared) constraints themselves completely determines the classical theory. This is analogous to establishing Ehrenfest's theorem;
ii) the specification of a complete set of classical observables whose corresponding operators (see complete set of commuting observables for the quantum mechanical definition of a complete set of observables) when acted on by appropriate semi-classical states reproduce the same classical variables with small quantum corrections (a subtle point is that states that are semi-classical for one class of observables may not be semi-classical for a different class of observables[26]).
This may be easily done, for example, in ordinary quantum mechanics for a particle but in general relativity this becomes a highly non-trivial problem as we will see below.

Why might LQG not have general relativity as its semiclassical limit?

Any candidate theory of quantum gravity must be able to reproduce Einstein's theory of general relativity as a classical limit of a quantum theory. This is not guaranteed because of a feature of quantum field theories, which is that they have different sectors; these are analogous to the different phases that come about in the thermodynamical limit of statistical systems. Just as different phases are physically different, so are different sectors of a quantum field theory. It may turn out that LQG belongs to an unphysical sector - one in which general relativity is not recovered in the semi-classical limit (in fact there might not be any physical sector at all).

Theorems establishing the uniqueness of the loop representation as defined by Ashtekar et al. (i.e. a certain concrete realization of a Hilbert space and associated operators reproducing the correct loop algebra - the realization that everybody was using) have been given by two groups (Lewandowski, Okolow, Sahlmann and Thiemann)[27] and (Christian Fleischhack).[28] Before this result was established it was not known whether there could be other examples of Hilbert spaces with operators invoking the same loop algebra, other realizations, not equivalent to the one that had been used so far. These uniqueness theorems imply no others exist and so if LQG does not have the correct semiclassical limit then this would mean the end of the loop representation of quantum gravity altogether.

Difficulties checking the semiclassical limit of LQG

There are a number of particular difficulties in trying to establish that LQG gives Einstein's theory of general relativity in the semi-classical limit:
  1. There is no operator corresponding to infinitesimal spatial diffeomorphisms (it is not surprising that the theory has no generator of infinitesimal spatial `translations' as it predicts spatial geometry has a discrete nature, compare to the situation in condensed matter). Instead it must be approximated by finite spatial diffeomorphisms and so the Poisson bracket structure of the classical theory is not exactly reproduced. This problem can be circumvented with the introduction of the so-called Master constraint (see below).[29]
  2. There is the problem of reconciling the discrete combinatorial nature of the quantum states with the continuous nature of the fields of the classical theory.
  3. There are serious difficulties arising from the structure of the Poisson brackets involving the spatial diffeomorphism and Hamiltonian constraints. In particular, the algebra of (smeared) Hamiltonian constraints does not close, it is proportional to a sum over infinitesimal spatial diffeomorphisms (which, as we have just noted, does not exist in the quantum theory) where the coefficients of proportionality are not constants but have non-trivial phase space dependence - as such it does not form a Lie algebra. However, the situation is much improved by the introduction of the Master constraint.[29]
  4. The semi-classical machinery developed so far is only appropriate to non-graph-changing operators; however, Thiemann's Hamiltonian constraint is a graph-changing operator - the new graph it generates has degrees of freedom upon which the coherent state does not depend, and so their quantum fluctuations are not suppressed. There is also the restriction, so far, that these coherent states are only defined at the kinematic level, and one now has to lift them to the level of \mathcal{H}_{Diff} and \mathcal{H}_{Phys}. It can be shown that Thiemann's Hamiltonian constraint is required to be graph-changing in order to resolve problem 3 in some sense. The Master constraint algebra, however, is trivial, and so the requirement that it be graph-changing can be lifted; indeed, non-graph-changing Master constraint operators have been defined.
  5. Formulating observables for classical general relativity is a formidable problem by itself because of its non-linear nature and space-time diffeomorphism invariance. In fact a systematic approximation scheme to calculate observables has only been recently developed.[30][31]
Difficulties in trying to examine the semi classical limit of the theory should not be confused with it having the wrong semi classical limit.

Progress in demonstrating LQG has the correct semiclassical limit

Concerning issue number 2 above, one can consider so-called weave states. Ordinary measurements of geometric quantities are macroscopic, and Planckian discreteness is smoothed out. The fabric of a T-shirt is analogous: at a distance it is a smooth curved two-dimensional surface, but on closer inspection we see that it is actually composed of thousands of one-dimensional linked threads. The image of space given in LQG is similar: consider a very large spin network formed by a very large number of nodes and links, each of Planck scale. Probed at a macroscopic scale, it appears as a three-dimensional continuous metric geometry.

As far as the editor knows, problem 4 of having semi-classical machinery for graph-changing operators is at the moment still out of reach.

To make contact with familiar low-energy physics it is mandatory to develop approximation schemes both for the physical inner product and for Dirac observables.

The spin foam models that have been intensively studied can be viewed as avenues toward approximation schemes for the physical inner product.

Markopoulou et al. adopted the idea of noiseless subsystems in an attempt to solve the problem of the low-energy limit in background-independent quantum gravity theories.[32][33][34] The idea has even led to the intriguing possibility of matter of the Standard Model being identified with emergent degrees of freedom from some versions of LQG (see the section below: LQG and related research programs).

As Wightman emphasized in the 1950s, in Minkowski-space QFTs the n-point functions
W (x_1, \dots , x_n) = \langle 0 | \phi (x_n) \dots \phi (x_1) |0 \rangle ,
completely determine the theory. In particular, one can calculate the scattering amplitudes from these quantities. As explained below in the section on background-independent scattering amplitudes, in the background-independent context the n-point functions refer to a state, and in gravity that state can naturally encode information about a specific geometry, which can then appear in the expressions for these quantities. To leading order, LQG calculations have been shown to agree in an appropriate sense with the n-point functions calculated in effective low-energy quantum general relativity.

Improved dynamics and the Master constraint

The Master constraint

Thiemann's Master constraint should not be confused with the Master equation to do with random processes. The Master Constraint Programme for Loop Quantum Gravity (LQG) was proposed as a classically equivalent way to impose the infinite number of Hamiltonian constraint equations
H (x) = 0
(x being a continuous index) in terms of a single Master constraint,
M = \int d^3x {[H (x)]^2 \over \sqrt{\operatorname{det}(q(x))}},
which involves the square of the constraints in question. Note that the H (x) are infinitely many, whereas the Master constraint is only one. It is clear that if M vanishes then so do the infinitely many H (x)'s. Conversely, if all the H (x)'s vanish then so does M; therefore they are equivalent. The Master constraint M involves an appropriate averaging over all space and so is invariant under spatial diffeomorphisms (it is invariant under spatial "shifts", being a summation over all such spatial "shifts" of a quantity that transforms as a scalar). Hence its Poisson bracket with the (smeared) spatial diffeomorphism constraint, C (\vec{N}), is simple:
\{ M  , C (\vec{N}) \} = 0.
(it is SU(2) gauge invariant as well). Also, since any quantity Poisson-commutes with itself and the Master constraint is a single constraint, it satisfies
\{ M  , M \} = 0.
We also have the usual algebra between spatial diffeomorphisms. This represents a dramatic simplification of the Poisson bracket structure, and raises new hope in understanding the dynamics and establishing the semi-classical limit.[35]

An initial objection to the use of the Master constraint was that on first sight it did not seem to encode information about the observables; because the Master constraint is quadratic in the constraints, when one computes its Poisson bracket with any quantity, the result is proportional to a constraint, and therefore it always vanishes when the constraints are imposed and as such does not select out particular phase space functions. However, it was realized that the condition
\{ \{ M  , O \} , O \}_{M = 0} = 0
is equivalent to O being a Dirac observable. So the Master constraint does capture information about the observables. Because of its significance this is known as the Master equation.[35]
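In a finite-dimensional toy model — two constraints C_1 = p_1, C_2 = p_2 on a two-particle phase space, an invented example purely for illustration — both the trivial Master algebra and the Master equation criterion can be checked symbolically:

```python
import sympy as sp

q1, p1, q2, p2 = sp.symbols('q1 p1 q2 p2')
pairs = [(q1, p1), (q2, p2)]

def pb(f, g):
    """Poisson bracket {f, g} = sum_i df/dq_i dg/dp_i - df/dp_i dg/dq_i."""
    return sp.expand(sum(sp.diff(f, q) * sp.diff(g, p)
                         - sp.diff(f, p) * sp.diff(g, q) for q, p in pairs))

C1, C2 = p1, p2                  # two toy constraints
M = C1**2 + C2**2                # the single Master constraint

# {M, C_i} = 0 and {M, M} = 0: the Master algebra is trivial
algebra_trivial = (pb(M, C1) == 0 and pb(M, C2) == 0 and pb(M, M) == 0)

on_shell = [(p1, 0), (p2, 0)]    # the constraint surface M = 0
O_good = p1 - p2                 # Dirac observable: commutes with C1 and C2
O_bad = q1                       # not an observable: {q1, C1} = 1
master_good = pb(pb(M, O_good), O_good).subs(on_shell)   # vanishes
master_bad = pb(pb(M, O_bad), O_bad).subs(on_shell)      # non-zero
```

The double bracket vanishes on the constraint surface exactly for the Dirac observable, as the Master equation asserts.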

That the Master constraint Poisson algebra is an honest Lie algebra opens up the possibility of using a certain method, known as group averaging, in order to construct solutions of the infinite number of Hamiltonian constraints, a physical inner product thereon, and Dirac observables, via what is known as refined algebraic quantization (RAQ).[36]

The quantum Master constraint

Define the quantum Master constraint (regularisation issues aside) as
\hat{M} := \int d^3x \widehat{\left( {H \over \det (q(x))^{1/4}} \right)}^\dagger (x) \widehat{\left( {H \over \det (q(x))^{1/4}} \right)} (x) .
Obviously,
\widehat{\left( {H \over \det (q(x))^{1/4}} \right)} (x) \Psi = 0
for all x implies \hat{M} \Psi = 0. Conversely, if \hat{M} \Psi = 0 then
0 = <\Psi , \hat{M} \Psi> = \int d^3x \left\| \widehat{\left( {H \over \det (q(x))^{1/4}} \right)} (x) \Psi \right\|^2
implies
\widehat{\left( {H \over \det (q(x))^{1/4}} \right)} (x) \Psi = 0 .
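The step from \hat{M} \Psi = 0 back to the individual constraints is just the statement that a positive operator of the form A^\dagger A has the same kernel as A. A minimal numerical sketch (arbitrary toy matrices, no LQG content):

```python
import numpy as np

rng = np.random.default_rng(0)

# A plays the role of the density-weighted Hamiltonian constraint operator,
# M = A^dagger A the Master constraint. With a 4x6 A, M is 6x6 of rank
# at most 4, so a nontrivial kernel is guaranteed.
A = rng.normal(size=(4, 6)) + 1j * rng.normal(size=(4, 6))
M = A.conj().T @ A

eigvals, eigvecs = np.linalg.eigh(M)    # eigenvalues in ascending order
psi = eigvecs[:, 0]                     # state with M psi = 0

# <psi, M psi> = ||A psi||^2, so annihilation by M forces annihilation by A.
assert abs(eigvals[0]) < 1e-10
assert np.linalg.norm(A @ psi) < 1e-6
```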
The first step is to compute the matrix elements of the would-be operator \hat{M}, that is, to compute the quadratic form Q_M. It turns out that, as Q_M is a graph-changing, diffeomorphism-invariant quadratic form, it cannot exist on the kinematic Hilbert space H_{Kin} and must be defined on H_{Diff}. Since the Master constraint operator \hat{M} is densely defined on H_{Diff}, \hat{M} is a positive and symmetric operator on H_{Diff}. Therefore, the quadratic form Q_M associated with \hat{M} is closable. The closure of Q_M is the quadratic form of a unique self-adjoint operator \hat{\overline{M}}, called the Friedrichs extension of \hat{M}. We relabel \hat{\overline{M}} as \hat{M} for simplicity.
It is also possible to construct a quadratic form Q_{M_E} for what is called the extended Master constraint (discussed below) on H_{Kin}, which also involves the weighted integral of the square of the spatial diffeomorphism constraint (this is possible because Q_{M_E} is not graph-changing).

The spectrum of the Master constraint may not contain zero due to normal or factor ordering effects which are finite but similar in nature to the infinite vacuum energies of background-dependent quantum field theories. In this case it turns out to be physically correct to replace \hat{M} with \hat{M}' := \hat{M} - min (spec (\hat{M})) \hat{1} provided that the "normal ordering constant" vanishes in the classical limit, that is, \lim_{\hbar \rightarrow 0} min (spec(\hat{M})) = 0, so that \hat{M}' is a valid quantisation of M.
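A toy illustration of this subtraction, with an arbitrary diagonal matrix standing in for \hat{M}:

```python
import numpy as np

# Toy positive operator whose spectrum misses zero, mimicking a finite
# normal-ordering offset (the numbers are arbitrary).
M = np.diag([0.5, 1.5, 2.5, 3.5])

offset = np.min(np.linalg.eigvalsh(M))       # min(spec(M)) = 0.5 here
M_prime = M - offset * np.eye(M.shape[0])    # M' = M - min(spec(M)) * 1

# Zero now lies in the spectrum, so the constraint M' = 0 admits solutions.
assert abs(np.min(np.linalg.eigvalsh(M_prime))) < 1e-12
```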

Testing the Master constraint

The constraints in their primitive form are rather singular; this was the reason for integrating them over test functions to obtain smeared constraints. However, the equation for the Master constraint, given above, would appear to be even more singular, involving the product of two primitive constraints (although integrated over space). Squaring the constraint is dangerous, as it could lead to worsened ultraviolet behaviour of the corresponding operator, and hence the Master constraint programme must be approached with due care.

In doing so, the Master constraint programme has been satisfactorily tested in a number of model systems with non-trivial constraint algebras, and in free and interacting field theories.[37][38][39][40][41] The Master constraint for LQG was established as a genuine positive self-adjoint operator, and the physical Hilbert space of LQG was shown to be non-empty,[42] an obvious consistency test LQG must pass to be a viable theory of quantum general relativity.

Applications of the Master constraint

The Master constraint has been employed in attempts to approximate the physical inner product and define more rigorous path integrals.[43][44][45][46]

The Consistent Discretizations approach to LQG[47][48] is an application of the Master constraint programme to construct the physical Hilbert space of the canonical theory.

Spin foam from the Master constraint

It turns out that the Master constraint is easily generalized to incorporate the other constraints. It is then referred to as the extended Master constraint, denoted M_E. We can define the extended Master constraint which imposes both the Hamiltonian constraint and spatial diffeomorphism constraint as a single operator,
M_E = \int_\Sigma d^3x {H (x)^2 + q^{ab} V_a (x) V_b (x) \over \sqrt{det (q)}}.
Setting this single constraint to zero is equivalent to H(x) = 0 and V_a (x) = 0 for all x in \Sigma. This constraint implements the spatial diffeomorphism and Hamiltonian constraint at the same time on the Kinematic Hilbert space. The physical inner product is then defined as
\langle\phi, \psi\rangle_{\text{Phys}} = \lim_{T \rightarrow \infty} \biggl\langle\phi, \int_{-T}^T dt e^{i t \hat{M}_E} \psi\biggr\rangle
(as \delta (\hat{M}_E) = \lim_{T \rightarrow \infty} \int_{-T}^T dt e^{i t \hat{M}_E}). A spin foam representation of this expression is obtained by splitting the t-parameter into discrete steps and writing
e^{i t \hat{M}_E} = \lim_{n \rightarrow \infty} [e^{i t \hat{M}_E / n}]^n = \lim_{n \rightarrow \infty} [1 + i t \hat{M}_E / n]^n.
The spin foam description then follows from applying [1 + i t \hat{M}_E / n] to a spin network, resulting in a linear combination of new spin networks whose graphs and labels have been modified.
An approximation is made by truncating the value of n to some finite integer. An advantage of the extended Master constraint is that we are working at the kinematic level, and so far it is only there that we have access to semiclassical coherent states. Moreover, one can find non-graph-changing versions of this Master constraint operator, which are the only type of operators appropriate for these coherent states.
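The splitting of the t-parameter can be checked in a toy finite-dimensional model, with an arbitrary symmetric matrix standing in for \hat{M}_E:

```python
import numpy as np

# Toy self-adjoint "constraint" with a zero eigenvalue (a "physical" state).
M_E = np.array([[2.0, 1.0, 0.0],
                [1.0, 2.0, 0.0],
                [0.0, 0.0, 0.0]])
t = 0.7

# Exact exponential e^{i t M_E} via the spectral decomposition.
w, V = np.linalg.eigh(M_E)
exact = V @ np.diag(np.exp(1j * t * w)) @ V.T

# [1 + i t M_E / n]^n converges to the exponential as n grows.
for n in (10, 100, 1000):
    approx = np.linalg.matrix_power(np.eye(3) + 1j * t * M_E / n, n)
    print(n, np.linalg.norm(approx - exact))   # error shrinks roughly like 1/n
```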

Algebraic quantum gravity

The Master constraint programme has evolved into a fully combinatorial treatment of gravity known as Algebraic Quantum Gravity (AQG).[49] The non-graph-changing Master constraint operator is adapted in the framework of algebraic quantum gravity. While AQG is inspired by LQG, it differs drastically from it because in AQG there is fundamentally no topology or differential structure - it is background independent in a more generalized sense and could possibly have something to say about topology change. In this new formulation of quantum gravity, AQG semiclassical states always control the fluctuations of all present degrees of freedom. This makes the AQG semiclassical analysis superior to that of LQG, and progress has been made in establishing that it has the correct semiclassical limit and in providing contact with familiar low-energy physics.[50][51] See Thiemann's book for details.

Physical applications of LQG

Black hole entropy

The Immirzi parameter (also known as the Barbero-Immirzi parameter) is a numerical coefficient appearing in loop quantum gravity. It may take real or imaginary values.

An artist's depiction of two black holes merging, a process in which the laws of thermodynamics are upheld.

Black hole thermodynamics is the area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. The no hair conjecture of general relativity states that a black hole is characterized only by its mass, its charge, and its angular momentum; hence, it has no entropy. It appears, then, that one can violate the second law of thermodynamics by dropping an object with nonzero entropy into a black hole.[52] Work by Stephen Hawking and Jacob Bekenstein showed that one can preserve the second law of thermodynamics by assigning to each black hole a black-hole entropy
S_{\text{BH}} = \frac{k_{\text{B}}A}{4\ell_{\text{P}}^2},
where A is the area of the hole's event horizon, k_{\text{B}} is the Boltzmann constant, and \ell_{\text{P}} = \sqrt{G\hbar/c^{3}} is the Planck length.[53] The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle.[52]
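As a numerical illustration of the Bekenstein-Hawking formula above, one can evaluate S_{\text{BH}} for a solar-mass Schwarzschild black hole (rounded constant values, so the result is an order-of-magnitude estimate):

```python
import math

# Rounded SI values of the fundamental constants.
G    = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34      # J s
c    = 2.998e8        # m s^-1
k_B  = 1.381e-23      # J K^-1

l_P = math.sqrt(G * hbar / c**3)          # Planck length, ~1.6e-35 m

M_sun = 1.989e30                          # kg
r_s = 2 * G * M_sun / c**2                # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2                  # horizon area

S = k_B * A / (4 * l_P**2)                # Bekenstein-Hawking entropy
print(f"l_P  = {l_P:.3e} m")
print(f"S/kB = {S / k_B:.3e}")            # ~1e77, dimensionless
```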

An oversight in the application of the no-hair theorem is the assumption that the relevant degrees of freedom accounting for the entropy of the black hole must be classical in nature; what if they were purely quantum mechanical instead and had non-zero entropy? This is, in fact, what is realized in the LQG derivation of black hole entropy, and it can be seen as a consequence of the theory's background independence - the classical black hole spacetime comes about as the semiclassical limit of the quantum state of the gravitational field, but there are many quantum states that have the same semiclassical limit. Specifically, in LQG[54] it is possible to associate a quantum geometrical interpretation to the microstates: these are the quantum geometries of the horizon which are consistent with the area, A, of the black hole and the topology of the horizon (i.e. spherical). LQG offers a geometric explanation of the finiteness of the entropy and of its proportionality to the area of the horizon.[55][56] These calculations have been generalized to rotating black holes.[57]

Representation of quantum geometries of the horizon. Polymer excitations in the bulk puncture the horizon, endowing it with quantized area. Intrinsically the horizon is flat except at punctures where it acquires a quantized deficit angle or quantized amount of curvature. These deficit angles add up to 4 \pi.
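The "quantized area" in the caption can be made concrete with the LQG area spectrum, A = 8 \pi \gamma \ell_P^2 \sum_i \sqrt{j_i (j_i + 1)}, where \gamma is the Immirzi parameter and the sum runs over the punctures. A small sketch (the value of \gamma below is one quoted in the literature from entropy counting, used purely as an input):

```python
import math

gamma = 0.2375   # an Immirzi-parameter value quoted in the literature (an input, not a prediction)
l_P2 = 1.0       # work in units where the squared Planck length is 1

def horizon_area(spins):
    """Area eigenvalue for a horizon punctured by edges carrying spins j_i:
    A = 8*pi*gamma*l_P^2 * sum_i sqrt(j_i*(j_i + 1))."""
    return 8 * math.pi * gamma * l_P2 * sum(math.sqrt(j * (j + 1)) for j in spins)

# Each spin-1/2 puncture contributes the same minimal quantum of area,
# so four such punctures carry four quanta:
one = horizon_area([0.5])
print(round(horizon_area([0.5, 0.5, 0.5, 0.5]) / one, 10))   # 4.0
```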

It is possible to derive, from the covariant formulation of the full quantum theory (spin foam), the correct relation between energy and area (the first law), the Unruh temperature, and the distribution that yields the Hawking entropy.[58] The calculation makes use of the notion of a dynamical horizon and is done for non-extremal black holes.

A recent success of the theory in this direction is the computation of the entropy of all non-singular black holes directly from the theory, independently of the Immirzi parameter.[59] The result is the expected formula S = A/4, where S is the entropy and A the area of the black hole, derived by Bekenstein and Hawking on heuristic grounds. This is the only known derivation of this formula from a fundamental theory for the case of generic non-singular black holes. Older attempts at this calculation had difficulties: although loop quantum gravity predicted that the entropy of a black hole is proportional to the area of the event horizon, the result depended on a crucial free parameter in the theory, the above-mentioned Immirzi parameter. However, there is no known computation of the Immirzi parameter, so it had to be fixed by demanding agreement with Bekenstein and Hawking's calculation of the black hole entropy.

Loop quantum cosmology

The popular and technical literature makes extensive reference to the LQG-related topic of loop quantum cosmology. LQC was mainly developed by Martin Bojowald, and was popularized in Scientific American for predicting a Big Bounce prior to the Big Bang. Loop quantum cosmology (LQC) is a symmetry-reduced model of classical general relativity, quantized using methods that mimic those of loop quantum gravity (LQG), that predicts a "quantum bridge" between contracting and expanding cosmological branches.
Achievements of LQC have been the resolution of the Big Bang singularity, the prediction of a Big Bounce, and a natural mechanism for inflation.
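The Big Bounce can be read off the effective LQC Friedmann equation, H^2 = (8 \pi G / 3) \rho (1 - \rho / \rho_c), a standard result of the model, where \rho_c is the critical density. A dimensionless sketch verifying that the expansion rate vanishes at \rho = \rho_c:

```python
import numpy as np

# Units: 8*pi*G/3 = 1 and rho_c = 1, so H^2 = rho * (1 - rho).
rho = np.linspace(0.0, 1.0, 101)
H2 = rho * (1.0 - rho)

# H^2 stays non-negative and vanishes at rho = rho_c: the density cannot
# exceed rho_c, and the contracting branch bounces instead of reaching
# the classical big-bang singularity (where H^2 ~ rho would diverge).
assert np.all(H2 >= 0)
assert H2[-1] == 0.0
print("H^2 is maximal at rho =", rho[np.argmax(H2)])   # rho_c / 2
```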

LQC models share features of LQG, and so LQC is a useful toy model. However, the results obtained are subject to the usual restriction that a truncated classical theory, then quantized, might not display the true behaviour of the full theory, due to artificial suppression of degrees of freedom that might have large quantum fluctuations in the full theory. It has been argued that singularity avoidance in LQC comes about by mechanisms only available in these restrictive models, and that singularity avoidance in the full theory can still be obtained, but by a more subtle feature of LQG.[60][61]

Loop Quantum Gravity phenomenology

Quantum gravity effects are notoriously difficult to measure because the Planck length is so incredibly small. However, physicists have recently started to consider the possibility of measuring quantum gravity effects, mostly from astrophysical observations and gravitational wave detectors.

Background independent scattering amplitudes

Loop quantum gravity is formulated in a background-independent language. No spacetime is assumed a priori; rather, it is built up by the states of the theory themselves. However, scattering amplitudes are derived from n-point functions (correlation functions), and these, formulated in conventional quantum field theory, are functions of points of a background spacetime. The relation between the background-independent formalism and the conventional formalism of quantum field theory on a given spacetime is far from obvious, and it is far from obvious how to recover low-energy quantities from the full background-independent theory. One would like to derive the n-point functions of the theory from the background-independent formalism, in order to compare them with the standard perturbative expansion of quantum general relativity and therefore check that loop quantum gravity yields the correct low-energy limit.

A strategy for addressing this problem has been suggested;[62] the idea is to study the boundary amplitude, namely a path integral over a finite spacetime region, seen as a function of the boundary value of the field.[63] In conventional quantum field theory, this boundary amplitude is well-defined[64][65] and codes the physical information of the theory; it does so in quantum gravity as well, but in a fully background-independent manner.[66] A generally covariant definition of n-point functions can then be based on the idea that the distance between physical points - the arguments of the n-point function - is determined by the state of the gravitational field on the boundary of the spacetime region considered.

Progress has been made in calculating background independent scattering amplitudes this way with the use of spin foams. This is a way to extract physical information from the theory. Claims to have reproduced the correct behaviour for graviton scattering amplitudes and to have recovered classical gravity have been made. "We have calculated Newton's law starting from a world with no space and no time." - Carlo Rovelli.

Planck stars

Carlo Rovelli has written a paper proposing that inside a black hole is a Planck star, which, if correct, would resolve the black hole firewall and black hole information paradoxes.

Gravitons, string theory, supersymmetry, extra dimensions in LQG

Some quantum theories of gravity posit a spin-2 quantum field that is quantized, giving rise to gravitons. In string theory one generally starts with quantized excitations on top of a classically fixed background. This theory is thus described as background dependent. Particles like photons as well as changes in the spacetime geometry (gravitons) are both described as excitations on the string worldsheet. While string theory is "background dependent", the choice of background, like a gauge fixing, does not affect the physical predictions. This is not the case, however, for quantum field theories, which give different predictions for different backgrounds. In contrast, loop quantum gravity, like general relativity, is manifestly background independent, eliminating the (in some sense) "redundant" background required in string theory. Loop quantum gravity, like string theory, also aims to overcome the nonrenormalizable divergences of quantum field theories.
LQG never introduces a background and excitations living on this background, so LQG does not use gravitons as building blocks. Instead one expects that one may recover a kind of semiclassical limit or weak field limit where something like "gravitons" will show up again. In contrast, gravitons play a key role in string theory where they are among the first (massless) level of excitations of a superstring.

LQG differs from string theory in that it is formulated in 3 and 4 dimensions and without supersymmetry or Kaluza–Klein extra dimensions, while the latter requires both. There is no experimental evidence to date that confirms string theory's predictions of supersymmetry and Kaluza–Klein extra dimensions. In a 2003 paper, "A dialog on quantum gravity",[67] Carlo Rovelli regards the fact that LQG is formulated in 4 dimensions and without supersymmetry as a strength of the theory, since it represents the most parsimonious explanation consistent with current experimental results, over its rival string/M-theory. Proponents of string theory will often point to the fact that, among other things, it demonstrably reproduces the established theories of general relativity and quantum field theory in the appropriate limits, which loop quantum gravity has struggled to do. In that sense, string theory's connection to established physics may be considered more reliable and less speculative at the mathematical level. Peter Woit in Not Even Wrong and Lee Smolin in The Trouble with Physics regard string/M-theory to be in conflict with currently known experimental results.

Since LQG has been formulated in 4 dimensions (with and without supersymmetry), and M-theory requires supersymmetry and 11 dimensions, a direct comparison between the two has not been possible. It is possible to extend the mainstream LQG formalism to higher-dimensional supergravity - general relativity with supersymmetry and Kaluza–Klein extra dimensions - should experimental evidence establish their existence. It would therefore be desirable to have higher-dimensional supergravity loop quantizations at one's disposal in order to compare these approaches. In fact, a series of recent papers have been published attempting just this.[68][69][70][71][72][73][74][75] Most recently, Thiemann et al. have made progress toward calculating black hole entropy for supergravity in higher dimensions. It will be interesting to compare these results to the corresponding superstring calculations.[76][77]

As of April 2013, the LHC has failed to find evidence of supersymmetry or Kaluza–Klein extra dimensions, which has encouraged LQG researchers. Shaposhnikov, in his paper "Is there a new physics between electroweak and Planck scales?", has proposed the neutrino minimal standard model,[78] which claims that the most parsimonious theory is the standard model extended with neutrinos, plus gravity; that extra dimensions, GUT physics, supersymmetry, and string/M-theory physics are unrealized in nature; and that any theory of quantum gravity must be four-dimensional, like loop quantum gravity.

LQG and related research programs

Several research groups have attempted to combine LQG with other research programs: Johannes Aastrup, Jesper M. Grimstrup et al. combine noncommutative geometry with loop quantum gravity;[79] Laurent Freidel, Simone Speziale, et al., spinors and twistor theory with loop quantum gravity;[80] and Lee Smolin et al., Verlinde's entropic gravity with loop gravity.[81] Stephon Alexander, Antonino Marciano and Lee Smolin have attempted to explain the origins of weak force chirality in terms of Ashtekar's variables, which describe gravity as chiral,[82] and to combine LQG with Yang–Mills fields[83] in four dimensions. Sundance Bilson-Thompson, Hackett et al.[84][85] have attempted to introduce the standard model via LQG's degrees of freedom as an emergent property (by employing the idea of noiseless subsystems, a notion introduced in a more general situation for constrained systems by Fotini Markopoulou-Kalamara et al.[86]). LQG has also drawn philosophical comparisons with causal dynamical triangulation[87] and asymptotically safe gravity,[88] and the spinfoam with group field theory and the AdS/CFT correspondence.[89] Smolin and Wen have suggested combining LQG with string-net liquids, tensors, and Smolin and Fotini Markopoulou-Kalamara's Quantum Graphity. There is also the consistent discretizations approach. In addition to what has already been mentioned above, Pullin and Gambini provide a framework to connect the path integral and canonical approaches to quantum gravity, which may help reconcile the spin foam and canonical loop representation approaches. Recent research by Chris Duston and Matilde Marcolli introduces topology change via topspin networks.[90]

Problems and comparisons with alternative approaches

Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail.
Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)?[7] Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very small or very large scales or in other extreme circumstances that flow from a quantum gravity theory?

The theory of LQG is one possible solution to the problem of quantum gravity, as is string theory. There are substantial differences however. For example, string theory also addresses unification, the understanding of all known forces and particles as manifestations of a single entity, by postulating extra dimensions and so-far unobserved additional particles and symmetries. Contrary to this, LQG is based only on quantum theory and general relativity and its scope is limited to understanding the quantum aspects of the gravitational interaction. On the other hand, the consequences of LQG are radical, because they fundamentally change the nature of space and time and provide a tentative but detailed physical and mathematical picture of quantum spacetime.

Presently, no semiclassical limit recovering general relativity has been shown to exist. This means it remains unproven that LQG's description of spacetime at the Planck scale has the right continuum limit (described by general relativity with possible quantum corrections). Specifically, the dynamics of the theory is encoded in the Hamiltonian constraint, but there is no candidate Hamiltonian.[91] Other technical problems include finding off-shell closure of the constraint algebra and the physical inner product vector space, coupling to the matter fields of quantum field theory, and the fate of the renormalization of the graviton in perturbation theory, which leads to ultraviolet divergences beyond 2 loops.[91]

While there has been a recent proposal relating to observation of naked singularities,[92] and doubly special relativity as a part of a program called loop quantum cosmology, there is no experimental observation for which loop quantum gravity makes a prediction not made by the Standard Model or general relativity (a problem that plagues all current theories of quantum gravity). Because of the above-mentioned lack of a semiclassical limit, LQG has not yet even reproduced the predictions made by general relativity.

An alternative criticism is that general relativity may be an effective field theory, and therefore quantization ignores the fundamental degrees of freedom.

Bjørn Lomborg


From Wikipedia, the free encyclopedia

Bjørn Lomborg
Born 6 January 1965 (age 50)
Frederiksberg, Denmark
Occupation Author, Researcher, Analyst
Subject Environmental Economics

Bjørn Lomborg (Danish: [bjɶɐ̯n ˈlʌmbɒˀw]; born 6 January 1965) is the director of the Copenhagen Consensus Center and a former director of the Environmental Assessment Institute in Copenhagen. He became internationally known for his best-selling and controversial book, The Skeptical Environmentalist (2001).

In 2002, Lomborg and the Environmental Assessment Institute founded the Copenhagen Consensus, a project-based conference where prominent economists sought to establish priorities for advancing global welfare using methods based on the theory of welfare economics.

Lomborg campaigned against the Kyoto Protocol and other measures to cut carbon emissions in the short term. He argued instead for adaptation to short-term temperature rises, which he regards as inevitable, and for spending money on research and development for longer-term environmental solutions, as well as on other important world problems such as AIDS, malaria and malnutrition. In his critique of the 2012 United Nations Conference on Environment and Development, Lomborg stated: "Global warming is by no means our main environmental threat."[1]

In the chapter on climate change in The Skeptical Environmentalist, he states: "This chapter accepts the reality of man-made global warming but questions the way in which future scenarios have been arrived at and finds that forecasts of climate change of 6 degrees by the end of the century are not plausible".[2] Cost–benefit analyses, calculated by the Copenhagen Consensus, ranked climate mitigation initiatives lowest on a list of international development initiatives when first done in 2004.[3] In a 2010 interview with the New Statesman, Lomborg summarized his position on climate change: "Global warming is real – it is man-made and it is an important problem. But it is not the end of the world."[4]

Academic career

Lomborg spent a year as an undergraduate at the University of Georgia, earned an M.A. degree in political science at the University of Aarhus in 1991, and a Ph.D. degree in political science at the University of Copenhagen in 1994.

He lectured in statistics in the Department of Political Science at the University of Aarhus as an assistant professor (1994–1996) and associate professor (1997–2005). He left the university in February 2005 and in May of that year became an Adjunct Professor at Copenhagen Business School.
Early in his career his professional areas of interest lay in the simulation of strategies in collective action dilemmas, simulation of party behavior in proportional voting systems, and the use of surveys in public administration. In 1996, Lomborg's paper, "Nucleus and Shield: Evolution of Social Structure in the Iterated Prisoner's Dilemma", was published in the academic journal, American Sociological Review.[5]

Later Lomborg's interests shifted to the use of statistics in the environmental arena. His most famous book in this area is The Skeptical Environmentalist, whose English translation was published as a work in environmental economics by Cambridge University Press in 2001. He later edited Global Crises, Global Solutions, which presented the first conclusions of the Copenhagen Consensus, published in 2004 by the Cambridge University Press. In 2007, he authored a book entitled Cool It: The Skeptical Environmentalist's Guide to Global Warming.

The Skeptical Environmentalist

In 1998, Lomborg published four essays about the state of the environment in the leading Danish newspaper Politiken, which according to him "resulted in a firestorm debate spanning over 400 articles in major metropolitan newspapers."[6]

In 2001, he attained significant attention by publishing The Skeptical Environmentalist, a controversial book whose main thesis is that many of the most-publicized claims and predictions on environmental issues are wrong.

Formal accusations of scientific dishonesty

After the publication of The Skeptical Environmentalist, Lomborg was formally accused of scientific dishonesty by a group of environmental scientists, who brought a total of three complaints against him to the Danish Committees on Scientific Dishonesty (DCSD), a body under Denmark's Ministry of Science, Technology and Innovation (MSTI). Lomborg was asked whether he regarded the book as a "debate" publication, and thereby not under the purview of the DCSD, or as a scientific work; he chose the latter, clearing the way for the inquiry that followed.[7] The charges claimed that The Skeptical Environmentalist contained deliberately misleading data and flawed conclusions. Due to the similarity of the complaints, the DCSD decided to proceed on the three cases under one investigation.

In January 2003, the DCSD released a ruling that sent a mixed message, finding the book to be scientifically dishonest through misrepresentation of scientific facts, but Lomborg himself not guilty due to his lack of expertise in the fields in question.[8] That February, Lomborg filed a complaint against the decision with the MSTI, which had oversight over the DCSD. In December 2003, the Ministry annulled the DCSD decision, citing procedural errors, including lack of documentation of errors in the book, and asked the DCSD to re-examine the case. In March 2004, the DCSD formally decided not to act further on the complaints, reasoning that renewed scrutiny would, in all likelihood, result in the same conclusion.[7][9]

Response of the scientific community

The original DCSD decision about Lomborg provoked a petition[10] among Danish academics. 308 scientists, many of them from the social sciences, criticised the DCSD's methods in the case and called for the DCSD to be disbanded.[11] The Danish Minister of Science, Technology, and Innovation then asked the Danish Research Agency (DRA) to form an independent working group to review DCSD practices.[12] In response to this, another group of Danish scientists collected over 600 signatures, primarily from the medical and natural sciences community, to support the continued existence of the DCSD and presented their petition to the DRA.[11]

Recognition

The alumni network of the Cambridge Programme for Sustainability Leadership (CPSL) voted The Skeptical Environmentalist among its list of the top 50 sustainability books.[13]

Continued debate and criticism

The rulings of the Danish authorities in 2003–2004 left Lomborg's critics frustrated. Lomborg claimed vindication as a result of MSTI's decision to set aside the original finding of DCSD.

The Lomborg Deception, a book by Howard Friel, claims to offer a "careful analysis" of the ways in which Lomborg has "selectively used (and sometimes distorted) the available evidence".[14] Lomborg has provided a 27-page argument-by-argument rebuttal. Friel has written a reply to this rebuttal,[15] in which he admits two errors, but otherwise in general rejects Lomborg's arguments.

A group of scientists published an article in 2005 in the Journal of Information Ethics,[16] in which they concluded that most criticism against Lomborg was unjustified, and that the scientific community misused their authority to suppress Lomborg.

The claim that the accusations against Lomborg were unjustified was challenged in the next issue of Journal of Information Ethics[17] by Kåre Fog, one of the original plaintiffs. Fog reasserted his contention that, despite the ministry's decision, most of the accusations against Lomborg were valid. He also rejected what he called "the Galileo hypothesis", which he describes as the conception that Lomborg is just a brave young man confronting old-fashioned opposition.

Further career

Government work

In March 2002, the newly elected center-right prime minister, Anders Fogh Rasmussen, appointed Lomborg to run Denmark's new Environmental Assessment Institute (EAI). On 22 June 2004, Lomborg announced his decision to resign from this post to go back to the University of Aarhus, saying his work at the Institute was done and that he could better serve the public debate from the academic sector.

In 2002, Lomborg and the Environmental Assessment Institute founded the Copenhagen Consensus, which seeks to establish priorities for advancing global welfare using methodologies based on the theory of welfare economics. A panel of prominent economists was assembled to evaluate and rank a series of problems every four years. The project was funded largely by the Danish government, and co-sponsored by The Economist. A book summarizing the conclusions of the economists' first assessment, Global Crises, Global Solutions, edited by Lomborg, was published in October 2004 by Cambridge University Press.

In 2006, Lomborg became director of the new Copenhagen Consensus Center, a Danish government-funded institute intended to build on the mandate of the Environmental Assessment Institute, and expand on the original Copenhagen Consensus conference.[18]

Further books

Solutions for the World's Biggest Problems, published in 2007, offers an "... overview of twenty-three of the world's biggest problems relating to the environment, governance, economics, and health and population. Leading economists provide a short survey of the state-of-the-art analysis and sketch out some policy solutions for which they provide cost-benefit ratios."[19]

Cool It: The Skeptical Environmentalist's Guide to Global Warming, also published in 2007, argues against taking immediate and "drastic" action to curb greenhouse gases while simultaneously stating that "Global warming is happening. It's a serious and important problem ...". He argues that "... the cost and benefits of the proposed measures against global warming. ... is the worst way to spend our money. Climate change is a 100-year problem — we should not try to fix it in 10 years."[20]

Howard Friel wrote a book entitled The Lomborg Deception, which criticizes Lomborg, claiming that the sources Lomborg provides in the footnotes do not support—and in some cases are in direct contradiction to—Lomborg's assertions in the text of the book;[21] Lomborg has denied these claims in a public rebuttal.[22]

Personal life

Lomborg is openly gay and a vegetarian.[23] As a public figure he has been a participant in information campaigns in Denmark about homosexuality, and states that "Being a public gay is to my view a civic responsibility. It's important to show that the width of the gay world cannot be described by a tired stereotype, but goes from leather gays on parade-wagons to suit-and-tie yuppies on the executive floor, as well as everything in between."[24]

Discussions in the media

After the release of The Skeptical Environmentalist in 2001, Lomborg was subjected to intense scrutiny and criticism in the media, where his scientific qualifications and integrity were both attacked and defended. The verdict of the Danish Committees for Scientific Dishonesty fueled this debate and brought it into the spotlight of international mass media. By the end of 2003 Lomborg had become an international celebrity, with frequent appearances on radio, television and print media around the world.
  • Scientific American published strong criticism of Lomborg's book. Lomborg responded on his own website, quoting the article at such length that Scientific American threatened to sue for copyright infringement. Lomborg eventually removed the rebuttal from his website; it was later published in PDF format on Scientific American's site.[33] The magazine also printed a response to the rebuttal.[34]
  • The Economist defended Lomborg, claiming that the panel of experts that had criticised Lomborg in Scientific American was biased and had not actually countered the arguments of Lomborg's book. The Economist argued that the panel's opinion had come under no scrutiny at all, and that Lomborg's responses had not been reported.[35]
  • Penn & Teller: Bullshit!, the U.S. Showtime television programme, featured an episode entitled "Environmental Hysteria" in which Lomborg criticised what he claimed was environmentalists' refusal to accept a cost-benefit analysis of environmental questions, and stressed the need to prioritise some issues above others.[36]
  • Rolling Stone stated, "Lomborg pulls off the remarkable feat of welding the techno-optimism of the Internet age with a lefty's concern for the fate of the planet."[37]
  • The Union of Concerned Scientists strongly criticised The Skeptical Environmentalist, claiming it to be "seriously flawed and failing to meet basic standards of credible scientific analysis", accusing Lomborg of presenting data in a fraudulent way, using flawed logic and selectively citing non-peer-reviewed literature.[38] The review was conducted by Peter Gleick, Jerry D. Mahlman, Edward O. Wilson, Thomas Lovejoy, Norman Myers, Jeff Harvey, and Stuart Pimm. Lomborg countered that some of the scientists involved in this report were also named and criticised in The Skeptical Environmentalist, and thus had a vested interest in discrediting it and its author.[citation needed]

Documentary film

Bjørn Lomborg released a documentary feature film, Cool It, on 12 November 2010 in the US.[41][42] The film in part explicitly challenged Al Gore's 2006 Oscar-winning environmental awareness documentary, An Inconvenient Truth, and was frequently presented by the media in that light, as in the Wall Street Journal headline, "Controversial 'Cool It' Documentary Takes on 'An Inconvenient Truth'."[43][44] Reviews were mixed, with aggregate critics' scores of 51% from Rotten Tomatoes[45] and 61% from Metacritic.[46] The Atlantic review described it as "An urgent, intelligent, and entertaining account of the climate policy debate, with a strong focus on cost-effective solutions."[47] At the box office, Cool It's US release grossed $62,713 (An Inconvenient Truth grossed $24,146,161 in the US).[48][49]


Call for action: It’s time to March Against the March Against Monsanto

February 1, 2015 | Original link: http://geneticliteracyproject.org/2015/02/01/call-for-action-its-time-to-march-against-the-march-against-monsanto/
[Image: March Against March Against Monsanto]
For the last two years, protestors have marched under the banner of the March Against Monsanto (MAM) in coordinated demonstrations around the world in opposition to genetically engineered crops, the companies that make them or market them, and governments that approve their sale. Thousands of people have participated. While many protestors may have good intentions, hoping to improve the food system, the organizers of the March Against Monsanto and many prominent NGOs that promote this event often misrepresent biotechnology and farming.

It’s time to take back the science; it’s time to march against the March Against Monsanto. In this spirit, Karl Haro von Mogel, David Sutherland and Kavin Senapathy are planning a counter protest to take place on May 23, 2015—the same day as this year’s March Against Monsanto.

Karl is a research geneticist based in Madison, Wisconsin, and an active science communicator and public speaker. He spends much of his free time helping people understand the complicated science of GMOs.

David is a Chicago-based artist by profession and a vegan and animal-rights activist in his free time. His early activism included contributions to the anti-GMO movement, but after developing an interest in science and critical thinking, he had a change of heart and mind. Now his mission and passion is to undo those wrongs by demystifying the issues surrounding GMOs and biotech.

Kavin is a freelance writer, science communicator, and mother of two young children based in Madison, Wisconsin. She promotes the idea that critical thinking is key in raising well-rounded children, and that embracing biotechnology is imperative in this objective.

While all three come from diverse walks of life, they have united in this movement to promote evidence-based information about our food system and to combat fear-mongering about it.

The idea for the counter demonstration, originally conceived as a symbolic event in Chicago or Madison, has grown into a coordinated movement, with people around the world organizing events to coincide with and challenge the March Against Monsanto.

Issues surrounding food incite passion. Food intersects everything from culture to science, politics, land use, nutrition, genetics and history. But because food is so important to human well-being, it is crucial that the claims made about it be carefully scrutinized. The MAM organizers don’t do that. The organization’s mission statement is a collection of false claims and conspiratorial leanings, and the group actively encourages extremist dialogue. Instead of saving people from poison, it is poisoning the debate about our food.

A few of the myriad fallacies MAM promotes:
  • “Research studies have shown that Monsanto’s genetically-modified foods can lead to serious health conditions such as the development of cancer tumors, infertility and birth defects.”
  • “Organic and small farmers suffer losses while Monsanto continues to forge its monopoly over the world’s food supply, including exclusive patenting rights over seeds and genetic makeup.”
  • “Monsanto’s GMO seeds are harmful to the environment; for example, scientists have indicated they have caused colony collapse among the world’s bee population.”
[Image: sign reading "I want to know WTF I'm eating"]
We believe that these claims corrupt the debate and sidetrack the political will of concerned people who want meaningful changes to our food system. There are real problems with our food and distribution system. Inventing fictitious problems, or grossly exaggerating minor ones, or claiming that problems that affect all agriculture are unique to genetic engineering, has no place in a fair-minded public discussion. Obesity, malnutrition, lack of access to healthy food, poverty, environmental challenges and climate change are genuine concerns surrounding food. Misrepresenting science and scapegoating a single company, in this case Monsanto, as the root of all of these problems is not only wrong, but will make it harder for genuinely concerned citizens to address these real issues.

This is why we are taking to the streets ourselves, starting a new movement centered around educating the public about the facts about genetically engineered crops, real issues with food and agriculture and civil discussion. It’s time to oppose the fear-mongering and distortions promulgated by the March Against Monsanto. Global food security and the health of our food system are serious issues. Let’s rally to restore sanity, science and compassion.

MAM's messages are confused, misleading, and wrong.

Most of the marchers are concerned citizens who believe they are marching for justice, for public well-being, and against the specter of corporate greed. With their hearts in the right place and emotions running high, they take to their annual protest. Yet rather than delivering an accurate and coherent message, the March Against Monsanto is an annual spectacle of absurd claims, often expressed in scare slogans and signs.
[Image: protest sign reading "Quit trying to get in my genes"]

MAM uses images of children supposedly in imminent danger from consuming GM foods to play on the emotions of concerned but scientifically ill-educated parents. This type of imagery is representative of the irrational "genetic engineering will control you" rhetoric rampant at these marches. Meaningless slogans like "My DNA is not for sale" and "Quit trying to get in my genes" are catchy and invoke Orwellian Big Brother fears of malicious corporate bigwigs using genetic engineering to control the masses. While utterly unrealistic in a scientific sense, the vague allegation that Big Biotech can indirectly control the public via its genes has become a rallying cry at MAM events.
[Image: little girl next to a lab rat]

The repeated comparison of consumers to lab rats and science experiments has firmly cemented the notion that Big Ag unleashes products without regard to whether they are safe.

MAM frequently invokes heroes of the American Civil Rights movement, even though no attack on civil rights is taking place. In recent Facebook posts, MAM has used images of national heroes like Martin Luther King, Jr., and Rosa Parks to push its agenda. By quoting these leaders, MAM attempts to compare its movement to iconic and ongoing efforts to grant equal rights to all humans. MAM's effort serves only as an insult: co-opting social justice heroes is a deceitful tactic that is offensive to victims of true civil rights abuses.
[Image: Rosa Parks]

MAM leaders sow seeds of distrust about government and science. Not only do leaders paint the picture of Malevolent Monsanto, and Bad Big Ag, they also paint the government as conspiratorial cronies with an underhanded motivation to forgo public safety. It’s a reckless anarchism, drawing on tactics of the far right, that feeds cynicism.
[Image: sign reading "FDA stop poisoning our families"]

Those who use these tactics should be held accountable.

We are starting a movement to combat misinformation and fear-mongering, and promote science-based information on agricultural biotechnology:

The weight of evidence shows that the GMOs that people eat are safe. Just last week, in a survey of the scientist-members of the American Association for the Advancement of Science, the largest independent science organization in the world, 88 percent said GM foods are safe—a consensus higher than that for the belief that humans are mostly responsible for climate change.

We see the environmental and health advantages GM products provide, and we are excited about the promise these innovations hold—if the kind of hysteria and fear generated by groups like MAM doesn't scuttle it. Technologies like recombinant DNA techniques, gene editing and RNAi have the potential to become important tools in the challenge to feed the Earth's growing population and combat malnutrition in increasingly sustainable ways. While biotechnology comes across as daunting to many people, we believe scientists and savvy citizens can come together to demystify it and inform fellow concerned citizens about how "genetic enhancement" has helped shape agriculture as we know it for thousands of years.

Scientists and lay people alike have banded together on social media to spread awareness of the benefits of GMOs. Why not take this momentum a step further, to establish an organized movement?
[Image: tomato injected with a syringe]

Fringe activists like the leaders of March Against Monsanto spread unscientific propaganda, promote distrust of scientific consensus, use junk rhetoric, and leverage fear. Conversely, we hope to leverage the public’s thirst for reason and knowledge to showcase the potential of GMOs to alleviate hunger and promote sustainability. While fringe anti-GMO activists play on consumer ignorance using scary, fallacious imagery like syringes in tomatoes, we’re convinced the majority of the public is intelligent, albeit misinformed, and eager for an objective view of food, farming and biotechnology.

Upon its founding in 2013, March Against Monsanto issued a mission statement addressing the questions: “Why do we march?” and “What are solutions we advocate?” Here, we present the first draft of our Mission Statement:

Why we will march:
  • The weight of scientific evidence shows that GMOs are safe and beneficial. With reasonable public policies in place, biotechnology has the potential to help feed and nourish the world.
  • We oppose the fallacies propagated by March Against Monsanto and other activist opponents of biotechnology.
  • For too long, extremist advocacy groups have used the specter of “Monsanto” as a scapegoat to oppose GMOs and attack biotech supporters.
  • Ill-informed opponents of GMOs have manipulated the public discussion and the political process by flooding the internet and media with misinformation.
  • The growing chasm between the pro-science and anti-biotech camps does not promote solutions to global hunger and sustainability. An eye for an eye makes the whole world blind.
  • Biotech fears have harmful, real-world consequences. GM technologies are not a homogeneous product to be demonized, but a toolbox to be utilized to improve our current food system.
Solutions we advocate:
    • Call and write food companies and organizations that have kept GMO ingredients in their products to thank them for standing with science.
    • Call and write companies that may be debating whether to go GMO-free to encourage them to keep GM ingredients.
    • Food without fear! Aim for a diet high in fruits, vegetables and other wholesome components without worrying about GMOs (unless they add vitamins).
    • Educate friends and family about the science and benefits of GMOs in a non-confrontational way.
    • Encourage people to get involved, learn and be a part of the conversation about GMOs and food.
    • Support businesses, NGOs, and educational organizations that take an evidence-based stance on GMOs.
    • Offer your expertise. If you’re a scientist, farmer, teacher, or industry expert, donate your valuable time to speak at schools and conferences about the real role of biotechnology in agriculture.
“Why we march” and “Solutions we advocate” are evolving lists. As our movement grows, we will work together to refine our mission.

While our name–"March Against March Against Monsanto"–denotes our opposition to much of what MAM represents, we will communicate a positive and diplomatic message. MAMAM is our acronym, and it has more than one meaning; we also fancy ourselves the March Against Myths About Modification.

Further, MAMAM is not just a one-day event, but a movement. We hope it will resonate with media, soar in cyberspace and stimulate face-to-face conversations. We promote spreading awareness and sharing science-based information in a proactive, non-combative manner.

Call to action:

If you’d like to help, head to Facebook to join the movement. If you’re unable to organize or attend a local event this year, we’re also looking for artists and writers willing to help with publicity material, and potential sponsors willing to donate toward the costs of educational handouts and bottles of water for marchers. Remember, we’re still in the infancy of our movement. Please stay tuned to watch us grow and hone our messages!

Hopefully and sincerely,
Kavin Senapathy, Karl Haro von Mogel, and David Sutherland
