
Monday, July 29, 2024

Positive energy theorem

From Wikipedia, the free encyclopedia

The positive energy theorem (also known as the positive mass theorem) refers to a collection of foundational results in general relativity and differential geometry. Its standard form, broadly speaking, asserts that the gravitational energy of an isolated system is nonnegative, and can only be zero when the system has no gravitating objects. Although these statements are often thought of as being primarily physical in nature, they can be formalized as mathematical theorems which can be proven using techniques of differential geometry, partial differential equations, and geometric measure theory.

Richard Schoen and Shing-Tung Yau, in 1979 and 1981, were the first to give proofs of the positive mass theorem. Edward Witten, in 1982, gave the outline of an alternative proof, which was later filled in rigorously by mathematicians. Witten and Yau were each awarded the Fields Medal in part for their work on this topic.

An imprecise formulation of the Schoen-Yau / Witten positive energy theorem states the following:

Given an asymptotically flat initial data set, one can define the energy-momentum of each infinite region as an element of Minkowski space. Provided that the initial data set is geodesically complete and satisfies the dominant energy condition, each such element must be in the causal future of the origin. If any infinite region has null energy-momentum, then the initial data set is trivial in the sense that it can be geometrically embedded in Minkowski space.

The meaning of these terms is discussed below. There are alternative and non-equivalent formulations for different notions of energy-momentum and for different classes of initial data sets. Not all of these formulations have been rigorously proven, and it is currently an open problem whether the above formulation holds for initial data sets of arbitrary dimension.

Historical overview

The original proof of the theorem for ADM mass was provided by Richard Schoen and Shing-Tung Yau in 1979 using variational methods and minimal surfaces. Edward Witten gave another proof in 1981 based on the use of spinors, inspired by positive energy theorems in the context of supergravity. An extension of the theorem for the Bondi mass was given by Ludvigsen and James Vickers, Gary Horowitz and Malcolm Perry, and Schoen and Yau.

Gary Gibbons, Stephen Hawking, Horowitz and Perry proved extensions of the theorem to asymptotically anti-de Sitter spacetimes and to Einstein–Maxwell theory. The mass of an asymptotically anti-de Sitter spacetime is non-negative and equal to zero only for anti-de Sitter spacetime itself. In Einstein–Maxwell theory, for a spacetime with electric charge Q and magnetic charge P, the mass of the spacetime satisfies (in Gaussian units)

M ≥ √(Q² + P²),

with equality for the Majumdar–Papapetrou extremal black hole solutions.
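The saturation of this bound can be made concrete with the Reissner–Nordström family. The following SymPy sketch (an illustration only; Q and P denote the electric and magnetic charges, with Gaussian units and G = c = 1 assumed) solves for the horizon radii: they are real precisely when the mass satisfies M ≥ √(Q² + P²), and coincide in the extremal case.

```python
import sympy as sp

M, Q, P, r = sp.symbols('M Q P r', positive=True)

# Reissner-Nordstrom lapse function with electric charge Q and
# magnetic charge P (Gaussian units, G = c = 1 assumed)
f = 1 - 2*M/r + (Q**2 + P**2)/r**2

# horizon radii: roots of f = 0, i.e. r^2 - 2*M*r + (Q^2 + P^2) = 0
roots = sp.solve(sp.Eq(f, 0), r)

# By Vieta's formulas the roots are M +/- sqrt(M^2 - Q^2 - P^2):
# real (no naked singularity) exactly when M >= sqrt(Q^2 + P^2),
# and coincident in the extremal (Majumdar-Papapetrou-like) case.
print(sp.simplify(roots[0] + roots[1]))   # 2*M
print(sp.simplify(roots[0] * roots[1]))   # P**2 + Q**2
```

The check only verifies the elementary quadratic structure behind the bound; the theorem itself is of course a statement about general charged spacetimes, not just this family.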

Initial data sets

An initial data set consists of a Riemannian manifold (M, g) and a symmetric 2-tensor field k on M. One says that an initial data set (M, g, k):

  • is time-symmetric if k is zero
  • is maximal if trg k = 0
  • satisfies the dominant energy condition if

    Rg + (trg k)² − |k|g² ≥ 2 |divg k − d(trg k)|g,

where Rg denotes the scalar curvature of g.

Note that a time-symmetric initial data set (M, g, 0) satisfies the dominant energy condition if and only if the scalar curvature of g is nonnegative. One says that a Lorentzian manifold (M̄, ḡ) is a development of an initial data set (M, g, k) if there is a (necessarily spacelike) hypersurface embedding of M into M̄, together with a continuous unit normal vector field, such that the induced metric is g and the second fundamental form with respect to the given unit normal is k.
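For a concrete time-symmetric example, the following SymPy sketch (an illustration, not part of any proof) checks that the spatial Schwarzschild slice g = (1 + m/(2r))⁴ δ has vanishing scalar curvature, so it satisfies the dominant energy condition with equality:

```python
import sympy as sp

x, y, z, m = sp.symbols('x y z m', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
u = 1 + m / (2 * r)   # conformal factor of the Schwarzschild slice

# For a conformally flat 3-metric g = u^4 * delta, the scalar curvature
# is given by the standard identity R_g = -8 u^(-5) * (Laplacian of u),
# so the time-symmetric dominant energy condition R_g >= 0 reduces to
# u being superharmonic.
lap_u = sum(sp.diff(u, v, 2) for v in (x, y, z))
R_g = sp.simplify(-8 * u**-5 * lap_u)
print(R_g)   # 0 -- m/(2r) is harmonic away from the origin
```

Since m/(2r) is harmonic on ℝ³ minus the origin, the slice is scalar-flat: it is the borderline case of the nonnegative-scalar-curvature condition.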

This definition is motivated by Lorentzian geometry. Given a Lorentzian manifold (M̄, ḡ) of dimension n + 1 and a spacelike immersion f from a connected n-dimensional manifold M into M̄ which has a trivial normal bundle, one may consider the induced Riemannian metric g = f*ḡ as well as the second fundamental form k of f with respect to either of the two choices of continuous unit normal vector field along f. The triple (M, g, k) is an initial data set. According to the Gauss–Codazzi equations, one has

2Ḡ(ν, ν) = Rg + (trg k)² − |k|g²
Ḡ(ν, ⋅) = divg k − d(trg k),

where Ḡ denotes the Einstein tensor Ricḡ − ½Rḡḡ of ḡ and ν denotes the continuous unit normal vector field along f used to define k. So the dominant energy condition as given above is, in this Lorentzian context, identical to the assertion that Ḡ(ν, ⋅), when viewed as a vector field along f, is timelike or null and is oriented in the same direction as ν.

The ends of asymptotically flat initial data sets

In the literature there are several different notions of "asymptotically flat" which are not mutually equivalent. Usually it is defined in terms of weighted Hölder spaces or weighted Sobolev spaces.

However, there are some features which are common to virtually all approaches. One considers an initial data set (M, g, k) which may or may not have a boundary; let n denote its dimension. One requires that there is a compact subset K of M such that each connected component of the complement M ∖ K is diffeomorphic to the complement of a closed ball in Euclidean space ℝⁿ. Such connected components are called the ends of M.

Formal statements

Schoen and Yau (1979)

Let (M, g, 0) be a time-symmetric initial data set satisfying the dominant energy condition. Suppose that (M, g) is an oriented three-dimensional smooth Riemannian manifold-with-boundary, and that each boundary component has positive mean curvature. Suppose that it has one end, and it is asymptotically Schwarzschild in the following sense:

Suppose that K is an open precompact subset of M such that there is a diffeomorphism Φ : ℝ³ ∖ B1(0) → M ∖ K, and suppose that there is a number m such that the symmetric 2-tensor

hij = (Φ*g)ij − (1 + m/(2|x|))⁴ δij

on ℝ³ ∖ B1(0) is such that for any i, j, p, q, the functions |x|² hij(x), |x|³ ∂p hij(x), and |x|⁴ ∂p∂q hij(x) are all bounded.

Schoen and Yau's theorem asserts that m must be nonnegative. If, in addition, the functions |x|⁵ ∂p∂q∂r hij(x) are bounded for any i, j, p, q, r, then m must be positive unless the boundary of M is empty and (M, g) is isometric to ℝ³ with its standard Riemannian metric.

Note that the conditions on h assert that h, together with some of its derivatives, is small when |x| is large. Since h measures the defect between g in the coordinates Φ and the standard representation of the t = constant slice of the Schwarzschild metric, these conditions are a quantification of the term "asymptotically Schwarzschild". This can be interpreted in a purely mathematical sense as a strong form of "asymptotically flat", where the coefficient of the |x|⁻¹ part of the expansion of the metric is declared to be a constant multiple of the Euclidean metric, as opposed to a general symmetric 2-tensor.

Note also that Schoen and Yau's theorem, as stated above, is actually (despite appearances) a strong form of the "multiple ends" case. If (M, g) is a complete Riemannian manifold with multiple ends, then the above result applies to any single end, provided that there is a positive mean curvature sphere in every other end. This is guaranteed, for instance, if each end is asymptotically flat in the above sense; one can choose a large coordinate sphere as a boundary, and remove the corresponding remainder of each end until one has a Riemannian manifold-with-boundary with a single end.

Schoen and Yau (1981)

Let (M, g, k) be an initial data set satisfying the dominant energy condition. Suppose that (M, g) is an oriented three-dimensional smooth complete Riemannian manifold (without boundary); suppose that it has finitely many ends, each of which is asymptotically flat in the following sense.

Suppose that K is an open precompact subset of M such that M ∖ K has finitely many connected components M1, …, Mn, and for each i = 1, …, n there is a diffeomorphism Φi : ℝ³ ∖ B1(0) → Mi such that the symmetric 2-tensor hpq = (Φi*g)pq − δpq satisfies the following conditions:

  • |x| hpq(x), |x|² ∂l hpq(x), and |x|³ ∂l∂m hpq(x) are bounded for all l, m, p, q.

Also suppose that

  • |x|² (Φi*k)pq(x) and |x|³ ∂l(Φi*k)pq(x) are bounded for any l, p, q
  • |x|³ ((trg k) ∘ Φi)(x) and |x|⁴ ∂l((trg k) ∘ Φi)(x) are bounded for any l
  • |x|⁴ (Rg ∘ Φi)(x) is bounded.

The conclusion is that the ADM energy of each Mi, defined as

E(Mi) = (1/16π) limr→∞ ∫|x|=r Σp,q (∂q hpq − ∂p hqq) xp/|x| dA,

is nonnegative. Furthermore, supposing in addition that

  • |x|⁴ ∂l∂m∂n hpq(x) and |x|⁴ ∂l∂m(Φi*k)pq(x) are bounded for any l, m, n, p, q

the assumption that E(Mi) = 0 for some i ∈ {1, …, n} implies that n = 1, that M is diffeomorphic to ℝ³, and that Minkowski space ℝ³,¹ is a development of the initial data set (M, g, k).
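As a sanity check on the ADM surface integral E = (1/16π) lim ∮ Σp,q (∂q gpq − ∂p gqq) xp/|x| dA, the following numerical sketch (a finite-difference illustration with a hypothetical mass parameter, not part of the theorem) evaluates it for the Schwarzschild slice gij = (1 + m/(2|x|))⁴ δij on a large coordinate sphere and recovers m:

```python
import numpy as np

m = 2.0      # hypothetical mass parameter of the Schwarzschild slice
r = 1.0e4    # radius of the large coordinate sphere
eps = 1e-3   # finite-difference step

def metric(x):
    # spatial Schwarzschild slice in isotropic coordinates:
    # g_ij = (1 + m/(2|x|))^4 * delta_ij
    u = 1.0 + m / (2.0 * np.linalg.norm(x))
    return u**4 * np.eye(3)

def d_metric(x, p):
    # central-difference approximation of the partial derivative in direction p
    e = np.zeros(3)
    e[p] = eps
    return (metric(x + e) - metric(x - e)) / (2.0 * eps)

# Since delta_ij is constant, derivatives of h and of g coincide, so the
# ADM integrand can be evaluated from g directly.  By spherical symmetry
# it is constant on |x| = r, so sample one point and multiply by the area.
x = np.array([r, 0.0, 0.0])
n = x / r
dg = [d_metric(x, p) for p in range(3)]
integrand = sum((dg[q][p, q] - dg[p][q, q]) * n[p]
                for p in range(3) for q in range(3))
E_adm = integrand * 4.0 * np.pi * r**2 / (16.0 * np.pi)
print(E_adm)   # approximately m (exactly m in the limit r -> infinity)
```

At finite radius the integral evaluates to m(1 + m/(2r))³, which with these hypothetical values is about 2.0006 and converges to m as the sphere grows.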

Witten (1981)

Let (M, g) be an oriented three-dimensional smooth complete Riemannian manifold (without boundary). Let k be a smooth symmetric 2-tensor on M such that

Rg + (trg k)² − |k|g² ≥ 2 |divg k − d(trg k)|g.
Suppose that K is an open precompact subset of M such that M ∖ K has finitely many connected components M1, …, Mn, and for each i = 1, …, n there is a diffeomorphism Φi : ℝ³ ∖ B1(0) → Mi such that the symmetric 2-tensor hpq = (Φi*g)pq − δpq satisfies the following conditions:

  • |x| hpq(x) and |x|² ∂l hpq(x) are bounded for all l, p, q
  • |x|² (Φi*k)pq(x) and |x|³ ∂l(Φi*k)pq(x) are bounded for all l, p, q

For each i = 1, …, n define the ADM energy and linear momentum by

E(Mi) = (1/16π) limr→∞ ∫|x|=r Σp,q (∂q hpq − ∂p hqq) xp/|x| dA

P(Mi)l = (1/8π) limr→∞ ∫|x|=r Σp ((Φi*k)lp − ((trg k) ∘ Φi) δlp) xp/|x| dA.

For each i, consider (E(Mi), P(Mi)) as a vector in Minkowski space. Witten's conclusion is that each such vector is necessarily a future-pointing non-spacelike vector. If this vector is zero for some i, then n = 1, M is diffeomorphic to ℝ³, and the maximal globally hyperbolic development of the initial data set (M, g, k) has zero curvature.
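The conclusion amounts to a simple check on the pair (E, P). A trivial sketch with made-up numbers, just to fix the sign conventions:

```python
def in_causal_future(E, P):
    # future-pointing and non-spacelike vector in Minkowski space
    # with signature (+, -, -, -): E >= 0 and E^2 - |P|^2 >= 0
    return E >= 0 and E**2 - sum(p * p for p in P) >= 0

print(in_causal_future(2.0, (1.0, 0.0, 0.0)))  # True: future timelike
print(in_causal_future(1.0, (2.0, 0.0, 0.0)))  # False: spacelike
```

The borderline case E = 0, P = 0 also passes the check; by the rigidity statement it forces the initial data set to be trivial.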

Extensions and remarks

According to the above statements, Witten's conclusion is stronger than Schoen and Yau's. However, a third paper by Schoen and Yau shows that their 1981 result implies Witten's, retaining only an extra boundedness assumption on the derivatives of k. Note also that Schoen and Yau's 1981 result relies on their 1979 result, which is proved by contradiction; their extension of the 1981 result is therefore also by contradiction. By contrast, Witten's proof is logically direct, exhibiting the ADM energy directly as a nonnegative quantity. Furthermore, Witten's proof in the three-dimensional case can be extended without much effort to higher-dimensional manifolds, under the topological condition that the manifold admits a spin structure. Schoen and Yau's 1979 result and proof can be extended to the case of any dimension less than eight. More recently, Witten's result has been extended to the same context using Schoen and Yau (1981)'s methods. In summary: following Schoen and Yau's methods, the positive energy theorem has been proven in dimension less than eight, while following Witten, it has been proven in any dimension but with a restriction to the setting of spin manifolds.

As of April 2017, Schoen and Yau have released a preprint which proves the general higher-dimensional case, without any restriction on dimension or topology. However, it has not yet (as of May 2020) appeared in an academic journal.


Sunday, July 28, 2024

Holonomic brain theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Holonomic_brain_theory

Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry.

This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram, initially in collaboration with physicist David Bohm, building on the initial theories of holograms originally formulated by Dennis Gabor. It describes human cognition by modeling the brain as a holographic storage network.

Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform.

Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. In a hologram, any part of the hologram with sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. This model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain cluster of neurons).

Origins and development

In 1946 Dennis Gabor invented the hologram mathematically, describing a system where an image can be reconstructed through information that is stored throughout the hologram. He demonstrated that the information pattern of a three-dimensional object can be encoded in a beam of light, which is more-or-less two-dimensional. Gabor also developed a mathematical model for demonstrating a holographic associative memory. One of Gabor's colleagues, Pieter Jacobus Van Heerden, also developed a related holographic mathematical memory model in 1963. This model contained the key aspect of non-locality, which became important years later when, in 1967, experiments by both Braitenberg and Kirschfield showed that memory is not stored in any single exact location in the brain.

Karl Pribram had worked with psychologist Karl Lashley on Lashley's engram experiments, which used lesions to determine the exact location of specific memories in primate brains. Lashley made small lesions in the brains and found that these had little effect on memory. On the other hand, Pribram removed large areas of cortex, leading to multiple serious deficits in memory and cognitive function. Memories were not stored in a single neuron or exact location, but were spread over the entirety of a neural network. Lashley suggested that brain interference patterns could play a role in perception, but was unsure how such patterns might be generated in the brain or how they would lead to brain function.

Several years later an article by neurophysiologist John Eccles described how a wave could be generated at the branching ends of pre-synaptic axons. Multiple of these waves could create interference patterns. Soon after, Emmett Leith was successful in storing visual images through the interference patterns of laser beams, inspired by Gabor's previous use of Fourier transformations to store information within a hologram. After studying the work of Eccles and that of Leith, Pribram put forward the hypothesis that memory might take the form of interference patterns that resemble laser-produced holograms. In 1980, physicist David Bohm presented his ideas of holomovement and implicate and explicate order. Pribram became aware of Bohm's work in 1975 and realized that, since a hologram could store information within patterns of interference and then recreate that information when activated, it could serve as a strong metaphor for brain function. Pribram was further encouraged in this line of speculation by the fact that neurophysiologists Russell and Karen DeValois together established that "the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern."

Theory overview

Hologram and holonomy

Diagram of one possible hologram setup.

A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. Both processes of storage and retrieval are carried out in a way described by Fourier transformation equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, but the image may have unwanted changes, called noise.

An analogy to this is the broadcasting region of a radio antenna. In each smaller individual location within the entire area it is possible to access every channel, similar to how the entirety of the information of a hologram is contained within a part. Another analogy of a hologram is the way sunlight illuminates objects in the visual field of an observer. It doesn't matter how narrow the beam of sunlight is. The beam always contains all the information of the object, and when conjugated by a lens of a camera or the eyeball, produces the same full three-dimensional image. The Fourier transform formula converts spatial forms to spatial wave frequencies and vice versa, as all objects are in essence vibratory structures. Different types of lenses, acting similarly to optic lenses, can alter the frequency nature of information that is transferred.

This non-locality of information storage within the hologram is crucial, because even if most parts are damaged, the entirety will be contained within even a single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged. It is only when there exist no parts big enough to contain the whole that the memory is lost. This can also explain why some children retain normal intelligence when large portions of their brain—in some cases, half—are removed. It can also explain why memory is not lost when the brain is sliced in different cross-sections.
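The "part contains the whole" behavior can be sketched with a toy Fourier-plane model in NumPy (an illustration of the mathematical point only, not a claim about actual dendritic processing). Keeping only a fragment of the frequency-domain record still reconstructs the entire scene, with blur and noise:

```python
import numpy as np

# a simple 64x64 "scene": a bright rectangle
img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0

# frequency-domain record of the scene (the "hologram" in this toy model)
holo = np.fft.fftshift(np.fft.fft2(img))

# keep only a small central patch of the record and discard the rest,
# simulating a damaged hologram of which one piece survives
kept = np.zeros_like(holo)
kept[24:40, 24:40] = holo[24:40, 24:40]

# inverting the fragment still yields the whole scene, blurred and noisy
recon = np.fft.ifft2(np.fft.ifftshift(kept)).real
corr = np.corrcoef(img.ravel(), recon.ravel())[0, 1]
print(corr > 0.5)   # True: the fragment reconstructs a recognizable whole
```

The correlation between the original and the reconstruction stays high because each frequency-domain sample receives contributions from the entire scene; what is lost by cropping is resolution, not location.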

A single hologram can store 3D information in a 2D way. Such properties may explain some of the brain's abilities, including the ability to recognize objects at different angles and sizes than in the original stored memory.

Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. Representation occurs as a dynamical transformation in a distributed network of dendritic microprocesses. It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. This patch holography is called holonomy or windowed Fourier transformations.

A holographic model can also account for other features of memory that more traditional models cannot. The Hopfield memory model has an early memory saturation point beyond which memory retrieval drastically slows and becomes unreliable. On the other hand, holographic memory models have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and resemble forgetting through "lossy storage".
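The Hopfield saturation effect can be sketched numerically (a toy network with hypothetical sizes; the classical one-step capacity is roughly 0.14N patterns for N neurons):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # neurons in the toy network

def one_step_recall(P):
    # store P random +/-1 patterns with the Hebbian outer-product rule,
    # then check what fraction of one stored pattern survives one update
    pats = rng.choice([-1, 1], size=(P, N))
    W = (pats.T @ pats) / N
    np.fill_diagonal(W, 0)
    out = np.sign(W @ pats[0])
    return np.mean(out == pats[0])

print(one_step_recall(10))   # well under capacity: recall essentially perfect
print(one_step_recall(60))   # past ~0.14*N: recall errors appear
```

Below the capacity threshold the crosstalk between stored patterns is negligible; above it, retrieval degrades, which is the saturation point the text refers to.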

Synaptodendritic web

A few of the various types of synapses

In classic brain theory the summation of electrical inputs to the dendrites and soma (cell body) of a neuron either inhibit the neuron or excite it and set off an action potential down the axon to where it synapses with the next neuron. However, this fails to account for different varieties of synapses beyond the traditional axodendritic (axon to dendrite). There is evidence for the existence of other kinds of synapses, including serial synapses and those between dendrites and soma and between different dendrites. Many synaptic locations are functionally bipolar, meaning they can both send and receive impulses from each neuron, distributing input and output over the entire group of dendrites.

Processes in this dendritic arbor, the network of teledendrons and dendrites, occur due to the oscillations of polarizations in the membrane of the fine-fibered dendrites, not due to the propagated nerve impulses associated with action potentials. Pribram posits that the length of the delay of an input signal in the dendritic arbor before it travels down the axon is related to mental awareness. The shorter the delay the more unconscious the action, while a longer delay indicates a longer period of awareness. A study by David Alkon showed that after unconscious Pavlovian conditioning there was a proportionally greater reduction in the volume of the dendritic arbor, akin to synaptic elimination when experience increases the automaticity of an action. Pribram and others theorize that, while unconscious behavior is mediated by impulses through nerve circuits, conscious behavior arises from microprocesses in the dendritic arbor.

At the same time, the dendritic network is extremely complex, able to receive 100,000 to 200,000 inputs in a single tree, due to the large amount of branching and the many dendritic spines protruding from the branches. Furthermore, synaptic hyperpolarization and depolarization remains somewhat isolated due to the resistance from the narrow dendritic spine stalk, allowing a polarization to spread without much interruption to the other spines. This spread is further aided intracellularly by the microtubules and extracellularly by glial cells. These polarizations act as waves in the synaptodendritic network, and the existence of multiple waves at once gives rise to interference patterns.

Deep and surface structure of memory

Pribram suggests that there are two layers of cortical processing: a surface structure of separated and localized neural circuits and a deep structure of the dendritic arborization that binds the surface structure together. The deep structure contains distributed memory, while the surface structure acts as the retrieval mechanism. Binding occurs through the temporal synchronization of the oscillating polarizations in the synaptodendritic web. It had been thought that binding only occurred when there was no phase lead or lag present, but a study by Saul and Humphrey found that cells in the lateral geniculate nucleus do in fact produce these. Here phase lead and lag act to enhance sensory discrimination, acting as a frame to capture important features. These filters are also similar to the lenses necessary for holographic functioning.

Pribram notes that holographic memories show large capacities, parallel processing and content addressability for rapid recognition, associative storage for perceptual completion and for associative recall. In systems endowed with memory storage, these interactions therefore lead to progressively more self-determination.

Recent studies

While Pribram originally developed the holonomic brain theory as an analogy for certain brain processes, several papers (including some more recent ones by Pribram himself) have proposed that the similarity between hologram and certain brain functions is more than just metaphorical, but actually structural. Others still maintain that the relationship is only analogical. Several studies have shown that the same series of operations used in holographic memory models are performed in certain processes concerning temporal memory and optomotor responses. This indicates at least the possibility of the existence of neurological structures with certain holonomic properties. Other studies have demonstrated the possibility that biophoton emission (biological electrical signals that are converted to weak electromagnetic waves in the visible range) may be a necessary condition for the electric activity in the brain to store holographic images. These may play a role in cell communication and certain brain processes including sleep, but further studies are needed to strengthen current ones. Other studies have shown the correlation between more advanced cognitive function and homeothermy. Taking holographic brain models into account, this temperature regulation would reduce distortion of the signal waves, an important condition for holographic systems. See: Computation approach in terms of holographic codes and processing.

Criticism and alternative models

Pribram's holonomic model of brain function did not receive widespread attention at the time, but other quantum models have been developed since, including brain dynamics by Jibu & Yasue and Vitiello's dissipative quantum brain dynamics. Though not directly related to the holonomic model, they continue to move beyond approaches based solely in classic brain theory.

Correlograph

In 1969 scientists D. Wilshaw, O. P. Buneman and H. Longuet-Higgins proposed an alternative, non-holographic model that fulfilled many of the same requirements as Gabor's original holographic model. The Gabor model did not explain how the brain could use Fourier analysis on incoming signals or how it would deal with the low signal-to-noise ratio in reconstructed memories. Longuet-Higgins's correlograph model built on the idea that any system could perform the same functions as a Fourier holograph if it could correlate pairs of patterns. It uses minute pinholes that do not produce diffraction patterns to create a similar reconstruction as that in Fourier holography. Like a hologram, a discrete correlograph can recognize displaced patterns and store information in a parallel and non-local way, so it usually will not be destroyed by localized damage.

They then expanded the model beyond the correlograph to an associative net where the points become parallel lines arranged in a grid. Horizontal lines represent axons of input neurons while vertical lines represent output neurons, and each intersection represents a modifiable synapse. Though this cannot recognize displaced patterns, it has a greater potential storage capacity. This was not necessarily meant to show how the brain is organized, but instead to show the possibility of improving on Gabor's original model. One property of the associative net that makes it attractive as a neural model is that good retrieval can be obtained even when some of the storage elements are damaged or when some of the components of the address are incorrect.

P. Van Heerden countered this model by demonstrating mathematically that the signal-to-noise ratio of a hologram could reach 50% of ideal. He also used a model with a 2D neural hologram network for fast searching imposed upon a 3D network for large storage capacity. A key quality of this model was its flexibility to change the orientation and fix distortions of stored information, which is important for our ability to recognize an object as the same entity from different angles and positions, something the correlograph and associative net models lack.
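The damage tolerance of the associative net can be sketched with a toy Willshaw-style binary network (hypothetical sizes and thresholds, not the authors' original construction):

```python
import numpy as np

rng = np.random.default_rng(1)
N, ACTIVE = 64, 4   # neurons per layer, active units per sparse pattern

def sparse_pattern():
    v = np.zeros(N, dtype=int)
    v[rng.choice(N, size=ACTIVE, replace=False)] = 1
    return v

# store 8 input->output pairs by switching on synapses at grid intersections
pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(8)]
W = np.zeros((N, N), dtype=int)
for a, b in pairs:
    W |= np.outer(b, a)

a0, b0 = pairs[0]
perfect = (W @ a0 >= ACTIVE).astype(int)            # undamaged retrieval

# knock out 10% of the synapses; retrieval degrades gracefully
W_damaged = W * (rng.random((N, N)) > 0.10)
noisy = (W_damaged @ a0 >= ACTIVE - 1).astype(int)  # relaxed threshold

print(np.array_equal(perfect, b0))   # True: exact recall when intact
print(int((noisy & b0).sum()))       # most of b0's active units survive
```

With sparse patterns the intact net recalls the stored output exactly, and even after random synapse loss a lowered threshold recovers most of it, which is the graceful-degradation property mentioned above.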

1970s energy crisis

From Wikipedia, the free encyclopedia