Thursday, October 8, 2020

Numerical relativity

From Wikipedia, the free encyclopedia

Numerical relativity is one of the branches of general relativity that uses numerical methods and algorithms to solve and analyze problems. To this end, supercomputers are often employed to study black holes, gravitational waves, neutron stars and many other phenomena governed by Einstein's theory of general relativity. A currently active field of research in numerical relativity is the simulation of relativistic binaries and their associated gravitational waves. Other branches are also active.

Overview

A primary goal of numerical relativity is to study spacetimes whose exact form is not known. The spacetimes so found computationally can be fully dynamical, stationary, or static, and may contain matter fields or vacuum. In the case of stationary and static solutions, numerical methods may also be used to study the stability of the equilibrium spacetimes. In the case of dynamical spacetimes, the problem may be divided into the initial value problem and the evolution, each requiring different methods.

Numerical relativity is applied to many areas, such as cosmological models, critical phenomena, perturbed black holes and neutron stars, and the coalescence of black holes and neutron stars. In any of these cases, Einstein's equations can be formulated in several ways that allow us to evolve the dynamics. While Cauchy methods have received the majority of the attention, characteristic and Regge calculus-based methods have also been used. All of these methods begin with a snapshot of the gravitational fields on some hypersurface, the initial data, and evolve these data to neighboring hypersurfaces.

As with all problems in numerical analysis, careful attention is paid to the stability and convergence of the numerical solutions. Along these lines, much attention is paid to the gauge conditions, coordinates, and the various formulations of the Einstein equations, and to the effect they have on the ability to produce accurate numerical solutions.

Numerical relativity research is distinct from work on classical field theories as many techniques implemented in these areas are inapplicable in relativity. Many facets are however shared with large scale problems in other computational sciences like computational fluid dynamics, electromagnetics, and solid mechanics. Numerical relativists often work with applied mathematicians and draw insight from numerical analysis, scientific computation, partial differential equations, and geometry among other mathematical areas of specialization.

History

Foundations in theory

Albert Einstein published his theory of general relativity in 1915. It, like his earlier theory of special relativity, described space and time as a unified spacetime subject to what are now known as the Einstein field equations. These form a set of coupled nonlinear partial differential equations (PDEs). More than 100 years after the first publication of the theory, relatively few closed-form solutions are known for the field equations, and, of those, most are cosmological solutions that assume special symmetry to reduce the complexity of the equations.

The field of numerical relativity emerged from the desire to construct and study more general solutions to the field equations by approximately solving the Einstein equations numerically. A necessary precursor to such attempts was a decomposition of spacetime back into separated space and time. This was first published by Richard Arnowitt, Stanley Deser, and Charles W. Misner in the late 1950s in what has become known as the ADM formalism. Although for technical reasons the precise equations formulated in the original ADM paper are rarely used in numerical simulations, most practical approaches to numerical relativity use a "3+1 decomposition" of spacetime into three-dimensional space and one-dimensional time that is closely related to the ADM formulation, because the ADM procedure reformulates the Einstein field equations into a constrained initial value problem that can be addressed using computational methodologies.
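In the 3+1 picture, the four-dimensional metric is split into a spatial metric on each slice together with a lapse function and a shift vector that describe how the slices are stacked in time. In the conventional notation (standard usage, not taken from the text above), the line element reads

    ds^2 = -\alpha^2\,dt^2 + \gamma_{ij}\,(dx^i + \beta^i\,dt)(dx^j + \beta^j\,dt),

where \alpha is the lapse, \beta^i the shift vector, and \gamma_{ij} the metric of the spatial slice. The Einstein equations then split into constraint equations that the data on each slice must satisfy and evolution equations that carry those data from one slice to the next.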

At the time that ADM published their original paper, computer technology would not have supported numerical solution of their equations for any problem of substantial size. The first documented attempt to solve the Einstein field equations numerically appears to be Hahn and Lindquist in 1964, followed soon thereafter by Smarr and by Eppley. These early attempts were focused on evolving Misner data in axisymmetry (also known as "2+1 dimensions"). At around the same time Tsvi Piran wrote the first code that evolved a system with gravitational radiation using a cylindrical symmetry. In this calculation Piran set the foundation for many of the concepts used today in evolving the ADM equations, such as "free evolution" versus "constrained evolution", which deal with the fundamental problem of treating the constraint equations that arise in the ADM formalism. Applying symmetry reduced the computational and memory requirements associated with the problem, allowing the researchers to obtain results on the supercomputers available at the time.

Early results

The first realistic calculations of rotating collapse were carried out in the early 1980s by Richard Stark and Tsvi Piran, in which the gravitational wave forms resulting from the formation of a rotating black hole were calculated for the first time. For nearly 20 years following these initial results, there were fairly few other published results in numerical relativity, probably due to the lack of sufficiently powerful computers to address the problem. In the late 1990s, the Binary Black Hole Grand Challenge Alliance successfully simulated a head-on binary black hole collision. As a post-processing step the group computed the event horizon for the spacetime. This result still required imposing and exploiting axisymmetry in the calculations.

Some of the first documented attempts to solve the Einstein equations in three dimensions were focused on a single Schwarzschild black hole, which is described by a static and spherically symmetric solution to the Einstein field equations. This provides an excellent test case in numerical relativity because it does have a closed-form solution so that numerical results can be compared to an exact solution, because it is static, and because it contains one of the most numerically challenging features of relativity theory, a physical singularity. One of the earliest groups to attempt to simulate this solution was Anninos et al. in 1995. In their paper they point out that

"Progress in three dimensional numerical relativity has been impeded in part by lack of computers with sufficient memory and computational power to perform well resolved calculations of 3D spacetimes."

Maturation of the field

In the years that followed, not only did computers become more powerful, but also various research groups developed alternate techniques to improve the efficiency of the calculations. With respect to black hole simulations specifically, two techniques were devised to avoid problems associated with the existence of physical singularities in the solutions to the equations: (1) excision, and (2) the "puncture" method. In addition, the Lazarus group developed techniques for using early results from a short-lived simulation solving the nonlinear ADM equations in order to provide initial data for a more stable code based on linearized equations derived from perturbation theory. More generally, adaptive mesh refinement techniques, already used in computational fluid dynamics, were introduced to the field of numerical relativity.

Excision

In the excision technique, which was first proposed in the late 1990s, a portion of a spacetime inside of the event horizon surrounding the singularity of a black hole is simply not evolved. In theory this should not affect the solution to the equations outside of the event horizon because of the principle of causality and properties of the event horizon (i.e. nothing physical inside the black hole can influence any of the physics outside the horizon). Thus if one simply does not solve the equations inside the horizon one should still be able to obtain valid solutions outside. One "excises" the interior by imposing ingoing boundary conditions on a boundary surrounding the singularity but inside the horizon. While the implementation of excision has been very successful, the technique has two minor problems. The first is that one has to be careful about the coordinate conditions. While physical effects cannot propagate from inside to outside, coordinate effects could. For example, if the coordinate conditions were elliptic, coordinate changes inside could instantly propagate out through the horizon. This means that one needs hyperbolic-type coordinate conditions with characteristic velocities less than that of light for the propagation of coordinate effects (e.g., using harmonic coordinate conditions). The second problem is that as the black holes move, one must continually adjust the location of the excision region to move with the black hole.
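As a rough illustration of the masking idea only (the grid, field, and update rule below are hypothetical placeholders, not a real numerical relativity scheme), one can evolve only the points outside a chosen excision radius and fill the excised points from the innermost evolved point:

```python
import numpy as np

# Toy 1D radial grid and field; purely illustrative.
r = np.linspace(0.0, 10.0, 401)
phi = np.exp(-(r - 5.0) ** 2)      # placeholder for some evolved field
r_excision = 1.5                   # excision boundary placed inside the horizon
evolved = r > r_excision           # points that are actually updated

def step(phi, dt=0.01):
    """Advance the toy field one step, skipping the excised region."""
    new = phi.copy()
    new[evolved] = phi[evolved] + dt * np.gradient(phi, r)[evolved]  # placeholder dynamics
    first = np.argmax(evolved)     # innermost evolved grid point
    new[:first] = new[first]       # crude one-sided fill of the excised points
    return new

phi = step(phi)
```

In a real code the excision boundary would track the moving horizon, and the boundary treatment would be built from the characteristic structure of the evolution system, as discussed above.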

The excision technique was developed over several years including the development of new gauge conditions that increased stability and work that demonstrated the ability of the excision regions to move through the computational grid. The first stable, long-term evolution of the orbit and merger of two black holes using this technique was published in 2005.

Punctures

In the puncture method the solution is factored into an analytical part, which contains the singularity of the black hole, and a numerically constructed part, which is then singularity free. This is a generalization of the Brill-Lindquist prescription for initial data of black holes at rest and can be generalized to the Bowen-York prescription for spinning and moving black hole initial data. Until 2005, all published usage of the puncture method required that the coordinate position of all punctures remain fixed during the course of the simulation. Of course black holes in proximity to each other will tend to move under the force of gravity, so the fact that the coordinate position of the puncture remained fixed meant that the coordinate systems themselves became "stretched" or "twisted," and this typically led to numerical instabilities at some stage of the simulation.

Breakthrough

In 2005 researchers demonstrated for the first time the ability to allow punctures to move through the coordinate system, thus eliminating some of the earlier problems with the method. This allowed accurate long-term evolutions of black holes. By choosing appropriate coordinate conditions and making a crude analytic assumption about the fields near the singularity (since no physical effects can propagate out of the black hole, the crudeness of the approximations does not matter), numerical solutions could be obtained to the problem of two black holes orbiting each other, as well as accurate computation of the gravitational radiation (ripples in spacetime) emitted by them.

Lazarus project

The Lazarus project (1998–2005) was developed as a post-Grand Challenge technique to extract astrophysical results from short-lived full numerical simulations of binary black holes. It combined approximation techniques applied before the merger (post-Newtonian trajectories) and after it (perturbations of single black holes) with full numerical simulations attempting to solve the Einstein field equations. All previous attempts to numerically integrate on supercomputers the Hilbert-Einstein equations describing the gravitational field around binary black holes had led to software failure before a single orbit was completed.

At the time, the Lazarus approach gave the best insight into the binary black hole problem and produced numerous and relatively accurate results, such as the radiated energy and angular momentum emitted in the final merger stage, the linear momentum radiated by unequal mass holes, and the final mass and spin of the remnant black hole. The method also computed detailed gravitational waves emitted by the merger process and predicted that the collision of black holes is the most energetic single event in the Universe, releasing more energy in a fraction of a second in the form of gravitational radiation than an entire galaxy in its lifetime.

Adaptive mesh refinement

Adaptive mesh refinement (AMR) as a numerical method has roots that go well beyond its first application in the field of numerical relativity. Mesh refinement first appears in the numerical relativity literature in the 1980s, through the work of Choptuik in his studies of critical collapse of scalar fields. The original work was in one dimension, but it was subsequently extended to two dimensions. In two dimensions, AMR has also been applied to the study of inhomogeneous cosmologies, and to the study of Schwarzschild black holes. The technique has now become a standard tool in numerical relativity and has been used to study the merger of black holes and other compact objects in addition to the propagation of gravitational radiation generated by such astronomical events.

Recent developments

In the past few years, hundreds of research papers have been published, leading to a wide spectrum of mathematical relativity, gravitational wave, and astrophysical results for the orbiting black hole problem. This technique has been extended to astrophysical binary systems involving neutron stars and black holes, and to multiple black holes. One of the most surprising predictions is that the merger of two black holes can give the remnant hole a speed of up to 4000 km/s, which can allow it to escape from any known galaxy. The simulations also predict an enormous release of gravitational energy in this merger process, amounting to as much as 8% of the binary's total rest mass.

Quantum cognition

From Wikipedia, the free encyclopedia

Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception. The field clearly distinguishes itself from the quantum mind hypothesis, as it is not reliant on the hypothesis that there is something micro-physical quantum mechanical about the brain. Quantum cognition is based on the quantum-like paradigm (also called the generalized quantum paradigm or quantum structure paradigm): the idea that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory.

Quantum cognition uses the mathematical formalism of quantum theory to inspire and formalize models of cognition that aim to be an advance over models based on traditional classical probability theory. The field focuses on modeling phenomena in cognitive science that have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory), and modeling preferences in decision theory that seem paradoxical from a traditional rational point of view (e.g., preference reversals). Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.

Main subjects of research

Quantum-like models of information processing ("quantum-like brain")

The brain is definitely a macroscopic physical system operating on scales (of time, space, and temperature) which differ crucially from the corresponding quantum scales. (Macroscopic quantum physical phenomena, such as the Bose-Einstein condensate, are also characterized by special conditions which are definitely not fulfilled in the brain.) In particular, the brain's temperature is simply too high for it to perform genuine quantum information processing, i.e., to use quantum carriers of information such as photons, ions, or electrons. As is commonly accepted in brain science, the basic unit of information processing is the neuron. It is clear that a neuron cannot be in a superposition of two states, firing and non-firing. Hence, it cannot produce the superposition that plays the basic role in quantum information processing. Superpositions of mental states are instead created by complex networks of neurons (and these are classical neural networks). The quantum cognition community holds that the activity of such neural networks can produce effects formally described as interference (of probabilities) and entanglement. In principle, the community does not try to create concrete models of the quantum(-like) representation of information in the brain.

The quantum cognition project is based on the observation that various cognitive phenomena are more adequately described by quantum information theory and quantum probability than by the corresponding classical theories (see examples below). Thus the quantum formalism is considered an operational formalism that describes nonclassical processing of probabilistic data. Recent derivations of the complete quantum formalism from simple operational principles for representation of information support the foundations of quantum cognition. The subjective probability viewpoint on quantum probability developed by C. Fuchs and his collaborators also supports the quantum cognition approach, especially using quantum probabilities to describe the process of decision making.

Although the concrete neurophysiological mechanisms behind the creation of a quantum-like representation of information in the brain cannot yet be presented, general informational considerations support the idea that information processing in the brain matches quantum information and probability. Here, contextuality is the key word; see the monograph of Khrennikov for a detailed presentation of this viewpoint. Quantum mechanics is fundamentally contextual.

Quantum systems do not have objective properties which can be defined independently of the measurement context. (As N. Bohr pointed out, the whole experimental arrangement must be taken into account.) Contextuality implies the existence of incompatible mental variables, violation of the classical law of total probability, and (constructive and destructive) interference effects. Thus the quantum cognition approach can be considered as an attempt to formalize the contextuality of mental processes by using the mathematical apparatus of quantum mechanics.

Decision making

Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results:

  1. When subjects believe they won the first round, the majority of subjects choose to play again on the second round.
  2. When subjects believe they lost the first round, the majority of subjects choose to play again on the second round.

Given these two separate choices, according to the sure thing principle of rational decision theory, they should also play the second round even if they don't know or think about the outcome of the first round. But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round. This finding violates the law of total probability, yet it can be explained as a quantum interference effect in a manner similar to the explanation for the results from the double-slit experiment in quantum physics. Similar violations of the sure-thing principle are seen in empirical studies of the Prisoner's Dilemma and have likewise been modeled in terms of quantum interference.
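Schematically, the classical law of total probability would require

    P(\text{play}) = P(\text{play}\mid\text{win})\,P(\text{win}) + P(\text{play}\mid\text{lose})\,P(\text{lose}),

whereas the quantum-like account adds an interference term; in one common parametrization (a generic textbook form, not a formula reported from the experiments above),

    P(\text{play}) = P(\text{play}\mid\text{win})\,P(\text{win}) + P(\text{play}\mid\text{lose})\,P(\text{lose}) + 2\cos\theta\,\sqrt{P(\text{play}\mid\text{win})\,P(\text{win})\,P(\text{play}\mid\text{lose})\,P(\text{lose})}.

A phase angle with \cos\theta < 0 pushes the total below both conditional probabilities, which matches the pattern reported for the two-stage gamble.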

The above deviations from classical rational expectations in agents' decisions under uncertainty produce well-known paradoxes in behavioral economics, namely the Allais, Ellsberg and Machina paradoxes. These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a way that is neither predictable nor controllable. A decision process is thus an intrinsically contextual process; hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the employment of quantum probability models in decision theory. More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuine quantum aspects, namely, superposition, interference, contextuality and incompatibility.

In automated decision making, quantum decision trees have a different structure from classical decision trees. Data can be analyzed to see whether a quantum decision tree model fits the data better.

Human probability judgments

Quantum probability provides a new way to explain human probability judgment errors, including the conjunction and disjunction errors. A conjunction error occurs when a person judges the probability of a likely event L and an unlikely event U occurring together to be greater than the probability of the unlikely event U alone; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the likely event L or an unlikely event U occurring. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classic Kolmogorov axioms. The quantum model introduces a new fundamental concept to cognition: the compatibility versus incompatibility of questions, and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings, such as order effects on probability judgments.
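A minimal numerical sketch of the incompatibility idea (the questions, angles, and initial state below are invented purely for illustration): if each yes/no question is represented by a projector onto a ray, the probability of answering "yes" to both questions depends on the order in which they are asked, which cannot happen for ordinary commuting events.

```python
import numpy as np

def projector(theta):
    """Projector onto the ray at angle theta in a 2D real 'belief' space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])      # hypothetical initial belief state
P_A = projector(np.pi / 5)      # "yes" to question A
P_B = projector(np.pi / 3)      # "yes" to question B

p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2
print(p_A_then_B, p_B_then_A)   # differ because the projectors do not commute
```

With compatible (commuting) questions the two numbers would coincide; the gap between them is the kind of order effect the quantum account uses to explain conjunction-type errors.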

The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-called liar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.

Knowledge representation

Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding. Cognitive psychology has researched different approaches for understanding concepts, including exemplars, prototypes, and neural networks, and different fundamental problems have been identified, such as the experimentally tested non-classical behavior of the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect, and the overextension and underextension of typicality and membership weight for conjunction and disjunction. By and large, quantum cognition has drawn on quantum theory in three ways to model concepts.

  1. Exploit the contextuality of quantum theory to account for the contextuality of concepts in cognition and language and the phenomenon of emergent properties when concepts combine
  2. Use quantum entanglement to model the semantics of concept combinations in a non-decompositional way, and to account for the emergent properties/associates/inferences in relation to concept combinations
  3. Use quantum superposition to account for the emergence of a new concept when concepts are combined, and as a consequence put forward an explanatory model for the Pet-Fish problem situation, and the overextension and underextension of membership weights for the conjunction and disjunction of concepts.

The large amount of data collected by Hampton on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space, where the observed deviations from classical set (fuzzy set) theory, the above-mentioned over- and underextension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence. Moreover, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.

Human memory

The hypothesis that there may be something quantum-like about human mental function was put forward using the quantum entanglement formalism, which attempted to model the finding that when a word's associative network is activated during study in a memory experiment, it behaves like a quantum-entangled system. Models of cognitive agents and memory based on quantum collectives have been proposed by Subhash Kak. But he also points to specific problems of limits on observation and control of these memories due to fundamental logical reasons.

Semantic analysis and information retrieval

The research on concepts and their combinations described above had a deep impact on the understanding and initial development of a formalism to obtain semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum of natural language processing (NLP) and information retrieval (IR) on the web – and databases in general – can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) K. Van Rijsbergen introduced a quantum structure approach to IR, (b) Widdows and Peters utilised a quantum logical negation for a concrete search system, and (c) Aerts and Czachor identified quantum structure in semantic space theories, such as latent semantic analysis. Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory – Hilbert space, quantum logic and probability, non-commutative algebras, etc. – in fields such as IR and NLP has produced significant results.

Human perception

Bi-stable perceptual phenomena are a fascinating topic in the area of perception. If a stimulus has an ambiguous interpretation, such as a Necker cube, the interpretation tends to oscillate across time. Quantum models have been developed to predict the time period between oscillations and how these periods change with the frequency of measurement. Quantum theory and an appropriate model have also been developed by Elio Conte to account for interference effects obtained with measurements of ambiguous figures.

Gestalt perception

There are apparent similarities between Gestalt perception and quantum theory. In an article discussing the application of Gestalt to chemistry, Anton Amann writes: "Quantum mechanics does not explain Gestalt perception, of course, but in quantum mechanics and Gestalt psychology there exist almost isomorphic conceptions and problems:

  • Similarly as with the Gestalt concept, the shape of a quantum object does not a priori exist but it depends on the interaction of this quantum object with the environment (for example: an observer or a measurement apparatus).
  • Quantum mechanics and Gestalt perception are organized in a holistic way. Subentities do not necessarily exist in a distinct, individual sense.
  • In quantum mechanics and Gestalt perception objects have to be created by elimination of holistic correlations with the 'rest of the world'."

Each of the points above can be restated in a simplified manner (the explanations below correspond respectively to the points above):

  • Just as an object in quantum physics does not have any shape until it interacts with its environment, objects from the Gestalt perspective do not hold much meaning individually compared with when they appear as a "group" or are present in an environment.
  • Both in quantum mechanics and Gestalt perception, the objects must be studied as a whole rather than finding properties of individual components and interpolating the whole object.
  • In the Gestalt concept, the creation of a new object from another previously existing object means that the previously existing object now becomes a sub-entity of the new object, and hence "elimination of holistic correlations" occurs. Similarly, a new quantum object made from a previously existing object means that the previously existing object loses its holistic view.

Amann comments: "The structural similarities between Gestalt perception and quantum mechanics are on a level of a parable, but even parables can teach us something, for example, that quantum mechanics is more than just production of numerical results or that the Gestalt concept is more than just a silly idea, incompatible with atomistic conceptions."

Quantum-like models of cognition in economics and finance

The assumption that information processing by the agents of a market follows the laws of quantum information theory and quantum probability has been actively explored by many authors, e.g., E. Haven, O. Choustova, and A. Khrennikov; see the book of E. Haven and A. Khrennikov for a detailed bibliography. One example is the Bohmian model of the dynamics of share prices, in which the quantum(-like) potential is generated by the expectations of agents in the financial market and hence is mental in nature. This approach can be used to model real financial data; see the book of E. Haven and A. Khrennikov (2012).

Application of theory of open quantum systems to decision making and "cell's cognition"

An isolated quantum system is an idealized theoretical entity. In reality, interactions with the environment have to be taken into account; this is the subject of the theory of open quantum systems. Cognition is also fundamentally contextual. The brain is a kind of (self-)observer which makes context-dependent decisions. The mental environment plays a crucial role in information processing. Therefore, it is natural to apply the theory of open quantum systems to describe the process of decision making as the result of quantum-like dynamics of the mental state of a system interacting with an environment. The description of the process of decision making is mathematically equivalent to the description of the process of decoherence. This idea was explored in a series of works of the multidisciplinary group of researchers at Tokyo University of Science.

Since in the quantum-like approach the formalism of quantum mechanics is considered a purely operational formalism, it can be applied to the description of information processing by any biological system, i.e., not only by human beings.

Operationally it is very convenient to consider, e.g., a cell as a kind of decision maker processing information in the quantum information framework. This idea was explored in a series of papers of the Swedish-Japanese research group using methods from the theory of open quantum systems: gene expression was modeled as decision making in the process of interaction with the environment.

History

Here is a short history of applying the formalisms of quantum theory to topics in psychology. Ideas for applying quantum formalisms to cognition first appeared in the 1990s in the work of Diederik Aerts and his collaborators Jan Broekaert, Sonja Smets and Liane Gabora, of Harald Atmanspacher, Robert Bordley, and of Andrei Khrennikov. A special issue on Quantum Cognition and Decision appeared in the Journal of Mathematical Psychology (2009, vol. 53), which planted a flag for the field. A few books related to quantum cognition have been published, including those by Khrennikov (2004, 2010), Ivancivic and Ivancivic (2010), Busemeyer and Bruza (2012), and E. Conte (2012). The first Quantum Interaction workshop was held at Stanford in 2007, organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007 AAAI Spring Symposium Series. This was followed by workshops at Oxford in 2008, Saarbrücken in 2009, at the 2010 AAAI Fall Symposium Series held in Washington, D.C., in 2011 in Aberdeen, in 2012 in Paris, and in 2013 in Leicester. Tutorials were also presented annually from 2007 until 2013 at the annual meeting of the Cognitive Science Society. A special issue on Quantum Models of Cognition appeared in 2013 in the journal Topics in Cognitive Science.

Related theories

It was suggested by theoretical physicists David Bohm and Basil Hiley that mind and matter both emerge from an "implicate order". Bohm and Hiley's approach to mind and matter is supported by philosopher Paavo Pylkkänen. Pylkkänen underlines "unpredictable, uncontrollable, indivisible and non-logical" features of conscious thought and draws parallels to a philosophical movement some call "post-phenomenology", in particular to Pauli Pylkkö's notion of the "aconceptual experience", an unstructured, unarticulated and pre-logical experience.

The mathematical techniques of both Conte's group and Hiley's group involve the use of Clifford algebras. These algebras account for "non-commutativity" of thought processes (for an example, see: noncommutative operations in everyday life).

However, one area that still needs to be investigated is the concept of lateralised brain functioning. Some studies in marketing have related lateral influences on cognition and emotion in the processing of attachment-related stimuli.

Holonomic brain theory

From Wikipedia, the free encyclopedia

Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. This is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry, and which assumes that any quantum effects will not be significant at this scale. The entire field of quantum consciousness is often criticized as pseudoscience, as detailed on the main article thereof.

This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm building on the initial theories of Holograms originally formulated by Dennis Gabor. It describes human cognition by modeling the brain as a holographic storage network. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the Wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. In a hologram, any part of the hologram with sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. This model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain cluster of neurons).

Origins and development

In 1946 Dennis Gabor invented the hologram mathematically, describing a system where an image can be reconstructed through information that is stored throughout the hologram. He demonstrated that the information pattern of a three-dimensional object can be encoded in a beam of light, which is more-or-less two-dimensional. Gabor also developed a mathematical model for demonstrating a holographic associative memory. One of Gabor's colleagues, Pieter Jacobus Van Heerden, also developed a related holographic mathematical memory model in 1963. This model contained the key aspect of non-locality, which became important years later when, in 1967, experiments by both Braitenberg and Kirschfield showed that exact localization of memory in the brain was false.

Karl Pribram had worked with psychologist Karl Lashley on Lashley's engram experiments, which used lesions to determine the exact location of specific memories in primate brains.[1] Lashley made small lesions in the brains and found that these had little effect on memory. On the other hand, Pribram removed large areas of cortex, leading to multiple serious deficits in memory and cognitive function. Memories were not stored in a single neuron or exact location, but were spread over the entirety of a neural network. Lashley suggested that brain interference patterns could play a role in perception, but was unsure how such patterns might be generated in the brain or how they would lead to brain function.

Several years later an article by neurophysiologist John Eccles described how a wave could be generated at the branching ends of pre-synaptic axons. Several such waves could create interference patterns. Soon after, Emmett Leith was successful in storing visual images through the interference patterns of laser beams, inspired by Gabor's previous use of Fourier transformations to store information within a hologram. After studying the work of Eccles and that of Leith, Pribram put forward the hypothesis that memory might take the form of interference patterns that resemble laser-produced holograms. Physicist David Bohm presented his ideas of holomovement and implicate and explicate order. Pribram became aware of Bohm's work in 1975 and realized that, since a hologram could store information within patterns of interference and then recreate that information when activated, it could serve as a strong metaphor for brain function. Pribram was further encouraged in this line of speculation by the fact that neurophysiologists Russell and Karen DeValois together established that "the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern."

Theory overview

The hologram and holonomy

Diagram of one possible hologram setup.

A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. Both processes of storage and retrieval are carried out in a way described by Fourier transformation equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, but the image may have unwanted changes, called noise.
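A loose numerical analogy of this property (not a physical simulation of holography; the image and patch size below are arbitrary): if the 2D Fourier transform of an image stands in for the hologram plane, a reconstruction that uses only a fragment of that plane still returns the whole image, just blurred and noisier.

```python
import numpy as np

# Toy "scene": a simple pattern standing in for a stored image.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0

# Stand-in for the hologram plane: the image's 2D Fourier transform.
holo = np.fft.fftshift(np.fft.fft2(img))

# Keep only a central patch of the "hologram" and discard the rest.
k = 16                                            # half-size of the retained patch
patch_only = np.zeros_like(holo)
patch_only[32 - k:32 + k, 32 - k:32 + k] = holo[32 - k:32 + k, 32 - k:32 + k]

# The whole image comes back from the fragment, with reduced fidelity.
recon = np.real(np.fft.ifft2(np.fft.ifftshift(patch_only)))
print(np.corrcoef(img.ravel(), recon.ravel())[0, 1])   # still strongly correlated
```

Shrinking the retained patch lowers the correlation with the original, mirroring the statement that smaller fragments reproduce the whole image with more noise.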

An analogy to this is the broadcasting region of a radio antenna. In each smaller individual location within the entire area it is possible to access every channel, similar to how the entirety of the information of a hologram is contained within a part. Another analogy of a hologram is the way sunlight illuminates objects in the visual field of an observer. It doesn't matter how narrow the beam of sunlight is. The beam always contains all the information of the object, and when conjugated by a lens of a camera or the eyeball, produces the same full three-dimensional image. The Fourier transform formula converts spatial forms to spatial wave frequencies and vice versa, as all objects are in essence vibratory structures. Different types of lenses, acting similarly to optic lenses, can alter the frequency nature of information that is transferred.

This non-locality of information storage within the hologram is crucial, because even if most parts are damaged, the entirety will be contained within even a single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged. It is only when there exist no parts big enough to contain the whole that the memory is lost.

 This can also explain why some children retain normal intelligence when large portions of their brain—in some cases, half—are removed. It can also explain why memory is not lost when the brain is sliced in different cross-sections.

A single hologram can store 3D information in a 2D way. Such properties may explain some of the brain's abilities, including the ability to recognize objects at different angles and sizes than in the original stored memory.

Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. This patch holography is called holonomy or windowed Fourier transformations.

A holographic model can also account for other features of memory that more traditional models cannot. The Hopfield memory model has an early memory saturation point before which memory retrieval drastically slows and becomes unreliable. On the other hand, holographic memory models have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and resemble forgetting through "lossy storage."
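One concrete illustration of this associative, distributed flavor is a circular-convolution "holographic reduced representation" toy (a related model used here for illustration, not Pribram's own formalism; the item names and dimensions are invented): pairs of items are bound by circular convolution, superposed into a single trace, and recalled by correlating the trace with one member of a pair.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

def cconv(a, b):
    """Circular convolution via FFT (the binding operation)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation via FFT (approximate unbinding)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# Random high-dimensional vectors standing in for concepts.
items = {name: rng.normal(0, 1 / np.sqrt(n), n) for name in ["cat", "mat", "dog", "bone"]}

# Superpose two bound pairs into one distributed memory trace.
memory = cconv(items["cat"], items["mat"]) + cconv(items["dog"], items["bone"])

# Probing with "cat" yields a noisy vector closest to "mat".
recall = ccorr(items["cat"], memory)
scores = {name: float(recall @ v) for name, v in items.items()}
print(max(scores, key=scores.get))   # expected: "mat"
```

Recall is approximate and degrades gracefully as more pairs are superposed, which is the "lossy storage" behavior mentioned above.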

The synaptodendritic web

A Few of the Various Types of Synapses

In classic brain theory the summation of electrical inputs to the dendrites and soma (cell body) of a neuron either inhibits the neuron or excites it, setting off an action potential down the axon to where it synapses with the next neuron. However, this fails to account for varieties of synapses beyond the traditional axodendritic (axon to dendrite) type. There is evidence for the existence of other kinds of synapses, including serial synapses and those between dendrites and soma and between different dendrites. Many synaptic locations are functionally bipolar, meaning they can both send and receive impulses, distributing input and output over the entire group of dendrites.

Processes in this dendritic arbor, the network of teledendrons and dendrites, occur due to the oscillations of polarizations in the membrane of the fine-fibered dendrites, not due to the propagated nerve impulses associated with action potentials. Pribram posits that the length of the delay of an input signal in the dendritic arbor before it travels down the axon is related to mental awareness. The shorter the delay the more unconscious the action, while a longer delay indicates a longer period of awareness. A study by David Alkon showed that after unconscious Pavlovian conditioning there was a proportionally greater reduction in the volume of the dendritic arbor, akin to synaptic elimination when experience increases the automaticity of an action. Pribram and others theorize that, while unconscious behavior is mediated by impulses through nerve circuits, conscious behavior arises from microprocesses in the dendritic arbor.

At the same time, the dendritic network is extremely complex, able to receive 100,000 to 200,000 inputs in a single tree, due to the large amount of branching and the many dendritic spines protruding from the branches. Furthermore, synaptic hyperpolarization and depolarization remains somewhat isolated due to the resistance from the narrow dendritic spine stalk, allowing a polarization to spread without much interruption to the other spines. This spread is further aided intracellularly by the microtubules and extracellularly by glial cells. These polarizations act as waves in the synaptodendritic network, and the existence of multiple waves at once gives rise to interference patterns.

Deep and surface structure of memory

Pribram suggests that there are two layers of cortical processing: a surface structure of separated and localized neural circuits and a deep structure of the dendritic arborization that binds the surface structure together. The deep structure contains distributed memory, while the surface structure acts as the retrieval mechanism. Binding occurs through the temporal synchronization of the oscillating polarizations in the synaptodendritic web. It had been thought that binding only occurred when there was no phase lead or lag present, but a study by Saul and Humphrey found that cells in the lateral geniculate nucleus do in fact produce these. Here phase lead and lag act to enhance sensory discrimination, acting as a frame to capture important features. These filters are also similar to the lenses necessary for holographic functioning.

Recent studies

While Pribram originally developed the holonomic brain theory as an analogy for certain brain processes, several papers (including some more recent ones by Pribram himself) have proposed that the similarity between hologram and certain brain functions is more than just metaphorical, but actually structural. Others still maintain that the relationship is only analogical. Several studies have shown that the same series of operations used in holographic memory models are performed in certain processes concerning temporal memory and optomotor responses. This indicates at least the possibility of the existence of neurological structures with certain holonomic properties. Other studies have demonstrated the possibility that biophoton emission (biological electrical signals that are converted to weak electromagnetic waves in the visible range) may be a necessary condition for the electric activity in the brain to store holographic images. These may play a role in cell communication and certain brain processes including sleep, but further studies are needed to strengthen current ones. Other studies have shown the correlation between more advanced cognitive function and homeothermy. Taking holographic brain models into account, this temperature regulation would reduce distortion of the signal waves, an important condition for holographic systems.

Criticism and alternative models

Pribram's holonomic model of brain function did not receive widespread attention at the time, but other quantum models have been developed since, including quantum brain dynamics by Jibu and Yasue and Vitiello's dissipative quantum brain dynamics. Though not directly related to the holonomic model, they continue to move beyond approaches based solely in classic brain theory.

Correlograph

In 1969 scientists D. Willshaw, O. P. Buneman and H. Longuet-Higgins proposed an alternative, non-holographic model that fulfilled many of the same requirements as Gabor's original holographic model. The Gabor model did not explain how the brain could use Fourier analysis on incoming signals or how it would deal with the low signal-to-noise ratio in reconstructed memories. Longuet-Higgins's correlograph model built on the idea that any system could perform the same functions as a Fourier holograph if it could correlate pairs of patterns. It uses minute pinholes that do not produce diffraction patterns to create a similar reconstruction as that in Fourier holography. Like a hologram, a discrete correlograph can recognize displaced patterns and store information in a parallel and non-local way, so it usually will not be destroyed by localized damage. They then expanded the model beyond the correlograph to an associative net where the points become parallel lines arranged in a grid. Horizontal lines represent axons of input neurons while vertical lines represent output neurons. Each intersection represents a modifiable synapse. Though this cannot recognize displaced patterns, it has a greater potential storage capacity. This was not necessarily meant to show how the brain is organized, but instead to show the possibility of improving on Gabor's original model. P. Van Heerden countered this model by demonstrating mathematically that the signal-to-noise ratio of a hologram could reach 50% of ideal. He also used a model with a 2D neural hologram network for fast searching imposed upon a 3D network for large storage capacity. A key quality of this model was its flexibility to change the orientation and fix distortions of stored information, which is important for our ability to recognize an object as the same entity from different angles and positions, something the correlograph and association network models lack.
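A small sketch of the associative net just described (a binary Willshaw-style matrix; the sizes and sparseness below are invented for illustration): input and output lines cross at modifiable synapses, a synapse is switched on when both of its lines are active during storage, and recall thresholds the summed synaptic input.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, k = 64, 64, 4                 # grid size and active units per pattern

def sparse_pattern(n, k):
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, size=k, replace=False)] = 1
    return p

# Store a few input -> output associations by switching on crossing synapses.
W = np.zeros((n_out, n_in), dtype=int)
pairs = [(sparse_pattern(n_in, k), sparse_pattern(n_out, k)) for _ in range(5)]
for x, y in pairs:
    W |= np.outer(y, x)                    # Hebbian update, clipped to {0, 1}

# Recall: sum the synaptic input and threshold at the number of active input lines.
x, y = pairs[0]
y_hat = (W @ x >= k).astype(int)
print((y_hat[y == 1] == 1).all())          # every stored output unit is recovered
```

Because all stored pairs share one synapse matrix, crosstalk eventually produces spurious extra output units, which is why the capacity of such a net depends on keeping the patterns sparse.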

Applications

Holographic models of memory and consciousness may be related to several brain disorders involving disunity of sensory input within a unified consciousness, including Charles Bonnet syndrome, disjunctive agnosia, and schizophrenia. Charles Bonnet syndrome patients experience two vastly different worlds within one consciousness. They see the world that psychologically normal people perceive, but also a simplified world riddled with pseudohallucinations. These patients can differentiate these two worlds easily. Since dynamic core and global workspace theories insist that a distinct area of the brain is responsible for consciousness, the only way a patient would perceive two worlds would be if this dynamic core and global workspace were split. But this does not explain how different content can be perceived within one single consciousness, since these theories assume that each dynamic core or global workspace creates a single coherent reality. The primary symptom of disjunctive agnosia is an inconsistency of sensory information within a unified consciousness: patients may see one thing but hear something entirely incompatible with that image. Schizophrenics often report experiencing thoughts that do not seem to originate from themselves, as if the idea was inserted exogenously. The individual feels no control over certain thoughts existing within their consciousness.

Holography

From Wikipedia, the free encyclopedia
 
Two photographs of a single hologram taken from different viewpoints

A hologram is a real world recording of an interference pattern which uses diffraction to reproduce a 3D light field, resulting in an image which still has the depth, parallax, and other properties of the original scene. Holography is the science and practice of making holograms. A hologram is a photographic recording of a light field, rather than an image formed by a lens. The holographic medium, for example the object produced by a holographic process (which may be referred to as a hologram) is usually unintelligible when viewed under diffuse ambient light. It is an encoding of the light field as an interference pattern of variations in the opacity, density, or surface profile of the photographic medium. When suitably lit, the interference pattern diffracts the light into an accurate reproduction of the original light field, and the objects that were in it exhibit visual depth cues such as parallax and perspective that change realistically with the different angles of viewing. That is, the view of the image from different angles represents the subject viewed from similar angles. In this sense, holograms do not have just the illusion of depth but are truly three-dimensional images.

In its pure form, holography needs a laser light for illuminating the subject and for viewing the finished hologram. A microscopic level of detail throughout the recorded scene can be reproduced. In common practice, however, major image quality compromises are made to remove the need for laser illumination to view the hologram, and in some cases, to make it. Holographic portraiture often resorts to a non-holographic intermediate imaging procedure, to avoid the dangerous high-powered pulsed lasers which would be needed to optically "freeze" moving subjects as perfectly as the extremely motion-intolerant holographic recording process requires. Holograms can now also be entirely computer-generated to show objects or scenes that never existed.

Holography is distinct from lenticular and other earlier autostereoscopic 3D display technologies, which can produce superficially similar results but are based on conventional lens imaging. Images requiring the aid of special glasses or other intermediate optics, stage illusions such as Pepper's Ghost and other unusual, baffling, or seemingly magical images are often incorrectly called holograms.

Dennis Gabor invented holography in 1947 and later won a Nobel Prize for his efforts.

Overview and history

The Hungarian-British physicist Dennis Gabor (in Hungarian: Gábor Dénes) was awarded the Nobel Prize in Physics in 1971 "for his invention and development of the holographic method". His work, done in the late 1940s, was built on pioneering work in the field of X-ray microscopy by other scientists including Mieczysław Wolfke in 1920 and William Lawrence Bragg in 1939. This discovery was an unexpected result of research into improving electron microscopes at the British Thomson-Houston Company (BTH) in Rugby, England, and the company filed a patent in December 1947 (patent GB685286). The technique as originally invented is still used in electron microscopy, where it is known as electron holography, but optical holography did not really advance until the development of the laser in 1960. The word holography comes from the Greek words ὅλος (holos; "whole") and γραφή (graphē; "writing" or "drawing").

Horizontal symmetric text, by Dieter Jung

The development of the laser enabled the first practical optical holograms that recorded 3D objects to be made in 1962 by Yuri Denisyuk in the Soviet Union and by Emmett Leith and Juris Upatnieks at the University of Michigan, USA. Early holograms used silver halide photographic emulsions as the recording medium. They were not very efficient as the produced grating absorbed much of the incident light. Various methods of converting the variation in transmission to a variation in refractive index (known as "bleaching") were developed which enabled much more efficient holograms to be produced.

Several types of holograms can be made. Transmission holograms, such as those produced by Leith and Upatnieks, are viewed by shining laser light through them and looking at the reconstructed image from the side of the hologram opposite the source. A later refinement, the "rainbow transmission" hologram, allows more convenient illumination by white light rather than by lasers. Rainbow holograms are commonly used for security and authentication, for example, on credit cards and product packaging.

Another kind of common hologram, the reflection or Denisyuk hologram, can also be viewed using a white-light illumination source on the same side of the hologram as the viewer and is the type of hologram normally seen in holographic displays. They are also capable of multicolour-image reproduction.

Specular holography is a related technique for making three-dimensional images by controlling the motion of specularities on a two-dimensional surface. It works by reflectively or refractively manipulating bundles of light rays, whereas Gabor-style holography works by diffractively reconstructing wavefronts.

Most holograms produced are of static objects but systems for displaying changing scenes on a holographic volumetric display are now being developed.

Holograms can also be used to store, retrieve, and process information optically.

In its early days, holography required high-power and expensive lasers, but currently, mass-produced low-cost laser diodes, such as those found on DVD recorders and used in other common applications, can be used to make holograms and have made holography much more accessible to low-budget researchers, artists and dedicated hobbyists.

It was thought that it would be possible to use X-rays to make holograms of very small objects and view them using visible light. Today, holograms with X-rays are generated by using synchrotrons or X-ray free-electron lasers as radiation sources and pixelated detectors such as CCDs as the recording medium. The reconstruction is then retrieved via computation. Due to the shorter wavelength of X-rays compared to visible light, this approach allows imaging objects with higher spatial resolution. As free-electron lasers can provide ultrashort X-ray pulses in the femtosecond range which are intense and coherent, X-ray holography has been used to capture ultrafast dynamic processes.

How it works

Recording a hologram
 
Reconstructing a hologram
This is a photograph of a small part of an unbleached transmission hologram viewed through a microscope. The hologram recorded an image of a toy van and car. It is no more possible to discern the subject of the hologram from this pattern than it is to identify what music has been recorded by looking at a CD surface. The holographic information is recorded by the speckle pattern.

Holography is a technique that enables a light field (generally the light from a source scattered off objects) to be recorded and later reconstructed when the original light field is no longer present, due to the absence of the original objects. Holography can be thought of as somewhat similar to sound recording, whereby a sound field created by vibrating matter, like musical instruments or vocal cords, is encoded in such a way that it can be reproduced later, without the presence of the original vibrating matter. It is even more similar to Ambisonic sound recording, in which the sound field can later be reproduced from any listening angle.

Laser

In laser holography, the hologram is recorded using a source of laser light, which is very pure in its color and orderly in its composition. Various setups may be used, and several types of holograms can be made, but all involve the interaction of light coming from different directions and producing a microscopic interference pattern which a plate, film, or other medium photographically records.

In one common arrangement, the laser beam is split into two, one known as the object beam and the other as the reference beam. The object beam is expanded by passing it through a lens and used to illuminate the subject. The recording medium is located where this light, after being reflected or scattered by the subject, will strike it. The edges of the medium will ultimately serve as a window through which the subject is seen, so its location is chosen with that in mind. The reference beam is expanded and made to shine directly on the medium, where it interacts with the light coming from the subject to create the desired interference pattern.

Like conventional photography, holography requires an appropriate exposure time to correctly affect the recording medium. Unlike conventional photography, during the exposure the light source, the optical elements, the recording medium, and the subject must all remain motionless relative to each other, to within about a quarter of the wavelength of the light, or the interference pattern will be blurred and the hologram spoiled. With living subjects and some unstable materials, that is only possible if a very intense and extremely brief pulse of laser light is used, a hazardous procedure that is rarely done outside scientific and industrial laboratory settings. Exposures lasting several seconds to several minutes, using a much lower-powered continuously operating laser, are typical.

Apparatus

A hologram can be made by shining part of the light beam directly into the recording medium, and the other part onto the object in such a way that some of the scattered light falls onto the recording medium. A more flexible arrangement for recording a hologram requires the laser beam to be aimed through a series of elements that change it in different ways. The first element is a beam splitter that divides the beam into two identical beams, each aimed in different directions:

  • One beam (known as the illumination or object beam) is spread using lenses and directed onto the scene using mirrors. Some of the light scattered (reflected) from the scene then falls onto the recording medium.
  • The second beam (known as the reference beam) is also spread through the use of lenses, but is directed so that it doesn't come in contact with the scene, and instead travels directly onto the recording medium.

Several different materials can be used as the recording medium. One of the most common is a film very similar to photographic film (silver halide photographic emulsion), but with a much higher concentration of light-reactive grains, making it capable of the much higher resolution that holograms require. A layer of this recording medium (e.g., silver halide) is attached to a transparent substrate, which is commonly glass, but may also be plastic.

Process

When the two laser beams reach the recording medium, their light waves intersect and interfere with each other. It is this interference pattern that is imprinted on the recording medium. The pattern itself is seemingly random, as it represents the way in which the scene's light interfered with the original light source – but not the original light source itself. The interference pattern can be considered an encoded version of the scene, requiring a particular key – the original light source – in order to view its contents.

This missing key is provided later by shining a laser, identical to the one used to record the hologram, onto the developed film. When this beam illuminates the hologram, it is diffracted by the hologram's surface pattern. This produces a light field identical to the one originally produced by the scene and scattered onto the hologram.

Comparison with photography

Holography may be better understood via an examination of its differences from ordinary photography:

  • A hologram represents a recording of information regarding the light that came from the original scene as scattered in a range of directions rather than from only one direction, as in a photograph. This allows the scene to be viewed from a range of different angles, as if it were still present.
  • A photograph can be recorded using normal light sources (sunlight or electric lighting) whereas a laser is required to record a hologram.
  • A lens is required in photography to record the image, whereas in holography, the light from the object is scattered directly onto the recording medium.
  • A holographic recording requires a second light beam (the reference beam) to be directed onto the recording medium.
  • A photograph can be viewed in a wide range of lighting conditions, whereas holograms can only be viewed with very specific forms of illumination.
  • When a photograph is cut in half, each piece shows half of the scene. When a hologram is cut in half, the whole scene can still be seen in each piece. This is because, whereas each point in a photograph only represents light scattered from a single point in the scene, each point on a holographic recording includes information about light scattered from every point in the scene. It can be thought of as viewing a street outside a house through a 120 cm × 120 cm (4 ft × 4 ft) window, then through a 60 cm × 120 cm (2 ft × 4 ft) window. One can see all of the same things through the smaller window (by moving the head to change the viewing angle), but the viewer can see more at once through the 120 cm (4 ft) window.
  • A photograph is a two-dimensional representation that can only reproduce a rudimentary three-dimensional effect, whereas the reproduced viewing range of a hologram adds many more depth perception cues that were present in the original scene. These cues are recognized by the human brain and translated into the same perception of a three-dimensional image as if the original scene were being viewed.
  • A photograph clearly maps out the light field of the original scene. The developed hologram's surface consists of a very fine, seemingly random pattern, which appears to bear no relationship to the scene it recorded.

Physics of holography

For a better understanding of the process, it is necessary to understand interference and diffraction. Interference occurs when two or more wavefronts are superimposed. Diffraction occurs when a wavefront encounters an object. The process of producing a holographic reconstruction is explained below purely in terms of interference and diffraction. It is somewhat simplified but is accurate enough to give an understanding of how the holographic process works.

For those unfamiliar with these concepts, it is worthwhile to read those articles before reading further in this article.

Plane wavefronts

A diffraction grating is a structure with a repeating pattern. A simple example is a metal plate with slits cut at regular intervals. A light wave that is incident on a grating is split into several waves; the direction of these diffracted waves is determined by the grating spacing and the wavelength of the light.

A simple hologram can be made by superimposing two plane waves from the same light source on a holographic recording medium. The two waves interfere, giving a straight-line fringe pattern whose intensity varies sinusoidally across the medium. The spacing of the fringe pattern is determined by the angle between the two waves, and by the wavelength of the light.

The recorded light pattern is a diffraction grating. When it is illuminated by only one of the waves used to create it, it can be shown that one of the diffracted waves emerges at the same angle as that at which the second wave was originally incident, so that the second wave has been 'reconstructed'. Thus, the recorded light pattern is a holographic recording as defined above.
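
As a rough numerical illustration of these relations (a sketch only; the 633 nm wavelength and 30° inter-beam angle are assumed values, not taken from the text), the snippet below computes the fringe spacing of such a grating and then shows that replaying it with one of the original beams diffracts light back along the direction of the other beam:

import math

# Illustrative values (assumed): a 633 nm beam split into two plane waves
# that cross at a 30 degree full angle, i.e. each beam arrives at
# +/- 15 degrees to the normal of the recording medium.
wavelength = 633e-9                       # metres
half_angle = math.radians(15.0)

# Fringe spacing of the recorded straight-line interference pattern.
fringe_spacing = wavelength / (2.0 * math.sin(half_angle))
print(f"fringe spacing: {fringe_spacing * 1e6:.2f} um "
      f"({1e-3 / fringe_spacing:.0f} lines/mm)")

# Replay the grating with only the first beam (incident at +15 degrees).
# Grating equation: sin(theta_out) = sin(theta_in) + m * wavelength / spacing.
theta_in = half_angle
for m in (-1, 0, 1):
    s = math.sin(theta_in) + m * wavelength / fringe_spacing
    if abs(s) <= 1.0:
        print(f"order {m:+d}: emerges at {math.degrees(math.asin(s)):+.1f} degrees")
# The m = -1 order emerges at -15 degrees: the second beam has been 'reconstructed'.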

Point sources

Sinusoidal zone plate

If the recording medium is illuminated with a point source and a normally incident plane wave, the resulting pattern is a sinusoidal zone plate, which acts as a negative Fresnel lens whose focal length is equal to the separation of the point source and the recording plane.

When a plane wave-front illuminates a negative lens, it is expanded into a wave that appears to diverge from the focal point of the lens. Thus, when the recorded pattern is illuminated with the original plane wave, some of the light is diffracted into a diverging beam equivalent to the original spherical wave; a holographic recording of the point source has been created.

When the plane wave is incident at a non-normal angle at the time of recording, the pattern formed is more complex, but still acts as a negative lens if it is illuminated at the original angle.

Complex objects

To record a hologram of a complex object, a laser beam is first split into two beams of light. One beam illuminates the object, which then scatters light onto the recording medium. According to diffraction theory, each point in the object acts as a point source of light so the recording medium can be considered to be illuminated by a set of point sources located at varying distances from the medium.

The second (reference) beam illuminates the recording medium directly. Each point source wave interferes with the reference beam, giving rise to its own sinusoidal zone plate in the recording medium. The resulting pattern is the sum of all these 'zone plates', which combine to produce a random (speckle) pattern as in the photograph above.

When the hologram is illuminated by the original reference beam, each of the individual zone plates reconstructs the object wave that produced it, and these individual wavefronts are combined to reconstruct the whole of the object beam. The viewer perceives a wavefront that is identical with the wavefront scattered from the object onto the recording medium, so that it appears that the object is still in place even if it has been removed.

Mathematical model

A single-frequency light wave can be modeled by a complex number, U, which represents the electric or magnetic field of the light wave. The amplitude and phase of the light are represented by the absolute value and angle of the complex number. The object and reference waves at any point in the holographic system are given by UO and UR. The combined beam is given by UO + UR. The energy of the combined beams is proportional to the square of the magnitude of the combined waves:

|UO + UR|² = |UO|² + |UR|² + UO UR* + UO* UR,

where * denotes the complex conjugate.

If a photographic plate is exposed to the two beams and then developed, its transmittance, T, is proportional to the light energy that was incident on the plate and is given by

T = k |UO + UR|²,

where k is a constant.

When the developed plate is illuminated by the reference beam, the light transmitted through the plate, UH, is equal to the transmittance, T, multiplied by the reference beam amplitude, UR, giving

UH = T UR = k |UO + UR|² UR = k |UR|² UO + k |UR|² UR + k |UO|² UR + k UR² UO*.
It can be seen that UH has four terms, each representing a light beam emerging from the hologram. The first of these is proportional to UO. This is the reconstructed object beam, which enables a viewer to 'see' the original object even when it is no longer present in the field of view.

The second and third beams are modified versions of the reference beam. The fourth term is the "conjugate object beam". It has the reverse curvature to the object beam itself and forms a real image of the object in the space beyond the holographic plate.

When the reference and object beams are incident on the holographic recording medium at significantly different angles, the virtual, real, and reference wavefronts all emerge at different angles, enabling the reconstructed object to be seen clearly.
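
As a quick sanity check on this algebra, here is a minimal numerical sketch (assuming NumPy; the complex field values are randomly generated and purely illustrative, not part of the article) that records a transmittance from two complex waves and verifies that reilluminating it with the reference beam yields exactly the four beams described above:

import numpy as np

# Randomly chosen complex object and reference fields at a grid of points
# on the plate (illustrative only).
rng = np.random.default_rng(0)
n = 1000
UO = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # object wave
UR = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # reference wave
k = 0.5                                                      # plate constant

# Recording: transmittance proportional to the incident energy.
T = k * np.abs(UO + UR) ** 2

# Reconstruction: reilluminate the developed plate with the reference beam.
UH = T * UR

# The same field, written as the four beams discussed in the text.
term1 = k * np.abs(UR) ** 2 * UO          # reconstructed object beam
term2 = k * np.abs(UR) ** 2 * UR          # modified reference beam
term3 = k * np.abs(UO) ** 2 * UR          # modified reference beam
term4 = k * UR ** 2 * np.conj(UO)         # conjugate object beam (real image)

print(np.allclose(UH, term1 + term2 + term3 + term4))   # True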

Recording a hologram

Items required

An optical table being used to make a hologram

To make a hologram, the following are required:

  • a suitable object or set of objects
  • part of the laser beam to be directed so that it illuminates the object (the object beam) and another part so that it illuminates the recording medium directly (the reference beam), enabling the reference beam and the light which is scattered from the object onto the recording medium to form an interference pattern
  • a recording medium which converts this interference pattern into an optical element which modifies either the amplitude or the phase of an incident light beam according to the intensity of the interference pattern.
  • a laser beam that produces coherent light with one wavelength.
  • an environment which provides sufficient mechanical and thermal stability that the interference pattern is stable during the time in which the interference pattern is recorded

These requirements are inter-related, and it is essential to understand the nature of optical interference to see this. Interference is the variation in intensity which can occur when two light waves are superimposed. The intensity of the maxima exceeds the sum of the individual intensities of the two beams, and the intensity at the minima is less than this and may be zero. The interference pattern maps the relative phase between the two waves, and any change in the relative phases causes the interference pattern to move across the field of view. If the relative phase of the two waves changes by one cycle, then the pattern drifts by one whole fringe. One phase cycle corresponds to a change in the relative distances travelled by the two beams of one wavelength. Since the wavelength of light is of the order of 0.5 μm, it can be seen that very small changes in the optical paths travelled by either of the beams in the holographic recording system lead to movement of the interference pattern which is the holographic recording. Such changes can be caused by relative movements of any of the optical components or the object itself, and also by local changes in air temperature. It is essential that any such changes are significantly less than the wavelength of light if a clear, well-defined recording of the interference is to be created.
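
A back-of-the-envelope way to see how tight this requirement is (illustrative numbers only, using the ~0.5 μm wavelength quoted above; the drift values are assumed for the sake of the example):

# A change of one wavelength in the relative path length moves the pattern
# by one whole fringe, so path drift during the exposure must stay well
# below that.
wavelength_nm = 500.0                     # ~0.5 um, as quoted above

for drift_nm in (10.0, 50.0, 125.0, 500.0):
    fringes = drift_nm / wavelength_nm
    print(f"path drift {drift_nm:5.0f} nm -> fringe shift {fringes:.2f} fringe(s)")
# A drift of ~125 nm (a quarter wavelength) already smears the recording badly.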

The exposure time required to record the hologram depends on the laser power available, on the particular medium used and on the size and nature of the object(s) to be recorded, just as in conventional photography. This determines the stability requirements. Exposure times of several minutes are typical when using quite powerful gas lasers and silver halide emulsions. All the elements within the optical system have to be stable to fractions of a μm over that period. It is possible to make holograms of much less stable objects by using a pulsed laser which produces a large amount of energy in a very short time (μs or less). These systems have been used to produce holograms of live people. A holographic portrait of Dennis Gabor was produced in 1971 using a pulsed ruby laser.

Thus, the laser power, recording medium sensitivity, recording time and mechanical and thermal stability requirements are all interlinked. Generally, the smaller the object, the more compact the optical layout, so that the stability requirements are significantly less than when making holograms of large objects.

Another very important laser parameter is its coherence. This can be envisaged by considering a laser producing a sine wave whose frequency drifts over time; the coherence length can then be considered to be the distance over which it maintains a single frequency. This is important because two waves of different frequencies do not produce a stable interference pattern. The coherence length of the laser determines the depth of field which can be recorded in the scene. A good holography laser will typically have a coherence length of several meters, ample for a deep hologram.
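
For a rough feel of the numbers, the coherence length can be estimated from the laser's frequency spread as roughly c divided by the linewidth; the sketch below uses assumed, textbook-typical linewidths rather than figures from the article:

# Order-of-magnitude coherence-length estimate, L_c ~ c / delta_nu.
c = 3.0e8                                   # speed of light, m/s

for name, linewidth_hz in (("multimode HeNe, ~1.5 GHz gain bandwidth", 1.5e9),
                           ("single-frequency holography laser, ~100 MHz", 1.0e8)):
    print(f"{name}: coherence length ~ {c / linewidth_hz:.2f} m")
# A narrow single-frequency laser gives the 'several metres' quoted above;
# a multimode gas laser manages only a few tens of centimetres.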

The objects that form the scene must, in general, have optically rough surfaces so that they scatter light over a wide range of angles. A specularly reflecting (or shiny) surface reflects the light in only one direction at each point on its surface, so in general, most of the light will not be incident on the recording medium. A hologram of a shiny object can be made by locating it very close to the recording plate.

Hologram classifications

There are three important properties of a hologram which are defined in this section. A given hologram will have one or the other of each of these three properties, e.g. an amplitude-modulated, thin, transmission hologram, or a phase-modulated, volume, reflection hologram.

Amplitude and phase modulation holograms

An amplitude modulation hologram is one where the amplitude of light diffracted by the hologram is proportional to the intensity of the recorded light. A straightforward example of this is photographic emulsion on a transparent substrate. The emulsion is exposed to the interference pattern, and is subsequently developed giving a transmittance which varies with the intensity of the pattern – the more light that fell on the plate at a given point, the darker the developed plate at that point.

A phase hologram is made by changing either the thickness or the refractive index of the material in proportion to the intensity of the holographic interference pattern. This is a phase grating and it can be shown that when such a plate is illuminated by the original reference beam, it reconstructs the original object wavefront. The efficiency (i.e., the fraction of the illuminated object beam which is converted into the reconstructed object beam) is greater for phase than for amplitude modulated holograms.

Thin holograms and thick (volume) holograms

A thin hologram is one where the thickness of the recording medium is much less than the spacing of the interference fringes which make up the holographic recording. The thickness of a thin hologram can be as small as 60 nm when a thin film of the topological insulator material Sb2Te3 is used.[32] Ultrathin holograms hold the potential to be integrated with everyday consumer electronics like smartphones.

A thick or volume hologram is one where the thickness of the recording medium is greater than the spacing of the interference pattern. The recorded hologram is now a three-dimensional structure, and it can be shown that incident light is diffracted by the grating only at a particular angle, known as the Bragg angle.[33] If the hologram is illuminated with a light source incident at the original reference beam angle but with a broad spectrum of wavelengths, reconstruction occurs only at the wavelength of the original laser used. If the angle of illumination is changed, reconstruction will occur at a different wavelength and the colour of the reconstructed scene changes. A volume hologram effectively acts as a colour filter.
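
A simplified sketch of the Bragg selection described above (the grating period and angles are assumed for illustration, and refraction at the surface is ignored): the condition 2 × spacing × sin(θ) = wavelength picks out which colour is replayed at a given illumination angle.

import math

fringe_spacing = 400e-9          # metres, assumed grating period inside the medium

for theta_deg in (40.0, 50.0, 60.0):
    replay_wavelength = 2.0 * fringe_spacing * math.sin(math.radians(theta_deg))
    print(f"illumination at {theta_deg:.0f} deg -> replay near "
          f"{replay_wavelength * 1e9:.0f} nm")
# Tilting the hologram shifts the reconstructed colour, as the text describes.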

Transmission and reflection holograms

A transmission hologram is one where the object and reference beams are incident on the recording medium from the same side. In practice, several more mirrors may be used to direct the beams in the required directions.

Normally, transmission holograms can only be reconstructed using a laser or a quasi-monochromatic source, but a particular type of transmission hologram, known as a rainbow hologram, can be viewed with white light.

In a reflection hologram, the object and reference beams are incident on the plate from opposite sides of the plate. The reconstructed object is then viewed from the same side of the plate as that at which the reconstructing beam is incident.

Only volume holograms can be used to make reflection holograms, as only a very low intensity diffracted beam would be reflected by a thin hologram.

Examples of full-color reflection holograms of mineral specimens

Holographic recording media

The recording medium has to convert the original interference pattern into an optical element that modifies either the amplitude or the phase of an incident light beam in proportion to the intensity of the original light field.

The recording medium should be able to resolve fully all the fringes arising from interference between object and reference beam. These fringe spacings can range from tens of micrometers to less than one micrometer, i.e. spatial frequencies ranging from a few hundred to several thousand cycles/mm, and ideally, the recording medium should have a response which is flat over this range. Photographic film has a very low or even zero response at the frequencies involved and cannot be used to make a hologram – for example, the resolution of Kodak's professional black and white film starts falling off at 20 lines/mm – it is unlikely that any reconstructed beam could be obtained using this film.

If the response is not flat over the range of spatial frequencies in the interference pattern, then the resolution of the reconstructed image may also be degraded.

The table below shows the principal materials used for holographic recording. Note that these do not include the materials used in the mass replication of an existing hologram, which are discussed in the next section. The resolution limit given in the table indicates the maximal number of interference lines/mm of the gratings. The required exposure, expressed as millijoules (mJ) of photon energy impacting the surface area, is for a long exposure time. Short exposure times (less than 1/1000 of a second, such as with a pulsed laser) require much higher exposure energies, due to reciprocity failure.

General properties of recording materials for holography
Material | Reusable | Processing | Type | Theoretical max. efficiency | Required exposure (mJ/cm²) | Resolution limit (lines/mm)
Photographic emulsions | No | Wet | Amplitude | 6% | 1.5 | 5,000
Photographic emulsions (bleached) | No | Wet | Phase | 60% | 1.5 | 5,000
Dichromated gelatin | No | Wet | Phase | 100% | 100 | 10,000
Photoresists | No | Wet | Phase | 30% | 100 | 3,000
Photothermoplastics | Yes | Charge and heat | Phase | 33% | 0.1 | 500–1,200
Photopolymers | No | Post exposure | Phase | 100% | 10,000 | 5,000
Photorefractives | Yes | None | Phase | 100% | 10 | 10,000
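
As a hedged worked example of how these resolution limits are used, the sketch below estimates the fringe frequency written by two beams crossing at an assumed 60° full angle with an assumed 532 nm laser (neither value comes from the article) and compares it with the table's limits:

import math

# Finest fringe frequency two beams of wavelength lambda can write:
# 2 * sin(theta/2) / lambda, where theta is the full inter-beam angle.
wavelength_mm = 532e-6          # 532 nm green laser, expressed in mm
full_angle = math.radians(60.0)

required_lines_per_mm = 2.0 * math.sin(full_angle / 2.0) / wavelength_mm
print(f"required resolution: ~{required_lines_per_mm:.0f} lines/mm")

# Resolution limits (lines/mm) copied from the table above
# (upper limits where a range is given).
materials = {"Photographic emulsion": 5000, "Dichromated gelatin": 10000,
             "Photoresist": 3000, "Photothermoplastic": 1200,
             "Photopolymer": 5000, "Photorefractive": 10000}
for name, limit in materials.items():
    verdict = "ok" if limit >= required_lines_per_mm else "too coarse"
    print(f"{name}: {limit} lines/mm -> {verdict}")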

Copying and mass production

An existing hologram can be copied by embossing or optically.

Most holographic recordings (e.g. bleached silver halide, photoresist, and photopolymers) have surface relief patterns which conform with the original illumination intensity. Embossing, which is similar to the method used to stamp out plastic discs from a master in audio recording, involves copying this surface relief pattern by impressing it onto another material.

The first step in the embossing process is to make a stamper by electrodeposition of nickel on the relief image recorded on the photoresist or photothermoplastic. When the nickel layer is thick enough, it is separated from the master hologram and mounted on a metal backing plate. The material used to make embossed copies consists of a polyester base film, a resin separation layer and a thermoplastic film constituting the holographic layer.

The embossing process can be carried out with a simple heated press. The bottom layer of the duplicating film (the thermoplastic layer) is heated above its softening point and pressed against the stamper, so that it takes up its shape. This shape is retained when the film is cooled and removed from the press. In order to permit the viewing of embossed holograms in reflection, an additional reflecting layer of aluminum is usually added on the hologram recording layer. This method is particularly suited to mass production.

The first book to feature a hologram on the front cover was The Skook (Warner Books, 1984) by JP Miller, featuring an illustration by Miller. The first record album cover to have a hologram was "UB44", produced in 1982 for the British group UB40 by Advanced Holographics in Loughborough. This featured a 5.75" square embossed hologram showing a 3D image of the letters UB carved out of polystyrene to look like stone and the numbers 44 hovering in space on the picture plane. On the inner sleeve was an explanation of the holographic process and instructions on how to light the hologram. National Geographic published the first magazine with a hologram cover in March 1984. Embossed holograms are used widely on credit cards, banknotes, and high value products for authentication purposes.

It is possible to print holograms directly into steel using a sheet explosive charge to create the required surface relief. The Royal Canadian Mint produces holographic gold and silver coinage through a complex stamping process.

A hologram can be copied optically by illuminating it with a laser beam, and locating a second hologram plate so that it is illuminated both by the reconstructed object beam, and the illuminating beam. Stability and coherence requirements are significantly reduced if the two plates are located very close together. An index matching fluid is often used between the plates to minimize spurious interference between the plates. Uniform illumination can be obtained by scanning point-by-point or with a beam shaped into a thin line.

Reconstructing and viewing the holographic image

Holographic self-portrait, exhibited at the National Polytechnic Museum, Sofia

When the hologram plate is illuminated by a laser beam identical to the reference beam which was used to record the hologram, an exact reconstruction of the original object wavefront is obtained. An imaging system (an eye or a camera) located in the reconstructed beam 'sees' exactly the same scene as it would have done when viewing the original. When the lens is moved, the image changes in the same way as it would have done when the object was in place. If several objects were present when the hologram was recorded, the reconstructed objects move relative to one another, i.e. exhibit parallax, in the same way as the original objects would have done. It was very common in the early days of holography to use a chess board as the object and then take photographs at several different angles using the reconstructed light to show how the relative positions of the chess pieces appeared to change.

A holographic image can also be obtained using a different laser beam configuration to the original recording object beam, but the reconstructed image will not match the original exactly. When a laser is used to reconstruct the hologram, the image is speckled just as the original image will have been. This can be a major drawback in viewing a hologram.

White light consists of light of a wide range of wavelengths. Normally, if a hologram is illuminated by a white light source, each wavelength can be considered to generate its own holographic reconstruction, and these will vary in size, angle, and distance. These will be superimposed, and the summed image will wipe out any information about the original scene, as if superimposing a set of photographs of the same object of different sizes and orientations. However, a holographic image can be obtained using white light in specific circumstances, e.g. with volume holograms and rainbow holograms. The white light source used to view these holograms should always approximate to a point source, i.e. a spot light or the sun. An extended source (e.g. a fluorescent lamp) will not reconstruct a hologram since its light is incident at each point at a wide range of angles, giving multiple reconstructions which will "wipe" one another out.

White light reconstructions do not contain speckles.

Volume holograms

A reflection-type volume hologram can give an acceptably clear reconstructed image using a white light source, as the hologram structure itself effectively filters out light of wavelengths outside a relatively narrow range. In theory, the result should be an image of approximately the same colour as the laser light used to make the hologram. In practice, with recording media that require chemical processing, there is typically a compaction of the structure due to the processing and a consequent colour shift to a shorter wavelength. Such a hologram recorded in a silver halide gelatin emulsion by red laser light will usually display a green image. Deliberate temporary alteration of the emulsion thickness before exposure, or permanent alteration after processing, has been used by artists to produce unusual colours and multicoloured effects.

Rainbow holograms

Rainbow hologram showing the change in colour in the vertical direction

In this method, parallax in the vertical plane is sacrificed to allow a bright, well-defined, gradiently colored reconstructed image to be obtained using white light. The rainbow holography recording process usually begins with a standard transmission hologram and copies it using a horizontal slit to eliminate vertical parallax in the output image. The viewer is therefore effectively viewing the holographic image through a narrow horizontal slit, but the slit has been expanded into a window by the same dispersion that would otherwise smear the entire image. Horizontal parallax information is preserved but movement in the vertical direction results in a color shift rather than altered vertical perspective. Because perspective effects are reproduced along one axis only, the subject will appear variously stretched or squashed when the hologram is not viewed at an optimum distance; this distortion may go unnoticed when there is not much depth, but can be severe when the distance of the subject from the plane of the hologram is very substantial. Stereopsis and horizontal motion parallax, two relatively powerful cues to depth, are preserved.

The holograms found on credit cards are examples of rainbow holograms. These are technically transmission holograms mounted onto a reflective surface like a metalized polyethylene terephthalate substrate commonly known as PET.

Fidelity of the reconstructed beam

Reconstructions from two parts of a broken hologram. Note the different viewpoints required to see the whole object

To replicate the original object beam exactly, the reconstructing reference beam must be identical to the original reference beam and the recording medium must be able to fully resolve the interference pattern formed between the object and reference beams. Exact reconstruction is required in holographic interferometry, where the holographically reconstructed wavefront interferes with the wavefront coming from the actual object, giving a null fringe if there has been no movement of the object and mapping out the displacement if the object has moved. This requires very precise relocation of the developed holographic plate.

Any change in the shape, orientation or wavelength of the reference beam gives rise to aberrations in the reconstructed image. For instance, the reconstructed image is magnified if the laser used to reconstruct the hologram has a longer wavelength than the original laser. Nonetheless, good reconstruction is obtained using a laser of a different wavelength, quasi-monochromatic light or white light, in the right circumstances.

Since each point in the object illuminates all of the hologram, the whole object can be reconstructed from a small part of the hologram. Thus, a hologram can be broken up into small pieces and each one will enable the whole of the original object to be imaged. One does, however, lose information and the spatial resolution gets worse as the size of the hologram is decreased – the image becomes "fuzzier". The field of view is also reduced, and the viewer will have to change position to see different parts of the scene.

Applications

Art

Early on, artists saw the potential of holography as a medium and gained access to science laboratories to create their work. Holographic art is often the result of collaborations between scientists and artists, although some holographers would regard themselves as both an artist and a scientist.

Salvador Dalí claimed to have been the first to employ holography artistically. He was certainly the first and best-known surrealist to do so, but the 1972 New York exhibit of Dalí holograms had been preceded by the holographic art exhibition that was held at the Cranbrook Academy of Art in Michigan in 1968 and by the one at the Finch College gallery in New York in 1970, which attracted national media attention. In Great Britain, Margaret Benyon began using holography as an artistic medium in the late 1960s and had a solo exhibition at the University of Nottingham art gallery in 1969. This was followed in 1970 by a solo show at the Lisson Gallery in London, which was billed as the "first London expo of holograms and stereoscopic paintings".

During the 1970s, a number of art studios and schools were established, each with their particular approach to holography. Notably, there was the San Francisco School of Holography established by Lloyd Cross, The Museum of Holography in New York founded by Rosemary (Posy) H. Jackson, the Royal College of Art in London and the Lake Forest College Symposiums organised by Tung Jeong. None of these studios still exist; however, there are the Center for the Holographic Arts in New York and the HOLOcenter in Seoul, which offer artists places to create and exhibit work.

During the 1980s, many artists who worked with holography helped the diffusion of this so-called "new medium" in the art world, such as Harriet Casdin-Silver of the United States, Dieter Jung of Germany, and Moysés Baumstein of Brazil, each one searching for a proper "language" to use with the three-dimensional work, avoiding the simple holographic reproduction of a sculpture or object. For instance, in Brazil, many concrete poets (Augusto de Campos, Décio Pignatari, Julio Plaza and José Wagner Garcia, associated with Moysés Baumstein) found in holography a way to express themselves and to renew Concrete Poetry.

A small but active group of artists still integrate holographic elements into their work. Some are associated with novel holographic techniques; for example, artist Matt Brand employed computational mirror design to eliminate image distortion from specular holography.

The MIT Museum and Jonathan Ross both have extensive collections of holography and on-line catalogues of art holograms.

Data storage

Holography can be put to a variety of uses other than recording images. Holographic data storage is a technique that can store information at high density inside crystals or photopolymers. The ability to store large amounts of information in some kind of medium is of great importance, as many electronic products incorporate storage devices. As current storage techniques such as Blu-ray Disc reach the limit of possible data density (due to the diffraction-limited size of the writing beams), holographic storage has the potential to become the next generation of popular storage media. The advantage of this type of data storage is that the volume of the recording medium is used instead of just the surface. Currently available spatial light modulators (SLMs) can produce about 1000 different images a second at 1024×1024-bit resolution. With the right type of medium (probably polymers rather than something like LiNbO3), this would result in a writing speed of about one gigabit per second. Read speeds can surpass this, and experts believe one-terabit-per-second readout is possible.
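
The quoted writing speed follows from simple arithmetic on the SLM figures above (page rate and page size are the ones stated in the text):

pages_per_second = 1000
bits_per_page = 1024 * 1024

bits_per_second = pages_per_second * bits_per_page
print(f"{bits_per_second / 1e9:.2f} Gbit/s")   # ~1.05 Gbit/s, i.e. about one gigabit per second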

In 2005, companies such as Optware and Maxell produced a 120 mm disc that uses a holographic layer to store data to a potential 3.9 TB, a format called Holographic Versatile Disc. As of September 2014, no commercial product has been released.

Another company, InPhase Technologies, was developing a competing format, but went bankrupt in 2011 and all its assets were sold to Akonia Holographics, LLC.

While many holographic data storage models have used "page-based" storage, where each recorded hologram holds a large amount of data, more recent research into using submicrometre-sized "microholograms" has resulted in several potential 3D optical data storage solutions. While this approach to data storage cannot attain the high data rates of page-based storage, the tolerances, technological hurdles, and cost of producing a commercial product are significantly lower.

Dynamic holography

In static holography, recording, developing and reconstructing occur sequentially, and a permanent hologram is produced.

There also exist holographic materials that do not need the developing process and can record a hologram in a very short time. This allows one to use holography to perform some simple operations in an all-optical way. Examples of applications of such real-time holograms include phase-conjugate mirrors ("time-reversal" of light), optical cache memories, image processing (pattern recognition of time-varying images), and optical computing.

The amount of processed information can be very high (terabits/s), since the operation is performed in parallel on a whole image. This compensates for the fact that the recording time, which is on the order of a microsecond, is still very long compared to the processing time of an electronic computer. The optical processing performed by a dynamic hologram is also much less flexible than electronic processing. On the one hand, the operation must always be performed on the whole image; on the other hand, the operation a hologram can perform is basically either a multiplication or a phase conjugation. In optics, addition and Fourier transform are already easily performed in linear materials, the latter simply by a lens. This enables some applications, such as a device that compares images in an optical way.

The search for novel nonlinear optical materials for dynamic holography is an active area of research. The most common materials are photorefractive crystals, but holograms have also been generated in semiconductors and semiconductor heterostructures (such as quantum wells), atomic vapors and gases, plasmas, and even liquids.

A particularly promising application is optical phase conjugation. It allows the removal of the wavefront distortions a light beam receives when passing through an aberrating medium, by sending it back through the same aberrating medium with a conjugated phase. This is useful, for example, in free-space optical communications to compensate for atmospheric turbulence (the phenomenon that gives rise to the twinkling of starlight).

Hobbyist use

Peace Within Reach, a Denisyuk DCG hologram by amateur Dave Battin

Since the beginning of holography, amateur experimenters have explored its uses.

In 1971, Lloyd Cross opened the San Francisco School of Holography and taught amateurs how to make holograms using only a small (typically 5 mW) helium-neon laser and inexpensive home-made equipment. Holography had been supposed to require a very expensive metal optical table set-up to lock all the involved elements down in place and damp any vibrations that could blur the interference fringes and ruin the hologram. Cross's home-brew alternative was a sandbox made of a cinder block retaining wall on a plywood base, supported on stacks of old tires to isolate it from ground vibrations, and filled with sand that had been washed to remove dust. The laser was securely mounted atop the cinder block wall. The mirrors and simple lenses needed for directing, splitting and expanding the laser beam were affixed to short lengths of PVC pipe, which were stuck into the sand at the desired locations. The subject and the photographic plate holder were similarly supported within the sandbox. The holographer turned off the room light, blocked the laser beam near its source using a small relay-controlled shutter, loaded a plate into the holder in the dark, left the room, waited a few minutes to let everything settle, then made the exposure by remotely operating the laser shutter.

Many of these holographers would go on to produce art holograms. In 1983, Fred Unterseher, a co-founder of the San Francisco School of Holography and a well-known holographic artist, published the Holography Handbook, an easy-to-read guide to making holograms at home. This brought in a new wave of holographers and provided simple methods for using the then-available AGFA silver halide recording materials.

In 2000, Frank DeFreitas published the Shoebox Holography Book and introduced the use of inexpensive laser pointers to countless hobbyists. For many years, it had been assumed that certain characteristics of semiconductor laser diodes made them virtually useless for creating holograms, but when they were eventually put to the test of practical experiment, it was found that not only was this untrue, but that some actually provided a coherence length much greater than that of traditional helium-neon gas lasers. This was a very important development for amateurs, as the price of red laser diodes had dropped from hundreds of dollars in the early 1980s to about $5 after they entered the mass market as a component of DVD players in the late 1990s. Now, there are thousands of amateur holographers worldwide.

By late 2000, holography kits with inexpensive laser pointer diodes entered the mainstream consumer market. These kits enabled students, teachers, and hobbyists to make several kinds of holograms without specialized equipment, and became popular gift items by 2005. The introduction of holography kits with self-developing plates in 2003 made it possible for hobbyists to create holograms without the bother of wet chemical processing.

In 2006, a large number of surplus holography-quality green lasers (Coherent C315) became available and put dichromated gelatin (DCG) holography within the reach of the amateur holographer. The holography community was surprised at the amazing sensitivity of DCG to green light. It had been assumed that this sensitivity would be uselessly slight or non-existent. Jeff Blyth responded with the G307 formulation of DCG to increase the speed and sensitivity to these new lasers.

Kodak and Agfa, the former major suppliers of holography-quality silver halide plates and films, are no longer in the market. While other manufacturers have helped fill the void, many amateurs are now making their own materials. The favorite formulations are dichromated gelatin, Methylene-Blue-sensitised dichromated gelatin, and diffusion method silver halide preparations. Jeff Blyth has published very accurate methods for making these in a small lab or garage.

A small group of amateurs are even constructing their own pulsed lasers to make holograms of living subjects and other unsteady or moving objects.

Holographic interferometry

Holographic interferometry (HI) is a technique that enables static and dynamic displacements of objects with optically rough surfaces to be measured to optical interferometric precision (i.e. to fractions of a wavelength of light). It can also be used to detect optical-path-length variations in transparent media, which enables, for example, fluid flow to be visualized and analyzed. It can also be used to generate contours representing the form of the surface or the isodose regions in radiation dosimetry.

It has been widely used to measure stress, strain, and vibration in engineering structures.

Interferometric microscopy

The hologram keeps the information on both the amplitude and the phase of the field. Several holograms may keep information about the same distribution of light, emitted in various directions. The numerical analysis of such holograms allows one to emulate a large numerical aperture, which, in turn, enables enhancement of the resolution of optical microscopy. The corresponding technique is called interferometric microscopy. Recent achievements of interferometric microscopy allow one to approach the quarter-wavelength limit of resolution.

Sensors or biosensors

The hologram is made with a modified material that interacts with certain molecules, generating a change in the fringe periodicity or refractive index and therefore in the color of the holographic reflection.

Security

Identigram as a security element in a German identity card

Security holograms are very difficult to forge, because they are replicated from a master hologram that requires expensive, specialized and technologically advanced equipment. They are used widely in many currencies, such as the Brazilian 20, 50, and 100-reais notes; British 5, 10, and 20-pound notes; South Korean 5000, 10,000, and 50,000-won notes; Japanese 5000 and 10,000 yen notes; Indian 50, 100, 500, and 2000 rupee notes; and all the currently-circulating banknotes of the Canadian dollar, Croatian kuna, Danish krone, and Euro. They can also be found in credit and bank cards as well as passports, ID cards, books, DVDs, and sports equipment.

Other applications

Holographic scanners are in use in post offices, larger shipping firms, and automated conveyor systems to determine the three-dimensional size of a package. They are often used in tandem with checkweighers to allow automated pre-packing of given volumes, such as a truck or pallet for bulk shipment of goods. Holograms produced in elastomers can be used as stress-strain reporters due to their elasticity and compressibility; the pressure and force applied are correlated to the reflected wavelength, and therefore to the observed color. Holographic techniques can also be used effectively for radiation dosimetry.

FMCG industry

Hologram adhesive strips provide protection against counterfeiting and duplication of products. These protective strips can be used on FMCG products such as cards, medicines, food, and audio-visual products. Hologram protection strips can be directly laminated onto the product covering.

Electrical and electronic products

Hologram tags are an effective way to verify that a product is genuine. These kinds of tags are often used to protect electrical and electronic products against duplication. The tags are available in a variety of colors, sizes, and shapes.

Hologram dockets for vehicle number plates

Some vehicle number plates on bikes or cars carry registered hologram stickers which indicate authenticity. For identification purposes, they have unique ID numbers.

High security holograms for credit cards

Holograms on credit cards

These are holograms with high security features like micro texts, nano texts, complex images, logos and a multitude of other features. Once affixed to debit cards or passports, holograms cannot be removed easily. They offer an individual identity to a brand along with its protection.

Non-optical

In principle, it is possible to make a hologram for any wave.

Electron holography is the application of holography techniques to electron waves rather than light waves. Electron holography was invented by Dennis Gabor to improve the resolution and avoid the aberrations of the transmission electron microscope. Today it is commonly used to study electric and magnetic fields in thin films, as magnetic and electric fields can shift the phase of the interfering wave passing through the sample. The principle of electron holography can also be applied to interference lithography.

Acoustic holography is a method used to estimate the sound field near a source by measuring acoustic parameters away from the source via an array of pressure and/or particle velocity transducers.

Measuring techniques included within acoustic holography are becoming increasingly popular in various fields, most notably those of transportation, vehicle and aircraft design, and NVH (noise, vibration, and harshness). The general idea of acoustic holography has led to different versions such as near-field acoustic holography (NAH) and statistically optimal near-field acoustic holography (SONAH). For audio reproduction, wave field synthesis is the most closely related procedure.

Atomic holography has evolved out of the development of the basic elements of atom optics. With the Fresnel diffraction lens and atomic mirrors, atomic holography follows as a natural step in the development of the physics (and applications) of atomic beams. Recent developments, including atomic mirrors and especially ridged mirrors, have provided the tools necessary for the creation of atomic holograms, although such holograms have not yet been commercialized.

Neutron beam holography has been used to see the inside of solid objects.

False holograms

Effects produced by lenticular printing, the Pepper's ghost illusion (or modern variants such as the Musion Eyeliner), tomography and volumetric displays are often confused with holograms. Such illusions have been called "fauxlography".

Pepper's ghost with a 2D video. The video image displayed on the floor is reflected in an angled sheet of glass.

The Pepper's ghost technique, being the easiest to implement of these methods, is most prevalent in 3D displays that claim to be (or are referred to as) "holographic". While the original illusion, used in theater, involved actual physical objects and persons, located offstage, modern variants replace the source object with a digital screen, which displays imagery generated with 3D computer graphics to provide the necessary depth cues. The reflection, which seems to float mid-air, is still flat, however, thus less realistic than if an actual 3D object was being reflected.

Examples of this digital version of Pepper's ghost illusion include the Gorillaz performances at the 2005 MTV Europe Music Awards and the 48th Grammy Awards, and Tupac Shakur's virtual performance at Coachella Valley Music and Arts Festival in 2012, rapping alongside Snoop Dogg during his set with Dr. Dre.

An even simpler illusion can be created by rear-projecting realistic images into semi-transparent screens. The rear projection is necessary because otherwise the semi-transparency of the screen would allow the background to be illuminated by the projection, which would break the illusion.

Crypton Future Media, a music software company that produced Hatsune Miku, one of many Vocaloid singing synthesizer applications, has produced concerts that have Miku, along with other Crypton Vocaloids, performing on stage as "holographic" characters. These concerts use rear projection onto a semi-transparent DILAD screen to achieve their "holographic" effect.

In 2011, in Beijing, apparel company Burberry produced the "Burberry Prorsum Autumn/Winter 2011 Hologram Runway Show", which included life-size 2-D projections of models. The company's own video shows several centered and off-center shots of the main 2-dimensional projection screen, the latter revealing the flatness of the virtual models. The claim that holography was used was reported as fact in the trade media.

In Madrid, on 10 April 2015, a public visual presentation called "Hologramas por la Libertad" (Holograms for Liberty), featuring a ghostly virtual crowd of demonstrators, was used to protest a new Spanish law that prohibits citizens from demonstrating in public places. Although widely called a "hologram protest" in news reports, no actual holography was involved – it was yet another technologically updated variant of the Pepper's Ghost illusion.

In fiction

Holography has been widely referred to in movies, novels, and TV, usually in science fiction, starting in the late 1970s. Science fiction writers absorbed the urban legends surrounding holography that had been spread by overly enthusiastic scientists and entrepreneurs trying to market the idea. This had the effect of giving the public overly high expectations of the capability of holography, due to the unrealistic depictions of it in most fiction, where holograms are fully three-dimensional computer projections that are sometimes tactile through the use of force fields. Examples of this type of depiction include the hologram of Princess Leia in Star Wars, Arnold Rimmer from Red Dwarf, who was later converted to "hard light" to make him solid, and the Holodeck and Emergency Medical Hologram from Star Trek.

Holography served as an inspiration for many video games with science fiction elements. In many titles, fictional holographic technology has been used to reflect real-life misrepresentations of potential military use of holograms, such as the "mirage tanks" in Command & Conquer: Red Alert 2 that can disguise themselves as trees. Player characters are able to use holographic decoys in games such as Halo: Reach and Crysis 2 to confuse and distract the enemy. StarCraft ghost agent Nova has access to a "holo decoy" as one of her three primary abilities in Heroes of the Storm.[91]

Fictional depictions of holograms have, however, inspired technological advances in other fields, such as augmented reality, that promise to fulfill the fictional depictions of holograms by other means.

 

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...