Search This Blog

Sunday, July 28, 2024

Holonomic brain theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Holonomic_brain_theory

Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry.

This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm building on the initial theories of holograms originally formulated by Dennis Gabor. It describes human cognition by modeling the brain as a holographic storage network. 

Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform.

Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. In a hologram, any part of the hologram with sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. This model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain cluster of neurons).

Origins and development

In 1946 Dennis Gabor invented the hologram mathematically, describing a system where an image can be reconstructed through information that is stored throughout the hologram. He demonstrated that the information pattern of a three-dimensional object can be encoded in a beam of light, which is more-or-less two-dimensional. Gabor also developed a mathematical model for demonstrating a holographic associative memory. One of Gabor's colleagues, Pieter Jacobus Van Heerden, also developed a related holographic mathematical memory model in 1963. This model contained the key aspect of non-locality, which became important years later when, in 1967, experiments by both Braitenberg and Kirschfield showed that exact localization of memory in the brain was false.

Karl Pribram had worked with psychologist Karl Lashley on Lashley's engram experiments, which used lesions to determine the exact location of specific memories in primate brains. Lashley made small lesions in the brains and found that these had little effect on memory. On the other hand, Pribram removed large areas of cortex, leading to multiple serious deficits in memory and cognitive function. Together, these results suggested that memories were not stored in a single neuron or exact location, but were spread over the entirety of a neural network. Lashley suggested that brain interference patterns could play a role in perception, but was unsure how such patterns might be generated in the brain or how they would lead to brain function.

Several years later an article by neurophysiologist John Eccles described how a wave could be generated at the branching ends of pre-synaptic axons. Multiple of these waves could create interference patterns. Soon after, Emmett Leith was successful in storing visual images through the interference patterns of laser beams, inspired by Gabor's previous use of Fourier transformations to store information within a hologram. After studying the work of Eccles and that of Leith, Pribram put forward the hypothesis that memory might take the form of interference patterns that resemble laser-produced holograms. In 1980, physicist David Bohm presented his ideas of holomovement and Implicate and explicate order. Pribram became aware of Bohm's work in 1975 and realized that, since a hologram could store information within patterns of interference and then recreate that information when activated, it could serve as a strong metaphor for brain function. Pribram was further encouraged in this line of speculation by the fact that neurophysiologists Russell and Karen DeValois together established "the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern."

Theory overview

Hologram and holonomy


A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. Both processes of storage and retrieval are carried out in a way described by Fourier transformation equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, but the image may have unwanted changes, called noise.
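As a rough numerical illustration of this storage-and-retrieval idea (an illustration of Fourier-plane holography in general, not of Pribram's model specifically), the following Python sketch stores a small test pattern as its two-dimensional Fourier transform and then reconstructs it from only a windowed portion of that transform; the pattern, array size and window size are arbitrary choices made for the demonstration.

import numpy as np

# Sketch: store a pattern as its 2D Fourier transform (the "hologram"),
# then reconstruct it from only a small window of that transform.
image = np.zeros((64, 64))
image[16:48, 20:28] = 1.0          # arbitrary test pattern
image[16:24, 20:44] = 1.0

hologram = np.fft.fft2(image)      # "storage": every coefficient is global

# Keep only a small low-frequency window of the hologram and discard the rest.
mask = np.zeros_like(hologram)
mask[:8, :8] = mask[-8:, :8] = mask[:8, -8:] = mask[-8:, -8:] = 1
recovered = np.fft.ifft2(hologram * mask).real   # "retrieval" from the fragment

# The whole pattern is still recognizable, only blurred (the "noise" above).
similarity = np.corrcoef(image.ravel(), recovered.ravel())[0, 1]
print(f"correlation between original and partial reconstruction: {similarity:.2f}")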

An analogy to this is the broadcasting region of a radio antenna. In each smaller individual location within the entire area it is possible to access every channel, similar to how the entirety of the information of a hologram is contained within a part. Another analogy of a hologram is the way sunlight illuminates objects in the visual field of an observer. It doesn't matter how narrow the beam of sunlight is. The beam always contains all the information of the object, and when conjugated by a lens of a camera or the eyeball, produces the same full three-dimensional image. The Fourier transform formula converts spatial forms to spatial wave frequencies and vice versa, as all objects are in essence vibratory structures. Different types of lenses, acting similarly to optic lenses, can alter the frequency nature of information that is transferred.

This non-locality of information storage within the hologram is crucial, because even if most parts are damaged, the entirety will be contained within even a single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged. It is only when there exist no parts big enough to contain the whole that the memory is lost. This can also explain why some children retain normal intelligence when large portions of their brain—in some cases, half—are removed. It can also explain why memory is not lost when the brain is sliced in different cross-sections.

A single hologram can store 3D information in a 2D way. Such properties may explain some of the brain's abilities, including the ability to recognize objects at different angles and sizes than in the original stored memory.

Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. Representation occurs as a dynamical transformation in a distributed network of dendritic microprocesses. It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. This patch holography is called holonomy or windowed Fourier transformations.

A holographic model can also account for other features of memory that more traditional models cannot. The Hopfield memory model reaches its memory saturation point early, and as that point is approached, memory retrieval drastically slows and becomes unreliable. Holographic memory models, by contrast, have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and model forgetting through "lossy storage".
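One generic way to sketch this kind of distributed, associative, "lossy" storage in code is a convolution-correlation memory in the spirit of holographic reduced representations; the Python sketch below illustrates the general technique only, not Pribram's specific proposal, and the vector size and item names are arbitrary.

import numpy as np

# Sketch of a convolution-correlation ("holographic") associative memory:
# several associations are superimposed in one distributed trace, and a cue
# retrieves a noisy version of its partner. Sizes and items are illustrative.
rng = np.random.default_rng(1)
n = 2048                                     # dimensionality of each pattern

def bind(a, b):                              # circular convolution
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def probe(a, trace):                         # circular correlation (approximate inverse)
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(trace)).real

items = {name: rng.normal(0.0, 1.0 / np.sqrt(n), n)
         for name in ["smell", "kitchen", "song", "summer"]}

# Two associations stored on top of each other in a single trace.
trace = bind(items["smell"], items["kitchen"]) + bind(items["song"], items["summer"])

# Cueing with "smell" yields a noisy vector whose best match is "kitchen".
recalled = probe(items["smell"], trace)
scores = {name: float(vec @ recalled) for name, vec in items.items()}
print(max(scores, key=scores.get))           # expected: kitchen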

Synaptodendritic web


In classic brain theory the summation of electrical inputs to the dendrites and soma (cell body) of a neuron either inhibit the neuron or excite it and set off an action potential down the axon to where it synapses with the next neuron. However, this fails to account for different varieties of synapses beyond the traditional axodendritic (axon to dendrite). There is evidence for the existence of other kinds of synapses, including serial synapses and those between dendrites and soma and between different dendrites. Many synaptic locations are functionally bipolar, meaning they can both send and receive impulses from each neuron, distributing input and output over the entire group of dendrites.

Processes in this dendritic arbor, the network of teledendrons and dendrites, occur due to the oscillations of polarizations in the membrane of the fine-fibered dendrites, not due to the propagated nerve impulses associated with action potentials. Pribram posits that the length of the delay of an input signal in the dendritic arbor before it travels down the axon is related to mental awareness. The shorter the delay the more unconscious the action, while a longer delay indicates a longer period of awareness. A study by David Alkon showed that after unconscious Pavlovian conditioning there was a proportionally greater reduction in the volume of the dendritic arbor, akin to synaptic elimination when experience increases the automaticity of an action. Pribram and others theorize that, while unconscious behavior is mediated by impulses through nerve circuits, conscious behavior arises from microprocesses in the dendritic arbor.

At the same time, the dendritic network is extremely complex, able to receive 100,000 to 200,000 inputs in a single tree, due to the large amount of branching and the many dendritic spines protruding from the branches. Furthermore, synaptic hyperpolarization and depolarization remains somewhat isolated due to the resistance from the narrow dendritic spine stalk, allowing a polarization to spread without much interruption to the other spines. This spread is further aided intracellularly by the microtubules and extracellularly by glial cells. These polarizations act as waves in the synaptodendritic network, and the existence of multiple waves at once gives rise to interference patterns.

Deep and surface structure of memory

Pribram suggests that there are two layers of cortical processing: a surface structure of separated and localized neural circuits and a deep structure of the dendritic arborization that binds the surface structure together. The deep structure contains distributed memory, while the surface structure acts as the retrieval mechanism. Binding occurs through the temporal synchronization of the oscillating polarizations in the synaptodendritic web. It had been thought that binding only occurred when there was no phase lead or lag present, but a study by Saul and Humphrey found that cells in the lateral geniculate nucleus do in fact produce these. Here phase lead and lag act to enhance sensory discrimination, acting as a frame to capture important features. These filters are also similar to the lenses necessary for holographic functioning.

Pribram notes that holographic memories show large capacities, parallel processing, and content addressability for rapid recognition, as well as associative storage for perceptual completion and for associative recall. In systems endowed with memory storage, these interactions therefore lead to progressively more self-determination.

Recent studies

While Pribram originally developed the holonomic brain theory as an analogy for certain brain processes, several papers (including some more recent ones by Pribram himself) have proposed that the similarity between hologram and certain brain functions is more than just metaphorical, but actually structural. Others still maintain that the relationship is only analogical. Several studies have shown that the same series of operations used in holographic memory models are performed in certain processes concerning temporal memory and optomotor responses. This indicates at least the possibility of the existence of neurological structures with certain holonomic properties. Other studies have demonstrated the possibility that biophoton emission (biological electrical signals that are converted to weak electromagnetic waves in the visible range) may be a necessary condition for the electric activity in the brain to store holographic images. These may play a role in cell communication and certain brain processes including sleep, but further studies are needed to strengthen current ones. Other studies have shown the correlation between more advanced cognitive function and homeothermy. Taking holographic brain models into account, this temperature regulation would reduce distortion of the signal waves, an important condition for holographic systems. See: Computation approach in terms of holographic codes and processing.

Criticism and alternative models

Pribram's holonomic model of brain function did not receive widespread attention at the time, but other quantum models have been developed since, including quantum brain dynamics by Jibu & Yasue and Vitiello's dissipative quantum brain dynamics. Though not directly related to the holonomic model, they continue to move beyond approaches based solely in classic brain theory.

Correlograph

In 1969 scientists D. Willshaw, O. P. Buneman and H. Longuet-Higgins proposed an alternative, non-holographic model that fulfilled many of the same requirements as Gabor's original holographic model. The Gabor model did not explain how the brain could use Fourier analysis on incoming signals or how it would deal with the low signal-to-noise ratio in reconstructed memories. Longuet-Higgins's correlograph model built on the idea that any system could perform the same functions as a Fourier holograph if it could correlate pairs of patterns. It uses minute pinholes that do not produce diffraction patterns to create a similar reconstruction to that in Fourier holography. Like a hologram, a discrete correlograph can recognize displaced patterns and store information in a parallel and non-local way, so it usually will not be destroyed by localized damage.

They then expanded the model beyond the correlograph to an associative net where the points become parallel lines arranged in a grid. Horizontal lines represent axons of input neurons while vertical lines represent output neurons. Each intersection represents a modifiable synapse. Though this cannot recognize displaced patterns, it has a greater potential storage capacity. This was not necessarily meant to show how the brain is organized, but instead to show the possibility of improving on Gabor's original model. One property of the associative net that makes it attractive as a neural model is that good retrieval can be obtained even when some of the storage elements are damaged or when some of the components of the address are incorrect.

P. Van Heerden countered this model by demonstrating mathematically that the signal-to-noise ratio of a hologram could reach 50% of ideal. He also used a model with a 2D neural hologram network for fast searching imposed upon a 3D network for large storage capacity. A key quality of this model was its flexibility to change the orientation and fix distortions of stored information, which is important for our ability to recognize an object as the same entity from different angles and positions, something the correlograph and association network models lack.
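A minimal sketch of such an associative net, assuming the usual binary Willshaw-style formulation (sparse binary patterns, synapses switched on by input-output co-activation, thresholded retrieval), is given below; the pattern sizes, sparseness, number of stored pairs and damage rate are illustrative choices, not values from the original paper.

import numpy as np

# Willshaw-style associative net: horizontal "input axon" lines meet vertical
# "output" lines at binary synapses that are switched on whenever the input
# and output units are active together. All sizes here are illustrative.
rng = np.random.default_rng(2)
n_in, n_out, k = 256, 256, 8                 # units per side, active bits per pattern

def sparse_pattern(n):
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, size=k, replace=False)] = 1
    return p

pairs = [(sparse_pattern(n_in), sparse_pattern(n_out)) for _ in range(20)]

# Learning: synapse (i, j) is set to 1 if input i and output j ever co-fire.
W = np.zeros((n_in, n_out), dtype=int)
for x, y in pairs:
    W |= np.outer(x, y)

# Retrieval tolerates damage: knock out 5% of synapses, then threshold the
# summed input activity on each output line.
damaged = np.where(rng.random(W.shape) < 0.05, 0, W)
x, y = pairs[0]
recalled = (x @ damaged >= x.sum() - 1).astype(int)
print("output bits recovered:", int((recalled & y).sum()), "of", int(y.sum()))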

Superfluid vacuum theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Superfluid_vacuum_theory

Superfluid vacuum theory (SVT) is an approach in theoretical physics in which the fundamental physical vacuum is treated as a superfluid or a Bose–Einstein condensate. The microscopic structure of this physical vacuum is currently unknown and is a subject of intensive studies in SVT. An ultimate goal of this research is to develop scientific models that unify quantum mechanics (which describes three of the four known fundamental interactions) with gravity, making SVT a candidate for the theory of quantum gravity that would describe all known interactions in the Universe, at both microscopic and astronomic scales, as different manifestations of the same entity, the superfluid vacuum.

History

The concept of a luminiferous aether as a medium sustaining electromagnetic waves was discarded after the advent of the special theory of relativity, as the presence of the concept alongside special relativity results in several contradictions; in particular, an aether having a definite velocity at each spacetime point would exhibit a preferred direction. This conflicts with the relativistic requirement that all directions within a light cone are equivalent. However, as early as 1951, P.A.M. Dirac published two papers in which he pointed out that we should take into account quantum fluctuations in the flow of the aether. His arguments involve the application of the uncertainty principle to the velocity of aether at any spacetime point, implying that the velocity will not be a well-defined quantity. In fact, it will be distributed over various possible values. At best, one could represent the aether by a wave function representing the perfect vacuum state for which all aether velocities are equally probable.

Inspired by Dirac's ideas, K. P. Sinha, C. Sivaram and E. C. G. Sudarshan published in 1975 a series of papers that suggested a new model for the aether according to which it is a superfluid state of fermion and anti-fermion pairs, describable by a macroscopic wave function.[3][4][5] They noted that particle-like small fluctuations of the superfluid background obey the Lorentz symmetry, even if the superfluid itself is non-relativistic. Nevertheless, they decided to treat the superfluid as relativistic matter by putting it into the stress–energy tensor of the Einstein field equations. This did not allow them to describe relativistic gravity as a small fluctuation of the superfluid vacuum, as subsequent authors have noted.

Since then, several theories have been proposed within the SVT framework. They differ in how the structure and properties of the background superfluid must look. In the absence of observational data which would rule out some of them, these theories are being pursued independently.

Relation to other concepts and theories

Lorentz and Galilean symmetries

According to the approach, the background superfluid is assumed to be essentially non-relativistic, whereas the Lorentz symmetry is not an exact symmetry of Nature but rather an approximate description valid only for small fluctuations. An observer who resides inside such a vacuum and is capable of creating or measuring the small fluctuations would observe them as relativistic objects, unless their energy and momentum are sufficiently high to make the Lorentz-breaking corrections detectable. If the energies and momenta are below the excitation threshold, then the superfluid background behaves like an ideal fluid; therefore, Michelson–Morley-type experiments would observe no drag force from such an aether.

Further, in the theory of relativity the Galilean symmetry (pertinent to our macroscopic non-relativistic world) arises as the approximate one – when particles' velocities are small compared to speed of light in vacuum. In SVT one does not need to go through Lorentz symmetry to obtain the Galilean one – the dispersion relations of most non-relativistic superfluids are known to obey the non-relativistic behavior at large momenta.

To summarize, the fluctuations of the vacuum superfluid behave like relativistic objects at "small" momenta (a.k.a. the "phononic limit"), where the dispersion relation is approximately linear, E ∝ |p|, and like non-relativistic ones, with E ∝ p², at large momenta. The yet unknown nontrivial physics is believed to be located somewhere between these two regimes.
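As a concrete analogue of this crossover, the textbook Bogoliubov dispersion of a weakly interacting Bose-Einstein condensate interpolates between exactly these two limits; the short sketch below evaluates it at small and large momenta. It is only an illustrative stand-in under that assumption, since SVT itself does not commit to this particular dispersion law, and the units are arbitrary.

import numpy as np

# Illustrative crossover using the textbook Bogoliubov dispersion of a weakly
# interacting condensate (an analogue only, not SVT's actual law):
#   E(p) = sqrt((c*p)**2 + (p**2 / (2*m))**2)
# At small p it is phonon-like, E ~ c*p; at large p it tends to E ~ p**2/(2*m).
m, c = 1.0, 1.0                     # arbitrary illustrative units

def E(p):
    return np.sqrt((c * p) ** 2 + (p ** 2 / (2 * m)) ** 2)

for p in (1e-3, 1e-2, 1e2, 1e3):
    print(f"p = {p:8.3g}   E = {E(p):10.4g}   c*p = {c * p:10.4g}   p^2/2m = {p ** 2 / (2 * m):10.4g}")
# Small p: E tracks c*p (the "phononic limit"); large p: E tracks p^2/(2m).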

Relativistic quantum field theory

In relativistic quantum field theory the physical vacuum is also assumed to be some sort of non-trivial medium to which one can associate a certain energy. This is because the concept of absolutely empty space (or "mathematical vacuum") contradicts the postulates of quantum mechanics. According to QFT, even in the absence of real particles the background is always filled by pairs of virtual particles that are constantly being created and annihilated. However, a direct attempt to describe such a medium leads to the so-called ultraviolet divergences. In some QFT models, such as quantum electrodynamics, these problems can be "solved" using the renormalization technique, namely, replacing the diverging physical values by their experimentally measured values. In other theories, such as quantum general relativity, this trick does not work, and a reliable perturbation theory cannot be constructed.

According to SVT, this is because in the high-energy ("ultraviolet") regime the Lorentz symmetry starts failing, so theories that depend on it cannot be regarded as valid for all scales of energies and momenta. Correspondingly, while the Lorentz-symmetric quantum field models are obviously a good approximation below the vacuum-energy threshold, in its close vicinity the relativistic description becomes more and more "effective" and less and less natural, since one will need to adjust the expressions for the covariant field-theoretical actions by hand.

Curved spacetime

According to general relativity, gravitational interaction is described in terms of spacetime curvature using the mathematical formalism of differential geometry. This has been supported by numerous experiments and observations in the regime of low energies. However, the attempts to quantize general relativity led to various severe problems; therefore, the microscopic structure of gravity is still ill-defined. There may be a fundamental reason for this: the degrees of freedom on which general relativity is based may be only approximate and effective. The question of whether general relativity is an effective theory has been raised for a long time.

According to SVT, the curved spacetime arises as the small-amplitude collective excitation mode of the non-relativistic background condensate. The mathematical description of this is similar to fluid-gravity analogy which is being used also in the analog gravity models. Thus, relativistic gravity is essentially a long-wavelength theory of the collective modes whose amplitude is small compared to the background one. Outside this requirement the curved-space description of gravity in terms of the Riemannian geometry becomes incomplete or ill-defined.

Cosmological constant

The notion of the cosmological constant makes sense in a relativistic theory only; therefore, within the SVT framework this constant can refer at most to the energy of small fluctuations of the vacuum above a background value, but not to the energy of the vacuum itself. Thus, in SVT this constant does not have any fundamental physical meaning, and related problems, such as the vacuum catastrophe, simply do not arise in the first place.

Gravitational waves and gravitons

According to general relativity, the conventional gravitational wave is:

  1. the small fluctuation of curved spacetime which
  2. has been separated from its source and propagates independently.

Superfluid vacuum theory calls into question the possibility that a relativistic object possessing both of these properties exists in nature. Indeed, according to the approach, the curved spacetime itself is the small collective excitation of the superfluid background; therefore, property (1) means that the graviton would in fact be a "small fluctuation of the small fluctuation", which does not look like a physically robust concept (as if somebody tried to introduce small fluctuations inside a phonon, for instance). As a result, it may be not just a coincidence that in general relativity the gravitational field alone has no well-defined stress–energy tensor, only the pseudotensor one. Therefore, property (2) cannot be completely justified in a theory with exact Lorentz symmetry, which general relativity is. However, SVT does not a priori forbid the existence of non-localized wave-like excitations of the superfluid background which might be responsible for the astrophysical phenomena currently attributed to gravitational waves, such as the Hulse–Taylor binary; such excitations, though, cannot be correctly described within the framework of a fully relativistic theory.

Mass generation and Higgs boson

The Higgs boson is the spin-0 particle that has been introduced in electroweak theory to give mass to the weak bosons. The origin of mass of the Higgs boson itself is not explained by electroweak theory. Instead, this mass is introduced as a free parameter by means of the Higgs potential, which thus makes it yet another free parameter of the Standard Model. Within the framework of the Standard Model (or its extensions) the theoretical estimates of this parameter's value are possible only indirectly and results differ from each other significantly. Thus, the usage of the Higgs boson (or any other elementary particle with predefined mass) alone is not the most fundamental solution of the mass generation problem but only its reformulation ad infinitum. Another known issue of the Glashow–Weinberg–Salam model is the wrong sign of mass term in the (unbroken) Higgs sector for energies above the symmetry-breaking scale.

While SVT does not explicitly forbid the existence of the electroweak Higgs particle, it has its own idea of the fundamental mass generation mechanism: elementary particles acquire mass due to the interaction with the vacuum condensate, similarly to the gap generation mechanism in superconductors or superfluids. Although this idea is not entirely new (one could recall the relativistic Coleman–Weinberg approach), SVT gives meaning to the symmetry-breaking relativistic scalar field as describing small fluctuations of the background superfluid, which can be interpreted as an elementary particle only under certain conditions. In general, two scenarios are allowed:

  • Higgs boson exists: in this case SVT provides the mass generation mechanism which underlies the electroweak one and explains the origin of mass of the Higgs boson itself;
  • Higgs boson does not exist: then the weak bosons acquire mass by directly interacting with the vacuum condensate.

Thus, the Higgs boson, even if it exists, would be a by-product of the fundamental mass generation phenomenon rather than its cause.

Also, some versions of SVT favor a wave equation based on the logarithmic potential rather than on the quartic one. The former potential has not only the Mexican-hat shape, necessary for the spontaneous symmetry breaking, but also some other features which make it more suitable for the vacuum's description.

Logarithmic BEC vacuum theory

In this model the physical vacuum is conjectured to be a strongly-correlated quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as the small-amplitude collective excitation mode, whereas relativistic elementary particles can be described by the particle-like modes in the limit of low energies and momenta. The essential difference of this theory from others is that in the logarithmic superfluid the maximal velocity of fluctuations is constant in the leading (classical) order. This allows the relativity postulates to be fully recovered in the "phononic" (linearized) limit.

The proposed theory has many observational consequences. They are based on the fact that at high energies and momenta the behavior of the particle-like modes eventually becomes distinct from the relativistic one: they can reach the speed-of-light limit at finite energy. Among other predicted effects are superluminal propagation and vacuum Cherenkov radiation.

The theory advocates a mass generation mechanism which is supposed to replace or alter the electroweak Higgs one. It was shown that masses of elementary particles can arise as a result of interaction with the superfluid vacuum, similarly to the gap generation mechanism in superconductors. For instance, the photon propagating in the average interstellar vacuum acquires a tiny mass, estimated to be about 10⁻³⁵ electronvolt. One can also derive an effective potential for the Higgs sector which is different from the one used in the Glashow–Weinberg–Salam model, yet it still yields mass generation and is free of the imaginary-mass problem appearing in the conventional Higgs potential.

Body psychotherapy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Body_psychotherapy

Body psychotherapy, also called body-oriented psychotherapy, is an approach to psychotherapy which applies basic principles of somatic psychology. It originated in the work of Pierre Janet, Sigmund Freud and particularly Wilhelm Reich, who developed it as vegetotherapy. Branches such as Reichian body-oriented psychotherapy were also developed by Alexander Lowen and John Pierrakos, both patients and students of Reich, and by Gerda Boyesen.

History

Wilhelm Reich and the post-Reichians are considered the central element of body psychotherapy. From the 1930s, Reich became known for the idea that muscular tension reflected repressed emotions, what he called 'body armour', and developed a way to use pressure to produce emotional release in his clients. Reich was expelled from the psychoanalytic mainstream and his work found a home in the 'growth movement' of the 1960s and 1970s and in the countercultural project of 'liberating the body'. Perhaps as a result, body psychotherapy was marginalised within mainstream psychology and was seen in the 1980s and 1990s as 'the radical fringe of psychotherapy'. Body psychotherapy's marginal position may be connected with the tendency for charismatic leaders to emerge within it, from Reich onwards.

Alexander Lowen in his Bioenergetic analysis and John Pierrakos in Core energetics extended Reich's finding of the segmented nature of body armour: "The muscular armour has a segmented arrangement...always transverse to the torso, never along it". Lowen claimed that "No words are so clear as the language of body expression". Subsequently, the Chiron Centre for Body Psychotherapy added influences from Gestalt therapy to their approach.

The early 2000s saw a 'renaissance of body psychotherapy' which was part of a broader increased interest in the body and embodiment in psychology and other disciplines including philosophy, sociology, anthropology and cultural studies. Object relations theory has arguably opened the way more recently for a fuller consideration of the body-mind connection in psychotherapy.

Branches

There are numerous branches of body psychotherapy, often tracing their origins to particular individuals: for example, 'Bioenergetic analysis' to the work of Lowen and Pierrakos; 'Radix' to the work of Chuck Kelley; Organismic Psychotherapy to the work of Malcolm and Katherine Brown; 'Biosynthesis' to the work of David Boadella; 'Biodynamic Psychology' to that of Gerda Boyesen; 'Rubenfeld Synergy' to Ilana Rubenfeld's work; 'Body-Mind Centering' to Bonnie Bainbridge Cohen's work; 'Body-mind Psychotherapy' to Susan Aposhyan; and 'Postural and Energetic Integration' to the work of Jack Painter, which was developed into a psychotherapeutic modality.

Many of these contributors to body psychotherapy were influenced by the work of Wilhelm Reich, while adding and incorporating a variety of other influences. Syntheses of these approaches are also becoming accepted and recognised in their own right (e.g. The Chiron Approach: Chiron Association of Body Psychotherapists).

Alongside the body psychotherapies built directly on the work of Reich, there is a branch of post-Jungian body psychotherapies, developed from Jung's idea of the 'somatic unconscious'. While many post-Jungians dismiss Reich and do not work with the body, contributors to Jungian derived body psychotherapy include Arnold Mindell with his concept of the 'dreambody' and the development of process oriented psychology. Process oriented psychology is known for its focus on the body and movement.

Body psychotherapy and dance movement therapy have developed separately and are professionally distinguished, however they have significant common ground and shared principles including the importance of non-verbal therapeutic techniques and the development of body-focused awareness.

A review of body psychotherapy research finds there is a small but growing empirical evidence base about the outcomes of body psychotherapy; however, it is weakened by the fragmentation of the field into different branches and schools. The review reports that one of the strongest studies is longitudinal (2-year) outcome research conducted with 342 participants across 8 different schools (Hakomi Experiential Psychology, Unitive Body Psychotherapy, Biodynamic Psychology, Bioenergetic Analysis, Client-Centred Verbal and Body Psychotherapy, Integrative Body Psychotherapy, Body-Oriented Psychotherapy, and Biosynthesis). Overall efficacy was demonstrated in symptom reduction; however, the study design limited further substantive conclusions.

The review of outcome research across different types of body-oriented psychotherapy concludes that the best evidence supports efficacy for treating somatoform/psychosomatic disorders and schizophrenia, while there is also support for 'generally good effects on subjectively experienced depressive and anxiety symptoms, somatisation and social insecurity.' A more recent review found that results in some of these domains were mixed or might have resulted from other causes (for example, somatic symptoms in one study improved even after therapy had ended, suggesting that the improvements may have been unrelated to the therapy).

Trauma

Body psychotherapy is one modality used in a multi-modal approach to treating psychological trauma, particularly post-traumatic stress disorder (PTSD) and complex post-traumatic stress disorder (C-PTSD).

Recovering a sense of physical boundaries through sensorimotor psychotherapy is an important part of re-establishing trust in the traumatised. Blending somatic and cognitive awareness, such an approach reaches back for inspiration to the pioneering work of Janet, as well as employing the more recent work of António Damásio.

The necessity of often working without touch with traumatised victims presents a special challenge for body psychotherapists.

Organizations

The European Association for Body Psychotherapy (EABP) and The United States Association for Body Psychotherapy (USABP) are two professional associations for body psychotherapists.

The EABP was founded in 1988 to promote the inclusion of body psychotherapy within a broader process of professionalisation, standardisation and regulation of psychotherapy in Europe, driven by the European Association for Psychotherapy (EAP). The EABP Board committed to meeting the EAP standards for establishing the scientific validity of psychotherapy modalities and achieved this in 1999/2000 for body psychotherapy as a whole, with various individual modalities subsequently also achieving this recognition. It was accepted as a European-Wide Accrediting Organisation in 2000.

EABP holds a bi-annual conference; organises a Council of ten National B-P Associations; and supports a FORUM of Body-Psychotherapy Organisations, which accredits more than 18 B-P training organisations in 10 different countries. The EABP website also provides a list of research papers and a searchable bibliography of body-psychotherapy publications containing more than 5,000 entries.

The USABP was formed in June 1996 to provide professional representation for body psychotherapy practitioners in the United States. The USABP launched a peer-reviewed professional journal in 2002, the USA Body Psychotherapy Journal, which was published twice-yearly from 2002 to 2011. In 2012, the sister organisations, EABP and USABP, together launched the International Body Psychotherapy Journal.

There is also an Australian association, Somatic Psychotherapy Australia.

Cautions

The importance of ethical issues in body psychotherapy has been highlighted on account of the intimacy of the techniques used.

The term bioenergetic has a well established meaning in biochemistry and cell biology. Its use in RBOP (Reichian body-oriented psychotherapy) has been criticized as "ignoring the already well established universal consensus about energy existing in Science."

There is a group of psychotherapists who believe that psychotherapy should be thought of as a craft and evaluated based on the effectiveness of the treatment, rather than evaluated based on scientific validity. However, efficacy studies of body psychotherapy have been few in number and, although the results are supportive of the use of body psychotherapy in some contexts, this trend "is not overwhelming".

Orchestrated objective reduction

From Wikipedia, the free encyclopedia
 

Orchestrated objective reduction (Orch OR) is a highly controversial theory postulating that consciousness originates at the quantum level inside neurons (rather than being a product of neural connections). The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. It is proposed that the theory may answer the hard problem of consciousness and provide a mechanism for free will. The hypothesis was first put forward in the early 1990s by Nobel laureate for physics, Roger Penrose, and anaesthesiologist Stuart Hameroff. The hypothesis combines approaches from molecular biology, neuroscience, pharmacology, philosophy, quantum information theory, and quantum gravity.

While more generally accepted theories assert that consciousness emerges as the complexity of the computations performed by cerebral neurons increases, Orch OR posits that consciousness is based on non-computable quantum processing performed by qubits formed collectively on cellular microtubules, a process significantly amplified in the neurons. The qubits are based on oscillating dipoles forming superposed resonance rings in helical pathways throughout lattices of microtubules. The oscillations are either electric, due to charge separation from London forces, or magnetic, due to electron spin—and possibly also due to nuclear spins (that can remain isolated for longer periods) that occur in gigahertz, megahertz and kilohertz frequency ranges. Orchestration refers to the hypothetical process by which connective proteins, such as microtubule-associated proteins (MAPs), influence or orchestrate qubit state reduction by modifying the spacetime-separation of their superimposed states. The latter is based on Penrose's objective-collapse theory for interpreting quantum mechanics, which postulates the existence of an objective threshold governing the collapse of quantum-states, related to the difference of the spacetime curvature of these states in the universe's fine-scale structure.

Orchestrated objective reduction has been criticized from its inception by mathematicians, philosophers, and scientists. The criticism concentrated on three issues: Penrose's interpretation of Gödel's theorem; Penrose's abductive reasoning linking non-computability to quantum events; and the brain's unsuitability to host the quantum phenomena required by the theory, since it is considered too "warm, wet and noisy" to avoid decoherence.

Background


In 1931, mathematician and logician Kurt Gödel proved that any effectively generated theory capable of proving basic arithmetic cannot be both consistent and complete. In other words, a consistent theory of this kind contains true statements that it cannot prove within its own formal system. In his first book concerning consciousness, The Emperor's New Mind (1989), Roger Penrose argued that human mathematicians can nevertheless see the truth of such "Gödel-type propositions", and therefore that human understanding cannot be captured by any purely algorithmic procedure.

Building on Gödel's theorem, the Penrose–Lucas argument leaves the question of the physical basis of non-computable behaviour open. Most physical laws are computable, and thus algorithmic. However, Penrose determined that wave function collapse was a prime candidate for a non-computable process. In quantum mechanics, particles are treated differently from the objects of classical mechanics. Particles are described by wave functions that evolve according to the Schrödinger equation. Non-stationary wave functions are linear combinations of the eigenstates of the system, a phenomenon described by the superposition principle. When a quantum system interacts with a classical system—i.e. when an observable is measured—the system appears to collapse to a random eigenstate of that observable from a classical vantage point.

If collapse is truly random, then no process or algorithm can deterministically predict its outcome. This provided Penrose with a candidate for the physical basis of the non-computable process that he hypothesized to exist in the brain. However, he disliked the random nature of environmentally induced collapse, as randomness was not a promising basis for mathematical understanding. Penrose proposed that isolated systems may still undergo a new form of wave function collapse, which he called objective reduction (OR).

Penrose sought to reconcile general relativity and quantum theory using his own ideas about the possible structure of spacetime. He suggested that at the Planck scale curved spacetime is not continuous, but discrete. He further postulated that each separated quantum superposition has its own piece of spacetime curvature, a blister in spacetime. Penrose suggests that gravity exerts a force on these spacetime blisters, which become unstable above the Planck scale and collapse to just one of the possible states. The rough threshold for OR is given by Penrose's indeterminacy principle:

τ ≈ ħ / E_G

where:
  • τ is the time until OR occurs,
  • E_G is the gravitational self-energy or the degree of spacetime separation given by the superpositioned mass, and
  • ħ is the reduced Planck constant.

Thus, the greater the mass–energy of the object, the faster it will undergo OR and vice versa. Mesoscopic objects could collapse on a timescale relevant to neural processing.
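A back-of-the-envelope sketch of this scaling, assuming the commonly quoted form τ ≈ ħ/E_G and approximating the gravitational self-energy of a superposed mass m of size r as E_G ≈ Gm²/r, is given below; the sample masses and radii are illustrative choices, not figures used by Penrose or Hameroff.

# Back-of-the-envelope sketch of the OR timescale tau ~ hbar / E_G, taking
# E_G ~ G * m**2 / r as a rough gravitational self-energy for a mass m
# superposed over a displacement of about its own size r. The two test cases
# below are illustrative only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s

def or_time(mass_kg, radius_m):
    e_g = G * mass_kg ** 2 / radius_m     # rough gravitational self-energy, J
    return hbar / e_g                     # seconds until objective reduction

# A single nucleon: OR would take millions of years.
print(f"nucleon-scale superposition: {or_time(1.7e-27, 1e-15):.3g} s")
# A microgram dust grain: OR would be almost instantaneous.
print(f"microgram dust grain:        {or_time(1e-9, 1e-5):.3g} s")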

An essential feature of Penrose's theory is that the choice of states when objective reduction occurs is selected neither randomly (as are choices following wave function collapse) nor algorithmically. Rather, states are selected by a "non-computable" influence embedded in the Planck scale of spacetime geometry. Penrose claimed that such information is Platonic, representing pure mathematical truths, which relates to Penrose's ideas concerning the three worlds: the physical, the mental, and the Platonic mathematical world. In Shadows of the Mind (1994), Penrose briefly indicates that this Platonic world could also include aesthetic and ethical values, but he does not commit to this further hypothesis.

The Penrose–Lucas argument was criticized by mathematicians, computer scientists, and philosophers, and the consensus among experts in these fields is that the argument fails, with different authors attacking different aspects of the argument. Minsky argued that because humans can believe false ideas to be true, human mathematical understanding need not be consistent and consciousness may easily have a deterministic basis. Feferman argued that mathematicians do not progress by mechanistic search through proofs, but by trial-and-error reasoning, insight and inspiration, and that machines do not share this approach with humans.

Orch OR

Penrose outlined a predecessor to Orch OR in The Emperor's New Mind, coming to the problem from a mathematical viewpoint and in particular Gödel's theorem, but lacked a detailed proposal for how quantum processes could be implemented in the brain. Stuart Hameroff separately worked in cancer research and anesthesia, which gave him an interest in brain processes. Hameroff read Penrose's book and suggested to him that microtubules within neurons were suitable candidate sites for quantum processing, and ultimately for consciousness. Throughout the 1990s, the two collaborated on the Orch OR theory, which Penrose published in Shadows of the Mind (1994).

Hameroff's contribution to the theory derived from his study of the neural cytoskeleton, and particularly on microtubules. As neuroscience has progressed, the role of the cytoskeleton and microtubules has assumed greater importance. In addition to providing structural support, microtubule functions include axoplasmic transport and control of the cell's movement, growth and shape.

Orch OR combines the Penrose–Lucas argument with Hameroff's hypothesis on quantum processing in microtubules. It proposes that when condensates in the brain undergo an objective wave function reduction, their collapse connects noncomputational decision-making to experiences embedded in spacetime's fundamental geometry. The theory further proposes that the microtubules both influence and are influenced by the conventional activity at the synapses between neurons.

Microtubule computation

A: An axon terminal releases neurotransmitters across a synapse, and they are received by microtubules in a neuron's dendritic spine.
B: Simulated microtubule tubulins switch states.

Hameroff proposed that microtubules were suitable candidates for quantum processing. Microtubules are made up of tubulin protein subunits. The tubulin protein dimers of the microtubules have hydrophobic pockets that may contain delocalized π electrons. Tubulin has other, smaller non-polar regions, for example 8 tryptophans per tubulin, which contain π electron-rich indole rings distributed throughout tubulin with separations of roughly 2 nm. Hameroff claims that this is close enough for the tubulin π electrons to become quantum entangled. During entanglement, particle states become inseparably correlated. Quantum effects in tryptophans were confirmed in a 2024 study, Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures.

Hameroff originally suggested in the fringe Journal of Cosmology that the tubulin-subunit electrons would form a Bose–Einstein condensate. He then proposed a Frohlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too was rejected by Reimers's group. Hameroff then responded to Reimers. "Reimers et al have most definitely NOT shown that strong or coherent Frohlich condensation in microtubules is unfeasible. The model microtubule on which they base their Hamiltonian is not a microtubule structure, but a simple linear chain of oscillators." Hameroff reasoned that such condensate behavior would magnify nanoscopic quantum effects to have large scale influences in the brain.

Hameroff then proposed that condensates in microtubules in one neuron can link with microtubule condensates in other neurons and glial cells via the gap junctions of electrical synapses. Hameroff proposed that the gap between the cells is sufficiently small that quantum objects can tunnel across it, allowing them to extend across a large area of the brain. He further postulated that the action of this large-scale quantum activity is the source of 40 Hz gamma waves, building upon the much less controversial theory that gap junctions are related to the gamma oscillation.

In April 2022, the results of two related experiments were presented at The Science of Consciousness conference:

  1. In a study Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility (in a later study superradiance was confirmed to occur in networks of tryptophans, which are found in microtubules). “We’re not at the level of interpreting this physiologically, saying 'Yeah, this is where consciousness begins,' but it may," Jack Tuszyński told New Scientist.
  2. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules farther than expected, which did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid filled extracellular space.

In 2024, a study titled Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures, published in The Journal of Physical Chemistry, confirmed the quantum effect called superradiance in large networks of tryptophans, which are found in microtubules. Large networks of tryptophans are a warm and noisy environment, one in which quantum effects typically are not expected to take place. The results of the study were first theoretically predicted and then experimentally confirmed by the researchers. Professor Majed Chergui of the Swiss Federal Institute of Technology, who led the experimental team, said: "It's a beautiful result. It took very precise and careful application of standard protein spectroscopy methods, but guided by the theoretical predictions of our collaborators, we were able to confirm a stunning signature of superradiance in a micron-scale biological system". Marlan Scully, a physics professor at Princeton University known among other things for his work in theoretical quantum optics, commented on the study by saying: “We will certainly be examining closely the implications for quantum effects in living systems for years to come”. The study states: "by analyzing the coupling with the electromagnetic field of mega-networks of tryptophans present in these biologically relevant architectures, we find the emergence of collective quantum optical effects, namely, superradiant and subradiant eigenmodes (...) our work demonstrates that collective and cooperative UV excitations in mega-networks of tryptophans support robust quantum states in protein aggregates, with observed consequences even under thermal equilibrium conditions".

Microtubule quantum vibration theory of anesthetic action

At high concentrations (~5 MAC) the anesthetic gas halothane causes reversible depolymerization of microtubules. This cannot be the mechanism of anesthetic action, however, because human anesthesia is performed at 1 MAC. (It is important to note that neither Penrose nor Hameroff ever claimed that depolymerization is the mechanism of action for Orch OR.) At ~1 MAC halothane, reported minor changes in tubulin protein expression (~1.3-fold) in primary cortical neurons after exposure to halothane and isoflurane are not evidence that tubulin directly interacts with general anesthetics, but rather show that the proteins controlling tubulin production are possible anesthetic targets. A further proteomic study reports 0.5 mM [14C]halothane binding to tubulin monomers alongside three dozen other proteins. In addition, modulation of microtubule stability has been reported during anthracene general anesthesia of tadpoles. The study, Direct Modulation of Microtubule Stability Contributes to Anthracene General Anesthesia, claims to provide "strong evidence that destabilization of neuronal microtubules provides a path to achieving general anesthesia".

What might anesthetics do to microtubules to cause loss of consciousness? A highly disputed theory put forth in the mid-1990s by Stuart Hameroff and Sir Roger Penrose posits that consciousness is based on quantum vibrations in tubulin/microtubules inside brain neurons. Computer modeling of tubulin's atomic structure found that anesthetic gas molecules bind adjacent to amino acid aromatic rings of non-polar π-electrons and that collective quantum dipole oscillations among all π-electron resonance rings in each tubulin showed a spectrum with a common mode peak at 613 THz. Simulated presence of 8 different anesthetic gases abolished the 613 THz peak, whereas the presence of 2 different nonanesthetic gases did not affect the 613 THz peak, from which it was speculated that this 613 THz peak in microtubules could be related to consciousness and anesthetic action.

Another study that Stuart Hameroff was a part of claims to show "anesthetic molecules can impair π-resonance energy transfer and exciton hopping in 'quantum channels' of tryptophan rings in tubulin, and thus account for selective action of anesthetics on consciousness and memory".

Criticism

Orch OR has been criticized both by physicists and neuroscientists who consider it to be a poor model of brain physiology. Orch OR has also been criticized for lacking explanatory power; the philosopher Patricia Churchland wrote, "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."

David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why particular macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no particular reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either.

Decoherence in living organisms

In 2000 Max Tegmark claimed that any quantum coherent system in the brain would undergo effective wave function collapse due to environmental interaction long before it could influence neural processes (the "warm, wet and noisy" argument, as it later came to be known). He determined the decoherence timescale of microtubule entanglement at brain temperatures to be on the order of femtoseconds, far too brief for neural processing. Christof Koch and Klaus Hepp also agreed that quantum coherence does not play, or does not need to play, any major role in neurophysiology. Koch and Hepp concluded that "The empirical demonstration of slowly decoherent and controllable quantum bits in neurons connected by electrical or chemical synapses, or the discovery of an efficient quantum algorithm for computations performed by the brain, would do much to bring these speculations from the 'far-out' to the mere 'very unlikely'."

In response to Tegmark's claims, Hagan, Tuszynski and Hameroff claimed that Tegmark did not address the Orch OR model, but instead a model of his own construction. This involved superpositions of quanta separated by 24 nm rather than the much smaller separations stipulated for Orch OR. As a result, Hameroff's group claimed a decoherence time seven orders of magnitude greater than Tegmark's, although still far below 25 ms. Hameroff's group also suggested that the Debye layer of counterions could screen thermal fluctuations, and that the surrounding actin gel might enhance the ordering of water, further screening noise. They also suggested that incoherent metabolic energy could further order water, and finally that the configuration of the microtubule lattice might be suitable for quantum error correction, a means of resisting quantum decoherence.

In 2009, Reimers et al. and McKemmish et al. published critical assessments. Earlier versions of the theory had required tubulin electrons to form either Bose–Einstein or Frohlich condensates, and the Reimers group noted the lack of empirical evidence that such condensates could occur. Additionally, they calculated that microtubules could only support weak 8 MHz coherence. McKemmish et al. argued that aromatic molecules cannot switch states because they are delocalised, and that changes in tubulin protein conformation driven by GTP conversion would result in a prohibitive energy requirement.

In 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness.

Neuroscience

Hameroff frequently writes: "A typical brain neuron has roughly 10⁷ tubulins (Yu and Baas, 1994)", yet this is Hameroff's own invention, which should not be attributed to Yu and Baas. Hameroff apparently misunderstood that Yu and Baas actually "reconstructed the microtubule (MT) arrays of a 56 μm axon from a cell that had undergone axon differentiation" and this reconstructed axon "contained 1430 MTs ... and the total MT length was 5750 μm." A direct calculation shows that 10⁷ tubulins (to be precise 9.3 × 10⁶ tubulins) correspond to this MT length of 5750 μm inside the 56 μm axon.
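The direct calculation can be reproduced from the quoted total microtubule length by assuming the standard microtubule geometry of 13 protofilaments and an 8 nm axial repeat per tubulin dimer; these two structural constants are textbook values assumed here, not figures taken from Yu and Baas.

# Reproducing the direct calculation: convert the quoted total microtubule
# length into a tubulin-dimer count, assuming 13 protofilaments and an 8 nm
# axial repeat per dimer (standard textbook values, assumed here).
total_mt_length_um = 5750        # total MT length in the reconstructed 56 um axon
dimer_length_nm = 8              # axial length of one tubulin dimer
protofilaments = 13              # dimer columns around the microtubule wall

dimers = total_mt_length_um * 1000 / dimer_length_nm * protofilaments
print(f"{dimers:.2g} tubulin dimers")    # about 9.3e+06, i.e. roughly 10^7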

Hameroff's 1998 hypothesis required that cortical dendrites contain primarily 'A' lattice microtubules, but in 1994 Kikkawa et al. showed that all in vivo microtubules have a 'B' lattice and a seam.

Orch OR also required gap junctions between neurons and glial cells, yet Binmöller et al. proved in 1992 that these do not exist in the adult brain. In vitro research with primary neuronal cultures shows evidence for electrotonic (gap junction) coupling between immature neurons and astrocytes obtained from rat embryos extracted prematurely through Cesarean section; however, the Orch OR claim is that mature neurons are electrotonically coupled to astrocytes in the adult brain. Therefore, Orch OR contradicts the well-documented electrotonic decoupling of neurons from astrocytes in the process of neuronal maturation, which is stated by Fróes et al. as follows: "junctional communication may provide metabolic and electrotonic interconnections between neuronal and astrocytic networks at early stages of neural development and such interactions are weakened as differentiation progresses."

Other biology-based criticisms have been offered, including a lack of explanation for the probabilistic release of neurotransmitter from presynaptic axon terminals and an error in the calculated number of the tubulin dimers per cortical neuron.

In 2014, Penrose and Hameroff published responses to some criticisms and revisions to many of the theory's peripheral assumptions, while retaining the core hypothesis.

Delayed-choice quantum eraser

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser

A delayed-cho...