Quantum mechanics is the study of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics
explains matter and energy only on a scale familiar to human
experience, including the behavior of astronomical bodies such as the
Moon. Classical physics is still used in much of modern science and
technology. However, towards the end of the 19th century, scientists
discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and
classical theory led to a revolution in physics, a shift in the original
scientific paradigm: the development of quantum mechanics.
Many aspects of quantum mechanics are counterintuitive and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is—absurd". Features of quantum mechanics often defy simple explanations in everyday language. One example is the uncertainty principle: precise measurements of position cannot be combined with precise measurements of velocity. Another example is entanglement: a measurement made on one particle (such as an electron that is measured to have spin 'up') will correlate with a measurement on a second particle (an electron that will be found to have spin 'down') if the two particles have a shared history. This holds even if it is impossible for the result of the first measurement to have been transmitted to the second particle before the second measurement takes place.
Quantum mechanics helps people understand chemistry, because it explains how atoms interact with each other and form molecules. Many remarkable phenomena can be explained using quantum mechanics, like superfluidity. For example, if liquid helium cooled to a temperature near absolute zero
is placed in a container, it spontaneously flows up and over the rim of
its container; this is an effect which cannot be explained by classical
physics.
James Clerk Maxwell's unification of the equations
governing electricity, magnetism, and light in the late 19th century
led to experiments on the interaction of light and matter. Some of these
experiments had aspects which could not be explained until quantum
mechanics emerged in the early part of the 20th century.
The seeds of the quantum revolution appear in the discovery by J.J. Thomson in 1897 that cathode rays were not continuous but "corpuscles" (electrons). Electrons had been named just six years earlier as part of the emerging theory of atoms. In 1900, Max Planck, unconvinced by the atomic theory, discovered that he needed discrete entities like atoms or electrons to explain black-body radiation.
Figure: black-body radiation intensity versus color and temperature. The rainbow bar represents visible light; 5000 K objects are "white hot" from mixing differing colors of visible light, with invisible infrared to the right. The classical curve (black, for 5000 K) fails to predict the colors; the other curves are correctly predicted by quantum theories.
Very hot – red hot or white hot – objects look similar when heated to
the same temperature. This look results from a common curve of light
intensity at different frequencies (colors), which is called black-body
radiation. White hot objects have intensity across many colors in the
visible range. Frequencies just below visible light are infrared, which
also delivers heat. Continuous wave theories of light and matter
cannot explain the black-body radiation curve. Planck spread the heat
energy among individual "oscillators" of an undefined character but with
discrete energy capacity; this model explained black-body radiation.
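In modern notation, the quantum result Planck obtained is usually written as the black-body spectral radiance

\[ B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T} - 1}, \]

where \(\nu\) is the frequency, \(T\) the temperature, \(c\) the speed of light, \(k_B\) the Boltzmann constant, and \(h\) the new constant (the Planck constant) that sets the size of each oscillator's discrete energy steps, \(E = h\nu\).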
At the time, electrons, atoms, and discrete oscillators were all exotic ideas to explain exotic phenomena. But in 1905 Albert Einstein
proposed that light was also corpuscular, consisting of "energy
quanta", in contradiction to the established science of light as a
continuous wave, stretching back a hundred years to Thomas Young's work on diffraction.
Einstein's revolutionary proposal started by reanalyzing Planck's
black-body theory, arriving at the same conclusions by using the new
"energy quanta". Einstein then showed how energy quanta connected to
Thomson's electron. In 1902, Philipp Lenard
directed light from an arc lamp onto freshly cleaned metal plates
housed in an evacuated glass tube. He measured the electric current
coming off the metal plate, at higher and lower intensities of light and
for different metals. Lenard showed that the amount of current – the number
of electrons – depended on the intensity of the light, but that the
velocity of these electrons did not depend on intensity. This is the photoelectric effect.
The continuous wave theories of the time predicted that more light
intensity would accelerate the same amount of current to higher
velocity, contrary to this experiment. Einstein's energy quanta explained the increase in current: one electron is ejected for each quantum, so more quanta mean more electrons.
Einstein then predicted that the electron velocity would increase
in direct proportion to the light frequency above a fixed value that
depended upon the metal. Here the idea is that energy in energy-quanta
depends upon the light frequency; the energy transferred to the electron
comes in proportion to the light frequency. The type of metal sets a barrier, the fixed value (its work function), that electrons must climb over to exit their atoms and escape the metal surface to be measured.
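In modern notation, Einstein's prediction takes the form

\[ K_{\max} = hf - \varphi, \]

where \(K_{\max}\) is the maximum kinetic energy of the ejected electron, \(f\) is the light frequency, \(h\) is the Planck constant, and \(\varphi\) is the work function, the fixed barrier value that depends on the metal. No electrons are emitted when \(hf < \varphi\), however intense the light.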
Ten years elapsed before Millikan's definitive experiment verified Einstein's prediction. During that time many scientists rejected the revolutionary idea of quanta. But Planck's and Einstein's concept was in the air and soon began to affect other physics and quantum theories.
Experiments with light and matter in the late 1800s uncovered a
reproducible but puzzling regularity. When light was shone through
purified gases, certain frequencies (colors) did not pass. These dark
absorption 'lines' followed a distinctive pattern: the gaps between the
lines decreased steadily. By 1889, the Rydberg formula predicted the lines for hydrogen gas using only a constant number and the integers to index the lines.
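In modern form, the Rydberg formula for hydrogen reads

\[ \frac{1}{\lambda} = R_H\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right), \qquad n_2 > n_1, \]

where \(\lambda\) is the wavelength of a line, \(R_H \approx 1.097 \times 10^7\ \text{m}^{-1}\) is the Rydberg constant, and the integers \(n_1\) and \(n_2\) index the lines.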
The origin of this regularity was unknown. Solving this mystery would
eventually become the first major step toward quantum mechanics.
Throughout the 19th century evidence grew for the atomic
nature of matter. With Thomson's discovery of the electron in 1897,
scientists began the search for a model of the interior of the atom.
Thomson proposed negative electrons swimming in a pool of positive charge. Between 1908 and 1911, Rutherford showed that the positive part was only 1/3000th of the diameter of the atom.
Models of "planetary" electrons orbiting a nuclear "Sun" were
proposed, but cannot explain why the electron does not simply fall into
the positive charge. In 1913 Niels Bohr and Ernest Rutherford
connected the new atom models to the mystery of the Rydberg formula:
the orbital radii of the electrons were constrained, and the resulting
energy differences matched the energy differences in the absorption
lines. This meant that absorption and emission of light from atoms was
energy quantized: only specific energies that matched the difference in
orbital energy would be emitted or absorbed.
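Quantitatively, Bohr's constrained orbits give the hydrogen energy levels

\[ E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots, \]

and a photon is absorbed or emitted only when its energy matches a difference between two levels, \(hf = E_{n_2} - E_{n_1}\); this reproduces the Rydberg formula.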
Trading one mystery – the regular pattern of the Rydberg formula –
for another mystery – constraints on electron orbits – might not seem
like a big advance, but the new atom model summarized many other
experimental findings. The quantization of the photoelectric effect and
now the quantization of the electron orbits set the stage for the final
revolution.
Throughout the first and the modern era of quantum mechanics, the
concept that classical mechanics must be valid macroscopically
constrained possible quantum models. This concept was formalized by Bohr
in 1923 as the correspondence principle. It requires quantum theory to converge to classical limits. A related concept is Ehrenfest's theorem, which shows that the average values obtained from quantum mechanics (e.g. position and momentum) obey classical laws.
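For a single particle of mass \(m\) moving in a potential \(V(x)\), Ehrenfest's theorem takes the form

\[ \frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m}, \qquad \frac{d\langle p\rangle}{dt} = -\left\langle \frac{\partial V}{\partial x} \right\rangle, \]

so the average position and momentum evolve in a way that closely mirrors Newton's laws for a classical particle.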
Figure: the Stern–Gerlach experiment. Silver atoms travel from a furnace through an inhomogeneous magnetic field and are deflected up or down depending on their spin; instead of the classically expected continuous spread, the observed result is two discrete beams.
In 1922 Otto Stern and Walther Gerlach demonstrated that the magnetic properties of silver atoms defy classical explanation, work that contributed to Stern's 1943 Nobel Prize in Physics.
They fired a beam of silver atoms through a magnetic field. According
to classical physics, the atoms should have emerged in a spray, with a
continuous range of directions. Instead, the beam separated into two,
and only two, diverging streams of atoms. Unlike the other quantum effects known at the time, this striking result involves the state of a single atom. In 1927, Thomas Erwin Phipps and John Bellamy Taylor obtained a similar, but less pronounced, effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms.
In 1924, Wolfgang Pauli called it "two-valuedness not describable classically" and associated it with electrons in the outermost shell. The experiments led to the formulation, in 1925 by Samuel Goudsmit and George Uhlenbeck under the advice of Paul Ehrenfest, of a theory describing the effect as arising from the spin of the electron.
In 1924 Louis de Broglie proposed that electrons in an atom are constrained not in "orbits" but as standing waves. In detail his solution did not work, but his hypothesis – that the electron "corpuscle" moves in the atom as a wave – spurred Erwin Schrödinger to develop a wave equation for electrons; when applied to hydrogen, this equation accurately reproduced the Rydberg formula.
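De Broglie's hypothesis assigns the electron a wavelength

\[ \lambda = \frac{h}{p}, \]

where \(p\) is the electron's momentum; requiring a whole number of wavelengths to fit around an orbit of radius \(r\), so that \(n\lambda = 2\pi r\), reproduces exactly the constrained orbits of the Bohr model.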
Figure: an original electron diffraction photograph from the laboratory of G. P. Thomson, recorded 1925–1927.
Max Born's 1924 paper "Zur Quantenmechanik" was the first use of the words "quantum mechanics" in print. His later work included developing quantum collision models; in a footnote to a 1926 paper he proposed the Born rule connecting theoretical models to experiment.
In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target, which showed a diffraction pattern indicating the wave nature of the electron; the theory was fully explained by Hans Bethe. A similar experiment by George Paget Thomson and Alexander Reid, firing electrons at thin celluloid foils and later metal films and observing rings, independently discovered the matter-wave nature of electrons.
Planck and Einstein started the revolution with quanta that broke
down the continuous models of matter and light. Twenty years later
"corpuscles" like electrons came to be modeled as continuous waves. This
result came to be called wave-particle duality, one iconic idea along
with the uncertainty principle that sets quantum mechanics apart from
older models of physics.
In 1923 Compton
demonstrated that the Planck-Einstein energy quanta from light also had
momentum; three years later the "energy quanta" got a new name "photon". Despite its role in almost all stages of the quantum revolution, no explicit model for light quanta existed until 1927 when Paul Dirac began work on a quantum theory of radiation that became quantum electrodynamics. Over the following decades this work evolved into quantum field theory, the basis for modern quantum optics and particle physics.
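Compton's result gives the photon a momentum fixed by the same constants that fix its energy:

\[ p = \frac{E}{c} = \frac{hf}{c} = \frac{h}{\lambda}. \]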
The concept of wave–particle duality says that neither the classical
concept of "particle" nor of "wave" can fully describe the behavior of
quantum-scale objects, either photons or matter. Wave–particle duality
is an example of the principle of complementarity in quantum physics. An elegant example of wave–particle duality is the double-slit experiment.
Figure: the diffraction pattern produced when light is shone through one slit, and the interference pattern produced by two slits. Both patterns show oscillations due to the wave nature of light; the double-slit pattern is more dramatic.
In the double-slit experiment, as originally performed by Thomas Young in 1803, and then Augustin Fresnel a decade later, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern
of light and dark bands on a screen. The same behavior can be
demonstrated in water waves: the double-slit experiment was seen as a
demonstration of the wave nature of light.
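For slits separated by a distance \(d\) and light of wavelength \(\lambda\), bright bands appear on the screen at angles \(\theta\) where the two paths differ by a whole number of wavelengths:

\[ d \sin\theta = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots \]

Dark bands fall between them, where the path difference is a half-integer number of wavelengths and the waves cancel.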
Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules, and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses wave characteristics.
If the source intensity is turned down, the same interference
pattern will slowly build up, one "count" or particle (e.g. photon or
electron) at a time. The quantum system acts as a wave when passing
through the double slits, but as a particle when it is detected. This is
a typical feature of quantum complementarity: a quantum system acts as a
wave in an experiment to measure its wave-like properties, and like a
particle in an experiment to measure its particle-like properties. The
point on the detector screen where any individual particle shows up is
the result of a random process. However, the distribution pattern of
many individual particles mimics the diffraction pattern produced by
waves.
Suppose it is desired to measure the position and speed of an
object—for example, a car going through a radar speed trap. It can be
assumed that the car has a definite position and speed at a particular
moment in time. How accurately these values can be measured depends on
the quality of the measuring equipment. If the precision of the
measuring equipment is improved, it provides a result closer to the true
value. It might be assumed that the speed of the car and its position
could be operationally defined and measured simultaneously, as precisely
as might be desired.
In 1927, Heisenberg proved that this last assumption is not correct. Quantum mechanics shows that certain pairs of physical properties, for
example, position and speed, cannot be simultaneously measured, nor
defined in operational terms, to arbitrary precision: the more precisely
one property is measured, or defined in operational terms, the less
precisely can the other be thus treated. This statement is known as the uncertainty principle.
The uncertainty principle is not only a statement about the accuracy of
our measuring equipment but, more deeply, is about the conceptual
nature of the measured quantities—the assumption that the car had
simultaneously defined position and speed does not work in quantum
mechanics. On a scale of cars and people, these uncertainties are
negligible, but when dealing with atoms and electrons they become
critical.
Heisenberg gave, as an illustration, the measurement of the position and momentum
of an electron using a photon of light. In measuring the electron's
position, the higher the frequency of the photon, the more accurate is
the measurement of the position of the impact of the photon with the
electron, but the greater is the disturbance of the electron. This is
because from the impact with the photon, the electron absorbs a random
amount of energy, rendering the measurement obtained of its momentum
increasingly uncertain, for one is necessarily measuring its post-impact
disturbed momentum from the collision products and not its original
momentum (momentum which should be simultaneously measured with
position). With a photon of lower frequency, the disturbance (and hence
uncertainty) in the momentum is less, but so is the accuracy of the
measurement of the position of the impact.
At the heart of the uncertainty principle is a fact that for any
mathematical analysis in the position and velocity domains, achieving a
sharper (more precise) curve in the position domain can only be done at
the expense of a more gradual (less precise) curve in the speed domain,
and vice versa. More sharpness in the position domain requires
contributions from more frequencies in the speed domain to create the
narrower curve, and vice versa. It is a fundamental tradeoff inherent in
any such related or complementary measurements, but it is only really noticeable at the smallest scales, near the size of atoms and elementary particles.
The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum
of a particle (momentum is velocity multiplied by mass) could never be
less than a certain value, and that this value is related to the Planck constant.
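In its standard modern form, due to Kennard, the principle states

\[ \sigma_x\,\sigma_p \ge \frac{\hbar}{2}, \qquad \hbar = \frac{h}{2\pi}, \]

where \(\sigma_x\) and \(\sigma_p\) are the standard deviations of position and momentum measurements.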
Wave function collapse means that a measurement has forced or
converted a quantum (probabilistic or potential) state into a definite
measured value. This phenomenon is seen only in quantum mechanics; it has no counterpart in classical mechanics.
For example, before a photon actually "shows up" on a detection
screen it can be described only with a set of probabilities for where it
might show up. When it does appear, for instance in the CCD
of an electronic camera, the time and space where it interacted with
the device are known within very tight limits. However, the photon has
disappeared in the process of being captured (measured), and its quantum
wave function
has disappeared with it. In its place, some macroscopic physical change
in the detection screen has appeared, e.g., an exposed spot in a sheet
of photographic film, or a change in electric potential in some cell of a
CCD.
Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability
that the position or momentum has some numerical value. Therefore, it
is necessary to formulate clearly the difference between the state of
something indeterminate, such as an electron in a probability cloud, and
the state of something having a definite value. When an object can
definitely be "pinned-down" in some respect, it is said to possess an eigenstate.
In the Stern–Gerlach experiment discussed above,
the quantum model predicts two possible values of spin for the atom
compared to the magnetic axis. These two eigenstates are named
arbitrarily 'up' and 'down'. The quantum model predicts these states
will be measured with equal probability, but no intermediate values will
be seen. This is what the Stern–Gerlach experiment shows.
The eigenstates of spin about the vertical axis are not
simultaneously eigenstates of spin about the horizontal axis, so this
atom has an equal probability of being found to have either value of
spin about the horizontal axis. As described in the section above,
measuring the spin about the horizontal axis can allow an atom that was spin up to become spin down: measuring its spin about the horizontal axis
collapses its wave function into one of the eigenstates of this
measurement, which means it is no longer in an eigenstate of spin about
the vertical axis, so can take either value.
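In the standard notation, an eigenstate of horizontal spin is an equal superposition of the two vertical eigenstates, for example

\[ |{+}x\rangle = \frac{1}{\sqrt{2}}\left(|{\uparrow}\rangle + |{\downarrow}\rangle\right), \]

so a subsequent measurement of vertical spin on this state yields 'up' or 'down' with probability \(|1/\sqrt{2}|^2 = 1/2\) each, in accordance with the Born rule.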
In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number),
with two possible values, to resolve inconsistencies between observed
molecular spectra and the predictions of quantum mechanics. In
particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle,
stating, "There cannot exist an atom in such a quantum state that two
electrons within [it] have the same set of quantum numbers."
In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity.
The result was a theory that dealt properly with events, such as the
speed at which an electron orbits the nucleus, occurring at a
substantial fraction of the speed of light. By using the simplest electromagnetic interaction,
Dirac was able to predict the value of the magnetic moment associated
with the electron's spin and found the experimentally observed value,
which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum.
Dirac's equations sometimes yielded a negative value for energy,
for which he proposed a novel solution: he posited the existence of an antielectron and a dynamical vacuum. This led to the many-particle quantum field theory.
In quantum physics, a group of particles can interact or be created together in such a way that the quantum state
of each particle of the group cannot be described independently of the
state of the others, including when the particles are separated by a
large distance. This is known as quantum entanglement.
An early landmark in the study of entanglement was the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment proposed by Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical
Reality be Considered Complete?", they argued for the existence of
"elements of reality" that were not part of quantum theory, and
speculated that it should be possible to construct a theory containing
these hidden variables.
The thought experiment involves a pair of particles prepared in
what would later become known as an entangled state. Einstein, Podolsky,
and Rosen pointed out that, in this state, if the position of the first
particle were measured, the result of measuring the position of the
second particle could be predicted. If instead the momentum of the first
particle were measured, then the result of measuring the momentum of
the second particle could be predicted. They argued that no action taken
on the first particle could instantaneously affect the other, since
this would involve information being transmitted faster than light,
which is forbidden by the theory of relativity.
They invoked a principle, later known as the "EPR criterion of
reality", positing that: "If, without in any way disturbing a system, we
can predict with certainty (i.e., with probability
equal to unity) the value of a physical quantity, then there exists an
element of reality corresponding to that quantity." From this, they
inferred that the second particle must have a definite value of both
position and of momentum prior to either quantity being measured. But
quantum mechanics considers these two observables incompatible
and thus does not associate simultaneous values for both to any system.
Einstein, Podolsky, and Rosen therefore concluded that quantum theory
does not provide a complete description of reality. In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics."
The Irish physicist John Stewart Bell
carried the analysis of quantum entanglement much further. He deduced
that if measurements are performed independently on the two separated
particles of an entangled pair, then the assumption that the outcomes
depend upon hidden variables within each half implies a mathematical
constraint on how the outcomes on the two measurements are correlated.
This constraint would later be named the Bell inequality.
Bell then showed that quantum physics predicts correlations that
violate this inequality. Consequently, the only way that hidden
variables could explain the predictions of quantum physics is if they
are "nonlocal", which is to say that somehow the two particles are able
to interact instantaneously no matter how widely they ever become separated. Performing experiments like those that Bell suggested, physicists have
found that nature obeys quantum mechanics and violates Bell
inequalities. In other words, the results of these experiments are
incompatible with any local hidden variable theory.
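One widely used form of the constraint is the CHSH inequality. If \(E(a, b)\) denotes the correlation between outcomes when the two sides measure along directions \(a\) and \(b\), any local hidden-variable theory requires

\[ |S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2, \]

while quantum mechanics predicts values up to \(2\sqrt{2} \approx 2.83\) for suitably chosen directions on an entangled pair, and experiments observe this violation.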
The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field, just as in quantum mechanics the energy of an electron in the hydrogen atom was quantized. Quantization is a procedure for constructing a
quantum theory starting from a classical theory.
Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists". Other effects that manifest themselves as fields are gravitation and static electricity. In 2008, physicist Richard Hammond wrote:
Sometimes we distinguish between quantum mechanics (QM)
and quantum field theory (QFT). QM refers to a system in which the
number of particles is fixed, and the fields (such as the electromagnetic field) are continuous classical entities. QFT ...
goes a step further and allows for the creation and annihilation of
particles ...
He added, however, that quantum mechanics is often used to refer to "the entire notion of quantum view".
In 1931, Dirac proposed the existence of particles that later became known as antimatter. Dirac shared the Nobel Prize in Physics for 1933 with Schrödinger "for the discovery of new productive forms of atomic theory".
Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge.
Electric charges are the sources of, and create, electric fields.
An electric field is a field that exerts a force on any particles that
carry electric charges, at any point in space. This includes the
electron, proton, and even quarks,
among others. As a force is exerted, electric charges move, a current
flows, and a magnetic field is produced. The changing magnetic field, in
turn, causes electric current (often moving electrons). The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism.
In 1928 Paul Dirac produced a relativistic quantum theory of
electromagnetism. This was the progenitor to modern quantum
electrodynamics, in that it had essential ingredients of the modern
theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization
largely solved this problem. Initially viewed as a provisional, suspect
procedure by some of its originators, renormalization eventually was
embraced as an important and self-consistent tool in QED and other
fields of physics. Also, in the late 1940s Feynman diagrams
provided a way to make predictions with QED by finding a probability
amplitude for each possible way that an interaction could occur. The
diagrams showed in particular that the electromagnetic force is the
exchange of photons between interacting particles.
The Lamb shift
is an example of a quantum electrodynamics prediction that has been
experimentally verified. It is an effect whereby the quantum nature of
the electromagnetic field makes the energy levels in an atom or ion
deviate slightly from what they would otherwise be. As a result,
spectral lines may shift or split.
Similarly, within a freely propagating electromagnetic wave, the current can also be just an abstract displacement current, instead of involving charge carriers. In QED, its full description makes essential use of short-lived virtual particles. There, QED again validates an earlier, rather mysterious concept.
The physical measurements, equations, and predictions pertinent to
quantum mechanics are all consistent and hold a very high level of
confirmation. However, the question of what these abstract models say
about the underlying nature of the real world has received competing
answers. These interpretations are widely varying and sometimes somewhat
abstract. For instance, the Copenhagen interpretation states that before a measurement, statements about a particle's properties are completely meaningless, while the many-worlds interpretation describes the existence of a multiverse made up of every possible universe.
Light behaves in some aspects like particles and in other aspects
like waves. Matter—the "stuff" of the universe consisting of particles
such as electrons and atoms—exhibits wavelike behavior too. Some light sources, such as neon lights,
give off only certain specific frequencies of light, a small set of
distinct pure colors determined by neon's atomic structure. Quantum
mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors), and the intensities of its light beams. A single photon is a quantum,
or smallest observable particle, of the electromagnetic field. A
partial photon is never experimentally observed. More broadly, quantum
mechanics shows that many properties of objects, such as position,
speed, and angular momentum,
that appeared continuous in the zoomed-out view of classical mechanics,
turn out to be (in the very tiny, zoomed-in scale of quantum mechanics)
quantized. Such properties of elementary particles
are required to take on one of a set of small, discrete allowable
values, and since the gap between these values is also small, the
discontinuities are only apparent at very tiny (atomic) scales.
The relationship between the frequency of electromagnetic radiation and the energy of each photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energy—enough
to contribute to cellular damage such as occurs in a sunburn. A photon
of infrared light delivers less energy—only enough to warm one's skin.
So, an infrared lamp can warm a large surface, perhaps large enough to
keep people comfortable in a cold room, but it cannot give anyone a
sunburn.
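The arithmetic behind this comparison is the Planck–Einstein relation

\[ E = hf = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV·nm}}{\lambda}. \]

An ultraviolet photon at 320 nm therefore carries roughly 3.9 eV, comparable to chemical bond energies, while an infrared photon at 10 μm carries only about 0.12 eV, enough only to warm the skin.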
In even a simple light switch, quantum tunneling is absolutely vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling, to erase their memory cells.
Parapsychology research rarely appears in mainstream scientific journals; a few niche journals publish most papers about parapsychology.
Terminology
The term parapsychology was coined in 1889 by philosopher Max Dessoir as the German parapsychologie. It was adopted by J. B. Rhine in the 1930s as a replacement for the term psychical research to indicate a significant shift toward experimental methodology and academic discipline. The term originates from the Greek παρά (para), meaning "alongside", and psychology.
In parapsychology, psi is the unknown factor in extrasensory perception and psychokinesis experiences that is not explained by known physical or biological mechanisms. The term is derived from ψ (psi), the 23rd letter of the Greek alphabet and the initial letter of the Greek ψυχή (psyche), "mind, soul". The term was coined by biologist Bertold Wiesner and first used by psychologist Robert Thouless in a 1942 article published in the British Journal of Psychology.
In 1853, chemist Robert Hare conducted experiments with mediums and reported positive results. Other researchers, such as Frank Podmore, highlighted flaws in his experiments, such as a lack of controls to prevent trickery. Agenor de Gasparin conducted early experiments into table-tipping. After five months of experiments in 1853, he declared them a success, the result of an "ectenic force".
Critics noted that the conditions were insufficient to prevent
trickery. For example, the sitters may have moved the table with their
knees, and no experimenter was simultaneously watching above and below
the table.
The German astrophysicist Johann Karl Friedrich Zöllner tested the medium Henry Slade in 1877. According to Zöllner, some of the experiments were successful. However, flaws in the experiments were discovered, and critics have
suggested that Slade was a fraud who performed trickery in the
experiments.
Areas of study included telepathy, hypnotism, Reichenbach's phenomena, apparitions, hauntings, and the physical aspects of Spiritualism such as table-tilting, materialization, and apportation. In the 1880s, the Society for Psychical Research (SPR) investigated apparitional experiences and hallucinations in the sane. Among the first important works was the two-volume publication in 1886, Phantasms of the Living, which was largely criticized by scholars. In 1894, the Census of Hallucinations
was published which sampled 17,000 people. Out of these, 1,684 persons
admitted to having experienced a hallucination of an apparition. The SPR became the model for similar societies in other European countries and the United States during the late 19th century.
Early clairvoyance experiments were reported in 1884 by Charles Richet.
Playing cards were enclosed in envelopes, and a subject was put under
hypnosis to identify them. The subject was reported to have succeeded in
a series of 133 trials, but the results dropped to the chance level
when performed before a group of scientists in Cambridge. J. M. Peirce
and E. C. Pickering reported a similar experiment in which they tested 36 subjects over 23,384 trials, which did not obtain above-chance scores.
In 1911, Stanford University became the first academic institution in the United States to study extrasensory perception (ESP) and psychokinesis (PK) in a laboratory setting. The effort was headed by psychologist John Edgar Coover and funded by Thomas Welton Stanford,
brother of the university's founder. After conducting approximately
10,000 experiments, Coover concluded that "statistical treatments of the
data fail to reveal any cause beyond chance."
In 1930, Duke University
became the second major U.S. academic institution to engage in the
critical study of ESP and psychokinesis in the laboratory. Under the
guidance of psychologist William McDougall, and with the help of others in the department—including psychologists Karl Zener, Joseph B. Rhine, and Louisa E. Rhine—laboratory
ESP experiments using volunteer subjects from the undergraduate student
body began. As opposed to the approaches of psychical research, which
generally sought qualitative evidence for paranormal phenomena, the experiments at Duke University proffered a quantitative, statistical approach using cards
and dice. As a consequence of the ESP experiments at Duke, standard
laboratory procedures for the testing of ESP were developed and adopted
by interested researchers worldwide.
George Estabrooks
conducted an ESP experiment using cards in 1927. Harvard students were
used as the subjects. Estabrooks acted as the sender, with the guesser
in an adjoining room. Estabrooks conducted a total of 2,300 trials. When
Estabrooks sent the subjects to a distant room with insulation, the
scores dropped to chance level. Attempts to repeat the experiment also
failed.
The publication of J. B. Rhine's book, New Frontiers of the Mind
(1937), brought the laboratory's findings to the general public. In his
book, Rhine popularized the word "parapsychology", coined by
psychologist Max Dessoir
over 40 years earlier, to describe the research conducted at Duke.
Rhine also founded an autonomous Parapsychology Laboratory within Duke
and started the Journal of Parapsychology, which he co-edited with McDougall.
Early parapsychological research employed the use of Zener cards in experiments designed to test for the existence of telepathic communication, or clairvoyant or precognitive perception.
Rhine, along with associate Karl Zener, had developed a statistical
system of testing for ESP that involved subjects guessing what symbol,
out of five possible symbols, would appear when going through a special deck of cards
designed for this purpose. A percentage of correct guesses (or hits)
significantly above 20% was perceived as higher than chance and
indicative of psychic ability. Rhine stated in his first book, Extrasensory Perception (1934), that after 90,000 trials, he felt ESP is "an actual and demonstrable occurrence".
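The statistical logic here is a simple binomial comparison against the 20% chance rate. The following is a minimal sketch of such a test using only the Python standard library; the hit and trial counts are hypothetical, for illustration, and this is not Rhine's actual procedure.

```python
# Minimal sketch of the binomial logic behind Zener-card scoring: with five
# symbols, pure guessing hits 20% on average. Counts below are hypothetical.
from math import comb

def p_value(hits: int, trials: int, chance: float = 0.2) -> float:
    """One-sided probability of scoring `hits` or more by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(hits, trials + 1))

# Example: 60 hits in 250 guesses is a 24% hit rate, yet p is only ~0.07;
# above-chance scoring must be sustained over many trials to be significant.
print(p_value(60, 250))
```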
Irish medium and parapsychologist Eileen J. Garrett
was tested by Rhine at Duke University in 1933 with Zener cards. Rhine
placed certain symbols on the cards, sealed them in an envelope, and
asked Garrett to guess their contents. She performed poorly and later
criticized the tests by claiming the cards lacked a psychic energy called "energy stimulus" and that she could not perform clairvoyance to order. The parapsychologist Samuel Soal and his colleagues tested Garrett in May 1937. Soal conducted most experiments in the Psychological Laboratory at University College London. Soal recorded over 12,000 guesses, but Garrett failed to score above chance level. In his report Soal wrote "In the case of Mrs. Eileen Garrett we fail to
find the slightest confirmation of J. B. Rhine's remarkable claims
relating to her alleged powers of extra-sensory perception. Not only did
she fail when I took charge of the experiments, but she failed equally
when four other carefully trained experimenters took my place."
The parapsychology experiments at Duke evoked much criticism from
academics and others who challenged the concepts and evidence of ESP.
Many psychological departments attempted to repeat Rhine's experiments without success. W. S. Cox (1936) from Princeton University,
with 132 subjects, produced 25,064 trials in a playing card ESP
experiment. Cox concluded, "There is no evidence of extrasensory
perception either in the 'average man' or of the group investigated or
in any particular individual of that group. The discrepancy between
these results and those obtained by Rhine is due either to
uncontrollable factors in experimental procedure or to the difference in
the subjects." Four other psychological departments failed to replicate Rhine's results. After thousands of card runs, James Charles Crumbaugh failed to duplicate the results of Rhine.
In 1938, the psychologist Joseph Jastrow
wrote that much of the evidence for extrasensory perception collected
by Rhine and other parapsychologists was anecdotal, biased, dubious and
the result of "faulty observation and familiar human frailties". Rhine's experiments were discredited due to the discovery that sensory leakage
or cheating could account for all his results, such as the subject
being able to read the symbols from the back of the cards and being able
to see and hear the experimenter to note subtle clues.
Illusionist Milbourne Christopher
wrote years later that he felt "there are at least a dozen ways a
subject who wished to cheat under the conditions Rhine described could
deceive the investigator". When Rhine took precautions in response to
criticisms of his methods, he failed to find any high-scoring subjects. Another criticism, made by chemist Irving Langmuir, among others, was one of selective reporting.
Langmuir stated that Rhine did not report scores of subjects that he
suspected were intentionally guessing wrong and that this, he felt,
biased the statistical results higher than they should have been.
Rhine and his colleagues attempted to address these criticisms through new experiments described in the book Extrasensory Perception After Sixty Years (1940). Rhine described three experiments: the Pearce-Pratt experiment, the Pratt-Woodruff experiment, and the Ownbey-Zirkle series, which he believed demonstrated ESP. However, C. E. M. Hansel
wrote, "It is now known that each experiment contained serious flaws
that escaped notice in the examination made by the authors of Extra-Sensory Perception After Sixty Years". Joseph Gaither Pratt
was the co-experimenter in the Pearce-Pratt and Pratt-Woodruff
experiments at the Duke campus. Hansel visited the campus where the
experiments took place and discovered the results could have originated
through a trick, so they could not supply evidence for ESP.
Rhine's psychokinesis research used dice, with subjects 'willing' them to
fall a certain way. Not only can dice be drilled, shaved, falsely
numbered and manipulated, but even straight dice often show bias in the
long run. Casinos for this reason retire dice often, but at Duke,
subjects continued to try for the same effect on the same dice over long
experimental runs. Not surprisingly, PK appeared at Duke and nowhere
else.
Parapsychologists and skeptics criticized the Ownbey-Zirkle ESP experiment at Duke. Ownbey would attempt to send ESP symbols to Zirkle, who would guess
what they were. The pair were placed in adjacent rooms, unable to see
each other, and an electric fan was used to prevent the pair from
communicating by sensory cues. Ownbey tapped a telegraph key to Zirkle
to inform him when she was trying to send him a symbol. The door
separating the two rooms was open during the experiment, and after each
guess, Zirkle would call out his guess to Ownbey, who recorded his
choice. Critics pointed out the experiment was flawed as Ownbey acted as
both the sender and the experimenter; nobody controlled the experiment,
so Ownbey could have cheated by communicating with Zirkle or made
recording mistakes.
The Turner-Ownbey long-distance telepathy
experiment was also flawed. May Frances Turner positioned herself in
the Duke Parapsychology Laboratory, while Sara Ownbey claimed to receive
transmissions 250 miles away. For the experiment, Turner would think of
a symbol and write it down, while Ownbey would write her guesses. The scores were highly successful, and both records were supposed to be sent to J. B. Rhine; however, Ownbey sent them to Turner. Critics
pointed out this invalidated the results as she could have simply
written her own record to agree with the other. When the experiment was
repeated and the records were sent to Rhine, the scores dropped to
average.
Lucien Warner and Mildred Raible performed a famous ESP
experiment at Duke University. Warner and Raible locked a subject in a room with a switch controlling a signal light elsewhere, which she used to signal when to guess the card. Ten runs with ESP packs of cards were used,
and she achieved 93 hits (43 more than chance). Weaknesses with the
experiment were later discovered. The duration of the light signal could
be varied so that the subject could call for specific symbols. Certain
symbols in the experiment appeared far more often than others,
indicating poor shuffling or card manipulation. The experiment was not
repeated.
Duke's administration grew less sympathetic to parapsychology,
and after Rhine's retirement in 1965, parapsychological links with the
university were broken. Rhine later established the Foundation for
Research on the Nature of Man (FRNM) and the Institute for
Parapsychology as a successor to the Duke laboratory. In 1995, the centenary of Rhine's birth, the FRNM was renamed the Rhine Research Center.
Today, the Rhine Research Center is a parapsychology research unit,
stating that it "aims to improve the human condition by creating a
scientific understanding of those abilities and sensitivities that
appear to transcend the ordinary limits of space and time".
Establishment of the Parapsychological Association
The Parapsychological Association (PA) was created in Durham, North Carolina,
on June 19, 1957. J. B. Rhine proposed its formation at a
parapsychology workshop held at the Parapsychology Laboratory of Duke
University. Rhine proposed that the group form itself into the nucleus
of an international professional society in parapsychology. The aim of
the organization, as stated in its Constitution, became "to advance
parapsychology as a science, to disseminate knowledge of the field, and
to integrate the findings with those of other branches of science".
A later challenge by physicist John A. Wheeler to parapsychology's affiliation with the American Association for the Advancement of Science (AAAS) was unsuccessful. Today, the PA consists of about three hundred full, associate, and affiliated members worldwide.
Stargate Project
Beginning in the early 1950s, the CIA started extensive research into behavioral engineering. The findings from these experiments led to the formation of the Stargate Project, which handled ESP research for the U.S. federal government.
The Stargate Project was terminated in 1995 with the conclusion
that it was never useful in any intelligence operation. The information
was vague and included a lot of irrelevant and erroneous data. There was
also reason to suspect that the research managers had adjusted their
project reports to fit the known background cues.
1970s and 1980s
The
affiliation of the Parapsychological Association (PA) with the American
Association for the Advancement of Science, along with a general
openness to psychic and occult
phenomena in the 1970s, led to a decade of increased parapsychological
research. During this period, other related organizations were also
formed, including the Academy of Parapsychology and Medicine (1970), the
Institute of Parascience (1971), the Academy of Religion and Psychical
Research, the Institute of Noetic Sciences (1973), the International Kirlian Research Association (1975), and the Princeton Engineering Anomalies Research Laboratory (1979). Parapsychological work was also conducted at the Stanford Research Institute (SRI) during this time.
The surge in paranormal research continued into the 1980s: the
Parapsychological Association reported members working in more than 30
countries. For example, research was carried out and regular conferences
held in Eastern Europe and the former Soviet Union, although the word parapsychology was discarded in favor of the term psychotronics. While Soviet psychotronics included many fantastical and ineffectual methods, much like parapsychology, it enjoyed more comprehensive state backing and explored a wide variety of both scientific and pseudo-scientific means for influencing consciousness.
The main promoter of psychotronics was Czech scientist Zdeněk Rejdák, who described it as a physical science, organizing conferences and presiding over the International Association for Psychotronic Research.
In 1985, the Department of Psychology at the University of Edinburgh established a Chair of Parapsychology, awarding it to Robert Morris,
an experimental parapsychologist from the United States. Morris and his
research associates and PhD students pursued research on topics related
to parapsychology.
Since the 1980s, contemporary parapsychological research has waned considerably in the United States. Early research was considered inconclusive, and parapsychologists faced strong skepticism from their academic colleagues. Some effects thought to be paranormal, for example, the effects of Kirlian photography (thought by some to represent a human aura), disappeared under more stringent controls, leaving those avenues of research at dead-ends. Most parapsychology research in the US is now confined to private institutions funded by private sources. After 28 years of research, Princeton Engineering Anomalies Research Laboratory (PEAR), which studied psychokinesis, closed in 2007.
Over the last two decades, some new sources of funding for
parapsychology in Europe have seen a "substantial increase in European
parapsychological research so that the center of gravity for the field
has swung from the United States to Europe". The United Kingdom has the largest number of active parapsychologists of all nations. In the UK, researchers work in conventional psychology departments and
do studies in mainstream psychology to "boost their credibility and show
that their methods are sound". It is thought that this approach could
account for the relative strength of parapsychology in Britain.
Research and professional organizations include the Parapsychological Association; the Society for Psychical Research, publisher of the Journal of the Society for Psychical Research and Psi Encyclopedia; the American Society for Psychical Research, publisher of the Journal of the American Society for Psychical Research (last published in 2004); the Rhine Research Center and Institute for Parapsychology, publisher of the Journal of Parapsychology; the Parapsychology Foundation, which published the International Journal of Parapsychology (between 1959 and 1968 and 2000–2001) and the Australian Institute of Parapsychological Research, publisher of the Australian Journal of Parapsychology. The European Journal of Parapsychology ceased publishing in 2010.
Parapsychological research has also included other sub-disciplines of psychology. These related fields include transpersonal psychology, which studies transcendent or spiritual aspects of the human mind, and anomalistic psychology, which examines paranormal beliefs and subjective anomalous experiences in traditional psychological terms.
Research
Scope
Parapsychologists study some ostensible paranormal phenomena, including but not limited to:
Telepathy: Transfer of information (thoughts or feelings) between individuals by means other than the five classical senses.
Precognition: Perception of information about future places or events before they occur.
Clairvoyance: Obtaining information about places or events at remote locations by means unknown to current science.
Psychokinesis: The ability of the mind to influence matter, time, space, or energy by means unknown to current science.
Reincarnation: The rebirth of a soul or other non-physical aspect of human consciousness in a new physical body after death.
Apparitional experiences:
Phenomena often attributed to ghosts and encountered in places a
deceased individual is thought to have frequented or in association with
the person's former belongings.
The definitions for the terms above may not reflect their mainstream
usage nor the opinions of all parapsychologists and their critics.
The Ganzfeld (German for "whole field") is a technique used to test individuals for telepathy. The technique—a form of moderate sensory deprivation—was developed to quickly quiet mental "noise" by providing mild, unpatterned stimuli to the visual and auditory senses. The visual sense is usually isolated by creating a soft red glow which is diffused through half ping-pong balls placed over the recipient's eyes. The auditory sense is usually blocked by playing white noise,
static, or similar sounds to the recipient. The subject is also seated
in a reclined, comfortable position to minimize the sense of touch.
In the typical Ganzfeld experiment, a "sender" and a "receiver" are isolated. The receiver is put into the Ganzfeld state (or Ganzfeld effect),
and the sender is shown a video clip or still picture and asked to send
that image to the receiver mentally. While in the Ganzfeld,
experimenters ask the receiver to continuously speak aloud all mental
processes, including images, thoughts, and feelings. At the end of the
sending period, typically about 20 to 40 minutes, the receiver is taken
out of the Ganzfeld state and shown four images or videos, one of which
is the actual target and three non-target decoys. The receiver attempts
to select the target, using perceptions experienced during the Ganzfeld
state as clues to what the mentally "sent" image might have been.
Figure: a participant in a Ganzfeld experiment. Proponents say such experiments have shown evidence of telepathy, while critics like Ray Hyman have pointed out that they have not been independently replicated.
The Ganzfeld experiment studies that were examined by Ray Hyman and Charles Honorton
had methodological problems that were well documented. Honorton
reported only 36% of the studies used duplicate target sets of pictures
to avoid handling cues. Hyman discovered flaws in all of the 42 Ganzfeld experiments, and to
assess each experiment, he devised a set of 12 categories of flaws. Six
of these concerned statistical defects, and the other six covered
procedural flaws such as inadequate documentation, randomization, security, and possibilities of sensory leakage. Over half of the studies failed to safeguard against sensory leakage,
and all of the studies contained at least one of the 12 flaws. Because
of the flaws, Honorton agreed with Hyman the 42 Ganzfeld studies could
not support the claim for the existence of psi.[96]
Possibilities of sensory leakage in the Ganzfeld experiments included the receivers hearing what was going on in the sender's room next door, as the rooms were not soundproof, and the sender's fingerprints being visible on the target object for the receiver to see. Hyman reviewed the autoganzfeld experiments and discovered a pattern in
the data that implied a visual cue may have taken place. Hyman wrote
the autoganzfeld experiments were flawed because they did not preclude
the possibility of sensory leakage.
In 2010, Lance Storm, Patrizio Tressoldi, and Lorenzo Di Risio
analyzed 29 Ganzfeld studies from 1997 to 2008. Of the 1,498 trials, 483
produced hits, corresponding to a hit rate of 32.2%. This hit rate is statistically significant with p < .001.
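For readers who want to check that figure, a minimal sketch using SciPy's binomial test, assuming a four-choice design with a 25% chance hit rate, reproduces the claim:

```python
# Quick check of the quoted arithmetic (a sketch; requires SciPy).
from scipy.stats import binomtest

# 483 hits in 1,498 four-choice trials; chance hit rate 25%.
result = binomtest(483, 1498, p=0.25, alternative="greater")
print(483 / 1498)      # ~0.322, the 32.2% hit rate quoted above
print(result.pvalue)   # far below .001, consistent with the text
```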
Participants selected for personality traits and personal
characteristics thought to be psi-conducive were found to perform
significantly better than unselected participants in the Ganzfeld
condition. Hyman (2010) published a rebuttal to Storm et al.
According to Hyman, "Reliance on meta-analysis as the sole basis for
justifying the claim that an anomaly exists and that the evidence for it
is consistent and replicable is fallacious. It distorts what scientists
mean by confirmatory evidence." Hyman wrote that the Ganzfeld studies
were not independently replicated and failed to produce evidence for
psi. Storm et al.
published a response to Hyman stating that the Ganzfeld experimental
design has proved to be consistent and reliable, that parapsychology is a
struggling discipline that has not received much attention, and that
therefore further research on the subject is necessary.[93] Rouder et al. (2013) wrote that critical evaluation of Storm et al.'s meta-analysis reveals no evidence for psi and no plausible mechanism, and that it omitted replication failures.
Remote viewing is the practice of seeking impressions about a distant
or unseen target using subjective means, in particular, extrasensory
perception. A remote viewer is typically expected to give information
about an object, event, person, or location hidden from physical view
and separated at some distance. Several hundred such trials have been conducted by investigators over the past 25 years, including those by the Princeton Engineering Anomalies Research Laboratory (PEAR) and by scientists at SRI International and Science Applications International Corporation. Many of these were under contract by the U.S. government as part of the espionage program Stargate Project, which terminated in 1995 having failed to document any practical intelligence value.
The psychologists David Marks and Richard Kammann attempted to replicate Russell Targ and Harold Puthoff's
remote viewing experiments that were carried out in the 1970s at SRI
International. In a series of 35 studies, they could not replicate the
results, motivating them to investigate the procedure of the original
experiments. Marks and Kammann discovered that the notes given to the
judges in Targ and Puthoff's experiments contained clues as to the order
in which they were carried out, such as referring to yesterday's two
targets or having the session date written at the top of the page. They
concluded that these clues were the reason for the experiment's high hit
rates. Marks was able to achieve 100 percent accuracy without visiting any of the sites himself, simply by using the cues. James Randi conducted controlled tests in collaboration with several other researchers, eliminating several sources of cueing and extraneous evidence present in the original tests; Randi's controlled tests produced negative results. Students could also identify Puthoff and Targ's locations from the cues included in the transcripts.
In 1980, Charles Tart claimed that rejudging the transcripts from one of Targ and Puthoff's experiments revealed an above-chance result. Targ and Puthoff again refused to provide copies of the transcripts and
it was not until July 1985 that they were made available for study,
when it was discovered they still contained sensory cues. Marks and Christopher Scott (1986) wrote, "Considering the importance
for the remote viewing hypothesis of adequate cue removal, Tart's
failure to perform this basic task seems beyond comprehension. As
previously concluded, remote viewing has not been demonstrated in the
experiments conducted by Puthoff and Targ, only the repeated failure of
the investigators to remove sensory cues."
PEAR closed its doors at the end of February 2007. Its founder, Robert G. Jahn, said of it, "For 28 years, we've done what we wanted to do, and there's no reason to stay and generate more of the same data." Statistical flaws in his work have been proposed by others in the parapsychological and general scientific communities. The physicist Robert L. Park said of PEAR, "It's been an embarrassment to science, and I think an embarrassment for Princeton".
The advent of powerful and inexpensive electronic and computer
technologies has allowed the development of fully automated experiments
studying possible interactions between mind and matter. In the most common experiment of this type, a random number generator (RNG), based on electronic or radioactive noise, produces a data stream that is recorded and analyzed by computer software.
A subject attempts to mentally alter the distribution of the random
numbers, usually in an experimental design that is functionally
equivalent to getting more "heads" than "tails" while flipping a coin.
In the RNG experiment, design flexibility can be combined with rigorous
controls while collecting a large amount of data quickly. This
technique has been used both to test individuals for psychokinesis and
to test the possible influence on RNGs of large groups of people.
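A minimal sketch of the statistics behind such an experiment, assuming the usual design in which each trial yields one random bit and the null hypothesis is an unbiased source (the session length and helper names are illustrative):

```python
# Illustrative RNG-experiment analysis: test whether a recorded bit stream
# deviates from the 50/50 split expected of an unbiased generator.
import math
import random

def z_score(ones: int, n: int) -> float:
    """Normal-approximation z-score for 'ones' successes in n fair-bit trials."""
    mean = n / 2
    sd = math.sqrt(n * 0.25)
    return (ones - mean) / sd

# Simulated session: an unbiased software source stands in for the hardware RNG.
n = 100_000
bits = [random.getrandbits(1) for _ in range(n)]
z = z_score(sum(bits), n)
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value, normal approximation
print(f"z = {z:+.2f}, two-sided p = {p:.3f}")
```

The appeal of the design is clear from the sketch: trials are cheap, the null hypothesis is sharp, and the analysis reduces to a single deviation score.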
Major meta-analyses of the RNG database have been published every few years since the first one appeared in the journal Foundations of Physics in 1986. PEAR founder Robert G. Jahn
and his colleague Brenda Dunne say that the experiments produced "a
very small effect" not significant enough to be observed over a brief
experiment but over a large number of trials resulted in a tiny
statistical deviation from chance. According to Massimo Pigliucci,
the results from PEAR can be explained without invoking the paranormal
because of two problems with the experiment: "the difficulty of
designing machines capable of generating truly random events and the
fact that statistical "significance" is not at all a good measure of the
importance or genuineness of a phenomenon." Pigliucci has written that the statistical analysis used by Jahn and the PEAR group relied on a quantity called a "p-value",
but a problem with p-values is that if the sample size (number of
trials) is very large, like the PEAR tests, then one is guaranteed to
find artificially low p-values indicating a statistically significant
result even though nothing was occurring other than small biases in the
experimental apparatus.
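Pigliucci's large-sample point can be made concrete with a hypothetical simulation: give the generator a tiny fixed bias of the kind a slightly imperfect apparatus might introduce, and the p-value collapses toward zero as the number of trials grows, even though the effect size never stops being negligible.

```python
# Illustration of the large-sample p-value problem: a generator with a small
# fixed bias (here 50.05% ones) becomes 'significant' once n is huge.
import math
import numpy as np

rng = np.random.default_rng(0)
bias = 0.5005  # a 0.05-percentage-point apparatus bias, practically meaningless

for n in (10_000, 1_000_000, 100_000_000):
    ones = int(rng.binomial(n, bias))
    z = (ones - n / 2) / math.sqrt(n * 0.25)
    p = math.erfc(z / math.sqrt(2)) / 2  # one-sided p under a fair source
    print(f"n = {n:>11,}  rate = {ones / n:.5f}  one-sided p = {p:.2e}")
```

At ten thousand trials the bias is invisible; at a hundred million it produces an astronomically small p-value, despite the hit rate never moving past about 50.05%.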
Two independent German scientific groups have failed to replicate the PEAR results. Pigliucci has written this was "yet another indication that the
simplest hypothesis is likely to be true: there was nothing to
replicate." The most recent meta-analysis on psychokinesis was published in Psychological Bulletin,
along with several critical commentaries. It analyzed the results of
380 studies; the authors reported an overall positive effect size that
was statistically significant but very small relative to the sample size
and could, in principle, be explained by publication bias.
Direct mental interactions with living systems
Formerly
called bio-PK, "direct mental interactions with living systems" (DMILS)
studies the effects of one person's intentions on a distant person's psychophysiological state. One type of DMILS experiment looks at the commonly reported "feeling of
being stared at". The "starer" and the "staree" are isolated in
different locations, and the starer is periodically asked to simply gaze
at the staree via closed-circuit video links. Meanwhile, the staree's
nervous system activity is automatically and continuously monitored.
Parapsychologists have interpreted the cumulative data on this
and similar DMILS experiments to suggest that one person's attention
directed towards a remote, isolated person can significantly activate or
calm that person's nervous system. In a meta-analysis of these experiments published in the British Journal of Psychology
in 2004, researchers found a small but significant overall DMILS
effect. However, the study also found that the effect size was
insignificant when a small number of the highest-quality studies from
one laboratory were analyzed. The authors concluded that although the
existence of some anomaly related to distant intentions cannot be ruled
out, there was also a shortage of independent replications and
theoretical concepts.
Parapsychological studies into dream telepathy were carried out at the Maimonides Medical Center in Brooklyn, New York led by Stanley Krippner and Montague Ullman. They concluded the results from some of their experiments supported dream telepathy. However, the results have not been independently replicated.
The picture target experiments that Krippner and Ullman conducted were criticized by C. E. M. Hansel.
According to Hansel, there were weaknesses in the design of the
experiments in the way in which the agents became aware of their target
picture. Only the agent should have known the target, and no other
person should have known until the targets were judged; however, an
experimenter was with the agent when the target envelope was opened.
Hansel also wrote that the experiment had poor controls as the main
experimenter could communicate with the subject. In 2002, Krippner denied Hansel's accusations, claiming the agent did not communicate with the experimenter.
Edward Belvedere and David Foulkes attempted to replicate the
picture-target experiments. The finding was that neither the subject nor
the judges matched the targets with dreams above chance level. Results from other experiments by Belvedere and Foulkes were also negative.
In 2003, Simon Sherwood and Chris Roe wrote a review that claimed support for dream telepathy at Maimonides. However, James Alcock
noted that their review was based on data of "extreme messiness".
Alcock concluded the dream telepathy experiments at Maimonides have
failed to provide evidence for telepathy and "lack of replication is
rampant".
Ascent of the Blessed by Hieronymus Bosch (after 1490) depicts a tunnel of light and spiritual figures similar to those reported by near-death experiencers.
A near-death experience (NDE) is an experience reported by a person who nearly died, or who experienced clinical death and then revived. NDEs include one or more of the following experiences: a sense of being dead; an out-of-body experience;
a sensation of floating above one's body and seeing the surrounding
area; a sense of overwhelming love and peace; a sensation of moving
upwards through a tunnel or narrow passageway; meeting deceased
relatives or spiritual figures; encountering a being of light, or a
light; experiencing a life review; reaching a border or boundary; and a feeling of being returned to the body, often accompanied by reluctance.
Interest in the NDE was spurred initially by the research of psychiatrists Elisabeth Kübler-Ross, George G. Ritchie, and Raymond Moody. In 1975, Moody wrote the best-selling book Life After Life and in 1977, he wrote a second book, Reflections on Life After Life. In 1998, Moody was appointed chair in "consciousness studies" at the University of Nevada, Las Vegas. The International Association for Near-death Studies
(IANDS) was founded in 1978 to meet the needs of early researchers and
experiencers within this field of research. Later researchers, such as
psychiatrist Bruce Greyson, psychologist Kenneth Ring, and cardiologist Michael Sabom, introduced the study of near-death experiences to the academic setting.
Psychiatrist Ian Stevenson, from the University of Virginia,
conducted more than 2,500 case studies over 40 years and published
twelve books. He wrote that childhood memories ostensibly related to reincarnation
normally occurred between the ages of three and seven years and then
faded shortly afterward. He compared the memories with reports of people
known to the deceased, attempting to do so before any contact between
the child and the deceased's family had occurred, and searched for disconfirming evidence that could provide alternative explanations for the reports aside from reincarnation.
Some 35 percent of the subjects examined by Stevenson had
birthmarks or congenital disabilities. Stevenson believed that the
existence of birthmarks and deformities on children, when they occurred
at the location of fatal wounds in the deceased, provided the best
evidence for reincarnation. However, Stevenson has never claimed that he had proved the existence
of reincarnation, and cautiously referred to his cases as being "of the
reincarnation type" or "suggestive of reincarnation". Researchers who believe in the evidence for reincarnation have been unsuccessful in getting the scientific community to consider it a serious possibility.
Ian Wilson argued that a large number of Stevenson's cases consisted of poor children remembering wealthy lives or belonging to a higher caste. He speculated that such cases may represent a scheme to obtain money from the family of the alleged former incarnation. Philosopher Keith Augustine has written, "The fact that the vast majority of
Stevenson's cases come from countries where a religious belief in
reincarnation is strong, and rarely elsewhere, seems to indicate that
cultural conditioning (rather than reincarnation) generates claims of
spontaneous past-life memories." Philosopher Paul Edwards wrote that reincarnation invokes logically dubious assumptions and is inconsistent with modern science.
Scientific reception
James Alcock is a notable critic of parapsychology.
Evaluation
The scientific consensus is that there is insufficient evidence to support the existence of psi phenomena.
Scientists critical of parapsychology state that its
extraordinary claims demand extraordinary evidence if they are to be
taken seriously. Scientists who have evaluated parapsychology have written the entire body of evidence is of poor quality and not adequately controlled. In support of this view, critics cite instances of fraud, flawed studies, and cognitive biases (such as clustering illusion, availability error, confirmation bias, illusion of control, magical thinking, and the bias blind spot) as ways to explain parapsychological results. Research has also shown that people's desire to believe in paranormal
phenomena causes them to discount strong evidence that such phenomena
do not exist.
The psychologists Donovan Rawcliffe (1952), C. E. M. Hansel (1980), Ray Hyman
(1989), and Andrew Neher (2011) have studied the history of psi
experiments from the late 19th century up until the 1980s. Flaws and
weaknesses were discovered in every experiment investigated, so the
possibility of sensory leakage and trickery was not ruled out. The data from the Creery sisters and the Soal-Goldney experiments were proven to be fraudulent, one of the subjects from the Smith-Blackburn experiments confessed to fraud, and the Brugmans experiment, the experiments by John Edgar Coover, and those conducted by Joseph Gaither Pratt and Helmut Schmidt
had flaws in the design of the experiments, did not rule out the
possibility of sensory cues or trickery and have not been replicated.
According to critics, psi is negatively defined as any effect
that cannot be currently explained in terms of chance or normal causes,
and this is a fallacy as it encourages parapsychologists to use any
peculiarity in the data as a characteristic of psi. Parapsychologists have admitted it is impossible to eliminate the
possibility of non-paranormal causes in their experiments. There is no
independent method to indicate the presence or absence of psi. Persi Diaconis
has written that the controls in parapsychological experiments are
often loose with possibilities of subject cheating and unconscious
sensory cues.
In 1998, physics professor Michael W. Friedlander
noted that parapsychology has "failed to produce any clear evidence for
the existence of anomalous effects that require us to go beyond the
known region of science." Philosopher and skeptic Robert Todd Carroll
has written research in parapsychology has been characterized by
"deception, fraud, and incompetence in setting up properly controlled
experiments and evaluating statistical data." The psychologist Ray Hyman
has pointed out that some parapsychologists such as Dick Bierman,
Walter Lucadou, J. E. Kennedy, and Robert Jahn have admitted the
evidence for psi is "inconsistent, irreproducible, and fails to meet
acceptable scientific standards." Richard Wiseman
has criticized the parapsychological community for widespread errors in
research methods including cherry-picking new procedures which may
produce preferred results, explaining away unsuccessful attempted
replications with claims of an "experimenter effect", data mining, and retrospective data selection.
Independent evaluators and researchers dispute the existence of
parapsychological phenomena and the scientific validity of
parapsychological research. In 1988, the U.S. National Academy of Sciences
published a report on the subject that concluded that there is "no scientific
justification from research conducted over a period of 130 years for the
existence of parapsychological phenomena." No accepted theory
of parapsychology currently exists, and many competing and often
conflicting models have been advocated by different parapsychologists in
an attempt to explain reported paranormal phenomena. Terence Hines in his book Pseudoscience and the Paranormal
(2003), wrote, "Many theories have been proposed by parapsychologists
to explain how psi takes place. To skeptics, such theory building seems
premature, as the phenomena to be explained by the theories have yet to
be demonstrated convincingly." Skeptics such as Antony Flew have cited the lack of such a theory as their reason for rejecting parapsychology.
In a review of parapsychological reports, Hyman wrote, "randomization is often inadequate, multiple statistical testing without adjustment for significance levels is prevalent, possibilities for sensory leakage are not uniformly prevented, errors in use of statistical tests are much too common, and documentation is typically inadequate". Parapsychology has been criticized for making no precise predictions.
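Hyman's point about unadjusted multiple testing is quantitative: under a true null, the probability of at least one nominally "significant" result grows rapidly with the number of tests performed. A worked illustration, with an arbitrary count of 20 tests:

```python
# Family-wise error rate without adjustment: chance of at least one false
# positive among k independent tests, each run at alpha = 0.05, on null data.
alpha, k = 0.05, 20
fwer = 1 - (1 - alpha) ** k
print(f"P(at least one 'significant' result | true null) = {fwer:.2f}")  # ~0.64
# A Bonferroni correction (testing each at alpha / k) restores the 5% level.
```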
Ray Hyman (standing), Lee Ross, Daryl Bem and Victor Benassi at the 1983 CSICOP Conference in Buffalo, New York
In 2003, James Alcock, Professor of Psychology at York University, published Give the Null Hypothesis a Chance: Reasons to Remain Doubtful about the Existence of Psi,
where he claimed that parapsychologists never seem to take seriously
the possibility that psi does not exist. Because of that, they interpret
null results as indicating only that they were unable to observe psi in
a particular experiment rather than taking it as support for the
possibility that there is no psi. The failure to take the null hypothesis
as a serious alternative to their psi hypotheses leads them to rely
upon many arbitrary "effects" to excuse failures to find predicted
effects, excuse the lack of consistency in outcomes, and excuse failures
to replicate.
Fundamental endemic problems in parapsychological research
include, amongst others: insufficient definition of the subject matter,
total reliance on negative definitions of their phenomena (e.g., psi is
said to occur only when all known normal influences are ruled out);
failure to produce a single phenomenon that neutral researchers can
independently replicate; the invention of "effects" such as the
psi-experimenter effect to explain away inconsistencies in the data and
failures to achieve predicted outcomes; unfalsifiability
of claims; the unpredictability of effects; lack of progress in over a
century of formal research; methodological weaknesses; reliance on
statistical procedures to determine when psi has supposedly occurred,
even though statistical analysis does not in itself justify a claim that
psi has occurred; and failure to jibe with other areas of science.
Overall, he argues that there is nothing in parapsychological research
that would ever lead parapsychologists to conclude that psi does not
exist. So, even if it does not, the search will likely continue for a
long time. "I continue to believe that parapsychology is, at bottom,
motivated by belief in search of data, rather than data in search of
explanation."
Alcock and cognitive psychologist Arthur S. Reber
have criticized parapsychology broadly, writing that if psi effects
were true, they would negate fundamental principles of science such as causality, time's arrow, thermodynamics, and the inverse square law.
According to Alcock and Reber, "parapsychology cannot be true unless
the rest of science isn't. Moreover, if psi effects were real, they
would have already fatally disrupted the rest of the body of science".
Richard Land has written that from what is known about human biology, it is implausible that evolution has provided humans with ESP as research has shown the recognized five senses are adequate for the evolution and survival of the species. Michael Shermer, in the article "Psychic Drift: Why most scientists do not believe in ESP and psi phenomena" for Scientific American,
wrote "the reason for skepticism is that we need replicable data and a
viable theory, both of which are missing in psi research."
In January 2008, the results of a study using neuroimaging
were published. To provide what are purported to be the most favorable
experimental conditions, the study included appropriate emotional
stimuli and had biologically or emotionally related participants, such
as twins. The experiment was designed to produce positive results if telepathy, clairvoyance or precognition
occurred. Despite this, no distinguishable neuronal responses
were found between psychic and non-psychic stimuli, while variations in
the same stimuli showed anticipated effects on brain activation
patterns. The researchers concluded, "These findings are the strongest
evidence yet obtained against the existence of paranormal mental
phenomena." Other studies have attempted to test the psi hypothesis by using
functional neuroimaging. A neuroscience review of the studies (Acunzo et al. 2013) discovered methodological weaknesses that could account for the reported psi effects.
A 2014 study found that schizophrenic patients show a stronger belief in psi than healthy adults.
Some researchers have become skeptical of parapsychology, such as Susan Blackmore and John Taylor, after years of study and no progress in demonstrating the existence of psi by the scientific method.
On the subject of psychokinesis, the physicist Sean M. Carroll has written that both human brains and the spoons they try to bend are made, like all matter, of quarks and leptons;
everything else they do emerges as properties of the behavior of quarks
and leptons. The quarks and leptons interact through the four forces:
strong, weak, electromagnetic, and gravitational. Thus any psychokinetic
influence would have to be carried either by one of the four known forces
or by a new force, and any new force with a range over 1 millimeter must
be at most a billionth the strength of gravity, or it would already have
been detected in experiments that have been done. This
leaves no physical force that could account for psychokinesis.
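Limits of this kind are conventionally expressed through a Yukawa-type modification of the Newtonian potential; the parametrization below is the standard one used in fifth-force searches, with the generic strength and range symbols α and λ, and the bound shown simply restates the text's "billionth the strength of gravity" figure.

```latex
% Yukawa-type parametrization used in fifth-force searches: a new force of
% strength \alpha (relative to gravity) and range \lambda modifies the
% Newtonian potential between masses m_1 and m_2 as
\[
  V(r) = -\frac{G m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right).
\]
% The bound quoted in the text corresponds to \alpha \lesssim 10^{-9}
% for ranges \lambda \gtrsim 1\,\mathrm{mm}.
```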
Physicist John G. Taylor,
who investigated parapsychological claims, wrote that an unknown fifth
force causing psychokinesis would have to transmit a great deal of
energy. The energy would have to overcome the electromagnetic forces
binding the atoms together. The atoms would need to respond more
strongly to the fifth force while it is operative than to electric
forces. Therefore, such an additional force between atoms should exist
all the time and not only during alleged paranormal occurrences. Taylor
wrote there is no scientific trace of such a force in physics, down to
many orders of magnitude; thus, if a scientific viewpoint is to be
preserved, the idea of any fifth force must be discarded. Taylor
concluded there is no possible physical mechanism for psychokinesis, and
it is in complete contradiction to established science.
Felix Planer, a professor of electrical engineering,
has written that if psychokinesis were real, then it would be easy to
demonstrate by getting subjects to depress a scale on a sensitive
balance, raise the temperature of a water bath, which could be measured
with an accuracy of a hundredth of a degree Celsius,
or affect an element in an electrical circuit such as a resistor, which
could be monitored to better than a millionth of an ampere. Planer writes that such experiments are extremely sensitive and easy to
monitor but are not utilized by parapsychologists as they "do not hold
out the remotest hope of demonstrating even a minute trace of PK"
because the alleged phenomenon is non-existent. Planer has written that
parapsychologists fall back on studies that involve only unrepeatable
statistics, owing their results to poor experimental methods, recording
mistakes, and faulty statistical mathematics.
According to Planer, "all research in medicine and other sciences
would become illusionary, if the existence of PK had to be taken
seriously; for no experiment could be relied upon to furnish objective
results, since all measurements would become falsified to a greater or
lesser degree, according to his PK ability, by the experimenter's
wishes." Planer concluded the concept of psychokinesis is absurd and has
no scientific basis.
Philosopher and physicist Mario Bunge
has written that "psychokinesis, or PK, violates the principle that
mind cannot act directly on matter. (If it did, no experimenter could
trust his readings of measuring instruments.) It also violates the
principles of conservation of energy and momentum. The claim that
quantum mechanics allows for the possibility of mental power influencing
randomizers—an alleged case of micro-PK—is ludicrous since that theory
respects the said conservation principles, and it deals exclusively with
physical things."
The physicist Robert L. Park
questioned why, if the mind really could influence matter, parapsychologists
do not simply measure such a phenomenon by using the
alleged psychokinetic power to deflect a microbalance,
which would not require any dubious statistics; "the reason, of
course, is that the microbalance stubbornly refuses to budge." Park has suggested the reason statistical studies are so popular in
parapsychology is because they introduce opportunities for uncertainty
and error, which are used to support the biases of the experimenter.
Park wrote, "No proof of psychic phenomena is ever found. In spite of
all the tests devised by parapsychologists like Jahn and Radin,
and huge amounts of data collected over a period of many years, the
results are no more convincing today than when they began their
experiments."
Parapsychological theories are viewed as pseudoscientific by the
scientific community because they are incompatible with well-established laws of science. As there is no repeatable evidence for psi, the field is often regarded as a pseudoscience.
The philosopher Raimo Tuomela
summarized why the majority of scientists consider parapsychology to be
a pseudoscience in his essay "Science, Protoscience, and
Pseudoscience".
Parapsychology relies on an ill-defined ontology and typically shuns exact thinking.
The hypotheses and theories of parapsychology have not been proven and are in bad shape.
Extremely little progress has taken place in parapsychology on the whole and parapsychology conflicts with established science.
Parapsychology has poor research problems, being concerned with
establishing the existence of its subject matter and having practically
no theories to create proper research problems.
While in parts of parapsychology there are attempts to use the
methods of science there are also unscientific areas; and in any case
parapsychological research can at best qualify as prescientific because
of its poor theoretical foundation.
Parapsychology is a largely isolated research area.
The methods of parapsychologists are regarded by critics, including those who wrote the science standards for the California State Board of Education, as pseudoscientific. Some of the more specific criticisms state that parapsychology does not
have a clearly defined subject matter, an easily repeatable experiment
that can demonstrate a psi effect on demand, nor an underlying theory to
explain the paranormal transfer of information. James Alcock
has stated that few of parapsychology's experimental results have
prompted interdisciplinary research with more mainstream sciences such
as physics or biology and that parapsychology remains an isolated
science to such an extent that its very legitimacy is questionable, and as a whole is not justified in being labeled "scientific". Alcock wrote, "Parapsychology is indistinguishable from pseudo-science,
and its ideas are essentially those of magic... There is no
evidence that would lead the cautious observer to believe that
parapsychologists and paraphysicists are on the track of a real
phenomenon, a real energy or power that has so far escaped the attention
of those people engaged in "normal" science."
The scientific community considers parapsychology a pseudoscience
because it continues to explore the hypothesis that psychic abilities
exist despite a century of experimental results that fail to demonstrate
that hypothesis conclusively. A panel commissioned by the United States National Research Council
to study paranormal claims concluded that "despite a 130-year record of
scientific research on such matters, our committee could find no
scientific justification for the existence of phenomena such as
extrasensory perception, mental telepathy or 'mind over matter'
exercises... Evaluation of a large body of the best available evidence
simply does not support the contention that these phenomena exist."
There is also an issue of non-falsifiability associated with psi. On this subject Terence Hines has written:
The most common rationale offered
by parapsychologists to explain the lack of a repeatable demonstration
of ESP or other psi phenomena is to say that ESP in particular and psi
phenomena in general are elusive or jealous phenomena. This means the
phenomena go away when a skeptic is present or when skeptical
"vibrations" are present. This argument seems nicely to explain away
some of the major problems facing parapsychology until it is realized
that it is nothing more than a classic nonfalsifiable hypothesis... The
use of the nonfalsifiable hypothesis is permitted in parapsychology to a
degree unheard of in any scientific discipline. To the extent that
investigators accept this type of hypothesis, they will be immune to
having their belief in psi disproved. No matter how many experiments
fail to provide evidence for psi and no matter how good those
experiments are, the nonfalsifiable hypothesis will always protect the
belief.
Mario Bunge
has written that research in parapsychology for over a hundred years
has produced no firm findings or testable predictions. All
parapsychologists can do is claim alleged data is anomalous and beyond
the reach of ordinary science. The aim of parapsychologists "is not that
of finding laws and systematizing them into theories in order to
understand and forecast" but to "buttress ancient spiritualist myths or
to serve as a surrogate for lost religions."
The psychologist David Marks has written that parapsychologists have failed to produce a single repeatable demonstration of the paranormal
and described psychical research as a pseudoscience, an "incoherent
collection of belief systems steeped in fantasy, illusion and error."[205] However, Chris French,
who is not convinced that parapsychology has demonstrated evidence for
psi, has argued that parapsychological experiments still adhere to the
scientific method and should not be completely dismissed as
pseudoscience. "Sceptics like myself will often point out that there's
been systematic research in parapsychology for well over a century, and
so far the wider scientific community is not convinced." French has noted his position is "the minority view among critics of parapsychology".
Philosopher Bradley Dowden
characterized parapsychology as a pseudoscience because
parapsychologists have no valid theories to test or reproducible data
from their experiments.
There have been instances of fraud in the history of parapsychology research. In the late 19th century, the Creery Sisters (Mary, Alice, Maud, Kathleen, and Emily) were tested by the Society for Psychical Research
which believed them to have genuine psychic ability; however, during a
later experiment they were caught utilizing signal codes and they
confessed to fraud. George Albert Smith and Douglas Blackburn were claimed to be genuine psychics by the Society for Psychical Research, but Blackburn confessed to fraud:
For nearly thirty years the
telepathic experiments conducted by Mr. G. A. Smith and myself have been
accepted and cited as the basic evidence of the truth of thought
transference...
...the whole of those alleged experiments were bogus, and originated in
the honest desire of two youths to show how easily men of scientific
mind and training could be deceived when seeking for evidence in support
of a theory they were wishful to establish.
The experiments of Samuel Soal and K. M. Goldney
of 1941–1943 (suggesting the precognitive ability of a single
participant) were long regarded as some of the best in the field because
they relied upon independent checking and witnesses to prevent fraud.
However, many years later, statistical evidence, uncovered and published
by other parapsychologists in the field, suggested that Soal had
cheated by altering some of the raw data.
In 1974, many experiments by Walter J. Levy, J. B. Rhine's
successor as director of the Institute for Parapsychology, were exposed
as fraudulent. Levy had reported on a series of successful ESP experiments involving
computer-controlled manipulation of non-human subjects, including rats.
His experiments showed very high positive results. However, Levy's
fellow researchers became suspicious about his methods. They found that
Levy interfered with data-recording equipment, manually creating
fraudulent strings of positive results. Levy confessed to the fraud and
resigned.
In 1974, Rhine published the paper Security versus Deception in Parapsychology in the Journal of Parapsychology,
which documented 12 cases of fraud that he had detected from 1940 to
1950 but refused to give the names of the participants in the studies. Massimo Pigliucci has written:
Most damning of all, Rhine admitted publicly that he had
uncovered at least twelve instances of dishonesty among his researchers
in a single decade, from 1940 to 1950. However, he flaunted standard
academic protocol by refusing to divulge the names of the fraudsters,
which means that there is an unknown number of published papers in the
literature that claim paranormal effects while in fact they were the
result of conscious deception.
Martin Gardner claimed to have inside information that files in Rhine's laboratory contain material suggesting fraud on the part of Hubert Pearce. Pearce was never able to obtain above-chance results when persons other
than the experimenter were present during an experiment, making it more
likely that he was cheating in some way. Rhine's other subjects could
only obtain non-chance levels when they could shuffle the cards, which
suggested they used tricks to arrange the order of the Zener cards before the experiments started.
A researcher from Tarkio College in Missouri, James D. MacFarland, was suspected of falsifying data to achieve positive psi results. Before the fraud was discovered, MacFarland published two articles in the Journal of Parapsychology (1937 & 1938) supporting the existence of ESP. Presumably speaking about MacFarland, Louisa Rhine wrote that in
reviewing the data submitted to the lab in 1938, the researchers at the
Duke Parapsychology Lab recognized the fraud. "...before long they were
all certain that Jim had consistently falsified his records... To
produce extra hits, Jim had to resort to erasures and transpositions in
the records of his call series." MacFarland never published another article in the Journal of Parapsychology after the fraud was discovered.
Some instances of fraud amongst spiritualist mediums were exposed by early psychical researchers such as Richard Hodgson and Harry Price. In the 1920s, magician and escapologist Harry Houdini said that researchers and observers had not created experimental procedures that preclude fraud.
Criticism of experimental results
Critical analysts, including some parapsychologists, are dissatisfied with experimental parapsychology studies. Some reviewers, such as psychologist Ray Hyman,
contend that apparently successful experimental results in psi research
are more likely due to sloppy procedures, poorly trained researchers,
or methodological flaws rather than to genuine psi effects. Fellow psychologist Stuart Vyse points to data manipulation, now recognized as "p-hacking", as part of the issue. Within parapsychology there are disagreements over the results and
methodology as well. For example, the experiments at the PEAR laboratory
were criticized in a paper published by the Journal of Parapsychology
in which parapsychologists independent from the PEAR laboratory
concluded that these experiments "depart[ed] from criteria usually
expected in formal scientific experimentation" due to "[p]roblems with
regard to randomization, statistical baselines, application of
statistical models, agent coding of descriptor lists, feedback to
percipients, sensory cues, and precautions against cheating." They felt
that the originally stated significance values were "meaningless".
A typical measure of psi phenomena is a statistical deviation
from chance expectation. However, critics point out that statistical
deviation is, strictly speaking, only evidence of a statistical anomaly,
and the cause of the deviation is not known. Hyman contends that even
if psi experiments that regularly reproduce similar deviations from
chance could be designed, they would not necessarily prove psychic
functioning. Critics have coined the term The Psi Assumption
to describe "the assumption that any significant departure from the
laws of chance in a test of psychic ability is evidence that something
anomalous or paranormal has occurred...[in other words] assuming what
they should be proving." These critics hold that concluding the
existence of psychic phenomena based on chance deviation in inadequately
designed experiments is affirming the consequent or begging the question.
In 1979, magician and debunker James Randi engineered a hoax, now referred to as Project Alpha,
to encourage a tightening of standards within the parapsychology
community. Randi recruited two young magicians and sent them undercover
to Washington University's
McDonnell Laboratory, where they "fooled researchers ... into believing
they had paranormal powers." The aim was to expose poor experimental
methods and the credulity thought to be common in parapsychology. Randi has stated that both of his recruits deceived experimenters for
three years with demonstrations of supposedly psychic abilities: blowing
electric fuses sealed in a box, causing a lightweight paper rotor
perched atop a needle to turn inside a bell jar, bending metal spoons
sealed in a glass bottle, etc. The hoax by Randi raised ethical concerns in the scientific and
parapsychology communities, eliciting criticism even among skeptical
communities such as the Committee for the Scientific Investigation of
Claims of the Paranormal (CSICOP), which he helped found, but also
positive responses from the President of the Parapsychological
Association Stanley Krippner. Psychologist Ray Hyman, a CSICOP member,
called the results "counterproductive".
Selection bias and meta-analysis
Selective reporting
has been offered by critics as an explanation for the positive results
reported by parapsychologists. Selective reporting is sometimes called a
"file drawer" problem, which arises when only positive study results
are made public, while studies with negative or null results are not
made public. Selective reporting has a compounded effect on meta-analysis, which is a statistical technique that aggregates the results of many studies to generate sufficient statistical power to demonstrate a result that the individual studies themselves could not demonstrate at a statistically significant level. For example, a recent meta-analysis combined 380 studies on psychokinesis, including data from the PEAR lab. It concluded that, although there is a
statistically significant overall effect, it is inconsistent, and
relatively few negative studies would cancel it out. Consequently, biased publication of positive results could be the cause.
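The mechanics of the file-drawer problem can be illustrated with a small simulation: every "study" below draws from a true null, only those that happen to reach p < .05 are published, and naive pooling of the published studies manufactures a positive overall effect. All numbers are illustrative and not drawn from the 380-study database.

```python
# File-drawer simulation: pooling only the studies that happened to reach
# significance produces a spurious meta-analytic effect from a true null.
import math
import numpy as np

rng = np.random.default_rng(1)
n_studies, n_trials, p_true = 500, 100, 0.5  # true state: pure chance

published = []
for _ in range(n_studies):
    hits = int(rng.binomial(n_trials, p_true))
    z = (hits - n_trials / 2) / math.sqrt(n_trials * 0.25)
    if math.erfc(z / math.sqrt(2)) / 2 < 0.05:  # only one-sided p < .05 'published'
        published.append(hits)

pooled = sum(published) / (len(published) * n_trials)
print(f"published {len(published)}/{n_studies} studies, "
      f"pooled hit rate = {pooled:.3f} (true rate 0.500)")
```

Roughly one study in twenty survives the filter, and the pooled hit rate of the survivors sits well above 50% even though nothing anomalous was simulated.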
Numerous researchers have criticized the popularity of meta-analysis in parapsychology, and its use is seen as troublesome even within the field. Critics have said that parapsychologists misuse meta-analysis to create
the incorrect impression that statistically significant results have
been obtained that indicate the existence of psi phenomena. Physicist Robert Park
states that parapsychology's reported positive results are problematic
because such findings are invariably at the margin of statistical
significance and might be explained by a number of confounding
effects; Park states that such marginal results are a typical symptom of
pathological science as described by Irving Langmuir.
Researcher J. E. Kennedy has said that concerns over
meta-analysis in science and medicine also apply to problems present in
parapsychological meta-analysis. As a post-hoc analysis,
critics emphasize the opportunity the method presents to produce biased
outcomes via selecting cases chosen for study, methods employed, and
other key criteria. Critics say that analogous problems with
meta-analysis have been documented in medicine, where it has been shown
different investigators performing meta-analyses of the same set of
studies have reached contradictory conclusions.
In anomalistic psychology, paranormal phenomena have naturalistic explanations resulting from psychological and physical factors, which have sometimes given the impression of paranormal activity to some people when, in fact, there has been none. According to the psychologist Chris French:
The difference between anomalistic
psychology and parapsychology is in terms of the aims of what each
discipline is about. Parapsychologists typically are actually searching
for evidence to prove the reality of paranormal forces, to prove they
really do exist. So the starting assumption is that paranormal things do
happen, whereas anomalistic psychologists tend to start from the
position that paranormal forces probably don't exist and that therefore
we should be looking for other kinds of explanations, in particular the
psychological explanations for those experiences that people typically
label as paranormal.
While parapsychology has declined, anomalistic psychology has risen.
It is now offered as an option in some psychology degree programs. It is
also an option on the A2 psychology syllabus in the UK.[242]