Hypothetical technology is technology that does not exist yet, but that could exist in the future. This article presents examples of technologies that have been hypothesized or proposed, but that have not been developed yet. An example of hypothetical technology is teleportation.
Artificial general intelligence
Artificial general intelligence (AGI) is a hypothetical artificial intelligence
that demonstrates a human-like ability to learn. AGI describes a machine that
could perform the full range of human activities with the efficiency of a machine. It is a
primary goal of artificial intelligence research and a common topic among science fiction writers and futurists. Artificial general intelligence is also referred to as strong AI, full AI, or an AI with the ability to perform "general intelligent action". AGI is associated with traits observed in living beings, such as consciousness, sentience, sapience, and self-awareness.
Mind uploading
Whole brain emulation (WBE) or mind uploading (sometimes called mind
copying or mind transfer) is the hypothetical process of copying mental
content (including long-term memory and "self") from a particular brain
substrate and copying it to a computational or storage device, such as a
digital, analog, quantum-based, or software-based artificial neural network. The computational device could then run a simulation
model of the brain information processing, such that it responds in
essentially the same way as the original brain (i.e., indistinguishable
from the brain for all relevant purposes) and experiences having a conscious mind.
Mind uploading may potentially be accomplished by at least two
methods: Copy-and-Transfer or Gradual Replacement of neurons. In the
former method, mind uploading would be achieved by scanning and mapping
the salient features of a biological brain, and then by copying,
transferring and storing that information state into a computer system
or another computational device. The simulated mind could be within a virtual reality or simulated world,
supported by an anatomic 3D body simulation model. Alternatively, the
simulated mind could reside in a computer inside (or connected
to) a humanoid robot or a biological body.
Space flight
There are many forms of spaceflight that have been proposed that have
not, so far, been developed but are thought to be possible. Some, like
the space elevator, are under active development. Others, like Project Orion,
a nuclear bomb propulsion system, are entirely paper exercises. As it
happens, Orion is thought to be entirely achievable with existing
technology (the obstacles to it are environmental and political rather
than technological), whereas the space elevator depends on the development of a material for the cable with a very high specific strength.
A space elevator is a proposed type of space transport system. Its main component is a ribbon-like cable (also called a tether)
starting at or near a planetary surface and extending into space. It is
designed to permit vehicle transport along the cable directly into
space or orbit without the use of large rockets.
An Earth-based space elevator would consist of a cable with one end
attached to the surface near the equator and the other end in space
beyond geostationary orbit
(35,800 km altitude). The competing forces of gravity, which are
stronger at the lower end, and the outward/upward centrifugal force,
which is stronger at the upper end, would result in the cable staying up
under tension, and stationary over a single position on Earth. Once
deployed, climbers would ascend the tether by mechanical means to reach
orbit, and descend it to return to the surface.
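The balance of forces described above fixes the altitude of the stationary point. A minimal sketch, using standard values for Earth's gravitational parameter and sidereal day, recovers the geostationary altitude quoted earlier:

```python
import math

# Known constants for Earth
GM = 3.986004418e14        # gravitational parameter, m^3/s^2
T_SIDEREAL = 86164.1       # sidereal day, s

omega = 2 * math.pi / T_SIDEREAL          # Earth's rotation rate, rad/s

# At the geostationary radius the inward gravitational acceleration
# GM/r^2 exactly balances the outward centrifugal term omega^2 * r,
# so r = (GM / omega^2)^(1/3).
r_geo = (GM / omega**2) ** (1.0 / 3.0)    # m, measured from Earth's centre
altitude_km = (r_geo - 6.378e6) / 1e3     # subtract the equatorial radius

print(f"geostationary altitude ≈ {altitude_km:,.0f} km")  # ≈ 35,786 km
```

Any point on the cable below this radius is pulled net-downward and any point above it net-outward, which is why the cable as a whole hangs in tension.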
On Earth, with its relatively strong gravity, current technology
cannot manufacture tether materials that are both strong and light enough to build a space elevator. However, recent concepts for a space elevator are notable for their plans to use carbon nanotube or boron nitride nanotube-based materials as the tensile element in the tether design.
The rotating skyhook, or momentum-exchange tether, is an idea related
to the space elevator concept. It is one of the many proposed
applications of space tethers,
which include some propulsion systems. The tether is rotated from a
heavy orbiting vehicle such that the far end, weighted with a docking
station, periodically enters Earth's atmosphere. With the right timing,
a fast aircraft can transfer cargo and passengers during the brief time
the skyhook is at the bottom of its cycle and stationary relative to
Earth's surface.
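The timing constraint can be made concrete with a rough calculation. The sketch below uses assumed figures (the 600 km orbital altitude and 100 km arm length are illustrative, not drawn from any specific proposal) to find the rotation rate at which the tip's rotational speed cancels the centre-of-mass orbital speed, making the tip momentarily stationary relative to the ground (Earth's own rotation is neglected):

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6        # equatorial radius, m

# Hypothetical tether: centre of mass in a 600 km circular orbit.
r_cm = R_EARTH + 600e3
v_orbit = math.sqrt(GM / r_cm)       # circular orbital speed, m/s

# Assumed arm length from centre of mass to the docking-station tip.
tether_arm = 100e3                   # m
omega = v_orbit / tether_arm         # rad/s so that tip speed = v_orbit
period_min = 2 * math.pi / omega / 60

print(f"orbital speed ≈ {v_orbit/1e3:.2f} km/s, "
      f"rotation period ≈ {period_min:.1f} min")
```

The short rotation period shows why the rendezvous window is brief: the tip is only near-stationary for a few seconds out of each spin.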
A light sail is a proposed propulsion system that uses the momentum
transferred to a sail by light impinging on it. A light sail could use
sunlight to achieve interplanetary travel without carrying large quantities of onboard fuel. Just as a sailboat on Earth can tack into the wind, the light sail can be tacked against the direction of light for a return journey from the outer planets.
At the beginning of the 21st century, light sails were still entirely hypothetical. The Japanese IKAROS spacecraft was launched in 2010 as a proof-of-concept mission for the light sail. It successfully completed a fly-by of Venus using a light sail as its main means of propulsion.
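The momentum transfer behind a light sail is easy to quantify in the ideal case of a perfectly reflecting sail, where each photon delivers twice its momentum. The sketch below uses approximate IKAROS figures (the sail area and mass are assumptions for illustration) and gives an ideal upper bound, ignoring reflectivity losses and sail angle:

```python
# Radiation-pressure force on an ideal, perfectly reflecting sail:
# F = 2 * P * A / c, where P is solar irradiance and A is sail area.
C = 2.998e8          # speed of light, m/s
P_SUN = 1361.0       # solar irradiance at 1 AU, W/m^2

# Approximate IKAROS figures (assumptions for illustration):
area = 196.0         # m^2 (14 m x 14 m membrane)
mass = 315.0         # kg

force = 2 * P_SUN * area / C        # N, ideal upper bound
accel = force / mass                # m/s^2

print(f"force ≈ {force*1e3:.2f} mN, acceleration ≈ {accel:.2e} m/s^2")
```

The resulting thrust is on the order of a millinewton: tiny, but continuous and fuel-free, which is what makes sails attractive for long interplanetary transfers.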
Evolutionary neuroscience
Evolutionary neuroscientists examine changes in genes, anatomy,
physiology, and behavior to study the evolution of the brain. They study a multitude of processes including the evolution of vocal, visual, auditory, taste, and learning systems as well as language evolution and development. In addition, evolutionary neuroscientists study the evolution of specific areas or structures in the brain such as the amygdala, forebrain, and cerebellum as well as the motor or visual cortex.
History
Studies of the brain began during ancient Egyptian times, but studies
in the field of evolutionary neuroscience began after the publication of
Darwin's On the Origin of Species in 1859. At that time, brain evolution was largely viewed in relation to the incorrect scala naturae:
phylogeny and the evolution of the brain were still viewed as linear.
During the early 20th century, there were several prevailing theories
about evolution. Darwinism was based on the principles of natural selection and variation, Lamarckism was based on the passing down of acquired traits, Orthogenesis was based on the assumption that tendency towards perfection steers evolution, and Saltationism
argued that discontinuous variation creates new species. Darwin's
theory became the most widely accepted and led people to start thinking
about the way animals and their brains evolve.
The 1936 book The Comparative Anatomy of the Nervous System of Vertebrates Including Man by the Dutch neurologist C.U. Ariëns Kappers (first published in German in 1921) was a landmark publication in the field. Following the Evolutionary Synthesis,
the study of comparative neuroanatomy was conducted with an
evolutionary view, and modern studies incorporate developmental
genetics. It is now accepted that phylogenetic changes occur independently
between species over time and are not linear. It is also believed
that an increase in brain size correlates with an increase in neural
centers and behavioral complexity.
Major arguments
Over time, there are several arguments that would come to define the
history of evolutionary neuroscience. The first is the argument between E.G. St. Hilaire and G. Cuvier over the topic of "common plan versus diversity". St. Hilaire argued that all animals are built based on a single plan or archetype and he stressed the importance of homologies
between organisms, while Cuvier believed that the structure of organs
was determined by their function and that knowledge of the function of
one organ could help discover the functions of other organs. He argued that there were at least four different archetypes. After
Darwin, the idea of evolution gained acceptance, and with it St. Hilaire's idea
of homologous structures. The second major argument is
that of Aristotle's scala naturae (scale of nature) and the great chain of being versus the phylogenetic bush. The scala naturae,
later also called the phylogenetic scale, was based on the premise that
phylogenies are linear or like a scale while the phylogenetic bush
argument was based on the idea that phylogenies were not linear, and
more resembled a bush – the currently accepted view. A third major
argument dealt with the size of the brain and whether relative size or
absolute size was more relevant in determining function. In the late
18th century, it was determined that the brain-to-body ratio decreases as body
size increases. More recently, however, the focus has shifted to absolute brain size, since this scales with internal structures and functions, with the degree of structural complexity, and with the amount of white matter
in the brain, all suggesting that absolute size is a much better
predictor of brain function. Finally, a fourth argument is that of
natural selection (Darwinism)
versus developmental constraints (concerted evolution). It is now
accepted that the evolution of development is what causes adult species
to show differences and evolutionary neuroscientists maintain that many
aspects of brain function and structure are conserved across species.
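The relative-versus-absolute size argument above can be illustrated numerically: under a power-law (allometric) scaling of brain mass with body mass, the brain-to-body ratio falls as body size grows even while absolute brain size increases. The exponent and coefficient below are illustrative assumptions, not measured values (an exponent near 0.75 is commonly cited for mammals):

```python
def brain_mass(body_mass, c=0.01, k=0.75):
    """Toy allometric law: brain = c * body**k (illustrative only)."""
    return c * body_mass ** k

# With k < 1, absolute brain mass grows with body mass while the
# brain-to-body ratio shrinks, as noted in the late-18th-century finding.
for body in (1.0, 100.0, 10000.0):   # arbitrary body masses
    brain = brain_mass(body)
    print(f"body {body:>8.0f}: brain {brain:8.3f}, ratio {brain/body:.5f}")
```

This is why comparisons based purely on ratio make large animals look "under-brained", motivating the modern emphasis on absolute size and internal structure.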
Techniques
Throughout its history, evolutionary neuroscience has depended on developments in biological theory and technique. The field has been shaped by the
development of new techniques that allow for the discovery and
examination of parts of the nervous system. In 1873, C. Golgi
devised the silver nitrate method which allowed for the description of
the brain at the cellular level rather than merely the gross level. Santiago
Ramón y Cajal and Pedro Ramón used this method to analyze numerous parts of brains,
broadening the field of comparative neuroanatomy. In the second half of
the 19th century, new techniques allowed scientists to identify neuronal
cell groups and fiber bundles in brains. In 1885, Vittorio Marchi
discovered a staining technique that let scientists see induced axonal
degeneration in myelinated axons; in 1950, the "original Nauta
procedure" allowed for more accurate identification of degenerating
fibers; and in the 1970s, several molecular tracers were discovered
that are still used in experiments today. In the
last 20 years, cladistics has also become a useful tool for looking at variation in the brain.
Evolution of brains
For much of Earth's early history, animals lacked brains. Among them was the amphioxus, whose lineage can be traced back some 550 million years. The amphioxus led a significantly simpler way of life that did not require a brain; in place of one, it had a limited nervous system composed of only a small cluster of cells. These cells served multiple purposes: many of its sensory cells were intertwined with the cells of its very simple motor system, allowing it to propel itself through the water and react without much processing, while the remaining cells detected light, compensating for its lack of eyes. It also had no need for a sense of hearing. Even with such limited senses, the amphioxus survived efficiently, as its life was mainly spent feeding on the seafloor. Although the amphioxus's "brain" might seem severely underdeveloped compared with the human brain, it was well suited to its environment, which has allowed the animal to prosper for millions of years.
Although many scientists once assumed that the brain evolved to
achieve an ability to think, such a view is today considered a great
misconception. Around 500 million years ago, the Earth entered the Cambrian
period, when hunting became a new concern for survival. Animals became
sensitive to the presence of other animals, which could serve as food.
Although hunting did not inherently require a brain, it was one of the
main pressures that pushed the development of one, as organisms evolved
increasingly advanced sensory systems.
In response to progressively complicated surroundings, where
competition for survival arose between animals with brains, animals had
to learn to manage their energy. As creatures acquired a variety of senses, they developed allostasis,
which played the role of an early brain by drawing on past
experience to improve prediction. Since prediction beats reaction,
organisms that planned their manoeuvres were more likely to survive than
those that did not, and nature favoured those that also managed their
energy adequately. Animals that had not developed allostasis were at a
disadvantage in exploration, foraging, and reproduction, as they faced a
higher risk of death.
As allostasis continued to develop, animals' bodies also grew in
size and complexity. They progressively developed cardiovascular systems, respiratory systems and immune systems
to survive in their environments, which required something more complex
than a limited set of cells to regulate. This pushed the nervous systems
of many creatures to develop into a brain, sizeable and strikingly
similar to how most animal brains look today.
Evolution of the human brain
Darwin, in The Descent of Man,
stipulated that the mind evolved simultaneously with the body.
According to his theory, all humans have a barbaric core that they learn
to deal with. Darwin's theory allowed people to start thinking about the way animals and their brains evolve.
Reptile brain
Plato's insight on the evolution of the human brain contemplated the
idea that all humans were once lizards, with similar survival needs such
as feeding, fighting and mating. In the classical era, Plato first described this concept as the "lizard mind": the deepest layer and one of three parts of his conception of a three-part human mind. In the 20th century, P. MacLean developed a similar, modern triune brain theory.
Recent research in molecular genetics has found no difference
between the neurons of reptiles and nonhuman mammals and those of
humans. Instead, new research suggests that all mammals, and potentially
reptiles, birds and some species of fish, evolved from a common pattern
of organization. This research reinforces the idea that human brains are
structurally not so different from those of many other organisms.
The cerebral cortex of reptiles resembles that of mammals, although simplified. Although the evolution and function of the human cerebral cortex is
still shrouded in mystery, we know that it is the most dramatically
changed part of the brain in recent evolution. The reptilian brain,
which emerged some 300 million years ago, served our basic urges and
instincts such as fighting, reproducing, and mating. A further layer
evolved roughly 100 million years later and gave us the ability to feel
emotion. Eventually, the brain developed a rational part that controls
our inner animal.
Visual perception
Vision allows humans to process the surrounding world to a certain
extent; the brain associates wavelengths of light with specific events.
Although the brain perceives its surroundings at a given moment, it also
predicts upcoming changes in the environment. Once it has noticed them,
the brain prepares to encounter the new scenario by attempting to
develop an adequate response, using the data at its disposal, such as
past experiences and memories. Sometimes, however, the brain fails to
predict accurately, and the mind perceives a false image. Such an
incorrect image occurs when the brain uses an inadequate memory to
respond to what it is facing, a memory that does not correspond to the
real scenario.
The rabbit–duck illusion
is a famous ambiguous image in which a rabbit or a duck can be seen.
The earliest known version is an unattributed drawing from the
23 October 1892 issue of Blätter magazine.
Research about how visual perception has developed in evolution is
today best understood through studying present-day primates since the
organization of the brain cannot be ascertained only by analyzing
fossilized skulls.
The brain interprets visual information in the occipital lobe, a
region at the back of the brain that contains the visual cortex; the
visual cortex and the thalamus are the two main actors in processing
visual information. Interpreting that information has proven more
complex than "what you see is what you get": misinterpreting visual
information is more common than previously believed.
As knowledge of the human brain has evolved, researchers discover
that our visual perception is much closer to a construction of the
brain than a direct "photograph" of what is in front of us. This can
lead to misperceiving certain situations or elements in the brain's
attempt to keep us safe. For example, an on-edge soldier believes a
young child with a stick is a grown man with a gun, as the brain's
sympathetic system, or fight-or-flight mode, is activated.
An example of this phenomenon can be observed in the rabbit–duck illusion.
Depending on how the image is looked at, the brain can interpret the
image of a rabbit, or a duck. There is no right or wrong answer, but it
is proof that what is seen may not be the reality of the situation.
Auditory perception
The organization of the human auditory cortex is divided into core,
belt, and parabelt. This closely resembles that of present-day primates.
Auditory perception closely resembles visual perception: the brain
is wired to act on what it expects to experience. The sense of hearing
helps situate an individual and gives hints about what else is around
them. If something moves, they know approximately where it is, and from
its tone the brain can predict what moved. If someone hears leaves
rustling in a forest, the brain might interpret the sound as a
potentially dangerous animal when it is simply another person walking.
The brain can predict many things based on what it interprets, but those predictions may not all be true.
Language development
Evidence of a rich cognitive life in primate relatives of humans is
extensive, and a wide range of specific behaviours in line with
Darwinian theory is well documented. However, until recently, research has disregarded nonhuman primates in
the context of evolutionary linguistics, primarily because, unlike
vocal-learning birds, our closest relatives seem to lack imitative abilities.
Evolutionarily speaking, there is strong evidence that a genetic
groundwork for language has been in place for millions of years, as with
many other capabilities and behaviours observed today.
While evolutionary linguists agree on the fact that volitional
control over vocalizing and expressing language is a quite recent leap
in the history of the human race, that is not to say auditory perception
is a recent development as well. Research has shown substantial
evidence of well-defined neural pathways linking cortices to organize
auditory perception in the brain. Thus, the issue lies in our abilities
to imitate sounds.
Beyond the fact that primates may be poorly equipped to learn
sounds, studies have shown them to learn and use gestures far better.
Visual cues and motoric pathways developed millions of years earlier in
our evolution, which seems to be one reason for our earlier ability to
understand and use gestures.
Cognitive specializations
Evolution shows how certain environments favor the development of
specific cognitive functions that help an animal, in this case a human,
live successfully in that environment.
Cognitive specialization is a theory in which cognitive
functions, such as the ability to communicate socially, can be passed
down genetically to offspring, benefiting species in the
process of natural selection. In relation to the
human brain, it has been theorized that very specific social skills
apart from language, such as trust, vulnerability, navigation, and
self-awareness, can also be passed down to offspring.
Quantum mind
The quantum mind or quantum consciousness is a group of hypotheses proposing that local physical laws and interactions from classical mechanics, or connections between neurons alone, cannot explain consciousness. These hypotheses posit instead that quantum-mechanical phenomena, such as entanglement and superposition that cause nonlocalized quantum effects,
interacting in smaller features of the brain than cells, may play an
important part in the brain's function and could explain critical
aspects of consciousness. These scientific hypotheses are as yet
unvalidated, and they can overlap with quantum mysticism.
History
Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron".
Other contemporary physicists and philosophers considered these arguments unconvincing. Victor Stenger
characterized quantum consciousness as a "myth" having "no scientific
basis" that "should take its place along with gods, unicorns and
dragons".
David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same
weakness as more conventional theories. Just as he argues that there is
no particular reason why specific macroscopic physical features in the
brain should give rise to consciousness, he also thinks that there is no
specific reason why a particular quantum feature, such as the EM field
in the brain, should give rise to consciousness either.
Approaches
Bohm and Hiley
David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe. He claimed that both quantum theory and relativity pointed to this deeper theory, a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.
Bohm's proposed order applies both to matter and consciousness.
He suggested that it could explain the relationship between them. He saw
mind and matter as projections into our explicate order from the
underlying implicate order. Bohm claimed that when we look at matter, we
see nothing that helps us to understand consciousness.
Bohm never proposed a specific means by which his proposal could
be falsified, nor a neural mechanism through which his "implicate order"
could emerge in a way relevant to consciousness. He later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.
David Bohm also collaborated with Basil Hiley on work that claimed mind and matter both emerge from an "implicate order". Hiley in turn worked with philosopher Paavo Pylkkänen. According to Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking
process is at the classically describable level of the brain, while the
basic thinking process is at the quantum-theoretically describable
level".
Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as "orchestrated objective reduction"
(Orch-OR). Penrose and Hameroff initially developed their ideas
separately and later collaborated to produce Orch-OR in the early 1990s.
They reviewed and updated their theory in 2013.
Penrose's argument stemmed from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal system cannot prove its own consistency,
Gödel's unprovable results are provable by human mathematicians. Penrose took this to mean that human mathematicians are not formal
proof systems and not running a computable algorithm. According to
Bringsjord and Xiao, this line of reasoning is based on fallacious equivocation on the meaning of computation. In the same book, Penrose wrote: "One might speculate, however, that
somewhere deep in the brain, cells are to be found of single quantum
sensitivity. If this proves to be the case, then quantum mechanics will
be significantly involved in brain activity."
Penrose determined that wave function collapse
was the only possible physical basis for a non-computable process.
Dissatisfied with its randomness, he proposed a new form of wave
function collapse that occurs in isolation and called it objective reduction.
He suggested each quantum superposition has its own piece of spacetime
curvature and that when these become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derives.
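Penrose's objective-reduction criterion relates a superposition's lifetime to its gravitational self-energy, roughly tau ≈ ħ/E_G. As a sketch, the relation can be inverted to find the self-energy corresponding to a given collapse time; the 25 ms figure used below matches the gamma-band period sometimes cited in Orch-OR discussions and is an illustrative choice:

```python
# Penrose's objective-reduction criterion ties the lifetime of a
# superposition to its gravitational self-energy: tau ≈ hbar / E_G.
HBAR = 1.0546e-34        # reduced Planck constant, J*s

def collapse_time(e_g):
    """Approximate OR collapse time (s) for gravitational self-energy e_g (J)."""
    return HBAR / e_g

# Inverting the relation: a collapse time of 25 ms corresponds to
tau = 25e-3              # s, illustrative gamma-band period
e_g = HBAR / tau
print(f"E_G ≈ {e_g:.2e} J for tau = 25 ms")
```

The inverse relationship means larger superposed masses (larger E_G) collapse faster, which is why Orch-OR looks for mesoscopic structures like microtubules rather than single particles.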
Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior. Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and may contain delocalized π electrons. Tubulins have other smaller non-polar regions that contain π-electron-rich indole rings separated by about 2 nm. Hameroff proposed that these electrons are close enough to become entangled. He originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited. He then proposed a Fröhlich condensate, a hypothetical coherent
oscillation of dipolar molecules, but this too was experimentally
discredited.
For instance, the proposed predominance of A-lattice
microtubules, more suitable for information processing, was falsified by
Kikkawa et al. who showed that all in vivo microtubules have a B lattice and a seam.
Orch-OR predicted that microtubule coherence reaches the synapses
through dendritic lamellar bodies (DLBs), but De Zeeuw et al. proved this impossible by showing that DLBs are micrometers away from gap junctions.
In 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013 corroborates Orch-OR theory. Experiments that showed that anaesthetic drugs reduce how long
microtubules can sustain suspected quantum excitations appear to support
the quantum theory of consciousness.
In April 2022, the results of two related experiments at the University of Alberta and Princeton University were announced at The Science of Consciousness conference, providing further evidence to support quantum processes operating within microtubules. In a study Stuart Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University
used lasers to excite molecules within tubulins, causing a prolonged
excitation to diffuse through microtubules further than expected, which
did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even
classical diffusion can be very complex due to the wide range of length
scales in the fluid-filled extracellular space. Nevertheless, University of Oxford quantum physicist Vlatko Vedral said that this connection with consciousness is "a really long shot".
Also in 2022, a group of Italian physicists conducted several
experiments that failed to provide evidence in support of a
gravity-related quantum collapse model of consciousness, weakening the
possibility of a quantum explanation for consciousness.
Although these theories are stated in a scientific framework, it
is difficult to separate them from scientists' personal opinions. The
opinions are often based on intuition or subjective ideas about the
nature of consciousness. For example, Penrose wrote:
[M]y own point of view asserts that you can't even simulate conscious
activity. What's going on in conscious thinking is something you
couldn't properly imitate at all by computer.... If something behaves as
though it's conscious, do you say it is conscious? People argue
endlessly about that. Some people would say, "Well, you've got to take
the operational viewpoint; we don't know what consciousness is. How do
you judge whether a person is conscious or not? Only by the way they
act. You apply the same criterion to a computer or a computer-controlled
robot." Other people would say, "No, you can't say it feels something
merely because it behaves as though it feels something." My view is
different from both those views. The robot wouldn't even behave
convincingly as though it was conscious unless it really was—which I say
it couldn't be, if it's entirely computationally controlled.
Penrose continues:
A lot of what the brain does you could do on a computer. I'm not
saying that all the brain's action is completely different from what you
do on a computer. I am claiming that the actions of consciousness are
something different. I'm not saying that consciousness is beyond
physics, either—although I'm saying that it's beyond the physics we know
now.... My claim is that there has to be something in physics that we
don't yet understand, which is very important, and which is of a
noncomputational character. It's not specific to our brains; it's out
there, in the physical world. But it usually plays a totally
insignificant role. It would have to be in the bridge between quantum
and classical levels of behavior—that is, where quantum measurement
comes in.
Umezawa, Vitiello, Freeman
Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain. Their quantum field theory models of brain dynamics are fundamentally different from the Penrose–Hameroff theory.
Quantum brain dynamics
As described by Harald Atmanspacher, "Since quantum theory is the
most fundamental theory of matter that is currently available, it is a
legitimate question to ask whether quantum theory can help us to
understand consciousness."
The original motivation in the early 20th century for
relating quantum theory to consciousness was essentially philosophical.
It is fairly plausible that conscious free decisions ("free will") are problematic in a perfectly deterministic world,
so quantum randomness might indeed open up novel possibilities for free
will. (On the other hand, randomness is problematic for goal-directed
volition!)
Ricciardi and Umezawa proposed in 1967 a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons. Mari Jibu and Kunio Yasue later popularized these results under the name "quantum brain dynamics" (QBD) as the hypothesis to explain the function of the brain within the framework of quantum field theory with implications on consciousness.
Pribram
Karl Pribram's holonomic brain theory (quantum holography) invoked quantum field theory to explain higher-order processing of memory in the brain. He argued that his holonomic model solved the binding problem. Pribram collaborated with Bohm in his work on quantum approaches to the thought process. Pribram suggested much of the processing in the brain was done in distributed fashion. He proposed that the fine fibered, felt-like dendritic fields might follow the principles of quantum field theory when storing and retrieving long term memory.
Stapp
Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues from the orthodox quantum mechanics of John von Neumann that the quantum state collapses when the observer selects one among
the alternative quantum possibilities as a basis for future action. The
collapse, therefore, takes place in the expectation of the observer
associated with the state. Stapp's work drew criticism from scientists
such as David Bourget and Danko Georgiev.
Catecholaminergic neuron electron transport (CNET)
CNET is a hypothesized neural signaling mechanism in
catecholaminergic neurons that would use quantum mechanical electron
transport. The hypothesis is based in part on the observation by many independent
researchers that electron tunneling occurs at room temperature and
ambient conditions in ferritin, an iron storage protein that is
prevalent in those neurons. The hypothesized function of this mechanism is to assist in action
selection, but the mechanism itself would be capable of integrating
millions of cognitive and sensory neural signals using a physical
mechanism associated with strong electron-electron interactions. Each tunneling event would involve a collapse of an electron wave
function, but the collapse would be incidental to the physical effect
created by strong electron-electron interactions.
CNET predicted a number of physical properties of these neurons
that have been subsequently observed experimentally, such as electron
tunneling in substantia nigra pars compacta (SNc) tissue and the
presence of disordered arrays of ferritin in SNc tissue. The hypothesis also predicted that disordered ferritin arrays like
those found in SNc tissue should be capable of supporting long-range
electron transport and providing a switching or routing function, both
of which have also been subsequently observed.
Another prediction of CNET was that the largest SNc neurons
should mediate action selection. This prediction was contrary to earlier
proposals about the function of those neurons at that time, which were
based on predictive reward dopamine signaling. A
team led by Dr. Pascal Kaeser of Harvard Medical School subsequently
demonstrated that those neurons do in fact code movement, consistent
with the earlier predictions of CNET. While the CNET mechanism has not yet been directly observed, it may be
possible to do so using quantum dot fluorophores tagged to ferritin or
other methods for detecting electron tunneling.
CNET is applicable to a number of different consciousness models as a binding or action selection mechanism, such as Integrated Information Theory (IIT) and Sensorimotor Theory (SMT). Many existing models of consciousness, however, do not
specifically address action selection or binding. For example, O'Regan
and Noë call binding a "pseudo problem," but also state that "the fact
that object attributes seem perceptually to be part of a single object
does not require them to be 'represented' in any unified kind of way,
for example, at a single location in the brain, or by a single process.
They may be so represented, but there is no logical necessity for this." The absence of a "logical necessity" for a physical
phenomenon, however, does not mean that it does not exist, or that,
once identified, it can be ignored. Likewise, global workspace theory (GWT) models appear to treat dopamine as modulatory, based on the prior understanding of those neurons from predictive
reward dopamine signaling research, but GWT models could be adapted to
include modeling of moment-by-moment activity in the striatum to mediate
action selection, as observed by Kaeser. CNET is applicable to those
neurons as a selection mechanism for that function, as otherwise that
function could result in seizures from simultaneous actuation of
competing sets of neurons. While CNET by itself is not a model of
consciousness, it is able to integrate different models of consciousness
through neural binding and action selection. However, a more complete
understanding of how CNET might relate to consciousness would require a
better understanding of strong electron-electron interactions in
ferritin arrays, which implicates the many-body problem.
Criticism
These hypotheses of the quantum mind remain hypothetical speculation,
as Penrose admits in his discussions. Until they make a prediction that
is tested by experimentation, the hypotheses are not based on empirical
evidence. In 2010, Lawrence Krauss
was guarded in criticising Penrose's ideas. He said: "Roger Penrose has
given lots of new-age crackpots ammunition... Many people are dubious
that Penrose's suggestions are reasonable, because the brain is not an
isolated quantum-mechanical system. To some extent it could be, because
memories are stored at the molecular level, and at a molecular level
quantum mechanics is significant." According to Krauss, "It is true that quantum mechanics is extremely
strange, and on extremely small scales for short times, all sorts of
weird things happen. And in fact, we can make weird quantum phenomena
happen. But what quantum mechanics doesn't change about the universe is,
if you want to change things, you still have to do something. You can't
change the world by thinking about it."
The process of testing the hypotheses with experiments is fraught with conceptual/theoretical, practical, and ethical problems.
Conceptual problems
The idea that a quantum effect is necessary for consciousness to
function is still in the realm of philosophy. Penrose proposes that it
is necessary, but other theories of consciousness do not indicate that
it is needed. For example, Daniel Dennett proposed a theory called multiple drafts model, which doesn't indicate that quantum effects are needed, in his 1991 book Consciousness Explained. A philosophical argument on either side is not a scientific proof,
although philosophical analysis can indicate key differences in the
types of models and show what type of experimental differences might be
observed. But since there is no clear consensus among philosophers,
there is no conceptual support that a quantum mind theory is needed.
A possible conceptual approach is to use quantum mechanics as an
analogy to understand a different field of study like consciousness,
without expecting that the laws of quantum physics will apply. An
example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger
described how one could, in principle, create entanglement of a
large-scale system by making it dependent on an elementary particle in a
superposition. He proposed a scenario with a cat in a locked steel
chamber, wherein the cat's survival depended on the state of a radioactive atom—whether it had decayed and emitted radiation. According to Schrödinger, the Copenhagen interpretation implies that the cat is both alive and dead
until the state has been observed. Schrödinger did not wish to promote
the idea of dead-and-alive cats as a serious possibility; he intended
the example to illustrate the absurdity of the existing view of quantum
mechanics. But since Schrödinger's time, physicists have given other interpretations of the mathematics of quantum mechanics, some of which regard the "alive and dead" cat superposition as quite real. Schrödinger's famous thought experiment
poses the question of when a system stops existing as a quantum
superposition of states. In the same way, one can ask whether the act of
making a decision is analogous to having a superposition of states of
two decision outcomes, so that making a decision means "opening the box"
to reduce the brain from a combination of states to one state. This
analogy of decision-making uses a formalism derived from quantum
mechanics, but does not indicate the actual mechanism by which the
decision is made.
In this way, the idea is similar to quantum cognition.
This field clearly distinguishes itself from the quantum mind, as it is
not reliant on the hypothesis that there is something micro-physical
quantum-mechanical about the brain. Quantum cognition is based on the
quantum-like paradigm, generalized quantum paradigm, or quantum structure paradigm: that information processing by complex systems such as the brain can be
mathematically described in the framework of quantum information and
quantum probability theory. This model uses quantum mechanics only as an
analogy and does not propose that quantum mechanics is the physical
mechanism by which it operates. For example, quantum cognition proposes
that some decisions can be analyzed as if there is interference between
two alternatives, but it is not a physical quantum interference effect.
Practical problems
The main theoretical argument against the quantum-mind hypothesis is
the assertion that quantum states in the brain would lose coherency
before they reached a scale where they could be useful for neural
processing. This supposition was elaborated by Max Tegmark. His calculations indicate that quantum systems in the brain decohere at sub-picosecond timescales. No observed brain response shows computational results or reactions on
such a fast timescale. Typical reactions are on the order of
milliseconds, many orders of magnitude longer than sub-picosecond timescales.
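As a rough illustration of the mismatch, the gap between decoherence and neural timescales can be computed directly. The figures below are illustrative assumptions: a range of roughly 10^-13 to 10^-20 seconds spanning Tegmark's published decoherence estimates, and a typical millisecond-scale neural response.

```python
# Illustrative comparison of decoherence estimates with neural timescales.
# 1e-13 to 1e-20 s spans Tegmark's estimates for different candidate
# mechanisms; 1e-3 s is a typical millisecond-scale neural response.

decoherence_long_s = 1e-13    # longest estimated decoherence time
decoherence_short_s = 1e-20   # shortest estimated decoherence time
neural_response_s = 1e-3      # typical neural reaction timescale

gap_min = neural_response_s / decoherence_long_s
gap_max = neural_response_s / decoherence_short_s
print(f"neural responses are ~{gap_min:.0e} to ~{gap_max:.0e} times slower")
```

Even at the most favorable end of the assumed range, the gap is about ten orders of magnitude, which is the core of Tegmark's objection.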
Daniel Dennett cites, in support of his multiple drafts model, an experimental result involving an optical illusion
that happens on a timescale of less than a second or so. In this
experiment, two different-colored lights, with an angular separation of a
few degrees at the eye, are flashed in succession. If the interval
between the flashes is less than a second or so, the first light that is
flashed appears to move across to the position of the second light.
Furthermore, the light seems to change color as it moves across the
visual field. A green light will appear to turn red as it seems to move
across to the position of a red light. Dennett asks how we could see the
light change color before the second light is observed. Velmans argues that the cutaneous rabbit illusion,
another illusion that happens in about a second, demonstrates that
there is a delay while modelling occurs in the brain and that this delay
was discovered by Libet. These slow illusions that happen at times of less than a second do not
support a proposal that the brain functions on the picosecond timescale.
Penrose says:
The problem with trying to use quantum mechanics in the action of the
brain is that if it were a matter of quantum nerve signals, these nerve
signals would disturb the rest of the material in the brain, to the
extent that the quantum coherence would get lost very quickly. You
couldn't even attempt to build a quantum computer out of ordinary nerve
signals, because they're just too big and in an environment that's too
disorganized. Ordinary nerve signals have to be treated classically. But
if you go down to the level of the microtubules, then there's an
extremely good chance that you can get quantum-level activity inside
them.
For my picture, I need this quantum-level activity in the
microtubules; the activity has to be a large-scale thing that goes not
just from one microtubule to the next but from one nerve cell to the
next, across large areas of the brain. We need some kind of coherent
activity of a quantum nature which is weakly coupled to the
computational activity that Hameroff argues is taking place along the
microtubules.
There are various avenues of attack. One is directly on the
physics, on quantum theory, and there are certain experiments that
people are beginning to perform, and various schemes for a modification
of quantum mechanics. I don't think the experiments are sensitive enough
yet to test many of these specific ideas. One could imagine experiments
that might test these things, but they'd be very hard to perform.
Penrose also said in an interview:
...whatever consciousness is, it must be beyond computable
physics.... It's not that consciousness depends on quantum mechanics,
it's that it depends on where our current theories of quantum mechanics
go wrong. It's to do with a theory that we don't know yet.
A demonstration of a quantum effect in the brain has to explain this
problem or explain why it is not relevant, or that the brain somehow
circumvents the problem of the loss of quantum coherency at body
temperature. As Penrose proposes, it may require a new type of physical
theory, something "we don't know yet."
Ethical problems
Deepak Chopra has referred to a "quantum soul" existing "apart from the body", to human "access to a field of infinite possibilities", and to other quantum mysticism topics such as quantum healing
or quantum effects of consciousness. Seeing the human body as being
undergirded by a "quantum-mechanical body" composed not of matter but of
energy and information, he believes that "human aging is fluid and
changeable; it can speed up, slow down, stop for a time, and even
reverse itself", as determined by one's state of mind. Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of
ailments, including cancer, through effects that he claims are based on
the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Chopra said: "I think quantum theory has a lot of things to say about the observer effect,
about non-locality, about correlations. So I think there's a school of
physicists who believe that consciousness has to be equated, or at least
brought into the equation, in understanding quantum mechanics." On the other hand, he also claims that quantum effects are "just a
metaphor. Just like an electron or a photon is an indivisible unit of
information and energy, a thought is an indivisible unit of
consciousness." In his book Quantum Healing, Chopra stated the conclusion that quantum entanglement links everything in the Universe, and therefore it must create consciousness.
According to Daniel Dennett, "On this topic, everybody's an expert...
but they think that they have a particular personal authority about the
nature of their own conscious experiences that can trump any hypothesis
they find unacceptable."
While quantum effects are significant in the physiology of the
brain, critics of quantum mind hypotheses challenge whether the effects
of known or speculated quantum phenomena in biology scale up to have
significance in neuronal computation, much less the emergence of
consciousness as a phenomenon. Daniel Dennett said, "Quantum effects are
there in your car, your watch, and your computer. But most things—most
macroscopic objects—are, as it were, oblivious to quantum effects. They
don't amplify them; they don't hinge on them."
Mind uploading is a hypothetical process of whole brain emulation in which a brain scan is used to completely emulate a person's mental state in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and have a sentient, conscious mind.
Substantial mainstream research in related areas is being conducted in neuroscience and computer science, including animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. Supporters say many of the tools and ideas needed to achieve mind
uploading already exist or are under active development, but they admit
that others are as yet very speculative, though still in the realm of
engineering possibility.
Mind uploading may be accomplished by either of two methods: copy-and-upload, or copy-and-delete by gradual replacement of neurons (which can be considered a gradual destructive uploading) until the original organic brain no longer exists and a computer program emulating it takes control of the body. In the former method, mind uploading would be achieved by scanning and mapping
the salient features of a biological brain and then storing and copying
that information into a computer system or another computational
device. The biological
brain may not survive the copying process or may be deliberately
destroyed during it. The simulated mind could be in a virtual reality or
simulated world,
supported by an anatomic 3D body simulation model. Alternatively, the
simulated mind could reside in a computer that is inside, connected to,
or remotely controlling a (not necessarily humanoid) robot or a biological or cybernetic body.
Among some futurists and within part of the transhumanist movement, mind uploading is treated as an important proposed life extension or immortality technology (known as "digital immortality"). Some believe mind uploading is the best way to preserve the human species, as opposed to cryonics.
Another aim of mind uploading is to provide a permanent backup to our
"mind-file", to enable interstellar space travel, and to be a means for
human culture to survive a global disaster by making a functional copy
of a human society in a computing device. Some futurists consider
whole-brain emulation a "logical endpoint" of computational neuroscience and neuroinformatics, both of which study brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and to at least weak superintelligence. Another approach is seed AI,
which is not based on existing brains. Computer-based intelligence,
such as an upload, could think much faster than a biological human, even
if it were no more intelligent. A large-scale society of uploads might,
according to futurists, give rise to a technological singularity: an exponential development of technology that exceeds human control and becomes unpredictable. Mind uploading is a central conceptual feature of numerous science fiction novels, films, and games.
Overview
Many neuroscientists believe that the human mind is largely an emergent property of the information processing of its neuronal network.
Neuroscientists have said that important functions performed by
the mind, such as learning, memory, and consciousness, are due to purely
physical and electrochemical processes in the brain and are governed by
applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
Consciousness is part of the
natural world. It depends, we believe, only on mathematics and logic and
on the imperfectly known laws of physics, chemistry, and biology; it
does not arise from some magical or otherworldly quality.
Many theorists have presented models of the brain and established
a range of estimates of how much computing power is needed for partial
and complete simulations. Using these models, some have estimated that uploading may be possible within decades if trends such as Moore's law continue. As of December 2022, this kind of technology is almost entirely theoretical.
In theory, if a mind's information and processes can be disassociated
from a biological body, they are no longer tied to that body's limits
and lifespan. Furthermore, information within a brain could be partly or
wholly copied or transferred to one or more other substrates (including
digital storage or another brain), thereby—from a purely mechanistic
perspective—reducing or eliminating such information's "mortality risk".
This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington. From the perspective of the biological brain, the simulated brain may
just be a copy, even if it is conscious and has an indistinguishable
character. As such, the original biological being, before the uploading,
might consider the digital twin a new and independent being rather than
a future self.
Space exploration
An "uploaded astronaut" could be used instead of a "live" astronaut in human spaceflight, avoiding the perils of zero gravity, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.
Mind editing
Some researchers believe editing human brains is physically possible in theory, for example by performing neurosurgery with nanobots,
but it would require particularly advanced technology. Editing an
uploaded mind would be much easier, as long as the exact edits to be
made are known. This would facilitate cognitive enhancement and the precise control of the emulated beings' well-being, motivations, and personalities.
Speed
Although the number of neuronal connections in the human brain is very large (around 100 trillion),
the frequency of activation of biological neurons is limited to around
200 Hz, whereas electronic hardware can easily operate at multiple GHz.
With sufficient hardware parallelism, a simulated brain could thus in
theory run faster than a biological brain. Uploaded beings may therefore
not only be more efficient but also have a faster rate of subjective experience than biological brains (e.g. experiencing an hour of lifetime in a single second of real time).
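The speed claim can be made concrete with back-of-envelope arithmetic. The ~200 Hz firing-rate figure comes from the paragraph above; the 2 GHz clock rate for the electronic substrate is an assumption for illustration.

```python
# Per-element rate comparison between biological neurons and electronic
# hardware. 200 Hz is the firing-rate figure from the text; the 2 GHz
# clock rate is an assumed value for illustration.

neuron_rate_hz = 200        # approximate upper bound on neuronal firing rate
hardware_rate_hz = 2e9      # assumed clock rate of the electronic substrate

speedup = hardware_rate_hz / neuron_rate_hz
print(f"per-element speedup: {speedup:.0e}x")  # 1e+07x

# If the emulation were fully parallel, subjective time could run this
# much faster than real time: one real second spanning many subjective hours.
subjective_hours_per_real_second = speedup / 3600
print(f"~{subjective_hours_per_real_second:.0f} subjective hours per real second")
```

The result is only an upper bound on per-element speed; actual throughput would depend on how well the simulation parallelizes across the network.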
Relevant technologies and techniques
The focus of mind uploading, in the case of copy-and-transfer, is on
data acquisition, rather than data maintenance of the brain. A set of
approaches known as loosely coupled off-loading (LCOL) may be used in an
attempt to characterize and copy a brain's mental contents. The LCOL approach may take advantage of self-reports, life-logs, and
video recordings that can be analyzed by artificial intelligence. A
bottom-up approach may focus on neurons' specific resolution,
morphology, and spike times, the times at which they produce action
potential responses.
AI-based "digital self" applications
Related to the "loosely coupled off-loading" (LCOL) idea of using
self-reports and life-logs to approximate aspects of a person's mental
contents, some consumer applications use user-provided text (e.g.,
journaling) to build conversational AI
companions or "digital selves"; this differs from whole-brain emulation
because it does not involve brain scanning or simulation of brain
tissue. Examples include AI companion/chatbot services such as Replika
and Character.AI, and a mobile application called Mind Upload, which describes itself as allowing users to "upload your consciousness" by submitting thoughts and memories.
Computational complexity
Estimates of how much processing power is needed to emulate a human brain at various levels of detail have been charted against the fastest and slowest supercomputers from TOP500 and a $1,000 PC on a logarithmic scale; the exponential trend line for the fastest supercomputer reflects a doubling every 14 months.
Kurzweil believes that mind uploading will be possible at the level of
neural simulation, while the Sandberg & Bostrom report is less certain
about where consciousness arises.
Advocates of mind uploading point to Moore's law to support the
notion that the necessary computing power will be available within a few
decades. But the actual computational requirements for running an
uploaded human mind are very difficult to quantify, potentially
rendering such an argument specious.
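The shape of such Moore's-law arguments, and their sensitivity to assumptions, can be sketched by computing how long a fixed doubling time takes to close a performance gap. The 14-month doubling period matches the supercomputer trend cited elsewhere in this article; the 1000x shortfall is a purely hypothetical figure.

```python
import math

# How long does a fixed doubling time take to close a performance gap?
# The 14-month doubling period is the cited supercomputer trend; the
# 1000x shortfall is a hypothetical figure for illustration only.

doubling_months = 14
shortfall_factor = 1000   # assumed gap between available and required FLOPS

doublings_needed = math.log2(shortfall_factor)          # ~9.97 doublings
years_needed = doublings_needed * doubling_months / 12
print(f"~{years_needed:.1f} years to close a {shortfall_factor}x gap")
```

Because the timeline scales with the logarithm of the assumed shortfall, being wrong about the requirements by several orders of magnitude shifts the answer by only a decade or two, which is exactly why critics find the requirements question so important.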
Regardless of the techniques used to capture or recreate the
function of a human mind, the processing demands are likely to be
immense, due to the large number of neurons in the human brain along
with the considerable complexity of each neuron.
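A common back-of-envelope way to see why the demands are immense is to multiply neuron count, synapses per neuron, firing rate, and a per-event cost. All four figures below are round-number assumptions for illustration, not values from this article.

```python
# Rough order-of-magnitude estimate for a synapse-level brain simulation.
# Every figure here is a round-number assumption for illustration.

neurons = 1e11                 # ~100 billion neurons
synapses_per_neuron = 1e4      # ~10,000 synapses per neuron
avg_firing_rate_hz = 100       # assumed average spike rate
flops_per_synaptic_event = 10  # assumed cost of updating one synapse

total_flops = neurons * synapses_per_neuron * avg_firing_rate_hz * flops_per_synaptic_event
print(f"~{total_flops:.0e} FLOPS")  # ~1e+18 FLOPS
```

Under these assumptions the estimate lands at exascale, and modeling sub-neuronal detail (dendritic compartments, molecular dynamics) multiplies it further.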
Required computational capacity strongly depends on the chosen level of simulation model scale; published estimates tabulate, for each level, the CPU demand (in FLOPS), the memory demand (in TB), and the earliest year in which a $1 million supercomputer could meet those demands.
When modeling and simulating a specific brain, a brain map or
connectivity database showing the connections between the neurons must
be extracted from an anatomic model of the brain. For whole-brain
simulation, this map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain, including synaptic details, has been possible since 2010.
But if short-term memory and working memory
include prolonged or repeated firing of neurons as well as intraneural
dynamic processes, the electrical and chemical signal state of the
synapses and neurons may be hard to extract. The uploaded mind may then
perceive a memory loss of the events and mental processes immediately before the brain scanning.
A full brain map has been estimated to occupy less than 2 × 10^16
bytes (20,000 TB) and would store the addresses of the connected
neurons, the synapse type, and the "weight" of each of the brain's 10^15 synapses. But the biological complexities of true brain function (the epigenetic
states of neurons, protein components with multiple functional states,
etc.) may preclude an accurate prediction of the volume of binary data
required to faithfully represent a functioning human mind.
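The ~2 × 10^16-byte figure is consistent with simple arithmetic over the synapse count. The 20-bytes-per-synapse breakdown below (connected-neuron addresses, synapse type, weight) is an assumed allocation for illustration, not a value from the source.

```python
# Storage check against the figure quoted in the text: ~10^15 synapses
# at an assumed ~20 bytes each (connected-neuron addresses, synapse type,
# and weight) gives ~2 x 10^16 bytes, i.e. 20,000 TB.

synapses = 1e15
bytes_per_synapse = 20   # assumed per-synapse record size

total_bytes = synapses * bytes_per_synapse
terabytes = total_bytes / 1e12
print(f"{total_bytes:.0e} bytes = {terabytes:,.0f} TB")  # 2e+16 bytes = 20,000 TB
```

As the surrounding text notes, this counts only connectivity and weights; epigenetic and molecular state could inflate the true requirement unpredictably.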
Serial sectioning
Serial sectioning of a brain
A possible method for mind uploading is serial sectioning, in which
the brain tissue and perhaps other parts of the nervous system are
frozen and then scanned and analyzed layer by layer, which, for frozen
samples at nano-scale, requires a cryo-ultramicrotome, capturing the structure of the neurons and their interconnections. The exposed surface of frozen nerve tissue would be scanned and
recorded, and then the surface layer of tissue removed. While this would
be very slow and labor-intensive, research is underway to automate the
collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural net recreated in the system into which the mind was being uploaded.
There is uncertainty with this approach using current microscopy
techniques. If it is possible to replicate neuron function from its
visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. But as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane),
this may not suffice to capture and simulate neuron functions. It may
be possible to extend the techniques of serial sectioning and to capture
the internal molecular makeup of neurons, through the use of
sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy.
But as the physiological genesis of mind is not currently known, this
method may not be able to access all the biochemical information
necessary to recreate a human brain with sufficient fidelity.
Brain imaging
It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography
(MEG, for mapping of electrical currents), or combinations of multiple
methods, to build a detailed three-dimensional model of the brain using
non-invasive and non-destructive techniques. Today, fMRI is often
combined with MEG to create functional maps of human cortices during
more complex cognitive tasks, as the methods complement each other. Even
though current imaging technology lacks the spatial resolution needed
to gather the information needed for such a scan, important recent and
future developments are predicted to substantially improve both spatial
and temporal resolution.
Brain–computer interfaces
Brain–computer interfaces (BCIs) are sometimes discussed as
indirectly relevant to mind uploading insofar as they aim to record (and
in some cases stimulate) neural activity, but they do not by themselves
provide the high-resolution structural and biochemical data typically
assumed in whole-brain emulation proposals. Neuralink
is a company developing an implantable BCI that uses flexible electrode
"threads" inserted by a neurosurgical robot, described in a 2019
technical report in the Journal of Medical Internet Research. In 2024, Neuralink reported its first human implant; scientists noted
the work is primarily aimed at assisting people with paralysis and that
major challenges remain for long-term safety and performance.
Ongoing work in brain simulation includes partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.
The Blue Brain Project, initiated by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne,
is an attempt to create a synthetic brain by reverse-engineering
mammalian brain circuitry in order to accelerate experimental research
on the brain. In 2009, after a successful simulation of part of a rat brain, the director, Henry Markram, said, "A detailed, functional artificial human brain can be built within the next 10 years." In 2013, Markram became the director of the new decade-long Human Brain Project.
Less than two years later, the project was recognized to be mismanaged
and its claims overblown, and Markram was asked to step down.
Commercial brain preservation proposals
In 2018, startup company Nectome, backed by Y Combinator, proposed a
controversial preservation service using aldehyde-stabilized cryopreservation.
The company's approach involves connecting terminal patients to a
heart-lung machine while under general anesthesia to pump preservation
chemicals through the carotid arteries while the person is still alive, a
process the company calls "100 percent fatal". The service planned to operate under California's End of Life Option
Act. Nectome won an $80,000 Brain Preservation Foundation prize for
preserving a pig's brain at the synaptic level, and received a $960,000
federal grant from the National Institute of Mental Health. Nectome collaborated with MIT neuroscientist Edward Boyden.
Nanobots
One approach to digital immortality is gradually "replacing" neurons in the brain with advanced medical technology such as nanobiotechnology, possibly using wetware computer technology or using nanobots to read brain structure, as described by Alexey Turchin.
Issues
Philosophical issues
The main philosophical problem faced by mind uploading is the hard problem of consciousness: the difficulty of explaining how a physical entity such as a human can have qualia, phenomenal consciousness, or subjective experience. Many philosophical responses to the hard problem entail that mind
uploading is fundamentally impossible, while others are compatible with
mind uploading. Many proponents of mind uploading defend its feasibility
by recourse to non-reductive physicalism, which includes the belief that consciousness is an emergent feature arising from high-level patterns of organization in large neural networks, patterns that could be realized
in other processing devices. Mind uploading relies on the idea that the
human mind reduces to neural network paths and the weights of synapses
in the brain. In contrast, many dualistic and idealistic
accounts seek to avoid the hard problem of consciousness by explaining
it in terms of immaterial (and presumably inaccessible) substances,
which pose a fundamental challenge to the feasibility of artificial
consciousness in general.
Assuming physicalism is true, the mind can be defined as the
information state of the brain, so it exists only in the same sense as
the information content of a data file or a computer software state. In
this case, data specifying a neural network's information state could be
captured and copied as a "computer file" from the brain and implemented
in a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to mind uploading is copying the information state of a
computer program from the memory of the computer on which it is running
to another computer and then continuing its execution on the second
computer. The second computer may have different hardware architecture,
but it emulates the hardware of the first computer.
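The copy-and-run analogy can itself be expressed as a toy program. The `TinyNet` class below is invented for this sketch and stands in, very loosely, for a "neural information state": the state is serialized to bytes (the "computer file"), reconstructed in a fresh object (the new "substrate"), and the copy responds identically to the same inputs.

```python
import pickle

class TinyNet:
    """Toy stand-in for a neural information state: just connection weights."""

    def __init__(self, weights):
        self.weights = weights  # the "information state"

    def activate(self, inputs):
        # Weighted sum followed by a simple threshold.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total > 0 else 0

original = TinyNet([0.5, -1.2, 0.8])

# "Scan": capture the information state as a byte string (a "computer file").
snapshot = pickle.dumps(original.weights)

# "Transfer and run": reconstruct the state on a fresh substrate.
copy = TinyNet(pickle.loads(snapshot))

# The copy responds identically to the same inputs.
assert original.activate([1, 0, 1]) == copy.activate([1, 0, 1])
```

A real brain's state is, on any account, vastly richer than a weight vector; the point of the sketch is only that an information state, once captured, can be re-instantiated on different hardware.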
These philosophical issues have a long history. In 1775, Thomas Reid
wrote: "I would be glad to know... whether when my brain has lost its
original structure, and when some hundred years after the same materials
are fabricated so curiously as to become an intelligent being, whether,
I say that being will be me; or, if, two or three such beings should be
formed out of my brain; whether they will all be me, and consequently
one and the same intelligent being." Although the term hard problem of consciousness was coined in 1994, debate about the issue is ancient. Augustine of Hippo
argued against physicalist "Academians" in the 5th century, writing
that consciousness cannot be an illusion because only a conscious being
can be deceived or experience an illusion. René Descartes, the founder of mind-body dualism, made a similar objection in the 17th century, coining the popular phrase "Je pense, donc je suis" ("I think, therefore I am"). Although physicalism was proposed in ancient times, Thomas Huxley was among the first to describe mental experience as merely an epiphenomenon of interactions within the brain, with no causal power of its own and entirely downstream from brain activity.
Many transhumanists and singularitarians
hope to become immortal by creating one or many non-biological
functional copies of their brains, thereby leaving their "biological
shell". But the philosopher and transhumanist Susan Schneider claims that, at best, uploading would create a copy of the original mind. Schneider agrees that consciousness has a computational basis, but does
not agree that this means a person survives uploading. According to
her, uploading would probably result in the death of one's brain, and
only others could maintain the illusion that the original person
survived. On Schneider's view, it is implausible that one's
consciousness could leave one's brain for another location; ordinary
physical objects (rocks, tables, etc.) do not behave this way and are
not simultaneously here and elsewhere. At best, a copy is created.
Others have argued against such conclusions. For example,
Buddhist transhumanist James Hughes has pointed out that this
consideration only goes so far: if one believes the self is an illusion,
worries about survival are not reasons to avoid uploading, and Keith Wiley has presented an argument wherein all resulting minds
of an uploading procedure have equal claims to the original identity,
such that survival of the self is determined retroactively from a
strictly subjective position. Some have also asserted that consciousness is a part of an
extra-biological system yet to be discovered and therefore cannot yet be
fully understood. Without transference of consciousness, true uploading
or perpetual immortality cannot be practically achieved.
Another potential consequence of mind uploading is that the
decision to upload may create a mindless symbol manipulator instead of a
conscious mind (a philosophical zombie). If a computer could process sensory inputs to generate the same outputs
that a human mind does (speech, muscle movements, etc.) without having
conscious experience, it may be impossible to determine whether the
uploaded mind is conscious and not merely an automaton that behaves like
a conscious being. Thought experiments like the Chinese room
raise fundamental questions about mind uploading: if an upload behaves
in ways highly indicative of consciousness, or insists that it is
conscious, is it conscious? There might also be an absolute upper limit in processing speed above
which consciousness cannot be sustained. The subjectivity of
consciousness precludes a definitive answer to this question.
Many scientists, including Ray Kurzweil,
believe that whether a separate entity is conscious is impossible to
know with confidence, since consciousness is inherently subjective (see solipsism).
Regardless, some scientists believe consciousness is the consequence of
substrate-neutral computational processes. Other scientists, including Roger Penrose, believe consciousness may emerge from some form of quantum computation that depends on an organic substrate (see quantum mind).
In light of uncertainty about whether uploaded minds are conscious, Sandberg proposes a cautious approach:
Principle
of assuming the most (PAM): Assume that any emulated system could have
the same mental properties as the original system and treat it
correspondingly.
Continuity of consciousness
Michael Cerullo's theory of consciousness is called "psychological
branching identity". He argues a person would survive gradual
destructive uploading via scan-and-copy (or copy-and-delete), based on a
theory grounded in emergent materialism, functionalism,
and psychological continuity theory. According to him, psychological
identity branches out, with each copy an authentic continuation of the
original, ensuring the persistence of the original consciousness even if
the substrate is destroyed in the process. The mind branches into
distinct paths, all of which are continuations of the uploaded person.
Ethical and legal implications
The "fading qualia" and "dancing qualia" thought experiments proposed by Chalmers
The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness. The neuroscience required to develop brain emulation would require
animal experimentation, first on invertebrates and then on small mammals
before moving on to humans. Sometimes the animals would just need to be
euthanized in order to extract, slice, and scan their brains, but
sometimes behavioral and in vivo measures would be required, which might cause pain to living animals.
In addition, the resulting animal emulations might suffer, depending on one's views about consciousness. Bancroft argues for the plausibility of consciousness in brain simulations based on David Chalmers's "fading qualia" thought experiment. Bancroft concludes: "If, as I argue above, a sufficiently detailed computational simulation
of the brain is potentially operationally equivalent to an organic
brain, it follows that we must consider extending protections against
suffering to simulations." Chalmers has argued that such virtual
realities would be genuine realities. But if mind uploading occurs and the uploads are not conscious, there may be a significant opportunity cost. In Superintelligence, Nick Bostrom expresses concern that a "Disneyland without children" could be built: a civilization of great apparent activity and wealth with no conscious beings present to experience it.
It might help reduce emulation suffering to develop virtual
equivalents of anesthesia and to omit processing related to pain and/or
consciousness. But some experiments might require a fully functioning
and suffering animal emulation. Animals might also suffer by accident
due to flaws and lack of insight into what parts of their brains are
suffering. Questions also arise about the moral status of partial brain emulations
and about creating neuromorphic emulations inspired by biological
brains but differently built.
Brain emulations could be erased by computer viruses or malware
without destroying the underlying hardware. This may make assassination
easier than for physical humans. The attacker might take the computing
power for its own use.
Many questions arise regarding the legal personhood of emulations. Would they be given the rights of biological humans? If a person makes
an emulated copy of themselves and then dies, does the emulation inherit
their property and official positions? Could the emulation ask to "pull
the plug" when its biological version was terminally ill or in a coma?
Would it help to treat emulations as adolescents for a few years so that
the biological creator would maintain temporary control? Would criminal
emulations receive the death penalty, or would they be given forced
data modification as a form of "rehabilitation"? Could an upload have
marriage and child-care rights?
If simulated minds had rights, it might be difficult to ensure
their protection. For example, social science researchers might be
tempted to secretly expose simulated minds, or whole isolated societies
of simulated minds, to controlled experiments in which many copies of
the same minds are exposed (serially or simultaneously) to different
test conditions.
Public attitudes
Research on public attitudes toward mind uploading reveals complex
psychological factors. A 2018 study by Michael Laakasuo found that
approval or disapproval of mind uploading technology is significantly
predicted by individual differences in disgust sensitivity, particularly
sexual disgust and purity moral orientations, rather than rational
thinking styles or personality traits. The research found that:
People with higher "purity" moral foundations (from Moral Foundations Theory) were more likely to condemn mind-uploading technology.
Science fiction familiarity strongly predicted approval of mind uploading.
Those with death anxiety were more accepting of mind uploading, viewing it as life extension rather than death.
The target platform (computer, android, artificial brain) did not
affect moral judgments; only the act of transfer itself mattered.
The study suggests mind uploading may have broader appeal than
previously thought among those seeking life extension or immortality.
Another study by Laakasuo found that attitudes toward mind
uploading are predicted by belief in an afterlife; the existence of mind
uploading technology may threaten religious and spiritual notions of
immortality and divinity.
Political and economic implications
Emulations might be preceded by a technological arms race driven by first-strike advantages.
Their emergence and existence may increase the risk of war, driven by
factors such as inequality, power struggles, strong loyalty and
willingness to die among emulations, and new forms of racism,
xenophobia, and
religious prejudice. If emulations run much faster than humans, there might not be enough
time for human leaders to make wise decisions or negotiate. Humans might
react violently to the growing power of emulations, especially if it
reduces human wages. Emulations might not trust each other, and even
well-intentioned defensive measures might be interpreted as offense.
Robin Hanson's book The Age of Em
poses many hypotheses on the nature of a society of mind uploads,
including that the most common minds would be copies of adults with
personalities conducive to long hours of productive specialized work.
Emulation timelines and AI risk
Kenneth D. Miller, a professor of neuroscience at Columbia University
and a co-director of the Center for Theoretical Neuroscience, has
raised doubts about the practicality of mind uploading. His major
argument is that reconstructing neurons and their connections is itself a
formidable task, but far from sufficient. The brain's operation depends
on the dynamics of electrical and biochemical signal exchange between
neurons, so capturing them in a single "frozen" state may be
insufficient. In addition, the nature of these signals may require
modeling at the molecular level and beyond. Therefore, while not
rejecting the idea in principle, Miller believes that the complexity of
the "absolute" duplication of a mind will be insurmountable for several
hundred years.
The neuroscience and computer-hardware technologies that may make
brain emulation possible are widely desired for other reasons, and
their development will presumably continue. Brain emulations may also
exist for only a brief but significant period on the way to
non-emulation-based human-level AI. If emulation technology arrives, it is debatable whether its advance should be accelerated or slowed.
Arguments for speeding up brain-emulation research:
If neuroscience rather than computing power is the bottleneck to
brain emulation, emulation advances may be more erratic and
unpredictable, depending on when new scientific discoveries happen. Limited computing power would mean the first emulations would run
slower and so would be easier to adapt to, and there would be more time
for the technology to transition through society.
Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, which could increase the "computing overhang" from excess hardware relative to neuroscience.
If one AI-development group had a lead in emulation technology, it
would have more subjective time to win an arms race to build the first
superhuman AI. Because it would be less rushed, it would have more
freedom to consider AI risks.
Arguments for slowing brain-emulation research:
Greater investment in brain emulation and associated cognitive
science might enhance AI researchers' ability to create "neuromorphic"
(brain-inspired) algorithms, such as neural networks, reinforcement
learning, and hierarchical perception. This could accelerate risks from uncontrolled AI. Participants at a 2011 AI workshop estimated an 85% probability that
neuromorphic AI would arrive before brain emulation. This was based on
the idea that brain emulation would require understanding of the
workings and functions of the brain's components, along with the
technological know-how to emulate neurons. But reverse-engineering the Microsoft Windows
code base is already hard, and reverse-engineering the brain is likely
much harder. By a very narrow margin, the participants leaned toward the
view that accelerating brain emulation would increase expected AI risk.
Waiting might give society more time to think about the consequences
of brain emulation and develop institutions to improve cooperation.
Emulation research would also accelerate neuroscience as a whole,
which might accelerate medical advances, cognitive enhancement, lie
detectors, and psychological manipulation.
Emulations might be easier to control than de novo AI because:
Human abilities, behavioral tendencies, and vulnerabilities are
more thoroughly understood, thus control measures might be more
intuitive and easier to plan.
Emulations could more easily inherit human motivations.
Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce the risk of their rapid takeoff. Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition. Unlike AI, an emulation would not be able to rapidly expand beyond the size of a human brain.
Emulations running at digital speeds would have less of an intelligence differential vis-à-vis AI and so might more easily control AI.
As counterpoint to these considerations, Bostrom notes some downsides:
Even if human behavior is better understood, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement.
Emulations may not inherit all human motivations. Perhaps they would
inherit people's darker motivations or would behave abnormally in the
unfamiliar environment of cyberspace.
Even if there is a slow takeoff toward emulations, there would still be a second transition to de novo AI later. Two intelligence explosions may mean more total risk.
Because of the postulated difficulties that a whole brain emulation-generated superintelligence would pose for the control problem, computer scientist Stuart J. Russell, in his book Human Compatible, rejects creating one, calling it "so obviously a bad idea".
Advocates
In 1979, Hans Moravec described and endorsed mind uploading performed by a robot brain surgeon that gradually transfers the brain's functions to a computer. He gave a similar description in 1988, calling the process "transmigration".
Ray Kurzweil, director of engineering at Google,
has long predicted that people will be able to upload their brains to
computers and become "digitally immortal" by 2045. For example, he made
this claim in a 2013 speech at the Global Futures 2045 International Congress in New York, an event organized around a similar set of beliefs. Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small website called the Mind Uploading Home Page, and began advocating the idea in cryonics
circles and elsewhere. That site has not been updated recently, but it
has spawned other sites, including MindUploading.org, run by Randal A. Koene,
who also moderates a mailing list on the topic. These advocates see
mind uploading as a medical procedure that could save countless lives.
Many transhumanists look forward to the development and deployment of mind-uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law.
Michio Kaku, in collaboration with Science, hosted the documentary Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole-brain emulation using an advanced MRI machine may enable people to be transported vast distances at near light-speed.
Gregory S. Paul's and Earl D. Cox's book Beyond Humanity: CyberEvolution and Future Minds is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living
deals extensively with uploading from the perspective of distributed
embodiment, arguing for example that humans are part of the "artificial
life phenotype". Doyle's vision reverses the polarity on uploading, with
artificial life forms such as uploads actively seeking out biological
embodiment as part of their reproductive strategy.
In fiction
Mind uploading—transferring one's personality to a computer—appears in several works of science fiction. It is distinct from transferring a consciousness from one human body to another. It is sometimes applied to a single person and sometimes to an entire society. Recurring themes in these stories include whether the computerized mind is truly conscious, and if so, whether identity is preserved. It is a common feature of the cyberpunk subgenre, sometimes taking the form of digital immortality.