
Thursday, July 5, 2018

Bayesian approaches to brain function

From Wikipedia, the free encyclopedia
Bayesian approaches to brain function investigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed by Bayesian statistics. This term is used in behavioural sciences and neuroscience and studies associated with this term often strive to explain the brain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.
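
As an illustration of the core idea (a minimal sketch with invented numbers, not any specific published model), the single update step such accounts attribute to the nervous system is just Bayes' rule:

```python
# Minimal sketch of Bayesian belief updating (illustrative numbers only).
# A "prior" over two hypotheses is combined with the likelihood of a noisy
# sensory observation to yield an updated "posterior" belief.

prior = {"stimulus_present": 0.2, "stimulus_absent": 0.8}
# Probability of the observed sensory data under each hypothesis.
likelihood = {"stimulus_present": 0.9, "stimulus_absent": 0.3}

# Bayes' rule: posterior is proportional to likelihood times prior,
# normalized over the hypothesis space.
unnormalized = {h: likelihood[h] * prior[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior)  # {'stimulus_present': ~0.43, 'stimulus_absent': ~0.57}
```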

Origins

This field of study has its historical roots in numerous disciplines including machine learning, experimental psychology and Bayesian statistics. As early as the 1860s, beginning with the work of Hermann Helmholtz in experimental psychology, the brain's ability to extract perceptual information from sensory data was modeled in terms of probabilistic estimation.[5][6] The basic idea is that the nervous system needs to organize sensory data into an accurate internal model of the outside world.

Bayesian probability has been developed by many important contributors. Pierre-Simon Laplace, Thomas Bayes, Harold Jeffreys, Richard Cox and Edwin Jaynes developed mathematical techniques and procedures for treating probability as the degree of plausibility that could be assigned to a given supposition or hypothesis based on the available evidence.[7] In 1988 Edwin Jaynes presented a framework for using Bayesian probability to model mental processes.[8] It was thus realized early on that the Bayesian statistical framework holds the potential to lead to insights into the function of the nervous system.

This idea was taken up in research on unsupervised learning, in particular the analysis-by-synthesis approach, a branch of machine learning.[9][10] In 1983 Geoffrey Hinton and colleagues proposed that the brain could be seen as a machine making decisions based on the uncertainties of the outside world.[11] During the 1990s researchers including Peter Dayan, Geoffrey Hinton and Richard Zemel proposed that the brain represents knowledge of the world in terms of probabilities and made specific proposals for tractable neural processes that could manifest such a Helmholtz machine.[12][13][14]

Psychophysics

A wide range of studies interpret the results of psychophysical experiments in light of Bayesian perceptual models. Many aspects of human perceptual and motor behavior can be modeled with Bayesian statistics. This approach, with its emphasis on behavioral outcomes as the ultimate expressions of neural information processing, is also known for modeling sensory and motor decisions using Bayesian decision theory. Examples are the work of Landy,[15][16] Jacobs,[17][18] Jordan, Knill,[19][20] Kording and Wolpert,[21][22] and Goldreich.[23][24][25]
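
A standard example from this literature is optimal cue combination: two noisy Gaussian cues are averaged with weights proportional to their reliabilities. The sketch below uses invented numbers and is only meant to show the form of the model:

```python
# Hedged sketch of Bayesian cue combination, the style of model used in the
# psychophysics work cited above. All numbers are invented.

def combine_cues(mu1, var1, mu2, var2):
    """Precision-weighted average of two independent Gaussian cues."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)   # weight grows with reliability
    mu = w1 * mu1 + (1 - w1) * mu2
    var = 1 / (1 / var1 + 1 / var2)   # combined estimate beats either cue alone
    return mu, var

# Vision says the object is 10.0 cm wide (variance 1.0); touch says 12.0 cm
# but is noisier (variance 4.0). The optimal estimate leans toward vision.
mu, var = combine_cues(10.0, 1.0, 12.0, 4.0)
print(mu, var)  # 10.4 0.8
```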

Neural coding

Many theoretical studies ask how the nervous system could implement Bayesian algorithms. Examples are the work of Pouget, Zemel, Deneve, Latham, Hinton and Dayan. George and Hawkins published a paper that establishes a model of cortical information processing called hierarchical temporal memory that is based on a Bayesian network of Markov chains. They further map this mathematical model to the existing knowledge about the architecture of the cortex and show how neurons could recognize patterns by hierarchical Bayesian inference.[26]
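
A drastically simplified sketch of hierarchical Bayesian pattern recognition (a toy two-level network with invented probabilities, not George and Hawkins' actual model): a higher-level "cause" node is inferred from evidence at lower-level "feature" nodes.

```python
# Toy two-level Bayesian network (invented probabilities): infer a
# high-level "cause" from evidence at two low-level "feature" nodes.

p_cause = {"face": 0.5, "not_face": 0.5}   # prior at the higher level
p_eye   = {"face": 0.9, "not_face": 0.2}   # P(eye feature | cause)
p_mouth = {"face": 0.8, "not_face": 0.3}   # P(mouth feature | cause)

def posterior(eye_seen, mouth_seen):
    """Exact enumeration, assuming features are conditionally independent."""
    scores = {}
    for c, prior in p_cause.items():
        like_eye = p_eye[c] if eye_seen else 1 - p_eye[c]
        like_mouth = p_mouth[c] if mouth_seen else 1 - p_mouth[c]
        scores[c] = prior * like_eye * like_mouth
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(posterior(True, True))   # belief in "face" rises to ~0.92
print(posterior(True, False))  # conflicting evidence tempers the belief
```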

Electrophysiology

A number of recent electrophysiological studies focus on the representation of probabilities in the nervous system. Examples are the work of Shadlen and Schultz.

Predictive coding

Predictive coding is a neurobiologically plausible scheme for inferring the causes of sensory input based on minimizing prediction error.[27] These schemes are related formally to Kalman filtering and other Bayesian update schemes.
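
A one-dimensional sketch of the general scheme (illustrative values, not any specific published model): an internal estimate is nudged until precision-weighted prediction errors are minimized, and the fixed point it reaches is the same precision-weighted compromise a Kalman-style update would produce.

```python
# Sketch of a predictive-coding-style update loop. The estimate mu of a
# hidden cause is revised to reduce precision-weighted prediction errors.

sensory_input = 2.0             # observed signal
prior_mu = 0.5                  # prior expectation about the hidden cause
pi_sense, pi_prior = 1.0, 0.5   # precisions (inverse variances)

mu = 0.0                        # current estimate of the hidden cause
rate = 0.1                      # integration rate of the update loop

for _ in range(200):
    err_sense = sensory_input - mu    # sensory prediction error
    err_prior = prior_mu - mu         # deviation from the prior
    mu += rate * (pi_sense * err_sense + pi_prior * err_prior)

# Converges to (pi_sense*2.0 + pi_prior*0.5) / (pi_sense + pi_prior) = 1.5
print(mu)
```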

Free energy

During the 1990s some researchers such as Geoffrey Hinton and Karl Friston began examining the concept of free energy as a computationally tractable measure of the discrepancy between actual features of the world and representations of those features captured by neural network models.[28] A synthesis has been attempted recently[29] by Karl Friston, in which the Bayesian brain emerges from a general principle of free energy minimisation.[30] In this framework, both action and perception are seen as a consequence of suppressing free energy, leading to perceptual[31] and active inference[32] and a more embodied (enactive) view of the Bayesian brain. Using variational Bayesian methods, it can be shown how internal models of the world are updated by sensory information to minimize free energy, or the discrepancy between sensory input and predictions of that input. This can be cast (in neurobiologically plausible terms) as predictive coding or, more generally, Bayesian filtering.
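
For concreteness, here is a minimal numerical sketch (a two-state toy model with invented probabilities, not Friston's full formalism) of the quantity being minimised: free energy upper-bounds surprise and shrinks toward it as the approximate posterior q approaches the true posterior.

```python
import math

# F = E_q[ln q(s) - ln p(o, s)] is an upper bound on surprise, -ln p(o),
# and approaches it as q approaches the true posterior p(s | o).

p_state = [0.5, 0.5]   # prior over two hidden states
p_obs   = [0.9, 0.2]   # P(observation | state)

def free_energy(q):
    f = 0.0
    for s in range(2):
        joint = p_state[s] * p_obs[s]          # p(o, s)
        if q[s] > 0:
            f += q[s] * (math.log(q[s]) - math.log(joint))
    return f

surprise = -math.log(sum(p_state[s] * p_obs[s] for s in range(2)))
for q in ([0.5, 0.5], [0.7, 0.3], [0.82, 0.18]):
    print(q, round(free_energy(q), 4))
print("surprise:", round(surprise, 4))  # F approaches this bound from above
```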

According to Friston:[33]
"The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimise free energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system’s state and structure encode an implicit and probabilistic model of the environment."[33]
This area of research was summarized in terms understandable by the layperson in a 2008 article in New Scientist that offered a unifying theory of brain function.[34] Friston makes the following claims about the explanatory power of the theory:
"This model of brain function can explain a wide range of anatomical and physiological aspects of brain systems; for example, the hierarchical deployment of cortical areas, recurrent architectures using forward and backward connections and functional asymmetries in these connections. In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena like repetition suppression, mismatch negativity and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, e.g., priming, and global precedence."[33]
"It is fairly easy to show that both perceptual inference and learning rest on a minimisation of free energy or suppression of prediction error."[33]

Quantum mind

From Wikipedia, the free encyclopedia

The quantum mind or quantum consciousness[1] group of hypotheses proposes that classical mechanics cannot explain consciousness. It posits that quantum-mechanical phenomena, such as quantum entanglement and superposition, may play an important part in the brain's function and could form part of the basis of an explanation of consciousness.

Hypotheses have been proposed about ways for quantum effects to be involved in the process of consciousness, but even those who advocate them admit that the hypotheses remain unproven, and possibly unprovable. Some of the proponents have proposed experiments that could demonstrate quantum consciousness, but it has not yet been possible to perform them.

Terms used in the theory of quantum mechanics can be misinterpreted by laypeople in ways that sound mystical or religious and therefore seem related to consciousness; such readings are not justified by the theory itself. According to Sean Carroll, "No theory in the history of science has been more misused and abused by cranks and charlatans—and misunderstood by people struggling in good faith with difficult ideas—than quantum mechanics."[2] Lawrence Krauss says, "No area of physics stimulates more nonsense in the public arena than quantum mechanics."[3] Some proponents of pseudoscience use quantum mechanical terms in an effort to justify their statements, but this effort misrepresents the physical theory. Quantum mind theories of consciousness built on this kind of misinterpretation are not supported by scientific methods or empirical experiments.

History

Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron."[4]

Other contemporary physicists and philosophers considered these arguments to be unconvincing.[5] Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons."[6]

David Chalmers argued against quantum consciousness. He instead discussed how quantum mechanics may relate to dualistic consciousness.[7] Chalmers is skeptical of the ability of any new physics to resolve the hard problem of consciousness.[8][9]

Quantum mind approaches

Bohm

David Bohm viewed quantum theory and relativity as contradictory, implying the existence of a more fundamental level in the universe.[10] He claimed both quantum theory and relativity pointed towards this deeper theory, which he formulated as a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed implicate order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.

Bohm discussed the experience of listening to music. He believed the feeling of movement and change that make up our experience of music derive from holding the immediate past and the present in the brain together. The musical notes from the past are transformations rather than memories. The notes that were implicate in the immediate past become explicate in the present. Bohm viewed this as consciousness emerging from the implicate order.

Bohm saw the movement, change or flow, and the coherence of experiences, such as listening to music, as a manifestation of the implicate order. He claimed to derive evidence for this from Jean Piaget's[11] work on infants. He held these studies to show that young children learn about time and space because they have a "hard-wired" understanding of movement as part of the implicate order. He compared this "hard-wiring" to Chomsky's theory that grammar is "hard-wired" into human brains.

Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness.[10] Bohm later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.[12]

According to philosopher Paavo Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level."[13]

Penrose and Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as Orchestrated Objective Reduction (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. The theory was reviewed and updated by the authors in late 2013.[14][15]

Penrose's argument stemmed from Gödel's incompleteness theorems. In Penrose's first book on consciousness, The Emperor's New Mind (1989),[16] he argued that while a formal system cannot prove its own consistency, truths that are unprovable within such a system can nevertheless be seen to be true by human mathematicians.[17] He took this disparity to mean that human mathematicians are not formal proof systems and are not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on fallacious equivocation on the meaning of computation.[18] In the same book, Penrose wrote, "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."[16]:p.400
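
The logical skeleton of this claim can be written compactly, using a standard statement of Gödel's second incompleteness theorem (the notation here is a generic textbook formulation, not Penrose's own):

```latex
% Second incompleteness theorem: for any consistent formal system F
% strong enough to express arithmetic,
F \ \text{is consistent} \;\Longrightarrow\; F \nvdash \mathrm{Con}(F).
% Penrose's gloss: a mathematician who accepts F as sound thereby sees
% that Con(F) is true, so this insight cannot be an output of F itself.
```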

Penrose identified wave function collapse as the only possible physical basis for a non-computable process. Dissatisfied with its randomness, Penrose proposed a new form of wave function collapse that occurred in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length they become unstable and collapse.[19] Penrose suggested that objective reduction represented neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derived.[19]
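
In the Orch-OR literature this criterion is usually summarised as a collapse timescale set by gravitational self-energy; paraphrasing the published formula:

```latex
% Expected lifetime of a superposition before objective reduction:
\tau \;\approx\; \frac{\hbar}{E_G},
% where E_G is the gravitational self-energy of the difference between the
% two superposed mass distributions; larger, more widely separated
% superpositions have larger E_G and therefore collapse sooner.
```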

Hameroff hypothesized that microtubules would be suitable hosts for quantum behavior.[20] Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and that may contain delocalized pi electrons. Tubulins have other smaller non-polar regions that contain pi electron-rich indole rings separated by only about 2 nm. Hameroff proposed that these electrons are close enough to become entangled.[21] Hameroff originally suggested the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited.[22] He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too was experimentally discredited.[23]

Orch-OR has, however, made numerous false biological predictions and is not an accepted model of brain physiology;[24] a link between physics and neuroscience remains missing.[25] For instance, the proposed predominance of 'A' lattice microtubules, which would be more suitable for information processing, was falsified by Kikkawa et al.,[26][27] who showed that all in vivo microtubules have a 'B' lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified.[28] Orch-OR predicted that microtubule coherence reaches the synapses via dendritic lamellar bodies (DLBs); however, De Zeeuw et al. showed this to be impossible,[29] since DLBs are located micrometers away from gap junctions.[30]

In January 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013[31] corroborates the Orch-OR theory.[15][32]

Although these theories are stated in a scientific framework, it is difficult to separate them from the personal opinions of the scientists who propose them, opinions often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote,
my own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, 'Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot.' Other people would say, 'No, you can't say it feels something merely because it behaves as though it feels something.' My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was — which I say it couldn't be, if it's entirely computationally controlled.[33]
Penrose continues,
A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either — although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior — that is, where quantum measurement comes in.[34]
In response, W. Daniel Hillis replied, "Penrose has committed the classical mistake of putting humans at the center of the universe. His argument is essentially that he can't imagine how the mind could be as complicated as it is without having some magic elixir brought in from some new principle of physics, so therefore it must involve that. It's a failure of Penrose's imagination.... It's true that there are unexplainable, uncomputable things, but there's no reason whatsoever to believe that the complex behavior we see in humans is in any way related to uncomputable, unexplainable things."[34]

Lawrence Krauss is also blunt in criticizing Penrose's ideas. He said, "Well, Roger Penrose has given lots of new-age crackpots ammunition by suggesting that at some fundamental scale, quantum mechanics might be relevant for consciousness. When you hear the term 'quantum consciousness,' you should be suspicious.... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system."[3]

Umezawa, Vitiello, Freeman

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage.[35][36] Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain.[37][38][39] Their quantum field theory models of brain dynamics are fundamentally different from the Penrose-Hameroff theory.

Pribram, Bohm, Kak

Karl Pribram's holonomic brain theory (quantum holography) invoked quantum mechanics to explain higher order processing by the mind.[40][41] He argued that his holonomic model solved the binding problem.[42] Pribram collaborated with Bohm in his work on quantum approaches to mind, and he provided evidence on how much of the processing in the brain was done in wholes.[43] He proposed that ordered water at dendritic membrane surfaces might support Bose–Einstein condensation, which in turn could support quantum dynamics.[44]

Although Subhash Kak's work is not directly related to that of Pribram, he likewise proposed that the physical substrate of neural networks has a quantum basis,[45][46] but asserted that the quantum mind has machine-like limitations.[47] He points to a role for quantum theory in the distinction between machine intelligence and biological intelligence, while arguing that quantum theory by itself cannot explain all aspects of consciousness.[48][49] He has proposed that the mind remains oblivious to its quantum nature because of the principle of veiled nonlocality.[50][51]

Stapp

Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues from the Orthodox Quantum Mechanics of John von Neumann that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse therefore takes place in the expectations that the observer associates with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev.[52] Georgiev[53][54][55] criticized Stapp's model in two respects:
  • Stapp's mind does not have its own wavefunction or density matrix, but nevertheless can act upon the brain using projection operators. Such usage is not compatible with standard quantum mechanics because one can attach any number of ghostly minds to any point in space that act upon physical quantum systems with any projection operators. Therefore, Stapp's model negates "the prevailing principles of physics".[53]
  • Stapp's claim that the quantum Zeno effect is robust against environmental decoherence directly contradicts a basic theorem of quantum information theory: acting with projection operators upon the density matrix of a quantum system can only increase the system's von Neumann entropy (a numeric illustration appears after this list).[53][54]
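
To illustrate the theorem behind Georgiev's second objection, here is a self-contained numeric sketch (ours, not taken from the cited papers): a non-selective projective measurement on a pure superposition raises the von Neumann entropy from 0 to ln 2.

```python
import numpy as np

# Von Neumann entropy S(rho) = -Tr(rho ln rho), computed from eigenvalues.
def entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]   # convention: 0 * ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

# Pure superposition |+> = (|0> + |1>) / sqrt(2): entropy 0.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

# Non-selective measurement in the computational basis:
# rho -> P0 rho P0 + P1 rho P1.
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

print(entropy(rho))        # 0.0
print(entropy(rho_after))  # ~0.693 = ln 2: the entropy has increased
```
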
Stapp has responded to both of Georgiev's objections.[56][57]

David Pearce

British philosopher David Pearce defends what he calls physicalistic idealism ("'Physicalistic idealism' is the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions [...]") and has conjectured that unitary conscious minds are physical states of quantum coherence (neuronal superpositions).[58][59][60][61] This conjecture is, according to Pearce, amenable to falsification, unlike most theories of consciousness, and Pearce has outlined an experimental protocol describing how the hypothesis could be tested using matter-wave interferometry to detect nonclassical interference patterns of neuronal superpositions at the onset of thermal decoherence.[62] Pearce admits that his ideas are "highly speculative," "counterintuitive," and "incredible."[60]

Criticism

These quantum mind hypotheses remain speculative, as Penrose and Pearce have admitted in their discussions. Until they make predictions that are tested by experiment, the hypotheses are not grounded in empirical evidence. According to Lawrence Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact we can make weird quantum phenomena happen. But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it."[3]

The process of testing the hypotheses with experiments is fraught with problems, including conceptual/theoretical, practical, and ethical issues.

Conceptual problems

The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett's multiple drafts model, described in his 1991 book Consciousness Explained, makes no appeal to quantum effects.[63] A philosophical argument on either side is not scientific proof, although philosophical analysis can indicate key differences between the types of models and show what kinds of experimental differences might be observed. But since there is no clear consensus among philosophers, there is no conceptual support for the claim that a quantum mind theory is needed.

There are computers that are specifically designed to compute using quantum mechanical effects. Quantum computing is computing using quantum-mechanical phenomena such as superposition and entanglement.[64] Quantum computers differ from binary digital electronic computers based on transistors: whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits (qubits), which can be in superpositions of states. One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.[65] As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.[66] There are no obvious analogies between the functioning of quantum computers and the human brain. Some hypothetical models of the quantum mind have proposed mechanisms for maintaining quantum coherence in the brain, but they have not been shown to operate.
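
The decoherence problem can be made concrete with a toy single-qubit density matrix whose off-diagonal "coherence" term decays through interaction with the environment (illustrative rate; not a model of any real device, let alone the brain):

```python
import numpy as np

# Qubit prepared in the superposition (|0> + |1>) / sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

def dephase(rho, gamma, t):
    """Pure dephasing: off-diagonal elements decay as exp(-gamma * t)."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma * t)
    out[1, 0] *= np.exp(-gamma * t)
    return out

for t in (0.0, 1.0, 10.0):
    coherence = abs(dephase(rho, gamma=1.0, t=t)[0, 1])
    print(t, round(coherence, 4))
# 0.5 -> 0.184 -> ~0: the superposition degrades into a classical mixture
```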

Quantum entanglement is a physical phenomenon often invoked for quantum mind models. This effect occurs when pairs or groups of particles interact so that the quantum state of each particle cannot be described independently of the other(s), even when the particles are separated by a large distance. Instead, a quantum state has to be described for the whole system. Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles are found to be correlated. If one of the particles is measured, the same property of the other particle immediately adjusts so that the relevant conservation law is maintained. According to the formalism of quantum theory, the effect of measurement happens instantly, no matter how far apart the particles are.[67][68] It is not possible to use this effect to transmit classical information at faster-than-light speeds[69] (see Faster-than-light § Quantum mechanics). Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made[70] or the particles undergo random collisions or interactions. According to David Pearce, "In neuronal networks, ion-ion scattering, ion-water collisions, and long-range Coulomb interactions from nearby ions all contribute to rapid decoherence times; but thermally-induced decoherence is even harder experimentally to control than collisional decoherence." He anticipated that quantum effects would have to be measured in femtoseconds, a trillion times faster than the rate at which neurons function (milliseconds).[62]
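
A minimal sketch of what these correlations look like, sampling measurement outcomes of the Bell state (|00> + |11>)/sqrt(2) in a shared basis (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_bell_pair():
    """Born rule for (|00> + |11>)/sqrt(2): P(0,0) = P(1,1) = 1/2."""
    outcome = int(rng.integers(2))   # each side alone sees a fair coin
    return outcome, outcome          # but the two sides always agree

print([measure_bell_pair() for _ in range(8)])
# e.g. [(1, 1), (1, 1), (0, 0), ...]: correlated outcomes, yet carrying
# no controllable signal from one side to the other
```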

Another possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's life or death depended on the state of a radioactive atom, whether it had decayed and emitted radiation or not. According to Schrödinger, the Copenhagen interpretation implies that the cat remains both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics.[71] However, since Schrödinger's time, other interpretations of the mathematics of quantum mechanics have been advanced by physicists, some of which regard the "alive and dead" cat superposition as quite real.[72][73] Schrödinger's famous thought experiment poses the question, "when does a quantum system stop existing as a superposition of states and become one or the other?" In the same way, one can ask whether the brain's act of making a decision is analogous to a superposition of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. But even Schrödinger did not think this really happened to the cat; he did not think the cat was literally alive and dead at the same time. The analogy uses a formalism derived from quantum mechanics, but it does not indicate the actual mechanism by which the decision is made.

In this way, the idea is similar to quantum cognition. This field clearly distinguishes itself from the quantum mind in that it does not rely on the hypothesis that there is anything micro-physically quantum mechanical about the brain. Quantum cognition is based on the quantum-like paradigm,[74][75] generalized quantum paradigm,[76] or quantum structure paradigm,[77] the idea that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which the brain operates. For example, quantum cognition proposes that some decisions can be analyzed as if there were interference between two alternatives, but this is not a physical quantum interference effect.
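
The formal point that quantum cognition borrows is only this: when probabilities are computed from amplitudes, an interference term appears, so the classical law of total probability can seem to be violated. A sketch with invented numbers:

```python
import math

# Amplitudes for reaching decision D via two unresolved alternatives A and B.
amp_A = math.sqrt(0.3)   # |amp_A|^2 = P(D via A) = 0.3
amp_B = math.sqrt(0.2)   # |amp_B|^2 = P(D via B) = 0.2
theta = 2.0              # relative phase between the two "paths"

classical = 0.3 + 0.2    # law of total probability: 0.5
quantum_like = amp_A**2 + amp_B**2 + 2 * amp_A * amp_B * math.cos(theta)

print(classical, round(quantum_like, 3))
# 0.5 vs ~0.296: the interference term shifts the predicted probability.
# This is the "quantum-like" signature, not a physical quantum effect.
```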

Practical problems

Demonstrating a quantum mind effect by experiment is necessary. Is there a way to show that consciousness is impossible without a quantum effect? Can a sufficiently complex digital, non-quantum computer be shown to be incapable of consciousness? Perhaps a quantum computer will show that quantum effects are needed. In any case, complex computers, whether digital or quantum, may eventually be built, and such machines could demonstrate which type of computer is capable of conscious, intentional thought. But they do not exist yet, and no experimental test has been demonstrated.

Quantum mechanics is a mathematical model that can provide some extremely accurate numerical predictions. Richard Feynman called quantum electrodynamics, based on the quantum mechanics formalism, "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[78]:Ch1 So it is not impossible that the model could provide an accurate prediction about consciousness that would confirm that a quantum effect is involved. If the mind depends on quantum mechanical effects, the true proof is to find an experiment that provides a calculation that can be compared to an experimental measurement. It has to show a measurable difference between a classical computation result in a brain and one that involves quantum effects.

The main theoretical argument against the quantum mind hypothesis is the assertion that quantum states in the brain would lose coherency before they reached a scale where they could be useful for neural processing. This supposition was elaborated by Tegmark. His calculations indicate that quantum systems in the brain decohere at sub-picosecond timescales.[79][80] No brain response has been shown to produce computational results or reactions on so fast a timescale. Typical reactions are on the order of milliseconds, trillions of times longer than sub-picosecond timescales.[81]
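
Spelling out the orders of magnitude (Tegmark's published decoherence estimates versus typical neural dynamics):

```latex
% Tegmark estimated brain decoherence times of roughly 10^{-13} to 10^{-20} s,
% while neural dynamics unfold over roughly 10^{-3} s (milliseconds):
\frac{\tau_{\text{neural}}}{\tau_{\text{decoherence}}}
  \;\sim\; \frac{10^{-3}\,\mathrm{s}}{10^{-13}\ \text{to}\ 10^{-20}\,\mathrm{s}}
  \;\approx\; 10^{10}\ \text{to}\ 10^{17}.
```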

Daniel Dennett uses an experimental result concerning an optical illusion, which unfolds on a time scale of less than a second or so, in support of his multiple drafts model. In this experiment, two different colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change color as it moves across the visual field. A green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change color before the second light is observed.[63] Velmans argues that the cutaneous rabbit illusion, another illusion that happens in about a second, demonstrates that there is a delay while modelling occurs in the brain, a delay discovered by Libet.[82] These slow illusions, unfolding over timescales approaching a second, do not support the proposal that the brain functions on the picosecond time scale.

According to David Pearce, a demonstration of picosecond effects is "the fiendishly hard part – feasible in principle, but an experimental challenge still beyond the reach of contemporary molecular matter-wave interferometry. ...The conjecture predicts that we'll discover the interference signature of sub-femtosecond macro-superpositions."[62]

Penrose says,
The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.[34]
A demonstration of a quantum effect in the brain would have to address this problem, explain why it is not relevant, or show that the brain somehow circumvents the loss of quantum coherence at body temperature. As Penrose proposes, it may require a new type of physical theory.

Ethical problems

Can self-awareness, or understanding of a self in the surrounding environment, be achieved by a classical parallel processor, or are quantum effects needed to have a sense of "oneness"? According to Lawrence Krauss, "You should be wary whenever you hear something like, 'Quantum mechanics connects you with the universe' ... or 'quantum mechanics unifies you with everything else.' You can begin to be skeptical that the speaker is somehow trying to use quantum mechanics to argue fundamentally that you can change the world by thinking about it."[3] A subjective feeling is not sufficient to make this determination. Humans don't have a reliable subjective feeling for how we perform many of our functions. According to Daniel Dennett, "On this topic, Everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable."[83]

Since humans are the only animals known to be conscious, performing experiments to demonstrate quantum effects in consciousness requires experimentation on a living human brain. This is not automatically excluded or impossible, but it seriously limits the kinds of experiments that can be done. Studies of the ethics of brain research are being actively solicited[84] by the BRAIN Initiative, a U.S. federal government funded effort to document the connections of neurons in the brain.

An ethically objectionable practice of some proponents of quantum mind theories is the use of quantum mechanical terms to make an argument sound more impressive, even when they know those terms are irrelevant. Dale DeBakcsy notes that "trendy parapsychologists, academic relativists, and even the Dalai Lama have all taken their turn at robbing modern physics of a few well-sounding phrases and stretching them far beyond their original scope in order to add scientific weight to various pet theories."[85] At the very least, these proponents must state clearly whether the quantum formalism is being used as an analogy or as an actual physical mechanism, and what evidence supports that use. An ethical statement by a researcher should specify what kind of relationship their hypothesis has to the physical laws.

Misleading statements of this type have been made by, for example, Deepak Chopra. Chopra has commonly referred to topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself," as determined by one's state of mind.[86] Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings.[87] Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are literally based on the same principles as quantum mechanics.[88] This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body.[88] Chopra said, "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there's a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics."[89] On the other hand, he also claims, "[Quantum effects are] just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness."[89] In his book Quantum Healing, Chopra concluded that quantum entanglement links everything in the universe, and therefore it must create consciousness.[90] In either case, the references to the word "quantum" do not mean what a physicist would mean by them, and arguments that use the word "quantum" should not be taken as scientifically proven.

Chris Carter includes statements in his book, Science and Psychic Phenomena,[91] of quotes from quantum physicists in support of psychic phenomena. In a review of the book, Benjamin Radford wrote that Carter used such references to "quantum physics, which he knows nothing about and which he (and people like Deepak Chopra) love to cite and reference because it sounds mysterious and paranormal.... Real, actual physicists I've spoken to break out laughing at this crap.... If Carter wishes to posit that quantum physics provides a plausible mechanism for psi, then it is his responsibility to show that, and he clearly fails to do so."[92] Sharon Hill has studied amateur paranormal research groups, and these groups like to use "vague and confusing language: ghosts 'use energy,' are made up of 'magnetic fields', or are associated with a 'quantum state.'"[93][94]

Statements like these about quantum mechanics indicate a temptation to misinterpret technical, mathematical terms like entanglement in terms of mystical feelings. This approach can be interpreted as a kind of scientism, using the language and authority of science when the scientific concepts do not apply.

A larger problem in the popular press is that quantum mind hypotheses are extracted without scientific support or justification and used to support areas of pseudoscience. For example, quantum entanglement refers to a connection between two particles that share a property such as angular momentum; when the particles decohere through interaction with their surroundings, they are no longer entangled. Extrapolating from the entanglement of two elementary particles to the functioning of neurons in the brain is not simple: it is a long chain of reasoning from entangled elementary particles to a macroscopic effect on human consciousness. It would also be necessary to show how sensory inputs affect the coupled particles and how computation is then accomplished.

Perhaps the final question is, what difference does it make if quantum effects are involved in computations in the brain? It is already known that quantum mechanics plays a role in the brain, since quantum mechanics determines the shapes and properties of molecules like neurotransmitters and proteins, and these molecules affect how the brain works. This is the reason that drugs such as morphine affect consciousness. As Daniel Dennett said, "quantum effects are there in your car, your watch, and your computer. But most things — most macroscopic objects — are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them."[34] Lawrence Krauss said, "We're also connected to the universe by gravity, and we're connected to the planets by gravity. But that doesn't mean that astrology is true.... Often, people who are trying to sell whatever it is they're trying to sell try to justify it on the basis of science. Everyone knows quantum mechanics is weird, so why not use that to justify it? ... I don't know how many times I've heard people say, 'Oh, I love quantum mechanics because I'm really into meditation, or I love the spiritual benefits that it brings me.' But quantum mechanics, for better or worse, doesn't bring any more spiritual benefits than gravity does."[3]

But it appears that these molecular quantum effects are not what the proponents of the quantum mind are interested in. Proponents seem to want to use the nonlocal, nonclassical aspects of quantum mechanics to connect the human consciousness to a kind of universal consciousness or to long-range supernatural abilities. Although it isn't impossible that these effects may be observed, they have not been found at present, and the burden of proof is on those who claim that these effects exist. The ability of humans to transfer information at a distance without a known classical physical mechanism has not been shown.

Preparing for our posthuman future of artificial intelligence


By David Brin
March 9, 2017
Original link:  http://www.kurzweilai.net/preparing-for-our-posthuman-future-of-artificial-intelligence
“Each generation imagines itself to be more intelligent than the one that went before it, and wiser than the one that comes after it.” – George Orwell
What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: “Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century.”


A lot of folks are earnestly exploring the topic. “Will scientists soon be able to create supercomputers that can read a newspaper with understanding, or write a news story, or create novels, or even formulate laws?” asks J. Storrs Hall in Beyond AI: Creating the Conscience of the Machine (2007). “And if machine intelligence advances beyond human intelligence, will we need to start talking about a computer’s intentions?”

Sharing this concern, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator founder Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research — and its products — accountable by maximizing transparency and openness.

Among the most worried is Swiss author Gerd Leonhard, whose new book Technology Vs. Humanity: The Coming Clash Between Man and Machine coins an interesting term, "androrithm," to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable automaticity of reflex, and Leonhard asks: "Will we live in a world where data and algorithms triumph over androrithms… i.e., all that stuff that makes us human?"

Will we see the explosive or exponential transitions predicted by Vernor Vinge, who gave “singularity” its modern meaning, or as championed by Ray Kurzweil? Day-in, day-out, we are only somewhat aware of rapid change, since we swim along inside its current. But Leonhard illustrates how swiftly a singularity crisis may come on, by referring to a line from Ernest Hemingway’s The Sun Also Rises:

“How did you go bankrupt?”
“Two ways. Gradually and then suddenly.”

Comments Leonhard:
"Exponentiality and the 'gradually then suddenly' phenomenon are essential to understand when creating our future… Increasingly, we will see humble beginnings of a huge opportunity or threat. And then, all of a sudden, it is either gone and forgotten or it is here, now, and much bigger than imagined. Think of solar energy, digital currencies, or autonomous vehicles: All took a long time to play out, but all of a sudden, they're here and they're roaring. Those who adapt too slowly or fail to foresee the pivot points will suffer the consequences."
He adds: “wait and see is very likely going to mean waiting to become irrelevant.”

Leonhard expresses urgency for civilization to apply humanist values to the coming transition. Unlike Francis Fukuyama, whose Our Posthuman Future exudes loathing for tech-driven disruption of old ways and urges renunciation, Leonhard accepts that major changes are inevitable and won't be all-bad. He is friendly to many in the "Humanity Plus" community and shows an awareness of science fiction (SF) as a medium for scenario exploration.

(I do find it troubling that so many pundits give nods toward SF, yet seem to have read nothing since William Gibson’s Neuromancer, whose simplistic preachings and redolent cynicism now seem rather quaint, unhelpful, and long in the tooth. That perennial citation is starting to seem perfunctory, even discrediting.)

Nevertheless, after a very interesting first portion, Technology Vs. Humanity thereupon devolves into the kind of repetitious proselytization that can be distilled into two sentences:
  • We should all try to retain mastery over mechanisms that cannot ever have any ethical constraints of their own.
  • All that we hold dear will be doomed, unless we consistently, forcefully and perpetually apply upon our tools moral standards that have served humanity to this point.
That is quite a double-barreled onus! A prospective task that seems –– peering ahead across future generations –– rather exhausting.


Technology vs. Humanity by Gerd Leonhard: About the book

Artificial intelligence. Cognitive computing. The Singularity. Digital obesity. Printed food. The Internet of Things. The death of privacy. The end of work-as-we-know-it, and radical longevity: The imminent clash between technology and humanity is already rushing towards us. What moral values are you prepared to stand up for—before being human alters its meaning forever? Before it’s too late, we must stop and ask the big questions: How do we embrace technology without becoming it? When it happens — gradually, then suddenly — the machine era will create the greatest watershed in human life on Earth. 



Exploring analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard. “We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values,” Havens writes.

Mark Anderson of the Strategic News Service pondered the onrush of devices that might meddle in our minds and hearts:
“Frank Lloyd Wright is rumored to have once boasted that he could design a house which…could lead the inhabitants to fall in love, or to get divorced. If this was even partly true of building architecture…then what of the architecture of those who will be holding, and reacting to, our innermost secrets? How will a new user know that she is using a bot with bad performance statistics? Should there be different levels of ethical certification for bots involved with selling shoes on Amazon, compared to counseling or doing Watson-like medical diagnoses?”
Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: “Our frailty is one of the key factors that distinguish us from machines.”

Which seems intuitive till you recall that almost no mechanism in history has ever worked for as long, as resiliently or consistently, as a healthy 70-year-old human being has, recovering from countless shocks and adapting to innumerable surprising changes. Still, he makes a strong (if obvious) point that "the future of happiness is dependent on teaching our machines what we value most."

The Optimists Strike Back!


In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which posits that our cybernetic children will be as capable as our biological ones, at one key and central aptitude — learning from both parental instruction and experience how to play well with others.

This will be especially likely if (as I posit in Existence) AI researchers come to a too-long delayed realization — that we know of only one way that intelligence ever actually came about in this universe: through upbringing in human homes. Through interfacing with the world relentlessly in the physical, personal and cultural feedback loops of childhood. Indeed — and here’s an irony — this is the only scenario under which the urgings of Leonhard and Havens and so many others have even a remote chance of coming true.


Well, there is one other way, elucidated in Robin Hanson’s new book: The Age of Em. In that startlingly original and well-thought-out tome, Hanson wagers that AI can only happen in the near term by emulating the brain activity and working minds of actual, living humans. Such doppeled copies — (a little like e-versions of my dittoes, in Kiln People) — might proliferate in “matrix” style software worlds, spawning billions, trillions and even quadrillions of copies, all of them based upon a selection of original human beings. Originals whose own versions of human morality and spirituality become templates to pass down the line.

Hence — according to Hanson — such cyber-emulated descendants would be inherently capable of ethics, since they are based on us … though they might later veer into new cultures as different from ours as Shogun-era Japan was from the Yanomamo, or Aztecs, or Tibetans, or attendees at Burning Man.

A Failed Prescription

Gerd Leonhard seems aware, at least superficially, that culture makes a difference. Moreover he sniffs, scenting danger in optimism:

…To me, it is clear that technological determinism and a global version of the “California ideology” (as in “Why don’t we just invent our way out of this, have fun, make lots of money while improving the lives of billions of people with these amazing new technologies?”) could prove to be just as lazy — and dangerous — as Luddism.

A former resident of Silicon Valley, Leonhard is welcome to his opinion. Though I also find it ironic. For example, he preaches that STEM educations should be accompanied by exposure to humanities and ethics and all that, in order to generate innovators who are also grounded in history and values …

… while appearing to ignore the plain fact that that is exactly what happens in Californian schools and especially that state’s glorious universities, far more than anywhere else on the planet. Indeed, it is only in North America that all universities fully implement a fourth year in their baccalaureate programs, consisting of “breadth requirements,” so that science and engineering types must take a full year of humanistic courses … while arts, humanities or other “soft” majors must imbibe enough science survey classes to foster at least marginally aware citizens.

(Proof of this? The U.S. almost always scores among the top three in “adult science literacy” and often number one. I explain this elsewhere, so don’t let your head explode with cognitive dissonance.)

In his book Machines of Loving Grace, John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.” It is an open question whether the yin or the yang side of Silicon Valley culture … or else the new, state controlled tech centers in China, for example … take this obligation down paths of responsibility.

Gerd Leonhard coins a term: "Exponential Humanism." "Through this philosophy, I believe we can find a balanced way forward that will allow us to both embrace technology but not become technology in the process." Nor do I disagree with the general desideratum. The conversation he calls for is essential!

Alas, Leonhard then goes on to present checklists, then more checklists, of things we ought to do and/or not-do, in order to retain our humanity, control and values. Take this agenda as a sample:

I propose that we devise a test that gauges all new scientific and technological breakthroughs according to questions such as:
  • Does this idea violate the human rights of anyone involved?
  • Does this idea substitute human relationships with machine relationships?
  • Does this idea put efficiency over humanity?
  • Does this idea put economics and profits over the most basic human ethics?
  • Does this idea automate something that should not be automated?
I don’t mind checklists, and these certainly contain wisdom. But Leonhard offers no details about how to pass and enforce such rules. By worldwide consensus among those who read Technology vs. Humanity? By legislation? Orwellian fiat? Nor does he speak of enforcement; what is to be done about dissenters or those who reject renunciation?

A Method That Is Truly Human


Again and again, from techno skeptics like Leonhard and Havens and so many others, we hear that “technology has no ethics.”

Well, I am not so sure about that. Nor is Kurzweil, whose Age of Spiritual Machines suggests otherwise. Or Kevin Kelly, whose What Technology Wants and The Inevitable propose simple process solutions to the dilemma of encouraging decent outcomes and behavior.

Nor Peter Diamandis, whose Abundance impudently forecasts a post-scarcity future, when spectacularly wealthy citizens can partner with cyber entities and explore values together. Nor Isaac Asimov, who foresaw robots caring deeply about moral issues, over the long stretch of time.

But let’s go along with Havens and Leonhard and accept the premise that “technology has no ethics.” In that case, the answer is simple.

Then don’t rely on ethics! Certainly evangelization has not had the desired effect — fostering good and decent behavior where it matter most — in the past. Seriously, I will give a cookie to the first modern pundit I come across who ponders human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors.

Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and … preached!

They lectured and chided. They threatened damnation and offered heavenly rewards. Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Judeo-Christian-Muslim laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question:

“How’s that working out for you?”

(credit: Harper Prism)

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators, parasites and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers them, offering ways to rationalize exploiting others.
Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (See how I resolve this in Foundation’s Triumph.)

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable.

And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds. The secret sauce of our success is —

— accountability. Creating a civilization that is flat and open and free enough — empowering so many that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens. Does this newer method work as well as it should? Hell no!

Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years?

Yes, indeed.

We may not be, by nature, highly moral creatures. But we do know how to be persnickety. Suspicious. Judgmental. Accusatory. Demanding. Those we do with spectacular skill and passion. And while these traits often wrought vileness, in hierarchies of old, we have harnessed them into arenas wherein positive-sum, win-win outcomes pour forth, catching and staunching many evils. Detecting and amplifying so many good things.

Moreover, this may be the proper way to deal with ethics-deficient technology. As citizens and users, we need to stay judgmental, applying accountability via markets, democracy, science and courts — and public opinion — upon those companies and cyber entities who behave in ways we find unethical. Or inhuman. The specifics of implementation will change, with time. (We’ll need new, technological tools for applying accountability.)

But this is the way that Ray Kurzweil’s vaunted singularity machines will learn to be “spiritual.” The kind and friendly ones will do better than their unethical competitors … because the good guy machines will have us — the Olde Race — as allies against the meanie-bots. And yes, it might boil down to just that.

Alas, the glory of our era — this technique that underlies our positive-sum games — seems so poorly understood that many of our best minds never grasp the method in its essence, believing instead that we’ll cross the minefield ahead by chiding.

Gerd Leonhard, in Technology vs. Humanity, offers us a Hegelian dialectic of sorts. Between two dismal theses — the blithe techno-transcendentalism of Ray Kurzweil and the renunciatory nostalgia of Francis Fukuyama — Leonhard rightly pleads for caution, for a middle-ground synthesis, though leaning a bit toward Fukuyama. Leonhard frets over plans to embrace and incorporate tech-prosthetics into human existence. “Because it would be a reduction, not an expansion, of who we are, it would no longer be empowerment but enslavement. …”

To which I must reply: how the heck do you know that?

All of them, spanning the current spectrum of discourse from Kurzweil and Peter Diamandis to Leonhard and Havens, all the way to Fukuyama and religious fundamentalists, seem bent on making grand declarations. Yet, those who would lay down lists of demands and prescriptions make a shared assumption, the same one proclaimed by Plato and so many other dogmatists: that they know the way of things better than our descendants will!

Recall the quotation from George Orwell that opened this article: “Each generation imagines itself to be more intelligent than the one that went before it, and wiser than the one that comes after it.” Shall we then demand that our children and grandchildren — perhaps a bit augmented and smarter than us, but certainly vastly more knowledgeable — follow blueprints that we lay down? Like Cro-Magnon hunters telling us never to forget rituals for propitiating the mammoth spirits? Or Bronze Age herdsmen telling us how to make love?

Ben Franklin and his apprentices led a conspiracy against kings and priests, crafting systems of accountability not in order to tell their descendants how to live, but in order to leave those later citizens the widest range of options. It is that flexibility — wrought by free speech, open inquiry, due process and above all reciprocal accountability — that lent us our most precious sovereign power. To learn from mistakes and try new things, innovating along a positive-sum flow called progress.

We did not need specifics from the Founders; indeed, it proved desperately important for later generations to toss out many of their crude biases! Nor will our heirs need or benefit from explicit lists and prescriptions laid down by well-meaning authors in 2016. Because they will be both smarter and wiser than us, or we’ll have failed.

Will they be smarter and wiser in part because of technology? That seems likely. Might they have solved many of the quandaries that fret us … only to encounter others that we cannot imagine? Also very likely.

Might some of our practical and moral decisions right now either aid or impede that growth? Of course. That is why I bother to engage this topic and read all these earnestly sincere tomes about the future!

But our job is not to delineate or prescribe. It is to find enough of the errors and calamities in advance, cancel those we can, and build enough virtuous cycles so that our children may stand on our shoulders, doing and achieving and pondering and making ethical decisions for their own time. Doing all of that both clumsily and brilliantly. And then yammering too much advice at their own heirs.

Originally published in Omni


David Brin is a scientist, public speaker, and internationally known author. His novels have been New York Times bestsellers and have won multiple Hugo, Nebula and other awards. At least a dozen have been translated into more than twenty languages.

David’s latest novel, Existence, is set forty years ahead, in a near future when human survival seems to teeter along not just one tightrope but dozens, with as many hopeful trends and breakthroughs as dangers… a world we already see ahead. Then one day an astronaut snares a small, crystalline object from space. It appears to contain a message, even visitors within. Peeling back layer after layer of motives and secrets may offer opportunities, or deadly peril.

David’s non-fiction book, The Transparent Society: Will Technology Make Us Choose Between Freedom and Privacy?, deals with secrecy in the modern world. It won the Freedom of Speech Award from the American Library Association.

A 1998 movie, directed by Kevin Costner, was loosely based on his post-apocalyptic novel, The Postman. Brin’s 1989 ecological thriller, Earth, foreshadowed global warming, cyberwarfare and near-future trends such as the World Wide Web. David’s novel Kiln People has been called a book of ideas disguised as a fast-moving and fun noir detective story, set in a future when new technology enables people to physically be in more than one place at once. A hardcover graphic novel, The Life Eaters, explored alternate outcomes to WWII, winning nominations and high praise.

David’s science fictional Uplift Universe explores a future when humans genetically engineer higher animals like dolphins to become equal members of our civilization. These include the award-winning Startide Rising, The Uplift War, Brightness Reef, Infinity’s Shore and Heaven’s Reach. He also recently tied up the loose ends left behind by the late Isaac Asimov: Foundation’s Triumph brings to a grand finale Asimov’s famed Foundation Universe.

Brin serves on advisory committees dealing with subjects as diverse as national defense and homeland security, astronomy and space exploration, SETI and nanotechnology, future/prediction and philanthropy.

As a public speaker, Brin shares unique insights — serious and humorous — about ways that changing technology may affect our future lives. He appears frequently on TV, including several episodes of “The Universe” and History Channel’s “Life After People.” He also was a regular cast member on “The ArciTECHS.”

Brin’s scientific work covers an eclectic range of topics, from astronautics, astronomy, and optics to alternative dispute resolution and the role of neoteny in human evolution. His Ph.D. in Physics from UCSD, the University of California at San Diego (in the lab of Nobelist Hannes Alfvén), followed a master’s in optics and an undergraduate degree in astrophysics from Caltech. He was a postdoctoral fellow at the California Space Institute. His technical patents directly confront some of the faults of old-fashioned screen-based interaction, aiming to improve the way human beings converse online.
