
Sunday, July 22, 2018

Optogenetics

From Wikipedia, the free encyclopedia
 
Optogenetics (from Greek optikós, meaning 'seen, visible') is a biological technique which involves the use of light to control cells in living tissue, typically neurons, that have been genetically modified to express light-sensitive ion channels. It is a neuromodulation method that uses a combination of techniques from optics and genetics to control and monitor the activities of individual neurons in living tissue—even within freely moving animals—and to precisely measure the effects of these manipulations in real time. The key reagents used in optogenetics are light-sensitive proteins. Neuronal control is achieved using optogenetic actuators like channelrhodopsin, halorhodopsin, and archaerhodopsin, while optical recording of neuronal activity can be made with the help of optogenetic sensors for calcium (GCaMP), vesicular release (synapto-pHluorin), neurotransmitters (GluSnFRs), or membrane voltage (ArcLight, ASAP1). Control (or recording) of activity is restricted to genetically defined neurons and performed in a spatiotemporally specific manner by light.

In 2010, optogenetics was chosen as the "Method of the Year" across all fields of science and engineering by the interdisciplinary research journal Nature Methods. At the same time, optogenetics was highlighted in the article on "Breakthroughs of the Decade" in the academic research journal Science.[5] These journals also referenced a public-access, general-interest "Method of the Year" video and Scientific American summaries of optogenetics.

History

The "far-fetched" possibility of using light for selectively controlling precise neural activity (action potential) patterns within subtypes of cells in the brain was thought of by Francis Crick in his Kuffler Lectures at the University of California in San Diego in 1999.[6] An earlier use of light to activate neurons was carried out by Richard Fork,[7] who demonstrated laser activation of neurons within intact tissue, although not in a genetically-targeted manner. The earliest genetically targeted method that used light to control rhodopsin-sensitized neurons was reported in January 2002, by Boris Zemelman (now at UT Austin) and Gero Miesenböck, who employed Drosophila rhodopsin cultured mammalian neurons.[8] In 2003, Zemelman and Miesenböck developed a second method for light-dependent activation of neurons in which single inotropic channels TRPV1, TRPM8 and P2X2 were gated by photocaged ligands in response to light.[9] Beginning in 2004, the Kramer and Isacoff groups developed organic photoswitches or "reversibly caged" compounds in collaboration with the Trauner group that could interact with genetically introduced ion channels.[10][11] TRPV1 methodology, albeit without the illumination trigger, was subsequently used by several laboratories to alter feeding, locomotion and behavioral resilience in laboratory animals.[12][13][14] However, light-based approaches for altering neuronal activity were not applied outside the original laboratories, likely because the easier to employ channelrhodopsin was cloned soon thereafter.[15]
Peter Hegemann, studying the light response of green algae at the University of Regensburg, had discovered photocurrents that were too fast to be explained by the classic G-protein-coupled animal rhodopsins.[16] Teaming up with the electrophysiologist Georg Nagel at the Max Planck Institute in Frankfurt, they demonstrated that a single gene from the alga Chlamydomonas produced large photocurrents when expressed in the oocyte of a frog.[17] To identify expressing cells, they replaced the cytoplasmic tail of the algal protein with the fluorescent protein YFP, generating the first generally applicable optogenetic tool.[15] Zhuo-Hua Pan of Wayne State University, working on restoring sight to the blind, thought about using channelrhodopsin when it was published in late 2003. By February 2004, he was trying channelrhodopsin out in ganglion cells—the neurons in our eyes that connect directly to the brain—that he had cultured in a dish. Indeed, the transfected neurons became electrically active in response to light.[18] In April 2005, Susana Lima and Miesenböck reported the first use of genetically targeted P2X2 photostimulation to control the behaviour of an animal.[19] They showed that photostimulation of genetically circumscribed groups of neurons, such as those of the dopaminergic system, elicited characteristic behavioural changes in fruit flies. In August 2005, Karl Deisseroth's laboratory in the Bioengineering Department at Stanford, including graduate students Ed Boyden and Feng Zhang (both now at MIT), published the first demonstration of a single-component optogenetic system in cultured mammalian neurons,[20][21] using the channelrhodopsin-2(H134R)-eYFP construct from Nagel and Hegemann.[15] The groups of Gottschalk and Nagel were the first to use channelrhodopsin-2 for controlling neuronal activity in an intact animal, showing that motor patterns in the roundworm Caenorhabditis elegans could be evoked by light stimulation of genetically selected neural circuits (published in December 2005).[22] In mice, controlled expression of optogenetic tools is often achieved with cell-type-specific Cre/loxP methods developed for neuroscience by Joe Z. Tsien in the 1990s[23] to activate or inhibit specific brain regions and cell types in vivo.[24]

The primary tools for optogenetic recordings have been genetically encoded calcium indicators (GECIs). The first GECI to be used to image activity in an animal was cameleon, designed by Atsushi Miyawaki, Roger Tsien and coworkers.[25] Cameleon was first used successfully in an animal by Rex Kerr, William Schafer and coworkers to record from neurons and muscle cells of the nematode C. elegans.[26] Cameleon was subsequently used to record neural activity in flies[27] and zebrafish.[28] In mammals, the first GECI to be used in vivo was GCaMP,[29] first developed by Nakai and coworkers.[30] GCaMP has undergone numerous improvements, and GCaMP6[31] in particular has become widely used throughout neuroscience.

In 2010, Karl Deisseroth at Stanford University was awarded the inaugural HFSP Nakasone Award "for his pioneering work on the development of optogenetic methods for studying the function of neuronal networks underlying behavior". In 2012, Gero Miesenböck was awarded the InBev-Baillet Latour International Health Prize for "pioneering optogenetic approaches to manipulate neuronal activity and to control animal behaviour." In 2013, Ernst Bamberg, Ed Boyden, Karl Deisseroth, Peter Hegemann, Gero Miesenböck and Georg Nagel were awarded The Brain Prize for "their invention and refinement of optogenetics."[32][33] Karl Deisseroth was awarded the Else Kröner Fresenius Research Prize 2017 (4 million euro) for his "contributions to the understanding of the biological basis of psychiatric disorders".

Description

Fig 1. Channelrhodopsin-2 (ChR2) induces temporally precise blue light-driven activity in rat prelimbic prefrontal cortical neurons. a) In vitro schematic (left) showing blue light delivery and whole-cell patch-clamp recording of light-evoked activity from a fluorescent CaMKIIα::ChR2-EYFP expressing pyramidal neuron (right) in an acute brain slice. b) In vivo schematic (left) showing blue light (473 nm) delivery and single-unit recording. (bottom left) Coronal brain slice showing expression of CaMKIIα::ChR2-EYFP in the prelimbic region. Light blue arrow shows tip of the optical fiber; black arrow shows tip of the recording electrode (left). White bar, 100 µm. (bottom right) In vivo light recording of prefrontal cortical neuron in a transduced CaMKIIα::ChR2-EYFP rat showing light-evoked spiking to 20 Hz delivery of blue light pulses (right). Inset, representative light-evoked single-unit response.[34]
 
Fig 2. Halorhodopsin (NpHR) rapidly and reversibly silences spontaneous activity in vivo in rat prelimbic prefrontal cortex. (Top left) Schematic showing in vivo green (532 nm) light delivery and single-unit recording of a spontaneously active CaMKIIα::eNpHR3.0-EYFP expressing pyramidal neuron. (Right) Example trace showing that continuous 532 nm illumination inhibits single-unit activity in vivo. Inset, representative single-unit event; green bar, 10 seconds.[34]
 
A nematode expressing the light-sensitive ion channel Mac. Mac is a proton pump originally isolated in the fungus Leptosphaeria maculans and now expressed in the muscle cells of C. elegans that opens in response to green light and causes hyperpolarizing inhibition. Of note is the extension in body length that the worm undergoes each time it is exposed to green light, which is presumably caused by Mac's muscle-relaxant effects.[35]
 
A nematode expressing ChR2 in its gubernacular-oblique muscle group responding to stimulation by blue light. Blue light stimulation causes the gubernacular-oblique muscles to repeatedly contract, causing repetitive thrusts of the spicule, as would be seen naturally during copulation.[36]
 
Optogenetics provides millisecond-scale temporal precision which allows the experimenter to keep pace with fast biological information processing (for example, in probing the causal role of specific action potential patterns in defined neurons). Indeed, to probe the neural code, optogenetics by definition must operate on the millisecond timescale to allow addition or deletion of precise activity patterns within specific cells in the brains of intact animals, including mammals (see Figure 1). By comparison, the temporal precision of traditional genetic manipulations (employed to probe the causal role of specific genes within cells, via "loss-of-function" or "gain-of-function" changes in these genes) is rather slow, from hours or days to months. It is also important to have fast readouts in optogenetics that can keep pace with the optical control. This can be done with electrical recordings ("optrodes") or with reporter proteins that are biosensors, where scientists have fused fluorescent proteins to detector proteins. An example of this is the voltage-sensitive fluorescent protein VSFP2.[37] Additionally, beyond its scientific impact, optogenetics represents an important case study in the value both of ecological conservation (as many of the key tools of optogenetics arise from microbial organisms occupying specialized environmental niches) and of pure basic science, as these opsins were studied over decades for their own sake by biophysicists and microbiologists, without consideration of their potential value in delivering insights into neuroscience and neuropsychiatric disease.[38]

Light-activated proteins: channels, pumps and enzymes

The hallmark of optogenetics is the introduction of fast light-activated channels, pumps, and enzymes that allow temporally precise manipulation of electrical and biochemical events while maintaining cell-type resolution through the use of specific targeting mechanisms. Among the microbial opsins which can be used to investigate the function of neural systems are the channelrhodopsins (ChR2, ChR1, VChR1, and SFOs) used to excite neurons and the anion-conducting channelrhodopsins used for light-induced inhibition. Light-driven ion pumps are also used to inhibit neuronal activity, e.g. halorhodopsin (NpHR),[39] enhanced halorhodopsins (eNpHR2.0 and eNpHR3.0, see Figure 2),[40] archaerhodopsin (Arch), fungal opsins (Mac) and enhanced bacteriorhodopsin (eBR).[41]

Optogenetic control of well-defined biochemical events within behaving mammals is now also possible. Building on prior work fusing vertebrate opsins to specific G-protein-coupled receptors,[42] a family of chimeric single-component optogenetic tools was created that allowed researchers to manipulate, within behaving mammals, the concentration of defined intracellular messengers such as cAMP and IP3 in targeted cells.[43] Other biochemical approaches to optogenetics (crucially, with tools that displayed low activity in the dark) followed soon thereafter, when optical control over small GTPases and adenylyl cyclases was achieved in cultured cells using novel strategies from several different laboratories.[44][45][46][47][48] This emerging repertoire of optogenetic probes now allows cell-type-specific and temporally precise control of multiple axes of cellular function within intact animals.[49]

Hardware for light application

Another necessary factor is hardware (e.g. integrated fiberoptic and solid-state light sources) to allow specific cell types, even deep within the brain, to be controlled in freely behaving animals. Most commonly, this is now achieved using the fiberoptic-coupled diode technology introduced in 2007,[50][51][52] though, to avoid the use of implanted electrodes, researchers have engineered a "window" made of zirconia that has been modified to be transparent and implanted in mouse skulls, allowing optical waves to penetrate more deeply to stimulate or inhibit individual neurons.[53] To stimulate superficial brain areas such as the cerebral cortex, optical fibers or LEDs can be directly mounted to the skull of the animal. More deeply implanted optical fibers have been used to deliver light to deeper brain areas. Complementary to fiber-tethered approaches, completely wireless techniques have been developed utilizing wirelessly delivered power to head-borne LEDs for the unhindered study of complex behaviors in freely behaving organisms.[54]

Expression of optogenetic actuators

Optogenetics also necessarily includes the development of genetic targeting strategies, such as cell-specific promoters or other customized conditionally active viruses, to deliver the light-sensitive probes to specific populations of neurons in the brain of living animals (e.g. worms, fruit flies, mice, rats, and monkeys). In invertebrates such as worms and fruit flies, some amount of all-trans-retinal (ATR) is supplemented with the food. A key advantage of microbial opsins, as noted above, is that they are fully functional in vertebrates without the addition of exogenous co-factors.[52]

Technique

Three primary components in the application of optogenetics are as follows: (A) identification or synthesis of a light-sensitive protein (opsin) such as channelrhodopsin-2 (ChR2), halorhodopsin (NpHR), etc.; (B) the design of a system to introduce the genetic material containing the opsin into cells for protein expression, such as application of Cre recombinase or an adeno-associated virus; and (C) application of light-emitting instruments.[55]

The technique of using optogenetics is flexible and adaptable to the experimenter's needs. For starters, experimenters genetically engineer a microbial opsin based on the gating properties (rate of excitability, refractory period, etc.) required for the experiment.

There is a challenge in introducing the microbial opsin, an optogenetic actuator, into a specific region of the organism in question. A rudimentary approach is to introduce an engineered viral vector that contains the optogenetic actuator gene attached to a recognizable promoter such as CaMKIIα. This provides some level of specificity: although the viral vector infects many cells, only those in which the promoter is active will express the optogenetic actuator gene.

Another approach is the creation of transgenic mice, where the optogenetic actuator gene is introduced into mouse zygotes with a given promoter, most commonly Thy1. Introducing the optogenetic actuator at this early stage allows a larger genetic construct to be incorporated and, as a result, increases the specificity of the cells that express it.

A third, more recently developed approach is to create transgenic mice expressing Cre recombinase, an enzyme that catalyzes recombination between two loxP sites. An engineered viral vector containing the optogenetic actuator gene between two loxP sites is then introduced, so that only the cells containing Cre recombinase express the microbial opsin. This last technique has allowed multiple modified optogenetic actuators to be used without the need to create a whole line of transgenic animals every time a new microbial opsin is needed.

After introduction and expression of the microbial opsin, and depending on the type of analysis being performed, light can be applied at the axon terminals or at the main region where the transduced cell bodies are situated. Light stimulation can be performed with a vast array of instruments, from light-emitting diodes (LEDs) to diode-pumped solid-state (DPSS) lasers. These light sources are most commonly controlled by a computer and coupled to the brain through a fiber-optic cable. Recent advances include wireless head-mounted devices that deliver LED light to targeted areas and, as a result, give the animal more freedom of mobility during in vivo experiments.[56][57]
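Whatever the light source, the stimulation itself is usually specified as a pulse train (frequency, pulse width, train duration). The following is a minimal illustrative sketch in Python that computes such a schedule for a 20 Hz, 5 ms train; the parameter values are arbitrary examples, and driving real hardware would require a vendor-specific DAQ or driver interface that is deliberately not shown here.

```python
# Minimal sketch: compute the on/off schedule for an optogenetic pulse train.
# This only prints the timing; driving a real LED or DPSS laser would require
# vendor-specific hardware calls, which are intentionally omitted.

def pulse_train(freq_hz=20.0, pulse_ms=5.0, duration_s=1.0):
    """Return (onset, offset) times in seconds for each light pulse."""
    period_s = 1.0 / freq_hz
    n_pulses = int(duration_s * freq_hz)
    return [(i * period_s, i * period_s + pulse_ms / 1000.0) for i in range(n_pulses)]

if __name__ == "__main__":
    for onset, offset in pulse_train():
        print(f"light ON at {onset * 1000:7.1f} ms, OFF at {offset * 1000:7.1f} ms")
```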

Issues

Although already a powerful scientific tool, optogenetics, according to Doug Tischer and Orion D. Weiner of the University of California San Francisco, should be regarded as a "first-generation GFP" because of its immense potential for both utilization and optimization.[58] That said, the current approach to optogenetics is limited primarily in its versatility. Even within the field of neuroscience, where it is most potent, the technique is less robust on a subcellular level.[59]

Selective expression

One of the main problems of optogenetics is that not all the cells in question may express the microbial opsin gene at the same level. Thus, even illumination with a defined light intensity will have variable effects on individual cells. Optogenetic stimulation of neurons in the brain is even less controlled, as the light intensity drops exponentially with distance from the light source (e.g. an implanted optical fiber).
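To make this depth dependence concrete, here is a minimal sketch assuming the simple single-exponential falloff described above; the initial irradiance and attenuation length are arbitrary placeholders, since real values depend on wavelength, fiber geometry and tissue scattering.

```python
import math

def irradiance(depth_mm, i0_mw_mm2=10.0, attenuation_length_mm=0.5):
    """Simplified single-exponential falloff of irradiance with depth.
    i0_mw_mm2 and attenuation_length_mm are placeholder values, not
    measured tissue parameters."""
    return i0_mw_mm2 * math.exp(-depth_mm / attenuation_length_mm)

for depth in (0.0, 0.25, 0.5, 1.0):
    print(f"{depth:4.2f} mm below the fiber tip: {irradiance(depth):6.2f} mW/mm^2")
```

Under these placeholder numbers, cells 1 mm below the fiber tip receive roughly an order of magnitude less light than cells at the tip, which is why opsin expression level and distance from the source jointly determine whether a given cell is driven.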

Moreover, mathematical modelling shows that selective expression of opsin in specific cell types can dramatically alter the dynamical behavior of the neural circuitry. In particular, optogenetic stimulation that preferentially targets inhibitory cells can transform the excitability of the neural tissue from Type 1 — where neurons operate as integrators — to Type 2 where neurons operate as resonators.[60] Type 1 excitable media sustain propagating waves of activity whereas Type 2 excitable media do not. The transformation from one to the other explains how constant optical stimulation of primate motor cortex elicits gamma-band (40–80 Hz) oscillations in the manner of a Type 2 excitable medium. Yet those same oscillations propagate far into the surrounding tissue in the manner of a Type 1 excitable medium.[61]

Nonetheless, it remains difficult to target opsin to defined subcellular compartments, e.g. the plasma membrane, synaptic vesicles, or mitochondria.[59][62] Restricting the opsin to specific regions of the plasma membrane such as dendrites, somata or axon terminals would provide a more robust understanding of neuronal circuitry.[59]

Kinetics and synchronization

An issue with channelrhodopsin-2 is that its gating properties do not mimic those of the native cation channels of cortical neurons in vivo. One solution to this kinetic limitation is the introduction of variants of channelrhodopsin-2 with more favorable kinetics.[55][56]
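As a rough illustration of why gating kinetics matter, the sketch below integrates a toy two-state (closed/open) opsin model driven by a single light pulse; the rate constants are arbitrary placeholders, not measured ChR2 parameters. Slower closing rates leave channels open between pulses, which is one reason faster-kinetics variants follow high-frequency stimulation more faithfully.

```python
# Toy two-state opsin model: closed <-> open, light-driven opening,
# spontaneous closing. Euler integration; rate constants are placeholders.

def simulate(k_on_per_ms=0.5, k_off_per_ms=0.1, light_on_ms=(10.0, 30.0),
             t_max_ms=60.0, dt_ms=0.01):
    open_frac, trace, t = 0.0, [], 0.0
    while t < t_max_ms:
        light = 1.0 if light_on_ms[0] <= t < light_on_ms[1] else 0.0
        d_open = light * k_on_per_ms * (1.0 - open_frac) - k_off_per_ms * open_frac
        open_frac += d_open * dt_ms
        trace.append((t, open_frac))
        t += dt_ms
    return trace

if __name__ == "__main__":
    trace = simulate()
    peak_t, peak = max(trace, key=lambda p: p[1])
    print(f"peak open fraction {peak:.2f} at {peak_t:.1f} ms")
    # A smaller k_off (longer-lived open state) delays closing after the pulse,
    # so rapid pulse trains summate instead of producing discrete responses.
```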

Another limitation of the technique is that light stimulation produces synchronous activation of the transduced cells, removing the individual activation properties of cells within the affected population. Therefore, it is difficult to understand how the cells in the affected population communicate with one another or how their phasic properties of activation relate to the circuitry being observed.

Optogenetic activation has been combined with functional magnetic resonance imaging (ofMRI) to elucidate the connectome, a thorough map of the brain’s neural connections. The results, however, are limited by the general properties of fMRI.[59][63] The readouts from this neuroimaging procedure lack the spatial and temporal resolution appropriate for studying the densely packed and rapid-firing neuronal circuits.[63]

Excitation spectrum

The opsin proteins currently in use have absorption peaks across the visual spectrum, but retain considerable sensitivity to blue light.[59] This spectral overlap makes it very difficult to combine opsin activation with genetically encoded indicators (GEVIs, GECIs, GluSnFR, synapto-pHluorin), most of which need blue-light excitation. Opsins with infrared activation would, at a standard irradiance value, increase light penetration and improve resolution through reduced light scattering.

Applications

The field of optogenetics has furthered the fundamental scientific understanding of how specific cell types contribute to the function of biological tissues such as neural circuits in vivo (see references from the scientific literature below). Moreover, on the clinical side, optogenetics-driven research has led to insights into Parkinson's disease[64][65] and other neurological and psychiatric disorders. Indeed, optogenetics papers in 2009 provided insight into neural codes relevant to autism, schizophrenia, drug abuse, anxiety, and depression.[41][66][67][68]

Identification of particular neurons and networks

Amygdala

Optogenetic approaches have been used to map the neural circuits in the amygdala that contribute to fear conditioning.[69][70][71][72] One such circuit is the connection from the basolateral amygdala to the dorsal-medial prefrontal cortex, where neuronal oscillations of 4 Hz have been observed in correlation with fear-induced freezing behaviors in mice. Channelrhodopsin-2 was introduced into transgenic mice under a parvalbumin-Cre driver, selectively targeting the interneurons in both the basolateral amygdala and the dorsal-medial prefrontal cortex responsible for the 4 Hz oscillations. Optically stimulating these interneurons generated freezing behavior, providing evidence that these 4 Hz oscillations may underlie the basic fear response produced by the neuronal populations along the dorsal-medial prefrontal cortex and basolateral amygdala.[73]

Olfactory bulb

Optogenetic activation of olfactory sensory neurons was critical for demonstrating timing in odor processing[74] and for studying the mechanisms of neuromodulator-mediated, olfactory-guided behaviors (e.g. aggression, mating).[75] In addition, with the aid of optogenetics, evidence has been reproduced showing that the "afterimage" of odors is concentrated more centrally around the olfactory bulb rather than at the periphery where the olfactory receptor neurons are located. Transgenic mice expressing channelrhodopsin-2 under the Thy1 promoter (Thy1-ChR2) were stimulated with a 473 nm laser positioned transcranially over the dorsal section of the olfactory bulb. Longer photostimulation of mitral cells in the olfactory bulb led to longer-lasting neuronal activity in the region after the photostimulation had ceased, meaning the olfactory sensory system is able to undergo long-term changes and recognize differences between old and new odors.[76]

Nucleus accumbens

Optogenetics, freely moving mammalian behavior, in vivo electrophysiology, and slice physiology have been integrated to probe the cholinergic interneurons of the nucleus accumbens by direct excitation or inhibition. Despite representing less than 1% of the total population of accumbal neurons, these cholinergic cells are able to control the activity of the dopaminergic terminals that innervate medium spiny neurons (MSNs) in the nucleus accumbens.[77] These accumbal MSNs are known to be involved in the neural pathway through which cocaine exerts its effects, because decreasing cocaine-induced changes in the activity of these neurons has been shown to inhibit cocaine conditioning. The few cholinergic neurons present in the nucleus accumbens may prove viable targets for pharmacotherapy in the treatment of cocaine dependence.[41]

Rat cages equipped with optogenetic LED commutators, which permit in vivo study of animal behavior during optogenetic stimulation.

Prefrontal cortex

In vivo and in vitro recordings (by the Cooper laboratory) of individual CaMKIIα AAV-ChR2 expressing pyramidal neurons within the prefrontal cortex demonstrated high-fidelity action potential output with short pulses of blue light at 20 Hz (Figure 1).[34] The same group recorded complete green light-induced silencing of spontaneous activity in the same prefrontal cortical neuronal population expressing an AAV-NpHR vector (Figure 2).[34]

Heart

Optogenetics was applied to atrial cardiomyocytes to use light to end the spiral-wave arrhythmias found to occur in atrial fibrillation.[78] This method is still in the development stage. A recent study explored the possibilities of optogenetics as a method to correct arrhythmias and resynchronize cardiac pacing. The study introduced channelrhodopsin-2 into cardiomyocytes in ventricular areas of the hearts of transgenic mice and performed studies of photostimulation in both open-cavity and closed-cavity mice. Photostimulation led to increased activation of cells and thus increased ventricular contractions, resulting in increased heart rates. In addition, this approach has been applied in cardiac resynchronization therapy (CRT) as a new biological pacemaker, as a substitute for electrode-based CRT.[79] Lately, optogenetics has been used in the heart to defibrillate ventricular arrhythmias with local epicardial illumination,[80] generalized whole-heart illumination,[81] or customized stimulation patterns based on arrhythmogenic mechanisms, in order to lower the defibrillation energy.[82]

Spiral ganglion

Optogenetic stimulation of the spiral ganglion in deaf mice restored auditory activity.[83][84] Optogenetic application to the cochlear region allows for the stimulation or inhibition of the spiral ganglion neurons (SGNs). In addition, due to the characteristics of the resting potentials of SGNs, different variants of the protein channelrhodopsin-2, such as Chronos and CatCh, have been employed. The Chronos and CatCh variants are particularly useful in that they spend less time in their deactivated states, allowing more activity with fewer bursts of emitted blue light. As a result, the LED producing the light would require less energy, making cochlear prosthetics based on photostimulation more feasible.[85]

Brainstem

Optogenetic stimulation of a modified red-light-excitable channelrhodopsin (ReaChR) expressed in the facial motor nucleus enabled minimally invasive activation of motoneurons effective in driving whisker movements in mice.[86] One novel study employed optogenetics in the dorsal raphe nucleus to both activate and inhibit dopaminergic release onto the ventral tegmental area. To produce activation, transgenic mice were infected with channelrhodopsin-2 under a TH-Cre driver, and to produce inhibition the hyperpolarizing opsin NpHR was expressed under the same TH-Cre driver. Results showed that optically activating these dopaminergic neurons led to an increase in social interactions, and that their inhibition decreased the need to socialize only after a period of isolation.[87]

Precise temporal control of interventions

The currently available optogenetic actuators allow for the accurate temporal control of the required intervention (i.e. inhibition or excitation of the target neurons) with precision routinely going down to the millisecond level. Therefore, experiments can now be devised where the light used for the intervention is triggered by a particular element of behavior (to inhibit the behavior), a particular unconditioned stimulus (to associate something to that stimulus) or a particular oscillatory event in the brain (to inhibit the event). This kind of approach has already been used in several brain regions:

Hippocampus

Sharp-wave ripple complexes (SWRs) are distinct high-frequency oscillatory events in the hippocampus thought to play a role in memory formation and consolidation. These events can be readily detected by following the oscillatory cycles of the online-recorded local field potential. In this way, the onset of the event can be used as a trigger signal for a light flash that is guided back into the hippocampus to inhibit neurons specifically during the SWRs, and also to optogenetically inhibit the oscillation itself.[88] These kinds of "closed-loop" experiments are useful for studying SWR complexes and their role in memory.
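The detection logic of such closed-loop experiments can be sketched as follows; the ripple band, threshold, and trigger_light() stub are illustrative placeholders rather than parameters from the cited work, and a real-time system would use a causal, low-latency filter instead of the offline filtfilt call used here.

```python
# Minimal closed-loop sketch: band-pass filter an LFP buffer in a ripple-like
# band and trigger light when the signal envelope crosses a threshold.
# Filter band, threshold, and trigger_light() are illustrative placeholders.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000.0                                    # sampling rate, Hz
B, A = butter(3, [150 / (FS / 2), 250 / (FS / 2)], btype="band")

def trigger_light():
    print("TTL pulse -> light source")         # stand-in for real hardware output

def process_buffer(lfp_buffer, threshold_sd=4.0):
    ripple = filtfilt(B, A, lfp_buffer)         # ripple-band component
    envelope = np.abs(hilbert(ripple))          # instantaneous amplitude
    if envelope[-1] > threshold_sd * np.std(envelope):
        trigger_light()

# Example: feed in one second of simulated LFP noise.
process_buffer(np.random.randn(int(FS)))
```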

Cellular biology/cell signaling pathways

 
Optogenetic control of cellular forces and induction of mechanotransduction. The pictured cells are imaged for an hour while blue light pulses every 60 seconds, indicated by the blue point flashing onto the image. The cells then relax for an hour without light activation, and this cycle repeats. The square inset magnifies the cell's nucleus.

The optogenetic toolkit has proven pivotal for the field of neuroscience, as it allows precise manipulation of neuronal excitability. Moreover, this technique has been shown to extend beyond neurons to an increasing number of proteins and cellular functions.[58] Cellular-scale processes, including the contractile forces relevant to cell migration, cell division and wound healing, have been manipulated optogenetically.[89] The field has not yet developed to the point where processes crucial to cellular and developmental biology and cell signaling, including protein localization, post-translational modification and GTP loading, can be consistently controlled via optogenetics.[58]

Photosensitive proteins utilized in various cell signaling pathways

While this extension of optogenetics remains to be further investigated, there are various conceptual methodologies that may prove immediately robust. There is a considerable body of literature outlining photosensitive proteins that have been utilized in cell signaling pathways.[58] CRY2, LOV, Dronpa and PhyB are photosensitive proteins involved in inducible protein association, whereby activation by light can turn a signaling cascade on or off via recruitment of a signaling domain to its respective substrate.[90][91][92][93] LOV and PhyB are photosensitive proteins that engage in homodimerization and/or heterodimerization to recruit a DNA-modifying protein, translocate to the site of DNA and alter gene expression levels.[94][95][96] CRY2, a protein that inherently clusters when active, has been fused with signaling domains and subsequently photoactivated, allowing for clustering-based activation.[97] LOV and Dronpa have also been adapted to cell signaling manipulation; exposure to light induces conformational changes in the photosensitive protein which can reveal a previously obscured signaling domain and/or activate a protein that was otherwise allosterically inhibited.[98][99] LOV has been fused to caspase-3 to produce a construct capable of inducing apoptosis upon light stimulation.[100]

Optogenetic temporal control of signals

A different set of signaling cascades respond to stimulus timing, duration and dynamics.[101] Adaptive signaling pathways, for instance, adjust in accordance with the current level of the projected stimulus and display activity only when these levels change, as opposed to responding to absolute levels of the input.[102] Stimulus dynamics can also determine the outcome: treating PC12 cells with epidermal growth factor (inducing a transient profile of ERK activity) leads to cellular proliferation, whereas introduction of nerve growth factor (inducing a sustained profile of ERK activity) leads to a different cellular decision, whereby the PC12 cells differentiate into neuron-like cells.[103] This discovery was made pharmacologically, but the finding was replicated using optogenetic inputs instead.[104] This ability to optogenetically control signals for various durations is being explored to elucidate cell signaling pathways for which there is not yet a strong enough understanding to use drug or genetic manipulation.
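As a toy illustration of an adaptive, change-detecting pathway of the kind described above, the sketch below uses a first-order model in which the output follows the difference between the input and a slowly adapting internal variable; the rate constant and time base are arbitrary placeholders, not parameters of any measured pathway.

```python
# Toy adaptation model: the output tracks (input - slow variable), and the slow
# variable relaxes toward the input, so a sustained step produces only a
# transient output. Rate constant and time base are arbitrary placeholders.

def simulate(input_fn, t_max=100.0, dt=0.01, k_adapt=0.05):
    slow, trace, t = 0.0, [], 0.0
    while t < t_max:
        u = input_fn(t)
        slow += k_adapt * (u - slow) * dt       # slow variable tracks the input
        trace.append((t, max(u - slow, 0.0)))   # output responds to the change
        t += dt
    return trace

step = lambda t: 1.0 if t >= 10.0 else 0.0      # sustained step input at t = 10
trace = simulate(step)
peak = max(v for _, v in trace)
final = trace[-1][1]
print(f"peak output {peak:.2f}, output at end {final:.2f} (adapts back toward 0)")
```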

Parallel universes, the Matrix, and superintelligence

June 26, 2003 by Michio Kaku
Original link:  http://www.kurzweilai.net/parallel-universes-the-matrix-and-superintelligence
 Published on KurzweilAI.net June 26, 2003.

Physicists are converging on a “theory of everything,” probing the 11th dimension, developing computers for the next generation of robots, and speculating about civilizations millions of years ahead of ours, says Dr. Michio Kaku, author of the best-sellers Hyperspace and Visions and co-founder of String Field Theory, in this interview by KurzweilAI.net Editor Amara D. Angelica.


What are the burning issues for you currently?

Well, several things. Professionally, I work on something called Superstring theory, or now called M-theory, and the goal is to find an equation, perhaps no more than one inch long, which will allow us to "read the mind of God," as Einstein used to say.

In other words, we want a single theory that gives us an elegant, beautiful representation of the forces that govern the Universe. Now, after two thousand years of investigation into the nature of matter, we physicists believe that there are four fundamental forces that govern the Universe.

Some physicists have speculated about the existence of a fifth force, which may be some kind of paranormal or psychic force, but so far we find no reproducible evidence of a fifth force.

Now, each time a force has been mastered, human history has undergone a significant change. In the 1600s, when Isaac Newton first unraveled the secret of gravity, he also created a mechanics. And from Newton’s Laws and his mechanics, the foundation was laid for the steam engine, and eventually the Industrial Revolution.

So, in other words, in some sense, a byproduct of the mastery of the first force, gravity, helped to spur the creation of the Industrial Revolution, which in turn is perhaps one of the greatest revolutions in human history.

The second great force is the electromagnetic force; that is, the force of light, electricity, magnetism, the Internet, computers, transistors, lasers, microwaves, x-rays, etc.

And then in the 1860s, it was James Clerk Maxwell, the Scottish physicist at Cambridge University, who finally wrote down Maxwell’s equations, which allow us to summarize the dynamics of light.

That helped to unleash the Electric Age, and the Information Age, which have changed all of human history. Now it’s hard to believe, but Newton’s equations and Einstein’s equations are no more than about half an inch long.

Maxwell’s equations are also about half an inch long. For example, Maxwell’s equations say that the “four-dimensional divergence of an antisymmetric, second-rank tensor equals zero.” That’s Maxwell’s equations, the equations for light. And in fact, at Berkeley, you can buy a T-shirt which says, "In the beginning, God said the four-dimensional divergence of an antisymmetric, second rank tensor equals zero, and there was Light, and it was good."

So, the mastery of the first two forces helped to unleash, respectively, the Industrial Revolution and the Information Revolution.

The last two forces are the weak nuclear force and the strong nuclear force, and they in turn have helped us to unlock the secret of the stars, via Einstein’s equation E=mc², and many people think that far in the future, the human race may ultimately derive its energy not only from solar power, which is the power of fusion, but also from fusion power on the Earth, in terms of fusion reactors, which operate on seawater and do not create copious quantities of radioactive waste.

So, in summary, the mastery of each force helped to unleash a new revolution in human history.

Today, we physicists are embarking upon the greatest quest of all, which is to unify all four of these forces into a single comprehensive theory. The first force, gravity, is now represented by Einstein’s General Theory of Relativity, which gives us the Big Bang, black holes, and expanding universe. It’s a theory of the very large; it’s a theory of smooth, space-time manifolds like bedsheets and trampoline nets.

The second theory, the quantum theory, is the exact opposite. The quantum theory allows us to unify the electromagnetic, weak and strong force. However, it is based on discrete, tiny packets of energy called quanta, rather than smooth bedsheets, and it is based on probabilities, rather than the certainty of Einstein’s equations. So these two theories summarize the sum total of all physical knowledge of the physical universe.

Any equation describing the physical universe ultimately is derived from one of these two theories. The problem is these two theories are diametrically opposed. They are based on different assumptions, different principles, and different mathematics. Our job as physicists is to unify the two into a single, comprehensive theory. Now, over the last decades, the giants of the twentieth century have tried to do this and have failed.

For example, Niels Bohr, the founder of atomic physics and the quantum theory, was very skeptical about many attempts over the decades to create a Unified Field Theory. One day, Wolfgang Pauli, Nobel laureate, was giving a talk about his version of the Unified Field Theory, and in a very famous story, Bohr stood up in the back of the room and said, "Mr. Pauli, we in the back are convinced that your theory is crazy. What divides us is whether your theory is crazy enough."

So today, we realize that a true Unified Field Theory must be bizarre, must be fantastic, incredible, mind-boggling, crazy, because all the sane alternatives have been studied and discarded.

Today we have string theory, which is based on the idea that the subatomic particles we see in nature are nothing but notes we see on a tiny, vibrating string. If you kick the string, then an electron will turn into a neutrino. If you kick it again, the vibrating string will turn from a neutrino into a photon or a graviton. And if you kick it enough times, the vibrating string will then mutate into all the subatomic particles.

Therefore we no longer in some sense have to deal with thousands of subatomic particles coming from our atom smashers, we just have to realize that what makes them, what drives them, is a vibrating string. Now when these strings collide, they form atoms and nuclei, and so in some sense, the melodies that you can write on the string correspond to the laws of chemistry. Physics is then reduced to the laws of harmony that we can write on a string. The Universe is a symphony of strings. And what is the mind of God that Einstein used to write about? According to this picture, the mind of God is music resonating through ten- or eleven-dimensional hyperspace, which of course begs the question, "If the universe is a symphony, then is there a composer to the symphony?" But that’s another question.

Parallel worlds

What do you think of Sir Martin Rees’ concerns about the risk of creating black holes on Earth in his book, Our Final Hour?

I haven’t read his book, but perhaps Sir Martin Rees is referring to many press reports that claim that the Earth may be swallowed up by a black hole created by our machines. This started with a letter to the editor in Scientific American asking whether the RHIC accelerator in Brookhaven, Long Island, will create a black hole which will swallow up the earth. This was then picked up by the Sunday London Times who then splashed it on the international wire services, and all of a sudden, we physicists were deluged with hundreds of emails and telegrams asking whether or not we are going to destroy the world when we create a black hole in Long Island.

However, you can calculate that in outer space, cosmic rays have more energy than the particles produced in our most powerful atom smashers, and black holes do not form in outer space. Not to mention the fact that to create a black hole, you would have to have the mass of a giant star. In fact, an object ten to fifty times the mass of our star may in fact form a black hole. So the probability of a black hole forming in Long Island is zero.

However, Sir Martin Rees also has written a book, talking about the Multiverse. And that is also the subject of my next book, coming out late next year, called Parallel Worlds. We physicists no longer believe in a Universe. We physicists believe in a Multiverse that resembles the boiling of water. Water boils when tiny particles, or bubbles, form, which then begin to rapidly expand. If our Universe is a bubble in boiling water, then perhaps Big Bangs happen all the time.

Now, the Multiverse idea is consistent with Superstring theory, in the sense that Superstring theory has millions of solutions, each of which seems to correspond to a self-consistent Universe. So in some sense, Superstring theory is drowning in its own riches. Instead of predicting a unique Universe, it seems to allow the possibility of a Multiverse of Universes.

This may also help to answer the question raised by the Anthropic Principle. Our Universe seems to have known that we were coming. The conditions for life are extremely stringent. Life and consciousness can only exist in a very narrow band of physical parameters. For example, if the proton is not stable, then the Universe will collapse into a useless heap of electrons and neutrinos. If the proton were a little bit different in mass, it would decay, and all our DNA molecules would decay along with it.

In fact, there are hundreds, perhaps thousands, of coincidences, happy coincidences, that make life possible. Life, and especially consciousness, is quite fragile. It depends on stable matter, like protons, that exists for billions of years in a stable environment, sufficient to create autocatalytic molecules that can reproduce themselves, and thereby create Life. In physics, it is extremely hard to create this kind of Universe. You have to play with the parameters, you have to juggle the numbers, cook the books, in order to create a Universe which is consistent with Life.

However, the Multiverse idea explains this problem, because it simply means we coexist with dead Universes. In other Universes, the proton is not stable. In other Universes, the Big Bang took place, and then it collapsed rapidly into a Big Crunch, or these Universes had a Big Bang, and immediately went into a Big Freeze, where temperatures were so low, that Life could never get started.

So, in the Multiverse of Universes, many of these Universes are in fact dead, and our Universe in this sense is special, in that Life is possible in this Universe. Now, in religion, we have the Judeo-Christian idea of an instant of time, a genesis, when God said, "Let there be light." But in Buddhism, we have a contradictory philosophy, which says that the Universe is timeless. It had no beginning, and it had no end, it just is. It’s eternal, and it has no beginning or end.

The Multiverse idea allows us to combine these two pictures into a coherent, pleasing picture. It says that in the beginning, there was nothing, nothing but hyperspace, perhaps ten- or eleven-dimensional hyperspace. But hyperspace was unstable, because of the quantum principle. And because of the quantum principle, there were fluctuations, fluctuations in nothing. This means that bubbles began to form in nothing, and these bubbles began to expand rapidly, giving us the Universe. So, in other words, the Judeo-Christian genesis takes place within the Buddhist nirvana, all the time, and our Multiverse percolates universes.

Now this also raises the possibility of Universes that look just like ours, except there’s one quantum difference. Let’s say for example, that a cosmic ray went through Churchill’s mother, and Churchill was never born, as a consequence. In that Universe, which is only one quantum event away from our Universe, England never had a dynamic leader to lead its forces against Hitler, and Hitler was able to overcome England, and in fact conquer the world.

So, we are one quantum event away from Universes that look quite different from ours, and it’s still not clear how we physicists resolve this question. This paradox revolves around the Schrödinger’s Cat problem, which is still largely unsolved. In any quantum theory, we have the possibility that atoms can exist in two places at the same time, in two states at the same time. And then Erwin Schrödinger, the founder of quantum mechanics, asked the question: let’s say we put a cat in a box, and the cat is connected to a jar of poison gas, which is connected to a hammer, which is connected to a Geiger counter, which is connected to uranium. Everyone believes that uranium has to be described by the quantum theory. That’s why we have atomic bombs, in fact. No one disputes this.

But if the uranium decays, triggering the Geiger counter, setting off the hammer, destroying the jar of poison gas, then I might kill the cat. And so, is the cat dead or alive? Believe it or not, we physicists have to superimpose, or add together, the wave function of a dead cat with the wave function of a live cat. So the cat is neither dead nor alive.

This is perhaps one of the deepest questions in all the quantum theory, with Nobel laureates arguing with other Nobel laureates about the meaning of reality itself.

Now, in philosophy, solipsists like Bishop Berkeley used to believe that if a tree fell in the forest and there was no one there to listen to the tree fall, then perhaps the tree did not fall at all. However, Newtonians believe that if a tree falls in the forest, that you don’t have to have a human there to witness the event.

The quantum theory puts a whole new spin on this. The quantum theory says that before you look at the tree, the tree could be in any possible state. It could be burnt, a sapling, it could be firewood, it could be burnt to the ground. It could be in any of an infinite number of possible states. Now, when you look at it, it suddenly springs into existence and becomes a tree.

Einstein never liked this. When people used to come to his house, he used to ask them, "Look at the moon. Does the moon exist because a mouse looks at the moon?" Well, in some sense, yes. According to the Copenhagen school of Niels Bohr, observation determines existence.

Now, there are at least two ways to resolve this. The first is the Wigner school. Eugene Wigner was one of the creators of the atomic bomb and a Nobel laureate. And he believed that observation creates the Universe. An infinite sequence of observations is necessary to create the Universe, and in fact, maybe there’s a cosmic observer, a God of some sort, that makes the Universe spring into existence.

There’s another theory, however, called decoherence, or many worlds, which believes that the Universe simply splits each time, so that we live in a world where the cat is alive, but there’s an equal world where the cat is dead. In that world, they have people, they react normally, they think that their world is the only world, but in that world, the cat is dead. And, in fact, we exist simultaneously with that world.

This means that there’s probably a Universe where you were never born, but everything else is the same. Or perhaps your mother had extra brothers and sisters for you, in which case your family is much larger. Now, this can be compared to sitting in a room, listening to radio. When you listen to radio, you hear many frequencies. They exist simultaneously all around you in the room. However, your radio is only tuned to one frequency. In the same way, in your living room, there is the wave function of dinosaurs. There is the wave function of aliens from outer space. There is the wave function of the Roman Empire, in which it never fell 1,500 years ago.

All of this coexists inside your living room. However, just like you can only tune into one radio channel, you can only tune into one reality channel, and that is the channel that you exist in. So, in some sense it is true that we coexist with all possible universes. The catch is, we cannot communicate with them, we cannot enter these universes.

However, I personally believe that at some point in the future, that may be our only salvation. The latest cosmological data indicates that the Universe is accelerating, not slowing down, which means the Universe will eventually hit a Big Freeze, trillions of years from now, when temperatures are so low that it will be impossible to have any intelligent being survive.

When the Universe dies, there’s one and only one way to survive in a freezing Universe, and that is to leave the Universe. In evolution, there is a law of biology that says if the environment becomes hostile, either you adapt, you leave, or you die.

When the Universe freezes and temperatures reach near absolute zero, you cannot adapt. The laws of thermodynamics are quite rigid on this question. Either you will die, or you will leave. This means, of course, that we have to create machines that will allow us to enter eleven-dimensional hyperspace. This is still quite speculative, but String theory, in some sense, may be our only salvation. For advanced civilizations in outer space, either we leave or we die.

That brings up a question. Matrix Reloaded seems to be based on parallel universes. What do you think of the film in terms of its metaphors?

Well, the technology found in the Matrix would correspond to that of an advanced Type I or Type II civilization. We physicists, when we scan outer space, do not look for little green men in flying saucers. We look for the total energy outputs of a civilization in outer space, with a characteristic frequency. Even if intelligent beings tried to hide their existence, by the second law of thermodynamics, they create entropy, which should be visible with our detectors.

So we classify civilizations on the basis of energy outputs. A Type I civilization is planetary. They control all planetary forms of energy. They would control, for example, the weather, volcanoes, earthquakes; they would mine the oceans, any planetary form of energy they would control. Type II would be stellar. They play with solar flares. They can move stars, ignite stars, play with white dwarfs. Type III is galactic, in the sense that they have now conquered whole star systems, and are able to use black holes and star clusters for their energy supplies.

Each civilization is separated from the previous one by a factor of ten billion in energy output. Therefore, you can calculate numerically at what point civilizations may begin to harness certain kinds of technologies. In order to access wormholes and parallel universes, you have to be probably a Type III civilization, because by definition, a Type III civilization has enough energy to play with the Planck energy.

The Planck energy, or 10¹⁹ billion electron volts, is the energy at which space-time becomes unstable. If you were to heat up, in your microwave oven, a piece of space-time to that energy, then bubbles would form inside your microwave oven, and each bubble in turn would correspond to a baby Universe.

Now, in the Matrix, several metaphors are raised. One metaphor is whether computing machines can create artificial realities. That would require a civilization centuries or millennia ahead of ours, which would place it squarely as a Type I or Type II civilization.

However, we also have to ask a practical question: is it possible to create implants that could access our memory banks to create this artificial reality, and are machines dangerous? My answer is the following. First of all, cyborgs with neural implants: the technology does not exist, and probably won’t exist for at least a century, for us to access the central nervous system. At present, we can only do primitive experiments on the brain.

For example, at Emory University in Atlanta, Georgia, it’s possible to put a glass implant into the brain of a stroke victim, and the paralyzed stroke victim is able to, by looking at the cursor of a laptop, eventually control the motion of the cursor. It’s very slow and tedious; it’s like learning to ride a bicycle for the first time. But the brain grows into the glass bead, which is placed into the brain. The glass bead is connected to a laptop computer, and over many hours, the person is able to, by pure thought, manipulate the cursor on the screen.

So, the central nervous system is basically a black box. Except for some primitive hookups to the visual system of the brain, we scientists have not been able to access most bodily functions, because we simply don’t know the code for the spinal cord and for the brain. So, neural implant technology, I believe, is a hundred years, maybe centuries, away.

Will robots take over?

On the other hand, we have to ask yet another metaphor raised by the Matrix, and that is, are machines dangerous? And the answer is, potentially, yes. However, at present, our robots have the intelligence of a cockroach, in the sense that pattern recognition and common sense are the two most difficult, unsolved problems in artificial intelligence theory. Pattern recognition means the ability to see, hear, and to understand what you are seeing and understand what you are hearing. Common sense means your ability to make sense out of the world, which even children can perform.

Those two problems are at the present time largely unsolved. Now, I think, however, that within a few decades, we should be able to create robots as smart as mice, maybe dogs and cats. However, when machines start to become as smart as monkeys, I think we should put a chip in their brain, to shut them off when they start to have murderous thoughts.

By the time you have monkey intelligence, you begin to have self-awareness, and with self-awareness, you begin to have an agenda created by a monkey for its own purposes. And at that point, a mechanical monkey may decide that its agenda is different from our agenda, and at that point they may become dangerous to humans. I think we have several decades before that happens, and Moore’s Law will probably collapse in 20 years anyway, so I think there’s plenty of time before we come to the point where we have to deal with murderous robots, like in the movie 2001.

So you differ with Ray Kurzweil’s concept of using nanobots to reverse-engineer and upload the brain, possibly within the coming decades?

Not necessarily. I’m just laying out a linear course, the trajectory where artificial intelligence theory is going today. And that is, trying to build machines which can navigate and roam in our world, and two, robots which can make sense out of the world. However, there’s another divergent path one might take, and that’s to harness the power of nanotechnology. However, nanotechnology is still very primitive. At the present time, we can barely build arrays of atoms. We cannot yet build the first atomic gear, for example. No one has created an atomic wheel with ball bearings. So simple machines, which even children can play with in their toy sets, don’t yet exist at the atomic level. However, on a scale of decades, we may be able to create atomic devices that begin to mimic our own devices.

Molecular transistors can already be made. Nanotubes allow us to create strands of material that are super-strong. However, nanotechnology is still in its infancy and therefore, it’s still premature to say where nanotechnology will go. However, one place where technology may go is inside our body. Already, it’s possible to create a pill the size of an aspirin pill that has a television camera that can photograph our insides as it goes down our gullet, which means that one day surgery may become relatively obsolete.

In the future, it’s conceivable we may have atomic machines that enter the blood. And these atomic machines will be the size of blood cells and perhaps they would be able to perform useful functions like regulating and sensing our health, and perhaps zapping cancer cells and viruses in the process. However, this is still science fiction, because at the present time, we can’t even build simple atomic machines yet.

Are we living in a simulation?

Is there any possibility, similar to the premise of The Matrix, that we are living in a simulation?

Well, philosophically speaking, it’s always possible that the universe is a dream, and it’s always possible that our conversation with our friends is a by-product of the pickle that we had last night that upset our stomach. However, science is based upon reproducible evidence. When we go to sleep and we wake up the next day, we usually wind up in the same universe. It is reproducible. No matter how we try to avoid certain unpleasant situations, they come back to us. That is reproducible. So reality, as we commonly believe it to exist, is a reproducible experiment, it’s a reproducible sensation. Therefore in principle, you could never rule out the fact that the world could be a dream, but the fact of the matter is, the universe as it exists is a reproducible universe.

Now, in the Matrix, a computer simulation was run so that virtual reality became reproducible. Every time you woke up, you woke up in that same virtual reality. That technology, of course, does not violate the laws of physics. There’s nothing in relativity or the quantum theory that says that the Matrix is not possible. However, the amount of computer power necessary to drive the universe and the technology necessary for a neural implant is centuries to millennia beyond anything that we can conceive of, and therefore this is something for an advanced Type I or II civilization.

Why is a Type I required to run this kind of simulation? Is number crunching the problem?

Yes, it’s simply a matter of number crunching. At the present time, we scientists simply do not know how to interface with the brain. You see, one of the problems is, the brain, strictly speaking, is not a digital computer at all. The brain is not a Turing machine. A Turing machine is a black box with an input tape and an output tape and a central processing unit. That is the essential element of a Turing machine: information processing is localized in one point. However, our brain is actually a learning machine; it’s a neural network.

Many people find this hard to believe, but there’s no software, there is no operating system, there is no Windows programming for the brain. The brain is a vast collection of perhaps a hundred billion neurons, each neuron with 10,000 connections, which slowly and painfully interacts with the environment. Some neural pathways are genetically programmed to give us instinct. However, for the most part, our cerebral cortex has to be reprogrammed every time we bump into reality.

As a consequence, we cannot simply put a chip in our brain that augments our memory and enhances our intelligence. Memory and thinking, we now realize, are distributed throughout the entire brain. For example, it’s possible to have people with only half a brain. There was a documented case recently where a young girl had half her brain removed and she’s still fully functional.

So, the brain can operate with half of its mass removed. However, you remove one transistor in your Pentium computer and the whole computer dies. So, there’s a fundamental difference between digital computers–which are easily programmed, which are modular, and you can insert different kinds of subroutines in them–and neural networks, where learning is distributed throughout the entire device, making it extremely difficult to reprogram. That is the reason why, even if we could create an advanced PlayStation that would run simulations on a PC screen, that software cannot simply be injected into the human brain, because the brain has no operating system.
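
The point about distributed storage can be made concrete with a toy sketch in Python (purely illustrative, not a model of the brain): a value stored redundantly across many noisy units is still recoverable after half of them are removed, whereas a conventional program fails if a single instruction is deleted.

# Toy illustration (not a brain model): a signal stored redundantly
# across many noisy "units" survives the loss of half of them.
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0
units = signal + 0.1 * rng.standard_normal(10_000)  # each unit holds a noisy copy

def read_out(active_units):
    # Pool whatever units remain to recover the stored value.
    return active_units.mean()

print(round(read_out(units), 3))          # ~1.0 with all units intact
print(round(read_out(units[:5_000]), 3))  # still ~1.0 after discarding half the units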

Ray Kurzweil’s next book, The Singularity is Near, predicts that possibly within the coming decades, there will be super-intelligence emerging on the planet that will surpass that of humans. What do you think of that idea?

Yes, that sounds interesting. But Moore’s Law will have collapsed by then, so we’ll have a little breather. In 20 years time, the quantum theory takes over, so Moore’s Law collapses and we’ll probably stagnate for a few decades after that. Moore’s Law, which states that computer power doubles every 18 months, will not last forever. The quantum theory giveth, the quantum theory taketh away. The quantum theory makes possible transistors, which can be etched by ultraviolet rays onto smaller and smaller chips of silicon. This process will end in about 15 to 20 years. The senior engineers at Intel now admit for the first time that, yes, they are facing the end.
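
The arithmetic behind that 18-month doubling is worth spelling out; the minimal sketch below (assuming nothing beyond the doubling time quoted above) shows how quickly the exponential runs away, and why hitting an atomic limit within 15 to 20 years would matter so much.

# Minimal sketch of the exponential implied by an 18-month doubling time.
DOUBLING_TIME_YEARS = 1.5

def relative_power(years_from_now):
    # Computing power relative to today, assuming the doubling simply continues.
    return 2 ** (years_from_now / DOUBLING_TIME_YEARS)

for years in (5, 10, 15, 20):
    print(f"{years:2d} years: {relative_power(years):8.0f}x today's power")
# 15 to 20 years of uninterrupted doubling already means a ~1,000x to ~10,000x
# increase, so the curve is still steep when the atomic limit arrives.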

The thinnest layer on a Pentium chip consists of about 20 atoms. When we start to hit five atoms in the thinnest layer of a Pentium chip, the quantum theory takes over, electrons can now tunnel outside the layer, and the Pentium chip short-circuits. Therefore, within a 15 to 20 year time frame, Moore’s Law could collapse, and Silicon Valley could become a Rust Belt.

This means that we physicists are desperately trying to create the architecture for the post-silicon era. This means using quantum computers, quantum dot computers, optical computers, DNA computers, atomic computers, molecular computers, in order to bridge the gap when Moore’s Law collapses in 15 to 20 years. The wealth of nations depends upon the technology that will replace the power of silicon.

This also means that you cannot project artificial intelligence exponentially into the future. Some people think that Moore’s Law will extend forever; in which case humans will be reduced to zoo animals and our robot creations will throw peanuts at us and make us dance behind bars. Now, that may eventually happen. It is certainly consistent within the laws of physics.

However, the laws of the quantum theory say that we’re going to face a massive problem 15 to 20 years from now. Now, some remedial methods have been proposed; for example, building cubical chips, chips that are stacked on chips to create a 3-dimensional array. However, the problem there is heat production. Tremendous quantities of heat are produced by cubical chips, such that you can fry an egg on top of a cubical chip. Therefore, I firmly believe that we may be able to squeeze a few more years out of Moore’s Law, perhaps designing clever cubical chips that are super-cooled, perhaps using x-rays to etch our chips instead of ultraviolet rays. However, that only delays the inevitable. Sooner or later, the quantum theory kills you. Sooner or later, when we hit five atoms, we don’t know where the electron is anymore, and we have to go to the next generation, which relies on the quantum theory and atoms and molecules.

Therefore, I say that all bets are off in terms of projecting machine intelligence beyond a 20-year time frame. There’s nothing in the laws of physics that says that computers cannot exceed human intelligence. All I raise is that we physicists are desperately trying to patch up Moore’s Law, and at the present time we have to admit that we have no successor to silicon, which means that Moore’s Law will collapse in 15 to 20 years.

So are you saying that quantum computing and nanocomputing are not likely to be available by then?

No, no, I’m just saying it’s very difficult. At the present time we physicists have been able to compute with seven atoms. That is the world’s record for a quantum computer. And that quantum computer was able to factor the number 15 into 3 x 5. Now, being able to factor 15 into 3 x 5 does not equal the convenience of a laptop computer that can crunch potentially millions of calculations per second. The problem with quantum computers is that any contamination, any atomic disturbance, disturbs the alignment of the atoms, and the atoms then collapse into randomness. This is extremely difficult, because any cosmic ray, any air molecule, any disturbance can conceivably destroy the coherence of our atomic computer and make it useless.
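
For context, that seven-atom experiment ran Shor's algorithm, whose number-theoretic core can be sketched classically. The sketch below is only an illustration: the period is found by brute force, whereas the whole point of the quantum device is to do that period-finding step by quantum interference.

# Classical sketch of the number theory behind Shor-style factoring of 15.
# A quantum computer's job would be the period-finding step, done here by brute force.
from math import gcd

def factor_via_period(n, a):
    # Find the period r of a^x mod n, i.e. the smallest r with a^r = 1 (mod n).
    r = 1
    while pow(a, r, n) != 1:
        r += 1
    if r % 2:
        return None  # odd period: try a different a
    candidate = pow(a, r // 2, n)
    p, q = gcd(candidate - 1, n), gcd(candidate + 1, n)
    return (p, q) if 1 < p < n else None

print(factor_via_period(15, 7))   # (3, 5) -- the factorization the experiment produced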

Unless you have redundant parallel computing?

Even if you have parallel computing, you still have to have each parallel computer component free of any disturbance. So, no matter how you cut it, the practical problems of building quantum computers, although within the laws of physics, are extremely difficult, because it requires that we remove all contact with the environment at the atomic level. In practice, we’ve only been able to do this with a handful of atoms, meaning that quantum computers are still a gleam in the eye of most physicists.

Now, if a quantum computer can be successfully built, it would, of course, scare the CIA and all the governments of the world, because it would be able to crack any code created by a Turing machine. A quantum computer would be able to perform calculations that are inconceivable by a Turing machine. Calculations that require an infinite amount of time on a Turing machine can be calculated in a few seconds by a quantum computer. For example, if you shine laser beams on a collection of coherent atoms, the laser beam scatters, and in some sense performs a quantum calculation, which exceeds the memory capability of any Turing machine.

However, as I mentioned, the problem is that these atoms have to be in perfect coherence, and the problems of doing this are staggering in the sense that even a random collision with a subatomic particle could in fact destroy the coherence and make the quantum computer impractical.

So, I’m not saying that it’s impossible to build a quantum computer; I’m just saying that it’s awfully difficult.

SETI: looking in the wrong direction

When do you think we might expect SETI [Search for Extraterrestrial Intelligence] to be successful?

I personally think that SETI is looking in the wrong direction. If, for example, we’re walking down a country road and we see an anthill, do we go down to the ant and say, "I bring you trinkets, I bring you beads, I bring you knowledge, I bring you medicine, I bring you nuclear technology, take me to your leader"? Or, do we simply step on them? Any civilization capable of reaching the planet Earth would be perhaps a Type III civilization. And the difference between you and the ant is comparable to the distance between you and a Type III civilization. Therefore, for the most part, a Type III civilization would operate with a completely different agenda and message than our civilization.

Let’s say that a ten-lane superhighway is being built next to the anthill. The question is: would the ants even know what a ten-lane superhighway is, or what it’s used for, or how to communicate with the workers who are just feet away? And the answer is no. One question that we sometimes ask is: if there is a Type III civilization in our backyard, in the Milky Way galaxy, would we even know of its presence? And if you think about it, you realize that there’s a good chance that we, like ants in an anthill, would not understand or be able to make sense of a ten-lane superhighway next door.

So this means that there could very well be a Type III civilization in our galaxy; it just means that we’re not smart enough to find one. Now, a Type III civilization is not going to make contact by sending Captain Kirk on the Enterprise to meet our leader. A Type III civilization would send self-replicating Von Neumann probes to colonize the galaxy with robots. For example, consider a virus. A virus consists of only thousands of atoms. It’s a molecule in some sense. But in about one week, it can colonize an entire human being made of trillions of cells. How is that possible?

Well, a Von Neumann probe would be a self-replicating robot that lands on a moon; a moon, because moons are stable, with no erosion, and they remain unchanged for billions of years. The probe would then make carbon copies of itself by the millions. It would create a factory to build copies of itself. These probes would then rocket to other nearby star systems, land on moons, and create a million more copies by building a factory on each of those moons. Eventually, there would be a sphere surrounding the mother planet, expanding at near-light velocity, containing trillions of these Von Neumann probes, and that is perhaps the most efficient way to colonize the galaxy. This means that perhaps, on our moon, there is a Von Neumann probe left over from a visitation that took place millions of years ago, and the probe is simply waiting for us to make the transition from Type 0 to Type I.

The Sentinel.

Yes. This, of course, is the basis of the movie 2001, because at the beginning of the movie, Kubrick interviewed many prominent scientists, and asked them the question, "What is the most likely way that an advanced civilization would probe the universe?" And that is, of course, through self-replicating Von Neumann probes, which create moon bases. That is the basis of the movie 2001, where the probe simply waits for us to become interesting. If we’re Type 0, we’re not very interesting. We have all the savagery and all the suicidal tendencies of fundamentalism, nationalism, sectarianism, that are sufficient to rip apart our world.

By the time we’ve become Type I, we’ve become interesting, we’ve become planetary, we begin to resolve our differences. We have centuries in which to exist on a single planet to create a paradise on Earth, a paradise of knowledge and prosperity.

Evolution of the brain

From Wikipedia, the free encyclopedia
 
The principles that govern the evolution of brain structure are not well understood. Brain size does not scale isometrically (in a linear fashion) with body size, but rather allometrically. Small-bodied mammals have relatively large brains compared to their bodies, while large mammals (such as whales) have relatively small ones, a pattern similar to the allometric scaling seen during growth.
 
If brain weight is plotted against body weight for primates, the regression line of the sample points can indicate how encephalized a primate species is. Lemurs, for example, fall below this line, which means that a primate of equivalent size would be expected to have a larger brain. Humans lie well above the line, indicating that humans are more encephalized than lemurs. In fact, humans are more encephalized than all other primates.
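
The distance of a species from such a regression line is often summarized as an encephalization quotient: observed brain mass divided by the brain mass expected for that body mass. The sketch below uses Jerison's classic mammalian expectation of 0.12 x (body mass)^(2/3) and rough, order-of-magnitude masses in grams, purely for illustration rather than as data from any cited study.

# Sketch: encephalization quotient = observed brain mass / expected brain mass,
# where the expectation plays the role of the fitted regression line.
# Jerison's classic mammalian expectation is 0.12 * body_mass**(2/3), masses in grams.

def encephalization_quotient(brain_g, body_g):
    expected_brain_g = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected_brain_g

rough_masses = {            # (brain g, body g), approximate illustrative values
    "ring-tailed lemur": (23, 2_200),
    "rhesus macaque":    (90, 7_800),
    "chimpanzee":        (400, 45_000),
    "human":             (1_350, 65_000),
}

for name, (brain, body) in rough_masses.items():
    print(f"{name:18s} EQ ~ {encephalization_quotient(brain, body):.1f}")
# Humans come out far above the expectation; the lemur sits much closer to it.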

Early history of brain development

Scientists can infer that the first brain structure appeared at least 550 million years ago, with fossil brain tissue present in sites of exceptional preservation.[1]
A study of brain evolution in mice, chickens, monkeys and apes concluded that more recently evolved species tend to preserve the structures responsible for basic behaviors. A long-term study comparing the modern human brain to more primitive brains found that the modern human brain retains the primitive hindbrain region – what most neuroscientists call the protoreptilian brain. The purpose of this part of the brain is to sustain fundamental homeostatic functions. The pons and medulla are major structures found there. A new region of the brain developed in mammals about 250 million years after the appearance of the hindbrain. This region is known as the paleomammalian brain, the major parts of which are the hippocampus and amygdala, often referred to as the limbic system. The limbic system deals with more complex functions including emotional, sexual and fighting behaviors. Of course, animals that are not vertebrates also have brains, and their brains have undergone separate evolutionary histories.[2]

The brainstem and limbic system are largely based on nuclei, which are essentially balled-up clusters of tightly-packed neurons and the axon fibers that connect them to each other, as well as to neurons in other locations. The other two major brain areas (the cerebrum and cerebellum) are based on a cortical architecture. At the outer periphery of the cortex, the neurons are arranged into layers (the number of which varies according to species and function) a few millimeters thick. There are axons that travel between the layers, but the majority of axon mass is below the neurons themselves. Since cortical neurons and most of their axon fiber tracts don't have to compete for space, cortical structures can scale up more easily than nuclear ones. A key feature of cortex is that, because it scales with surface area, "more" of it can be fit inside a skull by introducing convolutions, in much the same way that a dinner napkin can be stuffed into a glass by wadding it up. The degree of convolution is generally greater in more evolved species, which benefit from the increased surface area.
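
The geometric point about convolutions can be illustrated with rough numbers (illustrative values, not measurements from any particular study): a sheet of fixed thickness packed into a fixed volume can present far more area than the smooth inner surface of the skull alone would allow.

# Rough geometric sketch of why folding buys surface area (illustrative numbers).
from math import pi

cranial_radius_cm = 8.0        # treat the cranial cavity as a sphere, roughly 2 liters
cortex_thickness_cm = 0.25     # a cortical sheet a few millimeters thick
cortex_volume_cm3 = 500.0      # rough volume budget for the cortical sheet

smooth_surface_cm2 = 4 * pi * cranial_radius_cm ** 2       # unfolded limit: ~800 cm^2
folded_area_cm2 = cortex_volume_cm3 / cortex_thickness_cm  # sheet area = volume / thickness

print(f"smooth shell area:   {smooth_surface_cm2:6.0f} cm^2")
print(f"foldable sheet area: {folded_area_cm2:6.0f} cm^2")
# Wadding the sheet up (convolutions) lets roughly 2000 cm^2 of cortex fit inside a
# cavity whose smooth surface is only about 800 cm^2.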

The cerebellum, or "little brain," is behind the brainstem and below the occipital lobe of the cerebrum in humans. Its purposes include the coordination of fine sensorimotor tasks, and it may be involved in some cognitive functions, such as language. Human cerebellar cortex is finely convoluted, much more so than cerebral cortex. Its interior axon fiber tracts are called the arbor vitae, or Tree of Life.

The area of the brain with the greatest amount of recent evolutionary change is called the cerebrum, or neocortex. In reptiles and fish, this area is called the pallium, and is smaller and simpler relative to body mass than what is found in mammals. According to research, the cerebrum first developed about 200 million years ago. It's responsible for higher cognitive functions - for example, language, thinking, and related forms of information processing.[3] It's also responsible for processing sensory input (together with the thalamus, a part of the limbic system that acts as an information router). Most of its function is subconscious, that is, not available for inspection or intervention by the conscious mind. Neocortex is an elaboration, or outgrowth, of structures in the limbic system, with which it is tightly integrated.

Randomizing access and scaling brains up

Some animal groups have gone through major brain enlargement through evolution (e.g. vertebrates and cephalopods both contain many lineages in which brains have grown through evolution), but most animal groups are composed only of species with extremely small brains. Some scientists argue that this difference is due to vertebrate and cephalopod neurons having evolved ways of communicating that overcome the scalability problem of neural networks, while most animal groups have not. They argue that traditional neural networks fail to improve their function when they scale up because filtering based on previously known probabilities causes self-fulfilling-prophecy-like biases that create false statistical evidence and a distorted worldview, and that randomized access can overcome this problem, allowing brains to be scaled up to more discriminating conditioned reflexes and, at certain thresholds, to new worldview-forming abilities. The randomization allows the entire brain to eventually get access to all information over the course of many shifts, even though instant privileged access is physically impossible. They cite evidence that vertebrate neurons transmit virus-like capsules containing RNA that are sometimes read in the neuron to which they are transmitted and sometimes passed further on unread, which creates randomized access, and that cephalopod neurons make different proteins from the same gene, which suggests another mechanism for randomizing access to concentrated information in neurons; both mechanisms would make it evolutionarily worthwhile to scale up brains.[4][5][6]

Brain re-arrangement

With the use of in vivo magnetic resonance imaging (MRI) and tissue sampling, different cortical samples from members of each hominoid species were analyzed. In each species, specific areas were either relatively enlarged or shrunken, which can reveal details of neural organization. Differences in the size of cortical areas can indicate specific adaptations, functional specializations and evolutionary events that changed how the hominoid brain is organized. It was initially predicted that the frontal lobe, a large part of the brain that is generally devoted to behavior and social interaction, accounted for the differences in behavior between hominoids and humans. Discrediting this theory was evidence that damage to the frontal lobe in both humans and other hominoids produces atypical social and emotional behavior; this similarity means that the frontal lobe was not very likely to have been selected for reorganization. Instead, it is now believed that evolution occurred in other parts of the brain that are strictly associated with certain behaviors. The reorganization that took place is thought to have been more organizational than volumetric: brain volumes were relatively the same, but the positions of landmark surface anatomical features, for example the lunate sulcus, suggest that the brains had been through a neurological reorganization.[7] There is also evidence that the early hominin lineage underwent a quiescent period, which supports the idea of neural reorganization.

Dental fossil records for early humans and hominins show that immature hominins, including australopithecines and members of Homo, had a quiescent period (Bown et al. 1987). A quiescent period is a period in which there are no eruptions of adult teeth; during this time the child becomes more accustomed to social structure and to the development of culture. This period also gives the child an extra advantage over other hominoids, allowing several years to be devoted to developing speech and learning to cooperate within a community.[8] This period is also discussed in relation to encephalization. It was discovered that chimpanzees do not have this neutral dental period, which suggests that the quiescent period arose very early in hominin evolution. Using the models for neurological reorganization, it can be suggested that the cause of this period, dubbed middle childhood, was most likely enhanced foraging abilities in varying seasonal environments. Understanding the development of human dentition therefore requires looking at both behavior and biology.[9]

Genetic factors contributing to modern evolution

Bruce Lahn, senior author at the Howard Hughes Medical Institute at the University of Chicago, and colleagues have suggested that there are specific genes that control the size of the human brain. These genes continue to play a role in brain evolution, implying that the brain is continuing to evolve. The study began with the researchers assessing 214 genes that are involved in brain development. These genes were obtained from humans, macaques, rats and mice. Lahn and the other researchers noted points in the DNA sequences that caused protein alterations. These DNA changes were then scaled to the evolutionary time that it took for those changes to occur. The data showed that the genes in the human brain evolved much faster than those of the other species. Once this genomic evidence was acquired, Lahn and his team decided to find the specific gene or genes that allowed for or even controlled this rapid evolution. Two genes were found to control the size of the human brain as it develops. These genes are Microcephalin and Abnormal Spindle-like Microcephaly (ASPM). The researchers at the University of Chicago were able to determine that under the pressures of selection, both of these genes showed significant DNA sequence changes. Lahn's earlier studies showed that Microcephalin experienced rapid evolution along the primate lineage which eventually led to the emergence of Homo sapiens. After the emergence of humans, Microcephalin appears to have shown a slower evolution rate. In contrast, ASPM showed its most rapid evolution in the later stages of human evolution, once the divergence between chimpanzees and humans had already occurred.[10]

Each of the gene sequences went through specific changes that led to the evolution of humans from ancestral relatives. In order to determine these alterations, Lahn and his colleagues used DNA sequences from multiple primates and then compared and contrasted those sequences with the human ones. Following this step, the researchers statistically analyzed the key differences between the primate and human DNA to come to the conclusion that the differences were due to natural selection. The changes in the DNA sequences of these genes accumulated to bring about the competitive advantage and higher fitness that humans possess in relation to other primates. This comparative advantage is coupled with a larger brain size, which ultimately allows the human mind to have a higher cognitive awareness.[11]
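
The comparison described above boils down to counting sequence changes along each lineage and scaling them by divergence time. The toy sketch below uses short hypothetical sequences and a round divergence time, not the study's actual data, purely to illustrate the rate calculation.

# Toy sketch of a lineage rate comparison: count differences from a reference
# sequence and divide by divergence time. Sequences and times are hypothetical.

def substitutions_per_site(seq_a, seq_b):
    assert len(seq_a) == len(seq_b)
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

reference = "ATGGCTTACGAT"          # hypothetical ancestral-like sequence
lineages = {
    "macaque-like": ("ATGGCTTACGAA", 25.0),   # (sequence, divergence time in Myr)
    "human-like":   ("ATGACTGACGTA", 25.0),
}

for name, (seq, myr) in lineages.items():
    rate = substitutions_per_site(reference, seq) / myr
    print(f"{name:13s} {rate:.4f} substitutions per site per Myr")
# In this toy setup the "human-like" lineage accumulated changes faster; scaling
# observed changes by time is the kind of rate comparison the study performed.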

Human brain size in the fossil record

The evolutionary history of the human brain shows a primarily gradual increase in brain size relative to body size along the evolutionary path from early primates to hominids and finally to Homo sapiens. Human brain size has been trending upwards for roughly 2 million years, increasing by about a factor of three. Early australopithecine brains were little larger than chimpanzee brains. Brain volume increased along the human evolutionary timeline (see Homininae), from about 600 cm³ in Homo habilis up to about 1600 cm³ in Homo neanderthalensis (male averages). The increase in brain size peaked with Neanderthals; brain size in Homo sapiens varies significantly between populations, with male averages ranging from about 1,200 to 1,450 cm³.

Evolutionary neuroscience

From Wikipedia, the free encyclopedia

Evolutionary neuroscience is the scientific study of the evolution of nervous systems. Evolutionary neuroscientists investigate the evolution and natural history of nervous system structure, functions and emergent properties. The field draws on concepts and findings from both neuroscience and evolutionary biology. Historically, most empirical work has been in the area of comparative neuroanatomy, and modern studies often make use of phylogenetic comparative methods. Selective breeding and experimental evolution approaches are also being used more frequently.

Conceptually and theoretically, the field is related to fields as diverse as cognitive genomics, neurogenetics, developmental neuroscience, neuroethology, comparative psychology, evo-devo, behavioral neuroscience, cognitive neuroscience, behavioral ecology, biological anthropology and sociobiology.

Evolutionary neuroscientists examine changes in genes, anatomy, physiology, and behavior to study the evolution of changes in the brain.[2] They study a multitude of processes including the evolution of vocal, visual, auditory, taste, and learning systems as well as language evolution and development.[2][3] In addition, evolutionary neuroscientists study the evolution of specific areas or structures in the brain such as the amygdala, forebrain and cerebellum as well as the motor or visual cortices.[2]

History

Studies of the brain began during ancient Egyptian times, but studies in the field of evolutionary neuroscience began after the publication of Darwin's On the Origin of Species in 1859.[4] At that time, brain evolution was largely viewed in relation to the incorrect scala naturae. Phylogeny and the evolution of the brain were still viewed as linear.[4] During the early 20th century, there were several prevailing theories about evolution. Darwinism was based on the principles of natural selection and variation, Lamarckism was based on the passing down of acquired traits, orthogenesis was based on the assumption that a tendency towards perfection steers evolution, and saltationism argued that discontinuous variation creates new species.[4] Darwin's theory became the most accepted and allowed people to start thinking about the way animals and their brains evolve.[4]

The 1936 book The Comparative Anatomy of the Nervous System of Vertebrates Including Man by the Dutch neurologist C.U. Ariëns Kappers (first published in German in 1921) was a landmark publication in the field. Following the evolutionary synthesis, the study of comparative neuroanatomy was conducted with an evolutionary view, and modern studies incorporate developmental genetics.[5][6] It is now accepted that phylogenetic changes occur independently between species over time and cannot be linear.[4] It is also believed that an increase in brain size correlates with an increase in neural centers and behavioral complexity.[7]

Major Arguments

Over time, several arguments have come to define the history of evolutionary neuroscience. The first is the argument between Étienne Geoffroy Saint-Hilaire and Georges Cuvier over the topic of "common plan versus diversity".[2] Geoffroy argued that all animals are built based on a single plan or archetype, and he stressed the importance of homologies between organisms, while Cuvier believed that the structure of organs was determined by their function and that knowledge of the function of one organ could help discover the functions of other organs.[2][4] He argued that there were at least four different archetypes.[2] After Darwin, the idea of evolution was more accepted, and Geoffroy's idea of homologous structures gained ground.[2] The second major argument is that of the scala naturae (scale of nature) versus the phylogenetic bush.[2] The scala naturae, later also called the phylogenetic scale, was based on the premise that phylogenies are linear or like a scale, while the phylogenetic bush argument was based on the idea that phylogenies are nonlinear and resemble a bush more than a scale.[2] Today it is accepted that phylogenies are nonlinear.[2] A third major argument dealt with the size of the brain and whether relative size or absolute size is more relevant in determining function.[2] In the late 18th century, it was determined that the brain-to-body ratio decreases as body size increases.[2] More recently, however, there has been more focus on absolute brain size, as this scales with internal structures and functions, with the degree of structural complexity, and with the amount of white matter in the brain, all suggesting that absolute size is a much better predictor of brain function.[2] Finally, a fourth argument is that of natural selection (Darwinism) versus developmental constraints (concerted evolution).[2] It is now accepted that the evolution of development is what causes adult species to show differences, and evolutionary neuroscientists maintain that many aspects of brain function and structure are conserved across species.[2]

Techniques

Throughout history, evolutionary neuroscience has been dependent on developments in biological theory and techniques.[4] The field has been shaped by the development of new techniques that allow for the discovery and examination of parts of the nervous system. In 1873, Camillo Golgi devised the silver nitrate method, which allowed for the description of the brain at the cellular level as opposed to simply the gross level.[4] Santiago Ramón y Cajal and Pedro Ramón used this method to analyze numerous parts of brains, broadening the field of comparative neuroanatomy.[4] In the second half of the 19th century, new techniques allowed scientists to identify neuronal cell groups and fiber bundles in brains.[4] In 1885, Vittorio Marchi discovered a staining technique that let scientists see induced axonal degeneration in myelinated axons; in 1950, the "original Nauta procedure" allowed for more accurate identification of degenerating fibers; and in the 1970s, several molecular tracers were discovered that are still used in experiments today.[4] In the last 20 years, cladistics has also become a useful tool for looking at variation in the brain.[7]

Evolution of the Human Brain

Darwin's theory allowed people to start thinking about the way animals and their brains evolve.
