
Wednesday, July 18, 2018

The Human Machine Merger: Why We Will Spend Most of Our Time in Virtual Reality in the Twenty-first Century

August 29, 2001 by Ray Kurzweil
Original link:  http://www.kurzweilai.net/the-human-machine-merger-why-we-will-spend-most-of-our-time-in-virtual-reality-in-the-twenty-first-century
Raymond Kurzweil’s keynote address delivered at the 2000 ACM SIGGRAPH conference in New Orleans.
 
I want to talk about the importance of virtual reality and also about how graphics will play a major role in creating it. The natural world has some interesting qualities, and we find it very endearing, but I think we can create a more interesting world, one that will become more and more compelling and realistic as we go through the 21st century. That world is going to be created by the community represented here at SIGGRAPH–the graphics community–because we're going to use our imagination to recreate earthly environments as well as fantastic environments that don't or couldn't exist in the real world.
But first, I want to talk to you about the trends that will get us there. I’ve become somewhat of a student of technology, and that’s an outgrowth of my interest in working with others to create technology. If you work on creating technologies, as most of you have, you need to anticipate where technology will be at a certain point so that your project will be feasible and useful when it’s completed, not just when you started. So you begin to anticipate where technology is going. And, through the course of doing that over a few decades, I’ve watched trends and become a student of them, have tried to anticipate them–and have actually noted some interesting trends which have remained true for the several decades that I’ve watched them–and I’ve developed some models of how technologies in different areas are developing.

This has given me the ability to try to invent things that will use the materials of the future, not just limiting my ideas to the materials we have today, as interesting as they are becoming. As Alan Kay has noted, to anticipate the future we need to invent it. So we can invent with future materials if we have some idea of what they are.

Perhaps the most important insight that I've gained–one that people are quick to agree with but very slow to really internalize and to appreciate all of its implications–is the accelerating pace of technical change. I've had this debate with many people, including a couple of weeks ago at Harvard, and one of the dialogues I've had with Bill Joy was over the dangers of technology. Even though he and I are sometimes put on opposite sides regarding the desirability of certain technologies, we actually share a vision of what will be feasible in future times.

One Nobel laureate said: Well, there's no way we're going to see self-replicating nanotechnological entities for at least 100 years. And I replied: Yes, that's actually a reasonable estimate of how much work it will take–it'll take 100 years of progress, at today's rate of progress, to get self-replicating nanotechnological entities. But the rate of progress is not going to remain at today's rate, and I'll be showing you some charts of what's happening to the rate of "paradigm shifts"; it's doubling every decade. We will make 100 years of progress, at today's rate, in 25 years. The next ten years will be like 20, the following ten years will be like 40. The 21st century will therefore be like 20,000 years of progress at the current rate. The 20th century, as revolutionary as it was, did not have 100 years of progress at today's rate; since we were accelerating up to today's rate, it really had about 25 years of progress. The 21st century will be about 1,000 times greater, in terms of change and paradigm shift, than the 20th century.
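As a rough sketch of that arithmetic–assuming, as above, that the first decade of the century delivers about 20 years of year-2000-equivalent progress and that the rate doubles every decade–a few lines of Python reproduce the "20,000 years" figure:

# Back-of-the-envelope check, not anything from the talk itself.
def progress_in_21st_century(decades=10):
    total = 0.0
    rate = 2.0  # years of year-2000 progress per calendar year, first decade
    for _ in range(decades):
        total += rate * 10  # progress accumulated over this decade
        rate *= 2           # the rate doubles each decade
    return total

print(progress_in_21st_century())  # ~20,460, roughly the "20,000 years" figure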

Now, I'd like to take you through some trends. I don't want to dwell on this, because I really want to talk about the implications–what we will make of all of this technology, and how we will reshape our experiences and ultimately the definition of who we are, with a lot of help from the graphics community. It's actually interesting how this little special interest group of the ACM has grown into a very powerful force. But I think it's a significant portent for the future, because we are going to be recreating our world, and our visual world, combined with our auditory world, is the environment that we live in. And we can create virtual reality. As you saw, the person creating the motions of Kermit really projects himself as a different person. We can actually be different people in virtual reality; we don't have to be stuck with the same old body that we have in real reality.

But, let me show you a few of these trends. Now, a lot of these trends stem from thinking about the following question: What is Moore's Law? Some people are saying, "Well, Moore's Law is going to come to an end." Moore's Law has also become a synonym for the exponential growth of computing.

I've been thinking about Moore's Law, actually, for at least 20 years: What is the real nature of this exponential trend? Where does it come from? Is it an example of something deeper or more profound? Randy Isaac says that it's just a set of industry expectations now: we kind of know what to expect, so we know where memory chips and processing chips need to be four years from now to be competitive, and it becomes a self-fulfilling prophecy.

But, I wondered, is there something more fundamental going on? I mentioned earlier the intuitive linear view of the future, in which people assume that the future is going to continue rolling out the way it has been, even though most of us have been around long enough to see that progress is growing exponentially. If I say things are accelerating, people are quick to agree with that. They nonetheless assume that 50 years from now, we’re going to see 50 years of progress. People in all different fields with some idea of how long it takes to do things will respond to a difficult problem by saying, “Well, that’s going to take 50 years.” What they don’t realize is that we can see 50 years of progress in less than 20 years.

Moore's Law is a double exponential trend, and the exponential growth of computing goes beyond Moore's Law–this double exponential trend applies to every area of information-based technology, technology that will ultimately reshape our world.

A lot of people say, "Well, okay, Moore's Law is true for hardware, but it's not true for software." I don't agree. Here is an example using speech recognition software. These are actual, competitive products from one of my companies. We went, in 15 years, from a $5,000 package that recognized 1,000 words poorly, without continuous speech, to a $50 product with a 100,000-word vocabulary that's much more accurate.
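To make that concrete, here is a hedged back-of-the-envelope calculation; it treats vocabulary words per dollar as a crude price-performance metric for the two products just mentioned (it ignores accuracy and continuous speech, so it understates the real gain):

import math

old = 1_000 / 5_000      # ~0.2 words per dollar (the $5,000 package)
new = 100_000 / 50       # 2,000 words per dollar (the $50 product)
improvement = new / old  # 10,000x over 15 years
doubling_time_years = 15 / math.log2(improvement)
print(improvement, round(doubling_time_years, 1))  # 10000.0, ~1.1 years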

That's typical for software, and software productivity has also been growing exponentially. Now, to consider Moore's Law itself: as I mentioned, people are predicting that that particular paradigm will end. Moore's Law says that the size of a transistor is cut in half every 24 months, so you can put twice as many on a chip, and they also run twice as fast–so that's actually a quadrupling of computing power.

And within, I'd say, 12 to 15 years–that was the estimate of the Intel managers I was on a panel with at Agenda 2000–we'll run out of space, because the key features will be only a few atoms in width.

So will that be the end of the exponential growth of computing? That's a very important question if you're wondering about the 21st century. I put 49 famous computers on an exponential graph. Down at the lower left-hand corner is the computer that was used in the 1890 American census–calculating equipment using punch cards. 1940 is the year of the machine that Alan Turing developed to crack the German Enigma code, which gave Winston Churchill a transcription of nearly all the Nazi messages. He then ignored them, because he realized that if he used them–for example, if he had warned Coventry that it was going to be bombed–the Germans would see the preparations and realize that their code had been cracked.

So he virtually didn't use that information until the Battle of Britain, when suddenly the English flyers seemed to know magically where the German flyers were at all times and were able to win the battle; otherwise we wouldn't have had a staging point for the D-Day invasion.

1952 is the year of the machine that CBS used to predict the election of a U.S. president, President Eisenhower. In the upper right-hand corner is the computer you just got for your latest graphics experiment.



And one thing we can see on this is that Moore's Law was not the first but the fifth paradigm to provide exponential growth of computing. Each of these different colored areas is a different paradigm: electromechanical, relay-based, vacuum tubes, transistors, integrated circuits. Every time a paradigm ran out of steam, another paradigm came along and picked up where the last one left off.

People are very quick to criticize exponential trends, saying that ultimately they’ll run out of resources, like rabbits in Australia, but every time one particular paradigm ran out of steam, another completely different approach was able to continue the exponential growth. They were making vacuum tubes smaller but finally got to a point where they couldn’t make them smaller anymore and maintain the vacuum. Transistors came along, which are not just small vacuum tubes, they’re a completely different paradigm.

You can see on this exponential graph that every level represents a multiplication of computing power by a factor of 100. Most of the charts I'll show you are exponential graphs. A straight line on an exponential graph means exponential growth. That is actually a slow exponential: there's exponential growth in the rate of exponential growth. We doubled computing power every three years at the beginning of the century, every two years in the middle, and we're now doubling it every year.

It's obvious what the sixth paradigm will be; the sixth paradigm will be computing in three dimensions. After all, we live in a three-dimensional world, and our brain is organized in three dimensions. The brain actually uses a very inefficient type of circuitry: neurons are very large "devices," they're very slow, and their electrochemical signaling supports only about 200 calculations per second–but the brain is organized in three dimensions, and there are already three-dimensional computing technologies. I've seen a number of them; they're working in laboratories. There's one at the MIT Media Lab that has 300 layers of circuitry. Nanotubes, which are my favorite, are little hexagonal arrays of carbon atoms that you can coax into doing any type of electronic circuit; you can make the equivalent of transistors and other electrical devices. There's superconductivity. They're very strong. They're impervious to heat, so they don't have the thermal problems you'd expect when you pack them in three dimensions. And a one-inch cube of nanotube circuitry would be a million times more powerful than the computing capacity of the human brain. But you'll notice that that curve is a double exponential, and I'll come back to projecting it into the 21st century. These are other exponential graphs you've seen–transistors per chip–and I'm just going to show you these quickly; you don't have to be able to see all the details on the charts, they all go like this.

These are exponential charts showing the pervasive aspect of the exponential growth of technology–and I’ll come back to sharing some ideas why technology inherently grows exponentially but this is brain scanning,

this is brain scanning resolution.

That's important to some of the scenarios I want to talk to you about regarding virtual reality. Brain scanning speed, image reconstruction time, genome scanning–when the genome project was started about 12 years ago, people did not consider it to be a project that was likely to succeed. Skeptics said, "At the rate at which we can scan the genome, it will take 10,000 years to finish the project, and, okay, maybe there'll be some improvements, but there's no way they'll finish in 15 years."


But, in fact–and this is an exponential graph–we've gone, in just 10 years, from a cost of $12 per base pair to a fraction of a tenth of a cent.

Human gene mapping, nanotechnology–all of these are exponential trends. Telecommunications has grown exponentially; everyone's aware of it. I want to show you why people are aware of it now, but I also want to show you why people were not aware of it until recently.

Now, this is an exponential graph, and this is interesting because you see the cascade of S curves. Typically, any specific paradigm–and Moore’s Law is a paradigm–will grow exponentially for a while and then will level off when it reaches its limits. And then some paradigm shift, some innovation, allows that exponential growth to take off again. And sometimes, if you have enough data, you can actually see the cascaded S curves as we go from one technology to another.


But this graph shows double exponential growth in bits per second, per dollar, based on ISP costs. Now, these two charts here are the same data, but this is a linear chart.

We experience the world in the linear domain, that’s how we experience change. So, up until about 1997 or 1998, nobody really noticed the telecommunications revolution because it was kind of stuck at approximately zero.

If you tracked the technology and looked at it, you saw these trends coming, but this is how we experience change in the linear domain. Again, this chart shows modem costs. On an exponential graph you see exponential growth–it's quite predictable–but this is how we experience it: sudden, explosive growth. This chart shows the Internet backbone. These are all the same phenomena. Here's the explosion of the web, as measured in web servers–you can take different types of measurements of the web and they all show the same thing.

But here’s what I was looking at up through, say, 1988. In the eighties, it was quite clear that the early precursors of the web (although it wasn’t called that then) were growing exponentially and would then become really obvious by the mid-1990s. And sure enough, 1995, suddenly the web seemed to come out of nowhere, but it was really predictable, if you looked at the exponential trend.

Memory costs, again, same phenomenon.


I’ll just go through these quickly. Here’s our exponential growth for computing. If we project this chart into the 21st century we see that, right now, your typical $1000 PC is somewhere between an insect and a mouse brain.

Of course, mouse brains are very well-optimized for doing mouse tasks, but we’re learning about that.

The human brain has about 100 billion neurons, with about 1,000 connections from one neuron to another. They operate very slowly, at 200 calculations per second, but 100 billion times 1,000 is 100 trillion-fold parallelism, and times 200 that's 20 million billion calculations per second, or 20 billion MIPS. We'll have 20 billion MIPS for $1,000 around the year 2020. But it won't be organized in these rectangular boxes or even little palmtops. I'll talk about the shape of computing, which will basically be invisible well before that time, but by 2020, $1,000 of computing will equal that 20 million billion calculations per second.
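Spelling out that arithmetic (these are the talk's rough estimates, not measured figures):

neurons = 100e9                      # ~100 billion neurons
connections_per_neuron = 1_000       # ~1,000 connections each
calcs_per_connection_per_sec = 200   # ~200 calculations per second

total_cps = neurons * connections_per_neuron * calcs_per_connection_per_sec
print(f"{total_cps:.0e} calculations per second")  # 2e+16, i.e. 20 million billion
print(f"{total_cps / 1e6:.0e} MIPS")               # 2e+10, i.e. 20 billion MIPS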

Now, that won't automatically give us human levels of intelligence–the organization, the software, the content, and the embedded knowledge are equally important. I will tell you how I think we will achieve those things, but we will have the requisite computing power. By 2050, $1,000 of computing will equal 10 billion human brains–that might be off by a year or two–but the 21st century won't be wanting for computational resources.
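A sketch of how that 2050 figure falls out of the same assumptions–$1,000 buying roughly one human brain's worth of computation (about 2 x 10^16 calculations per second) around 2020, and price-performance continuing to double every year, both of which are the talk's estimates rather than data:

import math

brain_cps = 2e16                     # one human brain, per the estimate above
target_cps = 10e9 * brain_cps        # ten billion human brains
doublings = math.log2(target_cps / brain_cps)  # ~33 doublings
print(2020 + round(doublings))       # ~2053, in the neighborhood of the 2050 claim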

And this growth has been growing the economy–and this is a whole other issue. I feel that the models we use to plan our economy, the models the Fed uses to raise interest rates, are completely ignoring the very powerful deflationary forces I've just shown you. All of their models have things like capital investment and energy prices, but they don't have things like MIPS, megabytes, and bandwidth–the things that are really driving our economy. Software. Knowledge is growing exponentially.

But this is an exponential chart showing that the economy has been growing exponentially. The little blips you see there are the Depression, followed by the post-World War Two boom. The little recessions we have are very small perturbations in what is basically an exponential trend.

This is what it looks like on a linear chart. This is per capita.

This is the growth of the economy, which has been growing exponentially over the last 10 years. And what we've been doing with all this technology is basically automating tasks at the bottom of the skill ladder. In 1800, the Luddites–the name of a society of weavers whose livelihood had been sort of stood on its head by these new automated machines–looked around and saw one person doing the work of what used to be 10 or 20, and it seemed apparent to them that soon employment would be enjoyed only by an elite.

The Luddite movement was suppressed and went beneath the surface (although the term still remains a symbol of the dangers of technology). What the original Luddites didn't realize is that we're never really satisfied. I mean, we're not satisfied to have 800 by 600 resolution. We're not satisfied with just having two-dimensional images. We're never satisfied. People were not satisfied with owning just one shirt; suddenly people wanted a whole wardrobe of shirts. The common man and woman wanted well-made clothing, available at low cost, for the first time.

So productivity went up, new industries were created to create these machines; we eliminated jobs at the bottom of the skill ladder and created new jobs at the top of the skill ladder. The skill ladder moves up. We’ve been growing our investment in education exponentially.

I won't dwell on the details of these charts. Even longevity: in the 18th century, every year, we added a few days to human life expectancy. In the 19th century, we added a few weeks, every year, to human life expectancy–so this is double exponential growth. We're now adding about 150 days, every year, to human life expectancy,

and with the revolutions coming in genomics, proteomics, therapeutic cloning, rational drug design, and the other biotechnology revolutions, within 10 years we'll be adding more than a year, every year, to human life expectancy. So, if you can hang in there for another 10 years (don't spend all of your time in the French Quarter!), this will be the increase in human life expectancy: we'll get ahead of the power curve and be adding more than a year every year, within a decade.

Miniaturization is another very important trend. We're making things smaller. Bill Joy, in his critique of technology (I don't know how many of you are familiar with the Wired cover story and the discussion that has followed it), has, as one of his recommendations, that we simply forgo nanotechnology. But nanotechnology is not one field worked on only by nanotechnologists. Nanotechnology is just the inevitable end result of the pervasive trend toward making things smaller, which we've been following for decades and centuries. We're currently shrinking technology by a factor of about 5.6 per linear dimension per decade. This chart shows the shrinking of electronic and mechanical devices. It's the same trend–these are exponential trends toward the smaller.


Paradigm shift rates: Again, this chart shows the time it has taken–again, with a small amount of time at the top of the chart–for different inventions to reach mass use, defined as use by a quarter of the U.S. population, from electricity and the telephone, up through radio and TV, and finally to the Internet. The adoption of new methods, new paradigms, is getting faster and faster–I have mathematical models of this–we're doubling the paradigm shift rate every ten years, and that has very profound effects on our view of the future.



This is, I think, a really interesting chart, because it shows double exponentials–these are exponentials on both axes–and it's basically the amount of time it took for a paradigm shift. The first paradigm shifts, starting with biological evolution–cells, DNA–took billions of years. Then, in the Cambrian explosion, when all the different animal body plans were developed by evolution, a paradigm shift took only a few tens of millions of years; for humanoids, a paradigm shift took only a few million years; and for Homo sapiens, a few hundred thousand years. And at that point, the cutting edge of evolution actually moved away from biology. Biological evolution continues, but the cutting edge, in terms of the creation of more intelligence and greater knowledge on Earth, moved from biological evolution to human-directed technology, because we as a species have a species-wide knowledge base. Other species create tools, but they don't remember them, they don't record them, and they don't use one set of tools to create the next set of tools.

The first technological paradigm shifts took tens of thousands of years: stone tools, fire, the wheel. We remembered these tools, and we used them to create the next generation. So 1,000 years ago, a paradigm shift took only a few hundred years. Now a paradigm shift takes only a few years' time; there's hardly a week that goes by that I don't hear about a new business model. But it's interesting how technological evolution has followed, very precisely, biological evolution. It really is a continuation of the same evolutionary process.

Now, people sometimes will argue, saying, well, no, development of these body plans didn’t take 10 million years, it took 40 million years, and the worldwide web wasn’t four years, it was eight years. And you can fool around with these numbers. You still get a plot that looks very much the same. I mean, we didn’t develop cells in four years, it was on the order of billions of years, and the worldwide web didn’t take millions of years.

So I’d like to put some of these technologies together for you, and talk about what we will see. And I’ll come back and hopefully have time to show you a couple of contemporary things that I’m involved in. But, let’s talk about, first, ten years from now, or eight or nine years from now. First of all, computers will disappear; they’re already getting smaller, but as they get smaller we’re taking a step backward. These devices with tiny little screens and keyboards that you can’t really use are a step backward in many ways. We really want to interact with technology in a very seamless way. And I believe that computing will basically be invisible by the end of this decade. Images will be written directly on our retina–many of you may have tried early prototypes, some of which are here at this show, of devices that can basically display images if you put these glasses on.

The technology today is not really viable for walking around in an everyday sort of immersion, but by the end of the decade our contact lenses and glasses will be writing images onto the retina. Of course, today you can get a high quality screen, but more interestingly, you can have that screen expand to encompass your whole visual field of view. It's pretty straightforward technology to track your eye and head movements, and thus provide full immersion visual virtual reality. The auditory aspect is even easier.

It's actually an interesting human problem, how we get the sound into the ear; we struggle with that today with cell phones, with people holding things up like this or sticking something in the ear with a cord coming out. I think we'll have to find more viable solutions. Ultimately, we're going to get rid of all these wires–I mean, I'm wired up right now, and wires are a real pain in the neck. So we'll have a tiny little device you can stick in your ear, and these devices will all talk to each other. We'll develop something more effective than Bluetooth, but even that's a significant step forward.

But we will be able to be in visual and auditory virtual reality at all times. We'll have a high bandwidth connection to the Internet at all times. Interestingly, Europe is actually significantly ahead of us: with their GSM technology they're going to have 180,000-bit-per-second wireless connections to the Internet within a few months, and within a year, whole European countries will have this wireless Internet. And I've seen projects and business plans where people are going to be giving away, for free, very powerful computers with high quality visual displays, with the business model driven by e-commerce. People will be wired all the time, and that's just coming in a year or two.

So certainly by the end of this decade, we'll have a very high bandwidth connection to the Internet at all times, and we'll be in full immersion visual and auditory virtual reality. The electronics for all of this will be invisible–so small that they'll be in the glasses or woven into your clothing–and websites will be virtual reality environments: going to a website will mean entering a virtual reality environment. These will be shared environments, so you can go with a few thousand of your closest friends, or one close friend, and you don't have to have the same body, as I mentioned earlier, that we have in real reality–you can have different bodies for different sorts of occasions. You can be Kermit the Frog. And this is an example of a virtual reality projection of a person on a flat screen; it's just a step, but we will be able to have that person enter a shared, three-dimensional, virtual visual environment. So you can be someone else and have any kind of interaction with anyone, except that you can't touch them, not with that kind of facility of just walking around. We will always be plugged in and connected to these virtual environments, and we will go into them very frequently.

Now, we've had a form of virtual reality for 100 years, which is the telephone. You may not think of that as virtual reality, but it was amazing to people in the 19th century when it was first introduced. It had never happened before in human history that it could be as if you were actually with someone else even though they were in Chicago. That was quite a revelation, at least for the auditory sense–but we can communicate quite effectively with the auditory sense.

And people very often say, “Well, if virtual reality’s going to be so important in the 21st century, nothing is real and we’re going to be in these imaginary worlds, and the responsibility for our actions seems to disappear.” But that’s not true. We’ve had this form of auditory virtual reality, and things we do on the telephone are real–we make real agreements. You can’t say, “Well, I didn’t really agree to that, that was just on the telephone.”

So, we will have real experiences with people in virtual reality. And of course, ten years from now we’ll have forms of tactile virtual reality. They exist today, there are haptic interfaces for surgery and also for games, but they’re not full immersion. We’ll probably have some experimental full immersion sort of body suits by 2009. But that’s not the really interesting way to create full immersion tactile virtual reality.

By 2009, we'll have full immersion visual and auditory virtual reality, and we'll have some limited (not full immersion) tactile virtual reality. Let's go out to 2030 and put together some of the trends I've talked to you about.

By that time, we'll be able to send little nanobots–microscopic-sized robots–that can go inside the capillaries, travel through your brain, and scan the brain from inside. I showed you the chart of brain scanning resolution, speeds, costs–all of these are exploding exponentially. With every new generation of brain scanning, we can see with finer and finer resolution. We can already see the connections between neurons, in certain instances, with brain scanning.

There's a technology today that you can use to scan and see all of the salient neural details. Of course, there's not agreement on what those details are, but we can see with very high resolution, provided the scanning tip is right next to the neural features. So we could scan my brain today and see everything that's going on; you'd just have to move the scanning tip all throughout my brain so that it's in close proximity to every neural feature.

Now, how are we going to do that without making a mess of things? The answer is to send the scanners inside that brain. By design, our capillaries travel by every single connection and every neuron and every single neural feature. We can send billions of these scanning robots, all on a wireless local area network, and they would all scan the brain from inside and create a very high resolution map of everything that’s going on.

Now, what are we going to do with the massive database that develops? Well, one thing we can do is reverse engineer the brain and understand the basic principles of how it works, and that's something we're doing already. Some people challenge this, saying, "But there's no way you're going to be able to understand all this data." The same challenge came up in the genome project. And of course, we have that genome data now. It's going to take us quite a while to understand it, though; having the data doesn't equal understanding.

But we do have high-resolution scans of certain areas of the brain. The brain is not one organ; it's several hundred specialized regions, each organized differently, and we have scanned certain regions–certain areas of the auditory and visual cortex. Carver Mead at Caltech, for example, has developed powerful digitally controlled analog chips based on these biologically inspired models from the reverse engineering of the visual and auditory cortexes; they operate similarly, and they're used in high-end digital cameras.

We can understand these algorithms. They're very different from the algorithms we're used to using. They're not sequential, they're not logical; they're chaotic, they're highly parallel, they're self-organizing. They have a holographic nature in that there's no chief-executive-officer neuron. You can cut any of the wires, you can eliminate any of the neurons, and it really makes no difference–the information and the processes are distributed throughout a whole complex region.

Based on these insights, we have some biologically inspired models today. This is the area that I work in; areas such as evolutionary (genetic) algorithms use these biologically inspired models. They are mathematically simplified. But as we get a more powerful understanding of how these different brain regions work, we can develop much more powerful biologically inspired models, and ultimately create and recreate processes that are built on the same methods–massively parallel, digitally controlled analog, chaotic, self-organizing–and recreate the types of processes that occur in the hundreds of different brain regions, and create entities–they actually won't be in silicon, they'll probably use something like nanotubes–that have the complexity and the richness and the depth of human intelligence.
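As a minimal illustration of what one of these biologically inspired, self-organizing models looks like in practice, here is a toy genetic algorithm; the fitness function and parameters are illustrative choices only, not drawn from any actual system described here:

import random

TARGET = [1] * 20  # toy goal: evolve an all-ones bit string

def fitness(genome):
    # count how many bits already match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # combine two parents at a random cut point
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # keep the fittest as parents
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

print(generation, fitness(population[0]))  # usually converges in a few dozen generations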

Our machines today are still a million times simpler than the human brain, which is why they don't have all of our endearing qualities–things like our ability to get the joke, to be funny, to understand and respond appropriately to emotion, and to have spiritual experiences. These are not side-effects of human intelligence, or distractions; they are the cutting edge of human intelligence. And anyone who does graphics work understands that to create something that's natural, it has to have a human feel to it. This is really at the cutting edge of technology, and it will require a technology with the complexity of the human brain to create entities that have those kinds of attractive and convincing features.

But we'll have that–by 2030, $1,000 of computation will be 1,000 times more powerful than the human brain, and we'll have had, for some time by then, very high resolution scans of how the human brain works.

Now let’s go back to virtual reality, and these same nanobots. And, by the way, these are conservative scenarios. If you look at the trends in terms of miniaturization, in terms of both mechanical devices and electronic devices, there are already very tiny, pretty sophisticated robots being built. These nanobots don’t have to actually be all that complex. They don’t even necessarily have to navigate. They could actually just move involuntarily through the bloodstream and, as they travel by different neural features, they can communicate with them, the same way that we communicate, now, with different satellites when we’re driving around, or with different cell phone stations as we change cells.

Let's go back to a scenario involving a direct connection with the human brain through these nanobot-based implants. There's another technology today, called neuron transistors, where electronics can communicate, in both directions, with biological neurons. If a neuron fires, the neuron transistor detects that electromagnetic pulse, so that's communication from the neuron to the electronics. It can also cause the neuron to fire, or suppress it from firing, and it doesn't have to stick a wire into the neuron; the connection is wireless, though it has to be physically proximate to the neuron. That's something that exists today.

For full immersion virtual reality, we will send billions of these nanobots to take up positions by every nerve fiber coming from all of our senses. If you want to be in real reality, they sit there and do nothing; if you want to be in virtual reality, they suppress the signals coming from your real senses and replace them with the signals you would have been receiving if you were in the virtual environment. And the virtual environment is created courtesy of the graphics profession, which will probably encompass more than half of the computer field by that time, because we're going to be recreating these virtual environments.

In this scenario, we have virtual reality from within, and it can recreate all of our senses. These will be shared environments, so you can go there with one person or many people, and going to a website will mean entering a virtual reality environment encompassing all of our senses–not just the five senses but also emotional correlates: the reactions we have, emotional reactions, sexual pleasure, humor. There are actually neurological correlates of all of these activities–I talk about them in my book. For example, there was a particular spot in one girl's brain that, when stimulated during open brain surgery (she was awake), made her start to laugh. The surgeons thought at first that they were just stimulating some involuntary laugh reflex.

But they actually discovered that they were stimulating the perception of humor; whenever they stimulated this spot, she found everything hilarious. "You guys are just so funny standing there" would be a typical remark. So you can actually identify and enhance or modify our emotional responses to different experiences, and that can be part of the overlay of these virtual reality environments. And again, you can have different kinds of bodies for different experiences. Just as people today project their fuzzy little images from webcams in their apartments–and there are tens of thousands, if not hundreds of thousands, of people whose lives you can "peer into" on the web today–people will beam their whole flow of sensory and even emotional experiences, so you can, a la Being John Malkovich, experience the lives of other people. Now, most of those experiences are probably pretty dull at any one moment, so people will archive the more interesting experiences, and you can experience those.

Ultimately, these same nanobots will expand human intelligence and our abilities and facilities in many different ways because they can communicate with each other without actually having to have any kind of physical connection. Because they’re talking to each other wirelessly, they can create new neuron connections. So ultimately, you can describe how these can expand human memory, expand our cognitive faculties, expand our pattern recognition abilities, be able to do things like download knowledge and experiences, and ultimately, expand human intelligence enormously.

When you talk to somebody in the year 2040, you will be talking to someone who may happen to be of biological origin but whose mental processes are a hybrid of their biological thinking processes and electronic processes in their brain that are working very intimately together. We already have a very intimate connection between the thinking of the human species and all the computation that is going on, and our biological thinking is still substantially more powerful than our non-biological thinking, but the biological thinking is flat; the non-biological thinking is coming up like this, as shown on this chart.

The crossover point–and some people call this the singularity–is in the 2020s-to-2030 kind of region. When you get to 2040, you're definitely past it: the bulk of human thinking is non-biological.

To this, people often say, "Well, this is not a very pleasant vision; you're replacing humanity with these machines." But that's because we have a prejudice against machines. We don't really understand what machines are capable of, because all of the machines that we've "met" are very uninteresting compared to people. But when machines are derived from human intelligence and are a million times more capable, we'll have a different respect for machines, and there won't be a clear distinction between human and machine intelligence–there's going to be a merger.

We're already really well along that way. If all the machines in the world stopped today, our civilization would grind to a halt. That wasn't true as recently as 25 years ago. In 2040, human intelligence and machine intelligence will be very deeply and very intimately melded. But all of this is an expression of human civilization–it's all coming out of our civilization, and it's all a continuation of the exponential curve I showed you. It's what's necessary to keep the exponential growth of the rate of paradigm shift going. And we will become capable of far more profound experiences of many diverse kinds. We'll be able to "recreate the world" according to our imaginations, and that, I think, is all one huge graphics challenge. Well, thank you very much.

(End of presentation; the following is an extended presentation given the next day.)

Welcome. My name is Kurt Akeley; I'm the chair of this year's papers program, and it's my distinct pleasure to welcome you to a very special, special, special session here at lunch today. As many of you know, yesterday Ray Kurzweil gave an outstanding introduction to the conference. His keynote speech was extremely well received–it's provocative, it's thoughtful–and what we noticed is that he didn't really have enough time to work through all the things he had to talk about. Since it turned out he was staying here for the rest of the conference, we asked if he'd be willing to spend another hour with us, and he said he was.

So that’s what we’re here today to do. We have the opportunity to perhaps interact a little more, hear a little bit more about what Ray has been doing of late. And so without further ado let me reintroduce you to Ray Kurzweil, author, inventor, pioneer and truly great thinker.

RK: Thanks. I’m enjoying the conference, and I asked them to turn the lights up so I could actually see you. Even though I knew you were all out there yesterday, it’s a little disconcerting to speak to a “black sea.”

I thought we'd have more of an informal session. I'd like to share with you some other ideas that pick up from what I talked about yesterday, and then have some dialog about those issues and other things you'd like to talk about. I think this is a particularly propitious conference for me to be speaking at because of the emphasis on virtual reality and the steps being taken with contemporary technology, which are impressive; I think it's going to be a profound force as we go forward. As you can experience in the interactive exhibits here, virtual reality is a form of environment that you can interact in, with somewhat intelligent objects and with other people.

As we consider the trends that I talked about yesterday, remember that we experience technology and change in the linear dimensions but the trends are in the exponential dimension, so they become explosive once they reach the knee of the curve.

These technologies, although impressive, don't really compete with many aspects of real reality; but they will overcome those limitations, and as we go through this century, shared virtual environments will be a place where we spend a lot of time interacting with each other.

I wanted to cover three things and then have a dialog. One would be to flesh out a little bit more the virtual reality scenario of 2030–excuse the pun–which I didn't get the opportunity to explore fully yesterday. I'd like to then jump back to some contemporary technologies and talk about projects I'm involved in today. And then I'd like to touch on a couple of philosophical implications, jumping off in two directions.

One direction is a sort of ethical direction, on the desirability of these types of technologies, which has jumped into the public arena partly prompted by a cover story in Wired magazine by Bill Joy. Bill Joy and I have been engaging in some dialogues, and people seem to have paired us as pro and con on future technology, which is certainly an oversimplification. Very often, I end up in these forums defending Bill Joy, because his perspective on the dangers and destructive potential of technology is very often attacked as not being a feasible scenario.

I believe they are feasible, and that there are dangers, and that is going to be a major concern of the 21st century. However, I come out in the end saying that the dangers are worth it. But that bears some discussion.

The other philosophical direction is this issue of consciousness. If we have these very complex entities called humans that we consider to be conscious, what about complex non-biological entities, and how does that relate to consciousness? Is it inherent that only biological entities can be conscious? Just consider some of those implications.

Well, that’s a lot of issues and we’ll only be able to touch on them, but I would like to say a few things about them. Now, to “come back” to 2030, let me describe that scenario again and consider that a little more carefully.

We will have, by 2030–actually well before that; these are conservative scenarios–nanobot technology: little microscopic-sized, blood-cell-sized or smaller micro-robots or nano-robots–nanobots, I call them–that can travel through the bloodstream. They actually don't have to have a lot of robotic capabilities. In fact, you can develop a scenario where they don't have any; they just move around the way red blood cells do, without any locomotion or navigational capabilities of their own. And as they travel by salient points in the brain, through the capillaries, they can communicate while they're nearby. And if there are enough of them, there will always be some nearby, kind of like asynchronous satellites or asynchronous cars, traveling through different cell regions.

Or you can imagine that the nanobots take up permanent positions in different parts of the brain, in the capillaries. The capillaries, by design–since all of the brain requires nutrition–travel by every salient neural feature, and that will be the most effective way of scanning the brain. So we can have all of the nanobots travel by all of the salient neural features. And as I mentioned, there's already today–actually at the Weizmann Institute, in Israel–a very high resolution neuroscanning technology that can pick up extremely fine-resolution neural features, provided the scanning tip is right next to them. But it doesn't have to be touching them.

So if you could make the scanners, using that technology, small enough, you could send them through the bloodstream and pick up those features. And as I mentioned, we’d be able to use that information then to recreate and reverse engineer the processes that take place in the brain.

And this is something we've already done. We have very high-resolution maps of a few of the several hundred regions of the brain, particularly in the sensory-processing pattern-recognition areas such as visual processing and auditory processing. And we have reasonable models of how early auditory processing and visual processing work. When that information from neurobiology became available, we factored those auditory transformations into our speech recognition work, and all of the competitive speech recognition today incorporates those models of early human auditory pre-processing; that has provided a significant jump in the accuracy of speech recognition.

There are many other examples of that. We’re already using the early stages of brain reverse-engineering. Now, the virtual reality scenario is that the nanobots take up positions by every nerve fiber coming from all the senses, or, alternatively, we have lots of them and they’re constantly swinging by, so you always have some nanobot that’s near every sensory fiber coming from all of the senses. And again, we have the technology today. Obviously there are significant engineering issues involved in this, but we’re talking about 30 years from now.

And just to remind you, with regard to the ideas I discussed yesterday, the pace of technical progress is accelerating, so we'll make 20 years of progress in the next 10 years, 40 years in the decade after that, and 80 years in the decade after that–that's 140 years of progress, at today's rate of progress, over the next 30 years. We didn't make 100 years of progress in the 20th century, because we were accelerating up to today's rate of progress; we made about 25 years of progress, at today's rate, in the 20th century. So just the next 30 years represent 140 years of progress at the year-2000 rate, versus 25 for the entire 20th century. That's a factor of almost six to one in terms of what we'll accomplish over the next 30 years.

And that's a very significant factor. It's remarkable to me how many otherwise thoughtful people who comment on the future–people who very often have good intuitions about how long things will take in their own field of study; if we talk about a certain piece of genetic progress with a biology professor, he or she will have a good idea of how long it will take to accomplish–fail to consider that that rate is not a constant.

Thirty years from now, we'll be able to have these nanobots; the miniaturization trends I showed you will permit this. We can almost build these kinds of circuits today. We can't make them quite small enough, but we can make them fairly small, with something called Smart Dust, developed by the Department of Defense. The current generation being built now is one millimeter–that's too big for this scenario–but these one-millimeter devices, which are pretty small, can actually be dropped from a plane; they can fly, find positions with great precision, and you can have thousands–not billions, but thousands–of them on a wireless local area network. They can then take visual images, communicate with each other, coordinate, send messages back, act as little spies, or accomplish other military objectives.

Those devices are of comparable complexity to what I'm talking about; they're just too big. But the miniaturization trends indicate that the scenarios I'm describing are quite conservative for the year 2030.
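As a hedged sanity check on that claim, take the figure mentioned yesterday–technology shrinking by a factor of roughly 5.6 per linear dimension per decade–and assume a one-millimeter Smart Dust device needs to reach roughly blood-cell scale (the target size of about 8 microns is an assumption for illustration):

import math

shrink_per_decade = 5.6   # linear shrink factor per decade (from the talk)
start_size_m = 1e-3       # ~1 millimeter Smart Dust device today
target_size_m = 8e-6      # ~8 microns, roughly a red blood cell (assumed target)

decades = math.log(start_size_m / target_size_m) / math.log(shrink_per_decade)
print(round(decades, 1))  # ~2.8 decades, i.e. roughly 2030 starting from 2000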

So, the nanobots take up positions by every neuro-fiber, and if you want to be in real reality, they sit there and do nothing. If you want to be in virtual reality, they shut down the signals coming from our real senses and replace them with the signals that you would be receiving if you were in that virtual environment.

And this is, again, using technology which at least in some crude form exists today. We have neuron transistors, which I mentioned yesterday, which can communicate with neurons wirelessly; they don't need to stick a wire into the neuron. The way neurons work is that they gather signals from, on average, 1,000 connections coming into the neuron. They process the signals in a certain non-linear way, and if the result exceeds a threshold, the neuron fires. And this all-or-nothing firing–it's kind of a digitally controlled analog system–we find throughout the brain, in all the different regions of neural circuitry that we've looked at.

And when the neuron fires, this neuron transistor, which is an electronic device, can detect that electro-magnetic pulse. Conversely, the neuron transistor can cause a neuron to fire or suppress it from firing. So you have two-way communication between the electronic world and the neural world.

Say you want to enter virtual reality. The nanobots shut down the signals from the real senses and replace them with the signals you would be receiving in the virtual environment. And then you can be an "actor" in this virtual environment. If I decide to move my hand in front of my face, the nanobots suppress the signals going to my real muscles and cause my virtual hand to move in front of my face, so I can move around, I can be an actor, and I would see my hand coming in front of my face. If I go like this, I feel the tactile sensations, I hear it, and other people in that same shared environment would see me do this; if we touch each other, we would feel each other. It would be just like real reality, except you wouldn't have to be physically proximate to share an environment.

And that's going to be the task of the graphics community: to recreate every earthly environment that we can "find," and to create every imaginative environment we can imagine. There's really no limit to that; it's going to be a major form of "artistic expression."

Of course, you don’t have to have the same body that you have in real reality, and we saw a good example of that yesterday, where an actor in the back–okay, it was two people, but that’s just a limitation of today’s technology; you certainly could imagine this being done with one person–was projecting her physical presence in another form. And so, you’ll be able to be Kermit or anyone else that you’d like to be, in these other environments.

People will be able to beam their full flow of sensory experiences onto the web, and you’ll be able to then plug into that. You’d have to map one person’s sensory experiences onto your own–that’s not really that hard to do, since we’re all pretty similar–but you’d be able to even map a man onto a woman or even do cross-species mappings and have some sense of what it’s like to be a giant squid. I’ve always wondered about that.

I mean, you see these mysterious creatures; they're obviously very intelligent, they're doing clever things, but what is it like to be a giant squid? Well, part of being a giant squid, or part of being a woman, or any kind of really complex, interesting, endearing entity in this universe, has to do also with our emotional reactions, and those can be mapped as well. There are neurological correlates of our experiences. I mentioned yesterday that there's a humor spot where, when they triggered it in that girl, she found everything very funny. There are similar spots for different types of spiritual experiences–there are obviously many different types of these, and many of them may not actually be spiritual experiences; in fact, localized, specific spots in the brain may be involved in very profound patterns of activity. But there are neurological correlates of all of our experiences, and I talk about a number of different kinds of them. There are certainly neurological correlates for sexual pleasure and other experiences that are not just direct, raw sensory information.

So, we would be able to, in these virtual environments, also enhance these secondary neural responses to our environment as well, and that would be an aspect of these shared environments.

These nanobots then can create new connections. The way our brain is connected, we have 100 billion neurons; there’s an average of 1,000 connections from one neuron to another. Our memories, our experiences, our skills, are represented as vast patterns of information.

We have, in fact, an exponentially growing knowledge base as a species. That’s something that no other species has. Other animals learn on their own, but they don’t pass expanding knowledge bases down from one generation to another.

But we can't just download knowledge–that's something that machines can do. For example, in speech recognition, we spent several years training one research computer to understand human speech, and it uses biologically inspired models–neural nets, Markov models, genetic algorithms, self-organizing patterns–that are based on our crude current understanding of self-organizing systems in the biological world.

A major part of the engineering project was, in fact, collecting thousands of hours of speech representing different speakers in different dialects, exposing this to the system, and having it try to recognize the speech. It made mistakes; then we had it adjust automatically, and self-organize the connections between its simulated neurons. Then it would do a slightly better job.

Over many months of this kind of training, it learned to recognize speech. Well, if you want your personal computer to recognize human speech, you don't have to spend two years training it the same painstaking way. You can just take the evolved models from our research computer and load them as software, with all of those simulated neural connections already pre-set. So machines can share their knowledge.
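A toy sketch of that point–one machine learns a set of weights slowly, and a second machine simply copies them. This stands in for loading a trained speech model; the tiny model and numbers here are purely illustrative:

import random

def train(samples, epochs=5000, lr=0.01):
    """Learn weights for y = w0*x0 + w1*x1 by simple stochastic gradient descent."""
    w = [random.random(), random.random()]
    for _ in range(epochs):
        x, y = random.choice(samples)
        pred = w[0] * x[0] + w[1] * x[1]
        err = pred - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
    return w

# The "slow" machine spends many iterations learning the relationship y = 2*x0 + 3*x1.
samples = [((x0, x1), 2 * x0 + 3 * x1) for x0 in range(5) for x1 in range(5)]
slow_machine_weights = train(samples)

# The "fast" machine just downloads the learned weights, with no retraining required.
fast_machine_weights = list(slow_machine_weights)
print(fast_machine_weights)  # approximately [2.0, 3.0]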

We don't have quick downloading ports on our interneuronal connections and neurotransmitter concentration levels. But as we build non-biological analogs of the neurons, interconnections, and neurotransmitter levels where our skills and memories are stored, we won't leave out those quick downloading ports.

When we add non-biological intelligence and have these nanobots that can, for example, create new connections, you can have two neurons create a new connection, because the nanobots that are influencing or controlling them can communicate wirelessly and create a new simulated connection.

So, instead of being so severely restricted, as we are today, to a mere hundred trillion connections in our brain, we'll be able to expand that. That might sound like a big figure, but we're already used to big figures in the computer industry. As we get out into the century–2040, 2050–we'll be able to multiply our mental capacity very significantly. And around 2035 or 2040, we get to the point where most of our thinking will be non-biological. There are exponential curves on this chart of the growth of computers, and our biological thinking is flat–in fact, the human race has a capacity of about 10^26 calculations per second, and that's a flat figure, whereas non-biological intelligence is growing exponentially. The crossover, according to my calculations, is in the 2030s. So as we get to 2050, the bulk of thinking–which in my opinion is still an expression of human civilization–will be non-biological. But it will be human thinking, because it's going to be derived from human thinking. It's going to be created by humans, or created by machines that are created by humans, or created by machines that are based on reverse engineering of the human brain or downloads of human thinking, and many other intimate connections between human and machine thinking that we can't even contemplate today.

In these shared virtual environments we can have the experiences of other people, including the emotional reactions that are catalogued with them. We can experience what it's like to be another person or even another species, and also grow our minds and ultimately be able to download knowledge. People often say, "Well, if that makes everything that we now struggle with very easy, people will lose motivation."

But in my view, yes, things that we struggle with today and problems that we have today will become easy, but we will be on to greater horizons. This is the same issue that the Luddites faced in the 1800s. They said, "Just a few people can handle the entire production of our textile industry, so only a few people are going to be working. And if you apply this to all these different industries, only 1% of the population will be working."

Well, that didn’t happen, because we wanted more shirts and we wanted things that they couldn’t even imagine–I mean, they certainly couldn’t imagine creating websites and creating bandwidth and all the different technologies we have today. And most human beings in this room are working on things that no one could even understand 100 years ago, and that’s going to continue to be the case: As human knowledge expands exponentially, the kinds of knowledge that we would like to have, as well as our sphere of ignorance, also grows exponentially, and I think there’s really no limit to how human knowledge can expand. Particularly, as we confront different ways of expressing ourselves, virtual reality environments will be an “art form” that is far more challenging than anything we work with today.

Let me take a very sharp segue to contemporary technology. I'll mention a few things I'm working on, and I'll try to get to the interactive part quickly. I've got three projects I'm working on. One is called FATKAT–Financial Accelerating Transactions from Kurzweil Adaptive Technologies–which applies self-organizing systems (neural nets, genetic algorithms, Markov models) to stock market predictions, and we have a system that's actually working quite well. In a simulation with real data over the NASDAQ, which made eleven-fold gains over the last 14 years, our last system made about 600-fold gains. So we're going to continue to refine that and ultimately work with some fund managers to use it as a system to make more efficient stock market investment decisions.

A second project I have is called FamilyPractice.com, which has an interesting technology called the virtual patient, which is kind of a virtual reality patient. That will be on our website in September. It exists right now as software, and it actually simulates the doctor-patient encounter. Every time you bring up a patient, the patient is different. And, rather than the patient coming in and saying, “Well, I’ve got Type 2 diabetes,” the patient comes in and says, “I’m going to the bathroom a lot and I’m thirsty a lot.” You can actually interview the patient with language, and you can administer every kind of medical test and talk to the patient. You can look inside the patient’s retinas and see diabetic retinopathy. You can speed up time, make one second equal to a month and actually look inside the patient’s eyes and see the diabetic retinopathy evolving, with some image morphing. It’s actually a simulation of the doctor-patient encounter, with a simulated acceleration of time.

Another project is Kurzweil Cyberart.com. As I describe that, I'll bring up some images. We're going to have a virtual artist, called AARON, which is the brainchild of Harold Cohen. Some of you might be familiar with his work because he's been working on this for 30 years. This will be a free screen saver–and I'll give you the website in a moment–but you can download this program for free. There'll be a deluxe version for $30, but there'll be a free version that is basically a screen saver. It takes about two minutes to draw a painting on your screen, but every painting is different: the painting on your screen will be different from any one that you'll ever see again, and different from any one on the other million screen savers around the world.

These are not being pulled out of a database; it is painting these paintings in real time. And there's quite a bit of diversity–the range of diversity you would expect, actually, from a good human artist. In fact, some of these hang in museums around the world. It is, I think, very good quality art, but the interesting thing is that, unlike screen savers that are constantly repeating themselves and doing the same thing over and over again, this never repeats itself; it's always doing something different. We have a similar system, called Ray Kurzweil's Cybernetic Poet, on the website today that is also a free download, and that has a screen saver that writes original poetry on your screen. The way that system works is by reading poetry–we have 36 files of classic and contemporary poets–and creating a language model. Then, using those modeling techniques, it writes original poetry in the same style. And again, the poems on your screen will be different from those on anybody else's screen saver.
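As a rough illustration of the "read a corpus, build a language model, generate new text in a similar style" idea described here, below is a minimal first-order Markov-chain sketch. It is only a toy under assumed inputs–the real Cybernetic Poet is far more elaborate–and the tiny corpus, seed word, and function name are invented for the example.

```python
# A minimal Markov-chain sketch of the general language-modeling idea
# described above; the real Cybernetic Poet is far more sophisticated.
import random
from collections import defaultdict

corpus = "the rose is a rose and the rose is red the night is deep".split()

# Build a first-order model: word -> list of words observed to follow it.
model = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    model[current].append(following)

def generate(seed="the", length=8):
    words = [seed]
    for _ in range(length - 1):
        choices = model.get(words[-1])
        if not choices:                    # dead end: restart from a random word
            choices = list(model.keys())
        words.append(random.choice(choices))
    return " ".join(words)

print(generate())
```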

It also has a poet's assistant, and that is its real purpose: as you write poetry in one screen, it pops up with windows saying, "Here's an alliteration that Yeats used with that word, here's a line that Robert Frost used with that word, and here's an interesting turn of phrase, which I just made up, that would actually fit in very well with this rhyme." It's giving you this sort of unlimited set of suggestions, most of which you'll ignore. But when you're writing poetry it's hard to get ideas, and rather than just thumbing through a rhyming dictionary and a thesaurus, this gives you interesting ideas; it'll give you suggestions on how to finish a line, based on these language modeling techniques and the poetry of the 36 built-in authors.

This company is called Kurzweil Cyberart.com. And one other project I’ll mention is called Kurzweil AI Network. We have the editor here of that, Sarah Black. That’s going to be launched around the end of this year. It’s going to be a web portal, not just for artificial intelligence, but kind of intended to be the Wired magazine of the web, covering the kinds of technologies that I’ve been talking about and the kind of technologies that this conference is interested in–the technologies that will bring intelligent machines and virtual reality into the world in the 21st century. It will cover conferences like this one and there will be interviews with key people.

But, in particular, we don’t want it to just be a magazine that happens to be printed on a website. We’ll be using advanced technologies, a number of them, both as exemplars of what we’re talking about and as ways of presenting the information.

One example is that we'll have a virtual personality–and virtual personalities are a phenomenon that you'll see increasingly in the future. We'll be interacting with entities that seem like people. They won't have the intelligence of people, but they will function well within a limited domain–for example, the domain of being a sales clerk for an e-commerce site, where there's a limited array of products and a limited set of questions that someone might ask about those products.

Systems are emerging that will be able to do just that, and ultimately you'll be able to converse just by talking, with speaker-independent speech recognition, and the system, with a visual presence and a human voice, will be able to talk back to you. All the components of this really exist already today–things like Real Speak, which is a human-sounding synthesized voice. Ultimately you will be able to take a sample of your voice and it will sound just like you.

We’re going to have a virtual assistant on our site that will be your hostess, which will guide you through the Kurzweil AI Network site. This is from a technology called Life FX, a company that I’m advising and joining the Board of. It’s a public company and they’re here at the show. What is interesting about this technology is that it is a digital person, and it’s a model that actually includes all the facial muscles of a face, and you send a very slow data exchange, so this will work over a website, even on a 14.4 modem, because if you actually were controlling a movie, you couldn’t do that without a high bandwidth connection. So it’s sending a slow set of control signals that are driving a digital face, and then creating, ultimately with human sounding tech. speech, a human sounding and human looking virtual personality that can be driven with low bandwidth channel over the Internet.

And what’s interesting is that they can take a photograph, they can take your picture, just a two-dimensional photograph, and make it speak like this. And I’ve seen them do that, just taking a flat photograph and then suddenly that photograph begins to talk. So this kind of technology will be one of the important components of virtual personalities.

Let me just make one comment–I don't know that we have time to talk about consciousness–on the desirability issue of technology. Technology has always been a double-edged sword. The concerns that have been raised have to do with self-replication. Disease is inherently a process of self-replication. Cancer, or even the flu–any disease–involves some pathogen, some undesirable entity, that self-replicates, and the self-replication goes awry. Cell replication is actually a natural, healthy process, but in cancer the process that stops the self-replication has broken down, so the self-replication continues without the appropriate limits and then becomes very destructive.

And in order to scale up to very large numbers, you need self-replication. I mean, how do you get from one fertilized egg to the trillions of cells that are in a human being? Biological evolution solved that problem with self-replication.

And all of these technologies that I'm talking about will involve self-replication. That's how we get from one copy of software in a laboratory to the millions or hundreds of millions of copies that exist around the world–it's self-replicating. And that self-replication gone awry is a phenomenon we call computer viruses. There are many different variants of them, but when that self-replication becomes destructive it can cause ill effects. I'll come back to the phenomenon of computer viruses.

But as we get very powerful technologies, like biological technology and self-replicating nanotechnology, things can ultimately become very destructive. We're not that far from the point where biological entities could be engineered in a routine college bioengineering laboratory, and someone could create a bio-engineered pathogen that could be very destructive. Self-replicating nanotechnology can be even more destructive, because nanotechnology is actually more powerful than biological technology. Proteins are very fragile; they exist only in a limited temperature range and they're not very strong, whereas nanotechnological entities–where we're rebuilding the world atom by atom–can be much stronger. You can create, for example, nanotube-based entities that are extremely tough–50 times stronger than steel–and obviously much more powerful than biological entities.

I mentioned earlier that you need billions of nanobots for them to be useful. How are you going to get billions of them without self-replication?

The specter has been raised that these could be very destructive technologies, particularly if they’re manipulated by the wrong parties. And we see people using technology today in a destructive way; viruses are a good example of that.

Bill Joy has said, “Let’s just avoid the destructive types of technology. For example, nanotechnology is destructive, so let’s just not do nanotechnology.” This is a complex issue, but in my view this is a very unrealistic premise, because nanotechnology is not one thing; it’s really the end result of tens of thousands of different projects that have advanced in technology in many different ways. All of these technologies are advancing. People aren’t creating self-replicating nanotechnology entities; they’re creating a higher resolution projector by developing little micro-mirrors. That’s a small step toward nanotechnology. And tens of thousands of these little small steps in technology will ultimately get us to where these destructive potentials are feasible.

The promise of this type of technology can, I think, solve age-old problems. I think we're on the verge of overcoming cancer and all the major diseases that we've struggled with for many decades, and that we will be extending human longevity. But the same technology can also be very destructive. The promise and the peril of these types of technologies are deeply intertwined. I don't think it's feasible to stop the advancement of technology, and I don't think it would be desirable. Technology is driven forward by economic imperative. I mean, Bill Joy's own company, Sun Microsystems, is certainly advancing on many different fronts that would make these technologies feasible, including more efficient communications, more efficient distributed processing, and more powerful computers. The end results of these technologies are potentially just as destructive as any of the other scenarios that one might talk about.

Ultimately, I think it's a "spiritual quest" that makes us need to continue to advance. I'd say this is part of the evolutionary process, and evolution–at least, in my view–is a spiritually driven process in which, as entities evolve along an evolutionary track, they become more complex, more intelligent, more beautiful, more creative, moving toward the kind of spiritual ideal of infinite intelligence and creativity. Now, evolution never becomes infinite, but it moves exponentially in that direction.

In more practical terms, we have great needs. There's still a lot of suffering in the world. We need to overcome disease. We need to use nanotechnology to clean up the damage we've done to the environment. But in immediate terms, these technologies are driven forward by economic imperative. We'd have to repeal capitalism and have a totalitarian system to stop technology.

The real answer to this is not new; technology is already destructive–we don't have to look further than today to see examples of that. We need ethical guidelines, we need law enforcement, we need technological safeguards. There are already discussions, for example, about software immune systems for computer viruses, and we already have a form of immune system in all the anti-viral programs that exist. It's not the case that we don't have a response system.

And I think, in fact, we can take some comfort from the example of computer viruses. Yes, we still struggle with them, we're still concerned about them, they remain destructive. But when computer viruses first emerged–and that is a new form of self-replicating entity that didn't exist at all a few decades ago–here we had this new form of entity that exists and thrives in a certain environment, the medium of computer networks. And when they first emerged on the scene, observers said, "When these get to be more sophisticated–these early ones are primitive–they're going to just completely crowd out computer networks and render them inoperative."

And what have we actually seen? They continue to be a problem, but they're really more of a nuisance. The damage, which has been estimated in billions of dollars, is still only one-tenth of one percent of the benefit we get from computer networks. So they certainly haven't been all that destructive. We have been successful in keeping them at the nuisance level.

Now, one could say, “But wait a second, computer viruses aren’t potentially lethal. The kinds of specters of self-replicating biological pathogens and self-replicating nano-technology, those could be lethal.” But that actually only strengthens my argument. The fact that computer viruses are not usually lethal means that more people are willing to put them out–a lot of the hackers that release software viruses wouldn’t do so if they thought they were going to kill people. These are generally not murderers.

Also, our response to that problem is much more lackadaisical because it’s not lethal. If, in fact, some of these scenarios were potentially lethal, our response would be 100 times greater, far fewer people would put them out, and there’d be a much more full-bodied response from all levels of society, from law enforcement down to self-policing and ethical guidelines and professional societies, etc.

So I think we’ll make it through, but I think it is an issue that doesn’t go without saying, “It’s something we’re going to have to be very mindful of. Technology is very powerful, it is power, and it can be applied to all human causes and purposes, not all of which reflect our shared human values.”

Let me take some questions, in the fifteen minutes we have left.

Q: My concept is called Personiform–I've been working on this concept, in the area of human simulation, for about ten years. What limits do you see which, no matter what level of technology we develop, you believe we will never be able to go past? I mean, there are certain questions about the digitization and simulation of the soul, what the soul actually is, and whether that's an entity that can be simulated or reproduced effectively.

You’re probably going to ask about this, and you probably wrote about it in your book, somewhat. What are your thoughts on being able to go past the barrier of not being able to produce a soul?

RK: That brings up the issue I didn’t get to, which was consciousness, and I don’t want to use up too much of the time remaining to talk about that, even though it’s probably the most important issue we can talk about. But it’s not an easily resolvable issue. There’s been one sort of set of criticism, which Roger Penrose has exemplified, saying that our soul and our consciousness is embodied in a certain quantum state and, unless you capture the exact quantum state of an entity, you can’t capture its consciousness.

But I would point out that I'm not in the same quantum state I was in at the beginning of this lecture, or even a moment ago, and it's not clear how accurately you need to capture an entity in order to produce something that's really an accurate recreation of that entity.

Let’s do a thought experiment. Suppose you scan my brain while I’m sleeping and you capture every salient detail, which includes all the neurotransmitter concentrations, so there’s all my memories, and you “reinstantiate” it in a neural substrate that’s not biological. And, of course, you’re going to have to give that entity a body, which is a complicated discussion, so you create a new body with nanotechnology, or maybe it’s a biological body, or maybe it’s just a body that exists in virtual reality, which in 2050 might be just as good, or maybe it’s a projection of a self-organizing swarm of nanotechnology (foglets). There’s many different possible scenarios.

You give it a body, and now this entity (if the technology is refined, and like any other technology, it won’t be at first, but ultimately will be) acts just like Ray Kurzweil. So does this second body have Ray Kurzweil’s consciousness? Well, you could do a simple thought experiment and say, Wait a minute, the old Ray Kurzweil, which is me, could still be around, and I wouldn’t even necessarily know that you had done this. So if you come to me in the morning and say, “Hey, good news Ray. We’ve successfully, finally, scanned your brain and reinstantiated it in another computer. We don’t need your old brain and body anymore,” I might see a philosophical flaw in that perspective. I may wish the new Ray Kurzweil well and I’ll probably end up being jealous of him because he’ll have capabilities that I won’t have, so he’ll be more successful than I in realizing my dreams and goals, but I’ll feel that he’s a different person.

So, he’s different, and certainly from the moment of creation these two entities are moving in different directions, having different experiences and becoming different. But is this second Ray conscious? He’ll certainly act conscious, and he’ll claim to be conscious. He’ll act the same way, and he’ll claim to have emotions and be upset and all the other sort of subtle cues. And when he says “I’m conscious” or “I’m angry,” he’ll have all the subtle, rich hues that the original Ray Kurzweil does, and so it will be a very convincing recreation. He’ll certainly seem conscious.

But some philosophers will come along and say, "Well, no, he's not squirting neurotransmitters, so he can't be conscious." There's no real definitive way to resolve that question. It really comes down to the difference between objectivity and subjectivity. Science is objective: it is measurement and logical deduction from those measurements. And consciousness is, by definition, subjectivity. We can measure a sort of consensus of subjectivity, but we can't ultimately get down to the core of subjectivity. So these entities will certainly seem conscious. I believe that we will accept them as conscious–that's an objective prediction, but not ultimately a philosophical resolution of whether these entities really are conscious. Ultimately, in the real world, we'll resolve these issues politically, and since these entities will ultimately be very intelligent–more intelligent than humans are today–they'll be very convincing when they claim to be conscious, and we'll end up believing them; and if we don't believe them, they'll get mad at us.

Also, as I said, there won't be a clear distinction between human and machine intelligence. When you meet an entity of biological origin, it will have both biological and machine thinking processes. Or the entity could be a non-biological entity that is a simulation or a biologically inspired recreation of biological processes. It will seem very human, and there are going to be many variations in between. There is not going to be a clear distinction between the two worlds. So, that's the best I can do in five minutes.

Q: I actually have different questions, but just based on that, suppose we have created one of those conscious entities. Are we allowed to switch them off again?

RK: Well, it is interesting. You could switch one off and then switch it back on and presumably it does not experience that lapse of time. Being shut off is something that we can experience, too. Sleeping isn’t completely that experience, but if we take anesthesia, we really are blacked-out, and we know that biological entities can be frozen and brought back–we haven’t done that with a human but, presumably, they don’t experience that passage of time. So we can shut off and shut back on biological processes. Anesthesia does do that.

And people in comas for a long time don’t apparently experience that time, although there is a type of coma where there is brain activity, though the people aren’t communicating. But there’s the potential, if you turn one of these processes off, to turn them back on. Unless you’ve lost the file and can’t turn it back on–then it’s gone.

I will say one thing. We will enter a period where the life of our biological bodies and the continued survival of our "mind file" will no longer be inextricably linked, as they are today. Today, when our hardware crashes and disintegrates, we lose all the information that's in our minds. Now, whether you consider our identity to be that information or not, it certainly drives our experience of another person. All of our personality and memories are represented as information–there's information up here, and I've estimated it at thousands of trillions of bytes. When that information is lost, and it's a profound pattern of information…

Now, that’s not the case with our computers. When I go to another computer, if this crashes or if I get another model, I don’t throw all my files away; I copy them over. My files, the software information that exists on my notebook computer, have a longevity completely disconnected from the hardware. Now, that doesn’t mean it lives forever because, if I don’t care about a particular file and I’ve never accessed it and, finally, 12 years later I say, “Okay, I want that file,” I’ll realize it exists in some old format–say, magnetic tape. The magnetic tape drive for that has been lost and I don’t have the drivers anymore or the operating system–try going back to some old PD-10 mag tape to retrieve a file.

This is a generic problem. Philosophically, it really comes down to this: if you don't care about information, information does die. It doesn't always live forever. Our longevity comes down to our caring about ourselves and our own continuation, and that actually puts the control of our longevity in our own decision-making. We're used to that being a decision that's kind of out of our hands–some people say "It's in God's hands" or "It's in Fate's hands"–but we'll actually be at a point where it will be in our own hands, because we don't necessarily need to lose that information; it doesn't need to be linked to our biological bodies.

Even just in the biological world, there are technologies being developed today, such as therapeutic cloning, with which we can re-grow all of our organs and, instead of constantly replacing our cells with telomere-reduced, older cells, replace them with younger cells–re-growing our bodies, including our brain matter, purely in the biological world. This is without even using nanotechnology or computation. It will be possible for our bodies and brains to live indefinitely. Ultimately, we'll be able to capture this pattern of information.

Now it gets down to the philosophical issue. If you capture my brain patterns and you recreate them in some other form, is that really me, or is that another person? From a third party’s perspective, people will continue to live, because that information will continue to live. It will raise ethical, moral, and philosophical issues that have, in fact, been around for thousands of years, but they’ve existed as sort of polite debates among philosophers. The difference, in the 21st century, is that these will actually become real, practical, ethical and legal questions that we can’t ignore.

Q: I have another philosophical question. Yesterday you showed us how computing technology is really a succession of five individual technologies that kind of leapfrog each other and you were hinting that you believe that, as integrated circuits will run out of steam, nanotechnology will smoothly take over. How sure can you be about that? It seems that holography and biological computing have been dreams that have been around for awhile, but they seem not quite able to catch on at the speeds that we need so that there will be a smooth takeover from integrated circuits.

RK: Well, as I’ve studied these trends in different areas of technology, I’ve seen this phenomenon over and over again; you saw it on some of the charts, not just in computation, where you see ongoing exponential growth as a succession of S curves. With some paradigms, some form of technology grows exponentially, levels off, and then another one takes over. And this has been continuing indefinitely in all those different areas of technology that I showed you–basically all information-based technologies.

Within the area of computation, it’s already happened five times, through different forms of electronic circuitry. And in each case, as one technology ran out of steam, there were many different competing technologies in the wings which, while the old technology continued to grow, these new ones were not cost effective. But as the old one then finally reached its end, then the new ones were able to continue the exponential growth.

And I’m quite encouraged by the panoply of new technologies. I think the most interesting one, the one I’m most encouraged about, is nanotubes, which are already working, and there don’t seem to be any practical limitations. Some of these things like DNA computing and optical computing are very specialized and I don’t see them replacing general purpose computation, but you can create general purpose circuits with nanotubes. But there are a number of other circuit methods that are also working.

You can show that you can actually get a thousand times the computation you need to emulate the human brain for $1,000 with contemporary silicon technology–not with today's circuits, but without going through the next paradigm. If you push the Moore's Law silicon paradigm as it stands today to its limit and use digitally controlled analog computation–the paradigm that the brain uses–you can get to the kinds of standards we've been talking about without going to the next paradigm. But for a lot of reasons, I believe that we'll see, as we've seen many times before and as we see in many different areas of technology, continued paradigm shifts.

Q: I have a follow-on question that basically has to do with the analysis of what it takes to replicate, let's say, a human brain, where the number of neurons is not the only criterion. An analogy, for instance: how much information is embodied in a bacterium? One could do the analysis where one just takes the number of bits per base (there are four possible bases) times the number of bases in the entire genome. But that does not represent the bacterium–it ignores the context.

And, in the same way, the number of neurons, or the number of computing elements, doesn’t reflect the complete capability or structure of the brain. Just as you said in your keynote, we have computers now that are the size of rat brains or mouse brains, but we don’t have computers that can do anything near what a mouse can do, let alone what a grasshopper can do.

RK: Well, we haven’t tried very hard; there’s not much of a market potential for that. But I understand your question–let me address it in two ways. I think your point is very well-taken when it comes to brain downloading, because potentially, to really get a very accurate recreation, you would want to actually emulate at least the implications of every little wiggle in an inter-neuronal connection and all the little complexities that the real world has introduced.

And certainly we see, as we go from the genome to even a baby's brain, at least a million-fold increase in complexity. The genome has only 6 billion bits. It's estimated that 97% of that is junk, and some people say, "Well, the junk really is useful." But the fact is, the junk is highly repetitive–Alu, one particular sequence, is repeated 300,000 times and by itself comprises 3% of the genome.

If you just take simple data compression on the genome, you’d get it down to no more than 100 million bytes, about the size of Microsoft Word, and that blows up to a brain of 100 trillion connections and at least a million-fold increase in complexity.
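A quick back-of-the-envelope check of these figures may help; the sketch below simply restates the numbers quoted above (6 billion bits, roughly 100 million bytes after compression, 100 trillion connections), and the implied compression ratio is my own arithmetic rather than a figure from the talk.

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
genome_bits = 6e9                 # "the genome has only 6 billion bits"
genome_bytes = genome_bits / 8    # = 750 million bytes uncompressed

# If the heavily repeated/redundant portion compresses away, the claim is
# that roughly 100 million bytes of unique information remain.
compressed_bytes = 100e6
compression_ratio = genome_bytes / compressed_bytes

connections = 100e12              # "100 trillion connections" in the brain
expansion = connections / compressed_bytes

print(f"{genome_bytes:.0f} bytes raw, ~{compression_ratio:.1f}x compression implied")
print(f"~{expansion:.0e}-fold expansion from compressed genome to brain connections")
```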

And then, as the baby–even before birth–interacts with the world and starts learning, we increase that complexity far more. However, as we look at replacing brain modules–and we've done that in certain instances, for example in Parkinson's patients, where we replace a particular region of the brain that is scrambled or destroyed by that disease–we find that circuits that are actually much simpler than the kinds of analyses I described before will work and will perform the functionality.

As we look at the circuits in the cerebral and visual cortex, and the kinds of transformations they are making, we can come up with circuits that use far fewer transistors than by looking at just the analysis of all the different connections and doing a kind of brute force method.

If you wanted to emulate something with the general capability of the human brain, there are strong arguments that you don’t need 20 million billion calculations per second. On the other hand, if you wanted to capture all of the complexity of an individual personality, there are arguments, as I think you pointed out, that you would need far greater complexity.
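For readers wondering where a figure like "20 million billion calculations per second" can come from, the sketch below is a reconstruction under assumptions: it multiplies the 100 billion neurons and 1,000 connections per neuron cited earlier by an assumed 200 calculations per second per connection. The per-connection rate is my assumption for illustration, not a number given in the talk.

```python
# Back-of-the-envelope reconstruction (assumption-laden, not stated in the
# talk) of a "20 million billion calculations per second" style estimate.
neurons = 100e9                    # ~100 billion neurons (figure used earlier)
connections_per_neuron = 1e3       # ~1,000 connections each (figure used earlier)
calcs_per_connection_per_s = 200   # assumed effective update rate per connection

total = neurons * connections_per_neuron * calcs_per_connection_per_s
print(f"{total:.0e} calculations per second")   # -> 2e+16, i.e. 20 million billion
```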

However, even a factor of 1,000 just means a delay of about nine years. A factor of a billion means a delay of about 25 years, because of the exponential growth. So even if we require vastly more information than I've estimated–and I think my estimates are basically conservatively high–even if I'm off by a factor of a billion, it doesn't appreciably change what we'll see happening in the 21st century.
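The arithmetic behind those delays is simply the number of doublings needed to close the gap times the doubling period. The sketch below assumes a doubling period of a bit under a year, chosen so the output roughly matches the figures quoted above; the exact period is an assumption, not a number given in the talk.

```python
# Rough arithmetic behind "a factor of 1,000 means a delay of about nine
# years": delay = log2(factor) * doubling_time, under an assumed doubling
# period for price-performance.
from math import log2

doubling_time_years = 0.875        # assumed doubling period (illustrative)

for factor in (1_000, 1_000_000_000):
    delay = log2(factor) * doubling_time_years
    print(f"factor {factor:>13,}: ~{delay:.0f} year delay")
```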

Q: Do you think that with the ease of copying provided by the Internet and greater communications technologies, particularly with systems such as Napster and Gnutella, that we are soon going to have to abandon the idea of intellectual property and copyright? Things like open source and the human genome project raise ethical questions about this issue.

RK: Well, intellectual property is going to be a key issue; it's already a key issue. Take the Napster model, even though there was this court finding yesterday. I think that, based on contemporary law, the court is correct that the Napster model violates copyright law, but there are other models–Gnutella, for example, uses a Napster-like model. Consider a model where it's okay for me to take a CD and play it on my stereo and have my friends come over to my living room and listen to it; I think everyone would agree that that's okay. Well, how about if I have them listen to it over the telephone? I call them up on the phone and say, "Hey, listen to this new CD." That's okay also, I guess.

Well, how about if the telephone, then, is an actual streaming connection and there are actually 100 friends, or maybe just a few friends listening to it–well, I’m just using the Internet now as a telephone. That sounds like it’s probably okay. How about if it’s not really my friend, I’m just part of this network and somebody wants to listen to Metallica? The software knows it’s on my machine, plays it on my machine, so I’m playing it even though I’m not aware of it, and this other person’s listening to it. But no copy actually has been made, so I’m not actually sending a copy of this copyrighted material, I’m just playing it on my machine and the other person’s listening to it.

But since it’s always available to be played over the network, you don’t really have to make these copies, because all you really want to do, ultimately, is listen to it.

I think there’s actually ways of doing this Napster-type sharing which breaks the old “business model” of the recording industry. Napster, I think, does violate copyright law, but I think there’s ways of doing it that don’t break copyright law but nonetheless break the business models.

I think we need to use new business models, which exist. There are means, I believe, of protecting intellectual property. I think there are methods where you can technically control the usage of software. It would be useful, for example, if hardware had readable serial numbers, so that software could actually know which machines are authorized to read which software. I also think there needs to be a consensus that people should pay for intellectual property usage, but then you have to create models that people are willing to pay for; otherwise you have a situation where somebody can get something for free, or has to pay $20 for a CD when they really want only one or two songs. The incentive to break the business model is too great, but if you had a model that people could buy into–I think there are reasonable technical solutions allowing people to experience software in the way that's needed, for example.

I think the problem is solvable with completely new models. Just as the Internet has been breaking down business models in almost every industry and replacing them with new ones, all of these information-intensive industries will have to adopt new models. It would make sense to pay per song that you listen to; I think people would be willing to do that if there were models they could afford. It's not that hard to break into cell phones and get free cell phone usage–and of course there are a lot of pirated cell phones–but the cost of calls is not that great, so most people don't really want to break those laws. They're willing to pay affordable fees for cell phones, but they don't feel that the fee structure of the old business models for CDs is really something they want to buy into.

I think intellectual property is important, because if you really destroy intellectual property you’re destroying the capital formation that creates the intellectual property that people want to enjoy in the first place. One more question.

Q: When machines are advanced enough to have a sense of humor, do you think we’ll be able to get their jokes? And, more to the point, do you think that these augmented human beings you envision will have a significantly different aesthetic sense to the ones we have now, or will it be essentially similar?

RK: A different sense?

Q: Aesthetic sense.

RK: If we don’t get the joke, then the technology isn’t working. And there’s a very similar concept–it’s a very good question–at the essence of the Turing test. I think Turing was very insightful, that we could embody human knowledge in language–in fact, in just written language–and that you don’t even necessarily have to have a visual presence and a physical presence to embody human knowledge; it can be embodied in language, and language can embody jokes– say you read a screenplay, which is just text, it’s a very small amount of information, relatively speaking, but it can actually embody a full range of human emotions and a very subtle, deep level of human knowledge.

Humor requires very subtle knowledge; it is the most sophisticated thing we do. Humor, emotion specifically–these are not sort of side issues or distractions from the essence of human intelligence. These are the cutting edges of human intelligence. These are the most advanced, most complex, subtle, rich, impressive things that we do. And it won’t be until computers can really come up with the joke, or even just get the joke, that we can consider non-biological entities to be operating at the human level. And it really is similar to the Turing test. Now, passing the Turing test doesn’t mean these entities are conscious. It’s an objective test of a certain level of performance, but I think it is a successful test of human level intelligence.

As for aesthetic sense, I think it’s the same issue. My own belief is that this technology will enhance humanity, enhance our human sense of aesthetics and ethics, and allow us to be more expressive, in a human way. All of us are frustrated at times, or we can’t always rise to the occasion and be witty at the right time and say the right thing and be able to be as articulate as we’d want, or understand some concept that’s a human concept and get it quickly enough and respond in the right way. And there’s just so many things we’d like to do–so many books we want to read, and movies we want to see, and websites we want to visit– and there’s so little time.

If we can enhance ourselves beyond this severe restriction of a mere hundred trillion connections that we've suffered through for several millennia, just think of all the experiences that we'll be able to have and share with one another. Thank you very much. I've enjoyed the dialogue.

Neuroprosthetics

From Wikipedia, the free encyclopedia
Neuroprosthetics (also called neural prosthetics) is a discipline related to neuroscience and biomedical engineering concerned with developing neural prostheses. They are sometimes contrasted with a brain–computer interface, which connects the brain to a computer rather than a device meant to replace missing biological functionality.

Neural prostheses are a series of devices that can substitute a motor, sensory or cognitive modality that might have been damaged as a result of an injury or a disease. Cochlear implants provide an example of such devices. These devices substitute the functions performed by the ear drum and stapes while simulating the frequency analysis performed in the cochlea. A microphone on an external unit gathers the sound and processes it; the processed signal is then transferred to an implanted unit that stimulates the auditory nerve through a microelectrode array. Through the replacement or augmentation of damaged senses, these devices intend to improve the quality of life for those with disabilities.

These implantable devices are also commonly used in animal experimentation as a tool to aid neuroscientists in developing a greater understanding of the brain and its functioning. By wirelessly monitoring the electrical signals sent out by electrodes implanted in the subject's brain, researchers can study the subject without the device affecting the results.

Accurately probing and recording the electrical signals in the brain would help better understand the relationship among a local population of neurons that are responsible for a specific function.

Neural implants are designed to be as small as possible in order to be minimally invasive, particularly in areas surrounding the brain, eyes or cochlea. These implants typically communicate with their prosthetic counterparts wirelessly. Additionally, power is currently received through wireless power transmission through the skin. The tissue surrounding the implant is usually highly sensitive to temperature rise, meaning that power consumption must be minimal in order to prevent tissue damage.[2]

The neuroprosthetic currently undergoing the most widespread use is the cochlear implant, with over 300,000 in use worldwide as of 2012.[3]

History

The first known cochlear implant was created in 1957. Other milestones include the first motor prosthesis for foot drop in hemiplegia in 1961, the first auditory brainstem implant in 1977 and a peripheral nerve bridge implanted into the spinal cord of an adult rat in 1981. In 1988, the lumbar anterior root implant and functional electrical stimulation (FES) facilitated standing and walking, respectively, for a group of paraplegics.[4]

Regarding the development of electrodes implanted in the brain, an early difficulty was reliably locating the electrodes, originally done by inserting the electrodes with needles and breaking off the needles at the desired depth. Recent systems utilize more advanced probes, such as those used in deep brain stimulation to alleviate the symptoms of Parkinson's disease. The problem with either approach is that the brain floats free in the skull while the probe does not, and relatively minor impacts, such as a low speed car accident, are potentially damaging. Some researchers, such as Kensall Wise at the University of Michigan, have proposed tethering 'electrodes to be mounted on the exterior surface of the brain' to the inner surface of the skull. However, even if successful, tethering would not resolve the problem in devices meant to be inserted deep into the brain, such as in the case of deep brain stimulation (DBS).

Visual prosthetics

A visual prosthesis can create a sense of image by electrically stimulating neurons in the visual system. A camera wirelessly transmits to an implant, and the implant maps the image across an array of electrodes. The array of electrodes has to effectively stimulate 600-1000 locations; stimulating these optic neurons in the retina thus creates an image. The stimulation can also be done anywhere along the optic signal's pathway: the optic nerve can be stimulated in order to create an image, or the visual cortex can be stimulated, although clinical tests have proven most successful for retinal implants.
A visual prosthesis system consists of an external (or implantable) imaging system which acquires and processes the video. Power and data are transmitted to the implant wirelessly by the external unit. The implant uses the received power and data to convert the digital data to an analog output, which is delivered to the nerve via microelectrodes.
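As an illustration of the processing step described above, the sketch below reduces a camera frame to a coarse grid of stimulation values, one per electrode, within the 600-1,000-site range mentioned. It is a generic sketch, not the algorithm of any particular device; the grid size, current scaling, and function name are assumptions.

```python
# Illustrative sketch (not any specific device) of mapping a camera frame
# onto the ~600-1,000 stimulation sites of an electrode array, one
# amplitude value per electrode.
import numpy as np

def frame_to_electrodes(frame, grid=(25, 25), max_current_ua=100.0):
    """Average image blocks down to a grid of electrode values and scale
    brightness (0-255) to a hypothetical stimulation amplitude in microamps."""
    h, w = frame.shape
    gh, gw = grid
    blocks = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    brightness = blocks.mean(axis=(1, 3))            # one value per electrode
    return brightness / 255.0 * max_current_ua       # assumed scaling

camera_frame = np.random.randint(0, 256, size=(480, 640))  # stand-in for video
amplitudes = frame_to_electrodes(camera_frame)
print(amplitudes.shape)   # (25, 25) -> 625 stimulation sites
```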

Photoreceptors are the specialized neurons that convert photons into electrical signals. They are part of the retina, a multilayer neural structure about 200 µm thick that lines the back of the eye. The processed signal is sent to the brain through the optic nerve. If any part of this pathway is damaged, blindness can occur.

Blindness can result from damage to the optical pathway (cornea, aqueous humor, crystalline lens, and vitreous). This can happen as a result of accident or disease. The two most common retinal degenerative diseases that result in blindness secondary to photoreceptor loss are age-related macular degeneration (AMD) and retinitis pigmentosa (RP).

The first clinical trial of a permanently implanted retinal prosthesis was a device with a passive microphotodiode array with 3500 elements.[5] This trial was conducted by Optobionics, Inc., in 2000. In 2002, Second Sight Medical Products, Inc. (Sylmar, CA) began a trial with a prototype epiretinal implant with 16 electrodes. The subjects were six individuals with bare light perception secondary to RP. The subjects demonstrated their ability to distinguish between three common objects (plate, cup, and knife) at levels statistically above chance. An active subretinal device developed by Retina Implant GmbH (Reutlingen, Germany) began clinical trials in 2006. An IC with 1500 microphotodiodes was implanted under the retina. The microphotodiodes serve to modulate current pulses based on the amount of light incident on each photodiode.[6]

The seminal experimental work towards the development of visual prostheses was done by cortical stimulation using a grid of large surface electrodes. In 1968, Giles Brindley implanted an 80-electrode device on the visual cortical surface of a 52-year-old blind woman. As a result of the stimulation, the patient was able to see phosphenes in 40 different positions of the visual field.[7] This experiment showed that an implanted electrical stimulator device could restore some degree of vision. Recent efforts in visual cortex prostheses have evaluated the efficacy of visual cortex stimulation in a non-human primate; in this experiment, after a training and mapping process, the monkey was able to perform the same visual saccade task with both light and electrical stimulation.

The requirements for a high resolution retinal prosthesis should follow from the needs and desires of blind individuals who will benefit from the device. Interactions with these patients indicate that mobility without a cane, face recognition and reading are the main necessary enabling capabilities.[8]

The results and implications of fully functional visual prostheses are exciting, but the challenges are grave. For a good-quality image to be mapped onto the retina, a high number of micro-scale electrodes is needed. The image quality also depends on how much information can be sent over the wireless link, and this large amount of information must be received and processed by the implant without excessive power dissipation, which can damage the tissue. The size of the implant is also of great concern; any implant should be minimally invasive.[8]

With this new technology, several scientists, including Karen Moxon at Drexel, John Chapin at SUNY, and Miguel Nicolelis at Duke University, started research on the design of a sophisticated visual prosthesis. Other scientists have disagreed with the focus of their research, arguing that the basic research and design of the densely populated microscopic wire was not sophisticated enough to proceed.

Auditory prosthetics

Cochlear implants (CIs), auditory brain stem implants (ABIs), and auditory midbrain implants (AMIs) are the three main categories of auditory prostheses. CI electrode arrays are implanted in the cochlea, ABI electrode arrays stimulate the cochlear nucleus complex in the lower brain stem, and AMIs stimulate auditory neurons in the inferior colliculus. Of these three categories, cochlear implants have been the most successful. Today the Advanced Bionics Corporation, the Cochlear Corporation and the Med-El Corporation are the major commercial providers of cochlear implants.

In contrast to traditional hearing aids that amplify sound and send it through the external ear, cochlear implants acquire and process the sound and convert it into electrical energy for subsequent delivery to the auditory nerve. The microphone of the CI system receives sound from the external environment and sends it to the processor. The processor digitizes the sound and filters it into separate frequency bands, which are sent to the appropriate tonotopic regions of the cochlea that approximately correspond to those frequencies.
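A minimal sketch of this band-splitting idea is shown below, assuming a simple FFT-based analysis of a short audio frame; it is not a clinical CI processing strategy, and the sample rate, band edges, and electrode count are all assumptions made for illustration.

```python
# Illustrative sketch (not a clinical processor) of the strategy described
# above: digitize sound, split it into frequency bands, and map each band's
# energy to the electrode covering the matching tonotopic region.
import numpy as np

fs = 16000                                   # sample rate (Hz), assumed
t = np.arange(0, 0.05, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)

n_electrodes = 12
# Band edges spaced logarithmically from 250 Hz to 8 kHz, one band per electrode.
edges = np.logspace(np.log10(250), np.log10(8000), n_electrodes + 1)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Energy in each band becomes the stimulation level for that electrode.
levels = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in zip(edges, edges[1:])]
for i, level in enumerate(levels):
    print(f"electrode {i:2d}: band {edges[i]:6.0f}-{edges[i+1]:6.0f} Hz, energy {level:.1f}")
```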

In 1957, French researchers A. Djourno and C. Eyries, with the help of D. Kayser, provided the first detailed description of directly stimulating the auditory nerve in a human subject.[9] The individuals described hearing chirping sounds during stimulation. In 1972, the first portable cochlear implant system was implanted in an adult at the House Ear Clinic. The U.S. Food and Drug Administration (FDA) formally approved the marketing of the House-3M cochlear implant in November 1984.[10]

Improved cochlear implant performance depends not only on understanding the physical and biophysical limitations of implant stimulation but also on an understanding of the brain's pattern-processing requirements. Modern signal processing represents the most important speech information while also providing the brain with the pattern-recognition information that it needs. Pattern recognition in the brain is more effective than algorithmic preprocessing at identifying important features in speech. A combination of engineering, signal processing, biophysics, and cognitive neuroscience was necessary to produce the right balance of technology to maximize the performance of auditory prostheses.[11]

Cochlear implants have also been used to allow the acquisition of spoken language in congenitally deaf children, with remarkable success in early implantations (before 2–4 years of age).[12] About 80,000 children have been implanted worldwide.

The concept of combining simultaneous electric-acoustic stimulation (EAS) for the purposes of better hearing was first described by C. von Ilberg and J. Kiefer, of the Universitätsklinik Frankfurt, Germany, in 1999.[13] That same year the first EAS patient was implanted. Since the early 2000s the FDA has been involved in a clinical trial of a device termed the "Hybrid" by Cochlear Corporation. This trial is aimed at examining the usefulness of cochlear implantation in patients with residual low-frequency hearing. The "Hybrid" utilizes a shorter electrode than the standard cochlear implant; because the electrode is shorter, it stimulates the basal region of the cochlea and hence the high-frequency tonotopic region. In theory these devices would benefit patients with significant low-frequency residual hearing who have lost perception in the speech frequency range and hence have decreased discrimination scores.[14]

Prosthetics for pain relief

The SCS (Spinal Cord Stimulator) device has two main components: an electrode and a generator. The technical goal of SCS for neuropathic pain is to mask the area of a patient's pain with a stimulation-induced tingling, known as "paresthesia", because this overlap is necessary (but not sufficient) to achieve pain relief.[15] Paresthesia coverage depends upon which afferent nerves are stimulated. The most easily recruited by a dorsal midline electrode, close to the pial surface of the spinal cord, are the large dorsal column afferents, which produce broad paresthesia covering segments caudally.

In ancient times, electrogenic fish were used as shockers to relieve pain. Healers had developed specific and detailed techniques to exploit the generative qualities of the fish to treat various types of pain, including headache. Because of the awkwardness of using a living shock generator, a fair level of skill was required to deliver the therapy to the target for the proper amount of time (including keeping the fish alive as long as possible). Electroanalgesia was the first deliberate application of electricity. By the nineteenth century, most western physicians were offering their patients electrotherapy delivered by portable generator.[16] In the mid-1960s, however, three things converged to ensure the future of electrostimulation.
  1. Pacemaker technology, which had its start in 1950, became available.
  2. Melzack and Wall published their gate control theory of pain, which proposed that the transmission of pain could be blocked by stimulation of large afferent fibers.[17]
  3. Pioneering physicians became interested in stimulating the nervous system to relieve patients from pain.
The design options for electrodes include their size, shape, arrangement, number, and assignment of contacts, and how the electrode is implanted. The design options for the pulse generator include the power source, target anatomic placement location, current or voltage source, pulse rate, pulse width, and number of independent channels. Programming options are very numerous (a four-contact electrode offers 50 functional bipolar combinations; see the short check below). Current devices use computerized equipment to find the best options for use. This reprogramming option compensates for postural changes, electrode migration, changes in pain location, and suboptimal electrode placement.[18]
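The "50 functional bipolar combinations" figure follows from counting the assignments of each of the four contacts to anode, cathode, or off, keeping only settings with at least one anode and at least one cathode (3^4 − 2·2^4 + 1 = 50). The snippet below is just a quick enumeration that confirms the count; it is not code from any stimulator programming system.

```python
# Quick check of the "50 functional bipolar combinations" figure quoted above:
# each of the 4 contacts can be an anode (+), a cathode (-), or off (0), and a
# usable bipolar setting needs at least one anode and at least one cathode.
from itertools import product

settings = [s for s in product("+-0", repeat=4) if "+" in s and "-" in s]
print(len(settings))   # -> 50
```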

Motor prosthetics

Devices which support the function of the autonomic nervous system include the implant for bladder control. In the somatic nervous system, attempts to aid conscious control of movement include functional electrical stimulation and the lumbar anterior root stimulator.

Bladder control implants

Where a spinal cord lesion leads to paraplegia, patients have difficulty emptying their bladders and this can cause infection. From 1969 onwards Brindley developed the sacral anterior root stimulator, with successful human trials from the early 1980s onwards.[19] This device is implanted over the sacral anterior root ganglia of the spinal cord; controlled by an external transmitter, it delivers intermittent stimulation which improves bladder emptying. It also assists in defecation and enables male patients to have a sustained full erection.

The related procedure of sacral nerve stimulation is for the control of incontinence in able-bodied patients.[20]

Motor prosthetics for conscious control of movement

Researchers are currently investigating and building motor neuroprosthetics that will help restore movement and the ability to communicate with the outside world to persons with motor disabilities such as tetraplegia or amyotrophic lateral sclerosis. Research has found that the striatum plays a crucial role in motor and sensory learning; this was demonstrated by an experiment in which the firing rates of lab rats' striatal neurons were recorded at higher rates after the animals performed a task repeatedly.

To capture electrical signals from the brain, scientists have developed microelectrode arrays smaller than a square centimeter that can be implanted in the skull to record electrical activity, transducing recorded information through a thin cable. After decades of research in monkeys, neuroscientists have been able to decode neuronal signals into movements. Completing the translation, researchers have built interfaces that allow patients to move computer cursors, and they are beginning to build robotic limbs and exoskeletons that patients can control by thinking about movement.

The technology behind motor neuroprostheses is still in its infancy. Investigators and study participants continue to experiment with different ways of using the prostheses. Having a patient think about clenching a fist, for example, produces a different result than having him or her think about tapping a finger. The filters used in the prostheses are also being fine-tuned, and in the future, doctors hope to create an implant capable of transmitting signals from inside the skull wirelessly, as opposed to through a cable.

Preliminary clinical trials suggest that the devices are safe and that they have the potential to be effective. Some patients have worn the devices for over two years with few, if any, ill effects.

Prior to these advancements, Philip Kennedy (Emory and Georgia Tech) had an operable if somewhat primitive system which allowed an individual with paralysis to spell words by modulating their brain activity. Kennedy's device used two neurotrophic electrodes: the first was implanted in an intact motor cortical region (e.g. finger representation area) and was used to move a cursor among a group of letters. The second was implanted in a different motor region and was used to indicate the selection.[21]

Developments continue in replacing lost arms with cybernetic replacements by using nerves normally connected to the pectoralis muscles. These arms allow a slightly limited range of motion, and reportedly are slated to feature sensors for detecting pressure and temperature.[22]

Dr. Todd Kuiken at Northwestern University and Rehabilitation Institute of Chicago has developed a method called targeted reinnervation for an amputee to control motorized prosthetic devices and to regain sensory feedback.

Sensory/motor prosthetics

In 2002 a multielectrode array of 100 electrodes, which now forms the sensor part of a BrainGate, was implanted directly into the median nerve fibers of scientist Kevin Warwick. The recorded signals were used to control a robot arm developed by Warwick's colleague, Peter Kyberd, which mimicked the actions of Warwick's own arm.[23] Additionally, a form of sensory feedback was provided via the implant by passing small electrical currents into the nerve. This caused a contraction of the first lumbrical muscle of the hand, and it was this movement that was perceived.[23]

Obstacles

Mathematical modelling

Accurate characterization of the nonlinear input/output (I/O) parameters of the normally functioning tissue to be replaced is paramount to designing a prosthetic that mimics normal biologic synaptic signals.[24][25] Mathematical modeling of these signals is a complex task "because of the nonlinear dynamics inherent in the cellular/molecular mechanisms comprising neurons and their synaptic connections".[26][27][28] The output of nearly all brain neurons is dependent on which post-synaptic inputs are active and in what order the inputs are received (spatial and temporal properties, respectively).[29]

Once the I/O parameters are modeled mathematically, integrated circuits are designed to mimic the normal biologic signals. For the prosthetic to perform like normal tissue, it must process the input signals, a process known as transformation, in the same way as normal tissue.
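One common simplification used to capture both the spatial, temporal, and nonlinear character of such a transformation is a linear-nonlinear cascade: each input channel is weighted, the combined drive is filtered in time, and the result passes through a static nonlinearity to yield an output rate. The sketch below is a minimal stand-in for the kind of model described above, not the multi-input, multi-output models actually used in prosthesis research; all parameter values are assumptions:

    import numpy as np

    def ln_output_rate(spike_trains, spatial_weights, temporal_kernel, gain=1.0):
        """Linear-nonlinear model: weight each input channel (spatial properties),
        convolve with a temporal kernel (timing/order of inputs matters),
        then apply a saturating nonlinearity to get an output firing rate."""
        # spike_trains: (n_inputs, n_timesteps) binary array
        drive = spatial_weights @ spike_trains                                # spatial summation
        drive = np.convolve(drive, temporal_kernel, mode="full")[: spike_trains.shape[1]]
        return 1.0 / (1.0 + np.exp(-gain * (drive - 1.0)))                    # sigmoid nonlinearity

    rng = np.random.default_rng(1)
    spikes = (rng.random((4, 200)) < 0.05).astype(float)                      # 4 presynaptic inputs
    weights = np.array([0.8, 0.5, 1.2, 0.3])                                  # assumed spatial weights
    kernel = np.exp(-np.arange(20) / 5.0)                                     # assumed temporal kernel
    rate = ln_output_rate(spikes, weights, kernel)
    print(rate.shape, rate.max())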

Size

Implantable devices must be very small to be implanted directly in the brain, roughly the size of a quarter. One example of a microimplantable electrode array is the Utah array.[30]

Wireless controlling devices can be mounted outside of the skull and should be smaller than a pager.

Power consumption

Power consumption drives battery size. Optimization of the implanted circuits reduces power needs. Implanted devices currently need on-board power sources. Once the battery runs out, surgery is needed to replace the unit. Longer battery life correlates to fewer surgeries needed to replace batteries. One option that could be used to recharge implant batteries without surgery or wires is being used in powered toothbrushes.[citation needed] These devices make use of inductive coupling to recharge batteries. Another strategy is to convert electromagnetic energy into electrical energy, as in radio-frequency identification tags.
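Because longer battery life means fewer replacement surgeries, even a rough energy budget is informative. The numbers below are purely illustrative assumptions (a small implant-grade cell and a low average draw), not specifications of any actual device:

    # Illustrative energy budget for an implanted device (all values assumed).
    battery_capacity_mAh = 200      # assumed small implant-grade cell
    battery_voltage_V = 3.7
    average_power_mW = 0.03         # assumed average draw of circuitry plus stimulation

    energy_mWh = battery_capacity_mAh * battery_voltage_V   # stored energy in milliwatt-hours
    lifetime_hours = energy_mWh / average_power_mW
    print(f"Estimated lifetime: {lifetime_hours / 24 / 365:.1f} years")   # ~2.8 years here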

Biocompatibility

Cognitive prostheses are implanted directly in the brain, so biocompatibility is a very important obstacle to overcome. Materials used in the housing of the device, the electrode material (such as iridium oxide[31]), and the electrode insulation must be chosen for long-term implantation. Such devices are subject to standards such as ISO 14708-3:2008, Implants for surgery – Active implantable medical devices – Part 3: Implantable neurostimulators.

Crossing the blood–brain barrier can introduce pathogens or other materials that may cause an immune response. The brain has its own immune system that acts differently from the immune system of the rest of the body.

Open questions remain: How does this affect material choice? Does the brain have unique phagocytes that act differently and may affect materials thought to be biocompatible in other areas of the body?

Data transmission

Wireless transmission is being developed to allow continuous recording of neuronal signals from individuals in their daily lives. This allows physicians and clinicians to capture more data and ensures that short-term events such as epileptic seizures can be recorded, allowing better treatment and characterization of neural disease.

A small, lightweight device that allows constant recording of primate brain neurons has been developed at Stanford University.[32] This technology also enables neuroscientists to study the brain outside the controlled environment of a lab.

Methods of data transmission must be robust and secure. Neurosecurity is a new issue. Makers of cognitive implants must prevent unwanted downloading of information or thoughts[citation needed] from the device, and the uploading of detrimental data that may interrupt its function.

Correct implantation

Implantation of the device presents many problems. First, the correct presynaptic inputs must be wired to the correct postsynaptic inputs on the device. Second, the outputs from the device must be targeted correctly to the desired tissue. Third, the brain must learn how to use the implant. Various studies in brain plasticity suggest that this may be possible through exercises designed with proper motivation.

Technologies involved

Local field potentials

Local field potentials (LFPs) are electrophysiological signals that are related to the sum of all dendritic synaptic activity within a volume of tissue. Recent studies suggest goals and expected value are high-level cognitive functions that can be used for neural cognitive prostheses.[33] Also, Rice University scientists have discovered a new method to tune the light-induced vibrations of nanoparticles through slight alterations to the surface to which the particles are attached. According to the university, the discovery could lead to new applications of photonics from molecular sensing to wireless communications. They used ultrafast laser pulses to induce the atoms in gold nanodisks to vibrate.[34]

Automated movable electrical probes

One hurdle to overcome is the long-term implantation of electrodes. If the electrodes are moved by physical shock, or the brain shifts relative to them, the array could end up recording from different nerves. Adjustment of the electrodes is therefore necessary to maintain an optimal signal, but individually adjusting multi-electrode arrays is a tedious and time-consuming process. Development of automatically adjusting electrodes would mitigate this problem. Anderson's group is currently collaborating with Yu-Chong Tai's lab and the Burdick lab (all at Caltech) to make such a system, which uses electrolysis-based actuators to independently adjust electrodes in a chronically implanted array.[35]
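Conceptually, an automatically adjusting electrode is a feedback loop: advance or retract in small steps and keep the position that maximizes some signal-quality metric such as spike amplitude or signal-to-noise ratio. The sketch below shows only that control idea with a made-up quality function; it is not the electrolysis-actuator system developed at Caltech:

    def auto_adjust(signal_quality, start_depth_um=0.0, step_um=5.0, max_steps=50):
        """Greedy hill-climbing on electrode depth: keep stepping in whichever
        direction improves the signal-quality metric; stop when neither helps."""
        depth = start_depth_um
        best = signal_quality(depth)
        for _ in range(max_steps):
            for candidate in (depth + step_um, depth - step_um):
                q = signal_quality(candidate)
                if q > best:
                    depth, best = candidate, q
                    break
            else:
                return depth, best          # no improvement in either direction
        return depth, best

    # Hypothetical quality metric that peaks when the tip sits 120 µm from the start.
    quality = lambda d: -abs(d - 120.0)
    print(auto_adjust(quality))             # converges at 120 µm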

Image-guided surgical techniques

Image-guided surgery is used to precisely position brain implants.

Spintronics

From Wikipedia, the free encyclopedia
Spintronics (a portmanteau meaning spin transport electronics), also known as spin electronics, is the study of the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.

Spintronics fundamentally differs from traditional electronics in that, in addition to charge state, electron spins are exploited as a further degree of freedom, with implications in the efficiency of data storage and transfer. Spintronic systems are most often realised in dilute magnetic semiconductors (DMS) and Heusler alloys and are of particular interest in the field of quantum computing.

History

Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985)[5] and the discovery of giant magnetoresistance independently by Albert Fert et al.[6] and Peter Grünberg et al. (1988). The origins of spintronics can be traced to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and initial experiments on magnetic tunnel junctions by Julliere in the 1970s.[8] The use of semiconductors for spintronics began with the theoretical proposal of a spin field-effect-transistor by Datta and Das in 1990[9] and of the electric dipole spin resonance by Rashba in 1960.[10]

Theory

The spin of the electron is an intrinsic angular momentum that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is $\tfrac{1}{2}\hbar$, implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is expressed as
$$\mu = \frac{\sqrt{3}}{2}\,\frac{q}{m_e}\,\hbar.$$
In a solid the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing it with a permanent magnetic moment as in a ferromagnet.
In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin-up or spin-down electrons. The polarization of any spin-dependent property $X$ can be written as
$$P_X = \frac{X_\uparrow - X_\downarrow}{X_\uparrow + X_\downarrow}.$$
A net spin polarization can be achieved either by creating an equilibrium energy split between spin up and spin down, for example by placing the material in a large magnetic field (the Zeeman effect) or by exploiting the exchange energy present in a ferromagnet, or by forcing the system out of equilibrium. The period of time over which such a non-equilibrium population can be maintained is known as the spin lifetime, $\tau$.

In a diffusive conductor, a spin diffusion length $\lambda$ can be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond). An important research area is devoted to extending this lifetime to technologically relevant timescales.

[Figure: spin-up, spin-down, and the resulting spin-polarized population of electrons. Inside a spin injector the polarization is constant, while outside the injector the polarization decays exponentially to zero as the spin-up and spin-down populations return to equilibrium.]
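The behavior described in that figure, constant polarization inside the injector and exponential relaxation outside it, follows directly from the quantities defined above: the polarization decays on the scale of the spin diffusion length, which in a diffusive conductor is often estimated as the square root of the product of the diffusion constant and the spin lifetime. A small numerical sketch, with illustrative parameter values only:

    import numpy as np

    # Illustrative values for a metallic conductor (order of magnitude only).
    D = 5e-3          # diffusion constant, m^2/s
    tau = 1e-10       # spin lifetime, s (sub-nanosecond, as for metals)
    lam = np.sqrt(D * tau)          # spin diffusion length, m

    P0 = 0.4                        # polarization injected at x = 0
    x = np.linspace(0, 5 * lam, 6)  # distances from the injector
    P = P0 * np.exp(-x / lam)       # exponential decay toward equilibrium

    print(f"spin diffusion length ≈ {lam * 1e9:.0f} nm")
    for xi, pi in zip(x, P):
        print(f"x = {xi * 1e9:6.1f} nm  ->  P = {pi:.3f}")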

The mechanisms of decay for a spin polarized population can be broadly classified as spin-flip scattering and spin dephasing. Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore switch an incoming spin up state into an outgoing spin down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures.

Superconductors can enhance central effects in spintronics such as magnetoresistance effects, spin lifetimes and dissipationless spin-currents.[11][12]

The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor.
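The resistance change behind GMR is commonly explained with the simple two-current (Mott) model: spin-up and spin-down electrons see different resistances in each ferromagnetic layer and conduct in parallel channels. The sketch below uses that textbook model with made-up resistance values, simply to show why the aligned configuration has the lower resistance:

    def parallel(r1, r2):
        return r1 * r2 / (r1 + r2)

    # Two-current model: resistance seen by majority/minority spins in one
    # ferromagnetic layer (illustrative values, in ohms).
    r_low, r_high = 1.0, 4.0

    # Aligned magnetizations: one spin channel stays low-resistance through
    # both layers, the other stays high-resistance.
    R_P = parallel(r_low + r_low, r_high + r_high)

    # Anti-aligned magnetizations: each spin channel sees one low- and one
    # high-resistance layer.
    R_AP = parallel(r_low + r_high, r_low + r_high)

    gmr_ratio = (R_AP - R_P) / R_P
    print(f"R_aligned = {R_P:.2f} ohm, R_anti = {R_AP:.2f} ohm, GMR = {gmr_ratio:.0%}")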

Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers.

Other metal-based spintronics devices:
  • Tunnel magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating ferromagnetic layers.
  • Spin-transfer torque, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device.
  • Spin-wave logic devices carry information in the phase. Interference and spin-wave scattering can perform logic operations.

Spintronic-logic devices

Non-volatile spin-logic devices that enable scaling are being extensively studied.[13] Spin-transfer-torque-based logic devices that use spins and magnets for information processing have been proposed.[14][15] These devices are part of the ITRS exploratory roadmap. Logic-in-memory applications are already in the development stage.[16][17] A 2017 review article can be found in Materials Today.[18]

Applications

Read heads of magnetic hard drives are based on the GMR or TMR effect.

Motorola developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor that has a read/write cycle of under 50 nanoseconds.[19] Everspin has since developed a 4 Mb version.[20] Two second-generation MRAM techniques are in development: thermal-assisted switching (TAS)[21] and spin-transfer torque (STT).[22]

Another design, racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic wire.

Magnetic sensors can use the GMR effect.[citation needed]

In 2012 persistent spin helices of synchronized electrons were made to persist for more than a nanosecond, a 30-fold increase, longer than the duration of a modern processor clock cycle.[23]

Semiconductor-based spintronic devices

Doped semiconductor materials display dilute ferromagnetism. In recent years, dilute magnetic oxides (DMOs), including ZnO-based and TiO2-based DMOs, have been the subject of numerous experimental and computational investigations.[24][25] Approaches to injecting spins into semiconductors include using non-oxide ferromagnetic semiconductor sources (such as manganese-doped gallium arsenide, GaMnAs),[26] increasing the interface resistance with a tunnel barrier,[27] or using hot-electron injection.[28]

Spin detection in semiconductors has been addressed with multiple techniques:
  • Faraday/Kerr rotation of transmitted/reflected photons[29]
  • Circular polarization analysis of electroluminescence[30]
  • Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals)[31]
  • Ballistic spin filtering[32]
The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon.[33]

Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation, called the Hanle effect.

Applications

Applications using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output.[34] Examples include semiconductor lasers. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope.

Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer[35] has the following terminals:
  • Emitter (FM1): Injects spin-polarized hot electrons into the base.
  • Base (FM2): Spin-dependent scattering takes place in the base. It also serves as a spin filter.
  • Collector (GaAs): A Schottky barrier is formed at the interface. It only collects electrons that have enough energy to overcome the Schottky barrier, and when states are available in the semiconductor.
The magnetocurrent (MC) is given as
$$MC = \frac{I_{c,p} - I_{c,ap}}{I_{c,ap}},$$
and the transfer ratio (TR) is
$$TR = \frac{I_C}{I_E}.$$
MTT promises a highly spin-polarized electron source at room temperature.
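With the two formulas above, the device figures of merit are straightforward to evaluate; the collector and emitter currents below are hypothetical numbers chosen only to show the arithmetic, not measurements from any reported device:

    # Hypothetical magnetic-tunnel-transistor currents (illustrative values).
    I_c_parallel = 12e-9      # collector current with magnetizations parallel (A)
    I_c_antiparallel = 2e-9   # collector current with magnetizations antiparallel (A)
    I_emitter = 1e-3          # emitter current (A)

    magnetocurrent = (I_c_parallel - I_c_antiparallel) / I_c_antiparallel
    transfer_ratio = I_c_parallel / I_emitter

    print(f"MC = {magnetocurrent:.0%}, TR = {transfer_ratio:.1e}")   # MC = 500%, TR = 1.2e-05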

Storage media

Antiferromagnetic storage media have been studied as an alternative to ferromagnetic media,[36] since bits can be stored in antiferromagnetic material just as they can in ferromagnetic material. Instead of the usual definition 0 -> 'magnetisation upwards', 1 -> 'magnetisation downwards', the states can be, e.g., 0 -> 'vertically-alternating spin configuration' and 1 -> 'horizontally-alternating spin configuration'.[37]

The main advantages of antiferromagnetic material are:
  • insensitivity to data-damaging perturbations by stray fields, owing to zero net external magnetization;[38]
  • no effect on nearby particles, implying that antiferromagnetic device elements would not magnetically disturb neighboring elements;[38]
  • far shorter switching times (antiferromagnetic resonance frequency is in the THz range compared to GHz ferromagnetic resonance frequency);[39]
  • broad range of commonly available antiferromagnetic materials including insulators, semiconductors, semimetals, metals, and superconductors.[39]
Research is being done into how to read and write information in antiferromagnetic spintronics, as their net zero magnetization makes this more difficult than in conventional ferromagnetic spintronics. In modern MRAM, detection and manipulation of ferromagnetic order by magnetic fields has largely been abandoned in favor of more efficient and scalable reading and writing by electrical current. Methods of reading and writing information by current rather than by fields are also being investigated in antiferromagnets, since fields are ineffective there anyway. Writing methods currently being investigated in antiferromagnets use spin-transfer torque and spin-orbit torque from the spin Hall effect and the Rashba effect. Reading information in antiferromagnets via magnetoresistance effects such as tunnel magnetoresistance is also being explored.

Equality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Equality_...