

The Paradigms and Paradoxes of Intelligence: Building a Brain

August 6, 2001 by Ray Kurzweil
Original link:  http://www.kurzweilai.net/the-paradigms-and-paradoxes-of-intelligence-building-a-brain

How to build a brain, written for “The Futurecast,” a monthly column in the Library Journal.

Originally published November 1992. Published on KurzweilAI.net August 6, 2001.

In the last two columns, we examined two methods for emulating intelligence in a machine. The recursive paradigm involves the application of massive computing power to analyze the implications of every possible course of action (e.g., every allowable move in a chess game) followed by every possible course of reaction that could follow each course of action, and so on in an exponentially exploding tree of possibilities. The neural net paradigm involves the cascading of networks of neurons, where each neuron simplifies thousands of inputs into a single judgment.
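
To make the contrast concrete, here is a minimal Python sketch of both paradigms, using an invented toy game tree and made-up neuron weights (none of these values come from the column itself):

```python
# Recursive paradigm: explore every allowable move, then every possible
# reply, down to a fixed depth, and back the best score up the tree.
def recursive_value(position, depth, maximizing, moves, evaluate):
    successors = moves(position)
    if depth == 0 or not successors:
        return evaluate(position)
    scores = [recursive_value(s, depth - 1, not maximizing, moves, evaluate)
              for s in successors]
    return max(scores) if maximizing else min(scores)

# Neural net paradigm: a single neuron simplifies many weighted inputs
# into one judgment.
def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Invented toy game tree: nested dicts whose leaves are scores from 0 to 10.
tree = {"a": {"c": 3, "d": 7}, "b": {"e": 2, "f": 9}}
moves = lambda p: list(p.values()) if isinstance(p, dict) else []
evaluate = lambda p: p if isinstance(p, (int, float)) else 0

print(recursive_value(tree, 2, True, moves, evaluate))  # 3: best result
                                                        # vs. a minimizing opponent
print(neuron([0.9, 0.1, 0.8], [0.5, 0.2, 0.4], 0.6))    # 1: inputs cross threshold
```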

Until recently, humans and computers used radically different strategies in their intelligent decision-making. For example, in chess (our quintessential laboratory for studying intelligence), a human chess master memorizes about 50,000 situations and then uses his or her neural net-based pattern recognition capabilities to recognize which of those situations is most applicable to the current board position. The computer chess master, in contrast, memorizes very few board positions and relies instead on its ability to analyze in depth every possible course of action during the time of play, typically examining between one million and one billion board positions for each move. The human player does not have time to consider more than a few dozen board positions, hence the reliance on a memory of previously analyzed situations.
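
The size of that search explodes quickly. Assuming an average of roughly 35 legal moves per chess position (a commonly cited figure, not taken from the column), a full search to depth d examines about 35^d positions:

```python
# Assumed average branching factor for chess (illustrative, not from the text).
BRANCHING = 35

for depth in range(1, 7):
    print(f"depth {depth}: ~{BRANCHING ** depth:,} positions")

# Depth 4 (~1.5 million) through depth 6 (~1.8 billion) brackets the
# one-million-to-one-billion range cited above; a few dozen positions,
# the human budget, amounts to barely one move of lookahead.
```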

Humans have neither the precise memory nor the mental speed to excel at the recursive paradigm (not without a computer to help). However, while humans will never master the recursive paradigm, machines are not restricted to it. Machines programmed to use the neural net paradigm are displaying a rapidly increasing ability to learn patterns (of speech, handwriting, faces, etc.) in a manner similar to, if still cruder than, that of their human creators.

Computer neural net simulations have been limited by two factors: the number of neural connections that can be simulated in real time and the capacity of computer memories. While human neurons are slow (a million times slower than electronic circuits), every neuron and every interneuronal connection operates simultaneously. With about 100 billion neurons and an average of 1,000 connections per neuron, about 100 trillion connections are computing at the same time. At about 200 computations per second each, that comes to 20 million billion (2 x 10^16) calculations per second.

So how does that compare with the state of the art in human-created technology? Specialized neural computers have been developed that simulate neurons directly in hardware. These operate about a thousand times faster than neural networks simulated in software on conventional PCs. One such neural computer, the Ricoh RN100, can process 128 million connection computations per second. This represents significant progress, but it is still 150 million times slower than the human brain.
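
Both the brain estimate and the 150-million-fold gap follow from simple arithmetic; here is a quick check of the figures quoted above:

```python
# Back-of-the-envelope check of the figures quoted above.
neurons = 100e9               # ~100 billion neurons
connections_per_neuron = 1e3  # ~1,000 connections per neuron
calcs_per_second = 200        # ~200 computations/second per connection

brain_rate = neurons * connections_per_neuron * calcs_per_second
print(f"brain: {brain_rate:.1e} calculations per second")  # 2.0e+16

rn100_rate = 128e6  # Ricoh RN100: 128 million connection computations/second
print(f"gap: {brain_rate / rn100_rate:.2e}")  # ~1.56e+08, i.e. ~150 million
```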

Moore speed and memory

So enters Moore’s Law. Moore’s Law is the driving force behind a technological revolution so vast that the entire computer revolution to date represents only a minor ripple of its ultimate implications. Simply stated, Moore’s Law says that computing speeds and densities double every 18 months. In other words, every 18 months we can buy a computer that is twice as fast and has twice as much memory for the same cost. Remarkably, this law has held true since the beginning of this century through numerous changes in underlying methods – from the mechanical card-based computing technology of the 1890 census, to the relay-based computers of the 1940s, to the vacuum tube-based computers of the 1950s, to the transistor-based machines of the 1960s, to the integrated circuits of today. The trend has held for thousands of different calculators and computers over the past 100 years. Computer memory, for example, is about 150 million times more powerful today (for the same unit cost) than it was in 1950.
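
Expressed as a formula, capacity after t years grows by a factor of 2^(t/1.5). A short sketch checks the memory figure just cited against the roughly four decades since 1950:

```python
# Moore's Law as stated above: one doubling every 18 months.
def growth_factor(years, doubling_period=1.5):
    return 2 ** (years / doubling_period)

# 1950 to 1992 is 42 years, i.e. 28 doublings:
print(f"{growth_factor(42):.2e}")  # ~2.7e+08
# The cited 150-million-fold improvement is within a single doubling
# of this prediction.
```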

Moore’s Law will continue unabated for many decades to come. We have not even begun to explore the third dimension in chip design. Chips today are flat, whereas the brain is organized in three dimensions. Improvements in semiconductor materials, including the development of superconducting circuits that do not generate heat, will enable the development of chips (actually cubes) with thousands of layers of circuitry combined with far smaller component geometries for an improvement in computer power of many million-fold. There are more than enough new computing technologies being developed to assure a continuation of Moore’s Law for a very long time.

Moore’s Law does more than double computing power every 18 months. It doubles both the capacity of computation (the number of computing elements) and the speed (the number of calculations per second) of each computing element. Since a neural computer is inherently massively parallel, each doubling of capacity and speed actually multiplies the number of neural connections per second by four. Thus, we can increase the power of our neural computer by a factor of 1000 every 7 1/2 years. To provide just one example among many that this rate of progress is quite reasonable, Ricoh has just announced a new version of its neural computer that is 12 times faster than the one developed two years ago. At this rate, a personal neural computer will match the capacity of the human brain in terms of neuron connections per second (i.e., 2 x 10^16 calculations per second) in about 20 years, or in the year 2012. Achieving the memory capacity of the human brain (10^14 analog values stored at the synapses) will take a little longer – about 27 years, or in the year 2019.
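
The dates follow from the quadrupling rule. A quick check, taking the RN100’s 128 million connection computations per second as the 1992 baseline (an assumption for illustration; the column does not state its starting point):

```python
import math

# Each 18-month doubling of both element count and element speed
# quadruples a massively parallel machine's connections per second.
def years_to_gain(factor, doubling_period=1.5):
    return math.log(factor, 4) * doubling_period

print(years_to_gain(1000))          # ~7.5 years for a 1000x gain (4^5 = 1024)

# Closing the ~156-million-fold gap from the 1992 RN100:
print(years_to_gain(2e16 / 128e6))  # ~20.4 years -> roughly the year 2012
```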

Reaching this threshold will not cause Moore’s Law to slow down. As we go through the 21st century, computer circuits will be grown like crystals, with computing taking place at the molecular level. By the year 2040, your state-of-the-art personal computer will be able to simulate a society of 10,000 brains, each of which would be operating at a speed 10,000 times faster than a human brain. Or, alternatively, it could implement a single mind with 10,000 times the memory capacity of a human brain and 100 million times the speed.

The sources of knowledge

However, raw computing speed and memory capacity, even if implemented in massively parallel neural nets, will not automatically result in human-level intelligence. The architecture and organization of these resources are at least as important as the capacity itself. And then of course, neural net-based systems will need to learn their lessons. After all, the human neural net spends at least a couple of decades learning before it is considered ready for most useful tasks.

There are several sources of such knowledge. One is the extensive array of research efforts (still performed by humans) to understand the algorithms and methods underlying the hundreds of faculties we collectively call human intelligence. Progress in this arena is steady if painstaking, although in many areas – e.g., speech recognition – algorithms already exist that are just waiting for more powerful computers to enable them.

There is, of course, a source of knowledge that we can tap to greatly accelerate our understanding of how to design intelligence in a machine, and that is the human brain itself. By probing the brain’s circuits, we can essentially copy a proven design, one that took its original designer several billion years to develop. Just as the Human Genome Project (in which the entire human genetic code is being scanned and recorded) will accelerate the ability to create new treatments and drugs, a similar effort to scan and record the neural organization of the human brain can help provide the templates of intelligence. This effort has already begun. For example, an artificial retina chip created by Synaptics is fundamentally a copy of the neural organization (implemented in silicon, of course) of the human retina and its visual processing layer.

High-speed, high-resolution magnetic resonance imaging (MRI) scanners are already able to resolve individual somas (neuron cell bodies) without disturbing the living tissue being scanned. More powerful MRIs, using larger magnets, would be capable of scanning individual nerve fibers that are only ten microns in diameter. Eventually, we will be able to automatically scan the presynaptic vesicles that are the site of human learning.

Layers of intelligence

This suggests two scenarios. The first is to scan portions of a brain to ascertain the architecture of interneuronal connections in different regions. The exact position of each nerve fiber is not as important as the overall pattern. With this information, we can design artificial neural nets that will operate similarly. This process will be like peeling an onion as each layer of human intelligence is revealed.

A more difficult scenario would be to noninvasively scan someone’s brain to map the locations and interconnections of the somas, axons, dendrites, synapses, presynaptic vesicles, and other neural components. The brain’s entire organization could then be re-created on a neural computer of sufficient capacity, including the contents of its memory. We can peer inside someone’s brain today with MRI scanners, whose resolution improves with each new generation of the device.

There are a number of technical challenges in accomplishing this, including achieving suitable resolution and bandwidth (i.e., speed of scanning), avoiding vibration, and ensuring safety. For a number of reasons, it will be easier to scan the brain of someone recently deceased than of someone still living (it is easier to get someone deceased to sit still, for one thing), but noninvasive scanning of a living brain will ultimately become feasible as MRI and other scanning technologies continue to improve in resolution and speed.

If people were scanned and then re-created in a neural computer, one might wonder: just who are those people in the machine? The answer would depend on whom you ask. If you ask the people in the machine, they would strenuously claim to be the original persons, having lived certain lives, gone into a scanner, and then woken up in the machine. On the other hand, the original people who were scanned would claim that the people in the machine are impostors: people who appear to share their memories and personalities but are definitely different people.

Many other issues are raised by these scenarios. A machine intelligence that was derived from human intelligence would need a body; a disembodied mind would quickly become depressed. While progress will be made in this area as well, building a suitable artificial body will in many ways be more challenging than building an artificial mind. Even partial success in the first and easier of the two scenarios above (scanning portions of a brain to ascertain general principles of construction) will present new dilemmas. If, as seems likely, the next century produces PCs with memory capacities and computational capabilities vastly outstripping the human brain, even a partial mastery of human cognitive faculties will be formidable. At a minimum, we are likely to see the Luddite issue (i.e., concern over the negative impact of machines on human employment) become of intense interest once again. We will examine these and other issues when we take a look at the impact of machine intelligence on life in the 21st century in an upcoming series of Futurecasts.

Meanwhile, back in the closing days of the 20th century, we all share an intense interest in making the most of our human intelligence and our frail yet marvelous human bodies. I have long been interested in our health and well-being and have arrived at a rather unexpected perspective: we actually have the knowledge to virtually eliminate heart disease and cancer. I will share some of these thoughts in the next Futurecast.

Reprinted with permission from Library Journal, November 1992. Copyright © 1992, Reed Elsevier, USA
