June 13, 2002 by Vernor Vinge, Ray Kurzweil
Original link:
http://www.kurzweilai.net/singularity-chat-with-vernor-vinge-and-ray-kurzweil
Originally posted June 11, 2002 on SCIFI.COM. Published June 13, 2002 on KurzweilAI.net.
Vernor Vinge (screen name “vv”) and Ray Kurzweil (screen name
“RayKurzweil”) recently discussed The Singularity — their idea that
machine intelligence will soon far exceed human intelligence — in
an online chat room co-produced by Analog Science Fiction and Fact and
Asimov’s Science Fiction magazine on SCIFI.COM. Vinge, a noted science
fiction writer, is the author of the seminal paper, “The Technological
Singularity.” Kurzweil’s book The Singularity Is Near is due out in
early 2003 and is previewed in “The Law of Accelerating Returns.” (Note:
typos corrected and comments aggregated for readability.)
ChatMod: Hi everyone, thanks for joining us here. I’m Patrizia
DiLucchio for SCIFI. Tonight we’re pleased to welcome science fiction
writer Vernor Vinge, and author, innovator, and inventor Ray Kurzweil —
the founder of Kurzweil Technologies. Tonight’s topic is Singularity.
The term "singularity" refers to the geometric rate of the growth of
technology and the idea that this growth will lead to a superhuman
machine that will far exceed the human intellect.
ChatMod: This evening’s chat is co-produced by Analog Science
Fiction and Fact and Asimov’s Science Fiction (www.asimovs.com), the
leading publications in cutting edge science fiction. Our host tonight
is Asimov’s editor Gardner Dozois.
ChatMod: Brief word about the drill. This is a moderated chat —
please send your questions for our guest to ChatMod, as private
messages. (To send a private message, either double-click on ChatMod or
type "/msg ChatMod" on the command line – only without the quotes.)…Then
hit Enter (or Return on a Mac.)
ChatMod: Gardner, I will leave things to you :)
Gardner: So, I think that Vernor Vinge needs no introduction to this audience. Mr. Kurzweil, would you care to introduce yourself?
RayKurzweil: I consider myself an inventor, entrepreneur, and author.
Gardner: What have you invented? What’s your latest book?
RayKurzweil: My inventions are in the area of pattern
recognition, which is part of AI, and the part that ultimately will play
a crucial role because the vast majority of human intelligence is based
on recognizing patterns. I’ve worked in the areas of optical character
recognition, music synthesis, speech synthesis and recognition. I’m now
working on an automated stock market fund based on pattern recognition.
As for books, there is one coming out next week: "Are We Spiritual
Machines? Ray Kurzweil vs. the Critics of Strong AI," from the Discovery
Institute Press.
My next trade book will be The Singularity Is Near
(Viking), expected early next year (the book, that is, not the
Singularity).
Gardner: We recognize them even when they don’t exist, in fact.
Like the Face on Mars.
RayKurzweil: The face on Mars shows our power of anthropomorphization.
Gardner: So how far ARE we from the Singularity, then? Guesses?
RayKurzweil: I think it would first make sense to discuss what
the Singularity is. The definition offered at the beginning of this
chat is one view, but there are others.
Gardner: Go for it.
RayKurzweil: I think that once a nonbiological intelligence
(i.e., a machine) reaches human intelligence in its diverse dimensions,
it will necessarily soar past it because (i) computational and
communication power will continue to grow exponentially, (ii) machines
can already master information with far greater capacity and accuracy,
and (iii), most importantly, machines can share their knowledge. We
don’t have quick downloading ports on our neurotransmitter concentration
patterns, or interneuronal connection patterns. Machines will.
We have hundreds of examples of "narrow AI" today, and I believe
we’ll have "strong AI" (capable of passing the Turing test), which will
thereby soar past human intelligence for the reasons I stated above, by
2029.
But that’s not the Singularity. This is "merely" the means by which
technology will continue to grow exponentially in its power.
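(As a back-of-the-envelope illustration of the exponential arithmetic
behind a forecast like this, here is a minimal Python sketch. The
18-month doubling period for the price-performance of computation is an
assumption made purely for the sketch, not a figure quoted in this chat.)

```python
# Doubling arithmetic only: how much more computation per dollar by 2029,
# assuming (purely for illustration) price-performance doubles every 1.5 years.
years = 2029 - 2002            # horizon from the date of this chat
doubling_period = 1.5          # assumed years per doubling
doublings = years / doubling_period
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> ~{growth:,.0f}x more computation per dollar")
# 18 doublings -> ~262,144x more computation per dollar
```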
Gardner: Vernor? That fit your ideas?
vv: I agree that there are lots of different takes.
RayKurzweil: If we can combine strong AI, nanotechnology and
other exponential trends, technology will appear to tear the fabric of
human understanding by around the mid 2040s by my estimation. However,
the event horizon of the Singularity can be compared to the concept of a
singularity in physics. As one gets near a black hole, the event horizon
appears different from the inside than it does from the outside. The
same will be true of this historical Singularity.
Once we get there, if one is not crushed by it (and avoiding that will
require merging with the technology), then it will not appear to be a
rip in the fabric; one will be able to keep up with it.
Gardner: What will it look like from the inside?
vv: That depends on who the observer is. Hans Moravec once
pointed out that if “you” are riding the curve of intellect improvement,
then no singularity is visible. But if “you” have only human intellect,
then the transition could be pretty unknowable. In that latter case,
all we have are analogies.
RayKurzweil: We’re thinking alike here. We can only answer
that by analogy or metaphor. By definition, we cannot describe processes
whose intelligence exceeds our own. But we can compare lesser animals to
humans, and then make the analogy to our intellectual descendants.
vv: Yes, :-)
Gardner: Seems unlikely to me that EVERYONE will have an equal
capacity for keeping up with it. There are people today who have
trouble keeping up even with the 20th Century, like the Amish.
RayKurzweil: The Amish seem to fit in well. I could think of other examples of people who would like to turn the clock back.
Gardner: Many. So won’t the same be true during the Singularity?
RayKurzweil: But in terms of opportunity, this is the
have/have-not issue. Keep in mind that because of what I call the "law
of accelerating returns," technology starts out unaffordable, becomes
merely expensive, then inexpensive, then free.
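(A minimal sketch of the cost trajectory described here, assuming an
exponentially halving unit price; the starting price and the two-year
halving time are illustrative assumptions, not figures from the chat.)

```python
import math

initial_price = 1_000_000.0   # assumed launch price of a new technology ($)
halving_years = 2.0           # assumed time for the unit price to halve

def years_to_reach(threshold: float) -> float:
    """Years until an exponentially halving price falls to `threshold`."""
    return halving_years * math.log2(initial_price / threshold)

# Successive affordability thresholds: expensive -> inexpensive -> free.
for label, threshold in [("merely expensive", 100_000.0),
                         ("inexpensive", 1_000.0),
                         ("effectively free", 1.0)]:
    print(f"{label:>18}: ~{years_to_reach(threshold):.1f} years")
```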
vv: True, but the better analogy is across the entire kingdom of life.
Gardner: How do you mean that, Vernor?
RayKurzweil: We can imagine some bacteria discussing the pros
and cons of evolving into more advanced species like humans. There might
be some strong arguments against doing so.
RayKurzweil: If bacteria could talk, of course.
Gardner: From the bacteria’s point of view, they might be right.
vv: When dealing with "superhumans," it is not the same thing
as comparing — say — our tech civ with a pretech human civ. The
analogies should be with the animal kingdom, and perhaps even more with
things further away and more primitive.
RayKurzweil: Bacteria are still doing pretty well, and
comprise a larger portion of the biomass than humans. Ants are doing
very well also. I agree with your last comment, Vernor.
vv: Yes, there could well be a place for the normal humans — certainly as a safety reservoir in case of tech disasters.
Gardner: So do we end up with civilization split up into several different groups, then, at different levels of evolution?
RayKurzweil: I think that the super intelligences will appear
to unenhanced humans (MOSHs – Mostly Original Substrate Humans) to be
their transcendent servants. They will be able to meet all the needs of
MOSHs with only a trivial portion of their intellectual capacity, and
will appreciate and honor their forebears.
vv: I think this is one of the more plausible scenarios (and
relatively happy). I thought the movie "Weird Science" was very
impressive for that reason (though I’m not sure how many people had that
interpretation of the movie). :-)
RayKurzweil: Sorry I missed that movie — most scifi movies are dystopian.
Gardner: If they’re so transcendently superior, though, why
should they bother to serve unevolved humans at all? However trivial an
effort it would demand on their part?
vv: At Foresight a few years ago, [Eliezer] Yudkowsky put it simply: "They were designed originally to like us."
RayKurzweil: For a while anyway, they would want to preserve this knowledge, just as we value things from our past today.
Gardner: There are people who don’t like people NOW, though. And people who care nothing about preserving their heritage.
RayKurzweil: Also I see this emerging intelligence as human,
as an expression of the human civilization. It’s emerging from within us
and from within our civilization. It’s not an alien invasion.
Gardner: Would this change?
vv: I am not convinced that any particular scenario is most likely, though some are much more attractive than others.
ChatMod: Let me jump in for a moment just to let our audience
know that we will be using audience questions during our second 30
minutes. We haven’t forgotten you.
RayKurzweil: These developments are more likely to occur from
the more thoughtful parts of our civilization. The thoughtless parts
won’t have the capacity to do it.
Gardner: Let’s hope.
RayKurzweil: Yes. There are certainly downsides.
Gardner: "The rude mind with difficulty associates the ideas
of power and benignity." True. But there have been people with immense
power who have not used it at all benignly. Who, in fact, have used it
with deliberate malice.
vv: It may be too late to announce a sympathy for animal rights :-)
RayKurzweil: One could argue that network-based communication such as the Internet encourages democracy.
Gardner: Yes, but it also encourages people who deliberately
promulgate viruses and try to destroy communications, for no particular
reason other than for the pure hell of it.
RayKurzweil: Computer viruses are a good example of a new
human-made self-replicating pathogen. It’s a good test case for how well
we can deal with new self-replicating dangers.
Gardner: True.
Gardner: Let’s hope our societal immune system is up to the task!
RayKurzweil: It’s certainly an uneasy balance — not one we can
get overly comfortable about, but on balance I would say that we’ve
kept computer viruses to a nuisance level. If we do half as well with
bioengineered viruses or self-replicating nanotechnology entities, we’ll
be doing well.
ChatMod: STATION IDENTIFICATION: Just a reminder. We’re
chatting with science fiction writer Vernor Vinge, and author,
innovator, and inventor Ray Kurzweil — the founder of Kurzweil
Technologies. Tonight’s topic is Singularity. Tonight’s chat is
co-produced by Analog Science Fiction and Fact and Asimov’s Science
Fiction (www.asimovs.com).
Gardner: Vernor, for the most part, do you think that we’d
LIKE living in a Singularity? Do you think positive or negative
scenarios are more likely?
vv: I’m inclined toward the optimistic, but very pessimistic
things are possible. I don’t see any logically inescapable conclusion.
History has had this trajectory toward the good, and there are
retrospective explanations for why this is inevitable — but, that may
also just be blind luck, the “inevitability” just being the anthropic
principle putting rose-colored glasses on our view of the process.
RayKurzweil: My feeling about that is that we would not want
to turn back. Consider asking people two centuries ago if they would
like living today. Many might say the dangers are too great. But how
many people today would opt to go back to the short (37-year average
life span), difficult lives of 200 years ago?
Gardner: It’s interesting that only in the last few years are
we seeing stories in SF that share that positive view, instead of the
automatic negative default setting. Good point, Ray. I always distrust
the longing for The Good Old Days, especially since they sucked for
most people on Earth.
RayKurzweil: Indeed, most humans lived lives of great hardship and labor, filled with disease and poverty.
Gardner: Audience questions?
ChatMod: Audience question: Hi there. I have a
question for Ray Kurzweil: Have the events of Sept. 11 altered your
forecast for the evolution of machine intelligence? Could a worldwide
nuclear war–heaven forbid–derail the emergence of sentient machines? Or
has technology already advanced so far that a singularity is inevitable
within the next fifty years?
RayKurzweil: The downsides of technology are not new. There
was great destruction in the 20th century — 50 million people died in WW
II, made possible by technology. However, these great dislocations in
history did nothing to dislocate the advance of technology. I’ve been
gathering technology trends over the past century, and they all
continued quite smoothly through wars, economic depressions and
recessions, and other events such as this. It would take a total
disaster to stop this process, which is possible, but I think unlikely.
Gardner: A worldwide plague, like the Black Death, might be
even more effective at setting back the clock a bit. In fact,
World War II accelerated the rate of technological progress. Wars tend
to do that.
RayKurzweil: A bioengineered pathogen that was unstoppable is a grave danger. We’ll need the antiviral technologies in time.
ChatMod: Next Question
ChatMod: Audience question for Vernor: Saw a reference to a
possible film version of "True Names," Vinge’s "Singularity" novella, on
"Coming Attractions"
(http://www.corona.bc.ca/films/filmlistingsFramed.html). What is happening
with that?
vv: Aha — It is under option. True Names was recently pitched
to one of the heads of the scifi network. I’ve heard that it was very
well received and that they’re thinking about it for a TV series. It
would be an action series focused on things like anonymity but leading
ultimately to singularity issues.
ChatMod: Next Question
ChatMod: Audience question: How does Mr.
Kurzweil reconcile his statement that superintelligent AI can be
expected in 2029, but the Singularity will not begin to "tear the fabric
of human understanding" until 2040?
RayKurzweil: The threshold of a machine being able to pass the
Turing test will not immediately tear the fabric of human history. But
when you then factor in continuing exponential growth, creation of many
more such intelligences, the evolution of these machines into true super
intelligences, combining with nanotech, etc., and the maturing of all
this, well, that will take a bit longer.
Gardner: If it’s the AIs who are becoming superintelligent, maybe it’s them who’ll have the Singularity, not the rest of us.
ChatMod: Next Question
ChatMod: Audience question: For both
guests. So far, with humans, evolution seems to have favored
intelligence as a survival trait. Once machines can think in this
"Singularity" stage, do you think intelligence will still be favored? Or
will stronger, healthier bodies be preferred for the intelligent tech
we’ll all be carrying like symbiotes?
RayKurzweil: The us-them issue is very important. Again, I
don’t see this as an alien invasion. It’s emerging from our brains, and
one of the primary applications, at least initially, will be to
literally expand our brains from the inside.
RayKurzweil: Intelligence will always win out. It’s clever enough to circumvent lesser distinctions.
vv: I think the intelligence will be even more strongly favored (as the facilitator of all these other characteristics).
ChatMod: Next Question
ChatMod: Audience question for Vinge and Kurzweil: What other
paths are there to the Singularity, and which is most likely to lead
there?
RayKurzweil: Eric Drexler asks this question: Will we first
have strong AI, which will be intelligent enough to create full
nanotechnology (i.e., an assembler that can create anything)? Or will we
first have nanotechnology, which can be used to reverse-engineer the
human brain and create strong AI? The two paths are progressing
together; it’s hard to say which scenario is more plausible. They may
come together.
Gardner: Speaking of us-them issues, will the Singularity
affect dirt farmers in China and Mexico? Will it spread eventually to
include everyone? Or will their lives remain essentially unchanged, no
matter what’s happening elsewhere?
RayKurzweil: Dirt farmers will soon be using cell phones if
they’re not using them already. Some Asian countries have skipped
industrialization and gone directly to an information economy. Many
individuals do that as well. Ultimately these technologies become
extremely inexpensive. Not long ago, seeing someone use a cell phone in
a movie indicated that the person was a member of the power elite. That
was only a few years ago.
ChatMod: Next Question:
ChatMod: Audience question: Do you have suggestions for how we
can participate in research leading to the Singularity? I am a
programmer, for example. Are there projects someone like me can help
with?
RayKurzweil: The Singularity emerges from many thousands of
smaller advances on many fronts: three-dimensional molecular computing,
brain research (brain reverse engineering), software techniques,
communication technologies, and many others. There’s no single
Singularity study. It’s the result of many intersecting revolutions.
vv: I think there are a number of paths and they intertwine:
pure AI; IA (intelligence amplification, growing out of human-computer
interface work whereby our automation becomes an amplification of
ourselves, perhaps becoming what David Brin calls our “neo-neocortex”);
the Internet and large-scale collaboration; and some improvements in
humans themselves (e.g., improved natural memory would make early IA
interfaces much easier to set up, as in my story “Fast Times at Fairmont
High”).
ChatMod: Audience question: For both guests —
will we notice the Singularity when it happens, or will life seem
routine until we suddenly wake up and realize we’re on the other side?
Gardner: Or WILL we realize that?
RayKurzweil: Life will appear progressively more exciting.
Doesn’t it seem that way already? I mean, we’re just getting started, but
there’s already a palpable acceleration.
vv: Yes, I know some people who half-seriously suggest it has already happened.
ChatMod: Gardner, maybe you and VV can answer this one — in fiction, when did the first realistic discussion of the Singularity appear?
Gardner: I think that Vernor was the first person to come up with a term for it.
vv: I think that depends on how constrained the definition is.
In my 1993 NASA essay, I tried to research some of the history, though
that wasn’t related to fiction so much: I first used the term/idea at an
AAAI meeting in 1982, and then in OMNI in 1983.
Gardner: Although the idea that technology would in the future
give people the power of gods and make them incomprehensible to us
normal humans, goes way back. At least as far as Wells.
vv: Before that, I didn’t find a use of Singularity that was
in the sense I use it (but von Neumann apparently used the term for
something very similar).
RayKurzweil: Von Neumann said that technology appeared to be
accelerating such that it would reach a point of almost infinite power
at some point in the future.
vv: Gardner: yes, apocalyptic optimism has always been a staple of our genre!
Gardner: Some writers have pictured the far-future techno-gods
as being indifferent to us, though. Much as we usually are to ants
(unless we want to build a highway bypass where their nest is!)
RayKurzweil: We’re further away from ants, evolutionarily speaking, than the machines we’re building will be from us.
vv: The early machines anyway.
RayKurzweil: Good point.
Gardner: What about the machines that the machines we build will build, though?
RayKurzweil: That was Vernor’s point. However, I still
maintain that we will evolve into the machines. We will start by merging
with them (by, for example, placing billions of nanobots in the
capillaries of our brain), and then evolving into them.
ChatMod: Let me jump in and tell our audience that we’ll do about ten more minutes of questions, then open the floor
Gardner: Like the Tin Man paradox from THE WIZARD OF OZ.
ChatMod:
to : I’d like to know what the guests think of the idea that perhaps everyone
will be left behind after the Singularity, that humans will/may have to
deal with the idea that most meaningful human endeavors will be better
performed by AIs. Or at least by enhanced humans.
vv: I think this will be the case for standard humans. Also, I
think Greg Bear’s fast-progress scenario is possible, so that the second
and third generation stuff happens in a matter of weeks.
Gardner: I think it makes a big difference whether the AIs are
enhanced humans or pure machines. I’m hoping for the former. If it’s
the latter, we may have problems.
RayKurzweil: The problem stems more from self-replication of entities that are not all that bright — basically the gray goo scenario.
vv: Actually, an argument can be made for the reverse,
Gardner. We have so much bloody baggage that I’m not sure who to trust
as the early "god-like" human :-)
ChatMod: ‘Nother Question:
ChatMod: Audience question to ‘vv’ and ‘RayKurzweil’: When will
our mode of economics change relative to the singularity? Will
capitalism "survive" the singularity?
Gardner: Are you saying that machines would be kinder to us than humans are likely to be?
vv: I think there will continue to be scarce resources, though "mere humans" may not really perceive things that way.
RayKurzweil: Capitalism, to the extent that it creates a
Darwinian environment, fuels the ongoing acceleration of technology.
It’s the "Brave New World" scenario, in which technology is controlled
by a totalitarian government, that slows things down. The scarce
resources will all be knowledge-based.
vv: Hmm, they might be matter-based in the sense that some
form of matter is needed to make computing devices (Hans’ [Moravec]
notion of the universe converted into thinking matter).
RayKurzweil: Very little matter is needed. Fredkin has shown
that there is no lower bound to the energy and matter resources needed
for computation and communication. I do think that the entire Universe
will be changed from "dumb matter" to smart matter, which is the
ultimate answer to the cosmological questions.
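(Fredkin’s argument rests on reversible logic: the thermodynamic cost of
computing, Landauer’s k·T·ln 2, is charged only when a bit is erased, so
computation that erases nothing has no fixed per-operation energy floor.
A minimal Python sketch of that per-bit erasure cost, at an assumed room
temperature:)

```python
import math

# Landauer's principle: erasing one bit dissipates at least k*T*ln(2).
# Fredkin-style reversible logic erases no bits, escaping even this bound.
k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0              # assumed room temperature, K (illustration only)
landauer_joules = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T:.0f} K: {landauer_joules:.2e} J")
# ~2.87e-21 J per erased bit; reversible computing can go below this
```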
ChatMod: We’ll make this next one the final audience question
ChatMod: Audience question: To both
authors: If you knew that the first project team to create an enhanced
human or build a self-improving AI were hanging out in this chatroom,
what would you most want to say to them? What kind of moral imperatives
are involved with implementing or entering a Singularity?
RayKurzweil: Although this begs the question, I don’t think
the threshold is that clear cut. We already have enhanced humans, and
there are self-improving AIs within narrow constraints. Over time, these
narrow constraints will get less narrow. But putting that aside, how
about "Respect your elders"?
vv: I find it very hard to think of surefire safety advice,
but I bet that time-constrained arms races would be most likely to end
in hellish disasters.
Gardner: If there were Singularities elsewhere, what signs
would they leave behind that we could recognize? Wouldn’t the universe
already have been converted to "smart matter" if other races had hit
this point?
RayKurzweil: Yes, indeed. That is why I believe that we are
first. That’s why we don’t notice any other intelligences out there and
why SETI will fail. They’re not there. That may seem unlikely, but so is
the existence of our universe with its marvelously precise rules to
allow matter to evolve, etc. But here we are. So by the anthropic
principle, we’re here to talk about it.
Gardner: You feel that way too, Vernor?
vv: I’m not that confident of any particular explanation for
Fermi’s paradox (which this question is surely a part of). From SF we
have lots of possibilities: maybe we are the first or we are Not At All
the first — in which latter case you get something like Robert Charles
Wilson’s Darwinia or Hans Moravec’s “Pigs in Cyberspace.” Or maybe there
are big disasters that can happen, and we have been living in
anthropic-principle bliss, unaware of the dangers. However, being the
first would be nice, if only we can now pull it off and transcend!
RayKurzweil: The only other possibility is that it is
impossible to overcome the speed of light as a limitation and that there
are other superintelligences beyond our light sphere. The explanation
that a superintelligence may have destroyed itself is plausible for one
or a few such civilizations, but according to the SETI assumptions,
there should be millions or billions of such civilizations. It’s not
plausible that they all destroyed themselves, or that they all decided
to remain hidden from us.
Gardner: Well, before we open the floor, let’s get plugs. We
already know that Ray has two books out or coming out (repeat the
titles, please!). Vernor, what new projects do you have coming?
vv: Well, I have the story collection from Tor out late last
year. I’m waiting on the TV True Names stuff from SCIFI channel that I
mentioned earlier. I’ve got some near-future stuff based on "Fast Times
at Fairmont High" (about wearable computers and fine-grained ubiquitous
networking).
RayKurzweil: With regard to plugs, I would suggest people
check out KurzweilAI.net. We have about 70 big thinkers (like Vernor,
Hans Moravec, and many others) with about 400 articles, about 1000 news
items each year, a free e-newsletter you can sign up for, lots of
discussion on our MindX boards on these issues. I’m also working on a
book called "A Short Guide to a Long Life" with coauthor Terry Grossman,
MD, so that we can all live to see the Singularity. One doctor has
already cured type 1 diabetes in rats with a nanotechnology-based module
in the bloodstream. There are dozens of such ideas in progress.
ChatMod: Our hour is long gone. Thanks, gentlemen, for a great
chat. Tonight’s chat is co-produced by Analog Science Fiction and Fact
and Asimov’s Science Fiction (www.asimovs.com). Just a reminder: join us
again on June 25th, when we’ll be chatting with Asimov’s artists Michael
Carroll, Kelly and Laura Freas, and Wolf Read. Good night, everybody.