Of course, it’s easy to understand why AI has been giving rise to dystopian fears about the future among the world’s most intelligent people. That’s because the problem at the heart of AI is what the supporters of the Future of Life letter call “existential risk”: the risk that, in the near future, technology gone bad could wipe out the human race.
“Existential risk” is precisely what makes Hollywood sci-fi movies so scary. In last year’s dystopian thriller “Transcendence,” for example, Johnny Depp morphs into a super-brain with the ability to wipe the human race off the planet. At about the same time the movie hit cinemas, Hawking bluntly warned about the risks of super-intelligence: “I think the development of full artificial intelligence could spell the end of the human race.”
The reason, Hawking told the BBC in an interview, is that “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” In short, if computers get too smart, it’s game over for humans.
In the Future of Life letter, Musk and Hawking hint at a dystopian future in which humans have lost control of self-driving cars, drones and lethal weapons, and have lost the right to privacy. Even worse, computers would become smarter than humans and, at some point, would decide that humans really aren’t so necessary after all. And they would do so not because they are inherently evil, but because they are so inherently rational: humans, after all, tend to make a mess of things.
But how likely is it, really, that an AI super-mind could wreak that kind of havoc and decide that humans are expendable?
The flip side of “existential risk” is “existential reward”: the possibility that very good things can happen in the near future as a result of exponential leaps in technology. For every Stephen Hawking and Elon Musk, there’s a Ray Kurzweil or a Peter Diamandis. People who focus on “existential reward” claim that AI will bring forth a utopian future, in which the human brain’s full potential will be unlocked, giving us the ability to discover new cures, new sources of energy, and new solutions to all of humanity’s problems. Even the supporters of the Future of Life letter’s dystopian AI premise concede that there are a lot of positives out there, including the “eradication of disease and poverty.”
If you think about what AI has already accomplished, well, there’s a lot more that can be done when super-intelligence is applied to the pressing humanitarian issues of the day. The Future of Life letter notes how far, how fast, we’ve already come. AI has given us speech recognition, image classification, autonomous vehicles and machine translation. Thinking in terms of “existential reward” leads one to see the future as one of abundance, in which AI helps, not hurts, humanity.
The types of AI safeguards alluded to by Hawking and Musk in the Future of Life open letter could make a difference in ensuring “reward” wins out over “risk.” In short, these safeguards could tilt the playing field in favor of humans by ensuring, in the letter’s words, that “our AI systems must do what we want them to do.”
However, viewing the debate over AI purely in terms of humans vs. machines misses the point. It’s not us vs. them in a race for mastery of planet Earth, with human intelligence evolving linearly and digital intelligence evolving exponentially. What’s more likely is some form of hybrid evolution in which humans remain in charge but develop augmented capabilities as a result of technology. One scenario popular with sci-fi fans has humans and computers ultimately merging into some sort of interstellar species, figuring out how to leave planet Earth behind on a new mission to colonize the galaxy, and living happily ever after.
When a technology is as obviously dangerous as nuclear energy or synthetic biology, humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. Sure, it’s scary that humans may no longer be the smartest life forms in the room a generation from now, but should we really be that concerned? It seems we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should welcome the arrival of our AI masters sooner rather than later.
Dominic Basulto is a futurist and blogger based in New York City.