January 19, 2006 by Kevin Kelly
Original link: http://www.kurzweilai.net/we-are-the-web-2
Originally published in Wired Magazine August 2005. Published on KurzweilAI.net January 19, 2006.
The planet-sized “Web” computer is already more complex than a
human brain and has surpassed the 20-petahertz threshold for potential
intelligence as calculated by Ray Kurzweil. In 10 years, it will be
ubiquitous. So will superintelligence emerge on the Web, not in a
supercomputer?
Ten years ago, Netscape’s explosive IPO ignited huge piles of money.
The brilliant flash revealed what had been invisible only a moment
before: the World Wide Web. As Eric Schmidt (then at Sun, now at Google)
noted, the day before the IPO, nothing about the Web; the day after,
everything.
Computing pioneer Vannevar Bush outlined the Web’s core
idea—hyperlinked pages—in 1945, but the first person to try to build out
the concept was a freethinker named Ted Nelson who envisioned his own
scheme in 1965. However, he had little success connecting digital bits
on a useful scale, and his efforts were known only to an isolated group
of disciples. Few of the hackers writing code for the emerging Web in
the 1990s knew about Nelson or his hyperlinked dream machine.
At the suggestion of a computer-savvy friend, I got in touch with
Nelson in 1984, a decade before Netscape. We met in a dark dockside bar
in Sausalito, California. He was renting a houseboat nearby and had the
air of someone with time on his hands. Folded notes erupted from his
pockets, and long strips of paper slipped from overstuffed notebooks.
Wearing a ballpoint pen on a string around his neck, he told me—way too
earnestly for a bar at 4 o’clock in the afternoon—about his scheme for
organizing all the knowledge of humanity. Salvation lay in cutting up 3 x
5 cards, of which he had plenty.
Although Nelson was polite, charming, and smooth, I was too slow for
his fast talk. But I got an aha! from his marvelous notion of
hypertext. He was certain that every
document in the world should be a footnote to some other document, and
computers could make the links between them visible and permanent. But
that was just the beginning! Scribbling on index cards, he sketched out
complicated notions of transferring authorship back to creators and
tracking payments as readers hopped along networks of documents, what he
called the docuverse. He spoke of "transclusion" and
"intertwingularity" as he described the grand utopian benefits of his
embedded structure. It was going to save the world from stupidity.
I believed him. Despite his quirks, it was clear to me that a
hyperlinked world was inevitable—someday. But looking back now, after 10
years of living online, what surprises me about the genesis of the Web
is how much was missing from Vannevar Bush’s vision, Nelson’s docuverse,
and my own expectations. We all missed the big story. The revolution
launched by Netscape’s IPO was only marginally about hypertext and human
knowledge. At its heart was a new kind of participation that has since
developed into an emerging culture based on sharing. And the ways of
participating unleashed by hyperlinks are creating a new type of
thinking—part human and part machine—found nowhere else on the planet or
in history.
Not only did we fail to imagine what the Web would become, we still
don’t see it today! We are blind to the miracle it has blossomed into.
And as a result of ignoring what the Web really is, we are likely to
miss what it will grow into over the next 10 years. Any hope of
discerning the state of the Web in 2015 requires that we own up to how
wrong we were 10 years ago.
1995
Before the Netscape browser illuminated the Web, the
Internet did not exist for most people. If it was acknowledged at all,
it was mischaracterized as either corporate email (as exciting as a
necktie) or a clubhouse for adolescent males (read: pimply nerds). It
was hard to use. On the Internet, even dogs had to type. Who wanted to
waste time on something so boring?
The memories of an early enthusiast like myself can be unreliable, so
I recently spent a few weeks reading stacks of old magazines and
newspapers. Any promising new invention will have its naysayers, and the
bigger the promises, the louder the nays. It’s not hard to find smart
people saying stupid things about the Internet on the morning of its
birth. In late 1994,
Time magazine explained why the Internet
would never go mainstream: "It was not designed for doing commerce, and
it does not gracefully accommodate new arrivals."
Newsweek put
the doubts more bluntly in a February 1995 headline: "THE INTERNET?
BAH!" The article was written by astrophysicist and Net maven Cliff
Stoll, who captured the prevailing skepticism of virtual communities and
online shopping with one word: "baloney."
This dismissive attitude pervaded a meeting I had with the top
leaders of ABC in 1989. I was there to make a presentation to the corner
office crowd about this "Internet stuff." To their credit, they
realized something was happening. Still, nothing I could tell them would
convince them that the Internet was not marginal, not just typing, and,
most emphatically, not just teenage boys. Stephen Weiswasser, a senior
VP, delivered the ultimate putdown: "The Internet will be the CB radio
of the ’90s," he told me, a charge he later repeated to the press.
Weiswasser summed up ABC’s argument for ignoring the new medium: "You
aren’t going to turn passive consumers into active trollers on the
Internet."
I was shown the door. But I offered one tip before I left. "Look," I said. "I happen to know that the address
abc.com has not been registered. Go down to your basement, find your most technical computer guy, and have him register
abc.com
immediately. Don’t even think about it. It will be a good thing to do."
They thanked me vacantly. I checked a week later. The domain was still
unregistered.
While it is easy to smile at the dodos in TV land, they were not the
only ones who had trouble imagining an alternative to couch potatoes.
Wired did, too. When I examine issues of
Wired
from before the Netscape IPO (issues that I proudly edited), I am
surprised to see them touting a future of high production-value
content—5,000 always-on channels and virtual reality, with a side order
of email sprinkled with bits of the Library of Congress. In fact,
Wired
offered a vision nearly identical to that of Internet wannabes in the
broadcast, publishing, software, and movie industries: basically, TV
that worked. The question was who would program the box.
Wired looked forward to a constellation of new media upstarts like Nintendo and Yahoo!, not old-media dinosaurs like ABC.
Problem was, content was expensive to produce, and 5,000 channels of
it would be 5,000 times as costly. No company was rich enough, no
industry large enough, to carry off such an enterprise. The great
telecom companies, which were supposed to wire up the digital
revolution, were paralyzed by the uncertainties of funding the Net. In
June 1994, David Quinn of British Telecom admitted to a conference of
software publishers, "I’m not sure how you’d make money out of it."
The immense sums of money supposedly required to fill the Net with
content sent many technocritics into a tizzy. They were deeply concerned
that cyberspace would become cyburbia—privately owned and operated.
Writing in
Electronic Engineering Times in 1995, Jeff Johnson
worried: "Ideally, individuals and small businesses would use the
information highway to communicate, but it is more likely that the
information highway will be controlled by Fortune 500 companies in 10
years." The impact would be more than commercial. "Speech in cyberspace
will not be free if we allow big business to control every square inch
of the Net," wrote Andrew Shapiro in
The Nation in July 1995.
The fear of commercialization was strongest among hardcore
programmers: the coders, Unix weenies, TCP/IP fans, and selfless
volunteer IT folk who kept the ad hoc network running. The major
administrators thought of their work as noble, a gift to humanity. They
saw the Internet as an open commons, not to be undone by greed or
commercialization. It’s hard to believe now, but until 1991, commercial
enterprise on the Internet was strictly prohibited. Even then, the rules
favored public institutions and forbade "extensive use for private or
personal business."
In the mid-1980s, when I was involved in the WELL, an early nonprofit
online system, we struggled to connect it to the emerging Internet but
were thwarted, in part, by the "acceptable use" policy of the National
Science Foundation (which ran the Internet backbone). In the eyes of the
NSF, the Internet was funded for research, not commerce. At first this
restriction wasn’t a problem for online services, because most
providers, the WELL included, were isolated from one another. Paying
customers could send email within the system—but not outside it. In
1987, the WELL fudged a way to forward outside email through the Net
without confronting the acceptable use policy, which our organization’s
own techies were reluctant to break. The NSF rule reflected a lingering
sentiment that the Internet would be devalued, if not trashed, by
opening it up to commercial interests. Spam was already a problem (one
every week!).
This attitude prevailed even in the offices of
Wired. In 1994, during the first design meetings for
Wired‘s
embryonic Web site, HotWired, programmers were upset that the
innovation we were cooking up—what are now called clickthrough ad
banners—subverted the great social potential of this new territory. The
Web was hardly out of diapers, and already they were being asked to
blight it with billboards and commercials. Only in May 1995, after the
NSF finally opened the floodgates to ecommerce, did the geek elite begin
to relax.
Three months later, Netscape’s public offering took off, and in a
blink a world of DIY possibilities was born. Suddenly it became clear
that ordinary people could create material anyone with a connection
could view. The burgeoning online audience no longer needed ABC for
content. Netscape’s stock peaked at $75 on its first day of trading, and
the world gasped in awe. Was this insanity, or the start of something
new?
2005
The scope of the Web today is hard to fathom. The
total number of Web pages, including those that are dynamically created
upon request and document files available through links, exceeds 600
billion. That’s 100 pages per person alive.
How could we create so much, so fast, so well? In fewer than 4,000
days, we have encoded half a trillion versions of our collective story
and put them in front of 1 billion people, or one-sixth of the world’s
population. That remarkable achievement was not in anyone’s 10-year
plan.
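The arithmetic behind those figures is easy to check. Here is a minimal
back-of-envelope sketch in Python, using the article's own round numbers
(the ~6 billion world population for 2005 is an assumption, not a figure
from the text):

    # Back-of-envelope check of the article's 2005 figures.
    # The ~6 billion world population is an assumed 2005 estimate.
    web_pages = 600e9          # total pages, including dynamic ones
    world_population = 6e9     # assumption: ~2005 world population
    online_population = 1e9    # "1 billion people"

    print(web_pages / world_population)          # ~100 pages per person
    print(online_population / world_population)  # ~0.17, one-sixth of world
    print(10 * 365)                              # 3,650 days: "fewer than 4,000"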
The accretion of tiny marvels can numb us to the arrival of the
stupendous. Today, at any Net terminal, you can get: an amazing variety
of music and video, an evolving encyclopedia, weather forecasts, help
wanted ads, satellite images of anyplace on Earth, up-to-the-minute news
from around the globe, tax forms, TV guides, road maps with driving
directions, real-time stock quotes, telephone numbers, real estate
listings with virtual walk-throughs, pictures of just about anything,
sports scores, places to buy almost anything, records of political
contributions, library catalogs, appliance manuals, live traffic
reports, archives of major newspapers—all wrapped up in an interactive
index that really works.
This view is spookily godlike. You can switch your gaze on a spot in
the world from map to satellite to 3-D just by clicking. Recall the
past? It’s there. Or listen to the daily complaints and travails of
almost anyone who blogs (and doesn’t everyone?). I doubt angels have a
better view of humanity.
Why aren’t we more amazed by this fullness? Kings of old would have
gone to war to win such abilities. Only small children would have
dreamed such a magic window could be real. I have reviewed the
expectations of waking adults and wise experts, and I can affirm that
this comprehensive wealth of material, available on demand and free of
charge, was not in anyone’s scenario. Ten years ago, anyone silly enough
to trumpet the above list as a vision of the near future would have
been confronted by the evidence: There wasn’t enough money in all the
investment firms in the entire world to fund such a cornucopia. The
success of the Web at this scale was impossible.
But if we have learned anything in the past decade, it is the plausibility of the impossible.
Take eBay. In some 4,000 days, eBay has gone from marginal Bay Area
experiment in community markets to the most profitable spinoff of
hypertext. At any one moment, 50 million auctions race through the site.
An estimated half a million folks make their living selling through
Internet auctions. Ten years ago I heard skeptics swear nobody would
ever buy a car on the Web. Last year eBay Motors sold $11 billion worth
of vehicles. EBay’s 2001 auction of a $4.9 million private jet would
have shocked anyone in 1995—and still smells implausible today.
Nowhere in Ted Nelson’s convoluted sketches of hypertext transclusion
did the fantasy of a global flea market appear. Especially as the
ultimate business model! He hoped to franchise his Xanadu hypertext
systems in the physical world at the scale of a copy shop or café—you
would go to a store to do your hypertexting. Xanadu would take a cut of
the action.
Instead, we have an open global flea market that handles 1.4 billion
auctions every year and operates from your bedroom. Users do most of the
work; they photograph, catalog, post, and manage their own auctions.
And they police themselves; while eBay and other auction sites do call
in the authorities to arrest serial abusers, the chief method of
ensuring fairness is a system of user-generated ratings. Three billion
feedback comments can work wonders.
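Those two figures, 50 million concurrent auctions and 1.4 billion per
year, are mutually consistent. Applying Little's law (concurrency equals
arrival rate times duration) is my own check, not a calculation from the
article, but it implies a plausible average auction length of about two
weeks:

    # Little's law: L = lambda * W, so W = L / lambda.
    # This check is an editorial addition using the article's figures.
    concurrent_auctions = 50e6    # "at any one moment, 50 million auctions"
    auctions_per_year = 1.4e9     # "1.4 billion auctions every year"

    avg_duration_years = concurrent_auctions / auctions_per_year
    print(avg_duration_years * 365)   # ~13 days per auction, on average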
What we all failed to see was how much of this new world would be
manufactured by users, not corporate interests. Amazon.com customers
rushed with surprising speed and intelligence to write the reviews that
made the site’s long-tail selection usable. Owners of Adobe, Apple, and
most major software products offer help and advice on the developer’s
forum Web pages, serving as high-quality customer support for new
buyers. And in the greatest leverage of the common user, Google turns
traffic and link patterns generated by 2 billion searches a month into
the organizing intelligence for a new economy. This bottom-up takeover
was not in anyone’s 10-year vision.
No Web phenomenon is more confounding than blogging. Everything media
experts knew about audiences—and they knew a lot—confirmed the focus
group belief that audiences would never get off their butts and start
making their own entertainment. Everyone knew writing and reading were
dead; music was too much trouble to make when you could sit back and
listen; video production was simply out of reach of amateurs. Blogs and
other participant media would never happen, or if they happened they
would not draw an audience, or if they drew an audience they would not
matter. What a shock, then, to witness the near-instantaneous rise of 50
million blogs, with a new one appearing every two seconds.
There—another new blog! One more person doing what AOL and ABC—and
almost everyone else—expected only AOL and ABC to be doing. These
user-created channels make no sense economically. Where are the time,
energy, and resources coming from?
The audience.
I run a blog about cool tools. I write it for my own delight and for
the benefit of friends. The Web extends my passion to a far wider group
for no extra cost or effort. In this way, my site is part of a vast and
growing gift economy, a visible underground of valuable creations—text,
music, film, software, tools, and services—all given away for free. This
gift economy fuels an abundance of choices. It spurs the grateful to
reciprocate. It permits easy modification and reuse, and thus promotes
consumers into producers.
The open source software movement is another example. Key ingredients
of collaborative programming—swapping code, updating instantly,
recruiting globally—didn’t work on a large scale until the Web was
woven. Then software became something you could join, either as a beta
tester or as a coder on an open source project. The clever "view source"
browser option let the average Web surfer in on the act. And anyone
could rustle up a link—which, it turns out, is the most powerful
invention of the decade.
Linking unleashes involvement and interactivity at levels once
thought unfashionable or impossible. It transforms reading into
navigating and enlarges small actions into powerful forces. For
instance, hyperlinks made it much easier to create a seamless, scrolling
street map of every town. They made it easier for people to refer to
those maps. And hyperlinks made it possible for almost anyone to
annotate, amend, and improve any map embedded in the Web. Cartography
has gone from spectator art to participatory democracy.
The electricity of participation nudges ordinary folks to invest huge
hunks of energy and time into making free encyclopedias, creating
public tutorials for changing a flat tire, or cataloging the votes in
the Senate. More and more of the Web runs in this mode. One study found
that only 40 percent of the Web is commercial. The rest runs on duty or
passion.
Coming out of the industrial age, when mass-produced goods outclassed
anything you could make yourself, this sudden tilt toward consumer
involvement is a complete Lazarus move: "We thought that died long ago."
The deep enthusiasm for making things, for interacting more deeply than
just choosing options, is the great force not reckoned 10 years ago.
This impulse for participation has upended the economy and is steadily
turning the sphere of social networking—smart mobs, hive minds, and
collaborative action—into the main event.
When a company opens its databases to users, as Amazon, Google, and
eBay have done with their Web services, it is encouraging participation
at new levels. The corporation’s data becomes part of the commons and an
invitation to participate. People who take advantage of these
capabilities are no longer customers; they’re the company’s developers,
vendors, skunk works, and fan base.
A little over a decade ago, a phone survey by
Macworld asked a
few hundred people what they thought would be worth $10 per month on
the information superhighway. The participants started with uplifting
services: educational courses, reference books, electronic voting, and
library information. The bottom of the list ended with sports
statistics, role-playing games, gambling, and dating. Ten years later,
what folks actually use the Internet for is inverted. According to a
2004 Stanford study, people use the Internet for (in order): playing
games, "just surfing," and shopping; the list ends with responsible
activities like politics and banking. (Some even admitted to porn.)
Remember, shopping wasn’t supposed to happen. Where’s Cliff Stoll, the
guy who said the Internet was baloney and online catalogs humbug? He has
a little online store where he sells handcrafted Klein bottles.
The public’s fantasy, revealed in that 1994 survey, began reasonably
with the conventional notions of a downloadable world. These assumptions
were wired into the infrastructure. The bandwidth on cable and phone
lines was asymmetrical: Download rates far exceeded upload rates. The
dogma of the age held that ordinary people had no need to upload; they
were consumers, not producers. Fast-forward to today, and the poster
child of the new Internet regime is BitTorrent. The brilliance of
BitTorrent is in its exploitation of near-symmetrical communication
rates. Users upload stuff while they are downloading. It assumes
participation, not mere consumption. Our communication infrastructure
has taken only the first steps in this great shift from audience to
participants, but that is where it will go in the next decade.
With the steady advance of new ways to share, the Web has embedded
itself into every class, occupation, and region. Indeed, people’s
anxiety about the Internet being out of the mainstream seems quaint now.
In part because of the ease of creation and dissemination, online
culture is
the culture. Likewise, the worry about the Internet
being 100 percent male was entirely misplaced. Everyone missed the party
celebrating the 2002 flip-point when women online first outnumbered
men. Today, 52 percent of netizens are female. And, of course, the
Internet is not and has never been a teenage realm. In 2005, the average
user is a bone-creaking 41 years old.
What could be a better mark of irreversible acceptance than adoption
by the Amish? I was visiting some Amish farmers recently. They fit the
archetype perfectly: straw hats, scraggly beards, wives with bonnets, no
electricity, no phones or TVs, horse and buggy outside. They have an
undeserved reputation for resisting all technology, when actually they
are just very late adopters. Still, I was amazed to hear them mention
their Web sites.
"Amish Web sites?" I asked.
"For advertising our family business. We weld barbecue grills in our shop."
"Yes, but—"
"Oh, we use the Internet terminal at the public library. And Yahoo!"
I knew then the battle was over.
2015
The Web continues to evolve from a world ruled by mass
media and mass audiences to one ruled by messy media and messy
participation. How far can this frenzy of creativity go? Encouraged by
Web-enabled sales, 175,000 books were published and more than 30,000
music albums were released in the US last year. At the same time, 14
million blogs launched worldwide. All these numbers are escalating. A
simple extrapolation suggests that in the near future, everyone alive
will (on average) write a song, author a book, make a video, craft a
weblog, and code a program. This idea is less outrageous than the notion
150 years ago that someday everyone would write a letter or take a
photograph.
What happens when the data flow is asymmetrical—but in favor of
creators? What happens when everyone is uploading far more than they
download? If everyone is busy making, altering, mixing, and mashing, who
will have time to sit back and veg out? Who will be a consumer?
No one. And that’s just fine. A world where production outpaces
consumption should not be sustainable; that’s a lesson from Economics
101. But online, where many ideas that don’t work in theory succeed in
practice, the audience increasingly doesn’t matter. What matters is the
network of social creation, the community of collaborative interaction
that futurist Alvin Toffler called prosumption. As with blogging and
BitTorrent, prosumers produce and consume at once. The producers are the
audience, the act of making is the act of watching, and every link is
both a point of departure and a destination.
But if a roiling mess of participation is
all we think the Web
will become, we are likely to miss the big news, again. The experts are
certainly missing it. The Pew Internet & American Life Project
surveyed more than 1,200 professionals in 2004, asking them to predict
the Net’s next decade. One scenario earned agreement from two-thirds of
the respondents: "As computing devices become embedded in everything
from clothes to appliances to cars to phones, these networked devices
will allow greater surveillance by governments and businesses." Another
was affirmed by one-third: "By 2014, use of the Internet will increase
the size of people’s social networks far beyond what has traditionally
been the case."
These are safe bets, but they fail to capture the Web’s disruptive
trajectory. The real transformation under way is more akin to what Sun’s
John Gage had in mind in 1988 when he famously said, "The network
is
the computer." He was talking about the company’s vision of the
thin-client desktop, but his phrase neatly sums up the destiny of the
Web: As the OS for a megacomputer that encompasses the Internet, all its
services, all peripheral chips and affiliated devices from scanners to
satellites, and the billions of human minds entangled in this global
network. This gargantuan Machine already exists in a primitive form. In
the coming decade, it will evolve into an integral extension not only of
our senses and bodies but our minds.
Today, the Machine acts like a very large computer with top-level
functions that operate at approximately the clock speed of an early PC.
It processes 1 million emails each second, which essentially means
network email runs at 1 megahertz. Same with Web searches. Instant
messaging runs at 100 kilohertz, SMS at 1 kilohertz. The Machine’s total
external RAM is about 200 terabytes. In any one second, 10 terabits can
be coursing through its backbone, and each year it generates nearly 20
exabytes of data. Its distributed "chip" spans 1 billion active PCs,
which is approximately the number of transistors in one PC.
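The "clock speed" analogy is just a change of units: treat each message
handled as one tick, and a rate in events per second reads directly as
hertz. A small sketch using the article's estimated rates:

    # One message handled = one tick, so events/sec reads as hertz.
    # All rates are the article's 2005 estimates.
    rates_per_second = {
        "email":             1_000_000,  # ~1 MHz
        "Web search":        1_000_000,  # "Same with Web searches"
        "instant messaging":   100_000,  # ~100 kHz
        "SMS":                   1_000,  # ~1 kHz
    }
    for medium, rate in rates_per_second.items():
        print(f"{medium}: {rate / 1e6:g} MHz")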
This planet-sized computer is comparable in complexity to a human
brain. Both the brain and the Web have hundreds of billions of neurons
(or Web pages). Each biological neuron sprouts synaptic links to
thousands of other neurons, while each Web page branches into dozens of
hyperlinks. That adds up to a trillion "synapses" between the static
pages on the Web. The human brain has about 100 times that number—but
brains are not doubling in size every few years. The Machine is.
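The comparison rests on two multiplications. In the sketch below, the
specific values standing in for "hundreds of billions," "dozens," and
"thousands" are assumptions chosen to match the totals the text states:

    # Web "synapses": pages x links per page.
    # Brain synapses: neurons x synaptic links per neuron.
    # Specific values are assumed to match the article's stated totals.
    web_pages = 100e9             # "hundreds of billions" of static pages
    links_per_page = 10           # "dozens" of hyperlinks, low end

    neurons = 100e9               # ~100 billion neurons in a human brain
    synapses_per_neuron = 1_000   # "thousands" of links per neuron

    web_links = web_pages * links_per_page          # ~1e12: "a trillion"
    brain_synapses = neurons * synapses_per_neuron  # ~1e14

    print(brain_synapses / web_links)               # ~100x, as stated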
Since each of its "transistors" is itself a personal computer with a
billion transistors running lower functions, the Machine is fractal. In
total, it harnesses a quintillion transistors, expanding its complexity
beyond that of a biological brain. It has already surpassed the
20-petahertz threshold for potential intelligence as calculated by Ray
Kurzweil. For this reason some researchers pursuing artificial
intelligence have switched their bets to the Net as the computer most
likely to think first. Danny Hillis, a computer scientist who once
claimed he wanted to make an AI "that would be proud of me," has
invented massively parallel supercomputers in part to advance us in that
direction. He now believes the first real AI will emerge not in a
stand-alone supercomputer like IBM’s proposed 23-teraflop Blue Brain,
but in the vast digital tangle of the global Machine.
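The fractal count follows directly from the article's figures: a billion
networked PCs, each itself containing roughly a billion transistors.

    # A billion PCs times a billion transistors each (article's figures):
    print(f"{1e9 * 1e9:.0e} transistors")   # 1e+18: a quintillion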
In 10 years, the system will contain hundreds of millions of miles of
fiber-optic neurons linking the billions of ant-smart chips embedded
into manufactured products, buried in environmental sensors, staring out
from satellite cameras, guiding cars, and saturating our world with
enough complexity to begin to learn. We will live inside this thing.
Today the nascent Machine routes packets around disturbances in its
lines; by 2015 it will anticipate disturbances and avoid them. It will
have a robust immune system, weeding spam from its trunk lines,
eliminating viruses and denial-of-service attacks the moment they are
launched, and dissuading malefactors from injuring it again. The
patterns of the Machine’s internal workings will be so complex they
won’t be repeatable; you won’t always get the same answer to a given
question. It will take intuition to maximize what the global network has
to offer. The most obvious development birthed by this platform will be
the absorption of routine. The Machine will take on anything we do more
than twice. It will be the Anticipation Machine.
One great advantage the Machine holds in this regard: It’s always on.
It is very hard to learn if you keep getting turned off, which is the
fate of most computers. AI researchers rejoice when an adaptive learning
program runs for days without crashing. The fetal Machine has been
running continuously for at least 10 years (30 if you want to be picky).
I am aware of no other machine—of any type—that has run that long with
zero downtime. While portions may spin down due to power outages or
cascading infections, the entire thing is unlikely to go quiet in the
coming decade. It will be the most reliable gadget we have.
And the most universal. By 2015, desktop operating systems will be
largely irrelevant. The Web will be the only OS worth coding for. It
won’t matter what device you use, as long as it runs on the Web OS. You
will reach the same distributed computer whether you log on via phone,
PDA, laptop, or HDTV.
In the 1990s, the big players called that convergence. They peddled
the image of multiple kinds of signals entering our lives through one
box—a box they hoped to control. By 2015 this image will be turned
inside out. In reality, each device is a differently shaped window that
peers into the global computer. Nothing converges. The Machine is an
unbounded thing that will take a billion windows to glimpse even part
of. It is what you’ll see on the other side of any screen.
And who will write the software that makes this contraption useful
and productive? We will. In fact, we’re already doing it, each of us,
every day. When we post and then tag pictures on the community photo
album Flickr, we are teaching the Machine to give names to images. The
thickening links between caption and picture form a neural net that can
learn. Think of the 100 billion times
per day humans click on a
Web page as a way of teaching the Machine what we think is important.
Each time we forge a link between words, we teach it an idea. Wikipedia
encourages its citizen authors to link each fact in an article to a
reference citation. Over time, a Wikipedia article becomes totally
underlined in blue as ideas are cross-referenced. That massive
cross-referencing is how brains think and remember. It is how neural
nets answer questions. It is how our global skin of neurons will adapt
autonomously and acquire a higher level of knowledge.
The human brain has no department full of programming cells that
configure the mind. Rather, brain cells program themselves simply by
being used. Likewise, our questions program the Machine to answer
questions. We think we are merely wasting time when we surf mindlessly
or blog an item, but each time we click a link we strengthen a node
somewhere in the Web OS, thereby programming the Machine by using it.
What will most surprise us is how dependent we will be on what the
Machine knows—about us and about what we want to know. We already find
it easier to Google something a second or third time rather than
remember it ourselves. The more we teach this megacomputer, the more it
will assume responsibility for our knowing. It will become our memory.
Then it will become our identity. In 2015 many people, when divorced
from the Machine, won’t feel like themselves—as if they’d had a
lobotomy.
Legend has it that Ted Nelson invented Xanadu as a remedy for his
poor memory and attention deficit disorder. In this light, the Web as
memory bank should be no surprise. Still, the birth of a machine that
subsumes all other machines so that in effect there is only one Machine,
which penetrates our lives to such a degree that it becomes essential
to our identity—this will be full of surprises. Especially since it is
only the beginning.
There is only one time in the history of each planet when its
inhabitants first wire up its innumerable parts to make one large
Machine. Later that Machine may run faster, but there is only one time
when it is born.
You and I are alive at this moment.
We should marvel, but people alive at such times usually don’t. Every
few centuries, the steady march of change meets a discontinuity, and
history hinges on that moment. We look back on those pivotal eras and
wonder what it would have been like to be alive then. Confucius,
Zoroaster, Buddha, and the latter Jewish patriarchs lived in the same
historical era, an inflection point known as the axial age of religion.
Few world religions were born after this time. Similarly, the great
personalities converging upon the American Revolution and the geniuses
who commingled during the invention of modern science in the 17th
century mark additional axial phases in the short history of our
civilization.
Three thousand years from now, when keen minds review the past, I
believe that our ancient time, here at the cusp of the third millennium,
will be seen as another such era. In the years roughly coincident
with the Netscape IPO, humans began animating inert objects with tiny
slivers of intelligence, connecting them into a global field, and
linking their own minds into a single thing. This will be recognized as
the largest, most complex, and most surprising event on the planet.
Weaving nerves out of glass and radio waves, our species began wiring up
all regions, all processes, all facts and notions into a grand network.
From this embryonic neural net was born a collaborative interface for
our civilization, a sensing, cognitive device with power that exceeded
any previous invention. The Machine provided a new way of thinking
(perfect search, total recall) and a new mind for an old species. It was
the Beginning.
In retrospect, the Netscape IPO was a puny rocket to herald such a
moment. The product and the company quickly withered into irrelevance,
and the excessive exuberance of its IPO was downright tame compared with
the dotcoms that followed. First moments are often like that. After the
hysteria has died down, after the millions of dollars have been gained
and lost, after the strands of mind, once achingly isolated, have
started to come together—the only thing we can say is: Our Machine is
born. It’s on.
© 2005 Kevin Kelly. Reprinted with permission.