When Ray Kurzweil met with Google CEO
Larry Page last July, he wasn’t looking for a job. A respected inventor
who’s become a machine-intelligence futurist, Kurzweil wanted to discuss
his upcoming book How to Create a Mind. He told Page, who had
read an early draft, that he wanted to start a company to develop his
ideas about how to build a truly intelligent computer: one that could
understand language and then make inferences and decisions on its own.
It quickly became obvious that such an effort would require nothing
less than Google-scale data and computing power. “I could try to give
you some access to it,” Page told Kurzweil. “But it’s going to be very
difficult to do that for an independent company.” So Page suggested that
Kurzweil, who had never held a job anywhere but his own companies, join
Google instead. It didn’t take Kurzweil long to make up his mind: in
January he started working for Google as a director of engineering.
“This is the culmination of literally 50 years of my focus on artificial
intelligence,” he says.
Kurzweil was attracted not just by Google’s computing resources but
also by the startling progress the company has made in a branch of AI
called deep learning. Deep-learning software attempts to mimic the
activity in layers of neurons in the neocortex, the wrinkly 80 percent
of the brain where thinking occurs. The software learns, in a very real
sense, to recognize patterns in digital representations of sounds,
images, and other data.
The basic idea—that software can simulate the neocortex’s large
array of neurons in an artificial “neural network”—is decades old, and
it has led to as many disappointments as breakthroughs. But because of
improvements in mathematical formulas and increasingly powerful
computers, computer scientists can now model many more layers of virtual
neurons than ever before.
With this greater depth, they are producing remarkable advances in
speech and image recognition. Last June, a Google deep-learning system
that had been shown 10 million images from YouTube videos proved almost
twice as good as any previous image recognition effort at identifying
objects such as cats. Google also used the technology to cut the error
rate on speech recognition in its latest Android mobile software. In
October, Microsoft chief research officer Rick Rashid wowed attendees at
a lecture in China with a demonstration of speech software that
transcribed his spoken words into English text with an error rate of 7
percent, translated them into Chinese-language text, and then simulated
his own voice uttering them in Mandarin. That same month, a team of
three graduate students and two professors won a contest held by Merck
to identify molecules that could lead to new drugs. The group used deep
learning to zero in on the molecules most likely to bind to their
targets.
Google in particular has become a magnet for deep learning and
related AI talent. In March the company bought a startup cofounded by
Geoffrey Hinton, a University of Toronto computer science professor who
was part of the team that won the Merck contest. Hinton, who will split
his time between the university and Google, says he plans to "take ideas
out of this field and apply them to real problems" such as image
recognition, search, and natural-language understanding.
All this has normally cautious AI researchers hopeful that
intelligent machines may finally escape the pages of science fiction.
Indeed, machine intelligence is starting to transform everything from
communications and computing to medicine, manufacturing, and
transportation. The possibilities are apparent in IBM’s Jeopardy!-winning
Watson computer, which uses some deep-learning techniques and is now
being trained to help doctors make better decisions. Microsoft has
deployed deep learning in its Windows Phone and Bing voice search.
Extending deep learning into applications beyond speech and image
recognition will require more conceptual and software breakthroughs, not
to mention many more advances in processing power. And we probably
won’t see machines we all agree can think for themselves for years,
perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft
Research USA, “deep learning has reignited some of the grand challenges
in artificial intelligence.”
Building a Brain
There have been many competing approaches to those challenges. One
has been to feed computers with information and rules about the world,
which required programmers to laboriously write software that is
familiar with the attributes of, say, an edge or a sound. That took lots
of time and still left the systems unable to deal with ambiguous data;
they were limited to narrow, controlled applications such as phone menu
systems that ask you to make queries by saying specific words.
Neural networks, developed in the 1950s not long after the dawn of
AI research, looked promising because they attempted to simulate the way
the brain worked, though in greatly simplified form. A program maps out
a set of virtual neurons and then assigns random numerical values, or
“weights,” to connections between them. These weights determine how each
simulated neuron responds—with a mathematical output between 0 and 1—to
a digitized feature such as an edge or a shade of blue in an image, or a
particular energy level at one frequency in a phoneme, the individual
unit of sound in spoken syllables.
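The mapping just described can be sketched in a few lines of code. The sigmoid squashing function and the specific input values here are illustrative assumptions, not details of any production system:

```python
import math
import random

def simulated_neuron(inputs, weights, bias):
    """A virtual neuron: weighted sum of inputs squashed to a value in (0, 1)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid output between 0 and 1

random.seed(0)
feature = [0.2, 0.9, 0.4]  # e.g. digitized pixel intensities along an edge
# Connections start with random numerical values, or "weights"
weights = [random.uniform(-1, 1) for _ in feature]
response = simulated_neuron(feature, weights, bias=0.0)
print(response)  # some value strictly between 0 and 1
```

Whatever the weights, the sigmoid guarantees the neuron's response stays between 0 and 1, matching the article's description of a bounded mathematical output.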
Programmers would train a neural network to detect an object or
phoneme by blitzing the network with digitized versions of images
containing those objects or sound waves containing those phonemes. If
the network didn’t accurately recognize a particular pattern, an
algorithm would adjust the weights. The eventual goal of this training
was to get the network to consistently recognize the patterns in speech
or sets of images that we humans know as, say, the phoneme “d” or the
image of a dog. This is much the same way a child learns what a dog is
by noticing the details of head shape, behavior, and the like in furry,
barking animals that other people call dogs.
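The adjust-the-weights loop above can be illustrated with a single sigmoid neuron and a toy two-class "dataset"; real systems use backpropagation through many layers, so treat the update rule and the data here as stand-in assumptions:

```python
import math

def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Toy patterns: label 1.0 when the third feature dominates, else 0.0
training_data = [([0.1, 0.2, 0.9], 1.0), ([0.8, 0.7, 0.1], 0.0),
                 ([0.2, 0.1, 0.8], 1.0), ([0.9, 0.6, 0.2], 0.0)]

weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 1.0
for _ in range(200):                      # blitz the network with examples
    for inputs, target in training_data:
        error = target - neuron(inputs, weights, bias)
        # If the output was wrong, nudge each weight toward the target
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print(neuron([0.1, 0.2, 0.9], weights, bias) > 0.5)  # True: one class
print(neuron([0.8, 0.7, 0.1], weights, bias) < 0.5)  # True: the other class
```

After repeated presentations, the weights settle so the neuron consistently separates the two patterns, which is the "eventual goal" the passage describes.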
But early neural networks could simulate only a very limited number
of neurons at once, so they could not recognize patterns of great
complexity. They languished through the 1970s.
In the mid-1980s, Hinton and others helped spark a revival of
interest in neural networks with so-called “deep” models that made
better use of many layers of software neurons. But the technique still
required heavy human involvement: programmers had to label data before
feeding it to the network. And complex speech or image recognition
required more computer power than was then available.
Finally, however, in the last decade Hinton and other researchers
made some fundamental conceptual breakthroughs. In 2006, Hinton
developed a more efficient way to teach individual layers of neurons.
The first layer learns primitive features, like an edge in an image or
the tiniest unit of speech sound. It does this by finding combinations
of digitized pixels or sound waves that occur more often than they
should by chance. Once that layer accurately recognizes those features,
they’re fed to the next layer, which trains itself to recognize more
complex features, like a corner or a combination of speech sounds. The
process is repeated in successive layers until the system can reliably
recognize phonemes or objects.
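That stacking can be sketched as follows: each layer is a bank of sigmoid neurons whose outputs become the next layer's inputs. The layer sizes and random weights are illustrative; in a real system each layer would be trained on data before feeding the next:

```python
import math
import random

def layer(inputs, weight_rows, biases):
    """One layer of sigmoid neurons; every neuron sees every input."""
    return [1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
            for row, b in zip(weight_rows, biases)]

def make_layer(n_inputs, n_neurons):
    return ([[random.uniform(-1, 1) for _ in range(n_inputs)]
             for _ in range(n_neurons)], [0.0] * n_neurons)

random.seed(1)
pixels = [random.random() for _ in range(8)]  # raw digitized input
w1, b1 = make_layer(8, 4)  # first layer: primitive features (edges, tiny sounds)
w2, b2 = make_layer(4, 2)  # next layer: combinations of those features

primitive = layer(pixels, w1, b1)            # output of the first layer...
complex_features = layer(primitive, w2, b2)  # ...is the input to the next
print(len(complex_features))  # 2 higher-level feature activations
```

Repeating the pattern with more `make_layer` calls produces arbitrarily deep stacks, which is exactly what "many layers of virtual neurons" means in practice.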
Like cats. Last June, Google demonstrated one of the largest neural
networks yet, with more than a billion connections. A team led by
Stanford computer science professor Andrew Ng and Google Fellow Jeff
Dean showed the system images from 10 million randomly selected YouTube
videos. One simulated neuron in the software model fixated on images of
cats. Others focused on human faces, yellow flowers, and other objects.
And thanks to the power of deep learning, the system identified these
discrete objects even though no humans had ever defined or labeled them.
What stunned some AI experts, though, was the magnitude of
improvement in image recognition. The system correctly categorized
objects and themes in the YouTube images 16 percent of the time. That
might not sound impressive, but it was 70 percent better than previous
methods. And, Dean notes, there were 22,000 categories to choose from;
correctly slotting objects into some of them required, for example,
distinguishing between two similar varieties of skate fish. That would
have been challenging even for most humans. When the system was asked to
sort the images into 1,000 more general categories, the accuracy rate
jumped above 50 percent.
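As a back-of-the-envelope check (assuming "70 percent better" means relative improvement), the two figures imply the previous methods scored roughly 9 percent on the 22,000-category task:

```python
new_accuracy = 0.16   # Google's deep-learning system
improvement = 0.70    # "70 percent better than previous methods"
prior_accuracy = new_accuracy / (1 + improvement)
print(round(prior_accuracy, 3))  # 0.094, i.e. roughly 9 percent
```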
Big Data
Training the many layers of virtual neurons in the experiment took
16,000 computer processors—the kind of computing infrastructure that
Google has developed for its search engine and other services. At least
80 percent of the recent advances in AI can be attributed to the
availability of more computer power, reckons Dileep George, cofounder of
the machine-learning startup Vicarious.
There’s more to it than the sheer size of Google’s data centers,
though. Deep learning has also benefited from the company’s method of
splitting computing tasks among many machines so they can be done much
more quickly. That’s a technology Dean helped develop earlier in his
14-year career at Google. It vastly speeds up the training of
deep-learning neural networks as well, enabling Google to run larger
networks and feed a lot more data to them.
Already, deep learning has improved voice search on smartphones.
Until last year, Google’s Android software used a method that
misunderstood many words. But in preparation for a new release of
Android last July, Dean and his team helped replace part of the speech
system with one based on deep learning. Because the multiple layers of
neurons allow for more precise training on the many variants of a sound,
the system can recognize scraps of sound more reliably, especially in
noisy environments such as subway platforms. Since it’s likelier to
understand what was actually uttered, the result it returns is likelier
to be accurate as well. Almost overnight, the number of errors fell by
up to 25 percent—results so good that many reviewers now deem Android’s
voice search smarter than Apple’s more famous Siri voice assistant.
For all the advances, not everyone thinks deep learning can move
artificial intelligence toward something rivaling human intelligence.
Some critics say deep learning and AI in general ignore too much of the
brain’s biology in favor of brute-force computing.
One such critic is Jeff Hawkins, founder of Palm Computing, whose
latest venture, Numenta, is developing a machine-learning system that is
biologically inspired but does not use deep learning. Numenta’s system
can help predict energy consumption patterns and the likelihood that a
machine such as a windmill is about to fail. Hawkins, author of On Intelligence,
a 2004 book on how the brain works and how it might provide a guide to
building intelligent machines, says deep learning fails to account for
the concept of time. Brains process streams of sensory data, he says,
and human learning depends on our ability to recall sequences of
patterns: when you watch a video of a cat doing something funny, it’s
the motion that matters, not a series of still images like those Google
used in its experiment. “Google’s attitude is: lots of data makes up for
everything,” Hawkins says.
But if it doesn’t make up for everything, the computing resources a
company like Google throws at these problems can’t be dismissed.
They’re crucial, say deep-learning advocates, because the brain itself
is still so much more complex than any of today’s neural networks. “You
need lots of computational resources to make the ideas work at all,”
says Hinton.
What’s Next
Although Google is less than forthcoming about future applications,
the prospects are intriguing. Clearly, better image search would help
YouTube, for instance. And Dean says deep-learning models can use
phoneme data from English to more quickly train systems to recognize the
spoken sounds in other languages. It’s also likely that more
sophisticated image recognition could make Google’s self-driving cars
much better. Then there’s search and the ads that underwrite it. Both
could see vast improvements from any technology that’s better and faster
at recognizing what people are really looking for—maybe even before
they realize it.
This is what intrigues Kurzweil, 65, who has long had a vision of
intelligent machines. In high school, he wrote software that enabled a
computer to create original music in various classical styles, which he
demonstrated in a 1965 appearance on the TV show I’ve Got a Secret.
Since then, his inventions have included several firsts—a
print-to-speech reading machine, software that could scan and digitize
printed text in any font, music synthesizers that could re-create the
sound of orchestral instruments, and a speech recognition system with a
large vocabulary.
Today, he envisions a “cybernetic friend” that listens in on your
phone conversations, reads your e-mail, and tracks your every move—if
you let it, of course—so it can tell you things you want to know even
before you ask. This isn’t his immediate goal at Google, but it matches
that of Google cofounder Sergey Brin, who said in the company’s early
days that he wanted to build the equivalent of the sentient computer HAL
in 2001: A Space Odyssey—except one that wouldn’t kill people.
For now, Kurzweil aims to help computers understand and even speak
in natural language. “My mandate is to give computers enough
understanding of natural language to do useful things—do a better job of
search, do a better job of answering questions,” he says. Essentially,
he hopes to create a more flexible version of IBM’s Watson, which he
admires for its ability to understand Jeopardy! queries as
quirky as “a long, tiresome speech delivered by a frothy pie topping.”
(Watson’s correct answer: “What is a meringue harangue?”)
Kurzweil isn’t focused solely on deep learning, though he says his
approach to speech recognition is based on similar theories about how
the brain works. He wants to model the actual meaning of words, phrases,
and sentences, including ambiguities that usually trip up computers. “I
have an idea in mind of a graphical way to represent the semantic
meaning of language,” he says.
That in turn will require a more comprehensive way to graph the
syntax of sentences. Google is already using this kind of analysis to
improve grammar in translations. Natural-language understanding will
also require computers to grasp what we humans think of as common-sense
meaning. For that, Kurzweil will tap into the Knowledge Graph, Google’s
catalogue of some 700 million topics, locations, people, and more, plus
billions of relationships among them. It was introduced last year as a
way to provide searchers with answers to their queries, not just links.
Finally, Kurzweil plans to apply deep-learning algorithms to help
computers deal with the “soft boundaries and ambiguities in language.”
If all that sounds daunting, it is. “Natural-language understanding is
not a goal that is finished at some point, any more than search,” he
says. “That’s not a project I think I’ll ever finish.”
Though Kurzweil’s vision is still years from reality, deep learning
is likely to spur other applications beyond speech and image
recognition in the nearer term. For one, there’s drug discovery. The
surprise victory by Hinton’s group in the Merck contest clearly showed
the utility of deep learning in a field where few had expected it to
make an impact.
That’s not all. Microsoft’s Peter Lee says there’s promising early
research on potential uses of deep learning in machine
vision—technologies that use imaging for applications such as industrial
inspection and robot guidance. He also envisions personal sensors that
deep neural networks could use to predict medical problems. And sensors
throughout a city might feed deep-learning systems that could, for
instance, predict where traffic jams might occur.
In a field that attempts something as profound as modeling the
human brain, it’s inevitable that one technique won’t solve all the
challenges. But for now, this one is leading the way in artificial
intelligence. “Deep learning,” says Dean, “is a really powerful metaphor
for learning about the world.”
A true-color image of the Americas. Much of the information in the image comes from a single remote-sensing device: NASA's Moderate Resolution Imaging Spectroradiometer, or MODIS, flying more than 700 km above the Earth on board the Terra satellite in 2001.
The ancestors of today's American Indigenous peoples were the Paleo-Indians; they were hunter-gatherers who migrated into North America. The most popular theory asserts that migrants came to the Americas via Beringia, the land mass now covered by the ocean waters of the Bering Strait. Small lithic stage peoples followed megafauna
like bison, mammoth (now extinct), and caribou, thus gaining the modern
nickname "big-game hunters." Groups of people may also have traveled
into North America on shelf or sheet ice along the northern Pacific
coast.
After the voyages of Christopher Columbus in 1492, Spanish, Portuguese and later English, French and Dutch
colonial expeditions arrived in the New World, conquering and settling
the discovered lands, which led to a transformation of the cultural and
physical landscape in the Americas. Spain colonized most of the American
continent from present-day Southwestern United States, Florida and the Caribbean to the southern tip of South America. Portugal settled in what is mostly present-day Brazil while England established colonies on the Eastern coast of the United States, as well as the North Pacific coast and in most of Canada. France settled in Quebec and other parts of Eastern Canada
and claimed an area in what is today the central United States. The
Netherlands settled New Netherland (administrative centre New Amsterdam -
now New York), some Caribbean islands and parts of Northern South
America.
European colonization of the Americas led to the rise of new
cultures, civilizations and eventually states, which resulted from the
fusion of Native American and European traditions, peoples and
institutions. The transformation of American cultures through
colonization is evident in architecture, religion, gastronomy, the arts
and particularly languages, the most widespread being Spanish (376 million speakers), English (348 million) and Portuguese (201 million). The colonial period lasted approximately three centuries, from the early 16th to the early 19th centuries, when Brazil and the larger Hispanic American nations declared independence. The United States
obtained independence from England much earlier, in 1776, while Canada
formed a federal dominion in 1867. Others remained attached to their
European parent state until the end of the 19th century, such as Cuba and Puerto Rico which were linked to Spain until 1898. Smaller territories such as Guyana obtained independence in the mid-20th century, while certain Caribbean islands and French Guiana remain part of a European power to this day.
Pre-colonization
Migration into the continents
The specifics of Paleo-Indian migration to and throughout the
Americas, including the exact dates and routes traveled, are subject to
ongoing research and discussion.[1] The traditional theory has been that these early migrants moved into the Beringia land bridge
between eastern Siberia and present-day Alaska around 40,000 – 17,000
years ago, when sea levels were significantly lowered due to the Quaternary glaciation.[1][2] These people are believed to have followed herds of now-extinct Pleistocene megafauna along ice-free corridors that stretched between the Laurentide and Cordilleran ice sheets.[3] Another route proposed is that, either on foot or using primitive boats, they migrated down the Pacific Northwest coast to South America.[4] Evidence of the latter would since have been covered by a sea level rise of a hundred meters following the last ice age.[5]
Archaeologists contend that the Paleo-Indian migration out of Beringia (eastern Alaska) took place between 40,000 and around 16,500 years ago.[6][7][8] This time range remains a source of heated debate. The few points of agreement reached to date are an origin in Central Asia and widespread habitation of the Americas by the end of the last glacial period, or more specifically what is known as the late glacial maximum, around 16,000 – 13,000 years before present.[8][9]
The American Journal of Human Genetics released an article in 2007 stating, "Here we show, by using 86 complete mitochondrial genomes, that all Indigenous American haplogroups, including haplogroup X, were part of a single founding population."[10] Amerindian groups in the Bering Strait region exhibit perhaps the strongest DNA or mitochondrial DNA relations to Siberian peoples. The genetic diversity of Amerindian indigenous groups increases with distance from the assumed entry point into the Americas.[11][12]
Certain genetic diversity patterns from West to East suggest,
particularly in South America, that migration proceeded first down the
west coast, and then proceeded eastward.[13]
Geneticists have variously estimated that peoples of Asia and the
Americas were part of the same population from 42,000 to 21,000 years
ago.[14]
New studies shed light on the founding population of indigenous
Americans, suggesting that their ancestry traces to both East Asians and
western Eurasians who migrated to North America directly from Siberia. A
2013 study in the journal Nature
reported that DNA found in the 24,000-year-old remains of a young boy
from Mal'ta, Siberia, suggests that up to one-third of indigenous
Americans may have ancestry that can be traced back to western
Eurasians, who may have "had a more north-easterly distribution 24,000
years ago than commonly thought".[15]
Professor Kelly Graf said that "Our findings are significant at two
levels. First, it shows that Upper Paleolithic Siberians came from a
cosmopolitan population of early modern humans that spread out of Africa
to Europe and Central and South Asia. Second, Paleoindian skeletons
with phenotypic traits atypical of modern-day Native Americans can be
explained as having a direct historical connection to Upper Paleolithic
Siberia." A route through Beringia is seen as more likely than the Solutrean hypothesis.[16]
Several thousand years after the first migrations, the first complex
civilizations arose as hunter-gatherers settled into semi-agricultural
communities. Identifiable sedentary settlements began to emerge in the
so-called Middle Archaic period around 6000 BCE. Particular archaeological cultures can be identified and easily classified throughout the Archaic period.
In the late Archaic, on the north-central coastal region of Peru, a complex civilization arose which has been termed the Norte Chico civilization, also known as Caral-Supe. It is the oldest known civilization in the Americas and one of the five sites
where civilization originated independently and indigenously in the
ancient world, flourishing between the 30th and 18th centuries BC. It
pre-dated the Mesoamerican Olmec civilization by nearly two millennia. It was contemporaneous with Egypt following the unification of its kingdom under Narmer and the emergence of the first Egyptian hieroglyphics.
Monumental architecture, including earthwork platform mounds and
sunken plazas, has been identified as part of the civilization.
Archaeological evidence points to the use of textile technology and the
worship of common god symbols. Government, possibly in the form of
theocracy, is assumed to have been required to manage the region.
However, numerous questions remain about its organization. In
archaeological nomenclature, the culture was a pre-ceramic culture of the
pre-Columbian Late Archaic period. It appears to have lacked ceramics
and art.
Ongoing scholarly debate persists over the extent to which the
flourishing of Norte Chico resulted from its abundant maritime food
resources, and the relationship that these resources would suggest
between coastal and inland sites.
The role of seafood in the Norte Chico diet has been a subject of
scholarly debate. In 1973, examining the Aspero region of Norte Chico, Michael E. Moseley
contended that a maritime subsistence (seafood) economy had been the
basis of society and its early flourishing. This theory, later termed
the "maritime foundation of Andean Civilization," was at odds with the
general scholarly consensus that civilization arose as a result of
intensive grain-based agriculture, as had been the case in the emergence
of civilizations in northeast Africa (Egypt) and southwest Asia
(Mesopotamia).
While earlier research pointed to edible domestic plants such as squash, beans, lucuma, guava, pacay, and camote at Caral, publications by Haas and colleagues have added avocado, achira, and corn
(Zea mays) to the list of foods consumed in the region. In 2013, Haas
and colleagues reported that maize was a primary component of the diet
throughout the period of 3000 to 1800 BC.[18]
Cotton
was another widespread crop in Norte Chico, essential to the production
of fishing nets and textiles. Jonathan Haas noted a mutual dependency,
whereby "The prehistoric residents of the Norte Chico needed the fish
resources for their protein and the fishermen needed the cotton to make
the nets to catch the fish."
In the 2005 book 1491: New Revelations of the Americas Before Columbus,
journalist Charles C. Mann surveyed the literature at the time,
reporting a date "sometime before 3200 BC, and possibly before 3500 BC"
as the beginning date for the formation of Norte Chico. He notes that
the earliest date securely associated with a city is 3500 BC, at Huaricanga in the (inland) Fortaleza area.
The Norte Chico civilization began to decline around 1800 BC as
more powerful centers appeared to the south and north along its coast,
and to the east within the Andes Mountains.
Mesoamerica, the Woodland Period, and Mississippian culture (2000 BCE – 500 CE)
Simple map of subsistence methods in the Americas at 1000 BCE.
The Olmec
civilization was the first Mesoamerican civilization, beginning around
1600-1400 BC and ending around 400 BC. Mesoamerica is considered one of
the six sites
around the globe in which civilization developed independently and
indigenously. This civilization is considered the mother culture of the
Mesoamerican civilizations. The Mesoamerican calendar, numeral system,
writing, and much of the Mesoamerican pantheon seem to have begun with
the Olmec.
Some elements of agriculture seem to have been practiced in Mesoamerica quite early. The domestication of maize
is thought to have begun around 7,500 to 12,000 years ago. The earliest
record of lowland maize cultivation dates to around 5100 BC.[19]
Agriculture continued to be mixed with a hunting-gathering-fishing
lifestyle until quite late compared to other regions, but by 2700 BC,
Mesoamericans were relying on maize, and living mostly in villages.
Temple mounds and classes started to appear. By 1300–1200 BC, small
centres coalesced into the Olmec civilization, which seems to have been a
set of city-states, united in religious and commercial concerns. The
Olmec cities had ceremonial complexes with earth/clay pyramids, palaces,
stone monuments, aqueducts and walled plazas. The first of these
centres was at San Lorenzo (until 900 BC); La Venta was the last great
Olmec centre. Olmec artisans sculpted jade and clay figurines of jaguars
and humans. Their iconic giant heads, believed to depict Olmec rulers,
stood in every major city.
The Olmec civilization ended in 400 BC, with the defacing and
destruction of San Lorenzo and La Venta, two of the major cities. It
nevertheless spawned many other states, most notably the Mayan
civilization, whose first cities began appearing around 700-600 BC.
Olmec influences continued to appear in many later Mesoamerican
civilizations.
Cities of the Aztecs, Mayas, and Incas were as large and
organized as the largest in the Old World, with an estimated population
of 200,000 to 350,000 in Tenochtitlan, the capital of the Aztec empire. The market established in the city was said to have been the largest ever seen by the conquistadors when they arrived. The capital of the Cahokians, Cahokia, located near modern East St. Louis, Illinois,
may have reached a population of over 20,000. At its peak, between the
12th and 13th centuries, Cahokia may have been the most populous city in
North America. Monks Mound, the major ceremonial center of Cahokia, remains the largest earthen construction of the prehistoric New World.
These civilizations developed agriculture as well, breeding maize (corn) from having ears 2–5 cm in length to perhaps 10–15 cm in length. Potatoes, tomatoes, pumpkins, beans, avocados, and chocolate
are now the most popular of the pre-Columbian agricultural products.
The civilizations did not develop extensive livestock as there were few
suitable species, although alpacas and llamas were domesticated for use as beasts of burden and sources of wool and meat in the Andes. By the 15th century, maize was being farmed in the Mississippi River Valley after introduction from Mexico. The course of further agricultural development was greatly altered by the arrival of Europeans.
Classic stage (800 BCE – 1533 CE)
Cahokia
Cahokia was a major regional chiefdom, with trade and tributary chiefdoms located in a range of areas from bordering the Great Lakes to the Gulf of Mexico.
Haudenosaunee
The Iroquois League of Nations or "People of the Long House", based in present-day upstate and western New York, had a confederacy
model from the mid-15th century. It has been suggested that their
culture contributed to political thinking during the development of the
later United States government. Their system of affiliation was a kind
of federation, different from the strong, centralized European
monarchies.[20][21][22]
Leadership was restricted to a group of 50 sachem chiefs, each representing one clan within a tribe; the Oneida and Mohawk people had nine seats each; the Onondagas held fourteen; the Cayuga had ten seats; and the Seneca
had eight. Representation was not based on population numbers, as the
Seneca tribe greatly outnumbered the others. When a sachem chief died,
his successor was chosen by the senior woman of his tribe in
consultation with other female members of the clan; property and
hereditary leadership were passed matrilineally. Decisions were not made through voting but through consensus decision making, with each sachem chief holding theoretical veto power. The Onondaga were the "firekeepers",
responsible for raising topics to be discussed. They occupied one side
of a three-sided fire (the Mohawk and Seneca sat on the second side of
the fire, and the Oneida and Cayuga sat on the third side).[22]
Elizabeth Tooker, an anthropologist,
has said that it was unlikely the US founding fathers were inspired by
the confederacy, as it bears little resemblance to the system of
governance adopted in the United States. For example, it is based on
inherited rather than elected leadership, selected by female members of
the tribes, consensus decision-making regardless of population size of
the tribes, and a single group capable of bringing matters before the
legislative body.[22]
Long-distance trading did not prevent warfare and displacement
among the indigenous peoples, and their oral histories tell of numerous
migrations to the historic territories where Europeans encountered them.
The Iroquois invaded and attacked tribes in the Ohio River area of
present-day Kentucky and claimed the hunting grounds. Historians have
placed these events as occurring as early as the 13th century, or in the
17th century Beaver Wars.[23]
Through warfare, the Iroquois drove several tribes to migrate
west to what became known as their historically traditional lands west
of the Mississippi River. Tribes originating in the Ohio Valley who
moved west included the Osage, Kaw, Ponca and Omaha people. By the mid-17th century, they had resettled in their historical lands in present-day Kansas, Nebraska, Arkansas and Oklahoma. The Osage warred with Caddo-speaking Native Americans, displacing them in turn by the mid-18th century and dominating their new historical territories.[23]
The Pueblo people of what is now the Southwestern United States and northern Mexico lived in large, apartment-like stone and adobe structures. They live in Arizona, New Mexico, Utah, Colorado, and possibly surrounding areas.
Chichimeca was the name that the Mexica (Aztecs) generically applied to a wide range of semi-nomadic peoples who inhabited the north of modern-day Mexico, and carried the same sense as the European term "barbarian". The name was adopted with a pejorative tone by the Spaniards when referring especially to the semi-nomadic hunter-gatherer peoples of northern Mexico.
Mesoamerica
Zapotec
The Zapotec emerged around 1500 BCE. Their writing system influenced the later Olmec. They left behind the great city of Monte Albán.
Olmec
The Olmec civilization emerged around 1200 BCE in Mesoamerica
and ended around 400 BCE. Olmec art and concepts influenced surrounding
cultures after their downfall. This civilization was thought to be the
first in America to develop a writing system. After the Olmecs abandoned
their cities for unknown reasons, the Maya, Zapotec and Teotihuacan
arose.
Purepecha
The Purepecha civilization emerged around 1000 CE in Mesoamerica. They flourished from 1100 CE to 1530 CE. They continue to live on in the state of Michoacán. Fierce warriors, they were never conquered, and in their glory years they successfully sealed off huge areas from Aztec domination.
Maya
Maya history spans 3,000 years. The Classic Maya civilization may have collapsed due to a changing climate at the end of the 10th century.
Toltec
The Toltec were a nomadic people, dating from the 10th to the 12th century, whose language was also spoken by the Aztecs.
Teotihuacan
Teotihuacan
(4th century BCE – 7/8th century CE) was both a city and an empire of
the same name, which, at its zenith between 150 CE and the 5th century,
covered most of Mesoamerica.
Aztec
The Aztec, having started to build their empire around the 14th century, found their civilization abruptly ended by the Spanish conquistadors. They lived in Mesoamerica and surrounding lands. Their capital city, Tenochtitlan, was one of the largest cities of all time.
Norte Chico
The oldest known civilization of the Americas was established in the Norte Chico region of modern Peru. Complex society emerged in a group of coastal valleys between 3000 and 1800 BCE. The Quipu, a distinctive recording device among Andean civilizations, apparently dates from the era of Norte Chico's prominence.
Chavín
The Chavín
established a trade network and developed agriculture by as early as
900 BCE (late compared to the Old World), according to some estimates
and archaeological finds. Artifacts were found at a site called Chavín
in modern Peru at an elevation of 3,177 meters. The Chavín civilization spanned from 900 BCE to 300 BCE.
Inca
Holding their capital at the great city of Cusco, the Inca civilization dominated the Andes region from 1438 to 1533.
Known as Tahuantinsuyu, or "the land of the four regions", in Quechua,
the Inca culture was highly distinct and developed. Cities were built
with precise, unmatched stonework, constructed over many levels of
mountain terrain. Terrace farming was a useful form of agriculture. There is evidence of excellent metalwork and even successful trepanation of the skull in Inca civilization.
Non-Native American nations' claims over North America, 1750–1999.
Political evolution of Central America and the Caribbean since 1700.
European nations’ control over South America, 1700 to present
Around 1000, the Vikings established a short-lived settlement in Newfoundland, now known as L'Anse aux Meadows.
Speculations exist about other Old World discoveries of the New World,
but none of these are generally or completely accepted by most scholars.
Spain sponsored a major exploration led by Christopher Columbus in 1492; it quickly led to extensive European colonization of the Americas.
The Europeans brought Old World diseases which are thought to have
caused catastrophic epidemics and a huge decrease of the native
population. Columbus came at a time in which many technical developments
in sailing techniques and communication made it possible to report his
voyages easily and to spread word of them throughout Europe. It was also
a time of growing religious, imperial and economic rivalries that led
to a competition for the establishment of colonies.
The Spanish colonies won their independence in the first quarter of the 19th century, in the Spanish American wars of independence. Simón Bolívar and José de San Martín,
among others, led their independence struggle. Although Bolivar
attempted to keep the Spanish-speaking parts of the continent
politically allied, they rapidly became independent of one another as
well, and several further wars were fought, such as the Paraguayan War and the War of the Pacific. (See Latin American integration.) In the Portuguese colony, Dom Pedro I (also Pedro IV of Portugal), son of the Portuguese king Dom João VI, proclaimed the country's independence in 1822 and became Brazil's first Emperor. This was peacefully accepted by the crown in Portugal, upon compensation.
Effects of slavery
Slavery played a significant role in the economic development of the New World after the colonization of the Americas by the Europeans. The cotton, tobacco, and sugar cane harvested by slaves became important exports for the United States and the Caribbean countries.
20th century
North America
A Canadian World War I recruiting poster (1914–1918)
As a part of the British Empire,
Canada immediately entered World War I when it broke out in 1914.
Canada bore the brunt of several major battles during the early stages
of the war, including the poison gas attacks at Ypres. Losses became grave, and the government eventually brought in conscription, despite the fact that this was against the wishes of the majority of French Canadians. In the ensuing Conscription Crisis of 1917, riots broke out on the streets of Montreal. The neighboring dominion of Newfoundland suffered a devastating loss on July 1, 1916, the first day on the Somme.
The United States stayed out of the conflict until 1917, when it joined the Entente powers. The United States was then able to play a crucial role at the Paris Peace Conference of 1919 that shaped interwar Europe. Mexico was not part of the war, as the country was embroiled in the Mexican Revolution at the time.
The 1920s brought an age of great prosperity in the United States, and to a lesser degree Canada. But the Wall Street Crash of 1929 combined with drought ushered in a period of economic hardship in the United States and Canada.
From 1926 to 1929, there was a popular uprising against the
anti-Catholic Mexican government of the time, set off specifically by
the anti-clerical provisions of the Mexican Constitution of 1917.
Once again, Canada found itself at war before its neighbors; however, even Canadian contributions were slight before the Japanese attack on Pearl Harbor. The entry of the United States into the war helped tip the balance in favour of the Allies. In 1942, two Mexican tankers transporting oil to the United States were attacked and sunk by the Germans in the Gulf of Mexico, in spite of Mexico's neutrality at the time. This led Mexico to enter the conflict with a declaration of war on the Axis nations. The destruction of Europe wrought by the war
vaulted all North American countries to more important roles in world
affairs, especially the United States, which emerged as a "superpower".
The early Cold War era saw the United States as the most powerful
nation in a Western coalition of which Mexico and Canada were also a
part. In Canada, Quebec was transformed by the Quiet Revolution and the emergence of Quebec nationalism.
Mexico experienced an era of huge economic growth after World War II, a
heavy industrialization process and a growth of its middle class, a
period known in Mexican history as "El Milagro Mexicano" (the Mexican miracle). The Caribbean saw the beginnings of decolonization, while on the largest island the Cuban Revolution introduced Cold War rivalries into Latin America.
The civil rights movement in the U.S. ended Jim Crow
and empowered black voters in the 1960s, which allowed black citizens
to move into high government offices for the first time since
Reconstruction. However, the dominant New Deal coalition collapsed in the mid-1960s in disputes over race and the Vietnam War, and the conservative movement began its rise to power as the once-dominant liberalism weakened and collapsed. Canada during this era was dominated by the leadership of Pierre Elliott Trudeau. In 1982, at the end of his tenure, Canada enshrined a new constitution.
Canada's Brian Mulroney ran on a similarly conservative platform and also favored closer trade ties with the United States. This led to the Canada–United States Free Trade Agreement in January 1989. Mexican presidents Miguel de la Madrid, in the early 1980s, and Carlos Salinas de Gortari
in the late 1980s, started implementing liberal economic strategies
that were seen as a good move. However, Mexico experienced a strong
economic recession in 1982 and the Mexican peso suffered a devaluation.
In the United States president Ronald Reagan
attempted to move the United States back towards a hard anti-communist
line in foreign affairs, in what his supporters saw as an attempt to
assert moral leadership (compared to the Soviet Union) in the world
community. Domestically, Reagan attempted to bring in a package of privatization and deregulation to stimulate the economy.
The end of the Cold War and the beginning of the era of sustained
economic expansion coincided during the 1990s. On January 1, 1994,
Canada, Mexico and the United States signed the North American Free Trade Agreement, creating the world's largest free trade area. In 2000, Vicente Fox became the first non-PRI candidate to win the Mexican presidency in over 70 years. The optimism of the 1990s was shattered by the 9/11 attacks of 2001 on the United States, which prompted military intervention in Afghanistan, which also involved Canada. Canada did not support the United States' later move to invade Iraq, however.
In the U.S., the Reagan Era of conservative national policies, deregulation, and tax cuts had taken hold with the election of Ronald Reagan in 1980. By 2010, political scientists were debating whether the election of Barack Obama
in 2008 represented an end of the Reagan Era, or was only a reaction
against the bubble economy of the 2000s (decade), which burst in 2008
and became the Late-2000s recession with prolonged unemployment.
Despite the failure of a lasting political union, the concept of
Central American reunification, though lacking enthusiasm from the
leaders of the individual countries, rises from time to time. In
1856–1857 the region successfully established a military coalition to
repel an invasion by United States adventurer William Walker. Today, all five nations fly flags
that retain the old federal motif of two outer blue bands bounding an
inner white stripe. (Costa Rica, traditionally the least committed of
the five to regional integration, modified its flag significantly in
1848 by darkening the blue and adding a double-wide inner red band, in
honor of the French tricolor).
In 1907, a Central American Court of Justice was created. On December 13, 1960, Guatemala, El Salvador, Honduras, and Nicaragua established the Central American Common Market
("CACM"). Costa Rica, because of its relative economic prosperity and
political stability, chose not to participate in the CACM. The goals for
the CACM were to create greater political unification and success of import substitution industrialization policies. The project was an immediate economic success, but was abandoned after the 1969 "Football War" between El Salvador and Honduras. A Central American Parliament
has operated, as a purely advisory body, since 1991. Costa Rica has
repeatedly declined invitations to join the regional parliament, which
seats deputies from the four other former members of the Union, as well
as from Panama and the Dominican Republic.
South America
In the 1960s and 1970s, the governments of Argentina, Brazil, Chile,
and Uruguay were overthrown or displaced by U.S.-aligned military
dictatorships. These dictatorships detained tens of thousands of political prisoners, many of whom were tortured and/or killed (on inter-state collaboration, see Operation Condor). Economically, they began a transition to neoliberal economic policies. They placed their own actions within the United States Cold War doctrine of "National Security" against internal subversion. Throughout the 1980s and 1990s, Peru suffered from an internal conflict (see Túpac Amaru Revolutionary Movement and Shining Path).
Revolutionary movements and right-wing military dictatorships have been
common, but starting in the 1980s a wave of democratization came
through the continent, and democratic rule is widespread now.
Allegations of corruption remain common, and several nations have seen
crises which have forced the resignation of their presidents, although
normal civilian succession has continued.
International indebtedness became a notable problem, as most recently illustrated by Argentina's default in the early 21st century. In recent years, South American governments have drifted to the left, with socialist leaders elected in Chile, Bolivia, Brazil, and Venezuela, and leftist presidents in Argentina and Uruguay. Despite the move to the left, South America remains largely capitalist. With the founding of the Union of South American Nations, South America has started down the road of economic integration, with plans for political integration in the style of the European Union.
Elon Musk has launched a California-based company called Neuralink Corp. to pursue “neural lace” brain-interface technology, The Wall Street Journal reported today (Monday, March 27, 2017), citing people familiar with the matter.
Neural lace would help prevent humans from becoming “house cats” to
AI, he suggests. “I think one of the solutions that seems maybe the best
is to add an AI layer,” Musk hinted at the Code Conference last year. It would be a “digital layer above the cortex that could work well and symbiotically with you.”
“We are already a cyborg,” he added. “You have a digital version of
yourself online in form of emails and social media. … But the constraint
is input/output — we’re I/O bound … particularly output. … Merging with
digital intelligence revolves around … some sort of interface with your
cortical neurons.”
Reflecting concepts that have been proposed by Ray Kurzweil, “over
time I think we will probably see a closer merger of biological
intelligence and digital intelligence,” Musk said at the recent World Government Summit in Dubai.
Musk suggested the neural lace interface could be inserted via veins and arteries.
Image showing mesh electronics being injected through a sub-100-micrometer inner-diameter glass needle into aqueous solution. (credit: Lieber Research Group, Harvard University)
KurzweilAI reported on one approach to a neural-lace-like brain interface in 2015. A “syringe-injectable electronics” concept was invented by researchers in Charles Lieber’s lab at
Harvard University and the National Center for Nanoscience and
Technology in Beijing. It would involve injecting a biocompatible
polymer scaffold mesh with attached microelectronic devices into the
brain via syringe.
The process for fabricating the scaffold is similar to that used to
etch microchips, and begins with a dissolvable layer deposited on a
biocompatible nanoscale polymer mesh substrate, with embedded nanowires,
transistors, and other microelectronic devices attached. The mesh is
then tightly rolled up, allowing it to be sucked up into a syringe via a
thin (100-micrometer internal diameter) glass needle. The mesh can
then be injected into brain tissue by the syringe.
The input-output connections of the mesh electronics can be linked
to standard electronic devices (for voltage insertion or measurement,
for example), allowing the mesh-embedded devices to be individually
addressed and used to precisely stimulate or record individual neural
activity.
A schematic showing in vivo stereotaxic injection of mesh electronics into a mouse brain (credit: Jia Liu et al./Nature Nanotechnology)
Lieber’s team has demonstrated this in live mice and verified
continuous monitoring and recordings of brain signals on 16 channels.
“We have shown that mesh electronics with widths more than 30 times the
needle ID can be injected and maintain a high yield of active electronic
devices … little chronic immunoreactivity,” the researchers said in a
June 8, 2015 paper in Nature Nanotechnology.
“In the future, our new approach and results could be extended in
several directions, including the incorporation of multifunctional
electronic devices and/or wireless interfaces to further increase the
complexity of the injected electronics.”
This technology would require surgery, but would not have the
accessibility limitation of the blood-brain barrier with Musk’s
preliminary concept. For direct delivery via the bloodstream, it’s
possible that the nanorobots conceived by Robert A. Freitas, Jr. (and extended to interface with the cloud, as Ray Kurzweil has suggested) might be appropriate at some point in the future.
“Neuralink has reportedly already hired several high-profile
academics in the field of neuroscience: flexible electrodes and
nanotechnology expert Venessa Tolosa, PhD; UCSF professor Philip Sabes, PhD, who also participated in the Musk-sponsored Beneficial AI conference; and Boston University professor Timothy Gardner, PhD, who studies neural pathways in the brains of songbirds,” Engadget reports.