Tuesday, May 26, 2015

John Forbes Nash, Jr.



From Wikipedia, the free encyclopedia

John Forbes Nash, Jr. at a symposium on game theory at the University of Cologne, Germany, November 2, 2006
Born: June 13, 1928, Bluefield, West Virginia, U.S.
Died: May 23, 2015 (aged 86), Monroe Township, New Jersey, U.S.
Residence: United States
Nationality: American
Doctoral advisor: Albert W. Tucker
Spouse: Alicia Lopez-Harrison de Lardé (m. 1957–1963, divorced; m. 2001–2015, their deaths)
Children: 2

John Forbes Nash, Jr. (June 13, 1928 – May 23, 2015) was an American mathematician whose works in game theory, differential geometry, and partial differential equations have provided insight into the factors that govern chance and decision-making inside the complex systems found in everyday life.

His theories are used in economics, computing, evolutionary biology, artificial intelligence, accounting, computer science (where the minimax solution of a two-player zero-sum game coincides with a Nash equilibrium), games of skill, politics and military theory. Serving as a Senior Research Mathematician at Princeton University during the latter part of his life, he shared the 1994 Nobel Memorial Prize in Economic Sciences with game theorists Reinhard Selten and John Harsanyi. In 2015, he was awarded the Abel Prize for his work on nonlinear partial differential equations.

In 1959, Nash began showing clear signs of mental illness, and spent several years at psychiatric hospitals being treated for paranoid schizophrenia. After 1970, his condition slowly improved, allowing him to return to academic work by the mid-1980s.[1] His struggles with his illness and his recovery became the basis for Sylvia Nasar's biography, A Beautiful Mind, as well as a film of the same name starring Russell Crowe.[2][3][4]

On May 23, 2015, Nash and his wife, Alicia de Lardé Nash, were killed in a motor vehicle accident while riding in a taxi in New Jersey.

Youth

Nash was born on June 13, 1928, in Bluefield, West Virginia, United States. His father, John Forbes Nash, was an electrical engineer for the Appalachian Electric Power Company. His mother, Margaret Virginia (née Martin) Nash, had been a schoolteacher before she married. He was baptized in the Episcopal Church directly opposite the Martin house on Tazewell Street.[5] He had a younger sister, Martha (born November 16, 1930).

Education

Nash attended kindergarten and public school, and he learned from books provided by his parents and grandparents. Nash's grandmother played piano at home, and Nash's memories of listening to her when he visited were pleasant.[6] Nash's parents pursued opportunities to supplement their son's education, and arranged for him to take advanced mathematics courses at a local community college during his final year of high school. Nash attended the Carnegie Institute of Technology (CIT; now Carnegie Mellon University) with a full scholarship, the George Westinghouse Scholarship, and initially majored in chemical engineering. He switched to chemistry, and eventually to mathematics. After graduating in 1948 with a B.S. degree and an M.S. degree, both in mathematics, he accepted a scholarship to Princeton University, where he pursued graduate studies in mathematics.[6]

Nash's adviser and former CIT professor Richard Duffin wrote a letter of recommendation for graduate school consisting of a single sentence: "This man is a genius."[7] Nash was accepted by Harvard University, but the chairman of the mathematics department of Princeton, Solomon Lefschetz, offered him the John S. Kennedy fellowship, which was enough to convince Nash that Princeton valued him more.[8] Nash also viewed Princeton more favorably because it was closer to his family in Bluefield.[6] He went to Princeton, where he worked on his equilibrium theory, later known as the Nash equilibrium.

Major contributions

Game theory

Nash earned a Ph.D. degree in 1950 with a 28-page dissertation on non-cooperative games.[9][10] The thesis, which was written under the supervision of doctoral advisor Albert W. Tucker, contained the definition and properties of the Nash equilibrium. A crucial concept in non-cooperative games, it won Nash the Nobel Memorial Prize in Economic Sciences in 1994.
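
The concept is easy to state computationally. The sketch below is purely illustrative (it is not from Nash's thesis, and the payoff matrices encode a hypothetical Prisoner's Dilemma), but it shows the defining property: a cell of a two-player game is a pure-strategy Nash equilibrium when neither player can improve their payoff by unilaterally deviating.

# Enumerate the pure-strategy Nash equilibria of a two-player game.
# A holds the row player's payoffs, B the column player's; strategy 0
# is "cooperate" and strategy 1 is "defect" in this example.
A = [[-1, -3],
     [ 0, -2]]
B = [[-1,  0],
     [-3, -2]]

def pure_nash_equilibria(A, B):
    rows, cols = len(A), len(A[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            # The row player cannot do better by switching rows...
            row_best = all(A[i][j] >= A[k][j] for k in range(rows))
            # ...and the column player cannot do better by switching columns.
            col_best = all(B[i][j] >= B[i][k] for k in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection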

Nash's major publications relating to this concept are "Equilibrium Points in N-Person Games" (1950), "The Bargaining Problem" (1950), "Non-Cooperative Games" (1951), and "Two-Person Cooperative Games" (1953).

Other mathematics

Nash did groundbreaking work in the area of real algebraic geometry. His work in mathematics also includes the Nash embedding theorem, which shows that every abstract Riemannian manifold can be isometrically realized as a submanifold of Euclidean space, as well as significant contributions to the theory of nonlinear parabolic partial differential equations and to singularity theory.

In her book A Beautiful Mind, author Sylvia Nasar explains that Nash was working on proving Hilbert's nineteenth problem, a problem concerning elliptic partial differential equations, when, in 1956, he suffered a severe disappointment. He learned that an Italian mathematician, Ennio De Giorgi, had published a proof just months before Nash achieved his. Each took a different route to his solution. The two mathematicians met each other at the Courant Institute of Mathematical Sciences of New York University during the summer of 1956. It has been speculated that if only one of them had solved the problem, he would have been given the Fields Medal for the proof.[6]

In 2011, the National Security Agency declassified letters written by Nash in the 1950s, in which he had proposed a new encryption–decryption machine.[11] The letters show that Nash had anticipated many concepts of modern cryptography, which are based on computational hardness.[12]

Personal life

In 1951, Nash was hired by the Massachusetts Institute of Technology (MIT) as a C. L. E. Moore instructor in the mathematics faculty. About a year later, Nash began a relationship in Massachusetts with Eleanor Stier, a nurse he met while she cared for him as a patient. They had a son, John David Stier, but Nash left Stier when she told him of her pregnancy.[13] The film based on Nash's life, A Beautiful Mind, was criticized during the run-up to the 2002 Oscars for omitting this aspect of his life. He was said to have abandoned her because of her social status, which he thought was beneath his.[14]

In 1954, while in his 20s, Nash was arrested for indecent exposure in an entrapment of homosexuals in Santa Monica, California. Although the charges were dropped, he was stripped of his top-secret security clearance and fired from RAND Corporation, where he had spent a few summers as a consultant.[15]

Not long after breaking up with Stier, Nash met Alicia Lopez-Harrison de Lardé (born January 1, 1933), a naturalized U.S. citizen from El Salvador. De Lardé graduated from MIT, having majored in physics.[6] They married in February 1957; although Nash was an atheist, the ceremony was performed in a Roman Catholic church.[16][17]

In 1958, Nash earned a tenured position at MIT, and his first signs of mental illness were evident in early 1959. At this time, his wife was pregnant with their first child. He resigned his position as a member of the MIT mathematics faculty in the spring of 1959[6] and his wife had him admitted to McLean Hospital for treatment of schizophrenia that same year. Their son, John Charles Martin Nash, was born soon afterward. The child was not named for a year because his wife felt Nash should have a say in the name given to the boy. Due to the stress of dealing with his illness, Nash and de Lardé divorced in 1963. After his final hospital discharge in 1970, Nash lived in de Lardé's house as a boarder. This stability seemed to help him, and he learned how to consciously discard his paranoid delusions.[18] He stopped taking psychiatric medication and was allowed by Princeton to audit classes. He continued to work on mathematics and eventually he was allowed to teach again. In the 1990s, de Lardé and Nash resumed their relationship, remarrying in 2001.

Death

While riding in a taxicab on May 23, 2015, Nash and his wife, Alicia de Lardé Nash, were killed as the result of a motor vehicle collision on the New Jersey Turnpike near Monroe Township. They were on their way home after a visit to Norway, where Nash had received the Abel Prize. The driver of the cab they were riding in from Newark Airport lost control of the vehicle and struck a guardrail. Both Nash and his wife were ejected from the car upon impact.[19][20][21][22][23] At the time of his death, Nash was 86 years old and a longtime resident of West Windsor Township, New Jersey.[24][25]

Following his death, obituaries appeared in scientific and conventional media throughout the world. In addition to their obituary for Nash,[26] The New York Times also published an article containing many notable quotes of Nash, assembled from diverse media and publications, providing his reflections on his life and achievements,[27] as well as an article on the cornerstone of his game theory on making choices in life.[28]

Mental illness


Nash in November 2006 at a game theory conference in Cologne, Germany

Nash's mental illness first began to manifest in the form of paranoia; his wife later described his behavior as erratic. Nash seemed to believe that all men who wore red ties were part of a communist conspiracy against him; Nash mailed letters to embassies in Washington, D.C., declaring that they were establishing a government.[1][29] Nash's psychological issues crossed into his professional life when he gave an American Mathematical Society lecture at Columbia University in 1959. Although ostensibly pertaining to a proof of the Riemann hypothesis, the lecture was incomprehensible. Colleagues in the audience immediately realized that something was wrong.[30]

He was admitted to McLean Hospital in April–May 1959, where he was diagnosed with paranoid schizophrenia. According to the clinical diagnosis, a person suffering from this disorder is dominated by relatively stable, often paranoid, fixed beliefs that are false, over-imaginative, or unrealistic, usually accompanied by seemingly real perceptions of things not actually present – particularly auditory and perceptual disturbances – along with a lack of motivation and mild clinical depression.[31]

In 1961, Nash was admitted to the New Jersey State Hospital at Trenton.[32] Over the next nine years, he spent periods in psychiatric hospitals, where, aside from receiving antipsychotic medications, he was administered insulin shock therapy.[31][33][34]

Although he sometimes took prescribed medication, Nash later wrote that he only ever did so under pressure. After 1970, he was never committed to a hospital again, and he refused any further medication. According to Nash, the film A Beautiful Mind inaccurately implied that he was taking the new atypical antipsychotics during this period. He attributed the depiction to the screenwriter (whose mother, he notes, was a psychiatrist), who was worried about the film encouraging people with the disorder to stop taking their medication.[35] Journalist Robert Whitaker wrote an article suggesting that recovery from problems like Nash's can be hindered by such drugs.[36]

Nash said that psychotropic drugs are overrated and that their adverse effects are not given enough consideration once someone is deemed mentally ill.[37][38][39] According to Sylvia Nasar, author of the book A Beautiful Mind, on which the movie was based, Nash recovered gradually with the passage of time. Encouraged by his then former wife, de Lardé, Nash worked in a communitarian setting where his eccentricities were accepted. De Lardé said of Nash, "it's just a question of living a quiet life".[1]

Nash dated the start of what he termed "mental disturbances" to the early months of 1959, when his wife was pregnant. He described a process of change "from scientific rationality of thinking into the delusional thinking characteristic of persons who are psychiatrically diagnosed as 'schizophrenic' or 'paranoid schizophrenic'",[6] including seeing himself as a messenger with a special function, perceiving supporters, opponents, and hidden schemers, feeling persecuted, and looking for signs representing divine revelation.[40] Nash suggested his delusional thinking was related to his unhappiness, his desire to feel important and be recognized, and his characteristic way of thinking, saying, "I wouldn't have had good scientific ideas if I had thought more normally." He also said, "If I felt completely pressureless I don't think I would have gone in this pattern".[41] He did not draw a categorical distinction between schizophrenia and bipolar disorder.[42] Nash reported that he did not hear voices until around 1964, and later engaged in a process of consciously rejecting them.[43] He reported that he was always taken to hospitals against his will, and only temporarily renounced his "dream-like delusional hypotheses" after being in a hospital long enough to decide to superficially conform – to behave normally or to experience "enforced rationality". Only gradually on his own did he "intellectually reject" some of the "delusionally influenced" and "politically oriented" thinking as a waste of effort. However, by 1995, although he was "thinking rationally again in the style that is characteristic of scientists", he said he also felt more limited.[6][44]
Nash wrote in 1994:
I spent times of the order of five to eight months in hospitals in New Jersey, always on an involuntary basis and always attempting a legal argument for release. And it did happen that when I had been long enough hospitalized that I would finally renounce my delusional hypotheses and revert to thinking of myself as a human of more conventional circumstances and return to mathematical research. In these interludes of, as it were, enforced rationality, I did succeed in doing some respectable mathematical research. Thus there came about the research for "Le problème de Cauchy pour les équations différentielles d'un fluide général"; the idea that Prof. Hironaka called "the Nash blowing-up transformation"; and those of "Arc Structure of Singularities" and "Analyticity of Solutions of Implicit Function Problems with Analytic Data".
But after my return to the dream-like delusional hypotheses in the later 60s I became a person of delusionally influenced thinking but of relatively moderate behavior and thus tended to avoid hospitalization and the direct attention of psychiatrists.

Thus further time passed. Then gradually I began to intellectually reject some of the delusionally influenced lines of thinking which had been characteristic of my orientation. This began, most recognizably, with the rejection of politically oriented thinking as essentially a hopeless waste of intellectual effort. So at the present time I seem to be thinking rationally again in the style that is characteristic of scientists.[6]

Recognition and later career

In 1978, Nash was awarded the John von Neumann Theory Prize for his discovery of non-cooperative equilibria, now called Nash equilibria. He won the Leroy P. Steele Prize in 1999.

In 1994, he received the Nobel Memorial Prize in Economic Sciences (along with John Harsanyi and Reinhard Selten) as a result of his game theory work as a Princeton graduate student. In the late 1980s, Nash had begun to use email to gradually link up with working mathematicians, who realized that he was the John Nash and that his new work had value. They formed part of the nucleus of a group that contacted the Bank of Sweden's Nobel award committee and was able to vouch for Nash's mental health and his ability to receive the award in recognition of his early work.

As of 2011, Nash's recent work involved ventures in advanced game theory, including partial agency, which showed that, as in his early career, he preferred to select his own path and problems. Between 1945 and 1996, he published 23 scientific studies.

Nash suggested hypotheses on mental illness. He compared not thinking in an acceptable manner, or being "insane" and not fitting into a usual social function, to being "on strike" from an economic point of view. He advanced views in evolutionary psychology about the value of human diversity and the potential benefits of apparently nonstandard behaviors or roles.[45]

Nash developed work on the role of money in society. Within the framing theorem that people can be so controlled and motivated by money that they may not be able to reason rationally about it, he criticized interest groups that promote quasi-doctrines based on Keynesian economics that permit manipulative short-term inflation and debt tactics that ultimately undermine currencies. He suggested a global "industrial consumption price index" system that would support the development of more "ideal money" that people could trust, rather than more unstable "bad money". He noted that some of his thinking parallels that of economist and political philosopher Friedrich Hayek regarding money and a nontypical viewpoint of the function of the authorities.[46][47]

Nash received an honorary degree, Doctor of Science and Technology, from Carnegie Mellon University in 1999, an honorary degree in economics from the University of Naples Federico II on March 19, 2003,[48] and an honorary doctorate in economics from the University of Antwerp in April 2007, where he was keynote speaker at a conference on game theory. He was also a prolific guest speaker at a number of world-class events, such as the Warwick Economics Summit in 2005, held at the University of Warwick. In 2012 he was elected as a fellow of the American Mathematical Society.[49] On May 19, 2015, a few days before his death, Nash, along with Louis Nirenberg, was awarded the 2015 Abel Prize by King Harald V of Norway at a ceremony in Oslo.[50]

Representation in culture

At Princeton, Nash became a campus legend known as "The Phantom of Fine Hall"[51] (Princeton's mathematics center), a shadowy figure who would scribble arcane equations on blackboards in the middle of the night. He is referred to in a novel set at Princeton, The Mind-Body Problem (1983) by Rebecca Goldstein.[1]

Sylvia Nasar's biography of Nash, A Beautiful Mind, was published in 1998. A film by the same name was released in 2001, directed by Ron Howard with Russell Crowe playing Nash.

The coming merge of human and machine intelligence

by Jeff Stibel 
Original link: http://medicalxpress.com/news/2015-05-merge-human-machine-intelligence.html


For most of the past two million years, the human brain has been growing steadily. But something has recently changed. In a surprising reversal, human brains have actually been shrinking for the last 20,000 years or so. We have lost nearly a baseball-sized amount of matter from a brain that isn't any larger than a football.

The descent is rapid and pronounced. The anthropologist John Hawks describes it as a "major downsizing in an evolutionary eyeblink." If this pace is maintained, scientists predict that our brains will be no larger than those of our forebears, Homo erectus, within another 2,000 years.

The reason that our brains are shrinking is simple: our biology is focused on survival, not intelligence. Larger brains were necessary to allow us to learn to use language, tools and all of the innovations that allowed our species to thrive. But now that we have become civilized—domesticated, if you will—certain aspects of intelligence are less necessary.

This is actually true of all animals: domesticated animals, including dogs, cats, hamsters and birds, have 10 to 15 percent smaller brains than their counterparts in the wild. Because brains are so expensive to maintain, large sizes are selected out when nature sees no direct survival benefit. It is an inevitable fact of life.

Fortunately, another influence has evolved over the past 20,000 years that is making us smarter even as our brains are shrinking: technology. Technology has allowed us to leapfrog evolution, enabling our brains and bodies to do things that were otherwise impossible biologically. We weren't born with wings, but we've created airplanes, helicopters, hot air balloons and hang gliders. We don't have sufficient natural strength or speed to bring down big game, but we've created spears, rifles and livestock farms.

Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature's intent and grow larger by proxy. It is not a stretch of the imagination to believe we will one day have all of the world's information embedded in our minds via the Internet.

Psychics and physics

In the late 1800s, a German astronomer named Hans Berger fell off a horse and was nearly trampled by cavalry. He narrowly escaped injury, but was forever changed by the incident, owing to the reaction of his sister. Though she was miles away at the time, Berger's sister was instantly overcome with a feeling that Hans was in trouble. Berger took this as evidence of the mind's psychic ability and dedicated the rest of his life to finding certain proof.

Berger abandoned his study of astronomy and enrolled in medical school to gain an understanding of the brain that would allow him to prove a "correlation between objective activity in the brain and subjective psychic phenomena." He later joined the University of Jena in Germany as professor of neurology to pursue his quest.

At the time, psychic interest was relatively high. There were numerous academics devoted to the field, studying at prestigious institutions such as Stanford and Duke, Oxford and Cambridge. Still, it was largely considered bunk science, with most credible academics focused on dispelling, rather than proving, claims of psychic ability. But one of those psychic beliefs happened to be true.

That belief is the now well-understood notion that our brains communicate electrically. This was a radical idea at the time; after all, the electromagnetic field had only been discovered in 1865. But Berger found proof. He invented a device called the electroencephalogram (you probably know it as an EEG) that recorded brain waves. Using his new EEG, Berger was the first to demonstrate that our neurons actually talk to one another, and that they do so with electrical pulses. He published his results in 1929.

The new normal

As often happens with revolutionary ideas, Berger's EEG results were either ignored or lambasted as trickery. This was, after all, preternatural activity. But over the next decade, enough independent scholars verified the results that they became widely accepted. Berger saw his findings as evidence of the mind's potential for "psychic" activity, and he continued searching for more evidence until the day he hanged himself in frustration. The rest of the scientific community went back to what it had always been doing, "good science," and largely forgot about the electric neuron.

That was the case until the biophysicist Eberhard Fetz came along in 1969 and elaborated on Berger's discovery. Fetz reasoned that if brains were controlled by electricity, then perhaps we could use our brains to control machines. In a small primate lab at the University of Washington in Seattle, he connected the brain of a rhesus monkey to an electrical meter and then watched in amazement as the monkey learned how to control the level of the meter with nothing but its thoughts.

While incredible, this insight didn't have much application in 1969. But with the rapid development of silicon chips, computers and data networks, the technology now exists to connect people's brains to the Internet, and it's giving rise to a new breed of intelligence.

Scientists in labs across the globe are busy perfecting computer chips that can be implanted in the human brain. In many ways, the results, if successful, fit squarely in the realm of "psychics." There may be no such thing as paranormal activity, but make no mistake that all of the following are possible and on the horizon: telepathy, no problem; telekinesis, absolutely; clairvoyance, without question; ESP, oh yeah. While not psychic, Hans Berger may have been right all along.

The Six Million Dollar Man, for real

Jan Scheuermann lifted a chocolate bar to her mouth and took a bite. A grin spread across her face as she declared, "One small nibble for a woman, one giant bite for BCI."

BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain. The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004.

These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is.

BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts. The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann's case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts.

A smart, vibrant woman in her early 50s, Scheuermann has been unable to use her arms and legs since she was diagnosed with a rare genetic disease at the age of 40. "I have not moved things for about 10 years . . . . This is the ride of my life," she said. "This is the roller coaster. This is skydiving." Other patients use brain-controlled implants to communicate, control wheelchairs, write emails and connect to the Internet.

The technology is surprisingly simple to understand. BrainGate is merely tapping into the brain's electrical signals in the same way that Berger's EEG and Fetz's electrical meter did. The BrainGate chip, once attached to the motor cortex, reads the brain's electrical signals and sends them to a computer, which interprets them and sends along instructions to other electrical devices like a robotic arm or a wheelchair.
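
The decoding step can be sketched in a few lines. This is not BrainGate's actual code; it is a minimal sketch assuming a linear decoder, a common choice in published brain-computer interface work, that maps a vector of per-channel firing rates to a two-dimensional velocity command. All names and values here are hypothetical.

import numpy as np

# Hypothetical linear decoder: velocity = W @ rates + b.
# In practice W and b are fit during a calibration session in which
# the user imagines movements while firing rates are recorded.
rng = np.random.default_rng(0)
n_channels = 96                       # e.g., one electrode array
W = rng.normal(size=(2, n_channels))  # decoder weights (would be learned)
b = np.zeros(2)

def decode_velocity(rates):
    # Map one time bin of firing rates (spikes/s per channel)
    # to an (x, y) velocity command for a cursor or robotic arm.
    return W @ rates + b

rates = rng.poisson(lam=10.0, size=n_channels)  # simulated spike counts
print(decode_velocity(rates))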

In that respect, it's not much different from using your television remote to change the channel. Potentially the technology will enable bionics, restore communication abilities and give disabled people previously unimaginable access to the world.

Mind meld

But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers.

Computers have been creeping closer to our brains since their invention. What started as large mainframes became desktops, then laptops, then tablets and smartphones that we hold only inches from our faces, and now Google Glass, which (albeit undergoing a redesign) delivers the Internet in a pair of eyeglasses.

Back in 2004, Google's founders told Playboy magazine that one day we'd have direct access to the Internet through brain implants, with "the entirety of the world's information as just one of our thoughts."

A decade later, the road map is taking shape. While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe.

Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s.

The pursuit of artificial intelligence has been plagued by problems. For one, we keep changing the definition of intelligence. In the 1960s, we said a computer that could beat a backgammon champion would surely be intelligent. But in the 1970s, when Gammonoid beat Luigi Villa—the world champion backgammon player—by a score of 7-1, we decided that backgammon was too easy, requiring only straightforward calculations.

We changed the rules to focus on games of sophisticated rules and strategies, like chess. Yet when IBM's Deep Blue computer beat the reigning chess champion, Garry Kasparov, in 1997, we changed the rules again. No longer were sophisticated calculations or logical decision-making acts of intelligence.

Perhaps when computers could answer human knowledge questions, then they'd be intelligent. Of course, we had to revise that theory in 2011 when IBM's Watson computer soundly beat the best humans at Jeopardy. But all of these computers were horribly bad sports: they couldn't say hello, shake hands or make small talk of any kind. Each time a machine defies our definition of intelligence we move to a new definition.

What makes us human?

We've done the same thing in nature. We once argued that what set us apart from other animals was our ability to use tools. Then we saw primates and crows using tools. So we changed our minds and said that what makes us intelligent is our ability to use language. Then biologists taught the first chimpanzee how to use sign language, and we decided that intelligence couldn't be about language after all.

Next came self-consciousness and awareness, until experiments unequivocally proved that dolphins are self-aware. With animal intelligence as well as machine intelligence, we keep changing the goalposts.

There are those who believe we can transcend the moving goalposts. These bold adventurers have most recently focused on brain science, attempting to reverse engineer the brain. As the theory goes, once we understand all of the brain's parts, we can recreate them to build an intelligent system.

But there are two problems with this approach. First, the inner workings of the brain are largely a mystery. Neuroscience is making tremendous progress, but it is still early.

The second issue with reverse engineering the brain is more fundamental. Just as the Wright brothers didn't learn to fly by dissecting birds, we will not learn to create intelligence by recreating a brain. It is pretty clear that an intelligent machine will look nothing like a three-pound wrinkly lump of clay, nor will it have cells or blood or fat.

Daniel Dennett, University Professor and Austin B. Fletcher Professor of Philosophy at Tufts—whom I consider a mentor and a guide on the quest to solving the mysteries of the mind—was an advocate of reverse engineering at one point. But he recently changed course, saying "I'm trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart."

Dennett's mistake was to reduce the brain to the neuron in an attempt to rebuild it. That is reducing the brain one step too far, pushing us from the edge of the forest to deep into the trees. This is the danger in any kind of reverse engineering. Biologists reduced ant colonies down to individuals, but we have now learned that the ant network, the colony, is the critical level. Reducing flight to the feathers of a bird would not have worked, but reducing it to wingspan did the trick. Feathers are one step too far, just as are ants and neurons.

Scientists have oversimplified the function of a neuron, treating it as a predictable switching device that fires on and off. That would be incredibly convenient if it were true. But neurons are only logical when they work—and a neuron misfires up to 90 percent of the time. Artificial intelligence almost universally ignores this fact.

The new intelligence

Focusing on a single neuron's on/off switch misses what is happening with the network of neurons, which performs amazing feats. The faultiness of the individual neuron allows for the plasticity and adaptive nature of the network as a whole. Intelligence cannot be replicated by creating a bunch of switches, faulty or not. Instead, we must focus on the network.
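
The point is easy to demonstrate numerically. The toy model below is an illustration of the argument, not a model of real neurons: each "neuron" misfires 40 percent of the time (majority voting can only rescue units that are right more often than wrong), yet a network of a thousand such units almost never gives the wrong answer.

import random

random.seed(1)
P_MISFIRE = 0.4  # each unit inverts its output 40% of the time
N_UNITS = 1001   # an odd network size avoids tied votes

def unit(signal):
    # A single unreliable "neuron": passes the binary signal through,
    # but misfires (inverts it) with probability P_MISFIRE.
    return signal if random.random() > P_MISFIRE else 1 - signal

def network(signal):
    # The network's answer is a majority vote over all units.
    votes = sum(unit(signal) for _ in range(N_UNITS))
    return 1 if votes > N_UNITS // 2 else 0

trials = 1000
correct = sum(network(1) == 1 for _ in range(trials))
print(f"network accuracy: {correct / trials:.3f}")  # ~1.000 despite faulty units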

Neurons may be good analogs for transistors and maybe even computer chips, but they're not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn't allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network.

It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes.

Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.

Monday, May 25, 2015

Geology


From Wikipedia, the free encyclopedia

Geology (from the Ancient Greek γῆ, gē, i.e. "earth" and -λογία, -logia, i.e. "study of, discourse"[1][2]) is an earth science comprising the study of the solid Earth, the rocks of which it is composed, and the processes by which they change. Geology can also refer generally to the study of the solid features of any celestial body (such as the geology of the Moon or Mars).

Geology gives insight into the history of the Earth by providing the primary evidence for plate tectonics, the evolutionary history of life, and past climates. Geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology also plays a role in geotechnical engineering and is a major academic discipline.

Geologic materials

The majority of geological data comes from research on solid Earth materials. These typically fall into one of two categories: rock and unconsolidated material.

Rock


This schematic diagram of the rock cycle shows the relationship between magma and sedimentary, metamorphic, and igneous rock

There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle is an important concept in geology which illustrates the relationships between these three types of rock, and magma. When a rock crystallizes from melt (magma and/or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock, or turned into a metamorphic rock by heat and pressure that change its mineral content and give it a characteristic fabric. The sedimentary rock can in turn be metamorphosed by heat and pressure, and the metamorphic rock can be weathered, eroded, deposited, and lithified, once again becoming a sedimentary rock. Sedimentary rock may also be re-eroded and redeposited, and metamorphic rock may undergo additional metamorphism. All three types of rocks may be re-melted; when this happens, a new magma is formed, from which an igneous rock may once again crystallize.
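
For readers who think in code, the cycle can also be written down as a directed graph of allowed transitions. The mapping below simply restates the paragraph above; the state and process names are informal labels, not geological terminology.

# The rock cycle as a transition graph: state -> {process: next state}.
rock_cycle = {
    "magma":       {"cooling and crystallization": "igneous"},
    "igneous":     {"weathering, erosion, lithification": "sedimentary",
                    "heat and pressure": "metamorphic",
                    "melting": "magma"},
    "sedimentary": {"heat and pressure": "metamorphic",
                    "weathering, erosion, lithification": "sedimentary",
                    "melting": "magma"},
    "metamorphic": {"weathering, erosion, lithification": "sedimentary",
                    "heat and pressure": "metamorphic",
                    "melting": "magma"},
}

for state, transitions in rock_cycle.items():
    for process, result in transitions.items():
        print(f"{state} --[{process}]--> {result}")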

The majority of research in geology is associated with the study of rock, as rock provides the primary record of the majority of the geologic history of the Earth.

Unconsolidated material

Geologists also study unlithified material, which typically comes from more recent deposits. These materials are superficial deposits which lie above the bedrock.[3] Because of this, the study of such material is often known as Quaternary geology, after the recent Quaternary Period. This includes the study of sediment and soils, including studies in geomorphology, sedimentology, and paleoclimatology.

Whole-Earth structure

Plate tectonics

Oceanic-continental convergence resulting in subduction and volcanic arcs illustrates one effect of plate tectonics.

In the 1960s, a series of discoveries, the most important of which was seafloor spreading,[4][5] showed that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into a number of tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle: oceanic plate motions and mantle convection currents always move in the same direction, because the oceanic lithosphere is the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics.

On this diagram, subducting slabs are in blue, and continental margins and a few plate boundaries are in red. The blue blob in the cutaway section is the seismically imaged Farallon Plate, which is subducting beneath North America. The remnants of this plate on the surface of the Earth are the Juan de Fuca Plate and Explorer Plate in the northwestern USA / southwestern Canada, and the Cocos Plate on the west coast of Mexico.

The development of plate tectonics provided a physical basis for many observations of the solid Earth. Long linear regions of geologic features could be explained as plate boundaries.[6] Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, were explained as divergent boundaries, where two plates move apart. Arcs of volcanoes and earthquakes were explained as convergent boundaries, where one plate subducts under another. Transform boundaries, such as the San Andreas fault system, resulted in widespread powerful earthquakes. Plate tectonics also provided a mechanism for Alfred Wegener's theory of continental drift,[7] in which the continents move across the surface of the Earth over geologic time. They also provided a driving force for crustal deformation, and a new setting for the observations of structural geology. The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.

Earth structure

The Earth's layered structure. (1) inner core; (2) outer core; (3) lower mantle; (4) upper mantle; (5) lithosphere; (6) crust (part of the lithosphere)

The Earth's layered structure: typical wave paths from earthquakes gave early seismologists insight into the layered structure of the Earth

Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth.

Seismologists can use the arrival times of seismic waves in reverse to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves were not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a crust and lithosphere on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model.
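
A back-of-the-envelope example of how arrival times carry information: assuming single average P- and S-wave speeds for the crust (real analyses use travel-time tables and the tomographic inversion described above), the lag between the P and S arrivals at a station gives the distance to an earthquake. The speeds below are typical textbook values, not measurements.

# Estimate the distance to an earthquake from the S-minus-P arrival lag.
V_P = 6.0  # km/s, assumed average crustal P-wave speed
V_S = 3.5  # km/s, assumed average crustal S-wave speed

def distance_km(sp_lag_seconds):
    # Both waves travel the same distance d, so
    # d / V_S - d / V_P = lag  =>  d = lag * V_P * V_S / (V_P - V_S)
    return sp_lag_seconds * V_P * V_S / (V_P - V_S)

print(distance_km(10.0))  # a 10-second lag -> 84 km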

Mineralogists have been able to use the pressure and temperature data from the seismic and modelling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes in crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.

Geologic time


Geological time put in a diagram called a geological clock, showing the relative lengths of the eons of the Earth's history.

The geologic time scale encompasses the history of the Earth.[8] It is bracketed at the old end by the dates of the earliest Solar System material at 4.567 Ga[9] (gigaannum: billion years ago) and the age of the Earth at 4.54 Ga,[10][11] the beginning of the informally recognized Hadean eon. At the young end of the scale, it is bracketed by the present day in the Holocene epoch.

Brief time scale

The following four timelines show the geologic time scale. The first shows the entire time from the formation of the Earth to the present, but this compresses the most recent eon. Therefore the second scale shows the most recent eon with an expanded scale. The second scale compresses the most recent era, so the most recent era is expanded in the third scale. Since the Quaternary is a very short period with short epochs, it is further expanded in the fourth scale. The second, third, and fourth timelines are therefore each subsections of their preceding timeline. The Holocene (the latest epoch) is too small to be shown clearly on the third timeline, another reason for expanding the fourth scale. On the timelines, P stands for the Pleistocene epoch and Q for the Quaternary period.
Timelines of the geologic time scale, oldest to youngest: (1) the Hadean, Archean, Proterozoic, and Phanerozoic eons (the first three making up the Precambrian) and their eras; (2) the Phanerozoic periods, Cambrian through Quaternary, grouped into the Paleozoic, Mesozoic, and Cenozoic eras; (3) the Cenozoic periods (Paleogene, Neogene, Quaternary) and their epochs, Paleocene through Holocene; (4) the Quaternary epochs, with the Pleistocene stages Gelasian and Calabrian followed by the Holocene.

Dating methods

Geologists use a variety of methods to give both relative and absolute dates to geological events. They then use these dates to find the rates at which processes occur.

Relative dating

Cross-cutting relations can be used to determine the relative ages of rock strata and other geological structures. Explanations: A – folded rock strata cut by a thrust fault; B – large intrusion (cutting through A); C – erosional angular unconformity (cutting off A & B) on which rock strata were deposited; D – volcanic dyke (cutting through A, B & C); E – even younger rock strata (overlying C & D); F – normal fault (cutting through A, B, C & E).

Methods for relative dating were developed when geology first emerged as a formal science. Geologists still use the following principles today as a means to provide information about geologic history and the timing of geologic events.

The principle of Uniformitarianism states that the geologic processes observed in operation that modify the Earth's crust at present have worked in much the same way over geologic time.[12] A fundamental principle of geology advanced by the 18th-century Scottish physician and geologist James Hutton is that "the present is the key to the past." In Hutton's words: "the past history of our globe must be explained by what can be seen to be happening now."[13]

The principle of intrusive relationships concerns crosscutting intrusions. In geology, when an igneous intrusion cuts across a formation of sedimentary rock, it can be determined that the igneous intrusion is younger than the sedimentary rock. There are a number of different types of intrusions, including stocks, laccoliths, batholiths, sills and dikes.

The principle of cross-cutting relationships pertains to the formation of faults and the age of the sequences through which they cut. Faults are younger than the rocks they cut; accordingly, if a fault is found that penetrates some formations but not those on top of it, then the formations that were cut are older than the fault, and the ones that are not cut must be younger than the fault. Finding the key bed in these situations may help determine whether the fault is a normal fault or a thrust fault.[14]

The principle of inclusions and components states that, with sedimentary rocks, if inclusions (or clasts) are found in a formation, then the inclusions must be older than the formation that contains them. For example, in sedimentary rocks, it is common for gravel from an older formation to be ripped up and included in a newer layer. A similar situation with igneous rocks occurs when xenoliths are found. These foreign bodies are picked up as magma or lava flows, and are incorporated, later to cool in the matrix. As a result, xenoliths are older than the rock which contains them.

The Permian through Jurassic stratigraphy of the Colorado Plateau area of southeastern Utah is a great example of both Original Horizontality and the Law of Superposition. These strata make up much of the famous prominent rock formations in widely spaced protected areas such as Capitol Reef National Park and Canyonlands National Park. From top to bottom: Rounded tan domes of the Navajo Sandstone, layered red Kayenta Formation, cliff-forming, vertically jointed, red Wingate Sandstone, slope-forming, purplish Chinle Formation, layered, lighter-red Moenkopi Formation, and white, layered Cutler Formation sandstone. Picture from Glen Canyon National Recreation Area, Utah.

The principle of original horizontality states that the deposition of sediments occurs as essentially horizontal beds. Observation of modern marine and non-marine sediments in a wide variety of environments supports this generalization (although cross-bedding is inclined, the overall orientation of cross-bedded units is horizontal).[14]

The principle of superposition states that a sedimentary rock layer in a tectonically undisturbed sequence is younger than the one beneath it and older than the one above it. Logically a younger layer cannot slip beneath a layer previously deposited. This principle allows sedimentary layers to be viewed as a form of vertical time line, a partial or complete record of the time elapsed from deposition of the lowest layer to deposition of the highest bed.[14]

The principle of faunal succession is based on the appearance of fossils in sedimentary rocks. As organisms exist at the same time period throughout the world, their presence or (sometimes) absence may be used to provide a relative age of the formations in which they are found. Based on principles laid out by William Smith almost a hundred years before the publication of Charles Darwin's theory of evolution, the principles of succession were developed independently of evolutionary thought. The principle becomes quite complex, however, given the uncertainties of fossilization, the localization of fossil types due to lateral changes in habitat (facies change in sedimentary strata), and that not all fossils may be found globally at the same time.[15]
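
Taken together, these principles yield a set of "older-than" constraints that can be combined mechanically. The sketch below uses hypothetical rock units: each superposition or cross-cutting observation becomes an edge in a graph, and a topological sort recovers an ordering of the units consistent with all of the evidence.

from graphlib import TopologicalSorter  # Python 3.9+

# Each entry maps a unit to the set of units observed to be older:
# shale lies beneath sandstone (superposition), a dike cuts both
# layers, and a fault offsets the dike (cross-cutting relationships).
constraints = {
    "sandstone": {"shale"},
    "dike":      {"shale", "sandstone"},
    "fault":     {"dike"},
}

# static_order() lists the units from oldest to youngest, honoring
# every recorded relationship (it raises an error on contradictions).
print(list(TopologicalSorter(constraints).static_order()))
# ['shale', 'sandstone', 'dike', 'fault']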

Absolute dating

Geologists also use methods to determine the absolute age of rock samples and geological events. These dates are useful on their own and may also be used in conjunction with relative dating methods or to calibrate relative methods.[16]
At the beginning of the 20th century, an important advancement in geological science was facilitated by the ability to obtain accurate absolute dates for geologic events using radioactive isotopes and other methods. This changed the understanding of geologic time. Previously, geologists could only use fossils and stratigraphic correlation to date sections of rock relative to one another. With isotopic dates, it became possible to assign absolute ages to rock units, and these absolute dates could be applied to fossil sequences in which there was datable material, converting the old relative ages into new absolute ages.

For many geologic applications, isotope ratios of radioactive elements are measured in minerals that give the amount of time that has passed since a rock passed through its particular closure temperature, the point at which different radiometric isotopes stop diffusing into and out of the crystal lattice.[17][18] These are used in geochronologic and thermochronologic studies. Common methods include uranium-lead dating, potassium-argon dating, argon-argon dating and uranium-thorium dating. These methods are used for a variety of applications.
Dating of lava and volcanic ash layers found within a stratigraphic sequence can provide absolute age data for sedimentary rock units that do not contain radioactive isotopes, and can calibrate relative dating techniques. These methods can also be used to determine ages of pluton emplacement. Thermochronological techniques can be used to determine temperature profiles within the crust, the uplift of mountain ranges, and paleotopography.
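
The arithmetic behind such dates can be made explicit. For a parent isotope with decay constant λ, a measured daughter-to-parent ratio D/P in a mineral that has stayed closed since passing its closure temperature implies an age t = (1/λ) ln(1 + D/P). The sketch below uses the potassium-40 half-life for illustration; the measured ratio is invented, and real K–Ar dating must also correct for the fraction of potassium-40 decays that actually produce argon-40.

import math

HALF_LIFE_K40 = 1.248e9  # years, half-life of potassium-40
DECAY_CONSTANT = math.log(2) / HALF_LIFE_K40  # lambda, per year

def age_years(daughter_to_parent_ratio):
    # t = (1/lambda) * ln(1 + D/P), assuming a closed system
    # since the mineral cooled through its closure temperature.
    return math.log(1.0 + daughter_to_parent_ratio) / DECAY_CONSTANT

# Illustrative only: a measured ratio of 0.1 implies roughly 172 million years.
print(f"{age_years(0.1):.3e} years")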

Fractionation of the lanthanide series elements is used to compute ages since rocks were removed from the mantle.

Other methods are used for more recent events. Optically stimulated luminescence and cosmogenic radionuclide dating are used to date surfaces and/or erosion rates. Dendrochronology can also be used for the dating of landscapes. Radiocarbon dating is used for geologically young materials containing organic carbon.

Geological development of an area


An originally horizontal sequence of sedimentary rocks (in shades of tan) is affected by igneous activity. Deep below the surface are a magma chamber and large associated igneous bodies. The magma chamber feeds the volcano and sends off shoots of magma that will later crystallize into dikes and sills. Magma also advances upward to form intrusive igneous bodies. The diagram illustrates both a cinder cone volcano, which releases ash, and a composite volcano, which releases both lava and ash.

An illustration of the three types of faults. Strike-slip faults occur when rock units slide past one another, normal faults occur when rocks are undergoing horizontal extension, and thrust faults occur when rocks are undergoing horizontal shortening.

The geology of an area changes through time as rock units are deposited and inserted, and as deformational processes change their shapes and locations.

Rock units are first emplaced either by deposition onto the surface or by intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material such as volcanic ash or lava flows blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upward into the overlying rock and crystallize as they intrude.

After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates.

When rock units are placed under horizontal compression, they shorten and become thicker. Because rock units, other than muds, do not significantly change in volume, this is accomplished in two primary ways: through faulting and folding. In the shallow crust, where brittle deformation can occur, thrust faults form, which cause deeper rock to move on top of shallower rock. Because deeper rock is often older, as noted by the principle of superposition, this can result in older rocks moving on top of younger ones. Movement along faults can result in folding, either because the faults are not planar or because rock layers are dragged along, forming drag folds as slip occurs along the fault.
Deeper in the Earth, rocks behave plastically, and fold instead of faulting. These folds can either be those where the material in the center of the fold buckles upwards, creating "antiforms", or where it buckles downwards, creating "synforms". If the tops of the rock units within the folds remain pointing upwards, they are called anticlines and synclines, respectively. If some of the units in the fold are facing downward, the structure is called an overturned anticline or syncline, and if all of the rock units are overturned or the correct up-direction is unknown, they are simply called by the most general terms, antiforms and synforms.

A diagram of folds, indicating an anticline and a syncline.

Even higher pressures and temperatures during horizontal shortening can cause both folding and metamorphism of the rocks. This metamorphism causes changes in the mineral composition of the rocks and creates a foliation, or planar surface, that is related to mineral growth under stress. This can remove signs of the original textures of the rocks, such as bedding in sedimentary rocks, flow features of lavas, and crystal patterns in crystalline rocks.

Extension causes the rock units as a whole to become longer and thinner. This is primarily accomplished through normal faulting and through ductile stretching and thinning. Normal faults drop rock units that are higher below those that are lower. This typically results in younger units ending up below older units. Stretching of units can result in their thinning; in fact, there is a location within the Maria Fold and Thrust Belt in which the entire sedimentary sequence of the Grand Canyon can be seen over a length of less than a meter. Rocks at depths great enough to be ductilely stretched are often also metamorphosed. These stretched rocks can also pinch into lenses, known as boudins, after the French word for "sausage", because of their visual similarity.

Where rock units slide past one another, strike-slip faults develop in shallow regions, and become shear zones at deeper depths where the rocks deform ductilely.

Geologic cross-section of Kittatinny Mountain. This cross-section shows metamorphic rocks, overlain by younger sediments deposited after the metamorphic event. These rock units were later folded and faulted during the uplift of the mountain.

The addition of new rock units, both depositionally and intrusively, often occurs during deformation. Faulting and other deformational processes result in the creation of topographic gradients, causing material on the rock unit that is increasing in elevation to be eroded by hillslopes and channels. These sediments are deposited on the rock unit that is going down. Continual motion along the fault maintains the topographic gradient in spite of the movement of sediment, and continues to create accommodation space for the material to deposit. Deformational events are often also associated with volcanism and igneous activity. Volcanic ashes and lavas accumulate on the surface, and igneous intrusions enter from below. Dikes, long, planar igneous intrusions, enter along cracks, and therefore often form in large numbers in areas that are being actively deformed. This can result in the emplacement of dike swarms, such as those that are observable across the Canadian shield, or rings of dikes around the lava tube of a volcano.

All of these processes do not necessarily occur in a single environment, and do not necessarily occur in a single order. The Hawaiian Islands, for example, consist almost entirely of layered basaltic lava flows. The sedimentary sequences of the mid-continental United States and the Grand Canyon in the southwestern United States contain almost-undeformed stacks of sedimentary rocks that have remained in place since Cambrian time. Other areas are much more geologically complex. In the southwestern United States, sedimentary, volcanic, and intrusive rocks have been metamorphosed, faulted, foliated, and folded. Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world, have been metamorphosed to the point where their origin is indiscernible without laboratory analysis. In addition, these processes can occur in stages. In many places, the Grand Canyon in the southwestern United States being a very visible example, the lower rock units were metamorphosed and deformed, and then deformation ended and the upper, undeformed units were deposited.
Although any amount of rock emplacement and rock deformation can occur, and they can occur any number of times, these concepts provide a guide to understanding the geological history of an area.

Methods of geology

Geologists use a number of field, laboratory, and numerical modeling methods to decipher Earth history and understand the processes that occur on and inside the Earth. In typical geological investigations, geologists use primary information related to petrology (the study of rocks), stratigraphy (the study of sedimentary layers), and structural geology (the study of positions of rock units and their deformation). In many cases, geologists also study modern soils, rivers, landscapes, and glaciers; investigate past and current life and biogeochemical pathways; and use geophysical methods to investigate the subsurface.

Field methods


A standard Brunton Pocket Transit, used commonly by geologists in mapping and surveying

A typical USGS field mapping camp in the 1950s

Today, handheld computers with GPS and geographic information systems software are often used in geological field work (digital geologic mapping).
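
As a rough sketch of the kind of record such systems store, the hypothetical Python structure below pairs a GPS position with a strike-and-dip measurement; the field names and example values are invented for illustration and do not reflect any particular GIS package.

```python
# A minimal, hypothetical record for one digital geologic mapping point.
from dataclasses import dataclass

@dataclass
class OutcropMeasurement:
    latitude: float    # decimal degrees (WGS84)
    longitude: float   # decimal degrees (WGS84)
    strike_deg: float  # azimuth of the strike line, 0-360 (right-hand rule)
    dip_deg: float     # dip angle below horizontal, 0-90
    unit_name: str     # mapped rock unit
    notes: str = ""

pt = OutcropMeasurement(36.10, -112.11, 245.0, 12.0,
                        "Kaibab Limestone", "bedding attitude near rim")
print(pt)
```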

Geological field work varies depending on the task at hand. Typical fieldwork could consist of geological mapping, surveying of topographic features, subsurface mapping through geophysical methods, and the collection of rock samples for petrological, paleontological, or geochronological study.

Petrology


A petrographic microscope, which is an optical microscope fitted with cross-polarizing lenses, a conoscopic lens, and compensators (plates of anisotropic materials; gypsum plates and quartz wedges are common), for crystallographic analysis.

In addition to identifying rocks in the field, petrologists identify rock samples in the laboratory. Two of the primary methods for identifying rocks in the laboratory are through optical microscopy and by using an electron microprobe. In an optical mineralogy analysis, thin sections of rock samples are analyzed through a petrographic microscope, where the minerals can be identified through their different properties in plane-polarized and cross-polarized light, including their birefringence, pleochroism, twinning, and interference properties with a conoscopic lens.[25] In the electron microprobe, individual locations are analyzed for their exact chemical compositions and variation in composition within individual crystals.[26] Stable[27] and radioactive isotope[28] studies provide insight into the geochemical evolution of rock units.
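
The interference colors seen under crossed polars follow from the retardation, the optical path difference accumulated across the section, which is simply the section thickness multiplied by the mineral's birefringence. A minimal worked example in Python, using the standard thin-section thickness of about 30 micrometres and the textbook birefringence of quartz:

```python
# Retardation in a thin section: delta = thickness * birefringence.
# Standard thin sections are ground to about 30 micrometres; quartz has
# a birefringence of roughly 0.009 (textbook approximations).

thickness_m = 30e-6    # 30 micrometre thin section
birefringence = 0.009  # quartz (n_e - n_o)

retardation_nm = thickness_m * birefringence * 1e9
print(f"retardation ~ {retardation_nm:.0f} nm")  # ~270 nm: first-order white/grey
```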

Petrologists can also use fluid inclusion data[29] and perform high temperature and pressure physical experiments[30] to understand the temperatures and pressures at which different mineral phases appear, and how they change through igneous[31] and metamorphic processes. This research can be extrapolated to the field to understand metamorphic processes and the conditions of crystallization of igneous rocks.[32] This work can also help to explain processes that occur within the Earth, such as subduction and magma chamber evolution.

Structural geology

A diagram of an orogenic wedge. The wedge grows through faulting in the interior and along the main basal fault, called the décollement. It builds its shape into a critical taper, in which the angles within the wedge remain the same as failures inside the material balance failures along the décollement. It is analogous to a bulldozer pushing a pile of dirt, where the bulldozer is the overriding plate.

Structural geologists use microscopic analysis of oriented thin sections of geologic samples to observe the fabric within the rocks, which gives information about strain within the crystalline structure of the rocks. They also plot and combine measurements of geological structures to better understand the orientations of faults and folds and to reconstruct the history of rock deformation in the area. In addition, they perform analog and numerical experiments of rock deformation in large and small settings.

The analysis of structures is often accomplished by plotting the orientations of various features onto stereonets. A stereonet is a stereographic projection of a sphere onto a plane, in which planes are projected as lines and lines are projected as points. These can be used to find the locations of fold axes, relationships between faults, and relationships between other geologic structures.
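
The projection itself is straightforward trigonometry. The sketch below (a minimal illustration, not production plotting code) converts the trend and plunge of a line to x-y coordinates on a unit-radius lower-hemisphere net, using either the equal-area (Schmidt) or equal-angle (Wulff) convention:

```python
import math

def stereonet_xy(trend_deg: float, plunge_deg: float, equal_area: bool = True):
    """Project a line (trend/plunge) onto a lower-hemisphere stereonet of
    unit radius. Trend is measured clockwise from north (the +y axis); a
    horizontal line plots on the primitive circle, a vertical one at the center."""
    colat = math.radians(90.0 - plunge_deg)       # angle from the vertical
    if equal_area:
        r = math.sqrt(2.0) * math.sin(colat / 2)  # Schmidt (equal-area) net
    else:
        r = math.tan(colat / 2)                   # Wulff (equal-angle) net
    t = math.radians(trend_deg)
    return r * math.sin(t), r * math.cos(t)       # x east, y north

# A fold axis plunging 30 degrees toward azimuth 110:
x, y = stereonet_xy(110.0, 30.0)
print(round(x, 3), round(y, 3))
```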

Among the most well-known experiments in structural geology are those involving orogenic wedges, which are zones in which mountains are built along convergent tectonic plate boundaries.[33] In the analog versions of these experiments, horizontal layers of sand are pulled along a lower surface into a back stop, which results in realistic-looking patterns of faulting and the growth of a critically tapered (all angles remain the same) orogenic wedge.[34] Numerical models work in the same way as these analog models, though they are often more sophisticated and can include patterns of erosion and uplift in the mountain belt.[35] This helps to show the relationship between erosion and the shape of the mountain range. These studies can also give useful information about pathways for metamorphism through pressure, temperature, space, and time.[36]
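
As a hint of how such numerical models work, the sketch below uses one common simplification, treating hillslope erosion as linear diffusion of topography combined with a constant uplift rate; the grid, diffusivity, and uplift values are arbitrary illustrations, not calibrated to any real mountain belt.

```python
import numpy as np

# Toy landscape evolution: dh/dt = U + k * d2h/dx2 on a 1-D profile,
# solved with an explicit finite-difference scheme. Values are illustrative.
nx, dx = 101, 10.0          # 1 km profile with 10 m node spacing
k = 0.01                    # topographic diffusivity, m^2/yr
U = 0.001                   # uplift rate, m/yr
dt = 0.4 * dx**2 / (2 * k)  # time step within the explicit stability limit

h = np.zeros(nx)            # start from a flat profile
for _ in range(1000):
    curvature = (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2
    h[1:-1] += dt * (U + k * curvature)
    h[0] = h[-1] = 0.0      # fixed base level at both ends

print(f"max relief after {1000 * dt:.0f} yr: {h.max():.0f} m")
```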

Stratigraphy

In the laboratory, stratigraphers analyze samples of stratigraphic sections that can be returned from the field, such as those from drill cores.[37] Stratigraphers also analyze data from geophysical surveys that show the locations of stratigraphic units in the subsurface.[38] Geophysical data and well logs can be combined to produce a better view of the subsurface, and stratigraphers often use computer programs to do this in three dimensions.[39] Stratigraphers can then use these data to reconstruct ancient processes occurring on the surface of the Earth,[40] interpret past environments, and locate areas for water, coal, and hydrocarbon extraction.
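
As a toy version of this gridding step, the Python sketch below estimates the depth to a single horizon between wells by inverse-distance weighting of hypothetical well tops; real workflows use dedicated interpolation and modeling software and far more data.

```python
import numpy as np

# Hypothetical well tops: (x, y) position in metres and the depth (m) to
# the top of one stratigraphic unit in each well. Values are invented.
wells = np.array([[  0.0,   0.0, 1510.0],
                  [800.0, 120.0, 1475.0],
                  [300.0, 900.0, 1562.0],
                  [950.0, 870.0, 1530.0]])

def idw_depth(x: float, y: float, power: float = 2.0) -> float:
    """Inverse-distance-weighted estimate of horizon depth at (x, y)."""
    d = np.hypot(wells[:, 0] - x, wells[:, 1] - y)
    if d.min() < 1e-9:                 # exactly on a well: return its pick
        return float(wells[d.argmin(), 2])
    w = 1.0 / d**power
    return float(np.sum(w * wells[:, 2]) / np.sum(w))

print(f"estimated depth at (500, 500): {idw_depth(500.0, 500.0):.1f} m")
```
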
In the laboratory, biostratigraphers analyze rock samples from outcrop and drill cores for the fossils found in them.[37] These fossils help scientists to date the core and to understand the depositional environment in which the rock units formed. Geochronologists precisely date rocks within the stratigraphic section in order to provide better absolute bounds on the timing and rates of deposition.[41] Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores.[37] Other scientists perform stable isotope studies on the rocks to gain information about past climate.[37]
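
Geochronology rests on the radioactive decay equation: with no initial daughter isotope and a closed system, the age is t = ln(1 + D/P) / lambda, where D/P is the measured daughter-to-parent ratio and lambda is ln 2 divided by the half-life. A minimal worked example in Python, using the approximate U-238 to Pb-206 half-life:

```python
import math

def radiometric_age(daughter_parent_ratio: float, half_life_yr: float) -> float:
    """Age from D = P * (exp(lambda * t) - 1), solved for t, assuming a
    closed system with no initial daughter isotope."""
    lam = math.log(2.0) / half_life_yr
    return math.log(1.0 + daughter_parent_ratio) / lam

# A measured Pb-206/U-238 ratio of 0.5 with a ~4.47 Gyr half-life:
print(f"{radiometric_age(0.5, 4.47e9):.2e} yr")  # ~2.6 billion years
```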

Planetary geology


Surface of Mars as photographed by the Viking 2 lander on December 9, 1977.

With the advent of space exploration in the twentieth century, geologists have begun to look at other planetary bodies in the same ways that have been developed to study the Earth. This new field of study is called planetary geology (sometimes known as astrogeology) and relies on known geologic principles to study other bodies of the solar system.

Although the Greek-language-origin prefix geo refers to Earth, "geology" is often used in conjunction with the names of other planetary bodies when describing their composition and internal processes: examples are "the geology of Mars" and "Lunar geology". Specialised terms such as selenology (studies of the Moon), areology (of Mars), etc., are also in use.

Although planetary geologists are interested in studying all aspects of other planets, a significant focus is to search for evidence of past or present life on other worlds. This has led to many missions whose primary or ancillary purpose is to examine planetary bodies for evidence of life. One of these is the Phoenix lander, which analyzed Martian polar soil for water, chemical, and mineralogical constituents related to biological processes.

Applied geology

Economic geology

Economic geologists help locate and manage the Earth's natural resources, such as petroleum and coal, as well as mineral resources, which include metals such as iron, copper, and uranium.

Mining geology

Mining geology consists of the extraction of mineral resources from the Earth. Some resources of economic interest include gemstones, metals, and many minerals such as asbestos, perlite, mica, phosphates, zeolites, clay, pumice, quartz, and silica, as well as elements such as sulfur, chlorine, and helium.

Petroleum geology


Mud log in process, a common way to study the lithology when drilling oil wells.

Petroleum geologists study the locations within the subsurface of the Earth that can contain extractable hydrocarbons, especially petroleum and natural gas. Because many of these reservoirs are found in sedimentary basins,[42] they study the formation of these basins, as well as their sedimentary and tectonic evolution and the present-day positions of the rock units.
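
A standard first-pass calculation in this field is the volumetric estimate of oil initially in place: gross rock volume multiplied by the net-to-gross ratio, porosity, and oil saturation, then divided by the formation volume factor. The sketch below uses invented reservoir numbers purely for illustration.

```python
# Volumetric stock-tank oil originally in place (STOOIP), in barrels.
# All reservoir properties below are illustrative, not real data.

def stooip_bbl(area_m2, thickness_m, net_to_gross, porosity, water_sat, bo):
    """Rock volume * net/gross * porosity * (1 - Sw), divided by the
    formation volume factor Bo, converted from m^3 to barrels."""
    stock_tank_m3 = (area_m2 * thickness_m * net_to_gross
                     * porosity * (1.0 - water_sat)) / bo
    return stock_tank_m3 * 6.2898      # 1 m^3 = 6.2898 barrels

# 10 km^2 area, 20 m thick, 70% net, 20% porosity, 30% water, Bo = 1.2:
print(f"{stooip_bbl(10e6, 20.0, 0.7, 0.20, 0.30, 1.2):.2e} bbl")  # ~1e8 bbl
```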

Engineering geology

Engineering geology is the application of geologic principles to engineering practice for the purpose of assuring that the geologic factors affecting the location, design, construction, operation, and maintenance of engineering works are properly addressed.
In the field of civil engineering, geological principles and analyses are used in order to ascertain the mechanical principles of the material on which structures are built. This allows tunnels to be built without collapsing, bridges and skyscrapers to be built with sturdy foundations, and buildings to be built that will not settle in clay and mud.[43]
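
One classical calculation at this boundary between geology and engineering is the infinite-slope stability model, which compares the shear strength available on a potential failure plane with the shear stress imposed by gravity; a factor of safety below 1 predicts failure. A minimal sketch with illustrative soil parameters:

```python
import math

def infinite_slope_fs(slope_deg, depth_m, unit_weight_kn_m3, cohesion_kpa,
                      friction_angle_deg, pore_pressure_kpa=0.0):
    """Factor of safety for the classical infinite-slope model:
    FS = [c' + (gamma*z*cos^2(b) - u) * tan(phi')] / [gamma*z*sin(b)*cos(b)]."""
    b = math.radians(slope_deg)
    phi = math.radians(friction_angle_deg)
    normal = unit_weight_kn_m3 * depth_m * math.cos(b) ** 2 - pore_pressure_kpa
    shear = unit_weight_kn_m3 * depth_m * math.sin(b) * math.cos(b)
    return (cohesion_kpa + normal * math.tan(phi)) / shear

# 30 degree slope, slide plane 3 m deep, gamma = 19 kN/m^3,
# c' = 5 kPa, phi' = 32 degrees, dry conditions:
print(round(infinite_slope_fs(30.0, 3.0, 19.0, 5.0, 32.0), 2))  # ~1.28
```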

Hydrology and environmental issues

Geology and geologic principles can be applied to various environmental problems such as stream restoration, the restoration of brownfields, and the understanding of the interaction between natural habitat and the geologic environment. Groundwater hydrology, or hydrogeology, is used to locate groundwater,[44] which can often provide a ready supply of uncontaminated water and is especially important in arid regions,[45] and to monitor the spread of contaminants in groundwater wells.[44][46]
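
The basic quantitative tool here is Darcy's law, which relates groundwater flux to hydraulic conductivity and the hydraulic-head gradient; dividing the flux by effective porosity gives the average speed at which water, and any dissolved contaminant, actually moves. A minimal sketch with typical-order values:

```python
# Darcy's law: specific discharge q = K * (dh/dl), here taken as a
# magnitude down the head gradient. Parameter values are illustrative.

def darcy_flux(k_m_per_s: float, head_drop_m: float, distance_m: float) -> float:
    return k_m_per_s * head_drop_m / distance_m

K = 1e-4                       # hydraulic conductivity of clean sand, m/s
q = darcy_flux(K, 2.0, 500.0)  # 2 m of head loss over 500 m
v = q / 0.25                   # average linear velocity, effective porosity 0.25
print(f"plume migrates ~{v * 3.15e7:.0f} m per year")  # ~50 m/yr
```
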
Geologists also obtain data through stratigraphy, boreholes, core samples, and ice cores. Ice cores[47] and sediment cores[48] are used for paleoclimate reconstructions, which tell geologists about past and present temperature, precipitation, and sea level across the globe. These datasets are our primary source of information on global climate change outside of instrumental data.[49]
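
Stable-isotope results like these are reported in delta notation, the per-mil deviation of a sample's isotope ratio from an agreed standard. A minimal sketch for oxygen isotopes (the sample ratio is invented; strongly negative values are characteristic of glacial ice):

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in per mil.

def delta_permil(r_sample: float, r_standard: float) -> float:
    return (r_sample / r_standard - 1.0) * 1000.0

VSMOW_18O = 2005.2e-6  # 18O/16O ratio of the VSMOW water standard
print(round(delta_permil(1935.0e-6, VSMOW_18O), 1))  # ~ -35.0 per mil
```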

Natural hazards

Geologists and geophysicists study natural hazards in order to enact safe building codes and warning systems that are used to prevent loss of property and life.[50] Examples of important natural hazards that are pertinent to geology (as opposed to those that are mainly or only pertinent to meteorology) include earthquakes, volcanic eruptions, landslides and rockfalls, tsunamis, and land subsidence.
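
For earthquakes, a standard starting point for such hazard estimates is the Gutenberg-Richter relation, log10(N) = a - b*M, where N is the annual number of events at or above magnitude M. A minimal sketch; the a and b values are illustrative, not from a real catalog:

```python
# Gutenberg-Richter recurrence: log10(N) = a - b * M.

def annual_rate(magnitude: float, a: float = 4.0, b: float = 1.0) -> float:
    """Expected number of events per year with magnitude >= `magnitude`."""
    return 10.0 ** (a - b * magnitude)

for m in (5.0, 6.0, 7.0):
    n = annual_rate(m)
    print(f"M >= {m}: {n:.3f} events/yr (recurrence ~{1 / n:.0f} yr)")
```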

Rockfall in the Grand Canyon

History of geology

William Smith's geologic map of England, Wales, and southern Scotland. Completed in 1815, it was the second national-scale geologic map, and by far the most accurate of its time.[51]

The study of the physical material of the Earth dates back at least to ancient Greece when Theophrastus (372–287 BCE) wrote the work Peri Lithon (On Stones). During the Roman period, Pliny the Elder wrote in detail of the many minerals and metals then in practical use – even correctly noting the origin of amber.

Some modern scholars, such as Fielding H. Garrison, are of the opinion that modern geology began in the medieval Islamic world.[52] Abu al-Rayhan al-Biruni (973–1048 CE) was one of the earliest Muslim geologists, whose works included the earliest writings on the geology of India, hypothesizing that the Indian subcontinent was once a sea.[53] The Islamic scholar Ibn Sina (Avicenna, 981–1037) proposed detailed explanations for the formation of mountains, the origin of earthquakes, and other topics central to modern geology, which provided an essential foundation for the later development of the science.[54][55] In China, the polymath Shen Kuo (1031–1095) formulated a hypothesis for the process of land formation: based on his observation of fossil animal shells in a geological stratum in a mountain hundreds of miles from the ocean, he inferred that the land was formed by erosion of the mountains and by deposition of silt.[56]

Nicolas Steno (1638–1686) is credited with the law of superposition, the principle of original horizontality, and the principle of lateral continuity: three defining principles of stratigraphy.

The word geology was first used by Ulisse Aldrovandi in 1603,[57] then by Jean-André Deluc in 1778, and was introduced as a fixed term by Horace-Bénédict de Saussure in 1779. The word is derived from the Greek γῆ, gê, meaning "earth", and λόγος, logos, meaning "speech".[58] According to another source, however, the word "geology" comes from a Norwegian, Mikkel Pedersøn Escholt (1600–1699), who was a priest and scholar. Escholt first used the term in his book, Geologica Norvegica (1657).[59]

William Smith (1769–1839) drew some of the first geological maps and began the process of ordering rock strata (layers) by examining the fossils contained in them.[51]

James Hutton is often viewed as the first modern geologist.[60] In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. In his paper, he explained his theory that the Earth must be much older than had previously been supposed in order to allow enough time for mountains to be eroded and for sediments to form new rocks at the bottom of the sea, which in turn were raised up to become dry land. Hutton published a two-volume version of his ideas in 1795 (Vol. 1, Vol. 2).

Scotsman James Hutton, father of modern geology

Followers of Hutton were known as Plutonists because they believed that some rocks were formed by vulcanism, which is the deposition of lava from volcanoes, as opposed to the Neptunists, led by Abraham Werner, who believed that all rocks had settled out of a large ocean whose level gradually dropped over time.

The first geological map of the U.S. was produced in 1809 by William Maclure.[61][62] In 1807, Maclure commenced the self-imposed task of making a geological survey of the United States. Almost every state in the Union was traversed and mapped by him, the Allegheny Mountains being crossed and recrossed some 50 times.[63] The results of his unaided labours were submitted to the American Philosophical Society in a memoir entitled Observations on the Geology of the United States explanatory of a Geological Map, and published in the Society's Transactions, together with the nation's first geological map.[64] This antedates William Smith's geological map of England by six years, although it was constructed using a different classification of rocks.

Sir Charles Lyell first published his famous book, Principles of Geology,[65] in 1830. This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth's history and are still occurring today. In contrast, catastrophism is the theory that Earth's features formed in single, catastrophic events and remained unchanged thereafter. Though Hutton believed in uniformitarianism, the idea was not widely accepted at the time.

Much of 19th-century geology revolved around the question of the Earth's exact age. Estimates varied from a few hundred thousand to billions of years.[66] By the early 20th century, radiometric dating allowed the Earth's age to be estimated at two billion years. The awareness of this vast amount of time opened the door to new theories about the processes that shaped the planet.

Some of the most significant advances in 20th-century geology have been the development of the theory of plate tectonics in the 1960s and the refinement of estimates of the planet's age. Plate tectonics theory arose from two separate geological observations: seafloor spreading and continental drift. The theory revolutionized the Earth sciences. Today the Earth is known to be approximately 4.5 billion years old.[67]

Romance (love)

From Wikipedia, the free encyclopedia