
Thursday, August 14, 2014

Discover Interview: Roger Penrose Says Physics Is Wrong, From String Theory to Quantum Mechanics

One of the greatest thinkers in physics says the human brain—and the universe itself—must function according to some theory we haven't yet discovered.

By Susan Kruglinski and Oliver Chanarin | Tuesday, October 06, 2009, in Discover Magazine
         

Roger Penrose could easily be excused for having a big ego. A theorist whose name will be forever linked with such giants as Hawking and Einstein, Penrose has made fundamental contributions to physics, mathematics, and geometry. He reinterpreted general relativity to prove that black holes can form from dying stars. He invented twistor theory—a novel way to look at the structure of space-time—and so led us to a deeper understanding of the nature of gravity. He discovered a remarkable family of geometric forms that came to be known as Penrose tiles. He even moonlighted as a brain researcher, coming up with a provocative theory that consciousness arises from quantum-mechanical processes. And he wrote a series of incredibly readable, best-selling science books to boot.
And yet the 78-year-old Penrose—now an emeritus professor at the Mathematical Institute, University of Oxford—seems to live the humble life of a researcher just getting started in his career. His small office is cramped with the belongings of the six other professors with whom he shares it, and at the end of the day you might find him rushing off to pick up his 9-year-old son from school. With the curiosity of a man still trying to make a name for himself, he cranks away on fundamental, wide-ranging questions: How did the universe begin? Are there higher dimensions of space and time? Does the current front-running theory in theoretical physics, string theory, actually make sense?
Because he has lived a lifetime of complicated calculations, though, Penrose has quite a bit more perspective than the average starting scientist. To get to the bottom of it all, he insists, physicists must force themselves to grapple with the greatest riddle of them all: the relationship between the rules that govern fundamental particles and the rules that govern the big things—like us—that those particles make up. In his powwow with DISCOVER contributing editor Susan Kruglinski, Penrose did not flinch from questioning the central tenets of modern physics, including string theory and quantum mechanics. Physicists will never come to grips with the grand theories of the universe, Penrose holds, until they see past the blinding distractions of today’s half-baked theories to the deepest layer of the reality in which we live.

You come from a colorful family of overachievers, don’t you?

My older brother is a distinguished theoretical physicist, a fellow of the Royal Society. My younger brother ended up the British chess champion 10 times, a record. My father came from a Quaker family. His father was a professional artist who did portraits—very traditional, a lot of religious subjects. The family was very strict. I don’t think we were even allowed to read novels, certainly not on Sundays. My father was one of four brothers, all of whom were very good artists. One of them became well known in the art world, Sir Roland. He was cofounder of the Institute of Contemporary Arts in London. My father himself was a human geneticist who was recognized for demonstrating that older mothers are more likely to have children with Down syndrome, but he had lots of scientific interests.

How did your father influence your thinking?

The important thing about my father was that there wasn’t any boundary between his work and what he did for fun. That rubbed off on me. He would make puzzles and toys for his children and grandchildren. He used to have a little shed out back where he cut things from wood with his little pedal saw. I remember he once made a slide rule with about 12 different slides, with various characters that we could combine in complicated ways. Later in his life he spent a lot of time making wooden models that reproduced themselves—what people now refer to as artificial life. These were simple devices that, when linked together, would cause other bits to link together in the same way. He sat in his woodshed and cut these things out of wood in great, huge numbers.

So I assume your father helped spark your discovery of Penrose tiles, a small set of shapes that fit together to cover a flat surface in a pattern with pentagonal symmetry that never exactly repeats.

It was silly in a way. I remember asking him—I was around 9 years old—about whether you could fit regular hexagons together and make it round like a sphere. And he said, “No, no, you can’t do that, but you can do it with pentagons,” which was a surprise to me. He showed me how to make polyhedra, and so I got started on that.

Are Penrose tiles useful or just beautiful?

My interest in the tiles has to do with the idea of a universe controlled by very simple forces, even though we see complications all over the place. The tilings follow conventional rules to make complicated patterns. It was an attempt to see how the complicated could be satisfied by very simple rules that reflect what we see in the world.

The artist M. C. Escher was influenced by your geometric inventions. What was the story there?

In my second year as a graduate student at Cambridge, I attended the International Congress of Mathematicians in Amsterdam. I remember seeing one of the lecturers there I knew quite well, and he had this catalog. On the front of it was the Escher picture Day and Night, the one with birds going in opposite directions. The scenery is nighttime on one side and daytime on the other. I remember being intrigued by this, and I asked him where he got it. He said, “Oh, well, there’s an exhibition you might be interested in of some artist called Escher.” So I went and was very taken by these very weird and wonderful things that I’d never seen anything like. I decided to try and draw some impossible scenes myself and came up with this thing that’s referred to as a tri-bar. It’s a triangle that looks like a three-dimensional object, but actually it’s impossible for it to be three-dimensional. I showed it to my father and he worked out some impossible buildings and things. Then we published an article in the British Journal of Psychology on this stuff and acknowledged Escher.

Escher saw the article and was inspired by it?

He used two things from the article. One was the tri-bar, used in his lithograph called Waterfall. Another was the impossible staircase, which my father had worked on and designed. Escher used it in Ascending and Descending, with monks going round and round the stairs. I met Escher once, and I gave him some tiles that will make a repeating pattern, but not until you’ve got 12 of them fitted together. He did this, and then he wrote to me and asked me how it was done—what was it based on? So I showed him a kind of bird shape that did this, and he incorporated it into what I believe is the last picture he ever produced, called Ghosts.

Is it true that you were bad at math as a kid?

I was unbelievably slow. I lived in Canada for a while, for about six years, during the war. When I was 8, sitting in class, we had to do this mental arithmetic very fast, or what seemed to me very fast. I always got lost. And the teacher, who didn’t like me very much, moved me down a class. There was one rather insightful teacher who decided, after I’d done so badly on these tests, that he would have timeless tests. You could just take as long as you’d like. We all had the same test. I was allowed to take the entire next period to continue, which was a play period. Everyone was always out and enjoying themselves, and I was struggling away to do these tests. And even then sometimes it would stretch into the period beyond that. So I was at least twice as slow as anybody else. Eventually I would do very well. You see, if I could do it that way, I would get very high marks.
 
You have called the real-world implications of quantum physics nonsensical. What is your objection?

Quantum mechanics is an incredible theory that explains all sorts of things that couldn’t be explained before, starting with the stability of atoms. But when you accept the weirdness of quantum mechanics [in the macro world], you have to give up the idea of space-time as we know it from Einstein. The greatest weirdness here is that it doesn’t make sense. If you follow the rules, you come up with something that just isn’t right.

In quantum mechanics an object can exist in many states at once, which sounds crazy. The quantum description of the world seems completely contrary to the world as we experience it.

It doesn’t make any sense, and there is a simple reason. You see, the mathematics of quantum mechanics has two parts to it. One is the evolution of a quantum system, which is described extremely precisely and accurately by the Schrödinger equation. That equation tells you this: If you know what the state of the system is now, you can calculate what it will be doing 10 minutes from now. However, there is the second part of quantum mechanics—the thing that happens when you want to make a measurement. Instead of getting a single answer, you use the equation to work out the probabilities of certain outcomes. The results don’t say, “This is what the world is doing.” Instead, they just describe the probability of its doing any one thing. The equation should describe the world in a completely deterministic way, but it doesn’t.
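
In symbols, the two parts Penrose contrasts are usually written as follows (standard textbook notation, added here for reference rather than taken from the interview): the Schrödinger equation, which evolves the state deterministically, and the Born rule, which only assigns probabilities to measurement outcomes.

```latex
% Part 1: the quantum state evolves deterministically (Schrödinger equation)
i\hbar \,\frac{\partial}{\partial t}\,\lvert \psi(t) \rangle = \hat{H}\,\lvert \psi(t) \rangle

% Part 2: a measurement of an observable yields outcome a only with a
% probability given by the Born rule, not a definite answer
P(a) = \bigl\lvert \langle a \mid \psi \rangle \bigr\rvert^{2}
```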

Erwin Schrödinger, who created that equation, was considered a genius. Surely he appreciated that conflict.

Schrödinger was as aware of this as anybody. He talks about his hypothetical cat and says, more or less, “Okay, if you believe what my equation says, you must believe that this cat is dead and alive at the same time.” He says, “That’s obviously nonsense, because it’s not like that. Therefore, my equation can’t be right for a cat. So there must be some other factor involved.”

So Schrödinger himself never believed that the cat analogy reflected the nature of reality?

Oh yes, I think he was pointing this out. I mean, look at three of the biggest figures in quantum mechanics, Schrödinger, Einstein, and Paul Dirac. They were all quantum skeptics in a sense. Dirac is the one whom people find most surprising, because he set up the whole foundation, the general framework of quantum mechanics. People think of him as this hard-liner, but he was very cautious in what he said. When he was asked, “What’s the answer to the measurement problem?” his response was, “Quantum mechanics is a provisional theory. Why should I look for an answer in quantum mechanics?” He didn’t believe that it was true. But he didn’t say this out loud much.

Yet the analogy of Schrödinger’s cat is always presented as a strange reality that we have to accept. Doesn’t the concept drive many of today’s ideas about theoretical physics?

That’s right. People don’t want to change the Schrödinger equation, leading them to what’s called the “many worlds” interpretation of quantum mechanics.

That interpretation says that all probabilities are playing out somewhere in parallel universes?

It says OK, the cat is somehow alive and dead at the same time. To look at that cat, you must become a superposition [two states existing at the same time] of you seeing the live cat and you seeing the dead cat. Of course, we don’t seem to experience that, so the physicists have to say, well, somehow your consciousness takes one route or the other route without your knowing it. You’re led to a completely crazy point of view. You’re led into this “many worlds” stuff, which has no relationship to what we actually perceive.

The idea of parallel universes—many worlds—is a very human-centered idea, as if everything has to be understood from the perspective of what we can detect with our five senses.

The trouble is, what can you do with it? Nothing. You want a physical theory that describes the world that we see around us. That’s what physics has always been: Explain what the world that we see does, and why or how it does it. Many worlds quantum mechanics doesn’t do that. Either you accept it and try to make sense of it, which is what a lot of people do, or, like me, you say no—that’s beyond the limits of what quantum mechanics can tell us. Which is, surprisingly, a very uncommon position to take. My own view is that quantum mechanics is not exactly right, and I think there’s a lot of evidence for that. It’s just not direct experimental evidence within the scope of current experiments.

In general, the ideas in theoretical physics seem increasingly fantastical. Take string theory. All that talk about 11 dimensions or our universe’s existing on a giant membrane seems surreal.

You’re absolutely right. And in a certain sense, I blame quantum mechanics, because people say, “Well, quantum mechanics is so nonintuitive; if you believe that, you can believe anything that’s nonintuitive.” But, you see, quantum mechanics has a lot of experimental support, so you’ve got to go along with a lot of it. Whereas string theory has no experimental support.

I understand you are setting out this critique of quantum mechanics in your new book.

The book is called Fashion, Faith and Fantasy in the New Physics of the Universe. Each of those words stands for a major theoretical physics idea. The fashion is string theory; the fantasy has to do with various cosmological schemes, mainly inflationary cosmology [which suggests that the universe inflated exponentially within a small fraction of a second after the Big Bang]. Big fish, those things are. It’s almost sacrilegious to attack them. And the other one, even more sacrilegious, is quantum mechanics at all levels—so that’s the faith. People somehow got the view that you really can’t question it.

A few years ago you suggested that gravity is what separates the classical world from the quantum one. Are there enough people out there putting quantum mechanics to this kind of test?

No, although it’s sort of encouraging that there are people working on it at all. It used to be thought of as a sort of crackpot, fringe activity that people could do when they were old and retired. Well, I am old and retired! But it’s not regarded as a central, mainstream activity, which is a shame.

After Newton, and again after Einstein, the way people thought about the world shifted. When the puzzle of quantum mechanics is solved, will there be another revolution in thinking?

It’s hard to make predictions. Ernest Rutherford said his model of the atom [which led to nuclear physics and the atomic bomb] would never be of any use. But yes, I would be pretty sure that it will have a huge influence. There are things like how quantum mechanics could be used in biology. It will eventually make a huge difference, probably in all sorts of unimaginable ways.

In your book The Emperor’s New Mind, you posited that consciousness emerges from quantum physical actions within the cells of the brain. Two decades later, do you stand by that?

In my view the conscious brain does not act according to classical physics. It doesn’t even act according to conventional quantum mechanics. It acts according to a theory we don’t yet have. This is being a bit big-headed, but I think it’s a little bit like William Harvey’s discovery of the circulation of blood. He worked out that it had to circulate, but the veins and arteries just peter out, so how could the blood get through from one to the other? And he said, “Well, it must be tiny little tubes there, and we can’t see them, but they must be there.” Nobody believed it for some time. So I’m still hoping to find something like that—some structure that preserves coherence, because I believe it ought to be there.

When physicists finally understand the core of quantum physics, what do you think the theory will look like?

I think it will be beautiful.

Artificial intelligence

From Wikipedia, the free encyclopedia
 
Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also an academic field of study. Major AI researchers and textbooks define this field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]
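
The "intelligent agent" definition above maps naturally onto code. Here is a minimal sketch in Python; the class names and the toy vacuum-world percept are illustrative inventions, not part of any cited definition.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent perceives its environment and chooses actions
    intended to maximize its chances of success."""

    @abstractmethod
    def act(self, percept):
        """Map the current percept to an action."""
        ...

class ReflexVacuumAgent(Agent):
    """A trivial agent; the percept is a (location, is_dirty) pair."""
    def act(self, percept):
        location, is_dirty = percept
        if is_dirty:
            return "suck"
        return "move_right" if location == "A" else "move_left"

agent = ReflexVacuumAgent()
print(agent.act(("A", True)))   # suck
```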

AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") is still among the field's long term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology.

The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—"can be so precisely described that a machine can be made to simulate it."[8] This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of tremendous optimism[10] but has also suffered stunning setbacks.[11] Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.[12]

History

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari.[15] It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus.[16] By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[17] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[9] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.[18][19] This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.[20]

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[21] The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades.[22] They and their students wrote programs that were, to most people, simply astonishing:[23] computers were solving word problems in algebra, proving logical theorems and speaking English.[24] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[25] and laboratories had been established around the world.[26] AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[27]

They had failed to recognize the difficulty of some of the problems they faced.[28] In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years would later be called an "AI winter",[29] a period when funding for AI projects was hard to find.

In the early 1980s, AI research was revived by the commercial success of expert systems,[30] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field.[31] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer lasting AI winter began.[32]

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.[12] The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[33]

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[34] In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[35] Two years later, a team from CMU won the DARPA Urban Challenge when their vehicle autonomously navigated 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[36] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[37] The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research[38] as does the iPhone's Siri.

Goals

The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.[6]

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[39] By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[40]

For difficult problems, most of these algorithms can require enormous computational resources – most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.[41]
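
To see why the combinatorial explosion bites so quickly, note that a search tree with branching factor b and depth d has on the order of b^d nodes. A quick back-of-envelope check (the numbers are illustrative; 35 is a rough figure often quoted for the legal moves in a chess position):

```python
# Search-tree size b**d for branching factor b and depth d.
for b, d in [(10, 5), (10, 10), (35, 10)]:
    print(f"b={b}, d={d}: about {b**d:.2e} nodes")
# b=10, d=5 : about 1.00e+05 nodes
# b=10, d=10: about 1.00e+10 nodes
# b=35, d=10: about 2.76e+15 nodes
```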

Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model.[42] AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.

Knowledge representation[43] and knowledge engineering[44] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[45] situations, events, states and time;[46] causes and effects;[47] knowledge about knowledge (what we know about what other people know);[48] and many other, less well researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge.[49]

Among the most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[50] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[51]
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering — they must be built, by hand, one complicated concept at a time.[52] A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.[citation needed]
The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed"[53] or an art critic can take one look at a statue and instantly realize that it is a fake.[54] These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically.[55] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.[55]

Planning

A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.

Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.[57]

In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be.[58] However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.[59]
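
A minimal sketch of what the classical-planning assumption buys: if the world is fully known and deterministic, planning reduces to a graph search from the start state to a goal. Everything below (the corridor world, the function names) is an illustrative toy, not a standard planner.

```python
from collections import deque

def plan(start, is_goal, successors):
    """Breadth-first search over world states.
    successors(state) yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no plan exists

# Toy world: move along a corridor of rooms 0..3 to reach room 3.
def moves(s):
    out = []
    if s > 0: out.append(("left", s - 1))
    if s < 3: out.append(("right", s + 1))
    return out

print(plan(0, lambda s: s == 3, moves))  # ['right', 'right', 'right']
```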

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]

Learning

Machine learning is the study of computer algorithms that improve automatically through experience[61][62] and has been central to AI research since the field's inception.[63]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]
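
As a concrete instance of the supervised-learning regression just described, here is an ordinary least-squares line fit written out from the closed-form formulas, with no ML library assumed; the data points are invented.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Noisy observations of (roughly) y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = fit_line(xs, ys)
print(f"y = {a:.2f}x + {b:.2f}")   # roughly y = 2x + 1
```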

Within developmental robotics, developmental learning approaches were elaborated for lifelong cumulative acquisition of repertoires of novel skills by a robot, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.[66][67][68][69]

Natural language processing (communication)

A parse tree represents the syntactic structure of a sentence according to some formal grammar.

Natural language processing[70] gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.[71]

A common method of processing and extracting meaning from natural language is through semantic indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user's input much more efficient.
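
As an illustrative sketch of the indexing idea (a plain inverted index with TF-IDF scoring, simpler than full semantic indexing but showing the mechanics; the documents are invented):

```python
import math
from collections import Counter

docs = {
    "d1": "the cat sat on the mat",
    "d2": "the dog chased the cat",
    "d3": "dogs and cats make good pets",
}

tokenized = {d: text.split() for d, text in docs.items()}
# Document frequency: in how many documents does each word appear?
df = Counter(w for words in tokenized.values() for w in set(words))
N = len(docs)

def tfidf_score(query, doc):
    """Sum of term-frequency * inverse-document-frequency over query words."""
    words = tokenized[doc]
    tf = Counter(words)
    return sum((tf[w] / len(words)) * math.log(N / df[w])
               for w in query.split() if w in df)

best = max(docs, key=lambda d: tfidf_score("cat mat", d))
print(best)  # d1
```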

Perception

Machine perception[72] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others more exotic) to deduce aspects of the world. Computer vision[73] is the ability to analyze visual input. A few selected subproblems are speech recognition,[74] facial recognition and object recognition.[75]

Motion and manipulation

The field of robotics[76] is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation[77] and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another, which may involve compliant motion, where the robot moves while maintaining physical contact with an object).[78][79]

Long-term goals

Among the long-term goals in the research pertaining to artificial intelligence are: (1) Social intelligence, (2) Creativity, and (3) General intelligence.

Social intelligence

Kismet, a robot with rudimentary social skills[80]

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects.[81][82] It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science.[83] While the origins of the field may be traced as far back as to early philosophical inquiries into emotion,[84] the more modern branch of computer science originated with Rosalind Picard's 1995 paper[85] on affective computing.[86][87] A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions.
Emotion and social skills[88] play two roles for an intelligent agent. First, it must be able to predict the actions of others by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Second, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions—even if it does not actually experience them itself—in order to appear sensitive to the emotional dynamics of human interaction.

Creativity

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are Artificial intuition and Artificial thinking.

General intelligence

Many researchers think that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[7] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[89][90]

Many of the problems above may require general intelligence to be considered solved. For example, even a straightforward, specific task like machine translation requires that the machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). A problem like machine translation is considered "AI-complete". In order to solve this particular problem, you must solve all the problems.[91]

Approaches

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[92] A few of the most long standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[93] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[94] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?[95] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[96] a term which has since been adopted by some non-GOFAI researchers.[97][98]

Cybernetics and brain simulation

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast.
Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[20] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic

 
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".[99] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[100] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.
Cognitive simulation
Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[101][102]
Logic-based
Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[93] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[103] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[104]
"Anti-logic" or "scruffy"
Researchers at MIT (such as Marvin Minsky and Seymour Papert)[105] found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[94] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[106]
Knowledge-based
When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[107] This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[30] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Sub-symbolic

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[95]
Bottom-up, embodied, situated, behavior-based or nouvelle AI
Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[108] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
Computational intelligence
Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s.[109] These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.[110]

Statistical

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."[33] Critics argue that these techniques are too focused on particular problems and have failed to address the long term goal of general intelligence.[111] There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.[112][113]

Integrating the approaches

Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms). The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works – some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields—such as decision theory and economics—that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.[2]
Agent architectures and cognitive architectures
Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.[114] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[115] Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.[116]

Tools

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Search and optimization

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[117] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[118] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[119] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[77] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[120] are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[121] Heuristics narrow the search for solutions to a smaller portion of the search space.[78]

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[122]
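
A minimal sketch of the hill-climbing procedure just described, on a toy one-dimensional landscape (all names and numbers are illustrative):

```python
import random

def hill_climb(score, neighbors, start, steps=1000):
    """Greedy local search: move to a better neighbor until stuck."""
    current = start
    for _ in range(steps):
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current          # local optimum reached
        current = best
    return current

# Toy landscape: maximize f(x) = -(x - 7)^2 over the integers.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(f, step, start=random.randint(-100, 100)))  # 7
```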

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[123] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[124]
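
A compact genetic-algorithm sketch of that loop: selection of the fittest, crossover, and mutation, applied to the classic "one-max" toy problem of evolving a bit string of all ones. The parameters are illustrative, not tuned.

```python
import random

def genetic_search(fitness, length=20, pop_size=30, generations=100):
    """Tiny genetic algorithm over fixed-length bit strings."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]             # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)        # crossover point
            child = a[:cut] + b[cut:]
            i = random.randrange(length)             # mutation (prob. 0.1)
            child[i] ^= random.random() < 0.1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Fitness = number of 1-bits; the optimum is the all-ones string.
best = genetic_search(fitness=sum)
print(sum(best), best)
```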

Logic

Logic[125] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[126] and inductive logic programming is a method for learning.[127]

Several different forms of logic are used in AI research. Propositional or sentential logic[128] is the logic of statements which can be true or false. First-order logic[129] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[130] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[131] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
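
Zadeh's classical fuzzy connectives are simple enough to state directly; this small sketch works with truth values in [0, 1] in place of True/False (the example propositions are invented):

```python
# Zadeh's classical fuzzy connectives over truth values in [0, 1].
f_not = lambda a: 1.0 - a
f_and = lambda a, b: min(a, b)
f_or  = lambda a, b: max(a, b)

tall, heavy = 0.8, 0.4           # "fairly tall", "not very heavy"
print(f_and(tall, heavy))        # 0.4 -> "tall AND heavy" is weakly true
print(f_or(tall, f_not(heavy)))  # 0.8
```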

Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[45] situation calculus, event calculus and fluent calculus (for representing events and time);[46] causal calculus;[47] belief calculus; and modal logics.[48]
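
One crude way to see what default reasoning means operationally: a rule holds unless a more specific exception overrides it, so adding knowledge can retract an earlier conclusion. The toy encoding below is purely illustrative and does not implement any particular non-monotonic logic.

```python
# Default: birds fly. More specific exceptions override the default.
defaults = {"bird": {"flies": True}}
exceptions = {"penguin": {"flies": False}}

def flies(kinds):
    """kinds: most-specific-last list, e.g. ['bird', 'penguin']."""
    verdict = None
    for k in kinds:
        facts = {**defaults.get(k, {}), **exceptions.get(k, {})}
        if "flies" in facts:
            verdict = facts["flies"]   # later (more specific) kinds win
    return verdict

print(flies(["bird"]))             # True  (by default)
print(flies(["bird", "penguin"]))  # False (the exception overrides)
```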

Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[132]
Bayesian networks[133] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[134] learning (using the expectation-maximization algorithm),[135] planning (using decision networks)[136] and perception (using dynamic Bayesian networks).[137] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[137]
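
Bayesian networks chain one basic computation, Bayes' rule, across many variables. A single worked instance with invented numbers shows the rule itself:

```python
# Bayes' rule: P(disease | positive test). Illustrative numbers only.
p_disease = 0.01               # prior probability of the disease
p_pos_given_disease = 0.99     # test sensitivity
p_pos_given_healthy = 0.05     # false-positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"{p_disease_given_pos:.3f}")  # ~0.167: a positive is still probably false
```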

A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[138] and information value theory.[57] These tools include models such as Markov decision processes,[139] dynamic decision networks,[137] game theory and mechanism design.[140]
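
A tiny value-iteration sketch for a Markov decision process, one of the models listed above; the four-state corridor, the pinned goal value, and the discount factor are all illustrative choices:

```python
# Value iteration on a toy 4-state corridor MDP (illustrative only).
# States 0..3; state 3 is an absorbing goal whose value is pinned to 1.
states, gamma = range(4), 0.9

def step(s, a):
    """Deterministic toy dynamics: move left or right along the corridor."""
    return max(0, min(3, s + (1 if a == "right" else -1)))

V = [0.0] * 4
for _ in range(100):  # repeat the Bellman backup until values converge
    V = [1.0 if s == 3 else
         gamma * max(V[step(s, a)] for a in ("left", "right"))
         for s in states]

print([round(v, 3) for v in V])  # [0.729, 0.81, 0.9, 1.0]
```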

Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[141]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[142] kernel methods such as the support vector machine,[143] k-nearest neighbor algorithm,[144] Gaussian mixture model,[145] naive Bayes classifier,[146] and decision tree.[147] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem is still more an art than a science.[148]
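
Of the classifiers listed, k-nearest neighbor is small enough to write out in full. A sketch with an invented two-cluster data set:

```python
from collections import Counter

def knn_classify(point, data, k=3):
    """data: list of (features, label) pairs; returns the majority label
    among the k nearest neighbors under squared Euclidean distance."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(data, key=lambda ex: dist(point, ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy data set: two clusters in the plane.
data = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
        ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_classify((1, 1), data))  # A
print(knn_classify((5, 4), data))  # B
```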

Neural networks

 
A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

The study of artificial neural networks[142] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the backpropagation algorithm.[149]

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[150]
Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982.[151] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning and competitive learning.[152]
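
Rosenblatt's perceptron, mentioned above, remains the simplest feedforward unit to show in code. This sketch trains the classical perceptron rule on the linearly separable OR function; the learning rate and epoch count are arbitrary illustrative choices.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in data:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y                      # classical perceptron rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical OR, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, t in data:
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", y, "(target", t, ")")
```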

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[153]

Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[154]

Languages

AI researchers have developed several specialized languages for AI research, including Lisp[155] and Prolog.[156]

Evaluating progress

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[157]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[158]

One classification for outcomes of an AI test is:[159]
  1. Optimal: it is not possible to perform better.
  2. Strong super-human: performs better than all humans.
  3. Super-human: performs better than most humans.
  4. Sub-human: performs worse than most humans.
For example, performance at draughts (i.e. checkers) is optimal,[160] performance at chess is super-human and nearing strong super-human (see computer chess: computers versus human) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of such tests began in the late 1990s, devising intelligence tests using notions from Kolmogorov complexity and data compression.[161] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test, then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.[162]

Applications

An automated online assistant providing customer service on a web page – one of many very primitive applications of artificial intelligence.

Artificial intelligence techniques are pervasive and are too numerous to list. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.[163] One area to which artificial intelligence has contributed greatly is intrusion detection.[164]

Competitions and prizes

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Platforms

A platform (or "computing platform") is defined as "some sort of hardware architecture or software framework (including application frameworks), that allows software to run." As Rodney Brooks[165] pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results; that is, there needs to be work on AI problems on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems, albeit PC-based but still an entire real-world system, to various robot platforms such as the widely available Roomba with open interface.[166]

Philosophy

Artificial intelligence, by claiming to be able to recreate the capabilities of the human mind, is both a challenge and an inspiration for philosophy. Are there limits to how intelligent machines can be? Is there an essential difference between human intelligence and artificial intelligence? Can a machine have a mind and consciousness? A few of the most influential answers to these questions are given below.[167]
Turing's "polite convention"
We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.[157]
The Dartmouth proposal
"Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.[168]
Newell and Simon's physical symbol system hypothesis
"A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligences consist of formal operations on symbols.[169] Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)[170][171]
Gödel's incompleteness theorem
A sufficiently powerful, consistent formal system (such as a computer program) cannot prove all true statements; a precise rendering of the theorem is sketched after this list.[172] Roger Penrose is among those who claim that Gödel's theorem limits what machines can do. (See The Emperor's New Mind.)[173]
Searle's strong AI hypothesis
"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[174] John Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.[175]
The artificial brain argument
The brain can be simulated. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original.[90]
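
For reference (see the Gödel entry above), a standard informal statement of the first incompleteness theorem, hedged as usual: for any consistent, effectively axiomatized formal system $T$ strong enough to encode arithmetic,

    \exists\, G_T :\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T,

yet $G_T$ is true in the standard model of arithmetic, so no such system proves all true arithmetic statements. The philosophical dispute is over whether human mathematicians are themselves subject to this limitation.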

Predictions and ethics

Many thinkers have speculated about the future of artificial intelligence technology and society. The existence of an artificial intelligence that rivals or exceeds human intelligence raises difficult ethical issues, and the potential power of the technology inspires both hopes and fears.

If research into Strong AI produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement. The new intelligence could thus increase exponentially and dramatically surpass humans.[176]

Hyper-intelligent software may not necessarily decide to support the continued existence of mankind, and would be extremely difficult to stop. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.

One proposal to deal with this is to ensure that the first generally intelligent AI is "Friendly AI", which would then be able to control subsequently developed AIs. Some question whether this kind of check could really remain in place.

Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[177] and others argue that specialized artificial intelligence applications, robotics and other forms of automation will ultimately result in significant unemployment as machines begin to match and exceed the capability of workers to perform most routine and repetitive jobs. Ford predicts that many knowledge-based occupations—and in particular entry level jobs—will be increasingly susceptible to automation via expert systems, machine learning[178] and other AI-enhanced applications. AI-based applications may also be used to amplify the capabilities of low-wage offshore workers, making it more feasible to outsource knowledge work.[179]

Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy, and that the use of AI technology in fields such as customer service or psychotherapy[180] was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum these points suggest that AI research devalues human life.[181]

Many futurists believe that artificial intelligence will ultimately transcend the limits of progress. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "singularity".[182]
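
The arithmetic behind such forecasts is easy to reproduce. The sketch below is a back-of-the-envelope illustration only: the brain estimate, the 2009 desktop figure, and the doubling period are all assumptions chosen for the example, not measurements, and the answer swings by decades as they vary.

    import math

    # All three figures are illustrative assumptions, not measurements.
    brain_ops_per_sec = 1e16     # assumed brain-equivalent compute
    pc_ops_per_sec_2009 = 1e11   # assumed 2009 desktop compute (~100 GFLOPS)
    doubling_years = 1.5         # assumed Moore's-law doubling period

    doublings = math.log2(brain_ops_per_sec / pc_ops_per_sec_2009)
    year = 2009 + doublings * doubling_years
    print(f"{doublings:.1f} doublings -> around {year:.0f}")
    # ~16.6 doublings -> around 2034; change any input and the date moves sharply.

The sensitivity of the result to its inputs is one reason critics treat such point predictions with caution.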

Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.[183] This idea, called transhumanism, has roots in the work of Aldous Huxley and Robert Ettinger, and has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune. In the 1980s the artist Hajime Sorayama painted and published his Sexy Robots series in Japan, depicting the organic human form with lifelike muscular metallic skins; his later book The Gynoids was used by or influenced filmmakers including George Lucas and other creatives. Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form. Almost 20 years later, the first AI robotic pet, AIBO, became available as a companion to people. AIBO grew out of Sony's Computer Science Laboratory (CSL).
Famed engineer Toshitada Doi is credited as AIBO's original progenitor: in 1994 he began work on robots with the artificial intelligence expert Masahiro Fujita at CSL. Doi's friend, the artist Hajime Sorayama, was enlisted to create the initial designs for AIBO's body. Those designs are now part of the permanent collections of the Museum of Modern Art and the Smithsonian Institution, and later versions of AIBO have been used in studies at Carnegie Mellon University. In 2006, AIBO was added to Carnegie Mellon University's "Robot Hall of Fame".

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent.[184] He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably, because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).

Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by George Dyson in his book of the same name in 1998.[185]

In fiction

The implications of artificial intelligence have also been explored in fiction, where artificial intelligences have appeared in many roles.
Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human? The idea also appears in modern science fiction, including the films I, Robot, Blade Runner, The Machine and A.I.: Artificial Intelligence, in which humanoid machines have the ability to feel human emotions. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature.[186] The subject is discussed in depth in the 2010 documentary film Plug & Pray.[187]
