The philosophy of artificial intelligence attempts to answer questions such as the following:
  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same way that a human being can? Can it feel how things are?
These three questions reflect the divergent interests of AI researchers, cognitive scientists and philosophers, respectively. The scientific answers to these questions depend on the definitions of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include:
  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.[2]
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."[3]
  • Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."[4]
  • Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[5]
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."[6]

Can a machine display general intelligence?