Saturday, November 17, 2018

John Searle

From Wikipedia, the free encyclopedia

John Rogers Searle
Searle at Christ Church, Oxford, 2005
Born: July 31, 1932 (age 86), Denver, Colorado, U.S.
Alma mater: University of Wisconsin; Christ Church, Oxford
Spouse: Dagmar Searle
Era: Contemporary philosophy
Region: Western philosophy
School: Analytic; direct realism
Main interests: Philosophy of language, philosophy of mind, social philosophy
Notable ideas: Indirect speech acts, Chinese room, biological naturalism, direction of fit, cluster description theory of names
Website: Homepage at UC Berkeley

John Rogers Searle (/sɜːrl/; born 31 July 1932) is an American philosopher. He is currently Willis S. and Marion Slusser Professor Emeritus of the Philosophy of Mind and Language and Professor of the Graduate School at the University of California, Berkeley. Widely noted for his contributions to the philosophy of language, philosophy of mind, and social philosophy, he began teaching at UC Berkeley in 1959.

As an undergraduate at the University of Wisconsin, Searle was secretary of "Students against Joseph McCarthy". He received all his university degrees, BA, MA, and DPhil, from the University of Oxford, where he held his first faculty positions. Later, at UC Berkeley, he became the first tenured professor to join the 1964–1965 Free Speech Movement. In the late 1980s, Searle challenged the restrictions of Berkeley's 1980 rent stabilization ordinance. Following what came to be known as the California Supreme Court's "Searle Decision" of 1990, Berkeley changed its rent control policy, leading to large rent increases between 1991 and 1994.

In 2000 Searle received the Jean Nicod Prize; in 2004, the National Humanities Medal;[5] and in 2006, the Mind & Brain Prize. Searle's early work on speech acts, influenced by J. L. Austin and Ludwig Wittgenstein, helped establish his reputation. His notable concepts include the "Chinese room" argument against "strong" artificial intelligence. In March 2017, Searle was accused of sexual assault.

Biography

Searle's father, G. W. Searle, an electrical engineer, was employed by AT&T Corporation; his mother, Hester Beck Searle, was a physician.

Searle began his college education at the University of Wisconsin-Madison and in his junior year became a Rhodes Scholar at the University of Oxford, where he obtained all his university degrees, BA, MA, and DPhil.

His first two faculty positions were at Oxford as Research Lecturer, and Lecturer and Tutor at Christ Church. 

Politics

While an undergraduate at the University of Wisconsin, Searle became the secretary of "Students against Joseph McCarthy". (McCarthy at that time served as the junior senator from Wisconsin.) In 1959 Searle began teaching at Berkeley, and he was the first tenured professor to join the 1964–65 Free Speech Movement. In 1969, while serving as chairman of the Academic Freedom Committee of the Academic Senate of the University of California, he supported the university in its dispute with students over the People's Park. In The Campus War: A Sympathetic Look at the University in Agony (1971), Searle investigates the causes behind the campus protests of the era. In it he declares: "I have been attacked by both the House Un-American Activities Committee and ... several radical polemicists ... Stylistically, the attacks are interestingly similar. Both rely heavily on insinuation and innuendo, and both display a hatred – one might almost say terror – of close analysis and dissection of argument." He asserts that "My wife was threatened that I (and other members of the administration) would be assassinated or violently attacked."

In the late 1980s, Searle, along with other landlords, petitioned Berkeley's rental board to raise the limits on how much he could charge tenants under the city's 1980 rent-stabilization ordinance. The rental board refused to consider Searle's petition and Searle filed suit, charging a violation of due process. In 1990, in what came to be known as the "Searle Decision", the California Supreme Court upheld Searle's argument in part and Berkeley changed its rent-control policy, leading to large rent-increases between 1991 and 1994. Searle was reported to see the issue as one of fundamental rights, being quoted as saying "The treatment of landlords in Berkeley is comparable to the treatment of blacks in the South...our rights have been massively violated and we are here to correct that injustice." The court described the debate as a "morass of political invective, ad hominem attack, and policy argument".

Shortly after the September 11 attacks, Searle wrote an article arguing that the attacks were a particular event in a long-term struggle against forces that are intractably opposed to the United States, and signaled support for a more aggressive neoconservative interventionist foreign policy. He called for the realization that the United States is in a more-or-less permanent state of war with these forces. Moreover, a probable course of action would be to deny terrorists the use of foreign territory from which to stage their attacks. Finally, he alluded to the long-term nature of the conflict and blamed the attacks on the lack of American resolve to deal forcefully with America's enemies over the past several decades.

Sexual assault allegations

In March 2017, Searle became the subject of sexual assault allegations. The Los Angeles Times reported: "A new lawsuit alleges that university officials failed to properly respond to complaints that John Searle, an 84-year-old renowned philosophy professor, sexually assaulted his 24-year-old research associate last July and cut her pay when she rejected his advances." The case brought to light several earlier complaints against Searle, on which Berkeley allegedly had failed to act.

The lawsuit, filed in a California court on March 21, 2017, sought damages both from Searle and from the Regents of the University of California as his employers. It also claimed that Jennifer Hudin, the director of the John Searle Center for Social Ontology, where the complainant had been employed as an assistant to Searle, has stated that Searle "has had sexual relationships with his students and others in the past in exchange for academic, monetary or other benefits". After news of the lawsuit became public, several previous allegations of sexual harassment by Searle were also revealed.

Awards and recognitions

Searle has five honorary-doctorate degrees from four different countries and is an honorary visiting professor at Tsing Hua University and at East China Normal University.

In 2000 Searle received the Jean Nicod Prize; in 2004, the National Humanities Medal; and in 2006, the Mind & Brain Prize.

Philosophy

Speech acts

Searle's early work, which did a great deal to establish his reputation, was on speech acts. He attempted to synthesize ideas from many colleagues – including J. L. Austin (the "illocutionary act", from How To Do Things with Words), Ludwig Wittgenstein and G.C.J. Midgley (the distinction between regulative and constitutive rules) – with his own thesis that such acts are constituted by the rules of language. He also drew on the work of Paul Grice (the analysis of meaning as an attempt at being understood), Hare and Stenius (the distinction, concerning meaning, between illocutionary force and propositional content), P. F. Strawson, John Rawls and William Alston, who maintained that sentence meaning consists in sets of regulative rules requiring the speaker to perform the illocutionary act indicated by the sentence and that such acts involve the utterance of a sentence which (a) indicates that one performs the act; (b) means what one says; and (c) addresses an audience in the vicinity.

In his 1969 book Speech Acts, Searle sets out to combine all these elements to give his account of illocutionary acts. There he provides an analysis of what he considers the prototypical illocutionary act of promising and offers sets of semantical rules intended to represent the linguistic meaning of devices indicating further illocutionary act types. Among the concepts presented in the book is the distinction between the "illocutionary force" and the "propositional content" of an utterance. Searle does not precisely define the former as such, but rather introduces several possible illocutionary forces by example. According to Searle, the sentences...
  1. Sam smokes habitually.
  2. Does Sam smoke habitually?
  3. Sam, smoke habitually!
  4. Would that Sam smoked habitually!
...each indicate the same propositional content (Sam smoking habitually) but differ in the illocutionary force indicated (respectively, a statement, a question, a command and an expression of desire).

According to a later account, which Searle presents in Intentionality (1983) and which differs in important ways from the one suggested in Speech Acts, illocutionary acts are characterised by their having "conditions of satisfaction" (an idea adopted from Strawson's 1971 paper "Meaning and Truth") and a "direction of fit" (an idea adopted from Elizabeth Anscombe). For example, the statement "John bought two candy bars" is satisfied if and only if it is true, i.e. John did buy two candy bars. By contrast, the command "John, buy two candy bars!" is satisfied if and only if John carries out the action of purchasing two candy bars. Searle refers to the first as having the "word-to-world" direction of fit, since the words are supposed to change to accurately represent the world, and the second as having the "world-to-word" direction of fit, since the world is supposed to change to match the words. (There is also the double direction of fit, in which the relationship goes both ways, and the null or zero direction of fit, in which it goes neither way because the propositional content is presupposed, as in "I'm sorry I ate John's candy bars.")

In Foundations of Illocutionary Logic (1985, with Daniel Vanderveken), Searle prominently uses the notion of the "illocutionary point".

Searle's speech-act theory has been challenged by several thinkers in a variety of ways. Collections of articles referring to Searle's account are found in Burkhardt 1990 and Lepore / van Gulick 1991.

Searle–Derrida debate

In the early 1970s, Searle had a brief exchange with Jacques Derrida regarding speech-act theory. The exchange was characterized by a degree of mutual hostility between the philosophers, each of whom accused the other of having misunderstood his basic points. Searle was particularly hostile to Derrida's deconstructionist framework and much later refused to let his response to Derrida be printed along with Derrida's papers in the 1988 collection Limited Inc. Searle did not consider Derrida's approach to be legitimate philosophy or even intelligible writing and argued that he did not want to legitimize the deconstructionist point of view by dedicating any attention to it. Consequently, some critics have considered the exchange to be a series of elaborate misunderstandings rather than a debate, while others have seen either Derrida or Searle gaining the upper hand. The level of hostility can be seen from Searle's statement that "It would be a mistake to regard Derrida's discussion of Austin as a confrontation between two prominent philosophical traditions", to which Derrida replied that that sentence was "the only sentence of the 'reply' to which I can subscribe". Commentators have frequently interpreted the exchange as a prominent example of a confrontation between analytical and continental philosophy.

The debate began in 1972, when, in his paper "Signature Event Context", Derrida analyzed J. L. Austin's theory of the illocutionary act. While sympathetic to Austin's departure from a purely denotational account of language to one that includes "force", Derrida was sceptical of the framework of normativity employed by Austin. He argued that Austin had missed the fact that any speech event is framed by a "structure of absence" (the words that are left unsaid due to contextual constraints) and by "iterability" (the repeatability of linguistic elements outside of their context). Derrida argued that the focus on intentionality in speech-act theory was misguided because intentionality is restricted to that which is already established as a possible intention. He also took issue with the way Austin had excluded the study of fiction, non-serious or "parasitic" speech, wondering whether this exclusion was because Austin had considered these speech genres governed by different structures of meaning, or simply due to a lack of interest.

In his brief reply to Derrida, "Reiterating the Differences: A Reply to Derrida", Searle argued that Derrida's critique was unwarranted because it assumed that Austin's theory attempted to give a full account of language and meaning when its aim was much narrower. Searle considered the omission of parasitic discourse forms to be justified by the narrow scope of Austin's inquiry. Searle agreed with Derrida's proposal that intentionality presupposes iterability, but did not apply the same concept of intentionality used by Derrida, being unable or unwilling to engage with the continental conceptual apparatus. This, in turn, caused Derrida to criticize Searle for not being sufficiently familiar with phenomenological perspectives on intentionality. Searle also argued that Derrida's disagreement with Austin turned on his having misunderstood Austin's (and Peirce's) type–token distinction and his failure to understand Austin's concept of failure in relation to performativity. Some critics have suggested that Searle, by being so grounded in the analytical tradition, was unable to engage with Derrida's continental phenomenological tradition and was at fault for the unsuccessful nature of the exchange.

Derrida, in his response to Searle ("a b c ..." in Limited Inc), ridiculed Searle's positions. Arguing that a clear sender of Searle's message could not be established, he suggested that Searle had formed with Austin a société à responsabilité limitée (a "limited liability company") due to the ways in which the ambiguities of authorship within Searle's reply circumvented the very speech act of his reply. Searle did not respond. Later in 1988, Derrida tried to review his position and his critiques of Austin and Searle, reiterating that he found the constant appeal to "normality" in the analytical tradition to be problematic.

In the debate, Derrida praises Austin's work, but argues that he is wrong to banish what Austin calls "infelicities" from the "normal" operation of language. One "infelicity," for instance, occurs when it cannot be known whether a given speech act is "sincere" or "merely citational" (and therefore possibly ironic, etc.). Derrida argues that every iteration is necessarily "citational", due to the graphematic nature of speech and writing, and that language could not work at all without the ever-present and ineradicable possibility of such alternate readings. Derrida takes Searle to task for his attempt to get around this issue by grounding final authority in the speaker's inaccessible "intention". Derrida argues that intention cannot possibly govern how an iteration signifies, once it becomes hearable or readable. All speech acts borrow a language whose significance is determined by historical-linguistic context, and by the alternate possibilities that this context makes possible. This significance, Derrida argues, cannot be altered or governed by the whims of intention.

In 1995, Searle gave a brief reply to Derrida in The Construction of Social Reality. "Derrida, as far as I can tell, does not have an argument. He simply declares that there is nothing outside of texts (Il n'y a pas de 'hors-texte')." Then, in Limited Inc., Derrida "apparently takes it all back", claiming that he meant only "the banality that everything exists in some context or other!" Derrida and others like him present "an array of weak or even nonexistent arguments for a conclusion that seems preposterous". In Of Grammatology (1967), Derrida claims that a text must not be interpreted by reference to anything "outside of language", which for him means "outside of writing in general". He adds: "There is nothing outside of the text [there is no outside-text; il n'y a pas de hors-texte]" (brackets in the translation). This is a metaphor: un hors-texte is a bookbinding term, referring to a 'plate' bound among pages of text. Searle cites Derrida's supplementary metaphor rather than his initial contention. However, whether Searle's objection is good against that contention is the point in debate.

Intentionality and the background

Searle defines intentionality as the power of minds to be about, to represent (see Correspondence theory of truth), or to stand for, things, properties and states of affairs in the world. The nature of intentionality is an important part of discussions of Searle's philosophy of mind. Searle emphasizes that the word 'intentionality' (the directedness of the mind to/from/about objects and relations in the world independent of mind) should not be confused with the word 'intensionality' (the logical property of some sentences that do not pass the test of 'extensionality'). In Intentionality: An Essay in the Philosophy of Mind (1983), Searle applies certain elements of his account(s) of "illocutionary acts" to the investigation of intentionality. Searle also introduces a technical term, the Background, which, according to him, has been the source of much philosophical discussion ("though I have been arguing for this thesis for almost twenty years," Searle writes, "many people whose opinions I respect still disagree with me about it"). He calls the Background the set of abilities, capacities, tendencies, and dispositions that humans have and that are not themselves intentional states. Thus, when someone asks us to "cut the cake" we know to use a knife, and when someone asks us to "cut the grass" we know to use a lawnmower (and not vice versa), even though the actual request did not include this detail. Searle sometimes supplements his reference to the Background with the concept of the Network, one's network of other beliefs, desires, and other intentional states necessary for any particular intentional state to make sense. Searle argues that the concept of the Background is similar to concepts provided by several other thinkers, including Wittgenstein's private language argument ("the work of the later Wittgenstein is in large part about the Background") and Pierre Bourdieu's habitus.

To give an example, two chess players might be engaged in a bitter struggle at the board, but they share all sorts of Background presuppositions: that they will take turns to move, that no one else will intervene, that they are both playing to the same rules, that the fire alarm won't go off, that the board won't suddenly disintegrate, that their opponent won't magically turn into a grapefruit, and so on indefinitely. As most of these possibilities won't have occurred to either player,[46] Searle thinks the Background must be unconscious, though elements of it can be called to consciousness (if the fire alarm does go off, say).

In his debate with Derrida, Searle argued against Derrida's view that a statement can be disjoined from the original intentionality of its author, for example when no longer connected to the original author, while still being able to produce meaning. Searle maintained that even if one were to see a written statement with no knowledge of authorship it would still be impossible to escape the question of intentionality, because "a meaningful sentence is just a standing possibility of the (intentional) speech act". For Searle, ascribing intentionality to a statement was a basic requirement for attributing it any meaning at all.

Consciousness

Building upon his views about intentionality, Searle presents a view concerning consciousness in his book The Rediscovery of the Mind (1992). He argues that, starting with behaviorism (an early but influential scientific view, succeeded by many later accounts that Searle also dismisses), much of modern philosophy has tried to deny the existence of consciousness, with little success. In Intentionality, he parodies several alternative theories of consciousness by replacing their accounts of intentionality with comparable accounts of the hand:
No one would think of saying, for example, "Having a hand is just being disposed to certain sorts of behavior such as grasping" (manual behaviorism), or "Hands can be defined entirely in terms of their causes and effects" (manual functionalism), or "For a system to have a hand is just for it to be in a certain computer state with the right sorts of inputs and outputs" (manual Turing machine functionalism), or "Saying that a system has hands is just adopting a certain stance toward it" (the manual stance). (p. 263)
Searle argues that philosophy has been trapped by a false dichotomy: that, on the one hand, the world consists of nothing but objective particles in fields of force, but that yet, on the other hand, consciousness is clearly a subjective first-person experience.

Searle says simply that both are true: consciousness is a real subjective experience, caused by the physical processes of the brain. (A view which he suggests might be called biological naturalism.)

Ontological subjectivity

Searle has argued that critics like Daniel Dennett, who (he claims) insist that discussing subjectivity is unscientific because science presupposes objectivity, are making a category error. Perhaps the goal of science is to establish and validate statements which are epistemically objective (i.e., whose truth can be discovered and evaluated by any interested party), but which are not necessarily ontologically objective.

Searle calls any value judgment epistemically subjective. Thus, "McKinley is prettier than Everest" is "epistemically subjective", whereas "McKinley is higher than Everest" is "epistemically objective." In other words, the latter statement is evaluable (in fact, falsifiable) by an understood ('background') criterion for mountain height, like 'the summit is so many meters above sea level'. No such criteria exist for prettiness.

Beyond this distinction, Searle thinks there are certain phenomena (including all conscious experiences) that are ontologically subjective, i.e. can only exist as subjective experience. For example, although it might be subjective or objective in the epistemic sense, a doctor's note that a patient suffers from back pain is an ontologically objective claim: it counts as a medical diagnosis only because the existence of back pain is "an objective fact of medical science". The pain itself, however, is ontologically subjective: it is only experienced by the person having it.

Searle goes on to affirm that "where consciousness is concerned, the existence of the appearance is the reality". His view that the epistemic and ontological senses of objective/subjective are cleanly separable is crucial to his self-proclaimed biological naturalism.

Artificial intelligence

A consequence of biological naturalism is that if we want to create a conscious being, we will have to duplicate whatever physical processes the brain goes through to cause consciousness. Searle thereby means to contradict what he calls "Strong AI", defined by the assumption that as soon as a certain kind of software is running on a computer, a conscious being is thereby created.

In 1980, Searle presented the "Chinese room" argument, which purports to prove the falsity of strong AI. Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides you some Chinese characters through the first slit, you follow the instructions in the book, transcribing characters as instructed onto the scratch paper, and slide the resulting sheet out the second slit. To people in the outside world, it appears the room speaks Chinese: they slide Chinese statements in one slit and get valid responses in return, yet you do not understand a word of Chinese. According to Searle, this shows that no computer understands Chinese or English merely by running a program, because all that the person in the room, and hence a computer, does is execute certain syntactic manipulations of symbols, and such manipulation does not amount to understanding.
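
The point can be made concrete with a toy Python sketch (an illustration only, not Searle's own formulation; the question-and-answer pairs are invented placeholders): a program can map incoming symbol strings to outgoing ones by pure lookup, with nothing in it representing what the symbols mean.

# A toy illustration of purely syntactic symbol manipulation, in the spirit of
# the Chinese room: the "rule book" is a lookup table pairing input strings
# with canned replies. Nothing in the program represents meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # invented question/answer pair
    "今天天气怎么样？": "今天天气很好。",  # invented question/answer pair
}

def chinese_room(incoming: str) -> str:
    """Return whatever reply the rule book dictates for the incoming symbols."""
    return RULE_BOOK.get(incoming, "对不起，我不明白。")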

Stevan Harnad argues that Searle's "Strong AI" is really just another name for functionalism and computationalism, and that these positions are the real targets of his critique. Functionalists argue that consciousness can be defined as a set of informational processes inside the brain. It follows that anything that carries out the same informational processes as a human is also conscious. Thus, if we wrote a computer program that was conscious, we could run that computer program on, say, a system of ping-pong balls and beer cups and the system would be equally conscious, because it was running the same information processes.

Searle argues that this is impossible, since consciousness is a physical property, like digestion or fire. No matter how good a simulation of digestion you build on the computer, it will not digest anything; no matter how well you simulate fire, nothing will get burnt. By contrast, informational processes are observer-relative: observers pick out certain patterns in the world and consider them information processes, but information processes are not things-in-the-world themselves. Since they do not exist at a physical level, Searle argues, they cannot have causal efficacy and thus cannot cause consciousness. There is no physical law, Searle insists, that recognizes the equivalence between a personal computer, a series of ping-pong balls and beer cans, and a pipe-and-water system, all implementing the same program.

Social reality

Searle extended his inquiries into observer-relative phenomena by trying to understand social reality. Searle begins by arguing collective intentionality (e.g. "we're going for a walk") is a distinct form of intentionality, not simply reducible to individual intentionality (e.g. "I'm going for a walk with him and I think he thinks he's going for a walk with me and he thinks I think I'm going for a walk with him and ...").

In The Construction of Social Reality (1995), Searle addresses the mystery of how social constructs like "baseball" or "money" can exist in a world consisting only of physical particles in fields of force. Adapting an idea by Elizabeth Anscombe in "On Brute Facts," Searle distinguishes between brute facts, like the height of a mountain, and institutional facts, like the score of a baseball game. Aiming at an explanation of social phenomena in terms of Anscombe's notion, he argues that society can be explained in terms of institutional facts, and institutional facts arise out of collective intentionality through constitutive rules with the logical form "X counts as Y in C". Thus, for instance, filling out a ballot counts as a vote in a polling place, getting so many votes counts as a victory in an election, getting a victory counts as being elected president in the presidential race, etc.

Many sociologists, however, do not see Searle's contributions to social theory as very significant. Neil Gross, for example, argues that Searle's views on society are more or less a reconstitution of the sociologist Émile Durkheim's theories of social facts, social institutions, collective representations, and the like. Searle's ideas are thus open to the same criticisms as Durkheim's. Searle responded that Durkheim's work was worse than he had originally believed and, admitting he had not read much of Durkheim's work, said that, "Because Durkheim's account seemed so impoverished I did not read any further in his work." Steven Lukes, however, responded to Searle's response to Gross and argued point by point against the allegations that Searle makes against Durkheim, essentially upholding Gross' argument that Searle's work bears great resemblance to Durkheim's. Lukes attributes Searle's miscomprehension of Durkheim's work to the fact that Searle never read Durkheim.

Searle–Lawson debate

In recent years, Searle's main interlocutor on issues of social ontology has been Tony Lawson. Although their accounts of social reality are similar, there are important differences. Lawson places emphasis on the notion of social totality whereas Searle prefers to refer to institutional facts. Furthermore, Searle believes that emergence implies causal reduction whereas Lawson argues that social totalities cannot be completely explained by the causal powers of their components. Searle also places language at the foundation of the construction of social reality, while Lawson believes that community formation necessarily precedes the development of language and therefore there must be the possibility of non-linguistic social structure formation. The debate is ongoing and takes place additionally through regular meetings of the Centre for Social Ontology at the University of California, Berkeley and the Cambridge Social Ontology Group at the University of Cambridge.

Rationality

In Rationality in Action (2001), Searle argues that standard notions of rationality are badly flawed. According to what he calls the Classical Model, rationality is seen as something like a train track: you get on at one point with your beliefs and desires and the rules of rationality compel you all the way to a conclusion. Searle doubts this picture of rationality holds generally.

Searle briefly critiques one particular set of these rules: those of mathematical decision theory. He points out that its axioms require that anyone who valued a quarter and their life would, at some odds, bet their life for a quarter. Searle insists he would never take such a bet and believes that this stance is perfectly rational.

Most of his attack is directed against the common conception of rationality, which he believes is badly flawed. First, he argues that reasons don't cause you to do anything, because having a sufficient reason inclines (but doesn't force) you to do that thing. So in any decision situation we experience a gap between our reasons and our actions. For example, when we decide to vote, we do not simply determine that we care most about economic policy and that we prefer candidate Jones's economic policy. We also have to make an effort to cast our vote. Similarly, every time a guilty smoker lights a cigarette they are aware of succumbing to their craving, not merely of acting automatically as they do when they exhale. It is this gap that makes us think we have freedom of the will. Searle thinks whether we really have free will or not is an open question, but considers its absence highly unappealing because it makes the feeling of freedom of will an epiphenomenon, which is highly unlikely from the evolutionary point of view given its biological cost. He also says: "All rational activity presupposes free will".

Second, Searle believes we can rationally do things that don't result from our own desires. It is widely believed that one cannot derive an "ought" from an "is", i.e. that facts about how the world is can never tell you what you should do ('Hume's Law'). By contrast, in so far as a fact is understood as relating to an institution (marriage, promises, commitments, etc.), which is to be understood as a system of constitutive rules, then what one should do can be understood as following from the institutional fact of what one has done; institutional fact, then, can be understood as opposed to the "brute facts" related to Hume's Law. For example, Searle believes the fact that you promised to do something means you should do it, because by making the promise you are participating in the constitutive rules that arrange the system of promise making itself, and therefore understand a "shouldness" as implicit in the mere factual action of promising. Furthermore, he believes that this provides a desire-independent reason for an action—if you order a drink at a bar, you should pay for it even if you have no desire to. This argument, which he first made in his paper, "How to Derive 'Ought' from 'Is'" (1964), remains highly controversial, but even three decades later Searle continued to defend his view that "...the traditional metaphysical distinction between fact and value cannot be captured by the linguistic distinction between 'evaluative' and 'descriptive' because all such speech act notions are already normative."

Third, Searle argues that much of rational deliberation involves adjusting our (often inconsistent) patterns of desires to decide between outcomes, not the other way around. While in the Classical Model, one would start from a desire to go to Paris greater than that of saving money and calculate the cheapest way to get there, in reality people balance the niceness of Paris against the costs of travel to decide which desire (visiting Paris or saving money) they value more. Hence, he believes rationality is not a system of rules, but more of an adverb. We see certain behavior as rational, no matter what its source, and our system of rules derives from finding patterns in what we see as rational.

Bibliography

Primary

  • Speech Acts: An Essay in the Philosophy of Language (1969), Cambridge University Press, ISBN 978-0521096263 
  • The Campus War: A Sympathetic Look at the University in Agony (political commentary; 1971)
  • Expression and Meaning: Studies in the Theory of Speech Acts (essay collection; 1979)
  • Intentionality: An Essay in the Philosophy of Mind (1983)
  • Minds, Brains and Science: The 1984 Reith Lectures (lecture collection; 1984)
  • Foundations of Illocutionary Logic (John Searle & Daniel Vanderveken 1985)
  • The Rediscovery of the Mind (1992)
  • The Construction of Social Reality (1995)
  • The Mystery of Consciousness (review collection; 1997)
  • Mind, Language and Society: Philosophy in the Real World (summary of earlier work; 1998)
  • Rationality in Action (2001)
  • Consciousness and Language (essay collection; 2002)
  • Freedom and Neurobiology (lecture collection; 2004)
  • Mind: A Brief Introduction (summary of work in philosophy of mind; 2004)
  • Philosophy in a New Century: Selected Essays (2008)
  • Making the Social World: The Structure of Human Civilization (2010)
  • "What Your Computer Can't Know" (review of Luciano Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality, Oxford University Press, 2014; and Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014), The New York Review of Books, vol. LXI, no. 15 (October 9, 2014), pp. 52–55.
  • Seeing Things As They Are: A Theory of Perception (2015)

Secondary

  • John Searle and His Critics (Ernest Lepore and Robert Van Gulick, eds.; 1991)
  • John Searle (Barry Smith, ed.; 2003)
  • John Searle and the Construction of Social Reality (Joshua Rust; 2006)
  • Intentional Acts and Institutional Facts (Savas Tsohatzidis, ed.; 2007)
  • John Searle (Joshua Rust; 2009)

Computer-aided diagnosis

From Wikipedia, the free encyclopedia

Computer-aided diagnosis (medical diagnostics)
X-ray of a hand, with automatic calculation of bone age by computer software.
Purpose: computer-assisted diagnosis of images

Computer-aided detection (CADe), also called computer-aided diagnosis (CADx), refers to systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images, searching for typical appearances and highlighting conspicuous sections, such as possible diseases, in order to offer input that supports a decision taken by the professional.

CAD also has potential future applications in digital pathology with the advent of whole-slide imaging and machine learning algorithms. So far its application has been limited to quantifying immunostaining but is also being investigated for the standard H&E stain.

CAD is an interdisciplinary technology combining elements of artificial intelligence and computer vision with radiological and pathology image processing. A typical application is the detection of a tumor. For instance, some hospitals use CAD to support preventive medical check-ups in mammography (diagnosis of breast cancer), the detection of polyps in the colon, and lung cancer.

Computer-aided detection (CADe) systems are usually confined to marking conspicuous structures and sections. Computer-aided diagnosis (CADx) systems evaluate the conspicuous structures. For example, in mammography CAD highlights microcalcification clusters and hyperdense structures in the soft tissue. This allows the radiologist to draw conclusions about the condition of the pathology. Another application is CADq, which quantifies, e.g., the size of a tumor or the tumor's behavior in contrast medium uptake. Computer-aided simple triage (CAST) is another type of CAD, which performs a fully automatic initial interpretation and triage of studies into some meaningful categories (e.g. negative and positive). CAST is particularly applicable in emergency diagnostic imaging, where a prompt diagnosis of a critical, life-threatening condition is required.

Although CAD has been used in clinical environments for over 40 years, CAD usually does not substitute for the doctor or other professional, but rather plays a supporting role. The professional (generally a radiologist) is generally responsible for the final interpretation of a medical image. However, the goal of some CAD systems is to detect the earliest signs of abnormality that human professionals cannot, as in diabetic retinopathy, architectural distortion in mammograms, ground-glass nodules in thoracic CT, and non-polypoid (“flat”) lesions in CT colonography.

Topics

Methodology

CAD is fundamentally based on highly complex pattern recognition. X-ray or other types of images are scanned for suspicious structures. Normally a few thousand images are required to optimize the algorithm. Digital image data are copied to a CAD server in DICOM format and are prepared and analyzed in several steps.
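
As a minimal sketch of that first transfer step (assuming the pydicom and NumPy packages; the file path is hypothetical), a DICOM image can be read and rescaled before the steps below:

import numpy as np
import pydicom

def load_dicom_image(path: str) -> np.ndarray:
    """Read a DICOM file and return its pixel data as a float array in [0, 1]."""
    dataset = pydicom.dcmread(path)                  # parse the DICOM object
    image = dataset.pixel_array.astype(np.float32)   # raw pixel matrix
    image -= image.min()
    if image.max() > 0:
        image /= image.max()                         # consistent intensity range
    return image

# Hypothetical usage: image = load_dicom_image("/data/incoming/chest_0001.dcm")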

1. Preprocessing for
  • Reduction of artifacts (spurious structures in the image)
  • Image noise reduction
  • Leveling (harmonization) of image quality, e.g. increasing contrast to compensate for differing acquisition conditions such as different exposure parameters
  • Filtering
2. Segmentation for
  • Differentiation of different structures in the image, e.g. heart, lung, ribcage, blood vessels, possible round lesions
  • Matching with anatomic databank
  • Sample gray-values in volume of interest
3. Structure/ROI (Region of Interest) analysis: Every detected region is analyzed individually for special characteristics:
  • Compactness
  • Form, size and location
  • Reference to close-by structures / ROIs
  • Average grey-level value within the ROI
  • Proportion of grey levels relative to the border of the structure inside the ROI
4. Evaluation / classification: After the structure is analyzed, every ROI is evaluated individually (scoring) for the probability of a true positive (TP). Classification algorithms typically used at this stage include nearest-neighbour rules, Bayesian classifiers, artificial neural networks, and support vector machines.
If the detected structures reach a certain threshold level, they are highlighted in the image for the radiologist. Depending on the CAD system, these markings can be saved permanently or temporarily. The latter has the advantage that only the markings approved by the radiologist are saved; false hits should not be saved, because they make examination at a later date more difficult. (A schematic sketch of these four steps follows.)
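
The following Python sketch mirrors the four steps above in simplified form. It is illustrative only: the median filter, intensity threshold, and toy scoring rule stand in for the far more elaborate methods of a real CAD system, and the function names are invented.

import numpy as np
from scipy import ndimage

def preprocess(image: np.ndarray) -> np.ndarray:
    """Step 1: reduce noise and artifacts (here: a simple median filter)."""
    return ndimage.median_filter(image, size=3)

def segment(image: np.ndarray) -> np.ndarray:
    """Step 2: label candidate structures (here: a naive intensity threshold)."""
    mask = image > image.mean() + 2 * image.std()
    labels, _ = ndimage.label(mask)
    return labels

def analyze_rois(image: np.ndarray, labels: np.ndarray) -> list:
    """Step 3: per-ROI characteristics such as size and average grey level."""
    features = []
    for roi_id in range(1, int(labels.max()) + 1):
        roi = labels == roi_id
        features.append({
            "roi_id": roi_id,
            "size": int(roi.sum()),
            "mean_grey": float(image[roi].mean()),
        })
    return features

def classify(features: list, score_threshold: float = 0.5) -> list:
    """Step 4: score each ROI and keep those above the threshold.
    A real system would use a trained classifier here."""
    flagged = []
    for f in features:
        score = min(1.0, f["size"] / 500.0)   # toy scoring rule, not a real model
        if score >= score_threshold:
            flagged.append((f["roi_id"], score))
    return flagged

# Hypothetical usage: flagged = classify(analyze_rois(img, segment(preprocess(img))))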

Sensitivity and specificity

CAD systems seek to highlight suspicious structures. Today's CAD systems cannot detect 100% of pathological changes. The hit rate (sensitivity) can be up to 90%, depending on system and application. A correct hit is termed a true positive (TP), while the incorrect marking of healthy sections constitutes a false positive (FP). The fewer FPs indicated, the higher the specificity. A low specificity reduces the acceptance of the CAD system, because the user has to identify all of these wrong hits. The FP rate in lung overview examinations (CAD Chest) could be reduced to 2 per examination. In other segments (e.g. CT lung examinations) the FP rate could be 25 or more. In CAST systems the FP rate must be extremely low (less than 1 per examination) to allow a meaningful study triage.
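
A brief sketch of these measures, assuming the per-examination counts of true/false positives and negatives have already been tallied:

def sensitivity(tp: int, fn: int) -> float:
    """Hit rate: fraction of pathological changes that were correctly marked."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def specificity(tn: int, fp: int) -> float:
    """Fraction of healthy sections that were not falsely marked."""
    return tn / (tn + fp) if (tn + fp) else 0.0

def false_positives_per_exam(fp: int, n_exams: int) -> float:
    """The FP rate quoted above, e.g. 2 per chest examination."""
    return fp / n_exams if n_exams else 0.0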

Absolute detection rate

The absolute detection rate of the radiologist is an alternative metric to sensitivity and specificity. Overall, results of clinical trials about sensitivity, specificity, and the absolute detection rate can vary markedly. Each study result depends on its basic conditions and has to be evaluated on those terms.

The following facts have a strong influence:
  • Retrospective or prospective design
  • Quality of the used images
  • Condition of the x-ray examination
  • Radiologist's experience and education
  • Type of lesion
  • Size of the considered lesion

Applications

Interface of Medical Sieve, an algorithm by IBM for assisting in clinical decisions.

CAD is used in the diagnosis of breast cancer, lung cancer, colon cancer, prostate cancer, bone metastases, coronary artery disease, congenital heart defect, pathological brain detection, Alzheimer's disease, and diabetic retinopathy.

Breast cancer

CAD is used in screening mammography (X-ray examination of the female breast). Screening mammography is used for the early detection of breast cancer. CAD systems are often utilized to help classify a tumor as malignant or benign. CAD is especially established in the US and the Netherlands and is used in addition to human evaluation, usually by a radiologist. The first CAD system for mammography was developed in a research project at the University of Chicago. Today it is commercially offered by iCAD and Hologic. There are also some non-commercial projects in development, such as the Ashita Project, gradient-based screening software by Alan Hshieh. However, while achieving high sensitivities, CAD systems tend to have very low specificity, and the benefits of using CAD remain uncertain. Some studies suggest a positive impact on mammography screening programs, but others show no improvement. A 2008 systematic review on computer-aided detection in screening mammography concluded that CAD does not have a significant effect on cancer detection rate, but does undesirably increase recall rate (i.e. the rate of false positives). However, it noted considerable heterogeneity in the impact on recall rate across studies.

Procedures to evaluate mammography based on magnetic resonance imaging exist too.

Lung cancer (bronchial carcinoma)

In the diagnosis of lung cancer, computed tomography with special three-dimensional CAD systems is established and considered an appropriate second opinion. For this, a volumetric dataset with up to 3,000 single images is prepared and analyzed. Round lesions (lung cancer, metastases and benign changes) from 1 mm are detectable. Today all well-known vendors of medical systems offer corresponding solutions.

Early detection of lung cancer is valuable. The 5-year survival rate of lung cancer has stagnated in the last 30 years and is now approximately just 15%. Lung cancer claims more victims than breast cancer, prostate cancer and colon cancer together. This is due to the asymptomatic growth of this cancer: in the majority of cases it is too late for a successful therapy by the time the patient develops the first symptoms (e.g. chronic hoarseness or hemoptysis). But if the lung cancer is detected early (mostly by chance), the survival rate is 47% according to the American Cancer Society. At the same time, the standard x-ray examination of the lung is the most frequently performed x-ray examination, with a share of about 50%. Yet the chance detection of lung cancer in the early stage (stage 1) in the x-ray image is difficult: round lesions of 5–10 mm are easily overlooked. The routine application of CAD chest systems may help to detect small changes without initial suspicion. A number of researchers have developed CAD systems for the detection of lung nodules (round lesions less than 30 mm) in chest radiography and CT, and CAD systems for the diagnosis (e.g., distinction between malignant and benign) of lung nodules in CT. Philips was the first vendor to present a CAD for early detection of round lung lesions on x-ray images. Virtual dual-energy imaging improved the performance of CAD systems in chest radiography. Observer performance studies demonstrated that CAD systems improved the diagnostic performance of radiologists in the detection and diagnosis of lung nodules in CT.

Colon cancer

CAD is available for detection of colorectal polyps in the colon in CT colonography.[43][44][45][46] Polyps are small growths that arise from the inner lining of the colon. CAD detects the polyps by identifying their characteristic "bump-like" shape. To avoid excessive false positives, CAD ignores the normal colon wall, including the haustral folds. CAD is able to detect polyps “missed” [47] by radiologists. In early clinical trials, CAD helped radiologists find more polyps in the colon than they found prior to using CAD.

Coronary artery disease

CAD is available for the automatic detection of significant (causing more than 50% stenosis) coronary artery disease in coronary CT angiography (CCTA) studies. A low false-positive rate (60–70% specificity per patient) allows it to be used as a computer-aided simple triage (CAST) tool that distinguishes between positive and negative studies and yields a preliminary report. This, for example, can be used for the triage of chest pain patients in an emergency setting.
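
As a toy sketch of that triage rule (the per-lesion stenosis percentages are assumed to come from an upstream detection step; the function name is invented):

def triage_ccta_study(stenosis_percentages) -> str:
    """Flag a CCTA study as positive if any lesion causes more than 50% stenosis."""
    return "positive" if any(s > 50 for s in stenosis_percentages) else "negative"

# Example: triage_ccta_study([20, 35, 62]) returns "positive"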

Congenital heart defect

Early detection of pathology can be the difference between life and death. CADe can be performed by auscultation with a digital stethoscope and specialized software, also known as computer-aided auscultation. Murmurs, abnormal heart sounds caused by blood flowing through a defective heart, can be detected with high sensitivity and specificity. Computer-aided auscultation is sensitive to external noise and bodily sounds and requires an almost silent environment to function accurately.

Pathological brain detection (PBD)

Chaplot et al. were the first to use discrete wavelet transform (DWT) coefficients to detect pathological brains. Maitra and Chatterjee employed the Slantlet transform, an improved version of the DWT. The feature vector of each image is created by considering the magnitudes of Slantlet transform outputs corresponding to six spatial positions chosen according to a specific logic.

In 2010, Wang and Wu presented a forward neural network (FNN) based method to classify a given MR brain image as normal or abnormal. The parameters of FNN were optimized via adaptive chaotic particle swarm optimization (ACPSO). Results over 160 images showed that the classification accuracy was 98.75%.

In 2011, Wu and Wang proposed using DWT for feature extraction, PCA for feature reduction, and FNN with scaled chaotic artificial bee colony (SCABC) as classifier.
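
A simplified sketch of such a DWT-plus-PCA pipeline follows, assuming the PyWavelets and scikit-learn packages. scikit-learn's MLPClassifier stands in for the feed-forward network, the scaled chaotic artificial bee colony optimizer is not reproduced, and the arrays `images` and `labels` are hypothetical.

import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(image: np.ndarray, wavelet: str = "haar", level: int = 2) -> np.ndarray:
    """Flatten the 2-D discrete wavelet coefficients into one feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    parts = [coeffs[0].ravel()]                 # approximation band
    for detail in coeffs[1:]:                   # (horizontal, vertical, diagonal) bands
        parts.extend(band.ravel() for band in detail)
    return np.concatenate(parts)

def build_classifier(n_components: int = 20):
    """PCA for feature reduction, then a small feed-forward network."""
    return make_pipeline(
        PCA(n_components=n_components),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    )

# Hypothetical usage:
# X = np.stack([dwt_features(img) for img in images])
# clf = build_classifier().fit(X, labels)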

In 2013, Saritha et al. were the first to apply wavelet entropy (WE) to detect pathological brains; Saritha also suggested using spider-web plots. Later, Zhang et al. showed that removing the spider-web plots did not influence performance. A genetic pattern search method was applied to distinguish abnormal brains from normal controls; its classification accuracy was reported as 95.188%. Das et al. proposed using the Ripplet transform, Zhang et al. proposed using particle swarm optimization (PSO), and Kalbkhani et al. suggested using a GARCH model.

In 2014, El-Dahshan et al. suggested using a pulse-coupled neural network.

In 2015, Zhou et al. suggested applying a naive Bayes classifier to detect pathological brains.

Alzheimer's disease

CADs can be used to distinguish subjects with Alzheimer's disease and mild cognitive impairment from normal elderly controls.

In 2014, Padma et al. used combined wavelet statistical texture features to segment and classify AD benign and malignant tumor slices. Zhang et al. found kernel support vector machine decision tree had 80% classification accuracy, with an average computation time of 0.022s for each image classification.

Eigenbrain is a novel brain feature that can help to detect AD, based on principal component analysis or independent component analysis decomposition. Polynomial kernel SVM has been shown to achieve good accuracy, performing better than both linear SVM and RBF kernel SVM. Other approaches with decent results involve the use of texture analysis, morphological features, or high-order statistical features.
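
A hedged sketch of such a kernel comparison with scikit-learn follows; the feature matrix X and labels y (e.g. eigenbrain features paired with diagnostic labels) are assumed to exist already.

from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def compare_svm_kernels(X, y, folds: int = 5) -> dict:
    """Return mean cross-validated accuracy for linear, RBF, and polynomial SVMs."""
    results = {}
    for kernel in ("linear", "rbf", "poly"):
        clf = SVC(kernel=kernel, degree=3, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=folds)
        results[kernel] = float(scores.mean())
    return results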

Nuclear medicine

CADx is available for nuclear medicine images. Commercial CADx systems for the diagnosis of bone metastases in whole-body bone scans and coronary artery disease in myocardial perfusion images exist.

With high sensitivity and an acceptable false-lesion detection rate, computer-aided automatic lesion detection systems have been demonstrated to be useful and will probably, in the future, help nuclear medicine physicians to identify possible bone lesions.

Diabetic retinopathy

Diabetic retinopathy is a disease of the retina that is diagnosed predominantly by fundoscopic images. Diabetic patients in industrialised countries generally undergo regular screening for the condition. Imaging is used to recognize early signs of abnormal retinal blood vessels. Manual analysis of these images can be time-consuming and unreliable. CAD has been employed to enhance the accuracy, sensitivity, and specificity of automated detection methods. The use of some CAD systems to replace human graders can be safe and cost effective.

Image pre-processing, followed by feature extraction and classification, are the two main stages of these CAD algorithms.

Pre-processing methods

Image normalization minimizes the variation across the entire image. Intensity variations between the periphery and the central macular region of the eye have been reported to cause inaccuracy in vessel segmentation. According to the 2014 review, this technique appeared in 11 of the 40 recently (since 2011) published primary research studies.
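
A minimal sketch of such a normalization step (simple min-max rescaling, assuming NumPy; real systems may use more elaborate schemes):

import numpy as np

def normalize_image(image: np.ndarray) -> np.ndarray:
    """Rescale intensities to the [0, 1] range to reduce image-wide variation."""
    image = image.astype(np.float32)
    lo, hi = float(image.min()), float(image.max())
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)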

Histogram Equalization Sample Image. Left: Normal gray-scale fundoscopic image. Right: Post-histogram equalization processing.
 
Histogram equalization is useful in enhancing contrast within an image. This technique is used to increase local contrast. At the end of the processing, areas that were dark in the input image are brightened, greatly enhancing the contrast among the features present in the area. On the other hand, brighter areas in the input image remain bright or are reduced in brightness to equalize with the other areas in the image. Besides vessel segmentation, other features related to diabetic retinopathy can be further separated by using this pre-processing technique. Microaneurysms and hemorrhages are red lesions, whereas exudates are yellow spots; increasing the contrast between these two groups allows better visualization of lesions on images. The 2014 review found this technique used in 10 of the 14 recently (since 2011) published primary research studies.
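
A short sketch, assuming OpenCV and an 8-bit grey-scale fundus image: global histogram equalization, plus CLAHE as a common local-contrast variant.

import cv2
import numpy as np

def equalize(gray_image: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grey-scale image."""
    return cv2.equalizeHist(gray_image)

def local_equalize(gray_image: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization for local contrast."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_image)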

Green channel filtering is another technique that is useful in differentiating lesions rather than vessels. This method is important because it provides maximal contrast for diabetic retinopathy-related lesions. Microaneurysms and hemorrhages are red lesions that appear dark after application of green channel filtering. In contrast, exudates, which appear yellow in the normal image, are transformed into bright white spots after green filtering. This is the most frequently used technique according to the 2014 review, appearing in 27 of the 40 articles published in the past three years. In addition, green channel filtering can be used to detect the center of the optic disc in conjunction with a double-windowing system.
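
As a minimal sketch, the green channel of an RGB fundus image is simply its second channel (assuming NumPy and an H x W x 3 array in RGB channel order):

import numpy as np

def green_channel(rgb_image: np.ndarray) -> np.ndarray:
    """Return the green channel, where red lesions appear dark and exudates bright."""
    return rgb_image[:, :, 1]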

Non-uniform illumination correction is a technique that adjusts for non-uniform illumination in the fundoscopic image. Non-uniform illumination can be a source of error in automated detection of diabetic retinopathy because it changes the statistical characteristics of the image. These changes can affect later processing, such as feature extraction, and are not observable by humans. The corrected intensity (f') can be obtained by modifying the known original pixel intensity (f) using the local average intensity (λ) and the desired average intensity (μ) (see the formula below). A Walter-Klein transformation is then applied to achieve uniform illumination. This technique is the least used pre-processing method in the review from 2014.

    f' = f + μ − λ
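
A sketch of that correction, assuming SciPy: the local average λ is estimated with a large uniform (mean) filter, and each pixel is shifted toward the desired average μ.

import numpy as np
from scipy import ndimage

def correct_illumination(image: np.ndarray, window: int = 65, mu: float = 0.5) -> np.ndarray:
    """Apply f' = f + mu - lambda, where lambda is the local mean around each pixel."""
    image = image.astype(np.float32)
    local_mean = ndimage.uniform_filter(image, size=window)   # lambda estimate
    return image + mu - local_mean
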
Morphological operations are the second least used pre-processing method in the 2014 review. The main objective of this method is to provide contrast enhancement, especially of darker regions compared to the background.
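
A brief sketch, assuming OpenCV: a black-hat transform is one common morphological operation that enhances dark structures (such as vessels) against a brighter background.

import cv2
import numpy as np

def blackhat_enhance(gray_image: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """Highlight dark regions by subtracting the image from its morphological closing."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(gray_image, cv2.MORPH_BLACKHAT, kernel)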

Feature extractions and classifications

After pre-processing of the funduscopic image, the image is further analyzed using different computational methods. The current literature agrees, however, that some methods are used more often than others during vessel segmentation analyses. These methods are SVM, multi-scale, vessel-tracking, region-growing, and model-based approaches.

Support Vector Machine. Support vectors (dashed lines) are created to maximize the separation between two groups.

Support vector machine is by far the most frequently used classifier in vessel segmentation, in up to 90% of cases. SVM is a supervised learning model that belongs to the broader category of pattern recognition techniques. The algorithm works by creating the largest possible gap between distinct samples in the data, so as to minimize the potential error in classification. In order to successfully segregate blood vessel information from the rest of the eye image, the SVM algorithm creates support vectors that separate the blood vessel pixels from the rest of the image in a supervised setting. Blood vessels can then be detected in new images in a similar manner using the support vectors. Combination with other pre-processing techniques, such as green channel filtering, greatly improves the accuracy of detecting blood vessel abnormalities (a short sketch follows the list below). Some beneficial properties of SVM include:
  • Flexibility – Highly flexible in terms of function
  • Simplicity – Simple, especially with large datasets (only support vectors are needed to create separation between data)
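
A simplified sketch, assuming scikit-learn: per-pixel feature vectors (for example, green-channel intensity plus a few local filter responses) and vessel/non-vessel labels are assumed to have been prepared elsewhere, and the function names are invented.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_vessel_classifier(pixel_features, pixel_labels):
    """Fit an SVM that separates vessel pixels from background pixels."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    return clf.fit(pixel_features, pixel_labels)

def predict_vessel_mask(clf, new_pixel_features, image_shape):
    """Predict a binary vessel mask for a new image's per-pixel features."""
    return clf.predict(new_pixel_features).reshape(image_shape)
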
Multi-scale approach is a multiple resolution approach in vessel segmentation. At low resolution, large-diameter vessels can first be extracted. By increasing resolution, smaller branches from the large vessels can be easily recognized. Therefore, one advantage of using this technique is the increased analytical speed. Additionally, this approach can be used with 3D images. The surface representation is a surface normal to the curvature of the vessels, allowing the detection of abnormalities on vessel surface.

Vessel tracking is the ability of the algorithm to detect the "centerline" of vessels. These centerlines are the maximal peaks of vessel curvature. Centers of vessels can be found using directional information provided by a Gaussian filter. Similar approaches that utilize the concept of the centerline are the skeleton-based and differential geometry-based methods.
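
As a minimal sketch of the centerline idea, assuming scikit-image and SciPy: Gaussian smoothing of a binary vessel mask followed by skeletonization yields one-pixel-wide approximate centerlines.

import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_centerlines(vessel_mask: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return a one-pixel-wide skeleton approximating the vessel centerlines."""
    smoothed = ndimage.gaussian_filter(vessel_mask.astype(float), sigma=sigma) > 0.5
    return skeletonize(smoothed)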

Region growing approach is a method of detecting neighboring pixels with similarities. A seed point is required for this method to start. Two elements are needed for this technique to work: similarity and spatial proximity. A neighboring pixel to the seed pixel with similar intensity is likely to be of the same type and will be added to the growing region. One disadvantage of this technique is that it requires manual selection of a seed point, which introduces bias and inconsistency into the algorithm. This technique is also being used in optic disc identification.
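
A toy sketch of region growing from a manually chosen seed, assuming NumPy: neighbouring pixels are added while their intensity stays within a tolerance of the seed intensity.

import numpy as np

def region_grow(image: np.ndarray, seed: tuple, tol: float = 0.1) -> np.ndarray:
    """Return a boolean mask of the region grown from the seed pixel."""
    h, w = image.shape
    seed_value = float(image[seed])
    region = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if region[y, x]:
            continue
        region[y, x] = True                                   # accept the pixel
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):     # 4-connected neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(float(image[ny, nx]) - seed_value) <= tol:
                    stack.append((ny, nx))
    return region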

Model-based approaches employ representations to extract vessels from images. Three broad categories of model-based approaches are known: deformable, parametric, and template matching. Deformable methods use objects that are deformed to fit the contours of the objects in the image. Parametric methods use geometric parameters such as tubular, cylindrical, or ellipsoidal representations of blood vessels. A classical snake contour in combination with blood vessel topological information can also be used as a model-based approach. Lastly, template matching uses a template fitted by a stochastic deformation process using a hidden Markov model.

Effects on employment

Automation of medical diagnostic labor (for example, quantifying red blood cells) has some historical precedent. The deep learning revolution of the 2010s has already produced AIs that are more accurate than radiologists and dermatologists in many areas of visual diagnosis, and this gap is expected to grow. Some experts, including many doctors, are dismissive of the effects that AI will have on medical specialties. In contrast, many economists and artificial intelligence experts believe that fields such as radiology will be massively disrupted, with unemployment or downward pressure on radiologists' wages; hospitals will need fewer radiologists overall, and many of those who remain will require substantial retraining. Geoffrey Hinton, the "Godfather of deep learning", argues that (in view of the advances likely in the next five or ten years) hospitals should immediately stop training radiologists, as their time-consuming and expensive training in visual diagnosis will soon be mostly obsolete, leading to a glut of traditional radiologists. An op-ed in JAMA argues that pathologists and radiologists should merge into a single "information specialist" role, and states that "To avoid being replaced by computers, radiologists must allow themselves to be displaced by computers." Information specialists would be trained in "Bayesian logic, statistics, data science", and some genomics and biometrics; manual visual pattern recognition would be greatly de-emphasized compared with current onerous radiology training.

Applications of artificial intelligence

From Wikipedia, the free encyclopedia

Artificial intelligence, defined as intelligence exhibited by machines, has many applications in today's society. More specifically, it is Weak AI, the form of A.I. where programs are developed to perform specific tasks, that is being utilized for a wide range of activities including medical diagnosis, electronic trading, robot control, and remote sensing. AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more.

AI for Good

AI for Good is a movement in which institutions are employing AI to tackle some of the world's greatest economic and social challenges. For example, the University of Southern California launched the Center for Artificial Intelligence in Society, with the goal of using AI to address socially relevant problems such as homelessness. At Stanford, researchers are using AI to analyze satellite images to identify which areas have the highest poverty levels.

Aviation

The Air Operations Division (AOD) uses AI for rule-based expert systems. The AOD uses artificial intelligence for surrogate operators in combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.

The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators use artificial intelligence to process data taken from simulated flights. Beyond simulated flying, there is also simulated aircraft warfare, for which the computers can determine the best success scenarios and create strategies based on the placement, size, speed, and strength of the forces and counterforces. Computers can also assist pilots in the air during combat: AI programs can sort the information and suggest the best possible maneuvers, while ruling out maneuvers that would be impossible for a human being to perform. Because multiple aircraft are needed to obtain good approximations for some calculations, computer-simulated pilots are used to gather data; these simulated pilots are also used to train future air traffic controllers.

The system used by the AOD to measure performance was the Interactive Fault Diagnosis and Isolation System, or IFDIS. It is a rule-based expert system built by collecting information from TF-30 documents and expert advice from mechanics who work on the TF-30. The system was designed for the development of the TF-30 for the RAAF F-111C. It was also used to replace specialized workers: the system allowed regular workers to consult it and avoid mistakes, miscalculations, or having to speak to one of the specialized workers.

The AOD also uses artificial intelligence in speech recognition software. Air traffic controllers give directions to the artificial pilots, and the AOD wants the pilots to respond to the ATCs with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks. The program used, the Verbex 7000, is still a very early program with plenty of room for improvement. The improvements are imperative because ATCs use very specific dialog and the software needs to communicate correctly and promptly every time.
The Artificial Intelligence supported Design of Aircraft, or AIDA, is used to help designers create conceptual aircraft designs. The program allows designers to focus more on the design itself and less on the design process, and it reduces the attention the user must give to the software tools. AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective.

In 2003, NASA's Dryden Flight Research Center, and many other companies, created software that could enable a damaged aircraft to continue flight until a safe landing zone can be reached. The software compensates for all the damaged components by relying on the undamaged components. The neural network used in the software proved to be effective and marked a triumph for artificial intelligence.

The Integrated Vehicle Health Management system, also used by NASA on board aircraft, must process and interpret data taken from the various sensors on the aircraft. The system needs to determine the structural integrity of the aircraft and implement protocols in case of any damage taken by the vehicle.

Haitham Baomar and Peter Bentley are leading a team at University College London to develop an artificial-intelligence-based Intelligent Autopilot System (IAS) designed to teach an autopilot system to behave like a highly experienced pilot faced with an emergency situation such as severe weather, turbulence, or system failure. Educating the autopilot relies on the concept of supervised machine learning, "which treats the young autopilot as a human apprentice going to a flying school". The autopilot records the actions of the human pilot, generating learning models using artificial neural networks. The autopilot is then given full control and observed by the pilot as it executes the training exercise.

The Intelligent Autopilot System combines the principles of apprenticeship learning and behavioural cloning, whereby the autopilot observes both the low-level actions required to maneuver the airplane and the high-level strategy used to apply those actions. The IAS implementation employs three phases: pilot data collection, training, and autonomous control. Baomar and Bentley's goal is to create a more autonomous autopilot to assist pilots in responding to emergency situations.

Computer science

AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered a part of AI. (See AI effect.) According to Russell & Norvig (2003, p. 15), all of the following were originally developed in AI laboratories: time sharing, interactive interpreters, graphical user interfaces and the computer mouse, rapid development environments, the linked list data structure, automatic storage management, symbolic programming, functional programming, dynamic programming and object-oriented programming.

AI can potentially be used to determine the developer of anonymous binaries.
AI can be used to create other AI. For example, around November 2017, Google's AutoML project to evolve new neural net topologies created NASNet, a system optimized for ImageNet and COCO. According to Google, NASNet's performance exceeded all previously published ImageNet performance.

Education

There are a number of companies that create robots to teach subjects to children from biology to computer science, though such tools have not yet become widespread. There has also been a rise of intelligent tutoring systems, or ITS, in higher education. For example, an ITS called SHERLOCK teaches Air Force technicians to diagnose electrical system problems in aircraft. Another example comes from DARPA, the Defense Advanced Research Projects Agency, which used AI to develop a digital tutor to train Navy recruits in technical skills in a shorter amount of time. Universities have been slow to adopt AI technologies due to either a lack of funding or skepticism about the effectiveness of these tools, but in the coming years more classrooms will use technologies such as ITS to complement teachers.

Advancements in natural language processing, combined with machine learning, have also enabled automatic grading of assignments as well as a data-driven understanding of individual students' learning needs. This contributed to an explosion in popularity of MOOCs, or Massive Open Online Courses, which allow students from around the world to take classes online. Data sets collected from these large-scale online learning systems have also enabled learning analytics, which can be used to improve the quality of learning at scale, for example by predicting which students are at risk of failure and by analyzing student engagement.
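
A minimal learning-analytics sketch of the at-risk prediction idea, assuming a small table of per-student engagement features and a pass/fail label; the feature names, the toy data, and the choice of logistic regression are all illustrative.

    # Minimal sketch: flag students at risk of failing from simple engagement features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Columns: weekly logins, assignments submitted, forum posts; label 1 = failed.
    X = np.array([[12, 8, 5], [2, 1, 0], [9, 7, 3], [1, 2, 1], [15, 9, 6], [3, 2, 0]])
    y = np.array([0, 1, 0, 1, 0, 1])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, stratify=y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    # Failure probabilities can be used to flag students for early intervention.
    at_risk = model.predict_proba(X_test)[:, 1] > 0.5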

Finance

Algorithmic trading

Algorithmic trading involves the use of complex AI systems to make trading decisions at speeds several orders of magnitude greater than any human is capable of, often making millions of trades in a day without any human intervention. Automated trading systems are typically used by large institutional investors.
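
As a toy illustration of the kind of rule an automated system can evaluate and act on without human intervention, here is a minimal moving-average crossover signal; the window lengths are arbitrary and this is not any firm's actual strategy.

    # Minimal sketch: a moving-average crossover trading signal.
    import pandas as pd

    def crossover_signal(prices, fast=10, slow=30):
        px = pd.Series(prices)
        fast_ma = px.rolling(fast).mean()
        slow_ma = px.rolling(slow).mean()
        # +1 = hold long, -1 = hold short, 0 = not enough history for both averages.
        return (fast_ma > slow_ma).astype(int) - (fast_ma < slow_ma).astype(int)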

Market analysis and data mining

Several large financial institutions have invested in AI engines to assist with their investment practices. BlackRock's AI engine, Aladdin, is used both within the company and by clients to help with investment decisions. Its wide range of functionalities includes the use of natural language processing to read text such as news, broker reports, and social media feeds; it then gauges sentiment on the companies mentioned and assigns a score. Banks such as UBS and Deutsche Bank use an AI engine called Sqreem (Sequential Quantum Reduction and Extraction Model), which can mine data to develop consumer profiles and match them with the wealth management products they would most likely want. Goldman Sachs uses Kensho, a market analytics platform that combines statistical computing with big data and natural language processing; its machine learning systems mine vast amounts of data on the web and assess correlations between world events and their impact on asset prices. Information extraction, a subfield of artificial intelligence, is used to extract information from live news feeds and to assist with investment decisions.
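
A minimal sketch of the sentiment-scoring step described above, using a tiny keyword lexicon; real engines such as Aladdin or Kensho use far richer NLP models, so the word lists and scoring rule here are purely illustrative.

    # Minimal sketch: score the sentiment of headlines about a company with a tiny lexicon.
    POSITIVE = {"beats", "growth", "record", "upgrade", "strong"}
    NEGATIVE = {"misses", "lawsuit", "downgrade", "recall", "weak"}

    def sentiment_score(headlines):
        score = 0
        for headline in headlines:
            words = set(headline.lower().split())
            score += len(words & POSITIVE) - len(words & NEGATIVE)
        # Positive totals suggest bullish coverage, negative totals bearish coverage.
        return score

    # sentiment_score(["ACME beats earnings estimates", "ACME faces recall lawsuit"]) -> -1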

Personal finance

Several products are emerging that use AI to assist people with their personal finances. For example, Digit is an app powered by artificial intelligence that automatically helps consumers optimize their spending and savings based on their own personal habits and goals. The app can analyze factors such as monthly income, current balance, and spending habits, then make its own decisions and transfer money to the savings account. Wallet.AI, an upcoming startup in San Francisco, builds agents that analyze the data a consumer leaves behind, from smartphone check-ins to tweets, to inform the consumer about their spending behavior.

Portfolio management

Robo-advisors are becoming more widely used in the investment management industry. Robo-advisors provide financial advice and portfolio management with minimal human intervention. This class of financial advisers works from algorithms built to automatically develop a financial portfolio according to the investment goals and risk tolerance of the client, and it can adjust to real-time changes in the market and recalibrate the portfolio accordingly.
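
A minimal sketch of the rule-based core of a robo-advisor: a client's risk tolerance is mapped to a stock/bond split and the trades needed to rebalance are computed. The linear mapping is an illustrative rule of thumb, not any provider's actual model.

    # Minimal sketch: map risk tolerance (0-10) to an allocation and rebalance holdings.
    def target_allocation(risk_tolerance):
        stocks = min(max(risk_tolerance, 0), 10) / 10 * 0.9 + 0.1  # 10% to 100% stocks
        return {"stocks": round(stocks, 2), "bonds": round(1 - stocks, 2)}

    def rebalance(holdings, risk_tolerance):
        total = sum(holdings.values())
        target = target_allocation(risk_tolerance)
        # Positive numbers mean "buy this much", negative mean "sell this much".
        return {asset: round(target[asset] * total - holdings.get(asset, 0), 2)
                for asset in target}

    # rebalance({"stocks": 50_000, "bonds": 50_000}, risk_tolerance=7)
    # -> {'stocks': 23000.0, 'bonds': -23000.0}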

Underwriting

The online lender Upstart analyzes vast amounts of consumer data and uses machine learning algorithms to develop credit risk models that predict a consumer's likelihood of default. The company plans to license its technology to banks for use in their underwriting processes as well.

ZestFinance developed its Zest Automated Machine Learning (ZAML) platform specifically for credit underwriting as well. The platform uses machine learning to analyze tens of thousands of traditional and nontraditional variables (from purchase transactions to how a customer fills out a form) used in the credit industry to score borrowers. It is particularly useful for assigning credit scores to those with limited credit histories, such as millennials.
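
A minimal sketch of a machine-learning credit-risk model in the spirit described above, trained on a handful of made-up applicant features; real underwriting platforms use thousands of variables and vastly more data, so everything here is illustrative.

    # Minimal sketch: predict probability of default from a few made-up features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Columns: annual income (k$), debt-to-income ratio, months of credit history.
    X = np.array([[45, 0.40, 12], [90, 0.15, 84], [30, 0.55, 6],
                  [70, 0.25, 48], [25, 0.60, 3], [60, 0.30, 36]])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = defaulted

    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    applicant = np.array([[55, 0.35, 24]])
    default_probability = model.predict_proba(applicant)[0, 1]  # used to score the borrower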

Geography and Ecology

An application is given by Papadimitriou (2012), in the Prolog language, with reference to Mediterranean landscapes.

Job Search

The job market has seen notable change due to the implementation of artificial intelligence, which has simplified the process for both recruiters and job seekers (e.g., Google for Jobs and applying online). According to Raj Mukherjee of Indeed.com, 65% of people launch a job search again within 91 days of being hired. AI-powered engines streamline the complexity of job hunting by processing information on job skills, salaries, and user tendencies, and matching people to the most relevant positions. Machine intelligence calculates appropriate wages for a particular job and uses natural language processing, which extracts relevant words and phrases from text, to pull and highlight resume information for recruiters. Another application is an AI resume builder, which takes about five minutes to compile a CV, compared with spending hours on the same job. In the AI age, chatbots assist website visitors and handle daily workflows, and AI tools complement people's skills and allow HR managers to focus on higher-priority tasks. However, research on the impact of artificial intelligence on jobs suggests that by 2030 intelligent agents and robots could eliminate 30% of the world's human labor, displacing between 400 and 800 million workers. Glassdoor's research report states that recruiting and HR are expected to see much broader adoption of AI in the job market in 2018 and beyond.

Heavy industry

Robots have become common in many industries and are often given jobs that are considered dangerous to humans. They have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in jobs that humans may find degrading.

In 2014, China, Japan, the United States, the Republic of Korea, and Germany together accounted for 70% of the total sales volume of robots. In the automotive industry, a sector with a particularly high degree of automation, Japan had the highest density of industrial robots in the world: 1,414 per 10,000 employees.

Hospitals and medicine

X-ray of a hand, with automatic calculation of bone age by computer software.

Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in Concept Processing technology in EMR software.

Other tasks in medicine that can potentially be performed by artificial intelligence and are beginning to be developed include:
  • Computer-aided interpretation of medical images. Such systems help scan digital images, e.g. from computed tomography, for typical appearances and highlight conspicuous sections, such as possible diseases. A typical application is the detection of a tumor (a minimal sketch appears below this list).
  • Heart sound analysis
  • Companion robots for the care of the elderly
  • Mining medical records to provide more useful information.
  • Design treatment plans.
  • Assist in repetitive jobs including medication management.
  • Provide consultations.
  • Drug creation
  • Using avatars in place of patients for clinical training
  • Predict the likelihood of death from surgical procedures
  • Predict HIV progression
Currently, there are over 90 AI startups in the health industry working in these fields.
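
As a minimal sketch of the computer-aided image interpretation item above, here is a small convolutional network in Keras that labels image patches as "lesion" versus "normal". The architecture, input size, and labels are illustrative assumptions; the model is untrained and would need expert-annotated scans before being of any diagnostic use.

    # Minimal sketch: a small CNN that classifies 64x64 grayscale patches as lesion/normal.
    import tensorflow as tf

    def build_patch_classifier():
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(64, 64, 1)),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "lesion"
        ])

    model = build_patch_classifier()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(patches, labels, epochs=10)  # patches: (N, 64, 64, 1), labels: 0/1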

IDx's first solution, IDx-DR, is the first autonomous AI-based diagnostic system authorized for commercialization by the FDA.

Human resources and recruiting

Another application of AI is in the human resources and recruiting space. There are three ways AI is being used by human resources and recruiting professionals. AI is used to screen resumes and rank candidates according to their level of qualification. AI is also used to predict candidate success in given roles through job-matching platforms. And now, AI is rolling out recruiting chatbots that can automate repetitive communication tasks.

Typically, resume screening involves a recruiter or other HR professional scanning through a database of resumes. Now startups like Pomato are creating machine learning algorithms to automate resume screening. Pomato's resume-screening AI focuses on automating the validation of technical applicants for technical staffing firms; it performs over 200,000 computations on each resume in seconds and then designs a custom technical interview based on the mined skills. KE Solutions, founded in 2014, has developed recommendation systems to rank jobs for candidates and rank resumes for employers. jobster.io, developed by KE Solutions, uses concept-based search and has increased accuracy by 80% compared to traditional ATS, helping recruiters to overcome technical barriers.
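
To make the ranking idea concrete, here is a minimal sketch that scores resumes against a job description by TF-IDF cosine similarity. This is a generic technique and not the algorithm of Pomato, KE Solutions, or any other vendor; the example texts are made up.

    # Minimal sketch: rank resumes against a job description by TF-IDF cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_resumes(job_description, resumes):
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform([job_description] + resumes)
        scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
        # Highest-scoring resumes are the closest textual match to the job posting.
        return sorted(zip(resumes, scores), key=lambda pair: pair[1], reverse=True)

    ranked = rank_resumes("Python developer with machine learning experience",
                          ["Java engineer, 5 years backend",
                           "Data scientist, Python and machine learning"])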

From 2016 to 2017, the consumer goods company Unilever used artificial intelligence to screen all entry-level employees. Unilever's AI used neuroscience-based games, recorded interviews, and facial/speech analysis to predict hiring success. Unilever partnered with Pymetrics and HireVue to enable this AI-based screening and increased its applicant pool from 15,000 to 30,000 in a single year. Recruiting with AI also produced Unilever's "most diverse class to date." Unilever also decreased time to hire from four months to four weeks and saved over 50,000 hours of recruiter time.

From resume screening to neuroscience, speech recognition, and facial analysis, it is clear AI is having a significant impact on the human resources field. Yet another development in AI is in recruiting chatbots. TextRecruit, a Bay Area startup, released Ari (automated recruiting interface), a recruiting chatbot designed to hold two-way text message conversations with candidates. Ari automates posting jobs, advertising openings, screening candidates, scheduling interviews, and nurturing candidate relationships with updates as they progress along the hiring funnel. Ari is currently offered as part of TextRecruit's candidate engagement platform.

Media and E-commerce

Some AI applications are geared towards the analysis of audiovisual media content such as movies, TV programs, advertisement videos or user-generated content. The solutions often involve computer vision, which is a major application area of AI.

Typical use case scenarios include the analysis of images using object recognition or face recognition techniques, or the analysis of video for recognizing relevant scenes, objects or faces. The motivation for using AI-based media analysis can be — among other things — the facilitation of media search, the creation of a set of descriptive keywords for a media item, media content policy monitoring (such as verifying the suitability of content for a particular TV viewing time), speech to text for archival or other purposes, and the detection of logos, products or celebrity faces for the placement of relevant advertisements.

Media analysis AI companies often provide their services over a REST API that enables machine-based automatic access to the technology and allows machine-reading of the results. For example, IBM, Microsoft, Amazon and the video AI company Valossa allow access to their media recognition technology by using RESTful APIs.

AI is also widely used in the e-commerce industry for applications such as visual search, visually similar recommendations, chatbots, and automated product tagging. Another generic application is increasing search discoverability and making social media content shoppable.

Music

While the evolution of music has always been affected by technology, artificial intelligence has, through scientific advances, enabled the emulation, to some extent, of human-like composition. Among notable early efforts, David Cope created an AI called Emily Howell that managed to become well known in the field of algorithmic computer music.[30] The algorithm behind Emily Howell is registered as a US patent.

In 2012, the AI Iamus created the first complete classical album fully composed by a computer.
Other endeavours, like AIVA (Artificial Intelligence Virtual Artist), focus on composing symphonic music, mainly classical music for film scores. It achieved a world first by becoming the first virtual composer to be recognized by a musical professional association.

Artificial intelligences can even produce music usable in a medical setting, with Melomics’s effort to use computer-generated music for stress and pain relief.

Moreover, initiatives such as Google Magenta, conducted by the Google Brain team, want to find out if an artificial intelligence can be capable of creating compelling art.

At Sony CSL Research Laboratory, their Flow Machines software has created pop songs by learning music styles from a huge database of songs. By analyzing unique combinations of styles and optimizing techniques, it can compose in any style.

Another artificial intelligence musical composition project, The Watson Beat, written by IBM Research, doesn't need a huge database of music like the Google Magenta and Flow Machines projects, since it uses reinforcement learning and deep belief networks to compose music from a simple seed input melody and a selected style. Since the software has been open-sourced, musicians such as Taryn Southern have been collaborating with the project to create music.

News, publishing and writing

The company Narrative Science makes computer generated news and reports commercially available, including summarizing team sporting events based on statistical data from the game in English. It also creates financial reports and real estate analyses. Similarly, the company Automated Insights generates personalized recaps and previews for Yahoo Sports Fantasy Football. The company is projected to generate one billion stories in 2014, up from 350 million in 2013.

Echobox is a software company that helps publishers increase traffic by 'intelligently' posting articles on social media platforms such as Facebook and Twitter. By analysing large amounts of data, it learns how specific audiences respond to different articles at different times of the day. It then chooses the best stories to post and the best times to post them. It uses both historical and real-time data to understand what has worked well in the past as well as what is currently trending on the web.

Another company, called Yseop, uses artificial intelligence to turn structured data into intelligent comments and recommendations in natural language. Yseop is able to write financial reports, executive summaries, personalized sales or marketing documents and more at a speed of thousands of pages per second and in multiple languages including English, Spanish, French & German.

Boomtrain is another example of AI that is designed to learn how best to engage each individual reader with the exact articles, sent through the right channel at the right time, that will be most relevant to the reader. It is like hiring a personal editor for each individual reader to curate the perfect reading experience.

There is also the possibility that AI will write creative works in the future. In 2016, a Japanese AI co-wrote a short story and almost won a literary prize.

Online and telephone customer service

An automated online assistant providing customer service on a web page.

Artificial intelligence is implemented in automated online assistants that appear as avatars on web pages. It can help enterprises reduce their operating and training costs. A major underlying technology for such systems is natural language processing. Pypestream uses automated customer service for its mobile application designed to streamline communication with customers.

Currently, major companies are investing in AI to handle difficult customers in the future. Google's most recent development analyzes language and converts speech into text. The platform can identify angry customers through their language and respond appropriately.

Sensors

Artificial intelligence has been combined with many sensor technologies, such as Digital Spectrometry™ by IdeaCuria Inc., which enables many applications such as at-home water quality monitoring.

Telecommunications maintenance

Many telecommunications companies make use of heuristic search in the management of their workforces; for example, BT Group has deployed heuristic search in a scheduling application that provides the work schedules of 20,000 engineers.
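
A minimal sketch of a scheduling heuristic: each job is assigned to the currently least-loaded engineer (greedy load balancing). Real workforce schedulers such as BT's search over travel time, skills, and shift constraints as well, so this is illustrative only.

    # Minimal sketch: greedy assignment of jobs to the least-loaded engineer.
    import heapq

    def assign_jobs(job_durations, n_engineers):
        # Heap of (total assigned hours, engineer id); pop the least-loaded engineer.
        heap = [(0.0, e) for e in range(n_engineers)]
        heapq.heapify(heap)
        schedule = {e: [] for e in range(n_engineers)}
        for job, hours in sorted(job_durations.items(), key=lambda kv: -kv[1]):
            load, engineer = heapq.heappop(heap)
            schedule[engineer].append(job)
            heapq.heappush(heap, (load + hours, engineer))
        return schedule

    # assign_jobs({"fault A": 3, "install B": 2, "survey C": 1}, n_engineers=2)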

Toys and games

The 1990s saw some of the first attempts to mass-produce basic artificial intelligence aimed at the home, for education or leisure. This prospered greatly with the Digital Revolution and helped introduce people, especially children, to a life of dealing with various types of artificial intelligence, specifically in the form of Tamagotchis and Giga Pets, the iPod Touch, the Internet, and the first widely released robot, Furby. A mere year later an improved type of domestic robot was released in the form of Aibo, a robotic dog with intelligent features and autonomy.

Companies like Mattel have been creating an assortment of AI-enabled toys for kids as young as age three. Using proprietary AI engines and speech recognition tools, they are able to understand conversations, give intelligent responses and learn quickly.

AI has also been applied to video games, for example video game bots, which are designed to stand in as opponents where humans aren't available or desired.

Transportation

Fuzzy logic controllers have been developed for automatic gearboxes in automobiles. For example, the 2006 Audi TT, VW Touareg, and VW Caravelle feature the DSP transmission, which utilizes fuzzy logic. A number of Škoda variants (such as the Škoda Fabia) also currently include a fuzzy logic-based controller.
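
A minimal sketch of the fuzzy-logic idea behind such controllers: throttle position and speed are mapped to fuzzy "low"/"high" memberships and combined by a fuzzy AND into an upshift tendency. The membership ranges and the single rule are illustrative, not any manufacturer's control law.

    # Minimal sketch of fuzzy control: compute an upshift tendency between 0 and 1.
    def membership_high(value, low, high):
        # Degree (0..1) to which `value` counts as "high" on a linear ramp.
        if value <= low:
            return 0.0
        if value >= high:
            return 1.0
        return (value - low) / (high - low)

    def upshift_tendency(throttle_pct, speed_kmh):
        throttle_low = 1.0 - membership_high(throttle_pct, 20, 80)
        speed_high = membership_high(speed_kmh, 40, 120)
        # Rule: IF throttle is low AND speed is high THEN upshift (min acts as fuzzy AND).
        return min(throttle_low, speed_high)

    # upshift_tendency(throttle_pct=25, speed_kmh=100) -> about 0.75 (gentle driving, upshift)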

Today's cars now have AI-based driver assist features such as self-parking and advanced cruise controls. AI has been used to optimize traffic management applications, which in turn reduces wait times, energy use, and emissions by as much as 25 percent. In the future, fully autonomous cars will be developed. AI in transportation is expected to provide safe, efficient, and reliable transportation while minimizing the impact on the environment and communities. The major challenge to developing this AI is the fact that transportation systems are inherently complex systems involving a very large number of components and different parties, each having different and often conflicting objectives.

Other

Various tools of artificial intelligence are also being widely deployed in homeland security, speech and text recognition, data mining, and e-mail spam filtering. Applications are also being developed for gesture recognition (understanding of sign language by machines), individual voice recognition, global voice recognition (from a variety of people in a noisy room), and facial expression recognition for the interpretation of emotion and nonverbal cues. Other applications are robot navigation, obstacle avoidance, and object recognition.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...