Monday, January 15, 2024

Discourse

From Wikipedia, the free encyclopedia

In the humanities and social sciences, discourse describes a formal way of thinking that can be expressed through language. Discourse is a social boundary that defines what statements can be said about a topic. Many definitions of discourse are largely derived from the work of French philosopher Michel Foucault. In sociology, discourse is defined as "any practice (found in a wide range of forms) by which individuals imbue reality with meaning".

Political science sees discourse as closely linked to politics and policy making. Likewise, different theories among various disciplines understand discourse as linked to power and state, insofar as the control of discourses is understood as a hold on reality itself (e.g. if a state controls the media, they control the "truth"). In essence, discourse is inescapable, since any use of language will have an effect on individual perspectives. In other words, the chosen discourse provides the vocabulary, expressions, or style needed to communicate. For example, two notably distinct discourses can be used about various guerrilla movements, describing them either as "freedom fighters" or "terrorists".

In psychology, discourses are embedded in different rhetorical genres and meta-genres that constrain and enable them—language talking about language. This is exemplified in the APA's Diagnostic and Statistical Manual of Mental Disorders, which prescribes the terms that must be used in speaking about mental health, thereby mediating meanings and dictating the practices of professionals in psychology and psychiatry.

Modernism

Modernist theorists were focused on achieving progress and believed in the existence of natural and social laws which could be used universally to develop knowledge and thus a better understanding of society. Such theorists would be preoccupied with obtaining the "truth" and "reality", seeking to develop theories which contained certainty and predictability. Modernist theorists therefore understood discourse to be functional. Discourse and language transformations are ascribed to progress or the need to develop new or more "accurate" words to describe new discoveries, understandings, or areas of interest. In modernist theory, language and discourse are dissociated from power and ideology and instead conceptualized as "natural" products of common sense usage or progress. Modernism further gave rise to the liberal discourses of rights, equality, freedom, and justice; however, this rhetoric masked substantive inequality and failed to account for differences, according to Regnier.

Structuralism (Saussure & Lacan)

Structuralist theorists, such as Ferdinand de Saussure and Jacques Lacan, argue that all human actions and social formations are related to language and can be understood as systems of related elements. This means that the "individual elements of a system only have significance when considered in relation to the structure as a whole, and that structures are to be understood as self-contained, self-regulated, and self-transforming entities". In other words, it is the structure itself that determines the significance, meaning and function of the individual elements of a system. Structuralism has made an important contribution to our understanding of language and social systems. Saussure's theory of language highlights the decisive role of meaning and signification in structuring human life more generally.

Poststructuralism (Foucault)

Postmodern theory emerged in response to the perceived limitations of the modern era. Postmodern theorists rejected modernist claims that there was one theoretical approach that explained all aspects of society. Rather, they were interested in examining the variety of experiences of individuals and groups, and emphasized differences over similarities and common experiences.

In contrast to modernist theory, postmodern theory is more fluid, allowing for individual differences as it rejects the notion of social laws. Such theorists shifted away from truth-seeking, and instead sought answers for how truths are produced and sustained. Postmodernists contended that truth and knowledge are plural, contextual, and historically produced through discourses. Postmodern researchers therefore embarked on analyzing discourses such as texts, language, policies, and practices.

Foucault

In the works of the philosopher Michel Foucault, a discourse is "an entity of sequences, of signs, in that they are enouncements (énoncés)." The enouncement (l’énoncé, "the statement") is a linguistic construct that allows the writer and the speaker to assign meaning to words and to communicate repeatable semantic relations to, between, and among the statements, objects, or subjects of the discourse. There exist internal relations among the signs (semiotic sequences) that are between and among the statements, objects, or subjects of the discourse. The term discursive formation identifies and describes written and spoken statements with semantic relations that produce discourses. As a researcher, Foucault applied the discursive formation to analyses of large bodies of knowledge, e.g. political economy and natural history.

In The Archaeology of Knowledge (1969), a treatise about the methodology and historiography of systems of thought ("epistemes") and of knowledge ("discursive formations"), Michel Foucault developed the concepts of discourse. The sociologist Iara Lessa summarizes Foucault's definition of discourse as "systems of thoughts composed of ideas, attitudes, courses of action, beliefs, and practices that systematically construct the subjects and the worlds of which they speak." Foucault traces the role of discourse in the legitimation of society's power to construct contemporary truths, to maintain said truths, and to determine what relations of power exist among the constructed truths; therefore discourse is a communications medium through which power relations produce men and women who can speak.

The inter-relation between power and knowledge renders every human relationship a power negotiation, because power is always present and so produces and constrains the truth. Power is exercised through rules of exclusion (discourses) that determine which subjects people can discuss; when, where, and how a person may speak; and which persons are allowed to speak. Because knowledge is both the creator of power and the creation of power, Foucault coined the term power/knowledge to show that an object becomes a "node within a network" of meanings. In The Archaeology of Knowledge, Foucault's example is a book's function as a node within a network of meanings. The book does not exist as an individual object, but as part of a structure of knowledge that is "a system of references to other books, other texts, other sentences." In his critique of power/knowledge, Foucault identified neoliberalism as a discourse of political economy, which is conceptually related to governmentality, the organized practices (mentalities, rationalities, techniques) through which people are governed.

Interdiscourse studies the external semantic relations among discourses, because a discourse exists in relation to other discourses, e.g. books of history; thus academic researchers debate and determine "What is a discourse?" and "What is not a discourse?" in accordance with the denotations and connotations (meanings) used in their academic disciplines.

Discourse analysis

There is more than one type of discourse analysis; examples include Foucauldian discourse analysis, linguistic discourse analysis, and political discourse analysis. In discourse analysis, discourse is a conceptual generalization of conversation within each modality and context of communication. In this sense, the term is studied in corpus linguistics, the study of language expressed in corpora (samples) of "real world" text.

Moreover, because a discourse is a body of text meant to communicate specific data, information, and knowledge, there exist internal relations in the content of a given discourse, as well as external relations among discourses. As such, a discourse does not exist per se (in itself), but is related to other discourses, by way of inter-discursive practices.

In François Rastier's approach to semantics, discourse is understood as the totality of codified language (i.e., vocabulary) used in a given field of intellectual enquiry and of social practice, such as legal discourse, medical discourse, religious discourse, etc. In this sense, along with Foucault's in the previous section, the analysis of a discourse examines and determines the connections among language, structure, and agency.

Formal semantics and pragmatics

In formal semantics and pragmatics, discourse is often viewed as the process of refining the information in a common ground. In some theories of semantics such as discourse representation theory, sentences' denotations themselves are equated with functions which update a common ground.
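
To make this concrete, here is a minimal Python sketch of common-ground update, assuming the common ground is modeled as a set of possible worlds and each world as a truth assignment to a few atomic propositions; the atoms and sentence meanings are illustrative assumptions, not part of any particular theory.

    # A minimal sketch of common-ground update: worlds are truth assignments
    # to atomic propositions, and asserting a sentence discards the worlds
    # where it is false.
    from itertools import product

    ATOMS = ["raining", "cold"]

    # The initial common ground: every logically possible world is still open.
    common_ground = [dict(zip(ATOMS, values))
                     for values in product([True, False], repeat=len(ATOMS))]

    def assert_sentence(ground, sentence):
        """Update the common ground: keep only worlds where the sentence holds."""
        return [world for world in ground if sentence(world)]

    # Asserting "it is raining" eliminates the worlds where it is not.
    ground = assert_sentence(common_ground, lambda w: w["raining"])
    print(len(common_ground), "->", len(ground))  # 4 -> 2

    # A further assertion refines the information again.
    ground = assert_sentence(ground, lambda w: w["cold"])
    print(ground)  # [{'raining': True, 'cold': True}]

Each assertion shrinks the set of open possibilities, which is the sense in which discourse progressively refines the information in a common ground.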

Philosophy of language

From Wikipedia, the free encyclopedia
 
In analytic philosophy, philosophy of language investigates the nature of language and the relations between language, language users, and the world. Investigations may include inquiry into the nature of meaning, intentionality, reference, the constitution of sentences, concepts, learning, and thought.

Gottlob Frege and Bertrand Russell were pivotal figures in analytic philosophy's "linguistic turn". These writers were followed by Ludwig Wittgenstein (Tractatus Logico-Philosophicus), the Vienna Circle, logical positivists, and Willard Van Orman Quine.

History

Ancient philosophy

In the West, inquiry into language stretches back to the 5th century BC with Socrates, Plato, Aristotle, and the Stoics. Both in India and in Greece, linguistic speculation predates the emergence of grammatical traditions of systematic description of language, which emerged around the 5th century BC in India (see Yāska), and around the 3rd century BC in Greece (see Rhianus).

In the dialogue Cratylus, Plato considered the question of whether the names of things were determined by convention or by nature. He criticized conventionalism because it led to the bizarre consequence that anything can be conventionally denominated by any name; hence, it cannot account for the correct or incorrect application of a name. He claimed that there was a natural correctness to names. To support this claim, he pointed out that compound words and phrases have a range of correctness. He also argued that primitive names had a natural correctness, because each phoneme represented basic ideas or sentiments. For example, for Plato the letter l and its sound represented the idea of softness. However, by the end of the Cratylus, he had admitted that some social conventions were also involved, and that there were faults in the idea that phonemes had individual meanings. Plato is often considered a proponent of extreme realism.

Aristotle concerned himself with issues of logic, categories, and meaning creation. He separated all things into categories of species and genus. He thought that the meaning of a predicate was established through an abstraction of the similarities between various individual things. This theory later came to be called nominalism. However, since Aristotle took these similarities to be constituted by a real commonality of form, he is more often considered a proponent of "moderate realism".

The Stoic philosophers made important contributions to the analysis of grammar, distinguishing five parts of speech: nouns, verbs, appellatives (names or epithets), conjunctions and articles. They also developed a sophisticated doctrine of the lektón associated with each sign of a language, but distinct from both the sign itself and the thing to which it refers. This lektón was the meaning (or sense) of every term. The complete lektón of a sentence is what we would now call its proposition. Only propositions were considered "truth-bearers" or "truth-vehicles" (i.e., they could be called true or false) while sentences were simply their vehicles of expression. Different lektá could also express things besides propositions, such as commands, questions and exclamations.

Medieval philosophy

Medieval philosophers were greatly interested in the subtleties of language and its usage. For many scholastics, this interest was provoked by the necessity of translating Greek texts into Latin. There were several noteworthy philosophers of language in the medieval period. According to Peter J. King (although this has been disputed), Peter Abelard anticipated the modern theories of reference. Also, William of Ockham's Summa Logicae brought forward one of the first serious proposals for codifying a mental language.

The scholastics of the high medieval period, such as Ockham and John Duns Scotus, considered logic to be a scientia sermocinalis (science of language). The result of their studies was the elaboration of linguistic-philosophical notions whose complexity and subtlety has only recently come to be appreciated. Many of the most interesting problems of modern philosophy of language were anticipated by medieval thinkers. The phenomena of vagueness and ambiguity were analyzed intensely, and this led to an increasing interest in problems related to the use of syncategorematic words such as and, or, not, if, and every. The study of categorematic words (or terms) and their properties was also developed greatly. One of the major developments of the scholastics in this area was the doctrine of the suppositio. The suppositio of a term is the interpretation that is given of it in a specific context. It can be proper or improper (as when it is used in metaphor, metonymy and other figures of speech). A proper suppositio, in turn, can be either formal or material, according to whether it refers to its usual non-linguistic referent (as in "Charles is a man") or to itself as a linguistic entity (as in "Charles has seven letters"). Such a classification scheme is the precursor of modern distinctions between use and mention, and between language and metalanguage.

There was a tradition called speculative grammar, which existed from the 11th to the 13th century. Leading scholars included Martin of Dacia and Thomas of Erfurt (see Modistae).

Modern philosophy

Linguists of the Renaissance and Baroque periods such as Johannes Goropius Becanus, Athanasius Kircher and John Wilkins were infatuated with the idea of a philosophical language reversing the confusion of tongues, influenced by the gradual discovery of Chinese characters and Egyptian hieroglyphs (Hieroglyphica). This thought parallels the idea that there might be a universal language of music.

European scholarship began to absorb the Indian linguistic tradition only from the mid-18th century, pioneered by Jean François Pons and Henry Thomas Colebrooke (the editio princeps of Varadarāja, a 17th-century Sanskrit grammarian, dating to 1849).

In the early 19th century, the Danish philosopher Søren Kierkegaard insisted that language ought to play a larger role in Western philosophy. He argued that philosophy has not sufficiently focused on the role language plays in cognition and that future philosophy ought to proceed with a conscious focus on language:

If the claim of philosophers to be unbiased were all it pretends to be, it would also have to take account of language and its whole significance in relation to speculative philosophy ... Language is partly something originally given, partly that which develops freely. And just as the individual can never reach the point at which he becomes absolutely independent ... so too with language.

Contemporary philosophy

The phrase "linguistic turn" was used to describe the noteworthy emphasis that contemporary philosophers put upon language.

Language began to play a central role in Western philosophy in the early 20th century. One of the central figures involved in this development was the German philosopher Gottlob Frege, whose work on philosophical logic and the philosophy of language in the late 19th century influenced the work of 20th-century analytic philosophers Bertrand Russell and Ludwig Wittgenstein. The philosophy of language became so pervasive that for a time, in analytic philosophy circles, philosophy as a whole was understood to be a matter of philosophy of language.

In continental philosophy, the foundational work in the field was Ferdinand de Saussure's Cours de linguistique générale, published posthumously in 1916.

Major topics and subfields

Meaning

The topic that has received the most attention in the philosophy of language has been the nature of meaning: what "meaning" is, and what we mean when we talk about meaning. Within this area, issues include: the nature of synonymy, the origins of meaning itself, our apprehension of meaning, and the nature of composition (the question of how meaningful units of language are composed of smaller meaningful parts, and how the meaning of the whole is derived from the meaning of its parts).

There have been several distinctive explanations of what a linguistic "meaning" is. Each has been associated with its own body of literature.

Reference

Investigations into how language interacts with the world are called theories of reference. Gottlob Frege was an advocate of a mediated reference theory. Frege divided the semantic content of every expression, including sentences, into two components: sense and reference. The sense of a sentence is the thought that it expresses. Such a thought is abstract, universal and objective. The sense of any sub-sentential expression consists in its contribution to the thought that its embedding sentence expresses. Senses determine reference and are also the modes of presentation of the objects to which expressions refer. Referents are the objects in the world that words pick out. The senses of sentences are thoughts, while their referents are truth values (true or false). The referents of sentences embedded in propositional attitude ascriptions and other opaque contexts are their usual senses.

Bertrand Russell, in his later writings and for reasons related to his theory of acquaintance in epistemology, held that the only directly referential expressions are what he called "logically proper names". Logically proper names are such terms as I, now, here and other indexicals. He viewed ordinary proper names as "abbreviated definite descriptions" (see Theory of descriptions). Hence Joseph R. Biden may be an abbreviation for "the current President of the United States and husband of Jill Biden". Definite descriptions are denoting phrases (see "On Denoting") which are analyzed by Russell into existentially quantified logical constructions. Such phrases denote in the sense that there is an object that satisfies the description. However, such objects are not to be considered meaningful on their own, but have meaning only in the proposition expressed by the sentences of which they are a part. Hence, they are not directly referential in the same way as logically proper names, for Russell.

On Frege's account, any referring expression has a sense as well as a referent. Such a "mediated reference" view has certain theoretical advantages over Mill's view that names refer directly. For example, co-referential names, such as Samuel Clemens and Mark Twain, cause problems for a directly referential view because it is possible for someone to hear "Mark Twain is Samuel Clemens" and be surprised – thus, their cognitive content seems different.

Despite the differences between the views of Frege and Russell, they are generally lumped together as descriptivists about proper names. Such descriptivism was criticized in Saul Kripke's Naming and Necessity.

Kripke put forth what has come to be known as "the modal argument" (or "argument from rigidity"). Consider the name Aristotle and the descriptions "the greatest student of Plato", "the founder of logic" and "the teacher of Alexander". Aristotle obviously satisfies all of the descriptions (and many of the others we commonly associate with him), but it is not necessarily true that if Aristotle existed then Aristotle satisfied any one, or all, of these descriptions. Aristotle may well have existed without doing any single one of the things for which he is known to posterity. He may have existed and not have become known to posterity at all or he may have died in infancy. Suppose that Aristotle is associated by Mary with the description "the last great philosopher of antiquity" and (the actual) Aristotle died in infancy. Then Mary's description would seem to refer to Plato. But this is deeply counterintuitive. Hence, names are rigid designators, according to Kripke. That is, they refer to the same individual in every possible world in which that individual exists. In the same work, Kripke articulated several other arguments against "Frege–Russell" descriptivism (see also Kripke's causal theory of reference).

The whole philosophical enterprise of studying reference has been critiqued by linguist Noam Chomsky in various works.

Composition and parts

It has long been known that there are different parts of speech. One part of the common sentence is the lexical word, a class comprising nouns, verbs, and adjectives. A major question in the field – perhaps the single most important question for formalist and structuralist thinkers – is how the meaning of a sentence emerges from its parts.

[Image: example of a syntactic tree]

Many aspects of the problem of the composition of sentences are addressed in the linguistic field of syntax. Philosophical semantics tends to focus on the principle of compositionality to explain the relationship between meaningful parts and whole sentences. The principle of compositionality asserts that a sentence can be understood on the basis of the meaning of the parts of the sentence (i.e., words, morphemes) along with an understanding of its structure (i.e., syntax, logic). Further, syntactic propositions are arranged into discourse or narrative structures, which also encode meanings through pragmatics like temporal relations and pronominals.

It is possible to use the concept of functions to describe more than just how lexical meanings work: they can also be used to describe the meaning of a sentence. In the sentence "The horse is red", "the horse" can be considered to be the product of a propositional function. A propositional function is an operation of language that takes an entity (in this case, the horse) as an input and outputs a semantic fact (i.e., the proposition that is represented by "The horse is red"). In other words, a propositional function is like an algorithm. The meaning of "red" in this case is whatever takes the entity "the horse" and turns it into the statement, "The horse is red."
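
As a rough illustration, the following Python sketch treats the meaning of "red" as exactly such a function from entities to propositions (here simplified to truth values in a toy model); the model and names are illustrative assumptions only.

    # A propositional function, as described above: "red" takes an entity
    # and yields the proposition "<entity> is red" (here, its truth value
    # in a toy model).
    RED_THINGS = {"the horse", "the barn"}  # the extension of "red" in the model

    def red(entity):
        """The meaning of "red" as a function from entities to propositions."""
        return entity in RED_THINGS

    # Compositionality: the meaning of "The horse is red" is obtained by
    # applying the meaning of the predicate to the meaning of the subject.
    print(red("the horse"))  # True: "The horse is red" holds in the model
    print(red("the cart"))   # False: "The cart is red" does not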

Linguists have developed at least two general methods of understanding the relationship between the parts of a linguistic string and how it is put together: syntactic and semantic trees. Syntactic trees draw upon the words of a sentence with the grammar of the sentence in mind, while semantic trees focus on the role of the meanings of the words and how those meanings combine to provide insight into the genesis of semantic facts.

Mind and language

Innateness and learning

Some of the major issues at the intersection of philosophy of language and philosophy of mind are also dealt with in modern psycholinguistics. Important questions include how much of language is innate, whether language acquisition is a special faculty in the mind, and what the connection is between thought and language.

There are three general perspectives on the issue of language learning. The first is the behaviorist perspective, which holds not only that the bulk of language is learned, but that it is learned via conditioning. The second is the hypothesis-testing perspective, which understands the child's learning of syntactic rules and meanings to involve the postulation and testing of hypotheses, through the use of the general faculty of intelligence. The final candidate for explanation is the innatist perspective, which states that at least some of the syntactic settings are innate and hardwired, based on certain modules of the mind.

There are varying notions of the structure of the brain when it comes to language. Connectionist models emphasize the idea that a person's lexicon and their thoughts operate in a kind of distributed, associative network. Nativist models assert that there are specialized devices in the brain that are dedicated to language acquisition. Computational models emphasize the notion of a representational language of thought and the logic-like, computational processing that the mind performs over them. Emergentist models focus on the notion that natural faculties are complex systems that emerge from simpler biological parts. Reductionist models attempt to explain higher-level mental processes in terms of basic low-level neurophysiological activity.

Communication

Firstly, this field of study seeks to better understand what speakers and listeners do with language in communication, and how it is used socially. Specific interests include the topics of language learning, language creation, and speech acts.

Secondly, the question of how language relates to the minds of both the speaker and the interpreter is investigated. Of specific interest is the grounds for successful translation of words and concepts into their equivalents in another language.

Language and thought

An important problem which touches both philosophy of language and philosophy of mind is to what extent language influences thought and vice versa. There have been a number of different perspectives on this issue, each offering a number of insights and suggestions.

Linguists Sapir and Whorf suggested that language limited the extent to which members of a "linguistic community" can think about certain subjects (a hypothesis paralleled in George Orwell's novel Nineteen Eighty-Four). In other words, language was analytically prior to thought. Philosopher Michael Dummett is also a proponent of the "language-first" viewpoint.

The stark opposite to the Sapir–Whorf position is the notion that thought (or, more broadly, mental content) has priority over language. The "knowledge-first" position can be found, for instance, in the work of Paul Grice. Further, this view is closely associated with Jerry Fodor and his language of thought hypothesis. According to his argument, spoken and written language derive their intentionality and meaning from an internal language encoded in the mind. The main argument in favor of such a view is that the structure of thoughts and the structure of language seem to share a compositional, systematic character. Another argument is that it is difficult to explain how signs and symbols on paper can represent anything meaningful unless some sort of meaning is infused into them by the contents of the mind. One of the main arguments against is that such levels of language can lead to an infinite regress. In any case, many philosophers of mind and language, such as Ruth Millikan, Fred Dretske and Fodor, have recently turned their attention to explaining the meanings of mental contents and states directly.

Another tradition of philosophers has attempted to show that language and thought are coextensive – that there is no way of explaining one without the other. Donald Davidson, in his essay "Thought and Talk", argued that the notion of belief could only arise as a product of public linguistic interaction. Daniel Dennett holds a similar interpretationist view of propositional attitudes. To an extent, the theoretical underpinnings to cognitive semantics (including the notion of semantic framing) suggest the influence of language upon thought. However, the same tradition views meaning and grammar as a function of conceptualization, making it difficult to assess in any straightforward way.

Some thinkers, like the ancient sophist Gorgias, have questioned whether or not language was capable of capturing thought at all.

...speech can never exactly represent perceptibles, since it is different from them, and perceptibles are apprehended each by the one kind of organ, speech by another. Hence, since the objects of sight cannot be presented to any other organ but sight, and the different sense-organs cannot give their information to one another, similarly speech cannot give any information about perceptibles. Therefore, if anything exists and is comprehended, it is incommunicable.

There are studies suggesting that languages shape how people understand causality. Some of them were performed by Lera Boroditsky. For example, English speakers tend to say things like "John broke the vase" even for accidents, whereas Spanish or Japanese speakers would be more likely to say "the vase broke itself". In studies conducted by Caitlin Fausey at Stanford University, speakers of English, Spanish and Japanese watched videos of two people popping balloons, breaking eggs and spilling drinks either intentionally or accidentally. Later everyone was asked whether they could remember who did what. Spanish and Japanese speakers did not remember the agents of accidental events as well as English speakers did.

Russian speakers, who make an extra distinction between light and dark blue in their language, are better able to visually discriminate shades of blue. The Piraha, a tribe in Brazil, whose language has only terms like few and many instead of numerals, are not able to keep track of exact quantities.

In one study German and Spanish speakers were asked to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key"—a word that is masculine in German and feminine in Spanish—the German speakers were more likely to use words like "hard", "heavy", "jagged", "metal", "serrated" and "useful" whereas Spanish speakers were more likely to say "golden", "intricate", "little", "lovely", "shiny" and "tiny". To describe a "bridge", which is feminine in German and masculine in Spanish, the German speakers said "beautiful", "elegant", "fragile", "peaceful", "pretty" and "slender", and the Spanish speakers said "big", "dangerous", "long", "strong", "sturdy" and "towering". This was the case even though all testing was done in English, a language without grammatical gender.

In a series of studies conducted by Gary Lupyan, people were asked to look at a series of images of imaginary aliens. Whether each alien was friendly or hostile was determined by certain subtle features but participants were not told what these were. They had to guess whether each alien was friendly or hostile, and after each response they were told if they were correct or not, helping them learn the subtle cues that distinguished friend from foe. A quarter of the participants were told in advance that the friendly aliens were called "leebish" and the hostile ones "grecious", while another quarter were told the opposite. For the rest, the aliens remained nameless. It was found that participants who were given names for the aliens learned to categorize the aliens far more quickly, reaching 80 per cent accuracy in less than half the time taken by those not told the names. By the end of the test, those told the names could correctly categorize 88 per cent of aliens, compared to just 80 per cent for the rest. It was concluded that naming objects helps us categorize and memorize them.

In another series of experiments, a group of people was asked to view furniture from an IKEA catalog. Half the time they were asked to label the object – whether it was a chair or lamp, for example – while the rest of the time they had to say whether or not they liked it. It was found that when asked to label items, people were later less likely to recall the specific details of products, such as whether a chair had arms or not. It was concluded that labeling objects helps our minds build a prototype of the typical object in the group at the expense of individual features.

Social interaction and language

A common claim is that language is governed by social conventions. Questions inevitably arise on surrounding topics. One question regards what exactly a convention is, and how it is studied; a second regards the extent to which conventions matter in the study of language at all. David Kellogg Lewis proposed a worthy reply to the first question by expounding the view that a convention is a "rationally self-perpetuating regularity in behavior". However, this view seems to compete to some extent with the Gricean view of speaker's meaning, requiring either one (or both) to be weakened if both are to be taken as true.

Some have questioned whether or not conventions are relevant to the study of meaning at all. Noam Chomsky proposed that the study of language could be done in terms of the I-Language, or internal language of persons. If this is so, then it undermines the pursuit of explanations in terms of conventions, and relegates such explanations to the domain of metasemantics. Metasemantics is a term used by philosopher of language Robert Stainton to describe all those fields that attempt to explain how semantic facts arise. One fruitful source of research involves investigation into the social conditions that give rise to, or are associated with, meanings and languages. Etymology (the study of the origins of words) and stylistics (philosophical argumentation over what makes "good grammar", relative to a particular language) are two other examples of fields that are taken to be metasemantic.

Many separate (but related) fields have investigated the topic of linguistic convention within their own research paradigms. The presumptions that prop up each theoretical view are of interest to the philosopher of language. For instance, one of the major fields of sociology, symbolic interactionism, is based on the insight that human social organization is based almost entirely on the use of meanings. In consequence, any explanation of a social structure (like an institution) would need to account for the shared meanings which create and sustain the structure.

Rhetoric is the study of the particular words that people use to achieve the proper emotional and rational effect in the listener, be it to persuade, provoke, endear, or teach. Some relevant applications of the field include the examination of propaganda and didacticism, the examination of the purposes of swearing and pejoratives (especially how it influences the behaviors of others, and defines relationships), or the effects of gendered language. It can also be used to study linguistic transparency (or speaking in an accessible manner), as well as performative utterances and the various tasks that language can perform (called "speech acts"). It also has applications to the study and interpretation of law, and helps give insight to the logical concept of the domain of discourse.

Literary theory is a discipline that some literary theorists claim overlaps with the philosophy of language. It emphasizes the methods that readers and critics use in understanding a text. This field, an outgrowth of the study of how to properly interpret messages, is closely tied to the ancient discipline of hermeneutics.

Truth

Finally, philosophers of language investigate how language and meaning relate to truth and the reality being referred to. They tend to be less interested in which sentences are actually true, and more in what kinds of meanings can be true or false. Rather than asking how sentences are used, a truth-oriented philosopher of language might wonder whether a meaningless sentence can be true or false, or whether sentences can express propositions about things that do not exist.

Problems in the philosophy of language

Nature of language

In the philosophical tradition stemming from the Ancient Greeks, such as Plato and Aristotle, language is seen as a tool for making statements about reality by means of predication; e.g. "Man is a rational animal", where Man is the subject and is a rational animal is the predicate, which expresses a property of the subject. Such structures also constitute the syntactic basis of the syllogism, which remained the standard model of formal logic until the early 20th century, when it was replaced with predicate logic. In linguistics and philosophy of language, the classical model survived in the Middle Ages, and the link between Aristotelian philosophy of science and linguistics was elaborated by Thomas of Erfurt's Modistae grammar (c. 1305), which gives an example of the analysis of the transitive sentence: "Plato strikes Socrates", where Socrates is the object and part of the predicate.
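
For illustration, one standard predicate-logic rendering of the two example sentences (a common formalization; notation varies) is:

    % "Man is a rational animal" as a universally quantified conditional:
    \forall x\,\big(\mathit{Man}(x) \rightarrow \mathit{RationalAnimal}(x)\big)

    % "Plato strikes Socrates" as a two-place predicate applied to two terms,
    % rather than a subject-predicate structure:
    \mathit{Strikes}(\mathit{plato},\ \mathit{socrates})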

The social and evolutionary aspects of language were discussed during the classical and mediaeval periods. Plato's dialogue Cratylus investigates the iconicity of words, arguing that words are made by "wordsmiths" and selected by those who need the words, and that the study of language is external to the philosophical objective of studying ideas. Age-of-Enlightenment thinkers accommodated the classical model with a Christian worldview, arguing that God created Man social and rational, and, out of these properties, Man created his own cultural habits including language. In this tradition, the logic of the subject-predicate structure forms a general, or 'universal', grammar, which governs thinking and underpins all languages. Variation between languages was investigated by the Port-Royal Grammar, among others, which described it as accidental and separate from the logical requirements of thought and language.

The classical view was overturned in the early 19th century by the advocates of German romanticism. Humboldt and his contemporaries questioned the existence of a universal inner form of thought. They argued that, since thinking is verbal, language must be the prerequisite for thought. Therefore, every nation has its own unique way of thinking, a worldview, which has evolved with the linguistic history of the nation. Diversity became emphasized with a focus on the uncontrollable sociohistorical construction of language. Influential romantic accounts include Grimm's sound laws of linguistic evolution, Schleicher's "Darwinian" species-language analogy, the Völkerpsychologie accounts of language by Steinthal and Wundt, and Saussure's semiology, a dyadic model of semiotics, i.e., language as a sign system with its own inner logic, separated from physical reality.

In the early 20th century, logical grammar was defended by Frege and Husserl. Husserl's 'pure logical grammar' draws from 17th-century rational universal grammar, proposing a formal semantics that links the structures of physical reality (e.g., "This paper is white") with the structures of the mind, meaning, and the surface form of natural languages. Husserl's treatise was, however, rejected in general linguistics. Instead, linguists opted for Chomsky's theory of universal grammar as an innate biological structure that generates syntax in a formalistic fashion, i.e., irrespective of meaning.

Many philosophers continue to hold the view that language is a logically based tool of expressing the structures of reality by means of predicate-argument structure. Proponents include, with different nuances, Russell, Wittgenstein, Sellars, Davidson, Putnam, and Searle. Attempts to revive logical formal semantics as a basis of linguistics followed, e.g., the Montague grammar. Despite resistance from linguists including Chomsky and Lakoff, formal semantics was established in the late twentieth century. However, its influence has been mostly limited to computational linguistics, with little impact on general linguistics.

The incompatibility of Chomsky's innate grammar with genetics and neuropsychology gave rise to new psychologically and biologically oriented theories of language in the 1980s, and these have gained influence in linguistics and cognitive science in the 21st century. Examples include Lakoff's conceptual metaphor, which argues that language arises automatically from visual and other sensory input, and different models inspired by Dawkins's memetics, a neo-Darwinian model of linguistic units as the units of natural selection. These include cognitive grammar, construction grammar, and usage-based linguistics.

Problem of universals and composition

One debate that has captured the interest of many philosophers is the debate over the meaning of universals. It might be asked, for example, when people say the word rocks, what it is that the word represents. Two different answers have emerged to this question. Some have said that the expression stands for some real, abstract universal out in the world called "rocks". Others have said that the word stands for some collection of particular, individual rocks that are merely grouped under a common name. The former position has been called philosophical realism, and the latter nominalism.

The issue here can be explicated in examination of the proposition "Socrates is a man".

From the realist's perspective, the connection between S and M is a connection between two abstract entities. There is an entity, "man", and an entity, "Socrates". These two things connect in some way or overlap.

From a nominalist's perspective, the connection between S and M is the connection between a particular entity (Socrates) and a vast collection of particular things (men). To say that Socrates is a man is to say that Socrates is a part of the class of "men". Another perspective is to consider "man" to be a property of the entity, "Socrates".

There is a third way, between nominalism and (extreme) realism, usually called "moderate realism" and attributed to Aristotle and Thomas Aquinas. Moderate realists hold that "man" refers to a real essence or form that is really present and identical in Socrates and all other men, but "man" does not exist as a separate and distinct entity. This is a realist position, because "man" is real, insofar as it really exists in all men; but it is a moderate realism, because "man" is not an entity separate from the men it informs.

Formal versus informal approaches

Another of the questions that has divided philosophers of language is the extent to which formal logic can be used as an effective tool in the analysis and understanding of natural languages. While most philosophers, including Gottlob Frege, Alfred Tarski and Rudolf Carnap, have been more or less skeptical about formalizing natural languages, many of them developed formal languages for use in the sciences or formalized parts of natural language for investigation. Some of the most prominent members of this tradition of formal semantics include Tarski, Carnap, Richard Montague and Donald Davidson.

On the other side of the divide, and especially prominent in the 1950s and '60s, were the so-called "ordinary language philosophers". Philosophers such as P. F. Strawson, John Langshaw Austin and Gilbert Ryle stressed the importance of studying natural language without regard to the truth-conditions of sentences and the references of terms. They did not believe that the social and practical dimensions of linguistic meaning could be captured by any attempts at formalization using the tools of logic. On this view, logic is one thing and language is something entirely different; what is important is not expressions themselves but what people use them to do in communication.

Hence, Austin developed a theory of speech acts, which described the kinds of things which can be done with a sentence (assertion, command, inquiry, exclamation) in different contexts of use on different occasions. Strawson argued that the truth-table semantics of the logical connectives (e.g., ∧, ∨ and →) do not capture the meanings of their natural language counterparts ("and", "or" and "if-then"). While the "ordinary language" movement basically died out in the 1970s, its influence was crucial to the development of the fields of speech-act theory and the study of pragmatics. Many of its ideas have been absorbed by theorists such as Kent Bach, Robert Brandom, Paul Horwich and Stephen Neale. In recent work, the division between semantics and pragmatics has become a lively topic of discussion at the interface of philosophy and linguistics, for instance in work by Sperber and Wilson, Carston and Levinson.

While keeping these traditions in mind, the question of whether there are any grounds for conflict between the formal and informal approaches is far from being decided. Some theorists, like Paul Grice, have been skeptical of any claims that there is a substantial conflict between logic and natural language.

Translation and interpretation

Translation and interpretation are two other problems that philosophers of language have attempted to confront. In the 1950s, W.V. Quine argued for the indeterminacy of meaning and reference based on the principle of radical translation. In Word and Object, Quine asks readers to imagine a situation in which they are confronted with a previously undocumented group of indigenous people and must attempt to make sense of the utterances and gestures that its members make. This is the situation of radical translation.

He claimed that, in such a situation, it is impossible in principle to be absolutely certain of the meaning or reference that a speaker of the indigenous people's language attaches to an utterance. For example, if a speaker sees a rabbit and says "gavagai", is she referring to the whole rabbit, to the rabbit's tail, or to a temporal part of the rabbit? All that can be done is to examine the utterance as a part of the overall linguistic behaviour of the individual, and then use these observations to interpret the meaning of all other utterances. From this basis, one can form a manual of translation. But, since reference is indeterminate, there will be many such manuals, no one of which is more correct than the others. For Quine, as for Wittgenstein and Austin, meaning is not something that is associated with a single word or sentence, but is rather something that, if it can be attributed at all, can only be attributed to a whole language. The resulting view is called semantic holism.

Inspired by Quine's discussion, Donald Davidson extended the idea of radical translation to the interpretation of utterances and behavior within a single linguistic community. He dubbed this notion radical interpretation. He suggested that the meaning that any individual ascribed to a sentence could only be determined by attributing meanings to many, perhaps all, of the individual's assertions, as well as their mental states and attitudes.

Vagueness

One issue that has troubled philosophers of language and logic is the problem of the vagueness of words. The specific instances of vagueness that most interest philosophers of language are those where the existence of "borderline cases" makes it seemingly impossible to say whether a predicate is true or false. Classic examples are "is tall" or "is bald", where it cannot be said that some borderline case (some given person) is tall or not-tall. In consequence, vagueness gives rise to the paradox of the heap. Many theorists have attempted to solve the paradox by way of n-valued logics, such as fuzzy logic, which have radically departed from classical two-valued logics.
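
As a minimal sketch of such an approach, the following Python fragment gives the vague predicate "is tall" a degree of truth between 0 and 1 rather than a two-valued verdict; the height thresholds are illustrative assumptions, not part of any standard account.

    # A fuzzy treatment of "is tall": borderline cases receive intermediate
    # degrees of truth instead of being forced into true or false.
    def is_tall(height_cm):
        if height_cm <= 160:
            return 0.0                    # clearly not tall
        if height_cm >= 190:
            return 1.0                    # clearly tall
        return (height_cm - 160) / 30     # borderline region, graded

    for h in (150, 172, 185, 195):
        print(h, "cm ->", round(is_tall(h), 2))
    # 150 cm -> 0.0, 172 cm -> 0.4, 185 cm -> 0.83, 195 cm -> 1.0

A borderline 172 cm person is then "tall to degree 0.4", which is how a many-valued treatment avoids the all-or-nothing verdict that generates the paradox of the heap.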

Neuroevolution

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neuroevolution

Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game (i.e., whether one player won or lost) can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use gradient descent on a neural network with a fixed topology.

Features

Many neuroevolution algorithms have been defined. One common distinction is between algorithms that evolve only the strength of the connection weights for a fixed network topology (sometimes called conventional neuroevolution), and algorithms that evolve both the topology of the network and its weights (called TWEANNs, for Topology and Weight Evolving Artificial Neural Network algorithms).
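
The first kind can be sketched in a few dozen lines of Python: below, the topology is fixed (a 2-2-1 sigmoid network) and only the connection weights evolve, guided by nothing more than a fitness score on the XOR task. The network shape, population size, and mutation scale are illustrative assumptions, not settings from any published algorithm.

    # A minimal sketch of conventional neuroevolution: evolve only the
    # weights of a fixed 2-2-1 network with mutation and truncation selection.
    import math
    import random

    random.seed(0)
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

    def forward(w, x):
        """Fixed topology: 2 inputs -> 2 sigmoid hidden units -> 1 sigmoid output."""
        h = [1 / (1 + math.exp(-(w[2*i] * x[0] + w[2*i + 1] * x[1] + w[4 + i])))
             for i in range(2)]
        return 1 / (1 + math.exp(-(w[6] * h[0] + w[7] * h[1] + w[8])))

    def fitness(w):
        """Only a performance measure is needed: negative squared error on XOR."""
        return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

    population = [[random.uniform(-2, 2) for _ in range(N_WEIGHTS)] for _ in range(50)]
    for generation in range(300):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]  # truncation selection: keep the 10 fittest
        population = parents + [
            [g + random.gauss(0, 0.3) for g in random.choice(parents)]  # Gaussian mutation
            for _ in range(40)
        ]

    best = max(population, key=fitness)
    print([round(forward(best, x)) for x, _ in XOR])  # hopefully [0, 1, 1, 0]

Note that the loop never sees a correct input-output mapping for intermediate decisions; the scalar fitness score alone drives the search, which is the contrast with supervised learning drawn above.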

A separate distinction can be made between methods that evolve the structure of ANNs in parallel with their parameters (those applying standard evolutionary algorithms) and those that develop them separately (through memetic algorithms).

Comparison with gradient descent

Most neural networks use gradient descent rather than neuroevolution. However, around 2017 researchers at Uber stated they had found that simple structural neuroevolution algorithms were competitive with sophisticated modern industry-standard gradient-descent deep learning algorithms, in part because neuroevolution was found to be less likely to get stuck in local minima. In Science, journalist Matthew Hutson speculated that part of the reason neuroevolution is succeeding where it had failed before is due to the increased computational power available in the 2010s.

It can be shown that there is a correspondence between neuroevolution and gradient descent.

Direct and indirect encoding

Evolutionary algorithms operate on a population of genotypes (also referred to as genomes). In neuroevolution, a genotype is mapped to a neural network phenotype that is evaluated on some task to derive its fitness.

In direct encoding schemes the genotype directly maps to the phenotype. That is, every neuron and connection in the neural network is specified directly and explicitly in the genotype. In contrast, in indirect encoding schemes the genotype specifies indirectly how that network should be generated.
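
The contrast can be sketched in a few lines of Python: the direct genotype below lists every weight of a small layer explicitly, while a toy indirect genotype of three "rule" genes is developed into a weight matrix of the same size; both encodings and the development rule are illustrative assumptions.

    # Direct encoding: the genotype explicitly lists every connection weight
    # of a 4x4 layer, so genotype length grows with network size (16 genes).
    direct_genotype = [0.5, -0.2, 0.1, 0.9,
                       0.3,  0.8, -0.7, 0.2,
                       -0.1, 0.4, 0.6, -0.3,
                       0.7, -0.5, 0.2, 0.1]

    # Indirect encoding: a 3-gene rule ("weight falls off with the distance
    # between neuron indices") is expanded into a full 4x4 layer. The genotype
    # stays small no matter how large the phenotype grows.
    indirect_genotype = {"base": 0.9, "decay": 0.4, "bias": -0.1}

    def develop(genes, size=4):
        """Expand the rule genotype into a full weight matrix (the phenotype)."""
        return [[genes["base"] - genes["decay"] * abs(i - j) + genes["bias"]
                 for j in range(size)]
                for i in range(size)]

    phenotype = develop(indirect_genotype)
    print(len(direct_genotype), "genes (direct) vs", len(indirect_genotype), "genes (indirect)")
    for row in phenotype:
        print([round(w, 2) for w in row])

The compression visible here (3 genes standing in for 16 weights) is one of the aims of indirect encoding listed below.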

Indirect encodings are often used to achieve several aims:

  • modularity and other regularities;
  • compression of phenotype to a smaller genotype, providing a smaller search space;
  • mapping the search space (genome) to the problem domain.

Taxonomy of embryogenic systems for indirect encoding

Traditionally indirect encodings that employ artificial embryogeny (also known as artificial development) have been categorised along the lines of a grammatical approach versus a cell chemistry approach. The former evolves sets of rules in the form of grammatical rewrite systems. The latter attempts to mimic how physical structures emerge in biology through gene expression. Indirect encoding systems often use aspects of both approaches.

Stanley and Miikkulainen propose a taxonomy for embryogenic systems that is intended to reflect their underlying properties. The taxonomy identifies five continuous dimensions, along which any embryogenic system can be placed:

  • Cell (neuron) fate: the final characteristics and role of the cell in the mature phenotype. This dimension counts the number of methods used for determining the fate of a cell.
  • Targeting: the method by which connections are directed from source cells to target cells. This ranges from specific targeting (source and target are explicitly identified) to relative targeting (e.g., based on locations of cells relative to each other).
  • Heterochrony: the timing and ordering of events during embryogeny. Counts the number of mechanisms for changing the timing of events.
  • Canalization: how tolerant the genome is to mutations (brittleness). Ranges from requiring precise genotypic instructions to a high tolerance of imprecise mutation.
  • Complexification: the ability of the system (including evolutionary algorithm and genotype to phenotype mapping) to allow complexification of the genome (and hence phenotype) over time. Ranges from allowing only fixed-size genomes to allowing highly variable length genomes.

Examples

Examples of neuroevolution methods (those with direct encodings are necessarily non-embryogenic):

Neuro-genetic evolution (E. Ronald, 1994)
  Encoding: direct
  Evolutionary algorithm: genetic algorithm
  Aspects evolved: network weights

Cellular Encoding (CE) (F. Gruau, 1994)
  Encoding: indirect, embryogenic (grammar tree using S-expressions)
  Evolutionary algorithm: genetic programming
  Aspects evolved: structure and parameters (simultaneous, complexification)

GNARL (Angeline et al., 1994)
  Encoding: direct
  Evolutionary algorithm: evolutionary programming
  Aspects evolved: structure and parameters (simultaneous, complexification)

EPNet (Yao and Liu, 1997)
  Encoding: direct
  Evolutionary algorithm: evolutionary programming (combined with backpropagation and simulated annealing)
  Aspects evolved: structure and parameters (mixed, complexification and simplification)

NeuroEvolution of Augmenting Topologies (NEAT) (Stanley and Miikkulainen, 2002)
  Encoding: direct
  Evolutionary algorithm: genetic algorithm; tracks genes with historical markings to allow crossover between different topologies, and protects innovation via speciation
  Aspects evolved: structure and parameters

Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) (Stanley, D'Ambrosio and Gauci, 2008)
  Encoding: indirect, non-embryogenic (spatial patterns generated by a compositional pattern-producing network (CPPN) within a hypercube are interpreted as connectivity patterns in a lower-dimensional space)
  Evolutionary algorithm: genetic algorithm; the NEAT algorithm (above) is used to evolve the CPPN
  Aspects evolved: parameters; structure fixed (functionally fully connected)

Evolvable Substrate HyperNEAT (ES-HyperNEAT) (Risi and Stanley, 2012)
  Encoding: indirect, non-embryogenic (as in HyperNEAT)
  Evolutionary algorithm: genetic algorithm; the NEAT algorithm (above) is used to evolve the CPPN
  Aspects evolved: parameters and network structure

Evolutionary Acquisition of Neural Topologies (EANT/EANT2) (Kassahun and Sommer, 2005 / Siebel and Sommer, 2007)
  Encoding: direct and indirect, potentially embryogenic (Common Genetic Encoding)
  Evolutionary algorithm: evolutionary programming / evolution strategies
  Aspects evolved: structure and parameters (separately, complexification)

Interactively Constrained Neuro-Evolution (ICONE) (Rempis, 2012)
  Encoding: direct; includes constraint masks to restrict the search to specific topology/parameter manifolds
  Evolutionary algorithm: evolutionary algorithm; uses constraint masks to drastically reduce the search space by exploiting domain knowledge
  Aspects evolved: structure and parameters (separately, complexification, interactive)

Deus Ex Neural Network (DXNN) (Gene Sher, 2012)
  Encoding: direct/indirect; includes constraints, local tuning, and allows evolution to integrate new sensors and actuators
  Evolutionary algorithm: memetic algorithm; evolves network structure and parameters on different time scales
  Aspects evolved: structure and parameters (separately, complexification, interactive)

Spectrum-diverse Unified Neuroevolution Architecture (SUNA) (Danilo Vasconcellos Vargas and Junichi Murata)
  Encoding: direct; introduces the Unified Neural Representation, which integrates most of the neural network features from the literature
  Evolutionary algorithm: genetic algorithm with a diversity-preserving mechanism called spectrum diversity, which scales well with chromosome size, is problem independent, and focuses on obtaining diversity of high-level behaviours/approaches; to achieve this diversity, the concept of a chromosome spectrum is introduced and used together with a novelty map population
  Aspects evolved: structure and parameters (mixed, complexification and simplification)

Modular Agent-Based Evolver (MABE) (Clifford Bohm, Arend Hintze, and others)
  Encoding: direct or indirect encoding of Markov networks, neural networks, genetic programming, and other arbitrarily customizable controllers
  Evolutionary algorithm: provides evolutionary algorithms and genetic programming algorithms, and allows customized algorithms along with specification of arbitrary constraints
  Aspects evolved: the neural model, with support for the evolution of morphology and sexual selection, among others

Covariance Matrix Adaptation with Hypervolume Sorted Adaptive Grid Algorithm (CMA-HAGA) (Shahin Rostami and others)
  Encoding: direct; includes an atavism feature which enables traits to disappear and reappear at different generations
  Evolutionary algorithm: multi-objective evolution strategy with preference articulation (computational steering)
  Aspects evolved: structure, weights, and biases

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...