
Sunday, May 24, 2020

Etymology

From Wikipedia, the free encyclopedia

Etymology (/ˌɛtɪˈmɒlədʒi/) is the study of the history of words. By extension, the phrase "the etymology of [some words]" means the origin of that particular word. For place names, there is a specific term, toponymy.

For languages with a long written history, etymologists make use of texts, and texts about the language, to gather knowledge about how words were used during earlier periods, how they developed in meaning and form, or when and how they entered the language. Etymologists also apply the methods of comparative linguistics to reconstruct information about forms that are too old for any direct information to be available.

By analyzing related languages with a technique known as the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way, word roots in European languages, for example, can be traced all the way back to the origin of the Indo-European language family.
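
To make the idea of systematic comparison a little more concrete, here is a toy sketch in Python; the cognate pairs and the restriction to word-initial sounds are simplifications chosen for this illustration, not part of the article.

    from collections import Counter

    # Toy illustration of the comparative method: look for RECURRING sound
    # correspondences between suspected cognates, such as English f ~ Latin p.
    COGNATE_PAIRS = [
        ("father", "pater"),
        ("fish", "piscis"),
        ("foot", "pes"),
        ("five", "quinque"),  # initial sounds do not fit the f ~ p pattern
    ]

    # Count how often each pair of word-initial sounds co-occurs.
    correspondences = Counter((eng[0], lat[0]) for eng, lat in COGNATE_PAIRS)
    print(correspondences.most_common())
    # [(('f', 'p'), 3), (('f', 'q'), 1)] -- a recurring f ~ p match is the kind
    # of regularity that points toward descent from a common ancestor.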

Even though etymological research originally grew from the philological tradition, much current etymological research is done on language families where little or no early documentation is available, such as Uralic and Austronesian.

The word etymology derives from the Greek word ἐτυμολογία (etumología), itself from ἔτυμον (étumon), meaning "true sense or sense of a truth", and the suffix -logia, denoting "the study of".

The term etymon refers to a word or morpheme (e.g., stem or root) from which a later word or morpheme derives. For example, the Latin word candidus, which means "white", is the etymon of English candid. Relationships are often less transparent, however. English place names such as Winchester, Gloucester, Tadcaster share in different modern forms a suffixed etymon that was once meaningful, Latin castrum 'fort'.

Diagram showing relationships between etymologically related words

Methods

Etymologists apply a number of methods to study the origins of words, some of which are:
  • Philological research. Changes in the form and meaning of the word can be traced with the aid of older texts, if such are available.
  • Making use of dialectological data. The form or meaning of the word might show variations between dialects, which may yield clues about its earlier history.
  • The comparative method. By a systematic comparison of related languages, etymologists may often be able to detect which words derive from their common ancestor language and which were instead later borrowed from another language.
  • The study of semantic change. Etymologists must often make hypotheses about changes in the meaning of particular words. Such hypotheses are tested against the general knowledge of semantic shifts. For example, the assumption of a particular change of meaning may be substantiated by showing that the same type of change has occurred in other languages as well.

Types of word origins

Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which are language change; borrowing (i.e., the adoption of "loanwords" from other languages); word formation, such as derivation and compounding; and onomatopoeia and sound symbolism (i.e., the creation of imitative words such as "click" or "grunt").

While the origin of newly emerged words is often more or less transparent, it tends to become obscured through time due to sound change or semantic change. Due to sound change, it is not readily obvious that the English word set is related to the word sit (the former is originally a causative formation of the latter). It is even less obvious that bless is related to blood (the former was originally a derivative with the meaning "to mark with blood").

Semantic change may also occur. For example, the English word bead originally meant "prayer". It acquired its modern meaning through the practice of counting the recitation of prayers by using beads.

English language

English derives from Old English (sometimes referred to as Anglo-Saxon), a West Germanic variety, although its current vocabulary includes words from many languages. The Old English roots may be seen in the similarity of numbers in English and German, particularly seven/sieben, eight/acht, nine/neun, and ten/zehn. Pronouns are also cognate: I/mine/me and ich/mein/mich; thou/thine/thee and du/dein/dich; we/wir and us/uns; she/sie; your/ihr. However, language change has eroded many grammatical elements, such as the noun case system, which is greatly simplified in modern English, and certain elements of vocabulary, some of which are borrowed from French. Although many of the words in the English lexicon come from Romance languages, most of the common words used in English are of Germanic origin.

When the Normans conquered England in 1066 (see Norman Conquest), they brought their Norman language with them. During the Anglo-Norman period, which united insular and continental territories, the ruling class spoke Anglo-Norman, while the peasants spoke the vernacular English of the time. Anglo-Norman was the conduit for the introduction of French into England, aided by the circulation of Langue d'oïl literature from France.

This led to many paired words of French and English origin. For example, beef is related, through borrowing, to modern French bœuf, veal to veau, pork to porc, and poultry to poulet. All these words, French and English, refer to the meat rather than to the animal. Words that refer to farm animals, on the other hand, tend to be cognates of words in other Germanic languages. For example, swine/Schwein, cow/Kuh, calf/Kalb, and sheep/Schaf. The variant usage has been explained by the proposition that it was the Norman rulers who mostly ate meat (an expensive commodity) and the Anglo-Saxons who farmed the animals. This explanation has passed into common folklore but has been disputed.

Assimilation of foreign words

English has proved accommodating to words from many languages. Scientific terminology, for example, relies heavily on words of Latin and Greek origin, but there are a great many non-scientific examples. Spanish has contributed many words, particularly in the southwestern United States. Examples include buckaroo, alligator, rodeo, savvy, and state names such as Colorado and Florida. Albino, palaver, lingo, verandah, and coconut came from Portuguese; diva and prima donna from Italian. Modern French has contributed café, cinema, naive, nicotine and many more.


Bandanna, bungalow, dungarees, guru, karma, and pundit come from Urdu, Hindi and ultimately Sanskrit; curry from Tamil; honcho, sushi, and tsunami from Japanese; dim sum, gung ho, kowtow, kumquat and typhoon from Cantonese. Kampong and amok are from Malay; and boondocks from the Tagalog word for hills or mountains, bundok. Ketchup derives from one or more South-East Asian and East Indies words for fish sauce or soy sauce, likely by way of Chinese, though the precise path is unclear: Malay kicap, Indonesian kecap, Chinese Min Nan kê-chiap and cognates in other Chinese dialects.

Surprisingly few loanwords, however, come from other languages native to the British Isles. Those that exist include coracle, cromlech and (probably) flannel, gull and penguin from Welsh; galore and whisky from Scottish Gaelic; phoney, trousers, and Tory from Irish; and eerie and canny from Scots (or related Northern English dialects).

Many Canadian English and American English words (especially but not exclusively plant and animal names) are loanwords from Indigenous American languages, such as barbecue, bayou, chili, chipmunk, hooch, hurricane, husky, mesquite, opossum, pecan, squash, toboggan, and tomato.

History

The search for meaningful origins for familiar or strange words is far older than the modern understanding of linguistic evolution and the relationships of languages, which began no earlier than the 18th century. From Antiquity through the 17th century, from Pāṇini to Pindar to Sir Thomas Browne, etymology had been a form of witty wordplay, in which the supposed origins of words were creatively imagined to satisfy contemporary requirements; for example, the Greek poet Pindar (born in approximately 522 BCE) employed inventive etymologies to flatter his patrons. Plutarch employed etymologies insecurely based on fancied resemblances in sounds. Isidore of Seville's Etymologiae was an encyclopedic tracing of "first things" that remained uncritically in use in Europe until the sixteenth century. Etymologicum genuinum is a grammatical encyclopedia edited at Constantinople in the ninth century, one of several similar Byzantine works. The thirteenth-century Legenda Aurea, as written by Jacobus de Voragine, begins each vita of a saint with a fanciful excursus in the form of an etymology.

Ancient Sanskrit

The Sanskrit linguists and grammarians of ancient India were the first to make a comprehensive analysis of linguistics and etymology. The study of Sanskrit etymology has provided Western scholars with the basis of historical linguistics and modern etymology. Four of the most famous Sanskrit linguists are:
  • Yaska
  • Pāṇini
  • Kātyāyana
  • Patañjali
These linguists were not the earliest Sanskrit grammarians, however. They followed a line of ancient grammarians of Sanskrit who lived several centuries earlier, like Sakatayana, of whom very little is known. The earliest attested etymologies can be found in Vedic literature, in the philosophical explanations of the Brahmanas, Aranyakas, and Upanishads.

The analyses of Sanskrit grammar done by the previously mentioned linguists involved extensive studies on the etymology (called Nirukta or Vyutpatti in Sanskrit) of Sanskrit words, because the ancient Indo-Aryans considered sound and speech itself to be sacred and, for them, the words of the sacred Vedas contained deep encoding of the mysteries of the soul and God.

Ancient Greco-Roman

One of the earliest philosophical texts of the Classical Greek period to address etymology was the Socratic dialogue Cratylus (c. 360 BCE) by Plato. During much of the dialogue, Socrates makes guesses as to the origins of many words, including the names of the gods. In his Odes Pindar spins complimentary etymologies to flatter his patrons. Plutarch (Life of Numa Pompilius) spins an etymology for pontifex, while explicitly dismissing the obvious and actual derivation, "bridge-builder":
the priests, called Pontifices.... have the name of Pontifices from potens, powerful, because they attend the service of the gods, who have power and command over all. Others make the word refer to exceptions of impossible cases; the priests were to perform all the duties possible to them; if anything lay beyond their power, the exception was not to be cavilled at. The most common opinion is the most absurd, which derives this word from pons, and assigns the priests the title of bridge-makers. The sacrifices performed on the bridge were amongst the most sacred and ancient, and the keeping and repairing of the bridge attached, like any other public sacred office, to the priesthood.

Medieval

Isidore of Seville compiled a volume of etymologies to illuminate the triumph of religion. Each saint's legend in Jacob de Voragine's Legenda Aurea begins with an etymological discourse on the saint's name:
Lucy is said of light, and light is beauty in beholding, after that S. Ambrose saith: The nature of light is such, she is gracious in beholding, she spreadeth over all without lying down, she passeth in going right without crooking by right long line; and it is without dilation of tarrying, and therefore it is showed the blessed Lucy hath beauty of virginity without any corruption; essence of charity without disordinate love; rightful going and devotion to God, without squaring out of the way; right long line by continual work without negligence of slothful tarrying. In Lucy is said, the way of light.

Modern era

Etymology in the modern sense emerged in late 18th-century European academia, within the context of the wider "Age of Enlightenment," although preceded by 17th-century pioneers such as Marcus Zuerius van Boxhorn, Gerardus Vossius, Stephen Skinner, Elisha Coles, and William Wotton. The first known systematic attempt to prove the relationship between two languages on the basis of similarity of grammar and lexicon was made in 1770 by the Hungarian, János Sajnovics, when he attempted to demonstrate the relationship between Sami and Hungarian (work that was later extended to the whole Finno-Ugric language family in 1799 by his fellow countryman, Samuel Gyarmathi).

The origin of modern historical linguistics is often traced to Sir William Jones, a Welsh philologist living in India, who in 1782 observed the genetic relationship between Sanskrit, Greek and Latin. Jones published The Sanscrit Language in 1786, laying the foundation for the field of Indo-European linguistics.

The study of etymology in Germanic philology was introduced by Rasmus Christian Rask in the early 19th century and elevated to a high standard with the German Dictionary of the Brothers Grimm. The successes of the comparative approach culminated in the Neogrammarian school of the late 19th century. Still in the 19th century, German philosopher Friedrich Nietzsche used etymological strategies (principally and most famously in On the Genealogy of Morals, but also elsewhere) to argue that moral values have definite historical (specifically, cultural) origins where modulations in meaning regarding certain concepts (such as "good" and "evil") show how these ideas had changed over time—according to which value-system appropriated them. This strategy gained popularity in the 20th century, and philosophers, such as Jacques Derrida, have used etymologies to indicate former meanings of words to de-center the "violent hierarchies" of Western philosophy.

Construction grammar

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Construction_grammar 

In linguistics, construction grammar (often abbreviated CxG) is a family of theories within the field of cognitive and evolutionary linguistics. These posit that human language consists of constructions, or learned pairings of linguistic forms with functions or meanings. Constructions correspond to replicators or memes in memetics and other cultural replicator theories. Constructions can be individual words (aardvark, avocado), morphemes (anti-, -ing), fixed expressions and idioms (by and large, jog X's memory), and abstract grammatical rules such as the passive voice (The cat was hit by a car) or ditransitive (Mary gave Alex the ball). Any linguistic pattern is considered to be a construction as long as some aspect of its form or its meaning cannot be predicted from its component parts, or from other constructions that are recognized to exist. In construction grammar, every utterance is understood to be a combination of multiple different constructions, which together specify its precise meaning and form.

One of the most distinctive features of CxG is its use of multi-word expressions and phrasal patterns as the building blocks of syntactic analysis. One example is the Correlative Conditional construction, found in the proverbial expression The bigger they come, the harder they fall. Construction grammarians point out that this is not merely a fixed phrase; the Correlative Conditional is a general pattern (The Xer, the Yer) with "slots" that can be filled by almost any comparative phrase (e.g. The more you think about it, the less you understand). Advocates of CxG argue these kinds of idiosyncratic patterns are more common than is often recognized, and that they are best understood as multi-word, partially filled constructions.
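
As a rough sketch of how such a partially filled pattern can be treated as a template with open slots, the following Python fragment matches the "The Xer, the Yer" shape; the regular expression and the example sentences are invented for illustration and are far cruder than any serious constructional analysis.

    import re

    # Crude, invented pattern for the Correlative Conditional ("The Xer, the Yer"):
    # two open "slots", each beginning with a comparative (-er, more, less).
    CORRELATIVE = re.compile(
        r"^the\s+(?P<x>(?:\w+er|more|less)\b[^,]*),\s*"
        r"the\s+(?P<y>(?:\w+er|more|less)\b.*)$",
        re.IGNORECASE,
    )

    def match_correlative(sentence):
        """Return the two slot fillers if the sentence instantiates the pattern."""
        m = CORRELATIVE.match(sentence.strip().rstrip("."))
        return (m.group("x"), m.group("y")) if m else None

    print(match_correlative("The bigger they come, the harder they fall."))
    print(match_correlative("The more you think about it, the less you understand."))
    print(match_correlative("The cat sat on the mat."))  # None: not this construction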

Construction grammar rejects the idea that there is a sharp dichotomy between lexical items, which are arbitrary and specific, and grammatical rules, which are completely general. Instead, CxG posits that there are linguistic patterns at every level of generality and specificity: from individual words, to partially filled constructions (e.g. drive X crazy), to fully abstract rules (e.g. subject–auxiliary inversion). All of these patterns are recognized as constructions.

In contrast to theories that posit an innate universal grammar for all languages, construction grammar holds that speakers learn constructions inductively as they are exposed to them, using general cognitive processes. It is argued that children pay close attention to each utterance they hear, and gradually make generalizations based on the utterances they have heard. Because constructions are learned, they are expected to vary considerably across different languages.

History

Construction grammar was first developed in the 1980s by linguists such as Charles Fillmore, Paul Kay, and George Lakoff, in order to analyze idioms and fixed expressions. Lakoff's 1977 paper "Linguistic Gestalts" put forward an early version of CxG, arguing that the meaning of an expression was not simply a function of the meanings of its parts. Instead, he suggested, constructions themselves must have meanings.

Another early study was "There-Constructions," which appeared as Case Study 3 in George Lakoff's Women, Fire, and Dangerous Things. It argued that the meaning of the whole was not a function of the meanings of the parts, that odd grammatical properties of Deictic There-constructions followed from the pragmatic meaning of the construction, and that variations on the central construction could be seen as simple extensions using form-meaning pairs of the central construction.

Fillmore et al.'s (1988) paper on the English let alone construction was a second classic. These two papers propelled cognitive linguists into the study of CxG.

Grammatical construction

In construction grammar, like in general semiotics, the grammatical construction is a pairing of form and content. The formal aspect of a construction is typically described as a syntactic template, but the form covers more than just syntax, as it also involves phonological aspects, such as prosody and intonation. The content covers semantic as well as pragmatic meaning.

The semantic meaning of a grammatical construction is made up of conceptual structures postulated in cognitive semantics: image-schemas, frames, conceptual metaphors, conceptual metonymies, prototypes of various kinds, mental spaces, and bindings across these (called "blends"). Pragmatics just becomes the cognitive semantics of communication—the modern version of the old Ross-Lakoff performative hypothesis from the 1960s.

The form and content are symbolically linked in the sense advocated by Langacker.

Thus a construction is treated like a sign in which all structural aspects are integrated parts and not distributed over different modules as they are in the componential model. Consequently, not only constructions that are lexically fixed, like many idioms, but also more abstract ones like argument structure schemata, are pairings of form and conventionalized meaning. For instance, the ditransitive schema [S V IO DO] is said to express semantic content X CAUSES Y TO RECEIVE Z, just like kill means X CAUSES Y TO DIE.
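
A minimal sketch of this form-meaning pairing, with class and variable names invented here rather than taken from any construction grammar formalism, might look like this:

    from dataclasses import dataclass
    from typing import List

    # A construction as a conventionalized pairing of a syntactic template
    # with schematic semantic content (names are illustrative only).
    @dataclass
    class Construction:
        name: str
        form: List[str]   # syntactic template, e.g. [S, V, IO, DO]
        meaning: str      # schematic semantic content

    DITRANSITIVE = Construction(
        name="ditransitive",
        form=["S", "V", "IO", "DO"],
        meaning="X CAUSES Y TO RECEIVE Z",
    )

    def instantiate(cxn, **fillers):
        """Fill the schematic meaning with concrete arguments."""
        meaning = cxn.meaning
        for var, value in fillers.items():
            meaning = meaning.replace(var.upper(), value)
        return meaning

    # "Mary gave Alex the ball" -> Mary CAUSES Alex TO RECEIVE the ball
    print(instantiate(DITRANSITIVE, x="Mary", y="Alex", z="the ball"))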

In construction grammar, a grammatical construction, regardless of its formal or semantic complexity and make up, is a pairing of form and meaning. Thus words and word classes may be regarded as instances of constructions. Indeed, construction grammarians argue that all pairings of form and meaning are constructions, including phrase structures, idioms, words and even morphemes.

Syntax-lexicon continuum

Unlike the componential model, construction grammar denies any strict distinction between syntax and the lexicon and proposes a syntax-lexicon continuum. The argument goes that words and complex constructions are both pairs of form and meaning and differ only in internal symbolic complexity. Instead of being discrete modules and thus subject to very different processes they form the extremes of a continuum (from regular to idiosyncratic): syntax > subcategorization frame > idiom > morphology > syntactic category > word/lexicon (these are the traditional terms; construction grammars use a different terminology).

Grammar as an inventory of constructions

In construction grammar, the grammar of a language is made up of taxonomic networks of families of constructions, which are based on the same principles as those of the conceptual categories known from cognitive linguistics, such as inheritance, prototypicality, extensions, and multiple parenting.
Four different models are proposed in relation to how information is stored in the taxonomies (a brief sketch of the default inheritance idea follows the list):
  1. Full-entry model
    In the full-entry model information is stored redundantly at all relevant levels in the taxonomy, which means that it operates with minimal generalization, if any.

  2. Usage-based model
    The usage-based model is based on inductive learning, meaning that linguistic knowledge is acquired in a bottom-up manner through use. It allows for redundancy and generalizations, because the language user generalizes over recurring experiences of use.

  3. Default inheritance model
    According to the default inheritance model, each network has a default central form-meaning pairing from which all instances inherit their features. It thus operates with a fairly high level of generalization, but does also allow for some redundancy in that it recognizes extensions of different types.

  4. Complete inheritance model
    In the complete inheritance model, information is stored only once at the most superordinate level of the network. Instances at all other levels inherit features from the superordinate item. The complete inheritance does not allow for redundancy in the networks.
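
The following is a brief, hedged sketch (class names invented, not from any published formalism) of what the default inheritance model amounts to: a node in the taxonomy stores only its own idiosyncratic information and inherits everything else from a more schematic parent.

    # Tiny taxonomic network with default inheritance: a feature is looked up
    # locally first, then along the chain of more schematic parent nodes.
    class Node:
        def __init__(self, name, parent=None, **features):
            self.name = name
            self.parent = parent
            self.local = dict(features)   # only what is stored at this node

        def feature(self, key):
            if key in self.local:
                return self.local[key]
            if self.parent is not None:
                return self.parent.feature(key)
            return None

    # Central form-meaning pairing for the ditransitive family ...
    ditransitive = Node("ditransitive",
                        form="S V IO DO",
                        meaning="X CAUSES Y TO RECEIVE Z")

    # ... and an extension that overrides only the meaning; the "refusal"
    # reading here is just an illustrative label for the override mechanism.
    refusal = Node("refusal-ditransitive", parent=ditransitive,
                   meaning="X CAUSES Y NOT TO RECEIVE Z")

    print(refusal.feature("form"))     # inherited from the parent: S V IO DO
    print(refusal.feature("meaning"))  # overridden locally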

Shift towards usage-based model

All four models are advocated by different construction grammarians, but since the late 1990s there has been a shift towards a general preference for the usage-based model. The shift towards the usage-based approach in construction grammar has inspired the development of several corpus-based methodologies of constructional analysis (for example, collostructional analysis).

Synonymy and monotonicity

As construction grammar is based on schemas and taxonomies, it does not operate with dynamic rules of derivation. Rather, it is monotonic.

Because construction grammar does not operate with surface derivations from underlying structures, it adheres to functionalist linguist Dwight Bolinger's principle of no synonymy, on which Adele Goldberg elaborates in her book.

This means that construction grammarians argue, for instance, that active and passive versions of the same proposition are not derived from an underlying structure, but are instances of two different constructions. As constructions are pairings of form and meaning, active and passive versions of the same proposition are not synonymous, but display differences in content: in this case the pragmatic content.

Some construction grammars

As mentioned above, Construction grammar is a "family" of theories rather than one unified theory. There are a number of formalized Construction grammar frameworks. Some of these are:

Berkeley Construction Grammar

Berkeley Construction Grammar (BCG: formerly also simply called Construction Grammar in upper case) focuses on the formal aspects of constructions and makes use of a unification-based framework for description of syntax, not unlike head-driven phrase structure grammar. Its proponents/developers include Charles Fillmore, Paul Kay, Laura Michaelis, and to a certain extent Ivan Sag. Immanent within BCG works like Fillmore and Kay 1995 and Michaelis and Ruppenhofer 2001 is the notion that phrasal representations—embedding relations—should not be used to represent combinatoric properties of lexemes or lexeme classes. For example, BCG abandons the traditional practice of using non-branching domination (NP over N' over N) to describe undetermined nominals that function as NPs, instead introducing a determination construction that requires ('asks for') a non-maximal nominal sister and a lexical 'maximality' feature for which plural and mass nouns are unmarked. BCG also offers a unification-based representation of 'argument structure' patterns as abstract verbal lexeme entries ('linking constructions'). These linking constructions include transitive, oblique goal and passive constructions. These constructions describe classes of verbs that combine with phrasal constructions like the VP construction but contain no phrasal information in themselves.

Sign Based Construction Grammar

In the mid-2000s, several of the developers of BCG, including Charles Fillmore, Paul Kay, Ivan Sag and Laura Michaelis, collaborated in an effort to improve the formal rigor of BCG and clarify its representational conventions. The result was Sign Based Construction Grammar (SBCG). SBCG is based on a multiple-inheritance hierarchy of typed feature structures. The most important type of feature structure in SBCG is the sign, with subtypes word, lexeme and phrase. The inclusion of phrase within the canon of signs marks a major departure from traditional syntactic thinking. In SBCG, phrasal signs are licensed by correspondence to the mother of some licit construct of the grammar. A construct is a local tree with signs at its nodes. Combinatorial constructions define classes of constructs. Lexical class constructions describe combinatoric and other properties common to a group of lexemes. Combinatorial constructions include both inflectional and derivational constructions. SBCG is both formal and generative; while cognitive-functional grammarians have often opposed their standards and practices to those of formal, generative grammarians, there is in fact no incompatibility between a formal, generative approach and a rich, broad-coverage, functionally based grammar. It simply happens that many formal, generative theories are descriptively inadequate grammars. SBCG is generative in a way that prevailing syntax-centered theories are not: its mechanisms are intended to represent all of the patterns of a given language, including idiomatic ones; there is no 'core' grammar in SBCG. SBCG is a licensing-based theory, as opposed to one that freely generates syntactic combinations and uses general principles to bar illicit ones: a word, lexeme or phrase is well formed if and only if it is described by a lexeme or construction. Recent SBCG works have expanded on the lexicalist model of idiomatically combining expressions sketched out in Sag 2012.

Goldbergian/Lakovian construction grammar

The type of construction grammar associated with linguists like Goldberg and Lakoff looks mainly at the external relations of constructions and the structure of constructional networks. In terms of form and function, this type of construction grammar puts psychological plausibility as its highest desideratum. It emphasizes experimental results and parallels with general cognitive psychology. It also draws on certain principles of cognitive linguistics. In the Goldbergian strand, constructions interact with each other in a network via four inheritance relations: polysemy link, subpart link, metaphorical extension, and finally instance link.

Cognitive grammar

Sometimes, Ronald Langacker's cognitive grammar framework is described as a type of construction grammar. Cognitive grammar deals mainly with the semantic content of constructions, and its central argument is that conceptual semantics is primary to the degree that form mirrors, or is motivated by, content. Langacker argues that even abstract grammatical units like part-of-speech classes are semantically motivated and involve certain conceptualizations.

Radical construction grammar

William A. Croft's radical construction grammar is designed for typological purposes and takes into account cross-linguistic factors. It deals mainly with the internal structure of constructions. Radical construction grammar is totally non-reductionist, and Croft argues that constructions are not derived from their parts, but that the parts are derived from the constructions they appear in. Thus, in radical construction grammar, constructions are likened to Gestalts. Radical construction grammar rejects the idea that syntactic categories, roles, and relations are universal and argues that they are not only language-specific, but also construction specific. Thus, there are no universals that make reference to formal categories, since formal categories are language- and construction-specific. The only universals are to be found in the patterns concerning the mapping of meaning onto form. Radical construction grammar rejects the notion of syntactic relations altogether and replaces them with semantic relations. Like Goldbergian/Lakovian construction grammar and cognitive grammar, radical construction grammar is closely related to cognitive linguistics, and like cognitive grammar, radical construction grammar appears to be based on the idea that form is semantically motivated.

Embodied construction grammar

Embodied construction grammar (ECG), which is being developed by the Neural Theory of Language (NTL) group at ICSI, UC Berkeley, and the University of Hawaiʻi, particularly including Benjamin Bergen and Nancy Chang, adopts the basic constructionist definition of a grammatical construction, but emphasizes the relation of constructional semantic content to embodiment and sensorimotor experiences. A central claim is that the content of all linguistic signs involves mental simulations and is ultimately dependent on basic image schemas of the kind advocated by Mark Johnson and George Lakoff, and so ECG aligns itself with cognitive linguistics. Like construction grammar, embodied construction grammar makes use of a unification-based model of representation. A non-technical introduction to the NTL theory behind embodied construction grammar as well as the theory itself and a variety of applications can be found in Jerome Feldman's From Molecule to Metaphor: A Neural Theory of Language (MIT Press, 2006).

Fluid construction grammar

Fluid construction grammar (FCG) was designed by Luc Steels and his collaborators for doing experiments on the origins and evolution of language. FCG is a fully operational and computationally implemented formalism for construction grammars and proposes a uniform mechanism for parsing and production. Moreover, it has been demonstrated through robotic experiments that FCG grammars can be grounded in embodiment and sensorimotor experiences. FCG integrates many notions from contemporary computational linguistics such as feature structures and unification-based language processing. Constructions are considered bidirectional and hence usable both for parsing and production. Processing is flexible in the sense that it can even cope with partially ungrammatical or incomplete sentences. FCG is called 'fluid' because it acknowledges the premise that language users constantly change and update their grammars. The research on FCG is conducted at Sony CSL Paris and the AI Lab at the Vrije Universiteit Brussel.

Others

In addition there are several construction grammarians who operate within the general framework of construction grammar without affiliating themselves with any specific construction grammar program. There is a growing interest in the diachronic aspect of grammatical constructions and thus in the importation of methods and ideas from grammaticalization studies. Another area of growing interest is the pragmatics of constructions. This is probably one of the reasons why the usage-based model is gaining popularity among construction grammarians. Another area of increasing interest among construction grammarians is that of language acquisition, which is mainly due to Michael Tomasello's work. Mats Andrén coined the term multimodal constructions to account for constructions that incorporate both (conventionalized) gesture and speech.

Cognitive semantics

From Wikipedia, the free encyclopedia
 
Cognitive semantics is part of the cognitive linguistics movement. Semantics is the study of linguistic meaning. Cognitive semantics holds that language is part of a more general human cognitive ability, and can therefore only describe the world as people conceive of it. It is implicit that different linguistic communities conceive of simple things and processes in the world differently (different cultures), not necessarily some difference between a person's conceptual world and the real world (wrong beliefs).

The main tenets of cognitive semantics are:
  • That grammar manifests a conception of the world held in a culture;
  • That knowledge of language is acquired and contextual;
  • That the ability to use language draws upon general cognitive resources and not a special language module.
As part of the field of cognitive linguistics, the cognitive semantics approach rejects the traditional separation of linguistics into phonology, morphology, syntax, pragmatics, etc. Instead, it divides semantics into meaning-construction and knowledge representation. Therefore, cognitive semantics studies much of the area traditionally devoted to pragmatics as well as semantics.

The techniques native to cognitive semantics are typically used in lexical studies such as those put forth by Leonard Talmy, George Lakoff, Dirk Geeraerts, and Bruce Wayne Hawkins. Some cognitive semantic frameworks, such as that developed by Talmy, take into account syntactic structures as well.

Points of contrast

As a field, semantics is interested in three big questions: what does it mean for units of language, called lexemes, to have "meaning"? What does it mean for sentences to have meaning? Finally, how is it that meaningful units fit together to compose complete sentences? These are the main points of inquiry behind studies into lexical semantics, structural semantics, and theories of compositionality (respectively). In each category, traditional theories seem to be at odds with those accounts provided by cognitive semanticists. 

Classic theories in semantics (in the tradition of Alfred Tarski and Donald Davidson) have tended to explain the meaning of parts in terms of necessary and sufficient conditions, sentences in terms of truth-conditions, and composition in terms of propositional functions. Each of these positions is tightly related to the others. According to these traditional theories, the meaning of a particular sentence may be understood as the conditions under which the proposition conveyed by the sentence holds true. For instance, the expression "snow is white" is true if and only if snow is, in fact, white. Lexical units can be understood as holding meaning either by virtue of the set of things they may apply to (called the "extension" of the word), or in terms of the common properties that hold between these things (called its "intension"). The intension provides an interlocutor with the necessary and sufficient conditions that let a thing qualify as a member of some lexical unit's extension. Roughly, propositional functions are those abstract instructions that guide the interpreter in taking the free variables in an open sentence and filling them in, resulting in a correct understanding of the sentence as a whole.
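
A toy sketch of that picture, using an invented miniature "world" and the classical definition of bachelor as an example, treats the intension as a predicate (the necessary and sufficient conditions) and the extension as whatever that predicate picks out:

    # Invented miniature "world": a handful of entities with simple attributes.
    WORLD = [
        {"name": "Alice", "married": False, "adult": True, "male": False},
        {"name": "Bob",   "married": False, "adult": True, "male": True},
        {"name": "Carol", "married": True,  "adult": True, "male": False},
    ]

    def bachelor_intension(entity):
        """Classical necessary-and-sufficient conditions: unmarried adult male."""
        return entity["male"] and entity["adult"] and not entity["married"]

    # The extension of "bachelor" in this world is the set the intension picks out.
    bachelor_extension = {e["name"] for e in WORLD if bachelor_intension(e)}
    print(bachelor_extension)   # {'Bob'}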

Meanwhile, cognitive semantic theories are typically built on the argument that lexical meaning is conceptual. That is, meaning is not necessarily reference to the entity or relation in some real or possible world. Instead, meaning corresponds with a concept held in the mind based on personal understanding. As a result, semantic facts like "All bachelors are unmarried males" are not treated as special facts about our language practices; rather, these facts are not distinct from encyclopaedic knowledge. In treating linguistic knowledge as being of a piece with everyday knowledge, the question is raised: how can cognitive semantics explain paradigmatically semantic phenomena, like category structure? Set to the challenge, researchers have drawn upon theories from related fields, like cognitive psychology and cognitive anthropology. One proposal is to explain category structure in terms of nodes in a knowledge network. One example of a theory from cognitive science that has made its way into the cognitive semantic mainstream is the theory of prototypes, which cognitive semanticists generally argue is the cause of polysemy.

Cognitive semanticists argue that truth-conditional semantics is unduly limited in its account of full sentence meaning. While they are not on the whole hostile to truth-conditional semantics, they point out that it has limited explanatory power. That is to say, it is limited to indicative sentences, and does not seem to offer any straightforward or intuitive way of treating (say) commands or expressions. By contrast, cognitive semantics seeks to capture the full range of grammatical moods by also making use of the notions of framing and mental spaces. 

Another trait of cognitive semantics is the recognition that meaning is not fixed but a matter of construal and conventionalization. The processes of linguistic construal, it is argued, are the same psychological processes involved in the processing of encyclopaedic knowledge and in perception. This view has implications for the problem of compositionality. An account in cognitive semantics called the dynamic construal theory makes the claim that words themselves are without meaning: they have, at best, "default construals," which are really just ways of using words. Along these lines, cognitive semantics argues that compositionality can only be intelligible if pragmatic elements like context and intention are taken into consideration.

The structure of concepts

Cognitive semantics has sought to challenge traditional theories in two ways: first, by providing an account of the meaning of sentences by going beyond truth-conditional accounts; and second, by attempting to go beyond accounts of word meaning that appeal to necessary and sufficient conditions. It accomplishes both by examining the structure of concepts.

Frame semantics

Frame semantics, developed by Charles J. Fillmore, attempts to explain meaning in terms of its relation to general understanding, not just in the terms laid out by truth-conditional semantics. Fillmore explains meaning in general (including the meaning of lexemes) in terms of "frames". By "frame" is meant any concept that can only be understood if a larger system of concepts is also understood.

Fillmore: framing

Many pieces of linguistic evidence motivate the frame-semantic project. First, it has been noted that word meaning is an extension of our bodily and cultural experiences. For example, the notion of restaurant is associated with a series of concepts, like food, service, waiters, tables, and eating. These rich-but-contingent associations cannot be captured by an analysis in terms of necessary and sufficient conditions, yet they still seem to be intimately related to our understanding of "restaurant".

Second, and more seriously, these conditions are not enough to account for asymmetries in the ways that words are used. According to a semantic feature analysis, there is nothing more to the meanings of "boy" and "girl" than:
  1. BOY [+MALE], [+YOUNG]
  2. GIRL [+FEMALE], [+YOUNG]
And there is surely some truth to this proposal. Indeed, cognitive semanticists understand that the instances of the concept held by a given word may be said to exist in a schematic relation with the concept itself. And this is regarded as a legitimate approach to semantic analysis, so far as it goes.
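
The feature proposal can be caricatured in a few lines of Python (the dictionary layout is an illustrative choice, not a claim about any particular feature theory):

    # Bare semantic-feature analysis of "boy" and "girl" as bundles of binary
    # features, as in the two-item list above.
    FEATURES = {
        "boy":  {"MALE": True,  "YOUNG": True},
        "girl": {"MALE": False, "YOUNG": True},
    }

    def shared_features(word_a, word_b):
        """Feature values two lexemes have in common."""
        a, b = FEATURES[word_a], FEATURES[word_b]
        return {f: v for f, v in a.items() if b.get(f) == v}

    print(shared_features("boy", "girl"))   # {'YOUNG': True}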

However, linguists have found that language users regularly apply the terms "boy" and "girl" in ways that go beyond mere semantic features. That is, for instance, people tend to be more likely to consider a young female a "girl" (as opposed to "woman"), than they are to consider a borderline-young male a "boy" (as opposed to "man"). This fact suggests that there is a latent frame, made up of cultural attitudes, expectations, and background assumptions, which is part of word meaning. These background assumptions go above and beyond those necessary and sufficient conditions that correspond to a semantic feature account. Frame semantics, then, seeks to account for these puzzling features of lexical items in some systematic way.

Third, cognitive semanticists argue that truth-conditional semantics is incapable of dealing adequately with some aspects of the meanings at the level of the sentence. Take the following:
  1. You didn't spare me a day at the seaside; you deprived me of one.
In this case, the truth-conditions of the claim expressed by the antecedent in the sentence are not being denied by the proposition expressed after the clause. Instead, what is being denied is the way that the antecedent is framed.

Finally, with the frame-semantic paradigm's analytical tools, the linguist is able to explain a wider range of semantic phenomena than they would be able to with only necessary and sufficient conditions. Some words have the same definitions or intensions, and the same extensions, but have subtly different domains. For example, the lexemes land and ground are synonyms, yet they naturally contrast with different things—sea and air, respectively.

As we have seen, the frame semantic account is by no means limited to the study of lexemes—with it, researchers may examine expressions at more complex levels, including the level of the sentence (or, more precisely, the utterance). The notion of framing is regarded as being of the same cast as the pragmatic notion of background assumptions. Philosopher of language John Searle explains the latter by asking readers to consider sentences like "The cat is on the mat". For such a sentence to make any sense, the interpreter makes a series of assumptions: i.e., that there is gravity, the cat is parallel to the mat, and the two touch. For the sentence to be intelligible, the speaker supposes that the interpreter has an idealized or default frame in mind.

Langacker: profile and base

An alternate strain of Fillmore's analysis can be found in the work of Ronald Langacker, who makes a distinction between the notions of profile and base. The profile is the concept symbolized by the word itself, while the base is the encyclopedic knowledge that the concept presupposes. For example, let the definition of "radius" be "a line segment that joins the center of a circle with any point on its circumference". If all we know of the concept radius is its profile, then we simply know that it is a line segment that is attached to something called the "circumference" in some greater whole called the "circle". That is to say, our understanding is fragmentary until the base concept of circle is firmly grasped. 

When a single base supports a number of different profiles, then it can be called a "domain". For instance, the concept profiles of arc, center, and circumference are all in the domain of circle, because each uses the concept of circle as a base. We are then in a position to characterize the notion of a frame as being either the base of the concept profile, or (more generally) the domain that the profile is a part of.
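
A small sketch of this terminology (the dictionary layout is invented for illustration) shows how several profiled concepts can share one base, and so fall within one domain:

    # Each word profiles a concept against the background ("base") it presupposes.
    CONCEPTS = {
        "radius":        {"profile": "line segment from the center to the circumference",
                          "base": "circle"},
        "arc":           {"profile": "continuous stretch of the circumference",
                          "base": "circle"},
        "circumference": {"profile": "boundary line enclosing the figure",
                          "base": "circle"},
    }

    def domain(base):
        """All profiled concepts presupposing the same base form a domain."""
        return sorted(name for name, c in CONCEPTS.items() if c["base"] == base)

    print(domain("circle"))   # ['arc', 'circumference', 'radius']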

Categorization and cognition

Membership of a graded class

A major divide in the approaches to cognitive semantics lies in the puzzle surrounding the nature of category structure. As mentioned in the previous section, semantic feature analyses fall short of accounting for the frames that categories may have. An alternative proposal would have to go beyond the minimalistic models given by classical accounts, and explain the richness of detail in meaning that language speakers attribute to categories.

Prototype theories, investigated by Eleanor Rosch, have given some reason to suppose that many natural lexical category structures are graded, i.e., they have prototypical members that are considered to be a "better fit" for the category than other examples. For instance, robins are generally viewed as better examples of the category "bird" than, say, penguins. If this view of category structure is the case, then categories can be understood to have central and peripheral members, and not just be evaluated in terms of members and non-members.
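
One crude way to picture graded membership is to score candidates against a weighted prototype; the feature names and weights below are invented for illustration and have no empirical standing.

    # Graded category membership in the spirit of prototype theory: members
    # sharing more (weighted) features with the prototype count as better examples.
    BIRD_PROTOTYPE = {"flies": 1.0, "sings": 0.5, "small": 0.5, "lays_eggs": 1.0}

    CANDIDATES = {
        "robin":   {"flies": True,  "sings": True,  "small": True,  "lays_eggs": True},
        "penguin": {"flies": False, "sings": False, "small": False, "lays_eggs": True},
    }

    def typicality(candidate):
        """Weighted share of prototype features the candidate exhibits."""
        total = sum(BIRD_PROTOTYPE.values())
        score = sum(w for f, w in BIRD_PROTOTYPE.items() if candidate.get(f))
        return score / total

    for name, feats in CANDIDATES.items():
        print(name, round(typicality(feats), 2))   # robin 1.0, penguin 0.33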

In a related vein, George Lakoff, following the later Ludwig Wittgenstein, noted that some categories are only connected to one another by way of family resemblances. While some classical categories may exist, i.e., which are structured by necessary and sufficient conditions, there are at least two other kinds: generative and radial.

Generative categories can be formed by taking central cases and applying certain principles to designate category membership. The principle of similarity is one example of a rule that might generate a broader category from given prototypes.

Radial categories are categories motivated by conventions, but not predictable from rules. The concept of "mother", for example, may be explained in terms of a variety of conditions that may or may not be sufficient. Those conditions may include: being married, having always been female, having given birth to the child, having supplied half the child's genes, being a caregiver, being married to the genetic father, being one generation older than the child, and being the legal guardian. Any one of the above conditions might not be met: for instance, a "single mother" does not need to be married, and a "surrogate mother" does not necessarily provide nurturance. When these aspects collectively cluster together, they form a prototypical case of what it means to be a mother, but nevertheless they fail to outline the category crisply. Variations upon the central meaning are established by convention by the community of language users.

For Lakoff, prototype effects can be explained in large part due to the effects of idealized cognitive models. That is, domains are organized with an ideal notion of the world that may or may not fit reality. For example, the word "bachelor" is commonly defined as "unmarried adult male". However, this concept has been created with a particular ideal of what a bachelor is like: an adult, uncelibate, independent, socialized, and promiscuous. Reality might either strain the expectations of the concept, or create false positives. That is, people typically want to widen the meaning of "bachelor" to include exceptions like "a sexually active seventeen-year-old who lives alone and owns his own firm" (not technically an adult but seemingly still a bachelor), and this can be considered a kind of straining of the definition. Moreover, speakers would tend to want to exclude from the concept of bachelor certain false positives, such as those adult unmarried males that don't bear much resemblance to the ideal: i.e., the Pope, or Tarzan. Prototype effects may also be explained as a function of basic-level categorization and typicality, closeness to an ideal, or stereotyping.

So viewed, prototype theory seems to give an account of category structure. However, there are a number of criticisms of this interpretation of the data. Indeed, Rosch and Lakoff, themselves chief advocates of prototype theory, have emphasized in their later works that the findings of prototype theory do not necessarily tell us anything about category structure. Some theorists in the cognitive semantics tradition have challenged both classical and prototype accounts of category structure by proposing the dynamic construal account, where category structure is always created "on-line"—and so, that categories have no structure outside of the context of use.

Mental spaces

Propositional attitudes in Fodor's presentation of truth-conditional semantics

In traditional semantics, the meaning of a sentence is the situation it represents, and the situation can be described in terms of the possible world that it would be true of. Moreover, sentence meanings may be dependent upon propositional attitudes: those features that are relative to someone's beliefs, desires, and mental states. The role of propositional attitudes in truth-conditional semantics is controversial. However, by at least one line of argument, truth-conditional semantics seems to be able to capture the meaning of belief-sentences like "Frank believes that the Red Sox will win the next game" by appealing to propositional attitudes. The meaning of the overall proposition is described as a set of abstract conditions, wherein Frank holds a certain propositional attitude, and the attitude is itself a relationship between Frank and a particular proposition; and this proposition is the possible world where the Red Sox win the next game.
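
On the common set-of-worlds construal of propositions, that treatment can be sketched in a few lines; the two worlds and Frank's belief state below are invented for illustration.

    # A proposition is modelled as the set of worlds in which it is true, and a
    # belief attitude as a relation between an agent's belief worlds and that set.
    WORLDS = {"w1": {"red_sox_win": True}, "w2": {"red_sox_win": False}}

    def proposition(fact):
        """The set of worlds in which the fact holds."""
        return {w for w, facts in WORLDS.items() if facts.get(fact)}

    # The worlds Frank takes to be candidates for how things actually are.
    franks_belief_worlds = {"w1"}

    def believes(belief_worlds, prop):
        """The agent believes p iff p is true in every world compatible with the beliefs."""
        return belief_worlds <= prop   # subset test

    print(believes(franks_belief_worlds, proposition("red_sox_win")))   # True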

Still, many theorists have grown dissatisfied with the inelegance and dubious ontology behind possible-worlds semantics. An alternative can be found in the work of Gilles Fauconnier. For Fauconnier, the meaning of a sentence can be derived from "mental spaces". Mental spaces are cognitive structures entirely in the minds of interlocutors. In his account, there are two kinds of mental space. The base space is used to describe reality (as it is understood by both interlocutors). Space builders (or built space) are those mental spaces that go beyond reality by addressing possible worlds, along with temporal expressions, fictional constructs, games, and so on. Additionally, Fauconnier semantics distinguishes between roles and values. A semantic role is understood to be description of a category, while values are the instances that make up the category. (In this sense, the role-value distinction is a special case of the type-token distinction.) 

Fauconnier argues that curious semantic constructions can be explained handily by the above apparatus. Take the following sentence:
  1. In 1929, the lady with white hair was blonde.
The semanticist must construct an explanation for the obvious fact that the above sentence is not contradictory. Fauconnier constructs his analysis by observing that there are two mental spaces (the present-space and the 1929-space). His access principle supposes that "a value in one space can be described by the role its counterpart in another space has, even if that role is invalid for the value in the first space". So, to use the example above, the value in 1929-space is the blonde, while she is being described with the role of the lady with white hair in present-day space.

Conceptualization and construal

As we have seen, cognitive semantics gives a treatment of issues in the construction of meaning both at the level of the sentence and the level of the lexeme in terms of the structure of concepts. However, it is not entirely clear what cognitive processes are at work in these accounts. Moreover, it is not clear how we might go about explaining the ways that concepts are actively employed in conversation. It appears to be the case that, if our project is to look at how linguistic strings convey different semantic content, we must first catalogue what cognitive processes are being used to do it. Researchers can satisfy both requirements by attending to the construal operations involved in language processing—that is to say, by investigating the ways that people structure their experiences through language.

Language is full of conventions that allow for subtle and nuanced conveyances of experience. To use an example that is readily at hand, framing is all-pervasive, and it may extend across the full breadth of linguistic data, extending from the most complex utterances, to tone, to word choice, to expressions derived from the composition of morphemes. Another example is image-schemata, which are ways that we structure and understand the elements of our experience driven by any given sense.

According to linguists William Croft and D. Alan Cruse, there are four broad cognitive abilities that play an active part in the construction of construals. They are: attention/salience, judgment/comparison, perspective/situatedness, and constitution/gestalt. Each general category contains a number of subprocesses, each of which helps to explain the ways we encode experience into language in some unique way.

Entropy (information theory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Entropy_(information_theory)

In info...