
Thursday, July 16, 2020

Universal grammar

From Wikipedia, the free encyclopedia
 
Noam Chomsky is usually associated with the term universal grammar in the 20th and 21st centuries

Universal grammar (UG), in modern linguistics, is the theory of the genetic component of the language faculty, usually credited to Noam Chomsky. The basic postulate of UG is that a certain set of structural rules is innate to humans, independent of sensory experience. With more linguistic stimuli received in the course of psychological development, children then adopt specific syntactic rules that conform to UG. It is sometimes known as "mental grammar", and stands contrasted with other "grammars", e.g. prescriptive, descriptive, and pedagogical. The advocates of this theory emphasize and partially rely on the poverty of the stimulus (POS) argument and the existence of some universal properties of natural human languages. However, the latter has not been firmly established, as some linguists have argued that languages are so diverse that such universality is rare. It is a matter of empirical investigation to determine precisely what properties are universal and what linguistic capacities are innate.

Argument

The theory of universal grammar proposes that if human beings are brought up under normal conditions (not those of extreme sensory deprivation), then they will always develop language with certain properties (e.g., distinguishing nouns from verbs, or distinguishing function words from content words). The theory proposes that there is an innate, genetically determined language faculty that knows these rules, making it easier and faster for children to learn to speak than it otherwise would be. This faculty does not know the vocabulary of any particular language (so words and their meanings must be learned), and there remain several parameters which can vary freely among languages (such as whether adjectives come before or after nouns) which must also be learned. Evidence in favor of this idea can be found in studies like Valian (1986), which show that children of surprisingly young ages understand syntactic categories and their distribution before this knowledge shows up in production.

As Chomsky puts it, "Evidently, development of language in the individual must involve three factors: genetic endowment, which sets limits on the attainable languages, thereby making language acquisition possible; external data, converted to the experience that selects one or another language within a narrow range; [and] principles not specific to the Faculty of Language."

Occasionally, aspects of universal grammar seem describable in terms of general details regarding cognition. For example, if a predisposition to categorize events and objects as different classes of things is part of human cognition, and directly results in nouns and verbs showing up in all languages, then it could be assumed that rather than this aspect of universal grammar being specific to language, it is more generally a part of human cognition. To distinguish properties of languages that can be traced to other facts regarding cognition from properties of languages that cannot, the abbreviation UG* can be used. UG is the term often used by Chomsky for those aspects of the human brain which cause language to be the way that it is (i.e. are universal grammar in the sense used here), but here for the purposes of discussion, it is used for those aspects which are furthermore specific to language (thus UG, as Chomsky uses it, is just an abbreviation for universal grammar, but UG* as used here is a subset of universal grammar).

In the same article, Chomsky casts the theme of a larger research program in terms of the following question: "How little can be attributed to UG while still accounting for the variety of 'I-languages' attained, relying on third factor principles?" (I-languages meaning internal languages, the brain states that correspond to knowing how to speak and understand a particular language, and third factor principles meaning "principles not specific to the Faculty of Language" in the previous quote).

Chomsky has speculated that UG might be extremely simple and abstract, for example only a mechanism for combining symbols in a particular way, which he calls "merge". The following quote shows that Chomsky does not use the term "UG" in the narrow sense UG* suggested above.

"The conclusion that merge falls within UG holds whether such recursive generation is unique to FL (faculty of language) or is appropriated from other systems."

In other words, merge is seen as part of UG because it causes language to be the way it is, is universal, and is not part of the environment or of general properties independent of genetics and environment. Merge is part of universal grammar whether it is specific to language or whether, as Chomsky suggests, it is also used, for example, in mathematical thinking.

The distinction is the result of the long history of argument about UG*: whereas some people working on language agree that there is universal grammar, many people assume that Chomsky means UG* when he writes UG (and in some cases he might actually mean UG* [though not in the passage quoted above]).

Some students of universal grammar study a variety of grammars to extract generalizations called linguistic universals, often in the form of "If X holds true, then Y occurs." These have been extended to a variety of traits, such as the phonemes found in languages, the word orders which different languages choose, and the reasons why children exhibit certain linguistic behaviors.

Other linguists who have influenced this theory include Richard Montague, who developed his version of this theory as he considered issues of the argument from poverty of the stimulus to arise from the constructivist approach to linguistic theory. The application of the idea of universal grammar to the study of second language acquisition (SLA) is represented mainly in the work of McGill linguist Lydia White.

Syntacticians generally hold that there are parametric points of variation between languages, although heated debate occurs over whether UG constraints are essentially universal due to being "hard-wired" (Chomsky's principles and parameters approach), a logical consequence of a specific syntactic architecture (the generalized phrase structure approach) or the result of functional constraints on communication (the functionalist approach).

Relation to the evolution of language

In an article entitled "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?" Hauser, Chomsky, and Fitch present the three leading hypotheses for how language evolved and brought humans to the point where they have a universal grammar.

The first hypothesis states that the faculty of language in the broad sense (FLb) is strictly homologous to animal communication. This means that homologous aspects of the faculty of language exist in non-human animals.

The second hypothesis states that the FLb is a derived and uniquely human adaptation for language. This hypothesis holds that individual traits were subject to natural selection and came to be specialized for humans.

The third hypothesis states that only the faculty of language in the narrow sense (FLn) is unique to humans. It holds that while mechanisms of the FLb are present in both human and non-human animals, the computational mechanism of recursion is recently evolved solely in humans. This is the hypothesis which most closely aligns to the typical theory of universal grammar championed by Chomsky.

History

The term "universal grammar" predates Noam Chomsky, but pre-Chomskyan ideas of universal grammar are different. For Chomsky, UG is "[the] theory of the genetically based language faculty", which makes UG a theory of language acquisition, and part of the innateness hypothesis. Earlier grammarians and philosophers thought about universal grammar in the sense of a universally shared property or grammar of all languages. The closest late-20th-century analogs to their understanding of universal grammar are Greenberg's linguistic universals.

The idea of a universal grammar can be traced back to Roger Bacon's observations in his c. 1245 Overview of Grammar and c. 1268 Greek Grammar that all languages are built upon a common grammar, even though it may undergo incidental variations; and the 13th century speculative grammarians who, following Bacon, postulated universal rules underlying all grammars. The concept of a universal grammar or language was at the core of the 17th century projects for philosophical languages. An influential work in that time was Grammaire générale by Claude Lancelot and Antoine Arnauld, who built on the works of René Descartes. They tried to describe a general grammar for languages, coming to the conclusion that grammar has to be universal. There is a Scottish school of universal grammarians from the 18th century, as distinguished from the philosophical language project, which included authors such as James Beattie, Hugh Blair, James Burnett, James Harris, and Adam Smith. The article on grammar in the first edition of the Encyclopædia Britannica (1771) contains an extensive section titled "Of Universal Grammar".

This tradition was continued in the late 19th century by Wilhelm Wundt and in the early 20th century by linguist Otto Jespersen. Jespersen disagreed with early grammarians on their formulation of "universal grammar", arguing that they tried to derive too much from Latin, and that a UG based on Latin was bound to fail considering the breadth of worldwide linguistic variation. He does not fully dispense with the idea of a "universal grammar", but reduces it to universal syntactic categories or super-categories, such as number, tense, etc. Jespersen does not discuss whether these properties come from facts about general human cognition or from a language-specific endowment (which would be closer to the Chomskyan formulation). As this work predates molecular genetics, he does not discuss the notion of a genetically conditioned universal grammar.

During the rise of behaviorism, the idea of a universal grammar (in either sense) was discarded. In the early 20th century, language was usually understood from a behaviourist perspective, which suggested that language acquisition, like any other kind of learning, could be explained by a succession of trials, errors, and rewards for success. In other words, children learned their mother tongue by simple imitation, by listening to and repeating what adults said. For example, when a child says "milk" and the mother smiles and gives her child milk as a result, the child finds this outcome rewarding, which reinforces the child's language development. UG returned to prominence and influence in modern linguistics with the theories of Chomsky and Montague in the 1950s–1970s, as part of the "linguistics wars".

In 2016 Chomsky and Berwick co-wrote their book titled Why Only Us, where they defined both the minimalist program and the strong minimalist thesis and its implications to update their approach to UG theory. According to Berwick and Chomsky, the strong minimalist thesis states that "The optimal situation would be that UG reduces to the simplest computational principles which operate in accord with conditions of computational efficiency. This conjecture is ... called the Strong Minimalist Thesis (SMT)." The significance of SMT is to significantly shift the previous emphasis on universal grammars to the concept which Chomsky and Berwick now call "merge". "Merge" is defined in their 2016 book when they state "Every computational system has embedded within it somewhere an operation that applies to two objects X and Y already formed, and constructs from them a new object Z. Call this operation Merge." SMT dictates that "Merge will be as simple as possible: it will not modify X or Y or impose any arrangement on them; in particular, it will leave them unordered, an important fact... Merge is therefore just set formation: Merge of X and Y yields the set {X, Y}."
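The definition quoted above is concrete enough to sketch directly. The following is an illustrative sketch (not from the source): Merge modeled as bare set formation, using Python's frozenset so that merged objects can themselves be merged recursively. The example phrases are invented for illustration.

```python
def merge(x, y):
    """Combine two already-formed syntactic objects X and Y into a new
    object Z = {X, Y}, without modifying them or imposing any order."""
    return frozenset([x, y])

# Build "read books", then embed it under "will" by a second Merge:
vp = merge("read", "books")   # {read, books}
tp = merge("will", vp)        # {will, {read, books}} -- a recursive structure

# Merge leaves its inputs unordered, so argument order is irrelevant:
assert merge("read", "books") == merge("books", "read")
```

Because each output is just a set containing earlier outputs, repeated application of this single operation yields unboundedly deep hierarchical structures, which is the property the strong minimalist thesis takes to be central.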

Chomsky's theory

Chomsky argued that the human brain contains a limited set of constraints for organizing language. This implies in turn that all languages have a common structural basis: the set of rules known as "universal grammar".

Speakers proficient in a language know which expressions are acceptable in their language and which are unacceptable. The key puzzle is how speakers come to know these restrictions of their language, since expressions that violate them are not marked as such in the input. Chomsky argued that this poverty of the stimulus means that Skinner's behaviourist perspective cannot explain language acquisition. The absence of negative evidence—evidence that an expression is part of a class of ungrammatical sentences in a given language—is the core of his argument. For example, in English, an interrogative pronoun like what cannot be related to a predicate within a relative clause:
*"What did John meet a man who sold?"
Such expressions are not available to language learners: they are, by hypothesis, ungrammatical. Speakers of the local language do not use them, and if asked would judge them unacceptable. Universal grammar offers an explanation for how learners cope with the poverty of the stimulus: by making certain restrictions universal characteristics of human languages, it ensures that language learners are never tempted to generalize in an illicit fashion.

Presence of creole languages

The presence of creole languages is sometimes cited as further support for this theory, especially by Bickerton's controversial language bioprogram theory. Creoles are languages that develop and form when disparate societies come together and are forced to devise a new system of communication. The system used by the original speakers is typically an inconsistent mix of vocabulary items, known as a pidgin. As these speakers' children begin to acquire their first language, they use the pidgin input to effectively create their own original language, known as a creole. Unlike pidgins, creoles have native speakers (those with acquisition from early childhood) and make use of a full, systematic grammar.

According to Bickerton, the idea of universal grammar is supported by creole languages because certain features are shared by virtually all in the category. For example, their default point of reference in time (expressed by bare verb stems) is not the present moment, but the past. Using pre-verbal auxiliaries, they uniformly express tense, aspect, and mood. Negative concord occurs, but it affects the verbal subject (as opposed to the object, as it does in languages like Spanish). Another similarity among creoles can be seen in the fact that questions are created simply by changing the intonation of a declarative sentence, not its word order or content.

However, extensive work by Carla Hudson-Kam and Elissa Newport suggests that creole languages may not support a universal grammar at all. In a series of experiments, Hudson-Kam and Newport looked at how children and adults learn artificial grammars. They found that children tend to ignore minor variations in the input when those variations are infrequent, and reproduce only the most frequent forms. In doing so, they tend to standardize the language that they hear around them. Hudson-Kam and Newport hypothesize that in a pidgin-development situation (and in the real-life situation of a deaf child whose parents are or were disfluent signers), children systematize the language they hear based on the probability and frequency of forms, rather than on a universal grammar. Further, it seems to follow that creoles would share features with the languages from which they are derived, and thus look similar to them in terms of grammar.

Many researchers of universal grammar argue against the concept of relexification, which holds that a language replaces its lexicon almost entirely with that of another. This runs counter to universalist ideas of a universal grammar, which posit an innate grammar.

Criticisms

Geoffrey Sampson maintains that universal grammar theories are not falsifiable and are therefore pseudoscientific. He argues that the grammatical "rules" linguists posit are simply post-hoc observations about existing languages, rather than predictions about what is possible in a language. Similarly, Jeffrey Elman argues that the unlearnability of languages assumed by universal grammar is based on a too-strict, "worst-case" model of grammar, that is not in keeping with any actual grammar. In keeping with these points, James Hurford argues that the postulate of a language acquisition device (LAD) essentially amounts to the trivial claim that languages are learnt by humans, and thus, that the LAD is less a theory than an explanandum looking for theories.

Morten H. Christiansen and Nick Chater have argued that the relatively fast-changing nature of language would prevent the slower-changing genetic structures from ever catching up, undermining the possibility of a genetically hard-wired universal grammar. Instead of an innate universal grammar, they claim, "apparently arbitrary aspects of linguistic structure may result from general learning and processing biases deriving from the structure of thought processes, perceptuo-motor factors, cognitive limitations, and pragmatics".

Hinzen summarizes the most common criticisms of universal grammar:
  • Universal grammar has no coherent formulation and is indeed unnecessary.
  • Universal grammar is in conflict with biology: it cannot have evolved by standardly accepted neo-Darwinian evolutionary principles.
  • There are no linguistic universals: universal grammar is refuted by abundant variation at all levels of linguistic organization, which lies at the heart of the human faculty of language.
In addition, it has been suggested that people learn probabilistic patterns of word distributions in their language, rather than hard and fast rules (see Distributional hypothesis). For example, children overgeneralize the past-tense marker "-ed" and conjugate irregular verbs incorrectly, producing forms like goed and eated, and correct these errors over time. It has also been proposed that the poverty of the stimulus problem can be largely avoided if it is assumed that children employ similarity-based generalization strategies in language learning, generalizing about the usage of new words from similar words that they already know how to use.

Language acquisition researcher Michael Ramscar has suggested that when children erroneously expect an ungrammatical form that then never occurs, the repeated failure of that expectation serves as a form of implicit negative feedback, allowing them to correct overgeneralizations such as goed to went over time. This implies that word learning is a probabilistic, error-driven process, rather than a process of fast mapping, as many nativists assume.

In the domain of field research, the Pirahã language is claimed to be a counterexample to the basic tenets of universal grammar. This research has been led by Daniel Everett. Among other things, this language is alleged to lack all evidence for recursion, including embedded clauses, as well as quantifiers and colour terms. According to the writings of Everett, the Pirahã showed these linguistic shortcomings not because they were simple-minded, but because their culture—which emphasized concrete matters in the present and also lacked creation myths and traditions of art making—did not necessitate it. Some other linguists have argued, however, that some of these properties have been misanalyzed, and that others are actually expected under current theories of universal grammar. Other linguists have attempted to reassess Pirahã to see if it did indeed use recursion. In a corpus analysis of the Pirahã language, linguists failed to disprove Everett's arguments against universal grammar and the lack of recursion in Pirahã. However, they also stated that there was "no strong evidence for the lack of recursion either" and they provided "suggestive evidence that Pirahã may have sentences with recursive structures".

Daniel Everett has argued that even if a universal grammar is not impossible in principle, it should not be accepted because we have equally or more plausible theories that are simpler. In his words, "universal grammar doesn't seem to work, there doesn't seem to be much evidence for [it]. And what can we put in its place? A complex interplay of factors, of which culture, the values human beings share, plays a major role in structuring the way that we talk and the things that we talk about." Michael Tomasello, a developmental psychologist, also supports this claim, arguing that "although many aspects of human linguistic competence have indeed evolved biologically, specific grammatical principles and constructions have not. And universals in the grammatical structure of different languages have come from more general processes and constraints of human cognition, communication, and vocal-auditory processing, operating during the conventionalization and transmission of the particular grammatical constructions of particular linguistic communities."

Theories of second-language acquisition

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Theories_of_second-language_acquisition

The main purpose of theories of second-language acquisition (SLA) is to shed light on how people who already know one language learn a second language. The field of second-language acquisition draws on various disciplines, such as linguistics, sociolinguistics, psychology, cognitive science, neuroscience, and education. These contributions can be grouped into four major research strands: (a) linguistic dimensions of SLA, (b) cognitive (but not linguistic) dimensions of SLA, (c) socio-cultural dimensions of SLA, and (d) instructional dimensions of SLA. While the orientation of each research strand is distinct, they have in common that they can help identify the conditions that facilitate successful language learning. Acknowledging the contributions of each perspective and the interdisciplinarity of the field, more and more second-language researchers are adopting a wider lens when examining the complexities of second-language acquisition.

History

As second-language acquisition began as an interdisciplinary field, it is hard to pin down a precise starting date. However, there are two publications in particular that are seen as instrumental to the development of the modern study of SLA: (1) Corder's 1967 essay The Significance of Learners' Errors, and (2) Selinker's 1972 article Interlanguage. Corder's essay rejected a behaviorist account of SLA and suggested that learners made use of intrinsic internal linguistic processes; Selinker's article argued that second-language learners possess their own individual linguistic systems that are independent from both the first and second languages.

In the 1970s the general trend in SLA was for research exploring the ideas of Corder and Selinker, and refuting behaviorist theories of language acquisition. Examples include research into error analysis, studies in transitional stages of second-language ability, and the "morpheme studies" investigating the order in which learners acquired linguistic features. The 1970s were dominated by naturalistic studies of people learning English as a second language.

By the 1980s, the theories of Stephen Krashen had become the prominent paradigm in SLA. In his theories, often collectively known as the Input Hypothesis, Krashen suggested that language acquisition is driven solely by comprehensible input, language input that learners can understand. Krashen's model was influential in the field of SLA and also had a large influence on language teaching, but it left some important processes in SLA unexplained. Research in the 1980s was characterized by the attempt to fill in these gaps. Some approaches included White's descriptions of learner competence, and Pienemann's use of speech processing models and lexical functional grammar to explain learner output. This period also saw the beginning of approaches based in other disciplines, such as the psychological approach of connectionism.

The 1990s saw a host of new theories introduced to the field, such as Michael Long's interaction hypothesis, Merrill Swain's output hypothesis, and Richard Schmidt's noticing hypothesis. However, the two main areas of research interest were linguistic theories of SLA based upon Noam Chomsky's universal grammar, and psychological approaches such as skill acquisition theory and connectionism. The latter category also saw the new theories of processability and input processing in this time period. The 1990s also saw the introduction of sociocultural theory, an approach to explain second-language acquisition in terms of the social environment of the learner.

In the 2000s research was focused on much the same areas as in the 1990s, with research split into two main camps of linguistic and psychological approaches. VanPatten and Benati do not see this state of affairs as changing in the near future, pointing to the support both areas of research have in the wider fields of linguistics and psychology, respectively.

Universal grammar

From the field of linguistics, the most influential theory by far has been Chomsky's theory of Universal Grammar (UG). The core of this theory is the existence of an innate universal grammar, motivated by the poverty of the stimulus. The UG model of principles, basic properties which all languages share, and parameters, properties which can vary between languages, has been the basis for much second-language research.

From a UG perspective, learning the grammar of a second language is simply a matter of setting the correct parameters. Take the pro-drop parameter, which dictates whether or not sentences must have a subject in order to be grammatically correct. This parameter can have two values: positive, in which case sentences do not necessarily need a subject, and negative, in which case subjects must be present. In German the sentence "Er spricht" (he speaks) is grammatical, but the sentence "Spricht" (speaks) is ungrammatical. In Italian, however, the sentence "Parla" (speaks) is perfectly normal and grammatically correct. A German speaker learning Italian would only need to deduce that subjects are optional from the language he hears, and then set his pro-drop parameter for Italian accordingly. Once he has set all the parameters in the language correctly, then from a UG perspective he can be said to have learned Italian, i.e. he will always produce perfectly correct Italian sentences.
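The deduction described above can be caricatured in a few lines. This is a hypothetical toy sketch (all names and data invented for illustration), not a claim about how learners actually compute: the learner scans grammatical input sentences, and any subjectless sentence is positive evidence for the [+pro-drop] setting, while in its absence the parameter stays negative.

```python
def infer_pro_drop(sentences):
    """Set the pro-drop parameter from positive evidence alone:
    True if any observed grammatical sentence lacks an overt subject."""
    return any(s.get("subject") is None for s in sentences)

# Toy input a learner of Italian might hear:
italian_input = [
    {"subject": None,    "verb": "parla"},    # "Parla" -- no overt subject
    {"subject": "Maria", "verb": "parla"},    # "Maria parla"
]
# Toy input a learner of German might hear:
german_input = [
    {"subject": "er", "verb": "spricht"},     # "Er spricht"
]

assert infer_pro_drop(italian_input) is True    # parameter set to [+pro-drop]
assert infer_pro_drop(german_input) is False    # no evidence; remains [-pro-drop]
```

Note the asymmetry the sketch makes visible: a single subjectless sentence suffices to flip the parameter, whereas the negative setting rests on the continued absence of such sentences, which is exactly the role negative (or rather, missing) evidence plays in the poverty-of-the-stimulus argument above.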

Universal Grammar also provides a succinct explanation for much of the phenomenon of language transfer. Spanish learners of English who make the mistake "Is raining" instead of "It is raining" have not yet set their pro-drop parameters correctly and are still using the same setting as in Spanish.

The main shortcoming of Universal Grammar in describing second-language acquisition is that it does not deal at all with the psychological processes involved with learning a language. UG scholarship is only concerned with whether parameters are set or not, not with how they are set. Schachter (1988) is a useful critique of research testing the role of Universal Grammar in second language acquisition.

Input hypothesis

Learners' most direct source of information about the target language is the target language itself. When they come into direct contact with the target language, this is referred to as "input." When learners process that language in a way that can contribute to learning, this is referred to as "intake". However, it must be at a level that is comprehensible to them. In his monitor theory, Krashen advanced the concept that language input should be at the "i+1" level, just beyond what the learner can fully understand; this input is comprehensible, but contains structures that are not yet fully understood. This has been criticized on the basis that there is no clear definition of i+1, and that factors other than structural difficulty (such as interest or presentation) can affect whether input is actually turned into intake. The concept has been quantified, however, in vocabulary acquisition research; Nation reviews various studies which indicate that about 98% of the words in running text should be previously known in order for extensive reading to be effective.

In his Input Hypothesis, Krashen proposes that language acquisition takes place only when learners receive input just beyond their current level of L2 competence. He termed this level of input “i+1.” However, in contrast to emergentist and connectionist theories, he follows the innate approach by applying Chomsky's Government and binding theory and concept of Universal grammar (UG) to second-language acquisition. He does so by proposing a Language Acquisition Device that uses L2 input to define the parameters of the L2, within the constraints of UG, and to increase the L2 proficiency of the learner. In addition, Krashen's (1982) Affective Filter Hypothesis holds that the acquisition of a second language is halted if the learner has a high degree of anxiety when receiving input. According to this concept, a part of the mind filters out L2 input and prevents intake by the learner, if the learner feels that the process of SLA is threatening. As mentioned earlier, since input is essential in Krashen's model, this filtering action prevents acquisition from progressing.

A great deal of research has taken place on input enhancement, the ways in which input may be altered so as to direct learners' attention to linguistically important areas. Input enhancement might include bold-faced vocabulary words or marginal glosses in a reading text. Research here is closely linked to research on pedagogical effects, and comparably diverse.

Monitor model

Other concepts have also been influential in the speculation about the processes of building internal systems of second-language information. Some thinkers hold that language processing handles distinct types of knowledge. For instance, one component of the Monitor Model, propounded by Krashen, posits a distinction between “acquisition” and “learning.” According to Krashen, L2 acquisition is a subconscious process of incidentally “picking up” a language, as children do when becoming proficient in their first languages. Language learning, on the other hand, is studying, consciously and intentionally, the features of a language, as is common in traditional classrooms. Krashen sees these two processes as fundamentally different, with little or no interface between them. In common with connectionism, Krashen sees input as essential to language acquisition.

Further, Bialystok and Smith make another distinction in explaining how learners build and use L2 and interlanguage knowledge structures. They argue that the concept of interlanguage should include a distinction between two specific kinds of language processing ability. On one hand is learners’ knowledge of L2 grammatical structure and ability to analyze the target language objectively using that knowledge, which they term “representation,” and, on the other hand is the ability to use their L2 linguistic knowledge, under time constraints, to accurately comprehend input and produce output in the L2, which they call “control.” They point out that often non-native speakers of a language have higher levels of representation than their native-speaking counterparts have, yet have a lower level of control. Finally, Bialystok has framed the acquisition of language in terms of the interaction between what she calls “analysis” and “control.” Analysis is what learners do when they attempt to understand the rules of the target language. Through this process, they acquire these rules and can use them to gain greater control over their own production.

Monitoring is another important concept in some theoretical models of learner use of L2 knowledge. According to Krashen, the Monitor is a component of an L2 learner's language processing device that uses knowledge gained from language learning to observe and regulate the learner's own L2 production, checking for accuracy and adjusting language production when necessary.

Interaction hypothesis

Long's interaction hypothesis proposes that language acquisition is strongly facilitated by the use of the target language in interaction. Similarly to Krashen's Input Hypothesis, the Interaction Hypothesis claims that comprehensible input is important for language learning. In addition, it claims that the effectiveness of comprehensible input is greatly increased when learners have to negotiate for meaning.

Interactions often result in learners receiving negative evidence. That is, if learners say something that their interlocutors do not understand, after negotiation the interlocutors may model the correct language form. In doing this, learners can receive feedback on their production and on grammar that they have not yet mastered. The process of interaction may also result in learners receiving more input from their interlocutors than they would otherwise. Furthermore, if learners stop to clarify things that they do not understand, they may have more time to process the input they receive. This can lead to better understanding and possibly the acquisition of new language forms. Finally, interactions may serve as a way of focusing learners' attention on a difference between their knowledge of the target language and the reality of what they are hearing; it may also focus their attention on a part of the target language of which they are not yet aware.

Output hypothesis

In the 1980s, Canadian SLA researcher Merrill Swain advanced the output hypothesis, that meaningful output is as necessary to language learning as meaningful input. However, most studies have shown little if any correlation between learning and quantity of output. Today, most scholars contend that small amounts of meaningful output are important to language learning, but primarily because the experience of producing language leads to more effective processing of input.

Critical Period Hypothesis

In 1967, Eric Lenneberg argued for the existence of a critical period (approximately ages 2–13) for the acquisition of a first language. This claim has attracted much attention in the realm of second language acquisition. For instance, Newport (1990) extended the critical period hypothesis by suggesting that the age at which a learner is exposed to an L2 might also contribute to their second language acquisition; indeed, she found a correlation between age of arrival and second language performance. On this view, second language learning may be affected by a learner's maturational state.

Competition model

Some of the major cognitive theories of how learners organize language knowledge are based on analyses of how speakers of various languages analyze sentences for meaning. MacWhinney, Bates, and Kliegl found that speakers of English, German, and Italian showed varying patterns in identifying the subjects of transitive sentences containing more than one noun. English speakers relied heavily on word order; German speakers used morphological agreement, the animacy status of noun referents, and stress; and speakers of Italian relied on agreement and stress. MacWhinney et al. interpreted these results as supporting the Competition Model, which states that individuals use linguistic cues to get meaning from language, rather than relying on linguistic universals. According to this theory, when acquiring an L2, learners sometimes receive competing cues and must decide which cue(s) is most relevant for determining meaning.
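The cue-competition idea can be sketched as a toy model. The cue inventories and weights below are invented for illustration (they are not empirical values from MacWhinney et al.): each language weights the available cues differently, and the noun gathering the most weighted support is chosen as subject.

```python
# Toy illustration of the Competition Model: each language weights cues
# (preverbal word order, verb agreement, animacy) differently, and the
# noun with the most weighted cue support is selected as subject.
# All weights are invented for illustration, not empirical values.

CUE_WEIGHTS = {
    "English": {"preverbal": 0.8, "agreement": 0.1, "animacy": 0.1},
    "Italian": {"preverbal": 0.1, "agreement": 0.6, "animacy": 0.3},
}

def choose_subject(nouns, language):
    """nouns: list of dicts with a 'word' and boolean cue values."""
    weights = CUE_WEIGHTS[language]
    scores = [
        sum(weights[cue] for cue, present in noun["cues"].items() if present)
        for noun in nouns
    ]
    return nouns[scores.index(max(scores))]["word"]

# "The eraser the dogs chase": 'eraser' is preverbal but inanimate;
# 'dogs' agrees with the plural verb and is animate.
candidates = [
    {"word": "eraser", "cues": {"preverbal": True, "agreement": False, "animacy": False}},
    {"word": "dogs", "cues": {"preverbal": False, "agreement": True, "animacy": True}},
]
print(choose_subject(candidates, "English"))  # word order wins: eraser
print(choose_subject(candidates, "Italian"))  # agreement and animacy win: dogs
```

On this sketch, an L2 learner's task is to re-weight cues from their L1 settings toward the L2 settings.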

Connectionism and second-language acquisition

Connectionism

These findings also relate to connectionism. Connectionism attempts to model the cognitive language processing of the human brain, using computer architectures that make associations between elements of language based on frequency of co-occurrence in the language input. Frequency has been found to be a factor in various linguistic domains of language learning. Connectionism posits that learners form mental connections between items that co-occur, using exemplars found in language input, and from this input they extract the rules of the language through cognitive processes common to other areas of cognitive skill acquisition. Since connectionism denies both innate rules and the existence of any innate language-learning module, L2 input is of greater importance than it is in processing models based on innate approaches: in connectionism, input is the source of both the units and the rules of language.
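A minimal sketch of this frequency-driven view, assuming nothing beyond co-occurrence counting: association strength between adjacent items simply tracks how often they co-occur in the input, with no built-in grammar rules.

```python
from collections import Counter

# Frequency-driven associative learning in miniature: the strength of the
# link between two adjacent words is just their co-occurrence count in
# the input. The tiny corpus is invented for illustration.
def learn_associations(sentences):
    pairs = Counter()
    for sentence in sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            pairs[(a, b)] += 1
    return pairs

input_corpus = [
    "the cat sleeps",
    "the dog sleeps",
    "the cat eats",
]
assoc = learn_associations(input_corpus)
print(assoc[("the", "cat")])  # 2: more frequent co-occurrence, stronger link
print(assoc[("the", "dog")])  # 1
```

Real connectionist models use distributed networks rather than raw counts, but the driving variable is the same: frequency in the input.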

Noticing hypothesis

Attention is another characteristic that some believe to have a role in determining the success or failure of language processing. Richard Schmidt states that although explicit metalinguistic knowledge of a language is not always essential for acquisition, the learner must be aware of L2 input in order to gain from it. In his “noticing hypothesis,” Schmidt posits that learners must notice the ways in which their interlanguage structures differ from target norms. This noticing of the gap allows the learner's internal language processing to restructure the learner's internal representation of the rules of the L2 in order to bring the learner's production closer to the target. In this respect, Schmidt's understanding is consistent with the ongoing process of rule formation found in emergentism and connectionism.

Processability

Some theorists and researchers have contributed to the cognitive approach to second-language acquisition by increasing understanding of the ways L2 learners restructure their interlanguage knowledge systems to conform more closely to L2 structures. Processability theory states that learners restructure their L2 knowledge systems in an order of which they are capable at their stage of development. For instance, in order to acquire the correct morphological and syntactic forms for English questions, learners must transform declarative English sentences. They do so through a series of stages that is consistent across learners. Clahsen proposed that certain processing principles determine this order of restructuring: learners first maintain declarative word order while changing other aspects of their utterances, second move words to the beginning and end of sentences, and third move elements within main clauses before subordinate clauses.

Automaticity

Thinkers have produced several theories concerning how learners use their internal L2 knowledge structures to comprehend L2 input and produce L2 output. One idea is that learners acquire proficiency in an L2 in the same way that people acquire other complex cognitive skills. Automaticity is the performance of a skill without conscious control; it results from the gradual process of proceduralization. In the field of cognitive psychology, Anderson expounds a model of skill acquisition, according to which persons use procedures to apply their declarative knowledge about a subject in order to solve problems. With repeated practice, these procedures develop into production rules that the individual can use to solve the problem without accessing long-term declarative memory. Performance speed and accuracy improve as the learner implements these production rules. DeKeyser tested the application of this model to L2 language automaticity. He found that subjects developed increasing proficiency in performing tasks related to the morphosyntax of an artificial language, Autopractan, and performed on a learning curve typical of the acquisition of non-language cognitive skills. This evidence conforms to Anderson's general model of cognitive skill acquisition, supports the idea that declarative knowledge can be transformed into procedural knowledge, and tends to undermine Krashen's idea that knowledge gained through language “learning” cannot be used to initiate speech production.
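The "learning curve typical of the acquisition of non-language cognitive skills" is commonly modeled as the power law of practice, in which performance time shrinks as a power function of the number of practice trials. The constants below are illustrative, not values from DeKeyser's study.

```python
# Power law of practice: predicted response time falls as a power
# function of the practice trial number. The parameters a and b are
# invented for illustration; fitted values vary by task and learner.
def practice_time(trial, a=10.0, b=0.5):
    """Predicted response time on the given practice trial."""
    return a * trial ** (-b)

# Early practice yields large gains; later practice yields tiny ones,
# which is the signature shape of skill-acquisition curves.
gain_early = practice_time(1) - practice_time(2)
gain_late = practice_time(100) - practice_time(101)
print(round(gain_early, 2))  # about 2.93
print(round(gain_late, 4))   # about 0.005
```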

Declarative/procedural model

An example of declarative knowledge, procedural knowledge, and conditional knowledge

Michael T. Ullman has used a declarative/procedural model to understand how language information is stored. This model is consistent with a distinction made in general cognitive science between the storage and retrieval of facts, on the one hand, and understanding of how to carry out operations, on the other. It states that declarative knowledge consists of arbitrary linguistic information, such as irregular verb forms, that are stored in the brain's declarative memory. In contrast, knowledge about the rules of a language, such as grammatical word order is procedural knowledge and is stored in procedural memory. Ullman reviews several psycholinguistic and neurolinguistic studies that support the declarative/procedural model.
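A toy sketch of this dual-route idea, with an invented three-verb lexicon: irregular past-tense forms are retrieved from stored memory (a declarative-like lookup), while regular forms are composed by rule (a procedural-like computation).

```python
# Dual-route sketch of the declarative/procedural distinction:
# arbitrary irregular forms live in a stored lookup (declarative-like),
# while regular forms are built by a general rule (procedural-like).
# The lexicon here is a tiny illustrative sample.
IRREGULAR_PAST = {"go": "went", "sing": "sang", "bring": "brought"}

def past_tense(verb):
    if verb in IRREGULAR_PAST:   # memorized, arbitrary form (declarative memory)
        return IRREGULAR_PAST[verb]
    return verb + "ed"           # rule-governed composition (procedural memory)

print(past_tense("go"))    # went
print(past_tense("walk"))  # walked
```

The division of labor mirrors Ullman's claim: damage to one memory system should affect one route while sparing the other.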

Memory and second-language acquisition

Perhaps certain psychological characteristics constrain language processing. One area of research is the role of memory. Williams conducted a study in which he found some positive correlation between verbatim memory functioning and grammar learning success for his subjects. This suggests that individuals with less short-term memory capacity might have a limitation in performing cognitive processes for organization and use of linguistic knowledge.

Semantic theory

For the second-language learner, the acquisition of meaning is arguably the most important task. Meaning is at the heart of a language, not its exotic sounds or elegant sentence structure. There are several types of meaning: lexical, grammatical, semantic, and pragmatic. All of them contribute to the acquisition of meaning, which together yield an integrated command of the second language:

Lexical meaning – meaning that is stored in our mental lexicon;

Grammatical meaning – comes into consideration when calculating the meaning of a sentence; usually encoded in inflectional morphology (e.g., -ed for the past simple, -’s for the possessive);

Semantic meaning – word meaning;

Pragmatic meaning – meaning that depends on context, requires knowledge of the world to decipher; for example, when someone asks on the phone, “Is Mike there?” he doesn’t want to know if Mike is physically there; he wants to know if he can talk to Mike.

Sociocultural theory

Sociocultural theory was originally coined by Wertsch in 1985 and derives from the work of Lev Vygotsky and the Vygotsky Circle in Moscow from the 1920s onwards. Sociocultural theory holds that human mental functioning arises from participation in culturally mediated social activities. Its central thread focuses on the diverse social, historical, cultural, and political contexts in which language learning occurs and on how learners negotiate or resist the diverse options that surround them. More recently, in accordance with this sociocultural thread, Larsen-Freeman (2011) created a diagram showing the interplay of four important concepts in language learning and education: (a) the teacher, (b) the learner, (c) language or culture, and (d) context. What makes sociocultural theory different from other theories is its claim that second language acquisition is not a universal process; on the contrary, it views learners as active participants who interact with others and with the culture of their environment.

Complex Dynamic Systems Theory

Second language acquisition has usually been investigated with traditional cross-sectional studies, which typically use a pre-test/post-test design. In the 2000s, however, a novel angle emerged in the field of second language research. These studies mainly adopt a Dynamic systems theory perspective to analyse longitudinal time-series data. Scientists such as Larsen-Freeman, Verspoor, de Bot, Lowie, and van Geert claim that second language acquisition is best captured by longitudinal case-study designs rather than cross-sectional ones. In these studies, variability is seen as a key indicator of development, or self-organization in Dynamic systems parlance. The interconnectedness of the systems is usually analysed with moving correlations.
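A moving correlation in this sense can be sketched as a Pearson correlation computed inside a sliding window over two developmental time series, showing how the coupling between subsystems changes over time. The data below are invented for illustration.

```python
# Moving (windowed) correlation between two longitudinal measures, the
# standard way of tracking how strongly two developing subsystems are
# coupled at each point in time. Data are invented for illustration.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def moving_correlation(x, y, window=5):
    return [
        pearson(x[i:i + window], y[i:i + window])
        for i in range(len(x) - window + 1)
    ]

# Hypothetical weekly scores for one learner: lexical complexity rises
# then falls, while syntactic complexity keeps rising, so the coupling
# flips from strongly positive to strongly negative.
lexical = [1, 2, 3, 4, 5, 5, 4, 3, 2, 1]
syntax = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(moving_correlation(lexical, syntax, window=5))
```

A single whole-series correlation would hide exactly this kind of developmental shift, which is why the longitudinal designs described above rely on windowed measures.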

Manhattan Project National Historical Park

From Wikipedia, the free encyclopedia
 
Manhattan Project National Historical Park
Hanford High, a part of the park in Washington.
Location: Oak Ridge, Tennessee; Los Alamos, New Mexico; and Hanford, Washington, United States
Established: November 10, 2015
Governing body: National Park Service, Department of Energy
Website: Manhattan Project National Historical Park

Manhattan Project National Historical Park is a United States National Historical Park commemorating the Manhattan Project, run jointly by the National Park Service and the Department of Energy. The park consists of three units: one in Oak Ridge, Tennessee; one in Los Alamos, New Mexico; and one in Hanford, Washington. It was established on November 10, 2015, when Secretary of the Interior Sally Jewell and Secretary of Energy Ernest Moniz signed the memorandum of agreement defining the roles of the two agencies in managing the park.

The Department of Energy had owned and managed most of the properties within the three sites. For over ten years, the DoE worked with the National Park Service and with federal, state, and local governments and agencies with the intention of turning places of importance into a National Historical Park. After several years of surveying the three sites and five other possible alternatives, the two agencies officially recommended that a historical park be established at Hanford, Los Alamos, and Oak Ridge.

The Department of Energy would continue to own and manage the sites, while the National Park Service would provide interpretive services, visitor centers, and park rangers. After two unsuccessful attempts to pass a bill authorizing the park in Congress in 2012 and 2013, the House and Senate ultimately passed the bill in December 2014, and President Obama shortly thereafter signed the National Defense Authorization Act, which authorized the Manhattan Project National Historical Park.

Sites

Hanford B Reactor

The Manhattan Project National Historical Park protects many structures associated with the Manhattan Project, but only some are open for touring.

Hanford, Washington

Los Alamos, New Mexico

The Los Alamos visitor center for the Manhattan Project NHP is located at 475 20th Street in downtown Los Alamos. It is open daily from 10 a.m. to 4 p.m., weather and staffing permitting, and occupies the front left of the Los Alamos Community Building as you face it from the street (next to the teen center). At the visitor center, visitors can learn about the Manhattan Project and related sites in the vicinity.

Three park locations are on Los Alamos National Laboratory property. These locations are currently not open to the public:
  • Gun Site Facilities: three bunkered buildings (TA-8-1, TA-8-2, and TA-8-3), and a portable guard shack (TA-8-172).
  • V-Site Facilities: TA-16-516 and TA-16-517 V-Site Assembly Building
  • Pajarito Site: TA-18-1 Slotin Building, TA-18-2 Battleship Control Building, and the TA-18-29 Pond Cabin.
Controls of the X-10 Graphite Reactor

Oak Ridge, Tennessee

The American Museum of Science and Energy provides bus tours of several buildings in the Clinton Engineer Works, including the X-10 Graphite Reactor.

Ideasthesia

From Wikipedia, the free encyclopedia
Example of associations between graphemes and colors that are described more accurately as ideasthesia than as synesthesia

Ideasthesia (alternative spelling: ideaesthesia) is a term introduced by neuroscientist Danko Nikolić for a phenomenon in which activations of concepts (inducers) evoke perception-like experiences (concurrents). The name comes from the Ancient Greek ἰδέα (idéa) and αἴσθησις (aísthēsis), meaning "sensing concepts" or "sensing ideas". The main reason for introducing the notion was a problem with the concept of synesthesia: while "synesthesia" means "union of senses", empirical evidence indicated that this is an incorrect characterization of a set of phenomena traditionally covered by the heading. Syn-aesthesis, also denoting "co-perceiving", implies the association of two sensory elements with little involvement of the cognitive level. However, most phenomena that have been linked to synesthesia are in fact induced by semantic representations: it is the meaning of the stimulus that matters, rather than its sensory properties, as the term synesthesia would imply. In other words, while synesthesia presumes that both the trigger (inducer) and the resulting experience (concurrent) are sensory in nature, ideasthesia presumes that only the resulting experience is sensory while the trigger is semantic. The concept of ideasthesia has since developed into a theory of how we perceive, and the research has extended to topics other than synesthesia, as the concept turned out to be applicable to everyday perception; it has even been applied to the theory of art. Research on ideasthesia bears important implications for solving the mystery of human conscious experience, which, according to ideasthesia, is grounded in how we activate concepts.

Examples and evidence

A drawing by a synesthete which illustrates time unit-space synesthesia/ideasthesia. The months in a year are organized into a circle surrounding the synesthete's body, each month having a fixed location in space and a unique color.
 
A common example of synesthesia is the association between graphemes and colors, usually referred to as grapheme-color synesthesia. Here, letters of the alphabet are associated with vivid experiences of color. Studies have indicated that the perceived color is context-dependent and is determined by the extracted meaning of a stimulus. For example, an ambiguous stimulus '5' that can be interpreted either as 'S' or '5' will have the color associated with 'S' or with '5', depending on the context in which it is presented. If presented among numbers, it will be interpreted as '5' and will associate the respective color. If presented among letters, it will be interpreted as 'S' and will associate the respective synesthetic color.

Evidence for grapheme-color synesthesia comes also from the finding that colors can be flexibly associated with graphemes as new meanings become assigned to them. In one study, synesthetes were presented with Glagolitic letters that they had never seen before, and meaning was acquired through a short writing exercise. The Glagolitic graphemes inherited the colors of the corresponding Latin graphemes as soon as they acquired the new meaning.

In another study, synesthetes were prompted to form novel synesthetic associations to graphemes they had never seen before. Synesthetes created those associations within minutes or seconds, too short a time to account for the creation of new physical connections between color-representation and grapheme-representation areas in the brain, pointing again towards ideasthesia. The time course is, however, consistent with postsynaptic AMPA receptor upregulation and/or NMDA receptor coactivation, which would imply that the real-time experience is invoked at the synaptic level of analysis prior to the establishment of novel wiring per se.

For lexical-gustatory synesthesia evidence also points towards ideasthesia: In lexical-gustatory synesthesia, verbalisation of the stimulus is not necessary for the experience of concurrents. Instead, it is sufficient to activate the concept.

Another case of synesthesia is swimming-style synesthesia in which each swimming style is associated with a vivid experience of a color. These synesthetes do not need to perform the actual movements of a corresponding swimming style. To activate the concurrent experiences, it is sufficient to activate the concept of a swimming style (e.g., by presenting a photograph of a swimmer or simply talking about swimming).

It has been argued that grapheme-color synesthesia for geminate consonants also provides evidence for ideasthesia.

In pitch-color synesthesia, the same tone will be associated with different colors depending on how it has been named; do-sharp (i.e. di) will have colors similar to do (e.g., a reddish color) and re-flat (i.e. ra) will have color similar to that of re (e.g., yellowish), although the two classes refer to the same tone. Similar semantic associations have been found between the acoustic characteristics of vowels and the notion of size.

One-shot synesthesia: There are synesthetic experiences that can occur just once in a lifetime, and are thus dubbed one-shot synesthesia. Investigation of such cases has indicated that such unique experiences typically occur when a synesthete is involved in an intensive mental and emotional activity such as making important plans for one's future or reflecting on one's life. It has been thus concluded that this is also a form of ideasthesia.

In normal perception

Which one would be called Bouba and which Kiki? Responses are highly consistent among people. This is an example of ideasthesia as the conceptualization of the stimulus plays an important role.

Recently, it has been suggested that the Bouba/Kiki phenomenon is a case of ideasthesia. Most people will agree that the star-shaped object on the left is named Kiki and the round one on the right Bouba. It has been assumed that these associations come from direct connections between visual and auditory cortices; for example, according to that hypothesis, representations of sharp inflections in the star-shaped object would be physically connected to representations of sharp inflections in the sound of Kiki. However, Gomez et al. have shown that Kiki/Bouba associations are much richer: each word and each image is associated semantically with a number of concepts, such as white vs. black, feminine vs. masculine, cold vs. hot, and others. These sound-shape associations seem to arise through a large overlap between the semantic networks of Kiki and the star shape on one hand, and of Bouba and the round shape on the other. For example, both Kiki and the star shape are clever, small, thin and nervous. This indicates that a rich semantic network lies behind the Kiki-Bouba effect. In other words, our sensory experience is largely determined by the meaning that we assign to stimuli. Food description and wine tasting are other domains in which ideasthetic associations between flavor and other modalities, such as shape, may play an important role. Such semantic-like relations also play a role in successful marketing: the name of a product should match its other characteristics.
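The semantic-overlap account can be illustrated by comparing attribute sets. The sets below are loosely based on the attributes mentioned above (clever, small, thin, nervous) plus invented fillers, so the numbers are purely illustrative.

```python
# Semantic-overlap sketch of the Bouba/Kiki effect: a word and a shape
# are matched when their semantic networks share many attributes. The
# attribute sets are illustrative, partly drawn from the examples in the
# text and partly invented.
def overlap(a, b):
    """Jaccard similarity between two attribute sets."""
    return len(a & b) / len(a | b)

kiki = {"clever", "small", "thin", "nervous", "cold"}
star_shape = {"clever", "small", "thin", "nervous", "black"}
bouba = {"calm", "large", "round", "warm", "soft"}

# Kiki's network overlaps far more with the star shape than with Bouba.
print(overlap(kiki, star_shape) > overlap(kiki, bouba))  # True
```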

Implications for development of synesthesia

The concept of ideasthesia bears implications for understanding how synesthesia develops in children. Synesthetic children may associate concrete sensory-like experiences primarily to the abstract concepts that they have otherwise difficulties dealing with. Synesthesia may thus be used as a cognitive tool to cope with the abstractness of the learning materials imposed by the educational system — referred to also as a "semantic vacuum hypothesis". This hypothesis explains why the most common inducers in synesthesia are graphemes and time units — both relating to the first truly abstract ideas that a child needs to master.

Implications for art theory

The concept of ideasthesia has been often discussed in relation to art, and also used to formulate a psychological theory of art. According to the theory, we consider something to be a piece of art when experiences induced by the piece are accurately balanced with semantics induced by the same piece. Thus, a piece of art makes us both strongly think and strongly experience. Moreover, the two must be perfectly balanced such that the most salient stimulus or event is both the one that evokes strongest experiences (fear, joy, ... ) and strongest cognition (recall, memory, ...) — in other words, idea is well balanced with aesthesia.

The ideasthesia theory of art may be used for psychological studies of aesthetics. It may also help explain classificatory disputes about art, as its main tenet is that the experience of art can only be individual, depending on a person's unique knowledge, experiences and history. There could exist no general classification of art satisfactorily applicable to all individuals.

Neurophysiology of ideasthesia

Ideasthesia is congruent with the theory of brain functioning known as practopoiesis. According to that theory, concepts are not an emergent property of highly developed, specialized neuronal networks in the brain, as is usually assumed; rather, concepts are proposed to be fundamental to the very adaptive principles by which living systems and the brain operate.

McGurk effect

From Wikipedia, the free encyclopedia
 
The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. If a person is getting poor quality auditory information but good quality visual information, they may be more likely to experience the McGurk effect. Integration abilities for audio and visual information may also influence whether a person will experience the effect; people who are better at sensory integration have been shown to be more susceptible to it. People are affected differently by the McGurk effect depending on many factors, including brain damage and other disorders.

Background

It was first described in 1976 in a paper by Harry McGurk and John MacDonald, titled "Hearing Lips and Seeing Voices" in Nature (23 Dec 1976). This effect was discovered by accident when McGurk and his research assistant, MacDonald, asked a technician to dub a video with a different phoneme from the one spoken while conducting a study on how infants perceive language at different developmental stages. When the video was played back, both researchers heard a third phoneme rather than the one spoken or mouthed in the video.

This effect may be experienced when a video of one phoneme's production is dubbed with a sound-recording of a different phoneme being spoken. Often, the perceived phoneme is a third, intermediate phoneme. As an example, the syllables /ba-ba/ are spoken over the lip movements of /ga-ga/, and the perception is of /da-da/. McGurk and MacDonald originally believed that this resulted from the common phonetic and visual properties of /b/ and /g/. Two types of illusion in response to incongruent audiovisual stimuli have been observed: fusions ('ba' auditory and 'ga' visual produce 'da') and combinations ('ga' auditory and 'ba' visual produce 'bga'). This is the brain's effort to provide the consciousness with its best guess about the incoming information. The information coming from the eyes and ears is contradictory, and in this instance, the eyes (visual information) have had a greater effect on the brain and thus the fusion and combination responses have been created.
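The two documented illusion types can be summarized as a small lookup from (auditory, visual) syllable pairs to the typically reported percept. The function below covers only the pairs described above; it is a sketch, not a model of the underlying integration process.

```python
# The two illusion types described in the text, as a lookup from
# (auditory, visual) syllable pair to the typically reported percept.
MCGURK_PERCEPTS = {
    ("ba", "ga"): "da",   # fusion: an intermediate third phoneme
    ("ga", "ba"): "bga",  # combination: both consonants are reported
}

def perceived(auditory, visual):
    # Congruent input is heard as-is; the documented mismatched pairs
    # fuse or combine; other pairs default to the auditory signal.
    if auditory == visual:
        return auditory
    return MCGURK_PERCEPTS.get((auditory, visual), auditory)

print(perceived("ba", "ga"))  # da  (fusion)
print(perceived("ga", "ba"))  # bga (combination)
print(perceived("ba", "ba"))  # ba  (congruent)
```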

Vision is the primary sense for humans, but speech perception is multimodal: it involves information from more than one sensory modality, in particular audition and vision. The McGurk effect arises during phonetic processing because the integration of auditory and visual information happens early in speech perception. The McGurk effect is very robust; that is, knowledge about it seems to have little effect on one's perception of it. This differs from certain optical illusions, which break down once one 'sees through' them. Some people, including those who have been researching the phenomenon for more than twenty years, experience the effect even when they are aware that it is taking place. With the exception of people who can identify most of what is being said from speech-reading alone, most people are quite limited in their ability to identify speech from visual-only signals. A more extensive phenomenon is the ability of visual speech to increase the intelligibility of heard speech in a noisy environment. Visible speech can also alter the perception of perfectly audible speech sounds when the visual speech stimuli are mismatched with the auditory speech. Speech perception is normally thought of as an auditory process; however, our use of sensory information is immediate, automatic, and to a large degree unconscious, so despite what is widely accepted as true, speech is not only something we hear. Speech is perceived by all of the senses working together (seeing, touching, and listening to a face move), and the brain is often unaware of the separate sensory contributions of what it perceives. When it comes to recognizing speech, the brain cannot differentiate whether it is seeing or hearing the incoming information.

The effect has also been examined in relation to witness testimony. Wareham and Wright's 2005 study showed that inconsistent visual information can change the perception of spoken utterances, suggesting that the McGurk effect may influence everyday perception. Not limited to syllables, the effect can occur in whole words and can affect daily interactions without people being aware of it. Research in this area can inform theoretical questions, and it may also have therapeutic and diagnostic relevance for those with disorders involving the audiovisual integration of speech cues.

Internal factors

Damage

Both hemispheres of the brain contribute to the McGurk effect, working together to integrate speech information received through the auditory and visual senses. A McGurk response is more likely to occur in right-handed individuals, for whom the face has privileged access to the right hemisphere and words to the left hemisphere. In people who have had callosotomies, the McGurk effect is still present but significantly slower. In people with lesions to the left hemisphere of the brain, visual features often play a critical role in speech and language therapy; such people show a greater McGurk effect than normal controls, and visual information strongly influences their speech perception. Susceptibility to the McGurk illusion is lacking, however, when left-hemisphere damage has produced a deficit in visual segmental speech perception. People with right-hemisphere damage show impairment on both visual-only and audiovisual integration tasks, although they are still able to integrate the information to produce a McGurk effect; integration appears only when visual stimuli are used to improve performance with an auditory signal that is impoverished but audible. A McGurk effect is therefore exhibited in people with damage to the right hemisphere of the brain, but it is not as strong as in a normal group.

Disorders

Dyslexia

Dyslexic individuals exhibit a smaller McGurk effect than normal readers of the same chronological age, but the same effect as reading-level-matched readers. Dyslexics differ particularly in combination responses, not fusion responses. The smaller McGurk effect may be due to the difficulties dyslexics have in perceiving and producing consonant clusters.

Specific language impairment

Children with specific language impairment show a significantly lower McGurk effect than the average child. They use less visual information in speech perception, or have a reduced attention to articulatory gestures, but have no trouble perceiving auditory-only cues.

Autism spectrum disorders

Children with autism spectrum disorders (ASD) show a significantly reduced McGurk effect compared with children without ASD. However, if the stimulus is nonhuman (for example, a tennis ball bouncing to the sound of a bouncing beach ball), they score similarly to children without ASD. Younger children with ASD show a greatly reduced McGurk effect, but this difference diminishes with age: as individuals grow up, the effect they show approaches that of people without ASD. It has been suggested that the weakened McGurk effect seen in people with ASD is due to deficits in identifying the auditory and visual components of speech rather than in integrating those components (although distinguishing speech components as speech components may be inseparable from integrating them).

Language-learning disabilities

Adults with language-learning disabilities exhibit a much smaller McGurk effect than other adults, being less influenced by visual input than most people; people with poor language skills therefore produce a smaller McGurk effect. One explanation for the smaller effect in this population is uncoupled activity between anterior and posterior regions of the brain, or between the left and right hemispheres. A cerebellar or basal ganglia etiology is also possible.

Alzheimer’s disease

Patients with Alzheimer's disease (AD) exhibit a smaller McGurk effect than those without. A reduced corpus callosum, often seen in AD, produces a hemisphere-disconnection process. Visual stimuli exert less influence on patients with AD, which accounts for the lowered McGurk effect.

Schizophrenia

The McGurk effect is not as pronounced in schizophrenic individuals as in non-schizophrenic individuals, although the difference is not significant in adults. Schizophrenia slows the development of audiovisual integration and prevents it from reaching its developmental peak, but no degradation is observed. Schizophrenics are more likely to rely on auditory cues than on visual cues in speech perception.

Aphasia

People with aphasia show impaired perception of speech in all conditions (visual-only, auditory-only, and audio-visual), and therefore exhibit a small McGurk effect. Their greatest difficulty is in the visual-only condition, showing that they rely more on auditory stimuli in speech perception.

External factors

Cross-dubbing

A discrepancy in vowel category significantly reduces the magnitude of the McGurk effect for fusion responses. Auditory /a/ tokens dubbed onto visual /i/ articulations are more compatible than the reverse, perhaps because /a/ has a wide range of articulatory configurations whereas /i/ is more limited, making it much easier for subjects to detect discrepancies in the stimuli. /i/ vowel contexts produce the strongest effect, /a/ a moderate effect, and /u/ almost no effect.

Mouth visibility

The McGurk effect is stronger when the right side of the speaker's mouth (on the viewer's left) is visible. People tend to get more visual information from the right side of a speaker's mouth than the left or even the whole mouth. This relates to the hemispheric attention factors discussed in the brain hemispheres section above.

Visual distractors

The McGurk effect is weaker when there is a visual distractor present that the listener is attending to. Visual attention modulates audiovisual speech perception. Another form of distraction is movement of the speaker. A stronger McGurk effect is elicited if the speaker's face/head is motionless, rather than moving.

Syllable structure

A strong McGurk effect can be seen for click-vowel syllables compared to weak effects for isolated clicks. This shows that the McGurk effect can happen in a non-speech environment. Phonological significance is not a necessary condition for a McGurk effect to occur; however, it does increase the strength of the effect.

Gender

Females show a stronger McGurk effect than males: women show significantly greater visual influence on auditory speech than men for brief visual stimuli, though no difference is apparent for full stimuli. Another gender-related question is whether male versus female faces and voices used as stimuli affect the illusion; in fact, there is no difference in the strength of the McGurk effect in either case. Even if a male face is dubbed with a female voice, or vice versa, the strength of the effect is unchanged: knowing that the voice you hear differs from the face you see, even in gender, does not eliminate the McGurk effect.

Familiarity

Subjects who are familiar with the faces of the speakers are less susceptible to the McGurk effect than those who are not. Voice familiarity, on the other hand, makes no difference.

Expectation

Semantic congruency has a significant impact on the McGurk illusion: the effect is experienced more often and rated as clearer in the semantically congruent condition than in the incongruent condition. When a person expects a certain visual or auditory event on the basis of the semantic information leading up to it, the McGurk effect is greatly increased.

Self influence

The McGurk effect can be observed when the listener is also the speaker or articulator. While looking at oneself in the mirror and articulating visual stimuli while listening to another auditory stimulus, a strong McGurk effect can be observed. In the other condition, where the listener speaks auditory stimuli softly while watching another person articulate the conflicting visual gestures, a McGurk effect can still be seen, although it is weaker.

Temporal synchrony

Exact temporal synchrony is not necessary for the McGurk effect to occur. Subjects are still strongly influenced by the visual stimulus even when the auditory stimulus lags it by 180 milliseconds, the point at which the effect begins to weaken. There is less tolerance when the auditory stimulus precedes the visual one: to weaken the McGurk effect significantly, the auditory stimulus must lead the visual stimulus by 60 milliseconds, or lag it by 240 milliseconds.
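The asymmetric tolerance described above can be sketched as a simple predicate. This is purely illustrative: the function name and the hard cutoff are my own simplification of the reported lead/lag figures, not a model from the literature.

```python
def within_integration_window(audio_onset_ms: float, visual_onset_ms: float) -> bool:
    """Illustrative sketch of the asymmetric audiovisual integration window.

    Positive offset means the audio lags the visual stimulus. Per the text,
    the McGurk effect weakens significantly only once the audio leads by
    more than ~60 ms or lags by more than ~240 ms.
    """
    offset = audio_onset_ms - visual_onset_ms  # > 0: audio lags visual
    return -60 <= offset <= 240


# Audio lagging the visual stimulus by 180 ms: still inside the window.
print(within_integration_window(180, 0))   # True
# Audio leading by 100 ms: outside the (narrower) leading tolerance.
print(within_integration_window(-100, 0))  # False
```

The asymmetry (a narrow tolerance for audio leading, a wide one for audio lagging) mirrors everyday experience: light reaches the eye before sound reaches the ear, so perception tolerates lagging audio far better than leading audio.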

Physical task diversion

The McGurk effect is greatly reduced when attention is diverted to a tactile task (touching something). Touch is a sensory perception like vision and audition, so increasing attention to touch decreases the attention given to the auditory and visual senses.

Gaze

The eyes do not need to fixate on any particular point in order to integrate audio and visual information in speech perception: the McGurk effect is equally strong wherever on the speaker's face the listener focuses. The effect weakens when the listener's gaze moves beyond the speaker's face, becoming insignificant once it deviates from the speaker's mouth by at least 60 degrees.

Other languages

People of all languages rely to some extent on visual information in speech perception, but the intensity of the McGurk effect varies between languages. Dutch, English, Spanish, German, Italian and Turkish listeners experience a robust McGurk effect, while it is weaker for Japanese and Chinese listeners. Most cross-linguistic research on the McGurk effect has compared English and Japanese, and Japanese listeners show a smaller McGurk effect than English listeners. The cultural practice of face avoidance among Japanese people may contribute to this, as may the tonal and syllabic structures of the language; similar factors could explain why Chinese listeners are less susceptible to visual cues and, like Japanese listeners, produce a smaller effect than English listeners. Studies have also shown that Japanese listeners do not show a developmental increase in visual influence after the age of six, as English children do. Japanese listeners are better able than English listeners to identify an incompatibility between the visual and auditory stimuli, which may be related to the absence of consonant clusters in Japanese. In noisy environments where speech is unintelligible, however, people of all languages resort to visual stimuli and are then equally subject to the McGurk effect. The McGurk effect works with speech perceivers of every language for which it has been tested.

Hearing impairment

Experiments have been conducted with hard-of-hearing individuals and with cochlear-implant users. These individuals tend to weigh visual information from speech more heavily than auditory information, although they do not differ from normal-hearing individuals unless the stimulus is longer than one syllable, such as a word. In the McGurk paradigm, cochlear-implant users produce the same responses as normal-hearing individuals when an auditory bilabial stimulus is dubbed onto a visual velar stimulus, but quite different responses when an auditory dental stimulus is dubbed onto a visual bilabial stimulus. The McGurk effect is thus still present in individuals with impaired hearing or cochlear implants, although it differs in some respects.

Infants

By measuring an infant's attention to certain audiovisual stimuli, a response consistent with the McGurk effect can be recorded. From just minutes to a couple of days old, infants can imitate adult facial movements, and within weeks of birth they can recognize lip movements and speech sounds. At this point, integration of audio and visual information can happen, but not at a proficient level. The first evidence of the McGurk effect is seen at four months of age, with stronger evidence in 5-month-olds. By habituating an infant to a certain stimulus and then changing the stimulus (or part of it, such as ba-voiced/va-visual to da-voiced/va-visual), a response that simulates the McGurk effect becomes apparent. The strength of the McGurk effect follows a developmental pattern, increasing throughout childhood and into adulthood.
